I followed the official EKS Distro documentation for the kops option, on this page:
https://distro.eks.amazonaws.com/users/install/kops/
git clone https://github.com/aws/eks-distro.git
cd eks-distro/development/kops
export KOPS_STATE_STORE=s3://my-temp-store
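The scripts assume this state store bucket already exists; if it doesn't, it can be created first with the AWS CLI (a small extra step, assuming configured credentials and that the bucket name is free; the kops docs also recommend enabling versioning on the state store):
aws s3 mb s3://my-temp-store --region us-east-1
aws s3api put-bucket-versioning --bucket my-temp-store --versioning-configuration Status=Enabled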
I don't have a domain to create the cluster with, so I'm using gossip DNS (kops switches to gossip mode when the cluster name ends in .k8s.local), as described in the kops docs:
https://kops.sigs.k8s.io/getting_started/aws/
https://kops.sigs.k8s.io/gossip/
export KOPS_CLUSTER_NAME=my-test-cluster.k8s.local
Setting the AWS region:
export AWS_REGION=us-east-1
Creating a cluster:
./install_requirements.sh
./create_values_yaml.sh
./create_configuration.sh
./create_cluster.sh
The create_cluster.sh script prints the message below, which is the cause of the errors that follow:
did not find API endpoint for gossip hostname, may not be able to reach the cluster
The script does create a cluster on AWS with EC2 instances, and I can get the cluster info successfully:
kops get
But when I run kubectl commands, they fail because kubectl can't look up the Kubernetes API.
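To see what endpoint kubectl is trying (and failing) to reach, you can inspect the server field of the generated kubeconfig (a diagnostic step, not part of the EKS-D docs):
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
With a gossip cluster and no load balancer, there is no externally resolvable API endpoint for kops to put here, which matches the warning above.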
Now, the same process as above, but with kops directly.
The major difference between the EKS Distro script mode and running kops manually is that the plain kops flow is faster and uses less configuration data. Important note: the EKS Distro scripts run the same kops commands that we run here; EKS Distro builds on top of these commands, so this is mostly the preferred way to see what is going on.
To create a cluster, first export the following details, adjusted to your needs:
export KOPS_STATE_STORE=s3://my-temp-store
export KOPS_CLUSTER_NAME=my-test-cluster.k8s.local
Now create a cluster with this command:
kops create cluster --cloud=aws --zones=us-east-1a,us-east-1b,us-east-1c --yes
Note: zones and region are not the same thing. With EKS-D, the AWS_REGION variable leads to only one zone in that region being used, while the --zones flag in the kops command uses exactly the zones specified.
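To check which zones are actually available in a region before picking them (an optional step, assuming configured AWS credentials):
aws ec2 describe-availability-zones --region us-east-1 --query 'AvailabilityZones[].ZoneName' --output text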
The above command creates a cluster and prints this at the end of the console output, which means the process succeeded:
kops has set your kubectl context to
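To confirm the cluster really is reachable and healthy, kops also ships a validate command (an extra check on top of the original steps; the --wait flag needs a reasonably recent kops version):
kops validate cluster --wait 10m
kubectl get nodes
Once validation passes, kubectl get nodes should list the control-plane and worker instances.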
Now for the problem and the solution.
The main reason the kops process worked while the EKS-D process failed is that kops created a load balancer (an Amazon ELB) in front of the API, while EKS-D didn't. Our cluster name is a gossip DNS hostname, so there is no IP address or resolvable name to reach it from outside. Gossip DNS only resolves inside the cluster, so if we want to access the Kubernetes API from outside the cluster (which we definitely do), we need a load balancer in front of that gossip DNS.
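One way to verify this is to list the classic load balancers in the region after each run (assuming the AWS CLI is configured for the same account and region); after the kops run there should be an extra ELB fronting the API, while after the EKS-D run there isn't:
aws elb describe-load-balancers --region us-east-1 --query 'LoadBalancerDescriptions[].LoadBalancerName' --output text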
Here is the key point: I found out why kops successfully created a load balancer for our gossip DNS while EKS-D didn't.
kops get cluster -o yaml
That command prints the cluster config, whichever tool created the cluster. Run it separately under each deployment process to capture the actual config used.
For the kops cluster:
kops get cluster -o yaml > kops.yaml
For the EKS-D cluster:
kops get cluster -o yaml > eks.yaml
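With both files saved, a plain diff makes the difference easy to spot:
diff kops.yaml eks.yaml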
Comparing the two YAML files, I noticed that most of the config is identical, except for the following:
spec:
  api:
    loadBalancer:
      class: Classic
      type: Public
    # from this point on, everything is the same
    dns: {} # this line is the same in both files
So my idea is to add that extra config from the kops config file to the EKS-D config. The only way this works with EKS-D is by editing the eks-d.tpl template file:
nano ./eks-d.tpl
Add the loadBalancer lines shown above to the config.
Now save it and run the following scripts to deploy a cluster:
./install_requirements.sh
./create_values_yaml.sh
./create_configuration.sh
./create_cluster.sh
When the process is done, we get this output at the end of the console:
exporting kubecfg for cluster
which means the local system can access the Kubernetes API.
Now run this:
./cluster_wait.sh
This will take around 10 minutes, but once it's done, we can run kubectl and helm commands against the cluster like any normal Kubernetes cluster.
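As a quick sanity check (not one of the original steps), you can dump the live cluster spec again and confirm the loadBalancer block made it in:
kops get cluster -o yaml | grep -A 4 'loadBalancer'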
Now for the reason I created this issue: there should be an option, via environment variables, to have a load balancer created when using gossip DNS.
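As a rough sketch of what I mean (ENABLE_API_LOADBALANCER is a name I made up, and this is only an illustration of the idea, not how the EKS-D scripts are actually structured), the configuration step could pick the api block based on an environment variable:

#!/bin/bash
# Hypothetical sketch: ENABLE_API_LOADBALANCER is not a real EKS-D variable.
if [ "${ENABLE_API_LOADBALANCER:-false}" = "true" ]; then
  # front the API with a classic ELB so a gossip cluster is reachable externally
  api_block=$'api:\n  loadBalancer:\n    class: Classic\n    type: Public'
else
  # current behavior: gossip DNS only, API not reachable from outside
  api_block=$'api:\n  dns: {}'
fi
echo "$api_block"  # a real script would render this into eks-d.tpl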
Even if we are not using this setup in production, we still need to know how things work. So now you know; have a good day.