AWS: creating an EKS cluster

Top Tips

Stuff, perhaps not immediately relevant, but you’ll keep coming back to:

List contexts: `kubectx`

Switch contexts: `kubectx <your context>`

Namespaces:  `kubectl get pods -o yaml -n kube-system`

(e.g. if you run `kubectl get pods` and see nothing, it may be because you’re using the wrong namespace – i.e. there are no pods in that namespace)



Notes and Guides:

Notes: at the time of writing, EKS is only available in:

  • US West (Oregon) (us-west-2)
  • US East (N. Virginia) (us-east-1)
  • EU (Ireland) (eu-west-1)

Terraform guide:

(The Terraform code provided is here: )

and the AWS EKS guide:


Terraform notes:

  • The Terraform code creates two m4.large instances based on the latest EKS Amazon Linux 2 AMI: operator-managed Kubernetes worker nodes for running Kubernetes service deployments
  • Full code:
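For reference, worker nodes in that guide are typically declared along these lines. This is a rough sketch, not the guide’s actual code – the resource names (`eks-worker`, `demo`) and the AMI name filter are placeholders of mine:

```hcl
# Look up the latest EKS-optimised Amazon Linux 2 AMI published by AWS.
data "aws_ami" "eks-worker" {
  most_recent = true
  owners      = ["602401143452"] # Amazon's EKS AMI account ID

  filter {
    name   = "name"
    values = ["amazon-eks-node-*"]
  }
}

# Launch configuration + autoscaling group for the two m4.large workers.
resource "aws_launch_configuration" "demo" {
  name_prefix   = "terraform-eks-demo"
  image_id      = "${data.aws_ami.eks-worker.id}"
  instance_type = "m4.large"
}

resource "aws_autoscaling_group" "demo" {
  desired_capacity     = 2
  min_size             = 2
  max_size             = 2
  launch_configuration = "${aws_launch_configuration.demo.id}"
  vpc_zone_identifier  = ["${aws_subnet.demo.*.id}"] # subnets defined elsewhere in the guide
}
```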


AWS EKS notes

You’ll need:

  • aws-iam-authenticator

Don’t use the instructions given on unless you want to waste half an hour figuring out why they don’t work. I got this error:

Use the instructions here:

i.e. `curl -o aws-iam-authenticator`

  • helm
  • kubectl


Name of cluster: find it in the AWS console, or use:

aws eks list-clusters


To use kubectl:

aws eks update-kubeconfig --name <name of cluster>

This will add the config to your ~/.kube/config.
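For reference, the entry it writes looks roughly like this – the exact shape varies with CLI version, and every value below is a placeholder:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: arn:aws:eks:<region>:<account id>:cluster/<name of cluster>
  cluster:
    server: https://<endpoint>.<region>.eks.amazonaws.com
    certificate-authority-data: <base64-encoded CA certificate>
contexts:
- name: arn:aws:eks:<region>:<account id>:cluster/<name of cluster>
  context:
    cluster: arn:aws:eks:<region>:<account id>:cluster/<name of cluster>
    user: arn:aws:eks:<region>:<account id>:cluster/<name of cluster>
users:
- name: arn:aws:eks:<region>:<account id>:cluster/<name of cluster>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args: ["token", "-i", "<name of cluster>"]
```

Note the `exec` section: kubectl shells out to `aws-iam-authenticator` to get a token, which is why that binary needs to be on your PATH.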


1. You can check this is in your config with:

  • kubectl config view

See also Kubernetes: kubectl


Note: aws cli versions <= 1.15.53 do not have this command. Upgrade the AWS CLI with: `pip install awscli --upgrade --user`

Typical problems when upgrading AWS CLI:

aws --version
aws-cli/1.11.10 Python/2.7.10 Darwin/17.7.0 botocore/1.4.67

pip install awscli --upgrade --user
Collecting awscli
  Downloading (1.4MB)
Successfully installed awscli-1.16.57 botocore-1.12.47

aws --version
aws-cli/1.11.10 Python/2.7.10 Darwin/17.7.0 botocore/1.4.67

You’ve probably got a PATH problem.

Check you haven’t got an older version at /usr/local/bin
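A quick way to diagnose this – a sketch assuming a default `pip install --user` layout on macOS with Python 2.7 (the `USER_BIN` path is mine; adjust it for your platform and Python version):

```shell
# Where pip's per-user installs put their binaries (assumed path — adjust).
USER_BIN="$HOME/Library/Python/2.7/bin"

# List every `aws` on the PATH, in resolution order — the first one wins.
# (`|| true` keeps this from erroring on machines with no aws at all.)
which -a aws || true

# Put the user bin directory ahead of /usr/local/bin so the new CLI wins.
export PATH="$USER_BIN:$PATH"

# bash caches command locations; clear the cache so the change takes effect.
hash -r
```

If the old copy at /usr/local/bin still wins after this, delete it or add the `export PATH=...` line to your shell profile.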


2. And that you can see pods in your cluster with:

kubectl get all -n kube-system

E.g. I got this back:

NAME                          READY   STATUS    RESTARTS   AGE
pod/kube-dns-fcd468cb-8fhg2   0/3     Pending   0          41m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   <none>        53/UDP,53/TCP   41m

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node      0         0         0       0            0           <none>          41m
daemonset.apps/kube-proxy    0         0         0       0            0           <none>          41m

NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-dns   1         1         1            0           41m

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-dns-fcd468cb   1         1         0       41m



Some more information on debugging Pods

kubectl get events --all-namespaces


NAMESPACE     LAST SEEN   FIRST SEEN   COUNT   NAME                                       KIND   TYPE      REASON             SOURCE              MESSAGE
kube-system   1m          1h           245     kube-dns-fcd468cb-8fhg2.156899dbda62d287   Pod    Warning   FailedScheduling   default-scheduler   no nodes available to schedule pods


kubectl get nodes
No resources found.

so SSH into one of the worker EC2 instances and run `journalctl` to inspect the kubelet logs.

You’ll need to add your SSH key to the node and get its public IP address. Then:

ssh -i ~/path/to/key ec2-user@public.ip.address


StackOverflow post:


The trick to solving this is that the output generated by Terraform needs to be applied to the cluster.

i.e. copy the `config_map_aws_auth` output, which, for me, looked like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<owner id>:role/terraform-eks-demo-node
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

into a file, and apply it as is:

kubectl apply -f <file>

The {{EC2PrivateDNSName}} is parsed by one of the Kubernetes controllers.

More on this issue in #office-hours –



Related errors:

  • Warning FailedScheduling – default-scheduler no nodes available to schedule pods
  • error creating EKS Cluster: InvalidParameterException: Error in role params
  • AWS EKS: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name




