AWS: creating an EKS cluster

Top Tips

Stuff, perhaps not immediately relevant, but you’ll keep coming back to:

List contexts: kubectx

Switch contexts: kubectx <your context>

Namespaces: kubectl get pods -o yaml -n kube-system

(e.g. if you run kubectl get pods and see nothing, it may be because you’re using the wrong namespace – i.e. there are no pods in that namespace)



Notes and Guides:

Notes: EKS is only available in:

  • US West (Oregon) (us-west-2)
  • US East (N. Virginia) (us-east-1)
  • EU (Ireland) (eu-west-1)

Terraform guide:

(The Terraform code provided is here: )

and the AWS EKS guide:


Terraform notes:

  • The Terraform code creates two m4.large instances based on the latest EKS Amazon Linux 2 AMI: operator-managed Kubernetes worker nodes for running service deployments
  • Full code:


AWS EKS notes

You’ll need:

  • aws-iam-authenticator

Don’t use the instructions given on unless you want to waste half an hour of your time figuring out why it doesn’t work. I got this error:

Use the instructions here:

i.e. curl -o aws-iam-authenticator

  • helm
  • kubectl


To find the name of your cluster: look in the AWS console, or use:

aws eks list-clusters
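list-clusters returns JSON. A quick sketch of pulling out the first cluster name – the sample response is hard-coded here so the snippet runs without AWS credentials; in practice you’d capture the real command’s output instead:

```shell
# `aws eks list-clusters` returns JSON shaped like this sample.
# For real use: response=$(aws eks list-clusters --output json)
response='{"clusters": ["my-eks-cluster"]}'

# Pull the first cluster name out of the JSON array.
cluster=$(printf '%s' "$response" | sed -n 's/.*"clusters": *\[ *"\([^"]*\)".*/\1/p')
echo "$cluster"
```

With the real command you can feed $cluster straight into aws eks update-kubeconfig.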


To use kubectl:

aws eks update-kubeconfig --name <name of cluster>

This will add the config to your ~/.kube/config.
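A quick way to confirm the cluster actually landed in your kubeconfig is to grep for its name. A sketch against a throwaway sample file – the account ID and cluster name are made-up placeholders; point the grep at ~/.kube/config to check the real thing:

```shell
# Minimal sample kubeconfig, shaped like what `aws eks update-kubeconfig`
# writes (the ARN is a placeholder, not a real account/cluster).
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://EXAMPLE.us-west-2.eks.amazonaws.com
  name: arn:aws:eks:us-west-2:111122223333:cluster/my-eks-cluster
contexts:
- context:
    cluster: arn:aws:eks:us-west-2:111122223333:cluster/my-eks-cluster
    user: arn:aws:eks:us-west-2:111122223333:cluster/my-eks-cluster
  name: arn:aws:eks:us-west-2:111122223333:cluster/my-eks-cluster
EOF

# Swap "$kubeconfig" for ~/.kube/config to check the real file.
found=no
grep -q 'cluster/my-eks-cluster' "$kubeconfig" && found=yes
echo "cluster in kubeconfig: $found"
rm -f "$kubeconfig"
```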


1. You can check this is in your config with:

  • kubectl config view

See also Kubernetes: kubectl


Note: aws cli versions <= 1.15.53 do not have this command. Upgrade the AWS CLI with:

pip install awscli --upgrade --user

Typical problems when upgrading AWS CLI:

You’ve probably got a PATH problem.

Check you haven’t got an older version at /usr/local/bin
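The shadowing problem is just PATH ordering: whichever aws appears first wins. A self-contained sketch with two stub binaries standing in for a pip-installed copy and an older system copy (the directories and version strings are made up for the demo):

```shell
# Two fake install locations with stub `aws` scripts (stand-ins
# for e.g. ~/.local/bin and /usr/local/bin).
old=$(mktemp -d); new=$(mktemp -d)
printf '#!/bin/sh\necho aws-cli/1.15.53\n' > "$old/aws"
printf '#!/bin/sh\necho aws-cli/1.16.0\n'  > "$new/aws"
chmod +x "$old/aws" "$new/aws"

# Old dir first on PATH: the stale binary shadows the upgrade.
shadowed=$(PATH="$old:$new:$PATH" aws)
# New dir first on PATH: the upgraded binary wins.
fixed=$(PATH="$new:$old:$PATH" aws)

echo "old-first: $shadowed"
echo "new-first: $fixed"
rm -rf "$old" "$new"
```

`which -a aws` (or `command -v aws`) will show you which copy your shell is actually picking up.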


2. And that you can see pods in your cluster with:

kubectl get all -n kube-system

E.g. I got this back:



Some more information on debugging Pods


kube-system   1m   1h   245   kube-dns-fcd468cb-8fhg2.156899dbda62d287   Pod   Warning   FailedScheduling   default-scheduler   no nodes available to schedule pods


So SSH into one of the nodes and run journalctl.

You’ll need to add your ssh key to the node and get the public IP address. Then:

ssh -i ~/path/to/key ec2-user@public.ip.address


StackOverflow post:


The trick to solving this is that the output generated by Terraform needs to be applied.

i.e. copy config_map_aws_auth which, for me, looked like:

into a file, and apply as is:

kubectl apply -f

The {{EC2PrivateDNSName}} is parsed by one of the Kubernetes controllers.
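For reference, the config_map_aws_auth output has roughly this shape – the role ARN below is a placeholder, not a real one; use the value from your own Terraform output:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-role   # placeholder ARN
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Until that ConfigMap exists, worker nodes can’t join the cluster – which is exactly the “no nodes available to schedule pods” symptom above.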

More on this issue in #office-hours –



Warning FailedScheduling – default-scheduler no nodes available to schedule pods

error creating EKS Cluster: InvalidParameterException: Error in role params

AWS EKS: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name




