aws_autoscaling_group – diffs didn’t match during apply. This is a bug with Terraform

This happens when you have code like this:

resource "aws_autoscaling_group" "my_asg" {
  name                 = "my_asg"
  launch_configuration = "${}"
  min_size             = 1
  max_size             = 1
  availability_zones   = "${var.availability_zones}"

  vpc_zone_identifier = ["${}"]
}

It seems you can’t mix availability_zones and vpc_zone_identifier: when you attach the ASG to VPC subnets via vpc_zone_identifier, the availability zones are derived from those subnets, so declaring both leaves the plan and apply diffs disagreeing.
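A sketch of the fix, keeping only vpc_zone_identifier (the subnet variable and launch configuration reference here are hypothetical):

```hcl
resource "aws_autoscaling_group" "my_asg" {
  name                 = "my_asg"
  launch_configuration = "${aws_launch_configuration.my_lc.name}"  # hypothetical resource
  min_size             = 1
  max_size             = 1

  # The availability zones are implied by these subnets,
  # so availability_zones is omitted entirely.
  vpc_zone_identifier = ["${var.subnet_ids}"]  # hypothetical variable
}
```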

Terraform: Error creating launch configuration: AlreadyExists: Launch Configuration by this name already exists

If you’re creating an ASG from an AWS Launch Configuration, you cannot hard-code a name for the Launch Configuration. Launch Configurations cannot be updated after creation with the AWS API, so any change forces Terraform to create a replacement, and the replacement’s fixed name collides with the Launch Configuration that already exists.

The solution? Simply omit name from your launch configuration (or use name_prefix) and let Terraform generate a unique name.
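A sketch of that pattern (the resource name and AMI variable are hypothetical):

```hcl
resource "aws_launch_configuration" "my_lc" {
  # No "name": Terraform generates a unique one from this prefix.
  name_prefix   = "my-lc-"
  image_id      = "${var.ami_id}"  # hypothetical variable
  instance_type = "t2.micro"

  lifecycle {
    # Build the replacement before destroying the old configuration,
    # since the ASG still references it.
    create_before_destroy = true
  }
}
```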


Terraform 0.12 HCL and interpolation syntax


HCL2 combines HCL (HashiCorp Configuration Language) and HIL (HashiCorp Interpolation Language), so we now have first-class expression syntax, i.e. the end of wrapping everything in "${ ... }".

i.e. v0.11:

  ip_cidr_range = "${cidrsubnet(var.base_network_cidr, 4, count.index)}"

becomes, in v0.12:

  ip_cidr_range = cidrsubnet(var.base_network_cidr, 4, count.index)


Note: the wording used here by Hashicorp is confusing:

0.11 wrapped string interpolations in ${}.

However, 0.12 now extends expressions beyond string interpolation to loops and conditionals:
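For example (the variable names here are illustrative, not from the original code):

```hcl
locals {
  # 0.12 conditional expression: no "${}" needed
  instance_type = var.environment == "prod" ? "m5.large" : "t3.micro"

  # 0.12 for expression: upper-case every name in a list
  upper_names = [for name in var.names : upper(name)]
}
```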

Improved Error messages

Error messages that actually mean something!

Remote Plan and Apply


AWS: add ssh key, check fingerprint and add to Terraform

1. Generate key

ssh-keygen -t rsa -b 4096 -C "<email address>"

File name: /home/dir/.ssh/file-name_id_rsa


2. Upload

AWS Dashboard > EC2 > Key Pairs > Upload


You can check the fingerprint with:

openssl rsa -in path_to_private_key -pubout -outform DER | openssl md5 -c

It’s important to use the correct openssl command: there are two separate commands, one for an AWS-generated key and another for a key you upload. The command above is the one for a key you upload.


3. Add the key_name to Terraform

e.g. a launch configuration:
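A minimal sketch (the AMI variable and instance type are placeholders; key_name must match the Key Pair name shown in the EC2 console):

```hcl
resource "aws_launch_configuration" "my_lc" {
  image_id      = "${var.ami_id}"   # placeholder
  instance_type = "t2.micro"
  key_name      = "my-uploaded-key" # the EC2 Key Pair name from step 2
}
```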


4. ssh in with

ssh -i ~/.ssh/<new-key> ec2-user@<public ip>

If you’re unable to connect make sure you’ve got port 22 open on the EC2 instance Security Group.

E.g. Inbound rule:

SSH from laptop
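In Terraform, that inbound rule might be sketched as follows (the CIDR and security group reference are placeholders):

```hcl
resource "aws_security_group_rule" "ssh_from_laptop" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.10/32"]               # your laptop's public IP (placeholder)
  security_group_id = "${aws_security_group.my_sg.id}"  # hypothetical security group
  description       = "SSH from laptop"
}
```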


AWS EKS: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name

Following along to a tutorial, I deployed the cluster and then ran:

aws eks update-kubeconfig --name terraform-eks-demo

to get:

An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: terraform-eks-demo.

I can see the cluster in the console, so why is this happening?


Let’s try listing the clusters:

aws eks list-clusters

{
    "clusters": []
}

Docs say:

Lists the Amazon EKS clusters in your AWS account in the specified Region.

so let’s specify the Region:

aws eks list-clusters --region 'us-west-2'

{
    "clusters": [
        "terraform-eks-demo"
    ]
}

so perhaps it’s our default region that’s the issue. However, `~/.aws/config` says:

region = us-west-2

but our `~/.aws/credentials` file, which usually holds just:

aws_access_key_id = <key id>
aws_secret_access_key = <secret access key>

also contained a region entry. Odd: a region is usually only seen in the config file. Deleting the region from the credentials file fixed the issue, so settings in the credentials file must override those in the config file.
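You can also sidestep the profile files entirely by passing the region on the command line, and `aws configure list` will show you which source each effective setting came from:

```
aws eks update-kubeconfig --name terraform-eks-demo --region us-west-2
aws configure list
```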



Terraform: EKS cluster – aws_eks_cluster.demo: error creating EKS Cluster: InvalidParameterException: Error in role params

The code I used:

The first few times were fine. Then, on the third terraform apply, I got:

aws_security_group_rule.demo-node-ingress-self: Creation complete after 3s (ID: sgrule-3180869992)

Error: Error applying plan:

1 error(s) occurred:

* aws_eks_cluster.demo: 1 error(s) occurred:

* aws_eks_cluster.demo: error creating EKS Cluster (my-cluster): InvalidParameterException: Error in role params
status code: 400, request id: d063ca1b-ecb0-11e8-acff-5347eb3dd87f

No idea what the issue was, but deleting the local .terraform directory fixed the problem.
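Deleting .terraform removes the downloaded providers, modules, and backend configuration, so you have to initialise again before the next apply:

```
rm -rf .terraform
terraform init
terraform apply
```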



AWS: reliable? It ain’t!

Let me caveat that.

There’s some idea that stuff is bulletproof once on AWS. It’s not.

AWS has internal issues, as does any provider: network problems (remember that S3 EOF bug?), disk drives failing (Retirement Notifications, anyone?), etc.

On top of that, stuff will hit internal inconsistencies. E.g. you have an ASG which tries to launch an EC2 instance, only to fail because you’ve reached your limit for that particular type of EC2 instance.

But you can build around it in your app with Error Retries and Exponential Backoffs (techniques probably more familiar to mobile developers):
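As a sketch of the idea (not any particular tool’s implementation), here is a retry-with-exponential-backoff wrapper in shell, demonstrated against a stand-in command that fails twice before succeeding:

```shell
# Retry a command, doubling the wait after each failed attempt.
retry_with_backoff() {
  max_attempts=5
  delay=1
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))      # exponential backoff
    attempt=$((attempt + 1))
  done
}

# Demo: a stand-in for a flaky AWS call that fails twice, then succeeds.
flaky() {
  count=$(cat /tmp/flaky_count 2>/dev/null || echo 0)
  count=$((count + 1))
  echo "$count" > /tmp/flaky_count
  [ "$count" -ge 3 ]
}

rm -f /tmp/flaky_count
retry_with_backoff flaky && echo "succeeded after backoff"
```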


E.g. here’s a solution Terragrunt uses:

terraform state: how do you show an instance with a count?

Say we have an instance which has been built with a count.

After downloading the JSON state file from S3 (assuming you’re hosting it remotely) then you can look in the state file for this resource using:

terraform show ~/Downloads/terraform-state-file.json | less

Assuming it’s called my-instance you can search for it and you’ll find:



So, let’s say you want to taint this resource. Let’s try `state show` on it first, since it’s non-destructive:

tf state show

But this gives:

Error filtering state: Error parsing address ‘’: Unexpected value for InstanceType field: “0”

Please ensure that all your addresses are formatted properly.

Using my-instance[0] does not work either.


You need to specify the instance using:
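In my experience the address has to include the resource type as well as the name and index, quoted so the shell doesn’t try to glob-expand the brackets. Assuming the resource is an aws_instance:

```
terraform state show 'aws_instance.my-instance[0]'
```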