AWS: creating an EKS cluster

 

Notes: at the time of writing, EKS is only available in:

  • US West (Oregon) (us-west-2)
  • US East (N. Virginia) (us-east-1)
  • EU (Ireland) (eu-west-1)

Terraform guide:  https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html

and the AWS EKS guide: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html

 

Terraform notes:

  • The TF code creates two m4.large instances based on the latest EKS Amazon Linux 2 AMI: these are the operator-managed Kubernetes worker nodes that run Kubernetes service deployments

Kubernetes: creating a new namespace and using it in a new context

Here’s how to isolate your work using a new namespace called dev in a new context, also called dev:

Note: I’m aliasing kubectl to k.
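A minimal sketch (the cluster and user names are placeholders; check yours with kubectl config view):

kubectl create namespace dev
kubectl config set-context dev --namespace=dev --cluster=<cluster> --user=<user>
kubectl config use-context dev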

List namespaces with: kubectl get namespaces --show-labels

 

And to quickly switch between contexts use kubectx:

Install:

brew install kubectx

List contexts:

kubectx

Switch:

kubectx <name>

 

Delete namespaces with:

kubectl delete namespaces <name>

Note: this deletes the namespace but won't remove any context that references it from your kubeconfig – https://stackoverflow.com/questions/53283120/kubernetes-cant-delete-namespace/53283273#53283273

 

To do this declaratively, use a config file. E.g.

https://kubernetes.io/docs/tasks/administer-cluster/namespaces/#subdividing-your-cluster-using-kubernetes-namespaces
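For example, a minimal Namespace manifest (dev-namespace.yaml is just an illustrative file name):

apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev

Apply it with:

kubectl apply -f dev-namespace.yaml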

 


 

 

Kubernetes: Imperative vs Declarative

TLDR: use Declarative. E.g. Helm charts

 

Imperative commands

  • objects are created and managed/modified using the CLI
  • all operations are done on live objects

E.g.

kubectl create ns ghost
kubectl create quota blog --hard=pods=1 -n ghost
kubectl run ghost --image=ghost -n ghost
kubectl expose deployments ghost --port 2368 --type LoadBalancer -n ghost

Or

kubectl create service clusterip foobar --tcp=80:80

To modify any of the objects you can use the kubectl edit command or one of the convenience wrappers. For example, to scale the deployment:

kubectl scale deployment ghost --replicas 2 -n ghost

https://kubernetes.io/docs/concepts/overview/object-management-kubectl/imperative-command/

 

Declarative mode

Use a YAML file and run something like:

kubectl apply -f <object>.yaml
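For instance, a rough declarative equivalent of the imperative ghost example above (image and port come from that example; the rest is standard boilerplate you'd adjust, and the ghost namespace must already exist):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  namespace: ghost
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
      - name: ghost
        image: ghost
        ports:
        - containerPort: 2368
---
apiVersion: v1
kind: Service
metadata:
  name: ghost
  namespace: ghost
spec:
  type: LoadBalancer
  selector:
    app: ghost
  ports:
  - port: 2368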

https://kubernetes.io/docs/concepts/overview/object-management-kubectl/declarative-config/

 

More: https://medium.com/bitnami-perspectives/imperative-declarative-and-a-few-kubectl-tricks-9d6deabdde

Kubernetes: bootcamp tutorial

Module 3: Exploring your app

https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-interactive/

Step 1 Check application configuration

Let’s verify that the application we deployed in the previous scenario is running. We’ll use the kubectl get command and look for existing Pods:

kubectl get pods

If no Pods are running yet, the Deployment may still be starting up; list the Pods again.

Next, to view what containers are inside that Pod and what images are used to build those containers we run the describe pods command:

kubectl describe pods

We see here details about the Pod’s container: IP address, the ports used and a list of events related to the lifecycle of the Pod.

The output of the describe command is extensive and covers some concepts that we didn’t explain yet, but don’t worry, they will become familiar by the end of this bootcamp.

Note: the describe command can be used to get detailed information about most of the Kubernetes primitives: nodes, Pods, Deployments. The describe output is designed to be human readable, not to be scripted against.

Step 3 View the container logs

Anything that the application would normally send to STDOUT becomes logs for the container within the Pod. We can retrieve these logs using the kubectl logs command:
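If $POD_NAME isn't set yet, capture it first (this is the same go-template used in Module 4 below, and assumes a single Pod):

export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')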

kubectl logs $POD_NAME

Note: We don’t need to specify the container name, because we only have one container inside the pod.

 

Module 4: Services

https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-interactive/

Step 1 Create a new service

kubectl get pods

Next let’s list the current Services from our cluster:

kubectl get services

We have a Service called kubernetes that is created by default when minikube starts the cluster. To create a new service and expose it to external traffic we’ll use the expose command with NodePort as parameter (minikube does not support the LoadBalancer option yet).

kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080

Let's run the get services command again:

kubectl get services

We now have a running Service called kubernetes-bootcamp. Here we see that the Service received a unique cluster-IP, an internal port, and an external-IP (the IP of the Node).

To find out what port was opened externally (by the NodePort option) we’ll run the describe service command:

kubectl describe services/kubernetes-bootcamp

Create an environment variable called NODE_PORT that has the value of the Node port assigned:

export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT

Now we can test that the app is exposed outside of the cluster using curl, the IP of the Node and the externally exposed port:

curl $(minikube ip):$NODE_PORT

 

Step 2: Using labels

The Deployment automatically created a label for our Pod. With the describe deployment command you can see the name of that label:

kubectl describe deployment

Let’s use this label to query our list of Pods. We’ll use the kubectl get pods command with -l as a parameter, followed by the label values:

kubectl get pods -l run=kubernetes-bootcamp

You can do the same to list the existing services:

kubectl get services -l run=kubernetes-bootcamp

Get the name of the Pod and store it in the POD_NAME environment variable:

export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
echo Name of the Pod: $POD_NAME

To apply a new label we use the label command followed by the object type, object name and the new label:

kubectl label pod $POD_NAME app=v1

This will apply a new label to our Pod (we pinned the application version to the Pod), and we can check it with the describe pod command:

kubectl describe pods $POD_NAME

We see here that the label is now attached to our Pod. And we can now query the list of Pods using the new label:

kubectl get pods -l app=v1

And we see the Pod.

Step 3 Deleting a service

To delete Services you can use the delete service command. Labels can also be used here:

kubectl delete service -l run=kubernetes-bootcamp

Confirm that the service is gone:

kubectl get services

This confirms that our Service was removed. To confirm that the route is no longer exposed, you can curl the previously exposed IP and port:

curl $(minikube ip):$NODE_PORT

This proves that the app is no longer reachable from outside the cluster. You can confirm that the app is still running with a curl inside the Pod:

kubectl exec -ti $POD_NAME -- curl localhost:8080

We see here that the application is up.

 

Module 5: Scaling your app

Step 1: Scaling a deployment

To list your deployments, use the get deployments command:

kubectl get deployments

We should have 1 Pod. If not, run the command again. The output shows:

  • DESIRED: the configured number of replicas
  • CURRENT: how many replicas are running now
  • UP-TO-DATE: the number of replicas that were updated to match the desired (configured) state
  • AVAILABLE: how many replicas are actually available to the users
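Example output (values are illustrative):

NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1         1         1            1           5m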

Next, let’s scale the Deployment to 4 replicas. We’ll use the kubectl scale command, followed by the deployment type, name and desired number of instances:

kubectl scale deployments/kubernetes-bootcamp --replicas=4

To list your Deployments once again, use get deployments:

kubectl get deployments

The change was applied, and we have 4 instances of the application available. Next, let’s check if the number of Pods changed:

kubectl get pods -o wide

There are 4 Pods now, with different IP addresses. The change was registered in the Deployment events log. To check that, use the describe command:

kubectl describe deployments/kubernetes-bootcamp

You can also view in the output of this command that there are 4 replicas now.

Step 2: Load Balancing

Let's check that the Service is load-balancing the traffic. To find out the exposed IP and port we can use describe service, as we learned in the previous module:

kubectl describe services/kubernetes-bootcamp

Create an environment variable called NODE_PORT that has the value of the Node port:

export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT

Next, we’ll do a curl to the exposed IP and port. Execute the command multiple times:

curl $(minikube ip):$NODE_PORT

We hit a different Pod with every request. This demonstrates that the load-balancing is working.

Step 3: Scale Down

To scale down the Service to 2 replicas, run the scale command again:

kubectl scale deployments/kubernetes-bootcamp --replicas=2

List the Deployments to check if the change was applied with the get deployments command:

kubectl get deployments

The number of replicas decreased to 2. List the Pods with get pods:

kubectl get pods -o wide

This confirms that 2 Pods were terminated.

 

Module 6: Updating your app

Step 1: Update the version of the app

To list your deployments, use the get deployments command:

kubectl get deployments

To list the running Pods use the get pods command:

kubectl get pods

To view the current image version of the app, run a describe command against the Pods (look at the Image field):

kubectl describe pods

To update the image of the application to version 2, use the set image command, followed by the deployment name and the new image version:

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2

The command notified the Deployment to use a different image for the app and initiated a rolling update. Check the status of the new Pods, and view the old ones terminating, with the get pods command:

kubectl get pods

Step 2: Verify an update

First, let’s check that the App is running. To find out the exposed IP and Port we can use describe service:

kubectl describe services/kubernetes-bootcamp

Create an environment variable called NODE_PORT that has the value of the Node port assigned:

export NODE_PORT=$(kubectl get services/kubernetes-bootcamp -o go-template='{{(index .spec.ports 0).nodePort}}')
echo NODE_PORT=$NODE_PORT

Next, we'll do a curl to the exposed IP and port:

curl $(minikube ip):$NODE_PORT

We hit a different Pod with every request and we see that all Pods are running the latest version (v2).

The update can also be confirmed by running a rollout status command:

kubectl rollout status deployments/kubernetes-bootcamp

To view the current image version of the app, run a describe command against the Pods:

kubectl describe pods

We now run version 2 of the app (look at the Image field).

 

Step 3: Rollback an update

Let's perform another update, and deploy an image tagged as v10:

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=gcr.io/google-samples/kubernetes-bootcamp:v10

Use get deployments to see the status of the deployment:

kubectl get deployments

And something is wrong… We do not have the desired number of Pods available. List the Pods again:

kubectl get pods

A describe command on the Pods should give more insight:

kubectl describe pods

There is no image called v10 in the repository. Let’s roll back to our previously working version. We’ll use the rollout undo command:

kubectl rollout undo deployments/kubernetes-bootcamp

The rollout undo command reverted the deployment to the previous known state (v2 of the image). Updates are versioned, and you can revert to any previously known state of a Deployment. List the Pods again:

kubectl get pods

Four Pods are running. Check again the image deployed on them:

kubectl describe pods

We see that the deployment is using a stable version of the app (v2). The Rollback was successful.

 

Codefresh Hello World using Go

There’s a Codefresh Hello World example using Go here:

https://github.com/codefreshdemo/cf-example-golang-hello-world

which unfortunately fails at the test stage with:

Kubernetes: Ingress Controllers

To expose your services, you use a Kubernetes resource called an Ingress (rules and config for how traffic gets forwarded to your service).

However, defining the Ingress resource doesn't do anything by itself. You'll need an Ingress Controller running in the cluster to actually act on it and create the resources.
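A minimal sketch of an Ingress resource (the hostname, service name, and port are placeholders; whether and how this becomes real infrastructure depends on the controller you install):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80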

More reading:

https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers

https://aws.amazon.com/blogs/apn/coreos-and-ticketmaster-collaborate-to-bring-aws-application-load-balancer-support-to-kubernetes/

AWS ALB Target Groups

A target group routes requests to targets (e.g. EC2 instances or a Kubernetes service).

First, a listener rule is created which specifies the target group and conditions, i.e. when a rule condition is met, traffic is forwarded to the target group.
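A rough sketch with the AWS CLI (names, ARNs, and the VPC ID are placeholders):

# Create a target group for HTTP traffic to EC2 instances
aws elbv2 create-target-group --name my-targets --protocol HTTP --port 80 --vpc-id <vpc-id> --target-type instance

# Add a listener rule: path condition -> forward to the target group
aws elbv2 create-rule --listener-arn <listener-arn> --priority 10 --conditions Field=path-pattern,Values='/api/*' --actions Type=forward,TargetGroupArn=<target-group-arn>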

 

For more on the following, see:

  • Routing Configuration
  • Target Type
  • Registered Targets
  • Target Group Attributes
  • Deregistration Delay
  • Slow Start Mode
  • Sticky Sessions
  • Create a Target Group
  • Health Checks for Your Target Groups
  • Register Targets with Your Target Group
  • Tags for Your Target Group
  • Delete a Target Group

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html

E.g.

Sticky sessions route requests to the same target in a target group.  To use sticky sessions, the clients must support cookies. Useful for stateful apps.
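Stickiness is a target group attribute, e.g. (the ARN is a placeholder and the duration is just an example value):

aws elbv2 modify-target-group-attributes --target-group-arn <target-group-arn> --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie Key=stickiness.lb_cookie.duration_seconds,Value=86400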

Note: WebSockets are inherently sticky because the target returns an HTTP 101 (Switching Protocols: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/101) and then, once the WebSockets upgrade is complete, cookie-based stickiness is no longer used.