Fully automated canary deployments in Kubernetes

See here for a Hello World example using Codefresh.

For a manual canary deployment, see https://github.com/codefresh-io/k8s-canary-deployment, which uses a bash script (`k8s-canary-rollout.sh`) driven by parameters.

 

Otherwise proceed to:

https://medium.com/containers-101/fully-automated-canary-deployments-in-kubernetes-70a671105273

This webinar shows canary deployments with and without Istio, using Helm for the deployments: https://codefresh.io/webinars/istio-canary-deployment-with-helm-and-codefresh/

 

k8s deployment strategies

1. Start a local Kubernetes v1.10.0 cluster:

minikube start --kubernetes-version v1.10.0 --memory 8192 --cpus 2

Errors:

If it hangs on:

Starting cluster components...

You can see what’s going on with:

minikube logs

which spits out thousands of lines of logs.

Annoyingly, `minikube logs -f` does not work even though it’s implemented internally: https://github.com/kubernetes/dashboard/issues/1083
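A crude workaround (just a sketch) is to pipe the output through tail, or poll it – assuming `watch` is installed:

# show only the most recent lines
minikube logs | tail -n 50

# re-run every few seconds
watch -n 5 'minikube logs | tail -n 20'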

See Installing Kubernetes: Minikube for a solution.

 

2. helm init

3. Install Prometheus

helm install \
    --namespace=monitoring \
    --name=prometheus \
    --version=7.0.0 \
    stable/prometheus

which outputs

NAME:   prometheus
LAST DEPLOYED: Thu Nov  1 13:22:45 2018
NAMESPACE: monitoring
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                     AGE
prometheus-alertmanager  1s
prometheus-server        1s

==> v1beta1/ClusterRoleBinding
prometheus-kube-state-metrics  1s
prometheus-server              1s

==> v1beta1/DaemonSet
prometheus-node-exporter  1s

==> v1/ConfigMap
prometheus-alertmanager  1s
prometheus-server        1s

==> v1/ServiceAccount
prometheus-alertmanager        1s
prometheus-kube-state-metrics  1s
prometheus-node-exporter       1s
prometheus-pushgateway         1s
prometheus-server              1s

==> v1beta1/ClusterRole
prometheus-kube-state-metrics  1s
prometheus-server              1s

==> v1/Service
prometheus-alertmanager        1s
prometheus-kube-state-metrics  1s
prometheus-node-exporter       1s
prometheus-pushgateway         1s
prometheus-server              1s

==> v1beta1/Deployment
prometheus-alertmanager        1s
prometheus-kube-state-metrics  1s
prometheus-pushgateway         0s
prometheus-server              0s

==> v1/Pod(related)

NAME                                            READY  STATUS             RESTARTS  AGE
prometheus-node-exporter-mfgcj                  0/1    ContainerCreating  0         1s
prometheus-alertmanager-99f6bfbcc-b8hkc         0/2    ContainerCreating  0         0s
prometheus-kube-state-metrics-6584885ccf-fkkxc  0/1    ContainerCreating  0         0s
prometheus-pushgateway-d5fdc4f5b-m7kzj          0/1    ContainerCreating  0         0s
prometheus-server-86887bb56b-sjwhq              0/2    Pending            0         0s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-server.monitoring.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace monitoring port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
prometheus-alertmanager.monitoring.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace monitoring port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
prometheus-pushgateway.monitoring.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace monitoring -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace monitoring port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/

Note: the pods above are still in `ContainerCreating` – you can check the status of the release with:

helm list

This shows the state of the release you’ve just created:

NAME        REVISION  UPDATED                   STATUS    CHART             APP VERSION  NAMESPACE
prometheus  1         Thu Nov  1 13:22:45 2018  DEPLOYED  prometheus-7.0.0  2.3.2        monitoring

whose five pods (also shown in the install output above) were:

prometheus-node-exporter-mfgcj                  0/1  ContainerCreating  0  1s
prometheus-alertmanager-99f6bfbcc-b8hkc         0/2  ContainerCreating  0  0s
prometheus-kube-state-metrics-6584885ccf-fkkxc  0/1  ContainerCreating  0  0s
prometheus-pushgateway-d5fdc4f5b-m7kzj          0/1  ContainerCreating  0  0s
prometheus-server-86887bb56b-sjwhq              0/2  Pending            0  0s
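To watch the pods themselves (rather than the release) come up, you can use:

kubectl get pods --namespace monitoring -w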

 

 

 

Source: https://github.com/ContainerSolutions/k8s-deployment-strategies

Helm Charts

Charts describe a set of Kubernetes resources – e.g. a full web app stack with HTTP servers, databases, caches, etc.

requirements.yaml defines dependencies using:

  • name
  • version
  • repository

Tags: groups of optional dependencies that can be switched on or off together (similar to Ansible tags).

Condition: a values path (e.g. something.enabled) that enables or disables a single dependency – conditions always override tags.
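A minimal requirements.yaml sketch – the dependency name, version and values keys here are illustrative, not taken from a specific chart:

dependencies:
  - name: mysql
    version: 0.10.2
    repository: https://kubernetes-charts.storage.googleapis.com
    tags:
      - database
    condition: mysql.enabled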

See https://github.com/helm/helm/blob/master/docs/charts.md

 

Manage charts with helm:

  • create – scaffolds a new chart
  • package – packages a chart into a versioned archive
  • lint – checks a chart for problems
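For example (the chart name mychart is just a placeholder):

helm create mychart      # scaffold a new chart called mychart
helm lint mychart        # check it for problems
helm package mychart     # produce a versioned .tgz archive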

 

Getting started with Helm:

1. Check the kubectl config – i.e. that the current context is the local minikube:

kubectl config view | grep current
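or, more directly:

kubectl config current-context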

2. Initialise helm (this also installs Tiller into the cluster):

helm init

https://medium.com/@anthonyganga/getting-started-with-helm-tiller-in-kubernetes-part-one-3250aa99c6ac

 

Installing MySQL as a Helm Chart

Run helm install stable/mysql, which uses the chart at https://github.com/helm/charts/tree/master/stable/mysql :

helm install stable/mysql

NAME:   queenly-seahorse
LAST DEPLOYED: Mon Nov  5 11:22:13 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                    AGE
queenly-seahorse-mysql  0s

==> v1/ConfigMap
queenly-seahorse-mysql-test  0s

==> v1/PersistentVolumeClaim
queenly-seahorse-mysql  0s

==> v1/Service
queenly-seahorse-mysql  0s

==> v1beta1/Deployment
queenly-seahorse-mysql  0s

==> v1/Pod(related)

NAME                                     READY  STATUS   RESTARTS  AGE
queenly-seahorse-mysql-6dc964999c-h4w54  0/1    Pending  0         0s


NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
queenly-seahorse-mysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default queenly-seahorse-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h queenly-seahorse-mysql -p

To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following command to route the connection:
    kubectl port-forward svc/queenly-seahorse-mysql 3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}

Let’s test that we can connect to MySQL.

From the NOTES output above, get the MySQL root password:

kubectl get secret --namespace default queenly-seahorse-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

and copy it.

Note: you can get the pod name with:

kubectl get pods

Now exec into MySQL with:

kubectl exec -it queenly-seahorse-mysql-6dc964999c-h4w54 bash

Install MySQL client:

apt-get update && apt-get install mysql-client -y --force-yes

and connect with:

mysql -h localhost -p
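Alternatively, a one-liner sketch that skips the interactive steps – this assumes the mysql client binary is already present in the MySQL container image:

# grab the root password from the secret, then run the client directly in the pod
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default queenly-seahorse-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
kubectl exec -it queenly-seahorse-mysql-6dc964999c-h4w54 -- mysql -uroot -p"$MYSQL_ROOT_PASSWORD"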

 

 

More on:

  • kubectl commands here: Kubernetes: kubectl
  • MySQL Notes here: https://github.com/helm/charts/blob/master/stable/mysql/templates/NOTES.txt

Installing WordPress as a Helm Chart

helm install --name my-release stable/wordpress

List with

helm list

and delete with

helm delete my-release
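Note: in Helm 2 this keeps the release history around (so the name stays reserved); to remove it completely use:

helm delete --purge my-release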

https://github.com/helm/charts/tree/master/stable/wordpress

 

Errors

Error: no available release name found

https://github.com/helm/helm/issues/3055

also

https://stackoverflow.com/questions/43499971/helm-error-no-available-release-name-found/43513182

 

Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout

This appears when you run helm list.

The comment at https://github.com/helm/helm/issues/3055#issuecomment-385371327 suggests deleting the tiller service and deployment with kubectl, then re-initialising Tiller with a dedicated service account:

$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller

So, trying:

kubectl delete tiller-deploy-6fd8d857bc-fp5s2
error: resource(s) were provided, but no name, label selector, or --all flag specified

kubectl list
Error: unknown command "list" for "kubectl"

Neither works: kubectl delete needs a resource type as well as a name, and kubectl has no list command.
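For reference, a sketch of what the deletes need to look like – resource type and namespace given explicitly, and assuming both the deployment and the service are named tiller-deploy (as in the kubectl get deploy output below):

kubectl delete deployment tiller-deploy --namespace kube-system
kubectl delete service tiller-deploy --namespace kube-system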

The Stack Overflow question below suggests deleting tiller using

helm reset

but this gives:

helm reset
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout

 

https://stackoverflow.com/questions/47583821/how-to-delete-tiller-from-kubernetes-cluster

and helm ls

Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout

Another, not very helpful, issue on why you can’t delete tiller:

https://github.com/helm/helm/issues/3536

Checking tiller:

kubectl get deploy -n kube-system

NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns                1         1         1            1           12d
kube-dns               1         1         1            0           71d
kubernetes-dashboard   1         1         1            0           71d
tiller-deploy          1         1         1            1           8d

 

To see the pods in kube-system:

kubectl get pods --namespace kube-system

e.g.

tiller-deploy-6fd8d857bc-fp5s2 1/1 Running 7 8d

 

Notes:

Tiller namespaces and RBAC

Namespaces can be used to separate environments – e.g. production and staging.

https://medium.com/@amimahloof/how-to-setup-helm-and-tiller-with-rbac-and-namespaces-34bf27f7d3c3

RBAC and Service Accounts: 

https://docs.helm.sh/using_helm/#securing-your-helm-installation
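A sketch, along the lines of the articles above, of a Tiller scoped to a single namespace with its own service account – the names (staging, tiller-staging) are illustrative:

kubectl create namespace staging
kubectl create serviceaccount tiller-staging --namespace staging
kubectl create rolebinding tiller-staging-binding \
    --namespace staging \
    --clusterrole=admin \
    --serviceaccount=staging:tiller-staging
helm init --service-account tiller-staging --tiller-namespace staging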

 

Further reading

Use ksonnet to generate Kubernetes configurations from Helm Charts

 

 

Kubernetes: helm

Getting it up and running

Install with brew install kubernetes-helm

https://github.com/helm/helm/blob/master/README.md

helm init
Creating /Users/snowcrash/.helm
Creating /Users/snowcrash/.helm/repository
Creating /Users/snowcrash/.helm/repository/cache
Creating /Users/snowcrash/.helm/repository/local
Creating /Users/snowcrash/.helm/plugins
Creating /Users/snowcrash/.helm/starters
Creating /Users/snowcrash/.helm/cache/archive
Creating /Users/snowcrash/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /Users/snowcrash/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Main concepts:

1. Chart: Helm package – contains all the resource definitions to run an application in a Kubernetes cluster.

See Helm Charts for examples of using Helm

2. Repository: where charts are stored

3. Release: an instance of a chart in a Kubernetes cluster. E.g. with a MySQL chart, you can have 2 databases running in a cluster by installing the chart twice. Each is its own release with its own release name.
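For example (the release names here are just placeholders):

helm install --name db-staging stable/mysql
helm install --name db-production stable/mysql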

 

Helm has two parts: the client (helm) and the server (tiller).

Tiller runs inside the Kubernetes cluster and manages the releases installed there.

 

Commands:

helm – lists the available commands.

helm version – outputs the client / server versions.

helm init – sets up Helm locally and installs Tiller into the cluster.

helm init --upgrade – upgrades Tiller.

 

Errors

Error: Get https://192.168.64.5:8443/api/v1/namespaces/kube-system/pods?labelSelector=app%3Dhelm%2Cname%3Dtiller: dial tcp 192.168.64.5:8443: connect: connection refused

it’s probably because Kubernetes isn’t running. E.g. with minikube, use:

minikube start

 

Error: could not find a ready tiller pod

If you run kubectl -n kube-system get po, check that a tiller-deploy pod is listed there. Then, given the pod `tiller-deploy-6fd8d857bc-fp5s2`, try:

kubectl logs tiller-deploy-6fd8d857bc-fp5s2

Error from server (NotFound): pods "tiller-deploy-6fd8d857bc-fp5s2" not found

The solution is to use

 --namespace kube-system

i.e.

kubectl logs --namespace kube-system tiller-deploy-6fd8d857bc-fp5s2

which says:

[main] 2018/10/31 11:39:02 Starting Tiller v2.11.0 (tls=false)
[main] 2018/10/31 11:39:02 GRPC listening on :44134
[main] 2018/10/31 11:39:02 Probes listening on :44135
[main] 2018/10/31 11:39:02 Storage driver is ConfigMap
[main] 2018/10/31 11:39:02 Max history per release is 0

https://github.com/helm/helm/issues/2064

https://github.com/helm/helm/issues/2295

 

helm version is now working, though it’s unclear whether that was down to the earlier command I ran:

minikube addons enable registry-creds

 

Error: could not find tiller

Need to run helm init.

https://stackoverflow.com/questions/51646957/helm-could-not-find-tiller

 

 

For more on Helm Charts see: Helm Charts