Kubernetes Up & Running: Chapter 7



kubectl run alpaca-prod --image=gcr.io/kuar-demo/kuard-amd64:1 --replicas=3 --port=8080 --labels="ver=1,app=alpaca,env=prod"
kubectl expose deployment alpaca-prod
kubectl run bandicoot-prod --image=gcr.io/kuar-demo/kuard-amd64:2 --replicas=2 --port=8080 --labels="ver=2,app=bandicoot,env=prod"
kubectl expose deployment bandicoot-prod
kubectl get services -o wide


In another terminal:

ALPACA_POD=$(kubectl get pods -l app=alpaca -o jsonpath='{.items[0].metadata.name}')
echo $ALPACA_POD
alpaca-prod-7f94b54866-dwwxg
kubectl port-forward $ALPACA_POD 48858:8080

Forwarding from 127.0.0.1:48858 -> 8080
Forwarding from [::1]:48858 -> 8080


Now access the cluster with:

http://localhost:8080

  • use http, not https
  • If you get `localhost refused to connect`, check the original pod. E.g.
    • kubectl logs alpaca-prod-7f94b54866-dwwxg
2019/01/09 11:44:22 Starting kuard version: v0.7.2-1
2019/01/09 11:44:22 **********************************************************************
2019/01/09 11:44:22 * WARNING: This server may expose sensitive
2019/01/09 11:44:22 * and secret information. Be careful.
2019/01/09 11:44:22 **********************************************************************
2019/01/09 11:44:22 Config:
  "address": ":8080",
  "debug": false,
  "debug-sitedata-dir": "./sitedata",
  "keygen": {
    "enable": false,
    "exit-code": 0,
    "exit-on-complete": false,
    "memq-queue": "",
    "memq-server": "",
    "num-to-gen": 0,
    "time-to-run": 0
  "liveness": {
    "fail-next": 0
  "readiness": {
    "fail-next": 0
  "tls-address": ":8443",
  "tls-dir": "/tls"
2019/01/09 11:44:22 Could not find certificates to serve TLS
2019/01/09 11:44:22 Serving on HTTP on :8080

which seems to indicate it’s successfully serving on 8080 locally.

So the issue is with the code on page 67, i.e. it should be:

kubectl port-forward $ALPACA_POD 8080:8080
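
With the corrected command, a quick sanity check (a sketch; assumes $ALPACA_POD is still set from above and the cluster is reachable):

```shell
# run the corrected forward in the background
kubectl port-forward "$ALPACA_POD" 8080:8080 &
PF_PID=$!
sleep 2
# note: plain HTTP, not HTTPS
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080
kill "$PF_PID"
```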


DNS Resolver: http://localhost:8080/-/dns

Querying the name alpaca-prod returns:


alpaca-prod.default.svc.cluster.local.	5	IN	A

i.e. name of service: alpaca-prod

namespace: default

resource type: svc

base domain: cluster.local

Note: you could use:

  • alpaca-prod.default
  • alpaca-prod.default.svc.cluster.local.
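
The breakdown above is just string assembly. A sketch (names taken from the example; inside a pod, the resolver's search domains are what let the shorter forms work):

```shell
# illustrative: how a service's fully qualified DNS name is put together
service=alpaca-prod   # name of service
namespace=default     # namespace
# resource type is "svc", base domain is "cluster.local"
fqdn="${service}.${namespace}.svc.cluster.local"
echo "$fqdn"   # -> alpaca-prod.default.svc.cluster.local
```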


Adding in a readinessProbe:

      - image: gcr.io/kuar-demo/kuard-amd64:1
        imagePullPolicy: IfNotPresent
        name: alpaca-prod
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          periodSeconds: 2
          initialDelaySeconds: 0
          failureThreshold: 3
          successThreshold: 1

and restart port-forward (as the pods are recreated).

There should now be a Readiness Probe tab where you can make that pod fail or succeed its /ready checks.

After 3 failed checks, the pod's IP address is removed from the service's endpoints (the pod itself is not destroyed — that would be a liveness probe); after a successful check it is added back. Watch this with:

kubectl get endpoints alpaca-prod --watch


Now, after halting the port-forward and watch, we’ll look at NodePorts:

kubectl edit service alpaca-prod

and change

type: ClusterIP

to

type: NodePort

The change takes effect as soon as you save, i.e.

kubectl describe service alpaca-prod

shows `Type: NodePort`
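
The edited spec ends up looking something like this (a sketch; the nodePort value shown is illustrative — Kubernetes assigns one from 30000–32767 if you don't set it):

```yaml
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
    nodePort: 32711   # illustrative; auto-assigned if omitted
  selector:
    app: alpaca
    env: prod
    ver: "1"
```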

Note: if you misspell the Service type you’ll immediately be bounced back into the Editor with a couple of lines at the top indicating the problem. E.g.

# services "alpaca-prod" was not valid:
# * spec.type: Unsupported value: "Nodey": supported values: "ClusterIP", "ExternalName", "LoadBalancer", "NodePort"




See also Kubernetes: kubectl

Kubernetes Up & Running: Chapter 4

Common kubectl commands

Namespace and Contexts

kubectl config set-context my-context --namespace=mystuff

kubectl config use-context my-context
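
If you don't want to switch context, there is also a per-command alternative:

```shell
# equivalent for a single command, without changing the current context
kubectl get pods --namespace=mystuff
# or the short form
kubectl get pods -n mystuff
```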


Note: to list namespaces, see Kubernetes: Namespaces


Page 34

kubectl get pods

Note: -o wide gives more information.

kubectl describe pod <pod id>


Page 35

kubectl label pods <pod id> color=green

Show labels:

kubectl get pods --show-labels

Remove label:

kubectl label pods bar color-

Note: the command in the book does not work:

kubectl label pods bar -color


unknown shorthand flag: 'c' in -color



Copy-and-paste: https://github.com/rusrushal13/Kubernetes-Up-and-Running/blob/master/Chapter4.md

Kubernetes Up & Running: Chapter 2

Page 16

1. make sure you’ve cloned the git repo mentioned earlier in the chapter

2. once in the repo, run:

make build

to build the application.

3. create this Dockerfile (not the one mentioned in the book)

FROM alpine
LABEL maintainer="e@snowcrash.eu"
COPY bin/1/amd64/kuard /kuard
ENTRYPOINT ["/kuard"]

MAINTAINER is deprecated. Use a LABEL instead: https://github.com/kubernetes-up-and-running/kuard/issues/7

However, whilst MAINTAINER takes a single argument, LABEL takes key/value pairs. E.g.

LABEL <key>=<value>



And the  COPY path in the book is incorrect.

and run

docker build -t kuard-amd64:1 .

to build the Dockerfile.

Here we’ve got a repo name of kuard-amd64 and a tag of 1.


4. Check the repo using

docker images

Note: a registry is a collection of repositories, and a repository is a collection of images



Page 17

Files removed in subsequent layers are not available but still present in the image.


Image sizes:

docker images

Or a specific one:

docker image ls <repository name>

E.g. alpine is 4.41MB.


Let’s create a 1MB file and add / remove it:

dd if=/dev/zero of=1mb.file bs=1024 count=1024


then copy it in:

FROM alpine
COPY 1mb.file /

Now building it (and creating a repo called alpine_1mb) we can see the image has increased in size by a bit over 1MB (probably due to the overhead of an additional layer).

However, if we now remove this file in a subsequent Dockerfile – e.g. with something like:

FROM alpine_1mb
RUN rm /1mb.file

the image is still the same size.

The solution is to ensure you use an rm in the same RUN command as you create/use your big file: https://stackoverflow.com/questions/53998310/docker-remove-intermediate-layer

and https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run
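
The pattern from those links can be sketched in a Dockerfile like this (a minimal illustration; `do_something_with` stands in for whatever actually needs the file):

```dockerfile
FROM alpine
# create, use, and delete the large file within a single RUN, so no
# layer ever contains it and the final image stays small
RUN dd if=/dev/zero of=/1mb.file bs=1024 count=1024 \
    && do_something_with /1mb.file \
    && rm /1mb.file
```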


Page 19


To run the kuard app, use:

docker run -d --name kuard --publish 8080:8080 gcr.io/kuar-demo/kuard-amd64:1



docker tag kuard-amd64:1 gcr.io/kuar-demo/kuard-amd64:1

According to https://docs.docker.com/get-started/part2/#tag-the-image

the tag command is:

docker tag image username/repository:tag

so image is kuard-amd64:1

but what’s the username?

Is it gcr.io ?

Or gcr.io/kuar-demo?

The answer is that Docker's docs at the get-started link above are incorrect. You don't need a username or repository. It's just a label. E.g. see https://docs.docker.com/engine/reference/commandline/tag/

Correct would be:

docker tag image <any label you want>:tag

BUT for the purposes of pushing to a repository that label DOES need to be of a specific format. i.e. username/image_name.


Shame they didn’t explain that in more detail.
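
The anatomy of such a reference can be sketched with plain string splitting (the reference is the one used throughout this chapter; the parsing is illustrative and doesn't handle registries with ports):

```shell
# illustrative: pulling apart an image reference of the form
#   registry-host/namespace/name:tag
ref="gcr.io/kuar-demo/kuard-amd64:1"
tag="${ref##*:}"        # everything after the last ':'  -> 1
path="${ref%:*}"        # everything before it           -> gcr.io/kuar-demo/kuard-amd64
registry="${path%%/*}"  # first path component           -> gcr.io
name="${path##*/}"      # last path component            -> kuard-amd64
echo "$registry $name $tag"   # -> gcr.io kuard-amd64 1
```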


And the next line is misleading too.

docker push gcr.io/kuar-demo/kuard-amd64:1

This creates the impression that you’re pushing your image to a website (or registry) hosted at gcr.io.

It’s not.

It’s just the tag you created above. Having said that, I had to simplify the tag (from 2 slashes to 1 slash) to get it to work. E.g.

docker tag kuard-amd64:1 snowcrasheu/kuard-amd64:1

docker push snowcrash/kuar/kuard-amd64:1
The push refers to repository [docker.io/snowcrash/kuar/kuard-amd64]
7b816b232464: Preparing
73046094a9b8: Preparing
denied: requested access to the resource is denied

The reason for

denied: requested access to the resource is denied

is that (from https://stackoverflow.com/questions/41984399/denied-requested-access-to-the-resource-is-denied-docker )

You need to include the namespace for Docker Hub to associate it with your account.
The namespace is the same as your Docker Hub account name.
You need to rename the image to YOUR_DOCKERHUB_NAME/docker-whale.



To login with docker use:

docker login

or, to use a specific username/password:

docker login -u <username> -p <password>

However, --password-stdin is better.

and push with:

docker push snowcrasheu/kuard-amd64:1

which you should then be able to see on Docker Hub. E.g.



Limit CPU / Memory

docker run -d --name kuard \
  --publish 8080:8080 \
  --memory 200m \
  --memory-swap 1G \
  --cpu-shares 1024 \
  gcr.io/kuar-demo/kuard-amd64:1



How to change a repository name: 



Handy copy-and-paste code: https://github.com/rusrushal13/Kubernetes-Up-and-Running/blob/master/Chapter2.md