docker system info

This spits out a ton of information. E.g.

docker system info

Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 2
Server Version: 18.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.93-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.952GiB
Name: linuxkit-025000000001
ID: 3OQR:XG62:6PED:J4FQ:L2XO:IDA2:WNBI:CY2Y:C2RC:UCHP:VKCQ:JVJO
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 24
 Goroutines: 50
 System Time: 2018-08-29T10:05:42.8904929Z
 EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Some of the more interesting fields, e.g.

OSType: linux
Architecture: x86_64

and Storage Driver:

Storage Driver: overlay2
Backing Filesystem: extfs
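
If you only want a field or two, docker system info also takes a Go-template --format flag. A quick sketch (the field names are inferred from the output above, so double-check them on your version):

docker system info --format '{{.OSType}} {{.Architecture}} {{.Driver}}'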

Note on aufs vs overlay2:

OverlayFS is a modern union filesystem that is similar to AUFS, but faster and with a simpler implementation. Docker provides two storage drivers for OverlayFS: the original overlay, and the newer and more stable overlay2.

https://docs.docker.com/storage/storagedriver/overlayfs-driver/

Docker Images

A container is basically a running Image.

An Image is a bunch of layers with a Manifest (saying how the Image should run).

As Images are Read Only, a Read Write layer is created per container.

Images in detail

delete

docker rmi <image id>

Potential errors:

Error response from daemon: conflict: unable to delete <image id> (must be forced) - image is referenced in multiple repositories

You’ll need to untag them all individually. E.g.

docker images | grep <image id>

then

docker rmi <repo>:<tag>
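
A sketched end-to-end example (the repo names and image ID are made up for illustration):

docker images | grep 4e8db158f18d
# someteam/redis   latest   4e8db158f18d   ...
# redis            4.0      4e8db158f18d   ...

docker rmi someteam/redis:latest   # just removes the tag
docker rmi redis:4.0               # last tag removed, so the image itself is deleted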

https://docs.docker.com/engine/reference/commandline/rmi/#examples

 

Error response from daemon: conflict: unable to delete ae6b78bedf88 (must be forced) - image is being used by stopped container b6e81decac41

docker rmi -f <image id>

list

docker images

or

docker image ls

Note: you can optionally pass a repo name to list just that repo’s images. E.g.

docker images alpine

or filter with a wildcard (using Zsh you’ll need to use quotes):

docker images 'alp*'

pull

docker image pull redis

pull does an API request to a registry.

Step 1: get manifest

Step 2: pull layers

First, it looks for a Fat Manifest (aka Manifest List) and then, in turn, gets the Image Manifest. We then get a list of Layers which we pull.
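
You can poke at the manifests yourself with docker manifest inspect (depending on your Docker version this may need the experimental CLI features enabling):

docker manifest inspect redis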

Note: the digest is a content hash of the image, which we can see with:

docker image ls --digests

Note, even though docker system info reports the Docker Root Dir as /var/lib/docker on the Mac, the images are actually stored in the xhyve virtual machine.

https://forums.docker.com/t/var-lib-docker-does-not-exist-on-host/18314/2

docker history

Say you’ve pulled something with docker image pull redis; you can then see the commands that built the image using:

docker history redis

E.g.

docker history redis
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
4e8db158f18d        3 weeks ago         /bin/sh -c #(nop)  CMD ["redis-server"]         0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  EXPOSE 6379/tcp              0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENTRYPOINT ["docker-entry…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop) COPY file:9c29fbe8374a97f9…   344B
<missing>           3 weeks ago         /bin/sh -c #(nop) WORKDIR /data                 0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  VOLUME [/data]               0B
<missing>           3 weeks ago         /bin/sh -c mkdir /data && chown redis:redis …   0B
<missing>           3 weeks ago         /bin/sh -c set -ex;   buildDeps='   wget    …   24.8MB
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV REDIS_DOWNLOAD_SHA=fc…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV REDIS_DOWNLOAD_URL=ht…   0B
<missing>           3 weeks ago         /bin/sh -c #(nop)  ENV REDIS_VERSION=4.0.11     0B
<missing>           6 weeks ago         /bin/sh -c set -ex;   fetchDeps="   ca-certi…   3MB
<missing>           6 weeks ago         /bin/sh -c #(nop)  ENV GOSU_VERSION=1.10        0B
<missing>           6 weeks ago         /bin/sh -c groupadd -r redis && useradd -r -…   329kB
<missing>           6 weeks ago         /bin/sh -c #(nop)  CMD ["bash"]                 0B
<missing>           6 weeks ago         /bin/sh -c #(nop) ADD file:919939fa022472751…   55.3MB

For more info see: docker image inspect
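
E.g. to pull a single field out rather than the full JSON (a sketch – worth checking the field names against the full inspect output for your image):

docker image inspect redis --format '{{.Config.Cmd}}'
docker image inspect redis --format '{{json .RootFS.Layers}}'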

and delete with docker image rm redis

Registries

On-premises Registry – e.g. Docker Trusted Registry (part of Docker’s commercial offering).

Note: the official registry is docker.io. So, docker pull redis is shorthand for:

docker pull docker.io/redis:latest

latest is the tag, redis is the repo and docker.io is the registry.
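
The same fully-qualified form is how you pull from other registries, e.g. an on-premises one (the host, port and repo below are made up):

docker pull registry.mycompany.example:5000/payments/api:1.0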

Note: uncompressed layers are identified by a content hash, the Registry uses a distribution hash (because the layer is compressed before being uploaded) and the layers on the file system use a random ID.

Best Practices

  1. use official images (e.g. alpine)
  2. use specific versions of a docker image (rather than latest) – see the example below
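
E.g. (the tag below is just an example – pin whatever version you have actually tested against):

docker image pull alpine:3.8   # pinned and repeatable
docker image pull alpine       # implicitly :latest, which can change underneath you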

 

Docker: Architecture and Theory

Architecture Big Picture

Container: isolated area of an OS with resource usage limits applied

To build containers, we use Control Groups and Namespaces (low level, hard-to-use kernel constructs) – note: Docker makes these easy to use.

Workflow:

docker command -> API -> sets up control groups and namespaces -> generates container

Kernel Internals

Namespaces

  • Process ID (pid) – separate process tree each with its own PID 1
  • Network (net) – isolated network stack – i.e. eth0, IP address
  • Filesystem/mount (mnt) – separate root filesystem and mounts
  • Inter-proc comms (ipc) – let processes use shared memory
  • UTS (uts) – i.e. separate hostnames. Note: UTS = Unix Timesharing System
  • User (user) – map accounts inside container to host accounts
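
A rough way to see these on a Linux host (a sketch – the container name is arbitrary and inode numbers will differ):

ls -l /proc/$$/ns                # the namespaces your shell is in
docker container run -d --name web alpine sleep 1d
docker container top web         # gives the container's PID as seen by the host
sudo ls -l /proc/<pid>/ns        # different pid, net, mnt, uts and ipc entries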

Control Groups (aka cgroups – the Windows equivalent is Job Objects)

Police system resources: portion out disk I/O, RAM and CPU
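
Docker surfaces cgroup limits as flags on docker container run, e.g. (the numbers and container name are arbitrary examples):

docker container run -d --name capped --memory 256m --cpus 0.5 nginx
docker stats capped   # confirms the memory/CPU limits being enforced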

Layers

Union file system

Docker Engine

Note: whilst Docker Engine is used for creating containers, there is a whole load of other stuff plugging into it, such as:

  • Swarm
  • On-prem registry
  • Universal control plane
  • Ecosystem – e.g. Rancher, CircleCI

Some history: Docker was born at dotCloud (as a tool called dc) which used LXC under the hood. LXC changes kept breaking Docker, so LXC was replaced with libcontainer, and the dc tool was replaced by docker.

The docker daemon became a monolith (e.g. compose, authz, registry, REST API, orchestration, etc).

Kubernetes pulled in docker which already had orchestration – messy. So, Docker started refactoring.

This fed into the Open Container Initiative (OCI), which standardises the container image format and runtime.

Now:

Client -> daemon (Docker API) -> containerd -> OCI runtime (i.e. interfaces with the kernel)

Note: runc is the reference implementation of the OCI runtime spec.

On Windows, instead of containerd and runc we have Compute Services.

 

Example: creating a new container on Linux

docker container run

REST POST call to daemon

This then does a client.NewContainer(context, …) call to containerd.

containerd calls a shim which calls runc.

i.e. containerd and runc can be switched out if necessary.
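
On a Linux host a quick process listing shows the moving parts (a sketch – exact process names vary between Docker versions):

docker container run -d --name sleeper alpine sleep 1d
ps -ef | grep -E 'dockerd|containerd|shim|sleep'
# dockerd          <- the daemon (Docker API)
# containerd       <- manages container lifecycles
# containerd-shim  <- one per container, sits between containerd and runc
# sleep 1d         <- the container's process, just a normal Linux process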

 

And you can reinstall / upgrade / restart the daemon with no effect on running containers – the shim keeps them running.

daemon: orchestration, builds, stacks, overlay-networks.

 

Some buzzwords:

gRPC: an RPC framework – https://grpc.io/

containerd is a Cloud Native Computing Foundation (CNCF) project

Windows Containers

Two types – Native and Hyper-V.

Native – uses Namespace isolation (i.e. runs on Host OS kernel)

Hyper-V – Windows spins up a lightweight VM with its own kernel, i.e. 1 container per VM. To use it:

docker container run --isolation=hyperv

 

Letter i was highlighting entire line on my Mac

Just solved a very weird problem on my Mac.

When I typed the letter “i” the entire line would get highlighted. Basically my Mac had become useless. I envisaged days or weeks of lost productivity whilst I sent my Mac back to get a replacement.

It turned out to be a setting I’d enabled in Accessibility > Mouse & Trackpad > Enable Mouse Keys.

Why had I done that? To solve another bug on my Mac where the cursor disappears on external monitors. There seem to be several of these ongoing bugs that have never been fixed.

http://osxdaily.com/2013/07/19/disappearing-mouse-cursor-mac-os-x/

https://apple.stackexchange.com/questions/321447/mouse-cursor-disappeared-on-external-display-when-rotated

 

 

GKE – Google Kubernetes Engine

GKE is Google’s Kubernetes service and is layered on top of Google Compute Engine (aka GCE), which provides the compute instances.

Note that Container Engine (as referred to in some older documentation) was renamed as Kubernetes Engine back in November 2017.

Note: the submenus have moved as follows:

Compute > Container Engine > Container clusters -> Compute > Kubernetes Engine > Container clusters

Compute > Container Engine > Container Registry (aka Google’s hosted Docker Registry) -> Tools > Container Registry

E.g. spinning up a cluster:

  1. go to Compute > Kubernetes Engine > Container clusters and Add New
  2. select a Zone (by the way, interesting how europe-west1-a is missing. Apparently they took it down for maintenance years ago and never brought it back up!)
  3. leave Nodes at default: 3 (doesn’t include master – e.g. apiserver and scheduler)
  4. click Create (note the Equivalent REST or command line links below – there is also a gcloud sketch after this list)
  5. to see details once created, click your cluster. It should reveal:
    1. apiserver endpoint IP address
  6. click Shell icon (top right) to get a Cloud Shell. e.g.
    1. gcloud container clusters list
  7. click Connect (to right of cluster), paste in shell => configures kubectl to connect to our new cluster. e.g.
    1. kubectl get nodes
  8. can https to our new endpoint
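
Roughly the same thing from the command line (the cluster name and zone are just examples):

gcloud container clusters create my-cluster --zone europe-west1-b --num-nodes 3
gcloud container clusters list
gcloud container clusters get-credentials my-cluster --zone europe-west1-b
kubectl get nodes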

 

 

 

AWS Fargate

Fargate

AWS Fargate is a compute engine for Amazon ECS and EKS that allows you to run containers without having to manage servers or clusters. With AWS Fargate, you no longer have to provision, configure, and scale clusters of virtual machines to run containers.

https://aws.amazon.com/fargate/

Fargate is not currently (August 2018) available in the UK.

How does this differ from ECS (Elastic Container Service) and EKS (Elastic Container Service for Kubernetes) though?

ECS

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines.

https://aws.amazon.com/ecs/

ECS was first to market as a commercial container service among the big players and is now suffering as it’s rather outdated. It’s basically Docker as a Service, offering a Docker Registry (aka Amazon Elastic Container Registry or ECR) and support in its CLI for Docker Compose.

EKS

EKS (aka Amazon Elastic Container Service for Kubernetes) is a managed Kubernetes service.

The differences? Use:

  • ECS if you like using Docker
  • EKS if you like Kubernetes
  • Fargate if you don’t want to manage either Docker or Kubernetes

See also https://dzone.com/articles/ecs-vs-eks-vs-fargate-the-good-the-bad-the-ugly