Docker Networking: 3 major areas – CNM, Libnetwork, Drivers


CNM

aka Container Network Model. This is the Docker networking model.

Note: there is an alternative – CNI (aka Container Network Interface) from CoreOS, which is more suited to Kubernetes.

The CNM has 3 main components:

  • Sandbox: contains the configuration of a container’s network stack (a network namespace in Linux)
  • Endpoint: joins a Sandbox to a Network (a network interface, e.g. eth0)
  • Network: group of Endpoints that can communicate directly




Libnetwork

aka the Control & Management plane: the real-world implementation of the CNM by Docker. Cross-platform and pluggable.


Drivers

aka the Data plane: the network-specific detail. E.g.

  • Overlay
  • Bridge



docker: Error response from daemon: driver failed programming external connectivity on endpoint …: Bind for failed: port is already allocated.

So, I’m trying to run Jenkins via Docker with:

docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

but getting the “port is already allocated” error shown at the top.

Clearly the previous time I ran Jenkins the container hadn’t been cleanly stopped. Or there was another container using that port (it turned out to be the latter).

Here’s how to fix it.

  1. Check the port: netstat -nl -p tcp | grep 8080 (interestingly this didn’t show anything, even though…)
  2. docker ps showed a container using this port
  3. docker stop <container name> solved the problem.
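In script form, the diagnosis boils down to grepping the Ports column of docker ps; here’s a sketch against made-up sample output (container names invented):

```shell
# Sketch: find which container holds a host port by parsing `docker ps`-style
# output. The sample below is made up; with a live daemon you'd pipe in:
#   docker ps --format '{{.Names}} {{.Ports}}'
sample='web1 0.0.0.0:8080->80/tcp
db1 0.0.0.0:5432->5432/tcp'
culprit=$(printf '%s\n' "$sample" | grep ':8080->' | awk '{print $1}')
echo "$culprit"   # the container to docker stop
```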

docker port

Golden rule:

port1:port2 means you’re mapping port1 on the host to port2 on the container.

i.e. host:container
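As a memory aid, the rule is just “text before the colon = host, text after = container”; a trivial shell sketch of the split:

```shell
# host:container – split a -p style mapping (simple single-colon case only;
# the real -p syntax also allows ip:hostPort:containerPort)
mapping="8080:80"
host_port="${mapping%%:*}"       # before the colon -> host side
container_port="${mapping##*:}"  # after the colon  -> container side
echo "host=$host_port container=$container_port"
```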

Say you run:

docker container run --rm -d --name web -p 8080:80 nginx

you’re mapping port 80 in the container to port 8080 on the host.

-p => publish a container’s port to the host

docker port web

80/tcp ->

which means:

80 on container maps to 8080 on host

See also the Tech Rant below and the Docker run reference section on -p / --expose.

Tech Rant

Part of the problem with Tech is:

  • keeping so many things going in your brain at once
  • having to be an expert at so many things
  • the brain-crushing complexity

Example 1

I’m trying to figure out how docker port works. i.e. with this:

docker container run --rm -d --name web -p 8080:80 nginx

is 8080 on the host or the container?

E.g. I can run this: docker port web
80/tcp ->

but I’m not clear on the mapping so I check the docs:

which shows you an example:

but does not explain what the line actually means.

Is it container to host or host to container?

Next doc is the Docker run reference section on -p / --expose.

This explains that 80:8080 is host -> container. Which would mean that the initial mapping I used for nginx above is mapping 80 in the container to 8080 on the host, i.e. the other way around.

Let’s test.

1. from the VM

Assuming nginx is listening on 80 (seems reasonable!) then I’d get something back from 8080 on the host – i.e. in the VM; we haven’t even started with what’s happening on the actual host, my Mac! So, from the VM:

curl localhost:8080

(what’s the format for using curl – is it curl localhost:8080 or curl localhost 8080? Check some more docs. Not an unreasonable question given that telnet doesn’t use a colon – i.e. you’d do telnet localhost 8080.)

which thankfully gives us some nginx output.

So, going back to:

docker port web
80/tcp ->

This is saying:

80 on container maps to 8080 on host

Annoyingly, the other way round to the format used earlier (i.e. of host to container).

If I do docker ps I get:


which even more annoyingly is the other way around! i.e. host -> container. I guess the way to remember it is that it’s host -> container unless you examine the container itself – e.g. using docker port web.


Some gotchas here:

  • curl localhost 8080 would give connection refused ‘cos curl by default will test on port 80 – given that we’ve got the command wrong it’s testing 80, not 8080
  • if we’d tested using the container IP address. e.g.

docker container inspect web

gives "IPAddress": ""


curl’ing the container IP on port 80 gives us nginx output, ‘cos we’re using the container IP address and port.


but curl’ing the container IP on port 8080 would give:
curl: (7) Failed to connect to port 8080: Connection refused


2. from the container

we need to exec from the VM into the container. Another doc page:

docker exec -it web /bin/bash


curl localhost:80
bash: curl: command not found

So we need to install curl. More docs, ‘cos it’s different on the Mac (I use brew), Debian (apt) and CentOS (yum) – so first find out the OS:

cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"

so we’re using Debian.
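This check can be scripted; a sketch that maps the os-release ID field to a package manager (the sample string stands in for /etc/os-release, and the ID-to-tool mapping is my assumption, covering only the OSes mentioned above):

```shell
# Pick a package manager from the os-release ID field.
sample='PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
ID=debian'
os_id=$(printf '%s\n' "$sample" | sed -n 's/^ID=//p')
case "$os_id" in
  debian|ubuntu) pkg="apt-get" ;;
  centos|rhel)   pkg="yum" ;;
  *)             pkg="unknown" ;;
esac
echo "$pkg"
```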

It should be apt-get but I get:

More docs on how to install on Debian:

says apt install curl, which gives me the same problem.

More docs – seems like you have to run apt-get update first.

And finally I can verify that, in the container,

curl localhost:80

outputs nginx content.


Note also: I’ve got Ubuntu in the VM and Debian in the container.

VM: my Vagrantfile uses config.vm.box = "ubuntu/bionic64"

Container: docker container run --rm -d --name web -p 8080:80 nginx uses a Debian-based container.


Finally, I write a blog post so I can remember in future how it all works without having to spend an entire morning figuring it out. I open up WordPress and it’s using Gutenberg which I’ve been struggling with. Trying to disable it is a pain. This doesn’t work:

How to Disable Gutenberg & Return to the Classic WordPress Editor

Groan. I just pasted a link and don’t want the Auto Insert content feature however I can’t even be bothered to try and figure out how to disable the Auto Insert behaviour.

In the end, I posted a test post and went to All Posts and clicked Classic Editor under the post.

Another rant: WordPress’ backtick -> formatted code only occasionally works – very frustrating.


3. To close the loop let’s test from my Mac

As we’re using 8080 on the host let’s forward to 8081 on the Mac. Add this to the Vagrantfile: config.vm.network "forwarded_port", guest: 8080, host: 8081
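For context, here’s roughly where that line sits in the Vagrantfile (a sketch; the box name is the ubuntu/bionic64 one from earlier):

```ruby
# Vagrantfile (fragment) – forward guest (VM) port 8080 to host (Mac) port 8081
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.network "forwarded_port", guest: 8080, host: 8081
end
```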

Another rant – trying to reprovision the VM with this gave me a continuous loop of:

I couldn’t be bothered to debug this so just did vagrant destroy vm1 and started again.

Then some more Waiting. e.g.

==> vm1: Waiting for machine to boot. This may take a few minutes…

Given how fast computers are it seems crazy how much Waiting we have to do for them. E.g. web browsers, phones, etc.

End of that rant.


So, testing from my Mac:


did not work.

I tried


which did work. Wtf?

I gave up here. Kind of felt that figuring out the problems here was a rabbit hole too far.


Example 2

You’ve got a million more important things to do but you suddenly find in your AWS console that:

Amazon EC2 Instance scheduled for retirement

Groan. This is part of an Elastic cluster.

So, should be a pretty standard process.

  • disable shard allocation
  • stop elasticsearch on that node
  • terminate the instance
  • bring up another instance using terraform
  • reenable shard allocation

but you find unassigned_shards is stuck at x.
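For reference, the disable / reenable steps above go through the cluster settings API; the request bodies are sketched below (setting name per the standard Elasticsearch docs – check your version):

```shell
# Config fragments: cluster settings bodies for toggling shard allocation
# around node maintenance (sent via curl -XPUT .../_cluster/settings).
disable='{"transient":{"cluster.routing.allocation.enable":"none"}}'
enable='{"transient":{"cluster.routing.allocation.enable":"all"}}'
echo "$disable"
```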

So, now you’ve got to become an elasticsearch expert.

E.g. do a

curl -XGET localhost:9200/_cluster/allocation/explain?pretty

and work out why these shards aren’t being assigned.

There goes the rest of my day wading through reams of debugging output and documentation.


Example 3

Finding information is so slow.

E.g. you want to know why Elasticsearch skipped from versions 2.x to versions 5.x.

And whether it’s important.

So you Google. Eventually, hiding amongst the Release Notes, is a StackOverflow page which says go look at a 1 hour 52 minute keynote for the answer.

Unless you’re an elasticsearch specialist, no-one wants to spend this time finding out that info (the answer, btw, is in the post Elasticsearch: why the jump from 2.x to 5.x).


Example 4

After you’ve spent days of time finding a solution, the answer is complex.

E.g. let say you have to do a Production restore of Elasticsearch.

Can you imagine the dismay you’d feel when you have to face the complex snake’s nest that is the Elasticsearch restore documentation?

The preconditions start with:

The restore operation can be performed on a functioning cluster. However, an existing index can be only restored if it’s closed and has the same number of shards as the index in the snapshot.

and continue for page after page.

There is no simple command like: elasticsearch restore data from backup A


Instead you have to restore an index from a snapshot. How do you work out whether a snapshot contains an index?

Easy! Just search dozens of Google results, wade through several hundred pages of Elasticsearch documentation and Stackoverflow questions for different versions of Elasticsearch and Curator. E.g.

Query Google for:

restore index from elasticsearch snapshot

and you don’t get much that’s useful.
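(For the record, the snapshot listing API does eventually answer this; a sketch against a made-up, cut-down version of the response shape:)

```shell
# On a live cluster you'd run: curl -XGET localhost:9200/_snapshot/<repo>/_all
# The sample below is an invented stand-in for that response.
sample='{"snapshots":[{"snapshot":"snap-1","indices":["logs-2018","users"]}]}'
# crude check for whether an index name appears in the listing
if printf '%s' "$sample" | grep -q '"users"'; then found=yes; else found=no; fi
echo "$found"
```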


Funny – in the Stackoverflow answer, the answerer had the temerity to say:

I’m confused by your question as you have posted snapshot json which tells you exactly which indices are backed up in each snapshot.

I guess that exactly reflects the lack of understanding / empathy in the tech industry – the answerer couldn’t even make the leap to see how the question generalises to different indices and snapshots.

Example 5

Software that’s ridiculously complex to use.

E.g. take ElasticHQ – simple web GUI.

But how do you list snapshots?

Perhaps their documentation says something.


How about a search?

That returns 2 results: one about an API and another where the word appears in some code.

For anyone searching, it’s under Indices > Snapshots.

But if you’ve clicked on Indices, beware ‘cos you’re now in an infinite loop waiting for Indices to return.

Example 6: the acres of gibberish problem

Let’s say you’re learning a new technology.

The first thing you usually do is some Hello World system.

So, you try this with a Kubernetes pod. E.g.

and do
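(The elided example was presumably along these lines – a sketch, where the pod name and image are my assumptions:)

```shell
# Minimal hello-world pod manifest; apply and inspect with kubectl.
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: nginx
EOF
# kubectl apply -f pod.yaml
# kubectl describe pod hello    # where the gibberish shows up
```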

All seems pretty rosy? No!

Now you’re in acres of gibberish territory.

You’ve gone from trying to do a simple Hello World app to having to figure stuff out like:

  • NodeHasSufficientDisk
  • NodeHasNoDiskPressure
  • rpc error: code = Unknown desc = Error response from daemon

It’s the equivalent of learning to drive but, on trying to signal left, finding you have to replace the electrical circuitry of the car to get it working correctly.

This seemed to fix the problem:


Example 7: Complexity


Say I want to find out a version of a piece of software. This should be something even a beginner can do. E.g. it usually goes something like this:

app-name version

Now, however, it’s vastly more complex.

Let’s try helm version.

This is all real output I got.

Day 1:

Fortunately I know this is ‘cos I don’t have Kubernetes running on my local system, which I can fix with:

minikube start

Now, however,

So, now, when you’re not even at the level of running a hello world app you’re having to debug why you can’t even find out the version of your application!

I could go on with this. E.g. see the post Kubernetes: helm

Kind of frustrating when, at the most basic level, you reach a completely impassable roadblock. E.g.

And then you get:

for which you have to spend a day Google’ing a solution.

E.g. on Stackoverflow you get obscure stuff like:

with no explanations why you need it or what it does and that doesn’t work anyway.
Day 3

This was a few days later, when I thought I’d sorted something simple like outputting the version.

helm version

Fortunately, I figured this out in a few minutes. But with less Kubernetes knowledge it could take you several days of effort – it shouldn’t take that long to work out what version of software you’re using.

By the way, this last one was because my default cluster turned out to be the one in EKS.

Need another example?

Ha ha. I could go on for days.

Let’s say you’ve committed some files using git. Now you realise you committed them on the wrong branch.

OK, if you were using Word or Google Docs, it would be pretty simple.

Select your changes, copy, Cmd-Z until you get back to the original state, paste.

With git it’s fucking complex.

I dare anyone with less than 3 years experience of git to do something this simple in under an hour.
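For the record, one common recipe (not the only one, and it assumes you haven’t pushed): branch off where you are so the commits are kept, then reset the wrong branch back. A self-contained sketch in a throwaway repo:

```shell
set -e
# Throwaway repo: one good commit on the original branch, then two commits
# made there "by mistake".
tmp=$(mktemp -d); cd "$tmp"; git init -q
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m base
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m oops1
git -c user.name=t -c user.email=t@t commit -q --allow-empty -m oops2

git branch feature          # 1. new branch pointing at the mistaken commits
git reset -q --hard HEAD~2  # 2. rewind the current (wrong) branch behind them
```

After this, `feature` holds all three commits and the original branch is back at `base`; you’d then `git checkout feature` and carry on.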


More examples

Kubernetes offers a dashboard. Great!

Seems to be a single line install. But just before the single line install you read:

IMPORTANT: Read the Access Control guide before performing any further steps. The default Dashboard deployment contains a minimal set of RBAC privileges needed to run.

That sounds ominous. Whilst I assume I don’t need to worry about that as I’m just testing on a local minikube I should read on just in case.

I start scanning through

I get to:

Kubernetes supports few ways of authenticating and authorizing users. You can read about them here and here.

and I’m starting to worry about going down a rabbit hole here. Will I ever get this dashboard installed? Or will I just have to keep reading more and more documentation?

I decide to brave it and go with the single line install:

kubectl apply -f

and get:

The Deployment "kubernetes-dashboard" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"k8s-app":"kubernetes-dashboard"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

which seems pretty obvious. Clearly the Kubernetes authors have spent a lot of time making sure their output error messages are nice, clear and to the point. I’m not sure why they couldn’t have included a few more paragraphs of gibberish just to make things more obscure.

I try the suggested URL anyway


and get:

Again, clearly plenty of time spent creating nice, clear debug output.


What’s the solution?

Ignore all of the detailed advice on the Kubernetes Dashboard github page – i.e.

and, kudos to the Kubernetes documentation team (they outdid themselves here), buried nicely in issue 2898 is the answer:

minikube dashboard

So, translating, what the Kubernetes team mean by:

The Deployment "kubernetes-dashboard" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"k8s-app":"kubernetes-dashboard"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable


is:

The dashboard is already running. Access it using <put URL in here>.

Someone needs to work on a Google Translate engine that maps Kubernetes to English.


More examples

I want to run go on the Mac.

Example 8: Latency

It shouldn’t take 10 seconds to load up a web page.

Every time you click on a link it’s another 10 seconds. Get to the wrong web page? Click back and go to the next. That’s half a minute.


Or printers.

E.g. I printed out a single page of A4. It took around 2 minutes. So, you end up context switching between what you’re doing and checking the printer. And you’re expecting it to fail.

Example 9: Stuff that doesn’t work

I’ve got 2 digital thermometers. Side by side.

One reads 18.4C, the other 17.5.

They’re probably both wrong. So the temperature could be 16C or 19C. Who knows.


Another example, I’ve got an HP printer. HP defined printers.

Yet, on my Mac, in order to print two documents (each with 2 pages), I’ve had to:

  • restart the printer
  • clear a paper jam
  • struggle to refit the USB cable (there’s only one way to fit it and with it at the back of the printer and the printer in an awkward position it’s tricky to see)
  • struggle to find the printer power button (see above)
  • struggle with the paper tray that wouldn’t fit back quite right
  • clear a second paper jam
  • wait for several minutes for the printer to power back on and re-initialize

Another time I kept getting the Printer Offline message so I tried:

  • restarting the printer
  • restarting the printer again
  • resetting the printer system (which wipes out all of your existing printers without any warning)
  • Then Add a Printer doesn’t show the Printer anywhere
  • Restart my Mac (something I really don’t want to do when I’ve got dozens of windows open with current state) – FWIW, this eventually solved the problem. Hopefully, I can just do this in future instead of spending half an hour reinstalling printers every time I want to print a page of A4
    • Btw, reinstalling the printer then failed saying Can't install the software because it is not currently available from the Software Update server. Solution was to try again a couple of times.
  • then the page printed out incorrectly (which turned out to be a problem with Pages. Printing it out from Preview worked)
  • except the colours were all wrong – the beiges I could see on screen came out as blues


To print out 1 page should not take an hour of fiddling about. Am I the only person in the world that thinks so???

I’ve got another HP printer, an HP DeskJet 2630. I don’t even use this any more ‘cos all it will print out are notices saying it can’t connect to its mothership to tell it the status of its ink cartridges. I find it ironic that that’s all it can print out.


  • uninstall the printer
  • install the printer
  • disconnect USB
  • power off the printer
  • power on the printer
  • connect the USB
  • send job to printer
  • wait
  • and wait
  • and wait
  • hear printer whirring
  • wait
  • check printer status menu
  • check printer dialog on mac
  • wait
  • wait
  • check printer dialog on mac
  • check printer status menu
  • check printer dialog on mac
  • wait
  • wait
  • go have a glass of wine
  • come back in 15 minutes
  • your print job hasn’t printed
  • repeat all the above steps several more times
  • get a “49 Error – turn off then turn on”
  • turn everything off then on
  • install firmware update
  • repeat all the above


More stuff:

  • my mouse, which some idiot has designed so that it power-saves every x minutes. So I go to use it and it won’t move. I’m constantly having to remember to wake it. Another layer of complexity that makes things difficult to use

Example 10: Web page insertions

You go to click on a Google result. And suddenly Google (and they’re not the only culprits) inserts something else into the page and you click on something else.

Example 11: Cookies

Accept this

Accept this

Accept this

Accept this

Accept this

Accept this

Accept this


Every single website you go to. Over and over and over again. Every day. Of Every month. Of Every Year.

Can’t there just be a one-time option on some website somewhere for me to donate $10 to a monkey to keep clicking “Accept this” on my behalf?


Example 12: Inconsistencies


Why can’t all command line tools use the same flags?

Some use double-dash, some don’t.

They can’t even output versions in the same way.


Take Elasticsearch. Aside from the fact they jumped from 2.x to 5.x (see Elasticsearch: why the jump from 2.x to 5.x), basic stuff like the reported version is wrong.

i.e. curl -XGET 'localhost:9200'


"lucene_version" : "4.6"

Wtf – 4.6 doesn’t even exist!

Or more elasticsearch oddities like:

/usr/bin/curl -s -XGET
No handler found for uri

Example 13: Crap documentation

Over 90% of respondents to a recent GitHub survey said that one of the top problems with open source projects is incomplete or confusing documentation.

Kubernetes up & Running

E.g. page 16 (i.e. the very start – after the Introductory chapter) of Kubernetes up & Running suggests running:

docker build -t kuard-amd64:1 .

which doesn’t work.

Not only doesn’t it work but it’s incorrect (MAINTAINER is deprecated).

On the most basic example in the book, users have to debug why their 4 line piece of code isn’t working. And 2 out of the 4 lines are incorrect.


E.g. let’s say you want to just get a list of AWS Config rules. We have:

aws configservice describe-config-rules

which outputs

Starting off with the jq tutorial, there’s no JSON in the page until we get towards the bottom. And that doesn’t resemble our extract – e.g. ours has:

and theirs has:

So, let’s give them the benefit of the doubt and try the Manual instead.

Note: if you’re not clear on the exact format of JSON then you might go to a JSON reference site hoping to get a simple explanation. I think my favourite section for beginners (right at the top of the page on the right) would be the sidebar which starts off with:

and has another 2 pages of gibberish. A perfect example of technical documentation – guaranteed to confuse beginner, intermediate and advanced.



Section 1: jq Manual (development version)

Hmm – not much of use there.

Section 2: Invoking jq


Section 3: Basic filters

This starts looking more useful.

Identity Operator

Not clear why you’d want to use the Identity operator but at least it works. E.g.

aws configservice describe-config-rules | jq '.'

Object Identifier-Index

The simplest useful filter is .foo. When given a JSON object (aka dictionary or hash) as input, it produces the value at the key “foo”, or null if there’s none present.

so, given a JSON hash / dictionary looks like this:

{ "key1": "value1", "key2": "value2" }

then we should be able to pull out a key / value pair. EXCEPT we also have these [] items in our JSON. These are JSON arrays / list items.

Looking through the docs we can see:

Array/Object Value Iterator: .[]
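Putting the Object Identifier-Index and the iterator together on the shape describe-config-rules returns (the rule names below are made up; requires jq):

```shell
# A cut-down, invented stand-in for the describe-config-rules output:
json='{"ConfigRules":[{"ConfigRuleName":"rule-a"},{"ConfigRuleName":"rule-b"}]}'
# .ConfigRules      -> the array under the key
# []                -> iterate its elements
# .ConfigRuleName   -> pull one field from each
printf '%s' "$json" | jq -r '.ConfigRules[].ConfigRuleName'
```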


To see me eventually get to the bottom of this see:

Using jq




Install Docker via a Vagrantfile

Tons of ways of doing this on the internet – many that don’t work for various reasons – e.g. older versions of Ubuntu, etc… – and ranging from complex to very complex.

Here’s two simple ways:

  1. Most simple

Paste into a Vagrantfile:
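The snippet itself didn’t survive the paste into WordPress; a minimal sketch using Vagrant’s built-in docker provisioner (the box name is an assumption):

```ruby
# Vagrantfile – the docker provisioner installs Docker for you
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.provision "docker"
end
```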

2. A little more manual using Docker’s convenience script:

Paste into a Vagrantfile:
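Again the snippet is missing; a sketch using a shell provisioner with Docker’s get.docker.com convenience script (the box name is an assumption):

```ruby
# Vagrantfile – install Docker via Docker's convenience script
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.provision "shell", inline: <<-SHELL
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh
    usermod -aG docker vagrant
  SHELL
end
```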


Here’s what didn’t work for me:

I guess it doesn’t help that precise64 is quite an old version of Ubuntu but it’s what was given on the Vagrant website for provisioning Docker.


The official Vagrant docs page I basically ignored ‘cos it didn’t have a correct concrete example as of the time of writing this blog post (even though it’s Hashicorp’s own website!).

Docker Provisioner vs Docker Provider

The Vagrant Docker provisioner can automatically install Docker, pull Docker containers and configure containers.

i.e. it helps prepare the environment (i.e. automatic installations).

The Provider is a virtualization solution – e.g. Virtualbox, VMWare.

So a Docker Provider would use Docker as the virtualization solution.





Docker Volumes vs Bind Mounts

Bind mount

File or directory on host mounted into container. You refer to this file/directory using the full file path used on the host.

The problem with a bind mount is that you have to use the full host file path, which may be different on different hosts. E.g. if you use /Users/dave it’s going to break if someone else doesn’t have a /Users/dave directory.


Volume

While bind mounts are dependent on the directory structure of the host machine, volumes are completely managed by Docker. They’re created in /var/lib/docker/volumes and you refer to them by name. E.g.

-v mysql_data:/containerdir

Here’s what this means:

  • the first field is the name of the volume. It’s unique on a given host machine
  • the second field is the path where the file or directory is mounted in the container
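Docker decides bind mount vs volume from that first field: a path is a bind mount, a bare name is a volume. A quick shell sketch of that heuristic (simplified – the real rule keys off leading / or .):

```shell
# Classify the first field of a -v argument.
kind_of() {
  case "${1%%:*}" in
    */*) echo "bind mount" ;;   # looks like a path
    *)   echo "volume" ;;       # bare name -> named volume
  esac
}
kind_of "mysql_data:/containerdir"   # volume
kind_of "/Users/dave:/containerdir"  # bind mount
```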

docker container run

docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]



-d => detached

--rm => automatically remove the container when it exits



docker container run -d alpine sleep 1d

docker container run --rm -d --network london alpine sleep 1d


Potential problems:

docker: Error response from daemon: Conflict. The container name "<name>" is already in use by container "<container id>". You have to remove (or rename) that container to be able to reuse that name.