Dockerfile: WordPress

Let’s take a look at Docker’izing WordPress.

 

The Docker pull command is:

docker pull wordpress

pull: pulls an image or a repository from a registry. It doesn’t run it. It just means you have the image locally.

https://docs.docker.com/engine/reference/commandline/pull/

 

You can actually run the image using:

docker run --name some-wordpress --link some-mysql:mysql -d wordpress

some-wordpress is going to be the name of the container.

--link is a bit old school. It connects one container to another. i.e. MySQL to WordPress. Nowadays we use user-defined networks – e.g. overlays. https://docs.docker.com/network/links/
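Nowadays that looks something like this – a sketch (the network name is made up; the WORDPRESS_DB_* variables are documented on the wordpress image page):

docker network create wp-net
docker run --name test-mysql --network wp-net -e MYSQL_ROOT_PASSWORD=test -d mysql:5.7.24
docker run --name test-wordpress --network wp-net \
  -e WORDPRESS_DB_HOST=test-mysql -e WORDPRESS_DB_PASSWORD=test \
  -p 8080:80 -d wordpress
# containers on the same user-defined network can reach each other by container name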

WordPress Docker Repo: https://hub.docker.com/_/wordpress/

 

However, before you can run it you’ll need MySQL. So:

docker pull mysql:5.7.24

(Aside: why not use mysql or mysql:latest? ‘cos MySQL 8 changed the password authentication method. See below.)

and then run it with:

docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=test -d mysql:5.7.24

--name is the name you’re giving to the container,

MYSQL_ROOT_PASSWORD is an environment variable that you set and that is read inside the container. Note: with MySQL this is handled programmatically via docker-entrypoint.sh – https://github.com/docker-library/mysql/blob/696fc899126ae00771b5d87bdadae836e704ae7d/8.0/docker-entrypoint.sh . For more on ARG and environment variables: https://stackoverflow.com/questions/53592879/dockerfile-and-environment-variable/53593826#53593826

and -d means detach – run the container in the background.

To specify a tagged version just add it after the image using a colon. E.g. mysql:5.7.24.

 

Once you’ve run it you can exec in with

docker exec -it test-mysql bash

or check logs with:

docker logs test-mysql

E.g.

2018-12-03T14:24:57.091306Z 0 [Note] mysqld: ready for connections.
Version: '5.7.24' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)

 

Or even mysql in via another MySQL container using:

docker run -it --link test-mysql:mysql --rm mysql sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'

https://hub.docker.com/_/mysql/

 

So, getting back to WordPress let’s run it now with:

docker run --name test-wordpress --link test-mysql:mysql -p 8080:80 -d wordpress

and check the logs with:

docker logs test-wordpress

You should then be able to access your WordPress site on http://localhost:8080

 

Errors

Conflict. The container name “/test-wordpress” is already in use by container

You run

docker run --name test-wordpress --link test-mysql:mysql -p 8080:80 -d wordpress

and get:

docker: Error response from daemon: Conflict. The container name "/test-wordpress" is already in use by container "0ea70abdaf306d896eb71f3ab585961359f27af23a243b81370bf407d3dd846d". You have to remove (or rename) that container to be able to reuse that name.

You’ve already got a container with that name.

Remove it with: docker rm test-wordpress

https://stackoverflow.com/questions/31676155/docker-error-response-from-daemon-conflict-already-in-use-by-container

This can happen if the container exited and you try to relaunch it.

 

Site can’t be reached

You plug http://localhost:8080 into the web browser but the site can’t be reached. Checking the WordPress logs with docker logs test-wordpress I can see an authentication error along the lines of:

MySQL Connection Error: (HY000/2054): The server requested authentication method unknown to the client

however this is a secondary problem. Why are we getting this?

‘cos MySQL 8 introduced a different type of authentication – https://github.com/docker-library/wordpress/issues/313

If you are getting this then you need to use docker pull mysql:5.7.24 (and run that tag), or use a different auth method.
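For the latter, the workaround documented on the MySQL Docker Hub page is to start MySQL 8 with the pre-8 auth plugin – a sketch:

docker run --name test-mysql -e MYSQL_ROOT_PASSWORD=test -d mysql:8.0 --default-authentication-plugin=mysql_native_password
# anything after the image name is passed as arguments to mysqld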

 

Back to the problem at hand – we should still be able to see a (non-functioning) WordPress site on that port. i.e. Apache should be running.

Let’s just do a sanity check with docker ps. Under PORTS it shows 0.0.0.0:8080->80/tcp, which means the docker host port 8080 is mapped to the container port 80.

https://stackoverflow.com/questions/41798284/understanding-docker-port-mappings

so http://localhost:8080 is correct.

 

It seems the problem really was the lack of a working MySQL. Using 5.7.24, the WordPress logs showed a successful installation.

 

Installing MySQL and WordPress in under a minute:

https://asciinema.org/a/nW3l0A1hi6wzMde4RUQ1QVPsc?speed=3


Monitoring containers with Prometheus and Grafana

Architecting Monitoring for Containerized Applications

Why not use Nagios?

You can’t use the same methods as with traditional servers – e.g. putting a monitoring agent into each container doesn’t really work.

A /metrics endpoint is exposed by the container runtime. Docker uses the Prometheus format (i.e. simple text in key/value form).
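To get the Docker engine itself to expose that endpoint, you enable the (experimental, at the time of writing) metrics address in /etc/docker/daemon.json and restart the daemon – a sketch:

{
  "metrics-addr": "127.0.0.1:9323",
  "experimental": true
}

Then curl 127.0.0.1:9323/metrics shows the raw Prometheus-format metrics.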

Prometheus stores the data in a time-series database.

Prometheus configuration

Is in YAML. E.g.

/etc/prometheus/prometheus.yml

Sections:

scrape_configs – the list of scrape jobs, each starting - job_name: <name here>

and

scrape_interval – how often to scrape, e.g. scrape_interval: 60s
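Putting those together, a minimal sketch of a prometheus.yml that scrapes the Docker daemon metrics endpoint above (the target address is an assumption – use wherever your daemon exposes /metrics):

global:
  scrape_interval: 60s

scrape_configs:
  - job_name: 'docker'
    static_configs:
      - targets: ['127.0.0.1:9323']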

Prometheus Dashboard

Status > Targets: lists all monitored targets

Graph > Graph > select a metric from the “insert metric at cursor” dropdown

 

  • Collecting Metrics with Prometheus
  • Exposing Runtime Metrics with Prometheus
  • Exposing Application Metrics to Prometheus
  • Exposing Docker Metrics to Prometheus
  • Building Dashboards with Grafana


Docker Networking: 3 major areas – CNM, Libnetwork, Drivers

CNM

aka Container Network Model. This is the Docker Networking Model.

Note: there is an alternative – CNI (aka Container Network Interface) from CoreOS which is more suited to Kubernetes. More here: https://kubernetes.io/blog/2016/01/why-kubernetes-doesnt-use-libnetwork/

The CNM has 3 main components:

  • Sandbox: contains configuration of container’s network stack (aka namespace in Linux)
  • Endpoint: joins Sandbox to a Network (aka network interface. e.g. eth0)
  • Network: group of Endpoints that can communicate directly

See also: https://github.com/docker/libnetwork/blob/master/docs/design.md


Libnetwork

aka Control & Management plane

https://github.com/docker/libnetwork

Cross platform and pluggable.

Real-world implementation of CNM by Docker.

Drivers

The data plane – where the network-specific detail lives. Drivers include:

  • Overlay
  • MACVLAN
  • IPVLAN
  • Bridge
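The driver is selected with -d when you create a network – a quick sketch (overlay needs swarm mode, and MACVLAN needs a parent interface, so bridge is the easiest to try locally):

docker network create -d bridge my-bridge     # single-host network
docker network create -d overlay my-overlay   # multi-host; requires swarm mode
docker network inspect my-bridge              # shows the driver, subnet and connected endpoints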


docker: Error response from daemon: driver failed programming external connectivity on endpoint …: Bind for 0.0.0.0:8080 failed: port is already allocated.

So, I’m trying to run Jenkins via Docker with:

docker run -d -v jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

but getting the “Bind for 0.0.0.0:8080 failed: port is already allocated” error above.

Clearly, the previous time I ran Jenkins the container hadn’t been cleanly stopped – or there was another container using that port (it turned out to be the latter).

Here’s how to fix it.

  1. Check the port: netstat -nl -p tcp | grep 8080 (interestingly, this didn’t show anything, even though…)
  2. docker ps (…did show a container using this port)

Then:

docker stop <container name>

to solve the problem.

docker port

Golden rule:


port1:port2 means you’re mapping port1 on the host to port2 on the container.

i.e. host:container


Say you run:

docker container run --rm -d --name web -p 8080:80 nginx

you’re mapping port 80 in the container to port 8080 on the host.

-p => publish a container’s port to the host

docker port web

gives
80/tcp -> 0.0.0.0:8080

which means:

80 on the container maps to 8080 on the host

See also Tech Rant

https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose

https://docs.docker.com/engine/reference/commandline/port/

Tech Rant

Part of the problem with Tech is:

  • keeping so many things going in your brain at once
  • having to be an expert at so many things
  • the brain-crushing complexity

Example 1

I’m trying to figure out how docker port works. i.e. with this:

docker container run --rm -d --name web -p 8080:80 nginx

is 8080 on the host or the container?

E.g. I can run this: docker port web
80/tcp -> 0.0.0.0:8080

but I’m not clear on the mapping so I check the docs:

https://docs.docker.com/engine/reference/commandline/port/#description

which shows you an example of the output but does not explain what the line actually means.

Is it container to host or host to container?

Next doc is https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose

This explains that 80:8080 is host -> container. Which would mean that the initial mapping I used for nginx above is mapping 80 in the container to 8080 on the host – i.e. the other way around.

Let’s test.

1. from the VM

Assuming nginx is listening on 80 (seems reasonable!), I’d then get something back from 8080 on the host (i.e. in the VM – we haven’t even started with what’s happening on the actual host, i.e. my Mac!). So we should get some output on 8080 from the VM:

curl localhost:8080

(What’s the format for specifying a port with curl – is it curl localhost:8080 or curl localhost 8080? Check some more docs: https://www.unix.com/shell-programming-and-scripting/241172-how-specify-port-curl.html – not an unreasonable question, given that telnet doesn’t use a colon – i.e. you’d do telnet localhost 8080 – https://www.acronis.com/en-us/articles/telnet/)

which thankfully gives us some nginx output.

So, going back to:

docker port web
80/tcp -> 0.0.0.0:8080

This is saying:

80 on the container maps to 8080 on the host

Annoyingly, the other way round to the format used earlier (i.e. of host to container).

If I do docker ps, under PORTS I get:

0.0.0.0:8080->80/tcp

which even more annoyingly is the other way around! i.e. host -> container. I guess the way to remember it is that it’s host -> container unless you examine the container itself – e.g. using docker port web.

 

Some gotchas here:

  • curl localhost 8080 would give connection refused, ‘cos curl by default will test on port 80 – given that we’ve got the command wrong, it’s testing 80
  • if we’d tested using the container IP address instead, e.g.

docker container inspect web

gives "IPAddress": "172.17.0.3", then

curl 172.17.0.3:80

gives us nginx output, ‘cos we’re using the container IP address and the container port, and

curl 172.17.0.3:8080

would give:

curl: (7) Failed to connect to 172.17.0.3 port 8080: Connection refused
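A handy one-liner for grabbing that IP, assuming the container is on the default bridge network:

docker container inspect -f '{{ .NetworkSettings.IPAddress }}' web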

 

2. from the container

we need to exec from the VM into the container. Another doc page: https://docs.docker.com/engine/reference/commandline/exec/

docker exec -it web /bin/bash

and

curl localhost:80
bash: curl: command not found

So we need to install curl. More docs, ‘cos installing it is different on the Mac (I use brew), on Debian (apt) and on CentOS (yum). First, find out which OS the container is running:

cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"

so we’re using Debian.

It should be apt-get install curl, but I get something like:

E: Unable to locate package curl

More docs on how to install on Debian:

https://www.cyberciti.biz/faq/howto-install-curl-command-on-debian-linux-using-apt-get/

says apt install curl, which gives me the same problem.

More docs – seems like you have to run apt-get update first.

https://stackoverflow.com/questions/27273412/cannot-install-packages-inside-docker-ubuntu-image
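i.e., inside the container:

apt-get update           # refresh the (empty-by-default) package lists first
apt-get install -y curl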

And finally I can verify that, in the container,

curl localhost:80

outputs nginx content.

 

Note also: I’ve got Ubuntu in the VM and Debian in the container.

VM: my Vagrantfile uses  config.vm.box = "ubuntu/bionic64"

Container: docker container run --rm -d --name web -p 8080:80 nginx uses a Debian based container.

 


Finally, I write a blog post so I can remember in future how it all works without having to spend an entire morning figuring it out. I open up WordPress and it’s using Gutenberg, which I’ve been struggling with. Trying to disable it is a pain. This doesn’t work:

How to Disable Gutenberg & Return to the Classic WordPress Editor

Groan. I just pasted a link and I don’t want the Auto Insert content feature – however, I can’t even be bothered to try to figure out how to disable the Auto Insert behaviour.

In the end, I posted a test post and went to All Posts and clicked Classic Editor under the post.

Another rant: WordPress’ backtick -> formatted code only occasionally works – very frustrating.

 

3. To close the loop let’s test from my Mac

As we’re using 8080 on the host, let’s forward to 8081 on the Mac. Add this to the Vagrantfile:

config.vm.network "forwarded_port", guest: 8080, host: 8081

https://www.vagrantup.com/docs/networking/forwarded_ports.html

Another rant – trying to reprovision the VM with this gave me a continuous loop of errors.

I couldn’t be bothered to debug this so just did vagrant destroy vm1 and started again.

https://www.vagrantup.com/intro/getting-started/teardown.html

Then some more Waiting. e.g.

==> vm1: Waiting for machine to boot. This may take a few minutes…

Given how fast computers are it seems crazy how much Waiting we have to do for them. E.g. web browsers, phones, etc.

End of that rant.

 

So, testing from my Mac:

http://localhost:8081/

did not work.

I tried

http://localhost:8080/

which did work. Wtf?

I gave up here. Kind of felt that figuring out the problems here was a rabbit hole too far.

 

Example 2

You’ve got a million more important things to do but you suddenly find in your AWS console that:

Amazon EC2 Instance scheduled for retirement

Groan. This is part of an Elastic cluster.

So, should be a pretty standard process.

  • disable shard allocation (see the sketch after this list)
  • stop elasticsearch on that node
  • terminate the instance
  • bring up another instance using terraform
  • re-enable shard allocation
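For reference, the disable/re-enable steps are a single cluster-settings call – a sketch, run against any node in the cluster:

curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{
  "transient": { "cluster.routing.allocation.enable": "none" }
}'
# "none" before taking the node down; set it back to "all" to re-enable afterwards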

After all that, though, you find unassigned_shards is stuck at x.

So, now you’ve got to become an elasticsearch expert.

E.g. do a

curl -XGET localhost:9200/_cluster/allocation/explain?pretty

and work out why these shards aren’t being assigned.

There goes the rest of my day wading through reams of debugging output and documentation.

https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/

 

Example 3

Finding information is so slow.

E.g. you want to know why Elasticsearch skipped from versions 2.x to versions 5.x.

And whether it’s important.

So you Google. Eventually, hiding amongst the Release Notes is a StackOverflow page (https://stackoverflow.com/questions/38404144/why-did-elasticsearch-skip-from-version-2-4-to-version-5-0 ) which says go look at this 1 hour 52 minute keynote for the answer.

Unless you’re an elasticsearch specialist, no-one wants to spend this much time finding out that info (the answer, btw, is in the post Elasticsearch: why the jump from 2.x to 5.x – in short, the version numbers were unified across the Elastic stack).

 

Example 4

After you’ve spent days of time finding a solution, the answer is complex.

E.g. let’s say you have to do a Production restore of Elasticsearch.

Can you imagine the dismay when you have to face the complex snake’s nest contained here:

https://www.elastic.co/guide/en/elasticsearch/reference/1.7/modules-snapshots.html#_restore

The preconditions start with:

The restore operation can be performed on a functioning cluster. However, an existing index can be only restored if it’s closed and has the same number of shards as the index in the snapshot.

and continue for page after page.

There is no simple command like: elasticsearch restore data from backup A

 

Instead you have to restore an index from a snapshot. How do you work out whether a snapshot contains an index?

Easy! Just search dozens of Google results, wade through several hundred pages of Elasticsearch documentation and Stackoverflow questions for different versions of Elasticsearch and Curator. E.g.

Query Google for:

restore index from elasticsearch snapshot – https://www.google.com/search?q=restore+index+from+elasticsearch+snapshot

and you get not much that’s useful.

 

https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html

and

https://stackoverflow.com/questions/39968861/how-to-find-elasticsearch-index-in-snapshot
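For the record, the generalised answer looks something like this (the repo name is whatever yours is called):

curl -XGET 'localhost:9200/_snapshot/_all?pretty'                 # list the snapshot repositories
curl -XGET 'localhost:9200/_snapshot/my_backup_repo/_all?pretty'  # each snapshot lists its "indices"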

Funny – in the Stackoverflow the Answerer had the temerity to say:

I’m confused by your question as you have posted snapshot json which tells you exactly which indices are backed up in each snapshot.

I guess that exactly reflects the lack of understanding/empathy in the tech industry – that the answerer couldn’t even make the leap to see how to generalise the question to different indices and snapshots.

Example 5

Software that’s ridiculously complex to use.

E.g. take ElasticHQ – a simple web GUI.

But how do you list snapshots?

Perhaps their documentation says something.

http://docs.elastichq.org/

No.

How about a search?

That returns 2 results. One about an API and another containing the word in code.

http://docs.elastichq.org/search.html?q=snapshot&check_keywords=yes&area=default

For anyone searching, it’s under Indices > Snapshots.

But if you’ve clicked on Indices, beware ‘cos you’re now in an infinite loop waiting for Indices to return.

Example 6: the acres of gibberish problem

Let’s say you’re learning a new technology.

The first thing you usually do is some Hello World system.

So, you try this with a Kubernetes pod.
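E.g. a minimal manifest like this (a sketch – the name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:
  containers:
  - name: hello
    image: nginx

and do:

kubectl apply -f hello-pod.yaml
kubectl get pods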

All seems pretty rosy? No!

Now you’re in acres of gibberish territory.

You’ve gone from trying to do a simple Hello World app to having to figure stuff out like:

  • NodeHasSufficientDisk
  • NodeHasNoDiskPressure
  • rpc error: code = Unknown desc = Error response from daemon

It’s the equivalent of learning to drive but, on trying to signal left, finding you have to replace the electrical circuitry of the car to get it working correctly.

Eventually something seemed to fix the problem – don’t ask me what.

Example 7: Complexity

Example

Say I want to find out the version of a piece of software. This should be something even a beginner can do. E.g. it usually goes something like this:

app-name version

Now, however, it’s vastly more complex.

Let’s try helm version.

This is all real output I got.

Day 1: helm version errored, ‘cos it couldn’t reach a cluster.

Fortunately I know this is ‘cos I don’t have Kubernetes running on my local system, which I can fix with:

minikube start

Now, however, helm version fails with a different error. So, now, when you’re not even at the level of running a hello world app, you’re having to debug why you can’t even find out the version of your application!

I could go on with this. E.g. see the post Kubernetes: helm

Kind of frustrating when, at the most basic level, you reach a completely impassable roadblock: you run something basic, get an inscrutable error, and then have to spend a day Google’ing for a solution. E.g. on Stackoverflow you get obscure stuff with no explanation of why you need it or what it does – and it doesn’t work anyway.
Day 3

This was a few days later, when I thought I’d sorted something simple like outputting the version. Yet helm version errored again.

Fortunately, I figured this one out in a few minutes. But with less Kubernetes knowledge it could have taken several days of effort – it shouldn’t take that long to work out what version of software you’re using.

By the way, this last one turned out to be because my default kubectl context turned out to be the one in EKS.
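The check and the fix, for the record (the context name is whatever yours is called):

kubectl config current-context       # which cluster am I actually talking to?
kubectl config use-context minikube  # point kubectl back at the local cluster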

Need another example?

Ha ha. I could go on for days.

Let’s say you’ve committed some files using git. Now you realise you committed them on the wrong branch.

OK, if you were using Word or Google Docs, it would be pretty simple.

Select your changes, copy, Cmd Z until you got to the original state, paste.

With git it’s fucking complex.

I dare anyone with less than 3 years’ experience of git to do something this simple in under an hour.
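For the record, the standard recipe – assuming, say, two commits landed on master that belong on a new branch – is something like:

git branch my-feature      # new branch pointing at the stray commits
git reset --hard HEAD~2    # move master back two commits
git checkout my-feature    # carry on where you left off

Simple enough once you know it; the point is that nothing about it is discoverable.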

 

More examples

https://github.com/kubernetes/dashboard

offers a dashboard. Great!

Seems to be a single line install. But just before the single line install you read:

IMPORTANT: Read the Access Control guide before performing any further steps. The default Dashboard deployment contains a minimal set of RBAC privileges needed to run.

That sounds ominous. Whilst I assume I don’t need to worry about that as I’m just testing on a local minikube I should read on just in case.

I start scanning through https://github.com/kubernetes/dashboard/wiki/Access-control

I get to:

Kubernetes supports few ways of authenticating and authorizing users. You can read about them here and here.

and I’m starting to worry about going down a rabbit hole here. Will I ever get this dashboard installed? Or will I just have to keep reading more and more documentation?

I decide to brave it and go with the single line install:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

and get:

The Deployment "kubernetes-dashboard" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"k8s-app":"kubernetes-dashboard"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

which seems pretty obvious. Clearly the Kubernetes authors have spent a lot of time making sure their output error messages are nice, clear and to the point. I’m not sure why they couldn’t have included a few more paragraphs of gibberish just to make things more obscure.

I try the suggested URL anyway

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

and get another screenful of gibberish. Again, clearly plenty of time spent creating nice, clear debug output.

 

What’s the solution?

Ignore all of the detailed advice on the Kubernetes Dashboard github page – i.e. https://github.com/kubernetes/dashboard

and, kudos to the Kubernetes documentation team (they outdid themselves here), buried nicely in issue 2898 is the answer:

minikube dashboard

So, translating, what the Kubernetes team mean by:

The Deployment "kubernetes-dashboard" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"k8s-app":"kubernetes-dashboard"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

is

The dashboard is already running. Access it using <put URL in here>.

Someone needs to work on a Google Translate engine that maps Kubernetes to English.

https://github.com/kubernetes/dashboard/issues/2898

 

More examples

I want to run Go on the Mac.

Example 8: Latency

It shouldn’t take 10 seconds to load up a web page.

Every time you click on a link it’s another 10 seconds. Get to the wrong web page? Click back and go to the next. That’s half a minute.

Example 9: Stuff that doesn’t work

I’ve got 2 digital thermometers. Side by side.

One reads 18.4C, the other 17.5C.

They’re probably both wrong. So the temperature could be 16C or 19C. Who knows.

Example 10: Web page insertions

You go to click on a Google result. And suddenly Google (and they’re not the only culprits) inserts something else into the page and you click on something else.

Example 11: Cookies

Accept this

Accept this

Accept this

Accept this

Accept this

Accept this

Accept this

 

Every single website you go to. Over and over and over again.

Can’t there just be a one-time option on some website somewhere for me to donate $10 to a monkey to keep clicking “Accept this” on my behalf?

 

Example 12: Inconsistencies

Why can’t all command line tools use the same flags?

Some use double-dash, some don’t.

They can’t even output versions in the same way.

 

Example 13: Crap documentation

E.g. page 16 (i.e. the very start – right after the introductory chapter) of Kubernetes Up & Running suggests running:

docker build -t kuard-amd64:1 .

which doesn’t work.

https://github.com/kubernetes-up-and-running/kuard/issues/7

Not only doesn’t it work but it’s incorrect (MAINTAINER is deprecated).

On the most basic example in the book, users have to debug why their 4 line piece of code isn’t working. And 2 out of the 4 lines are incorrect.


Install Docker via a Vagrantfile

Tons of ways of doing this on the internet – many of which don’t work for various reasons (e.g. they target older versions of Ubuntu) – ranging from complex to very complex.

Here are two simple ways:

  1. Most simple

Paste into a Vagrantfile:
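A minimal sketch, using Vagrant’s built-in Docker provisioner (the box is the ubuntu/bionic64 one used earlier in this post):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  # Vagrant's docker provisioner installs Docker in the guest
  config.vm.provision "docker"
end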

2. A little more manual using Docker’s convenience script:

Paste into a Vagrantfile:
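Again a sketch, this time shelling out to Docker’s get.docker.com convenience script:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"
  config.vm.provision "shell", inline: <<-SHELL
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh
    usermod -aG docker vagrant  # let the vagrant user run docker without sudo
  SHELL
end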

Then bring the box up and check Docker is installed:
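vagrant up
vagrant ssh -c "docker --version"  # should print the installed Docker version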

Here’s what didn’t work for me:

https://stackoverflow.com/questions/52498892/install-docker-via-a-vagrantfile

I guess it doesn’t help that precise64 is quite an old version of Ubuntu, but it’s what was given on the Vagrant website for provisioning Docker.

and

https://www.vagrantup.com/docs/provisioning/docker.html

which I basically ignored ‘cos it didn’t have a correct, concrete example as of the time of writing this blog post (even though it’s Hashicorp’s own website!).