docker: automatically restart a container

Say you need to restart a VM, or restart Docker itself. How do you get your containers to restart automatically?

E.g. you have:

If you then restart Docker (e.g. on the Mac, click the Docker icon in the toolbar and select Docker > Restart) you get:

whilst Docker is restarting and:

after the restart (i.e. no containers).


You can use --restart always

docker container run --restart always -d <image id> sleep 1d

to have the container restart automatically after a Docker restart.




See also


More on Restart Policies

There are 4 restart policies: no, on-failure, unless-stopped, always.

no is the default. i.e. don’t restart if a container stops.

The others are:

  • on-failure – restart only if the container exits with a non-zero exit code (optionally up to a given number of retries)
  • unless-stopped – like always, except a manually stopped container stays stopped
  • always – always restart, including after a restart of the Docker daemon


We’ve seen this before. E.g. let’s say we have a script:

Note: exit 1 indicates an error (exit 0 would indicate success).
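The script itself isn't reproduced above; a minimal stand-in (filename made up) that does a little "work" and then crashes might be:

```shell
# crash.sh – a stand-in for the script: does a little work, then fails
cat > crash.sh <<'EOF'
#!/bin/sh
echo "doing some work..."
sleep 1
exit 1    # non-zero exit: signals an error to Docker
EOF
chmod +x crash.sh
```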

which we use as follows:
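The Dockerfile isn't shown here either; a minimal sketch (filenames assumed) that runs such a script could be:

```dockerfile
FROM alpine
COPY crash.sh /crash.sh
CMD ["/crash.sh"]
```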


We can build and run with:

This will exit.

To restart with always restart policy use:

docker container run --restart always -d testing_restarts

Now, when it crashes, under docker ps you’ll see:



With on-failure we can restart a container if it exits with a non-zero exit code. We can also specify a number of retries, e.g.

--restart on-failure:3

docker container run --restart on-failure:3 -d testing_restarts


  • the container will not restart if you do a docker stop <container id>
  • the container WILL restart if you restart the Docker daemon (and, oddly, so will any containers that had stopped after completing their on-failure retries – although only the first time the daemon is restarted)



unless-stopped behaves the same as always, except a container that has been manually stopped is not restarted.

Note: if you manually stop a container its restart policy is ignored until the Docker daemon restarts.


Ensuring Containers Are Always Running with Docker’s Restart Policy


Live Restore

Lets you keep containers alive when the daemon becomes unavailable.

However, doing this on my installation of Docker gave:

because I was running a swarm service.


I had to restore to Factory Defaults, which meant signing in again.


git: error: Your local changes to the following files would be overwritten by merge

You’ve got some local changes in your git repo. What to do?

1. you want to keep your changes

a. and track them

git add <local-changes>; git commit -m "<your message>"

b. but don’t want to track them

Note: if you’re doing a git pull then:

git update-index --assume-unchanged <file>

will still result in error: Your local changes to the following files would be overwritten by merge
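A more robust option before a git pull (not in the original notes) is git stash: shelve the changes, pull, then reapply them. A self-contained sketch (repo and filenames made up):

```shell
# a throwaway repo with one committed file
mkdir -p repo && git -C repo init -q
echo "v1" > repo/app.conf
git -C repo add app.conf
git -C repo -c user.name=t -c user.email=t@example.com commit -qm "init"

echo "local tweak" >> repo/app.conf                                       # the local change blocking the merge
git -C repo -c user.name=t -c user.email=t@example.com stash push -q      # shelve it; the pull can now proceed
git -C repo -c user.name=t -c user.email=t@example.com stash pop -q      # ...then reapply it afterwards
```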

git: what to do with untracked files

I tried --skip-worktree, which didn’t work, so I just moved my .gitignore file (which was causing the problem) out of the way.


2. you don’t want your changes

git checkout -- <local-changes>
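To illustrate discarding an uncommitted change (a sketch, with made-up names):

```shell
# a throwaway repo with one committed file
mkdir -p demo && git -C demo init -q
echo "original" > demo/file.txt
git -C demo add file.txt
git -C demo -c user.name=t -c user.email=t@example.com commit -qm "init"

echo "unwanted change" >> demo/file.txt   # a local change we don't want
git -C demo checkout -- file.txt          # throw it away; file.txt is back to the committed version
```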

Debugging SSH

Debugging ssh is monotonous shit ‘cos you get reams of messages which don’t tell you why you can’t connect.



Actual error message should be:


Delete the offending key (here on line 293 of ~/.ssh/known_hosts), e.g. with ssh-keygen -R <hostname>.

Permission denied (publickey).

Your keys aren’t on the server. i.e. your Public Key isn’t in the ~/.ssh/authorized_keys file of the user you’re trying to login with.

Use ssh -v to debug. Ignore the 20-odd lines of useless information that get output and focus on:

debug1: Offering public key: RSA SHA256:hash /Users/snowcrash/.ssh/id_rsa
debug1: Authentications that can continue: publickey
debug1: Offering public key: RSA SHA256:hash /Users/snowcrash/.ssh/another_key
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
snowcrash@ Permission denied (publickey).


ECR Console Version 2

ECR (Amazon Elastic Container Registry) now has a dedicated management console.

Simple guide to creating a repo and pushing a docker image to it:

1. Open the ECR console and click Create a repository > Get Started

2. Enter a repository name (usually namespace/repo-name). e.g. snowcrash/wordpress

3. You’ll get a panel showing the URI – e.g.

4. You’ll need to push a docker image to this repo. Assuming you’ve got a docker image you’re happy with locally, get a docker login command by running $(aws ecr get-login --no-include-email --region eu-west-2).

You get this aws ecr get-login command from your ECR console by clicking View push commands.

Note: the --no-include-email is required for more recent versions of docker. E.g. if you get the error message:

If it succeeds, you should get:

5.  tag it with

docker tag <image id> <remote tag>

6. and push with

docker push <remote tag>


Kubernetes Up & Running: Chapter 2

Page 16

1. make sure you’ve cloned the git repo mentioned earlier in the chapter

2. once in the repo, run:

make build

to build the application.

3. create this Dockerfile (not the one mentioned in the book)

MAINTAINER is deprecated. Use a LABEL instead:

However, whilst MAINTAINER takes a single argument, LABEL takes key/value pairs. E.g.

LABEL <key>=<value>
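For example, a sketch of the two forms (the email address here is made up):

```dockerfile
# deprecated:
# MAINTAINER snowcrash <snowcrash@example.com>

# preferred – LABEL takes one or more key=value pairs:
LABEL maintainer="snowcrash@example.com"
LABEL version="1.0" description="kuard build"
```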


And the  COPY path in the book is incorrect.

and run

docker build -t kuard-amd64:1 .

to build the Dockerfile.

Here the -t flag gives us a repo name of kuard-amd64 and a tag of 1.

4. Check the repo using

docker images

Note: a registry is a collection of repositories, and a repository is a collection of images


Page 17

Files removed in subsequent layers are not available but still present in the image.


Image sizes:

docker images

Or a specific one:

docker image ls <repository name>

E.g. alpine is 4.41MB.


Let’s create a 1MB file and add / remove it:

dd if=/dev/zero of=1mb.file bs=1024 count=1024
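To sanity-check the size (1024 blocks × 1024 bytes = 1,048,576 bytes, i.e. 1 MiB):

```shell
dd if=/dev/zero of=1mb.file bs=1024 count=1024 2>/dev/null
wc -c < 1mb.file    # prints 1048576
```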

then copy it in:

Now building it (and creating a repo called alpine_1mb) we can see the image has increased in size by a bit over 1MB (probably due to the overhead of an additional layer).

However, if we now remove this file in a subsequent Dockerfile – e.g. with something like:

the image is still the same size.

The solution is to ensure you use an rm in the same RUN command as you create/use your big file:
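For example (a sketch, assuming the alpine base used above):

```dockerfile
FROM alpine
# create and delete the big file within a single RUN, i.e. a single layer,
# so it is never baked into the image
RUN dd if=/dev/zero of=/1mb.file bs=1024 count=1024 \
    && rm /1mb.file
```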



Page 19


To run the kuard app, use:

docker run -d --name kuard --publish 8080:8080 kuard-amd64:1



docker tag kuard-amd64:1

According to

the tag command is:

docker tag image username/repository:tag

so image is kuard-amd64:1

but what’s the username?

Is it ?


The answer is that Docker’s docs here:

are incorrect. You don’t need a username or repository. It’s just a label. E.g. see

Correct would be:

docker tag image <any label you want>:tag

BUT for the purposes of pushing to a repository, that label DOES need to be of a specific format, i.e. username/image_name.

Shame they didn’t explain that in more detail.


And the next line is misleading too.

docker push

This creates the impression that you’re pushing your image to a website (or registry) hosted at

It’s not.

It’s just the tag you created above. Having said that, I had to simplify the tag (from 2 slashes to 1 slash) to get it to work. E.g.

docker tag kuard-amd64:1 snowcrasheu/kuard-amd64:1

The reason for

denied: requested access to the resource is denied

is that (from )


To login with docker use:

docker login

or, to use a specific username/password:

docker login -u <username> -p <password>

However, --password-stdin is preferable, as it keeps the password out of your shell history.

and push with:

docker push snowcrasheu/kuard-amd64:1

which you should then be able to see on Docker Hub. E.g.


Limit CPU / Memory



How to change a repository name:


Handy copy-and-paste code: