Working with Containers

Big Picture

When a container runs, the image's build layers are read-only; Docker adds a thin writable layer on top that all writes go to. This works via the union file system using copy-on-write.

Note: a container does not contain a kernel; it uses the host's kernel.

Containers have a lifecycle like a VM's: they can be created, started, stopped, restarted, and removed.

Modernize traditional apps: lift and shift a small part of the existing app first.

Containers are ephemeral (they don’t hang around for years) and immutable (we don’t log in and fix them; we replace them).

Diving Deeper

docker container run -it alpine sh

To detach from the shell without stopping the container, press Ctrl-P then Ctrl-Q (typing exit would end the shell and stop the container).

Note: if you’d like Docker to automatically clean up the container and remove the file system when the container exits, you can add the --rm flag.

Stop container:

docker container stop <first few digits of id>

Re the first few digits: we only need enough of the ID to be unique.
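The prefix match can be illustrated without a live daemon — a minimal shell sketch over sample (hypothetical) container IDs standing in for `docker container ls` output:

```shell
# Hypothetical IDs standing in for `docker container ls -a --format '{{.ID}} {{.Names}}'` output
sample='3f4a9b12cdef web1
7c2d881e00aa test-lamp-server'

# Docker accepts any ID prefix that matches exactly one container;
# here we check which full ID the prefix "3f" would select.
printf '%s\n' "$sample" | awk '$1 ~ /^3f/ {print $1}'
# prints 3f4a9b12cdef
```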

Note: Docker gives the container 10 seconds to clean up before stopping it.

To see container, we can use:

docker container ls or docker ps

(and use the -a flag to see stopped containers).

Start:

docker container start <first few digits of id>

Default processes for new containers

CMD: default command and/or arguments; run-time arguments override CMD instructions

ENTRYPOINT: fixed executable; run-time arguments are appended to ENTRYPOINT
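A minimal Dockerfile sketch of the difference (image name and commands are illustrative, not from any particular project):

```dockerfile
FROM alpine

# ENTRYPOINT fixes the executable; CMD supplies default arguments.
ENTRYPOINT ["echo"]
CMD ["hello"]

# docker run img          -> echo hello   (CMD default used)
# docker run img goodbye  -> echo goodbye (run-time args replace CMD,
#                                          then get appended to ENTRYPOINT)
```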

 

Terragrunt Interpolation syntax

Terragrunt allows you to use Terraform interpolation syntax (i.e. ${...} ) to call Terragrunt-specific functions.

These only work within a terragrunt = { ... } block.

Also, these interpolations do not work in a .tfvars file.

Terragrunt functions:

  • get_env(NAME, DEFAULT)

get_env returns the environment variable named NAME if it exists; if it does not exist, it returns the value specified by DEFAULT. E.g. ${get_env("BUCKET", "my-terraform-bucket")} would return $BUCKET if it exists, otherwise my-terraform-bucket.

Note also, Terraform will read in environment variables starting with TF_VAR_ so one way of sharing a variable named foo between Terraform and Terragrunt would be to set its value as the environment variable TF_VAR_foo and read it using this get_env function.
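A sketch in the older terraform.tfvars style these notes describe (the backend and bucket name are illustrative; newer Terragrunt versions use terragrunt.hcl instead):

```hcl
# terraform.tfvars (pre-0.19 Terragrunt style, matching these notes)
terragrunt = {
  remote_state {
    backend = "s3"
    config {
      # Uses $BUCKET if set, otherwise falls back to "my-terraform-bucket"
      bucket = "${get_env("BUCKET", "my-terraform-bucket")}"
    }
  }
}
```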

For others see: https://github.com/gruntwork-io/terragrunt

Terraform and Azure

Notes

  • Azure will let you create your own custom Dashboards
  • ARM (Azure Resource Manager) templates – predefined infrastructure using JSON
  • E.g. using Azure Cloud Shell (which includes terraform, git, etc): git clone https://github.com/scarolan/azure-terraform-beginners
  • Edit terraform.tfvars
    • resource_group
    • hostname (dashes OK, probably not underscores)
    • location: get a list using az account list-locations --output table
    • az vm list-skus -l westindia --output table | grep Standard_A0
  • terraform init: gets workspace ready, pulls in plugins and modules
  • terraform plan
  • terraform apply
  • See it being built in real-time in resource groups

Notes on the code: https://github.com/scarolan/azure-terraform-beginners

  • main.tf:
    • azurerm_resource_group: Azure must have a resource group
    • azurerm_virtual_network
    • azurerm_subnet
    • azurerm_network_security_group
    • azurerm_network_interface
    • provisioner "remote-exec" – simple remote exec. Could use Ansible, Chef
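The remote-exec step above might look something like this sketch (connection details and commands are illustrative, not the repo's actual code):

```hcl
resource "azurerm_virtual_machine" "web" {
  # ... VM configuration omitted ...

  provisioner "remote-exec" {
    connection {
      type     = "ssh"
      user     = "azureuser"
      password = "${var.admin_password}"   # illustrative; key-based auth is also common
    }

    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }
}
```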

 

Containerizing an App – the Dockerfile

Containerizing an App

Notes:

  • CAPITALIZE instructions
  • <INSTRUCTION> <value>
  • FROM always first instruction
  • FROM = base image
  • Good practice to list maintainer
  • RUN = execute command and create new layer
  • COPY = copy code into image as new layer
  • Some instructions add metadata instead of layers

 

Dockerfile

FROM <base image>

LABEL maintainer="<eg_your@email.com>"

RUN apk add --update nodejs nodejs-npm

COPY . /src

WORKDIR /src

RUN npm install

EXPOSE 8080

ENTRYPOINT ["node", "./app.js"]

Build with:

docker image build -t mywebapp .

or docker build

docker container run -d --name web1 -p 8080:8080 mywebapp

-d => detached

-p <host port>:<container port>

 

Under the hood

A Dockerfile is just a set of text instructions for building images.

FROM => creates a layer

LABEL => creates metadata

RUN => executes commands

COPY => copies files in and creates a new layer

WORKDIR => sets the working directory

 

Build context => location of your code.

Subfolders get included too.

 

Note: the docker client can be on a separate machine from the docker daemon; the build context just gets sent across. The context can also be a git repo. E.g.

docker image build -t mywebapp https://<github-url>

Multi-stage Builds

Stage 0:

FROM node:latest AS storefront

Stage 1:

FROM maven:latest AS appserver

Stage 2:

FROM java:8-jdk-alpine AS production

COPY --from=storefront /usr/src/atsea/app/react-app/build/ .

This last instruction is key. It pulls out the layer with the build code we need from that image.
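Pieced together, the three stages might look like this sketch (paths, artifact names, and build commands follow the atsea-style example loosely and are illustrative):

```dockerfile
# Stage 0: build the React front end
FROM node:latest AS storefront
WORKDIR /usr/src/atsea/app/react-app
COPY react-app .
RUN npm install && npm run build

# Stage 1: build the Java app server
FROM maven:latest AS appserver
WORKDIR /usr/src/atsea
COPY pom.xml .
COPY src ./src
RUN mvn -B package

# Stage 2: production image keeps only the build artifacts
FROM java:8-jdk-alpine AS production
WORKDIR /static
COPY --from=storefront /usr/src/atsea/app/react-app/build/ .
WORKDIR /app
COPY --from=appserver /usr/src/atsea/target/app.jar .
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The heavy node and maven toolchains live only in the intermediate stages; the final image contains just the JRE base plus the copied artifacts.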

E.g. if we build with:

docker image build -t multistage .

we get:

Listing the images (docker image ls) shows that the multistage image is a fraction of the size of its build-stage images.

 

Push an image to Docker Hub

E.g.

  1. Pull the image:

docker pull ubuntu

  2. Create a container:

docker run --name test-lamp-server -it ubuntu:latest bash

  3. Inside the container, update the package lists:

apt-get update

  4. Install the LAMP stack:

apt-get install lamp-server^

  5. Commit the changes to an image:

docker commit -m "Added LAMP Server" -a "NAME" test-lamp-server USER/test-lamp-server:latest

NAME: your full name

USER: your Docker Hub username

  6. Log in to Docker Hub:

docker login

  7. Push the image to Docker Hub:

docker push USER/test-lamp-server

 

More info

https://www.techrepublic.com/article/how-to-create-a-docker-image-and-push-it-to-docker-hub/