Why is my interface called ens3 rather than eth0?

Because devices now use Predictable Interface Names.

i.e. ens3: en for Ethernet, plus a name incorporating the firmware/BIOS-provided PCI Express hotplug slot index number (here, slot 3).
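If you want to see where the name came from on your own machine, udev can show its working – a quick check, assuming your interface really is called ens3:

udevadm test-builtin net_id /sys/class/net/ens3
# look for ID_NET_NAME_SLOT / ID_NET_NAME_PATH in the output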

See the systemd documentation on Predictable Network Interface Names.

Tech Rant

Part of the problem with Tech is:

  • keeping so many things going in your brain at once
  • having to be an expert at so many things
  • the brain-crushing complexity

Example 1

I’m trying to figure out how docker port works. i.e. with this:

docker container run --rm -d --name web -p 8080:80 nginx

is 8080 on the host or the container?

E.g. I can run this: docker port web
80/tcp -> 0.0.0.0:8080

but I’m not clear on the mapping so I check the docs:

https://docs.docker.com/engine/reference/commandline/port/#description

which shows you an example of the output but does not explain what the line actually means.

Is it container to host or host to container?

Next doc is https://docs.docker.com/engine/reference/commandline/run/#publish-or-expose-port--p---expose

This explains that 80:8080 is host -> container. Which would mean that the initial mapping I used for nginx above is mapping 80 in the container to 8080 on the host, i.e. the other way around.

Let’s test.

1. from the VM

Assuming nginx is listening on 80 inside the container (seems reasonable!), I should get something back from port 8080 on the host. (The host here is the VM – we haven’t even started on what’s happening on the actual host, i.e. my Mac.) So, from the VM:

curl localhost:8080

(What’s the format for curl – is it curl localhost:8080 or curl localhost 8080? Check some more docs: https://www.unix.com/shell-programming-and-scripting/241172-how-specify-port-curl.html. Not an unreasonable question, given that telnet doesn’t use a colon – i.e. you’d do telnet localhost 8080 – https://www.acronis.com/en-us/articles/telnet/)

which thankfully gives us some nginx output.

So, going back to:

docker port web
80/tcp -> 0.0.0.0:8080

This is saying:

80 on the container maps to 8080 on the host

Annoyingly, this is the other way round from the format used earlier (i.e. host to container).

If I do docker ps, the PORTS column shows:

0.0.0.0:8080->80/tcp

which, even more annoyingly, is the other way around again! i.e. host -> container. I guess the way to remember it is that it’s host -> container unless you examine the container itself – e.g. using docker port web.
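Putting the three views side by side (all from the commands above):

docker container run --rm -d --name web -p 8080:80 nginx   # -p HOST:CONTAINER, so host 8080 -> container 80
docker port web                                            # container -> host:  80/tcp -> 0.0.0.0:8080
docker ps                                                  # host -> container:  0.0.0.0:8080->80/tcp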

 

Some gotchas here:

  • curl localhost 8080 would give connection refused ‘cos curl defaults to port 80 – having got the command wrong, we’re actually testing port 80 (curl treats localhost and 8080 as two separate URLs)
  • if we’d tested using the container IP address instead, e.g.

docker container inspect web

gives "IPAddress": "172.17.0.3"

curl 172.17.0.3:80

that gives us nginx output, ‘cos we’re using the container’s own IP address and port.

and

curl 172.17.0.3:8080 would give:
curl: (7) Failed to connect to 172.17.0.3 port 8080: Connection refused

 

2. from the container

we need to exec from the VM into the container. Another doc page: https://docs.docker.com/engine/reference/commandline/exec/

docker exec -it web /bin/bash

and

curl localhost:80
bash: curl: command not found

So we need to install curl. More docs – ‘cos installing is different on the Mac (I use brew), Debian (apt) and CentOS (yum) – and first we need to find out which OS we’re in:

cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"

so we’re using Debian.

It should be apt-get install curl but I get an error.

More docs on how to install on Debian:

https://www.cyberciti.biz/faq/howto-install-curl-command-on-debian-linux-using-apt-get/

says apt install curl, which gives me the same problem.

More docs – seems like you have to run apt-get update first.

https://stackoverflow.com/questions/27273412/cannot-install-packages-inside-docker-ubuntu-image
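So, inside the container, the sequence that works is roughly:

apt-get update
apt-get install -y curl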

And finally I can verify that, in the container,

curl localhost:80

outputs nginx content.

 

Note also: I’ve got Ubuntu in the VM and Debian in the container.

VM: my Vagrantfile uses  config.vm.box = "ubuntu/bionic64"

Container: docker container run --rm -d --name web -p 8080:80 nginx uses a Debian based container.

 


Finally, I write a blog post so I can remember in future how it all works without having to spend an entire morning figuring it out. I open up WordPress and it’s using Gutenberg, which I’ve been struggling with. Trying to disable it is a pain. This doesn’t work:

How to Disable Gutenberg & Return to the Classic WordPress Editor

Groan. I just pasted a link and don’t want the auto-insert content feature; however, I can’t even be bothered to try and figure out how to disable that behaviour.

In the end, I posted a test post and went to All Posts and clicked Classic Editor under the post.

Another rant: WordPress’ backtick -> formatted code only occasionally works – very frustrating.

 

3. To close the loop, let’s test from my Mac

As we’re using 8080 on the host (the VM), let’s forward to 8081 on the Mac. Add this to the Vagrantfile:

config.vm.network "forwarded_port", guest: 8080, host: 8081

https://www.vagrantup.com/docs/networking/forwarded_ports.html
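So the intended chain of forwarding is:

Mac :8081 -> (Vagrant forwarded_port) -> VM :8080 -> (docker -p 8080:80) -> container :80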

Another rant – trying to reprovision the VM with this change gave me a continuous loop of messages.

I couldn’t be bothered to debug this so just did vagrant destroy vm1 and started again.

https://www.vagrantup.com/intro/getting-started/teardown.html

Then some more Waiting. e.g.

==> vm1: Waiting for machine to boot. This may take a few minutes…

Given how fast computers are it seems crazy how much Waiting we have to do for them. E.g. web browsers, phones, etc.

End of that rant.

 

So, testing from my Mac:

http://localhost:8081/

did not work.

I tried

http://localhost:8080/

which did work. Wtf?

I gave up here. It felt like figuring out the problem was a rabbit hole too far.

 

Example 2

You’ve got a million more important things to do but you suddenly find in your AWS console that:

Amazon EC2 Instance scheduled for retirement

Groan. This instance is part of an Elasticsearch cluster.

So, it should be a pretty standard process (sketched as commands below):

  • disable shard allocation
  • stop elasticsearch on that node
  • terminate the instance
  • bring up another instance using terraform
  • reenable shard allocation
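For reference, the shard-allocation steps correspond roughly to calls like these (a sketch – check the setting name against your Elasticsearch version, and stop the service however you normally manage it):

# disable shard allocation
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{"transient": {"cluster.routing.allocation.enable": "none"}}'

# stop elasticsearch on the node, then terminate/replace the instance
sudo systemctl stop elasticsearch

# reenable shard allocation afterwards
curl -XPUT localhost:9200/_cluster/settings -H 'Content-Type: application/json' -d '{"transient": {"cluster.routing.allocation.enable": "all"}}'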

but you find unassigned_shards is stuck at x.

So, now you’ve got to become an elasticsearch expert.

E.g. do a

curl -XGET localhost:9200/_cluster/allocation/explain?pretty

and work out why these shards aren’t being assigned.

There goes the rest of my day wading through reams of debugging output and documentation.

https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/

 

Example 3

Finding information is so slow.

E.g. you want to know why Elasticsearch skipped from version 2.x to version 5.x.

And whether it’s important.

So you Google. Eventually, hiding amongst the Release Notes, is a Stack Overflow page (https://stackoverflow.com/questions/38404144/why-did-elasticsearch-skip-from-version-2-4-to-version-5-0) which says to go and watch a 1 hour 52 minute keynote for the answer.

Unless you’re an Elasticsearch specialist, you don’t want to spend that amount of time finding out this info (the answer, btw, is in Elasticsearch: why the jump from 2.x to 5.x).

 

Example 4

After you’ve spent days finding a solution, the answer is complex.

E.g. let’s say you have to do a production restore of Elasticsearch.

Can you imagine the dismay when you have to face the complex snake’s nest contained here:

https://www.elastic.co/guide/en/elasticsearch/reference/1.7/modules-snapshots.html#_restore

The preconditions start with:

The restore operation can be performed on a functioning cluster. However, an existing index can be only restored if it’s closed and has the same number of shards as the index in the snapshot.

and continue for page after page.

There is no simple command like: elasticsearch restore data from backup A

 

Instead you have to restore an index from a snapshot. How do you work out whether a snapshot contains an index?

Easy! Just search dozens of Google results, wade through several hundred pages of Elasticsearch documentation and Stackoverflow questions for different versions of Elasticsearch and Curator. E.g.

Query Google for:

restore index from elasticsearch snapshot – https://www.google.com/search?q=restore+index+from+elasticsearch+snapshot&oq=restore+index+from+elasticsearch+snapshot&aqs=chrome..69i57j0.8028j0j4&sourceid=chrome&ie=UTF-8

and you don’t get much that’s useful. The nearest hits are:

 

https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html

and

https://stackoverflow.com/questions/39968861/how-to-find-elasticsearch-index-in-snapshot

Funny – in the Stack Overflow question, the answerer had the temerity to say:

I’m confused by your question as you have posted snapshot json which tells you exactly which indices are backed up in each snapshot.

I guess that exactly reflects the lack of understanding/empathy in the tech industry – no allowance for the fact that someone might not be able to make the leap to generalise that answer to different indices and snapshots.
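For the record, the call that eventually answers the question looks something like this (my_repo is a placeholder for whatever your snapshot repository is called):

curl -XGET 'localhost:9200/_snapshot/my_repo/_all?pretty'   # each snapshot in the output has an "indices" array listing what it contains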

Example 5

Software that’s ridiculously complex to use.

E.g. take ElasticHQ – a simple web GUI.

But how do you list snapshots?

Perhaps their documentation says something.

http://docs.elastichq.org/

No.

How about a search?

That returns 2 results. One about an API and another containing the word in code.

http://docs.elastichq.org/search.html?q=snapshot&check_keywords=yes&area=default

For anyone searching, it’s under Indices > Snapshots.

But if you’ve clicked on Indices, beware ‘cos you’re now in an infinite loop waiting for Indices to return.

Example 6: the acres of gibberish problem

Let’s say you’re learning a new technology.

The first thing you usually do is some Hello World system.

So, you try this with a Kubernetes pod: run a hello-world container as a pod and then take a look at it.
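A minimal version of that might look like this (the image and pod names are just illustrative):

kubectl run hello-world --image=nginx --restart=Never   # create a single pod
kubectl get pods                                        # is it Running?
kubectl describe pod hello-world                        # where the gibberish turns up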

All seems pretty rosy? No!

Now you’re in acres of gibberish territory.

You’ve gone from trying to do a simple Hello World app to having to figure out stuff like:

  • NodeHasSufficientDisk
  • NodeHasNoDiskPressure
  • rpc error: code = Unknown desc = Error response from daemon

It’s the equivalent of learning to drive but, on trying to signal left, finding you have to replace the electrical circuitry of the car to get it working correctly.

Eventually I found something that seemed to fix the problem.

Example 7: Complexity

Example

Say I want to find out the version of a piece of software. This should be something even a beginner can do. E.g. it usually goes something like this:

app-name version

Now, however, it’s vastly more complex.

Let’s try helm version.

This is all real output I got.

Day 1:

Fortunately I know this is ‘cos I don’t have Kubernetes running on my local system (helm version doesn’t just report the client version – it also tries to talk to the cluster), which I can fix with:

minikube start

Now, however, helm version fails in a different way.

So, now, when you’re not even at the level of running a hello world app you’re having to debug why you can’t even find out the version of your application!

I could go on with this. E.g. see the post Kubernetes: helm

Kind of frustrating when, at the most basic level, you reach a completely impassable roadblock. E.g.

And then you get:

which you then have to spend a day Googling a solution for.

E.g. on Stackoverflow you get obscure stuff like:

with no explanation of why you need it or what it does – and it doesn’t work anyway.
Day 3:

This was a few days later, when I thought I’d sorted out something as simple as outputting the version.

helm version

Fortunately, I figured this out in a few minutes. But with less Kubernetes knowledge it could take you several days of effort – it shouldn’t take that long to work out what version of software you’re using.

By the way, this last one turned out to be because my default cluster was the one in EKS rather than minikube.
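The quick sanity check for that sort of thing (the context name depends on your setup; minikube is the default for a local minikube):

kubectl config current-context        # which cluster am I actually talking to?
kubectl config use-context minikube   # switch back to the local cluster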

Need another example?

Ha ha. I could go on for days.

Let’s say you’ve committed some files using git. Now you realise you committed them on the wrong branch.

OK, if you were using Word or Google Docs, it would be pretty simple.

Select your changes, copy, Cmd Z until you got to the original state, paste.

With git it’s fucking complex.

I dare anyone with less than 3 years’ experience of git to do something this simple in under an hour.
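For the record, the usual dance is something along these lines (assuming the stray commits are the last two on master and should have been on a new branch – the branch name and commit count are illustrative):

git branch my-feature     # new branch pointing at the commits you just made
git reset --hard HEAD~2   # move master back two commits (they still exist on my-feature)
git checkout my-feature   # carry on where you meant to be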

 

More examples

https://github.com/kubernetes/dashboard

offers a dashboard. Great!

Seems to be a single line install. But just before the single line install you read:

IMPORTANT: Read the Access Control guide before performing any further steps. The default Dashboard deployment contains a minimal set of RBAC privileges needed to run.

That sounds ominous. Whilst I assume I don’t need to worry about that, as I’m just testing on a local minikube, I should read on just in case.

I start scanning through https://github.com/kubernetes/dashboard/wiki/Access-control

I get to:

Kubernetes supports few ways of authenticating and authorizing users. You can read about them here and here.

and I’m starting to worry about going down a rabbit hole here. Will I ever get this dashboard installed? Or will I just have to keep reading more and more documentation?

I decide to brave it and go with the single line install:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

and get:

The Deployment "kubernetes-dashboard" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"k8s-app":"kubernetes-dashboard"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

which seems pretty obvious. Clearly the Kubernetes authors have spent a lot of time making sure their output error messages are nice, clear and to the point. I’m not sure why they couldn’t have included a few more paragraphs of gibberish just to make things more obscure.

I try the suggested URL anyway

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

and get:

Again, clearly plenty of time spent creating nice, clear debug output.

 

What’s the solution?

Ignore all of the detailed advice on the Kubernetes Dashboard github page – i.e. https://github.com/kubernetes/dashboard

and, kudos to the Kubernetes documentation team (they outdid themselves here), buried nicely in issue 2898 is the answer:

minikube dashboard

So, translating, what the Kubernetes team mean by:

The Deployment "kubernetes-dashboard" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"k8s-app":"kubernetes-dashboard"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

is

The dashboard is already running. Access it using <put URL in here>.

Someone needs to work on a Google Translate engine that maps Kubernetes to English.

https://github.com/kubernetes/dashboard/issues/2898

 

More examples

I want to run go on the Mac.

Example 8: Latency

It shouldn’t take 10 seconds to load up a web page.

Every time you click on a link it’s another 10 seconds. Get to the wrong web page? Click back and go to the next. That’s half a minute.

Example 9: Stuff that doesn’t work

I’ve got 2 digital thermometers. Side by side.

One reads 18.4C, the other 17.5C.

They’re probably both wrong. So the temperature could be 16C or 19C. Who knows.

Example 10: Web page insertions

You go to click on a Google result. And suddenly Google (and they’re not the only culprits) inserts something else into the page and you click on something else.

Example 11: Cookies

Accept this

Accept this

Accept this

Accept this

Accept this

Accept this

Accept this

 

Every single website you go to. Over and over and over again.

Can’t there just be a one-time option on some website somewhere for me to donate $10 to a monkey to keep clicking “Accept this” on my behalf?

 

Example 12: Inconsistencies

Why can’t all command line tools use the same flags?

Some use double-dash, some don’t.

They can’t even output versions in the same way.
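A few real examples for the version case alone:

go version          # subcommand, no dashes
python --version    # double dash
java -version       # single dash
ssh -V              # single dash, capital letter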

 

Example 13: Crap documentation

E.g. page 16 (i.e. the very start – right after the introductory chapter) of Kubernetes: Up & Running suggests running:

docker build -t kuard-amd64:1 .

which doesn’t work.

https://github.com/kubernetes-up-and-running/kuard/issues/7

Not only doesn’t it work but it’s incorrect (MAINTAINER is deprecated).

On the most basic example in the book, users have to debug why their 4-line piece of code isn’t working. And 2 of those 4 lines are incorrect.

 

 

cp -R vs rsync -pvzar

So what’s the difference between cp -R and rsync -pvzar?

From man cp:

-R    If source_file designates a directory, cp copies the directory and the entire subtree connected at that point. If the source_file ends in a /, the contents of the directory are copied rather than the directory itself. This option also causes symbolic links to be copied, rather than indirected through, and for cp to create special files rather than copying them as normal files. Created directories have the same mode as the corresponding source directory, unmodified by the process' umask.

In -R mode, cp will continue copying even if errors are detected.

Note that cp copies hard-linked files as separate files. If you need to preserve hard links, consider using tar(1), cpio(1), or pax(1) instead.

So:

  • recursively copy
  • copy symlinks
  • create special files rather than copying them as normal files
  • directories have same mode as source

 

From man rsync, looking at the pvzar flags individually:

  • -p, --perms: preserve permissions
  • -v, --verbose: increase verbosity
  • -z, --compress: With this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted – something that is useful over a slow connection. Note that this option typically achieves better compression ratios than can be achieved by using a compressing remote shell or a compressing transport because it takes advantage of the implicit information in the matching data blocks that are not explicitly sent over the connection.
  • -a, --archive: This is equivalent to -rlptgoD. It is a quick way of saying you want recursion and want to preserve almost everything (with -H being a notable omission). The only exception to the above equivalence is when --files-from is specified, in which case -r is not implied. Note that -a does not preserve hardlinks, because finding multiply-linked files is expensive. You must separately specify -H.
  • -r, --recursive: This tells rsync to copy directories recursively. See also --dirs (-d).

 

Note:

-H, --hard-links
This tells rsync to look for hard-linked files in the transfer and link together the corresponding files on the receiving side. Without this option, hard-linked files in the transfer are treated as though they were separate files.

Note that rsync can only detect hard links if both parts of the link are in the list of files being sent.
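So, for a straightforward local copy, the two roughly line up like this (a sketch – check the man pages for the trailing-slash semantics, which differ between the tools):

cp -R src dest           # recursive copy; symlinks copied as links, directory modes preserved
rsync -pvzar src dest    # recursive and permission-preserving (-a implies both); -v verbose; -z compression mainly matters over a network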

 

See also How do you copy a directory?

Linux OS version

Use cat /etc/os-release

e.g. in the Debian container above this gives PRETTY_NAME="Debian GNU/Linux 9 (stretch)", or on the Ubuntu VM something like PRETTY_NAME="Ubuntu 18.04 LTS".