Kubernetes: an odyssey of over-complexity

To play around with Kubernetes, rather than building a cluster from scratch and having to fix a billion different things that could go wrong, I decided to start off with a completely working cluster. So, from https://medium.com/@raj10x/multi-node-kubernetes-cluster-with-vagrant-virtualbox-and-kubeadm-9d3eaac28b98, I used the Vagrantfile gist at the end and ran vagrant up

and found myself, after thousands of unintelligible lines of output had scrolled past, back at the command line. Had things worked?

Did I need to be worried about the reams of gibberish output like:

Who knows?! I had no idea what the state of the cluster was. Had anything worked? Had something worked? Had nothing worked?

So, some debugging:

  1. State of VMs
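Checking the VMs is a one-liner on the host (the VM names depend entirely on the Vagrantfile):

```shell
# Show each VM defined by the Vagrantfile and whether it is running
vagrant status
```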

So, at least the VMs are running. That would have been a useful thing to output!

2. State of nodes

(^^^^^ I try not to go flying off on a tangent with technology – it’s so easy to end up going down a rabbit hole of realising this is wrong and then that’s wrong. However, one thing that seems consistently broken across many editors is this 1. 2. numbering indent: the first indent is almost impossible to delete and the second indent is almost impossible to insert.)
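Back on track. The node check has to be run from inside the master VM, where kubeadm set up the kubeconfig ("master" is a guess at the VM name from the Vagrantfile):

```shell
# SSH into the master VM ("master" is an assumed VM name), then
# ask the API server for node status
vagrant ssh master

# Inside the VM: the STATUS column shows Ready / NotReady per node
kubectl get nodes
```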

So, the nodes don’t seem to be ready.
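Digging further into the system pods (the pod-name hashes will differ on any other cluster):

```shell
# coredns and the other control-plane pods live in the kube-system namespace
kubectl get pods -n kube-system
```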

So, something seems wrong with coredns.

Having all these namespaces just adds to the confusion / complexity. E.g.
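For instance, plain kubectl logs only looks in the default namespace, so the obvious command just fails (a sketch; the exact error text may vary by kubectl version):

```shell
# Looks in the "default" namespace and finds nothing:
kubectl logs coredns-f9fd979d6-4gmwp
# Error from server (NotFound): pods "coredns-f9fd979d6-4gmwp" not found

# You have to already know the pod lives in kube-system:
kubectl -n kube-system logs coredns-f9fd979d6-4gmwp
```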

Getting the logs of something shouldn’t be this difficult. You shouldn’t have to pick up a book on Kubernetes or do the CKA or search StackOverflow to find out simple stuff like this.

OK, this page sounds promising: https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/

This page shows how to debug Pods and ReplicationControllers.

Let’s try their first example:
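That first example boils down to kubectl describe pods ${POD_NAME}; applied to the stuck coredns pod (the -n flag is my addition, since the pod is in kube-system):

```shell
# Describe the pod; scheduling problems normally show up under Events
kubectl -n kube-system describe pod coredns-f9fd979d6-4gmwp
```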

It then says:

Look at the output of the kubectl describe … command above. There should be messages from the scheduler about why it can not schedule your pod

There aren’t any messages from the scheduler, so another dead end.


So, going to the Kubernetes Office Hours Slack, someone suggested:

kubectl -n kube-system logs coredns-f9fd979d6-4gmwp

which usefully outputs nothing at all. Literally nothing: a big fat nothing. Not even a “This Pod is Pending”!

It finally turned out the magic invocation was:

and after a whole lot more gibberish it finally gets to:


So, coredns did not deploy because the nodes were tainted not-ready, rather than the nodes being in a NotReady status because coredns was pranged. I.e. the problem is with the nodes.

Checking head:
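The exact command here is lost to the page formatting, but given what follows it was presumably the node’s describe output piped through head; a sketch (the node name is a placeholder – use one from kubectl get nodes):

```shell
# Dump the node's status; the Conditions table appears near the top,
# which is why piping through head is enough
kubectl describe node master | head -n 40
```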

So, right in the middle of all that gibberish:

Ready False Tue, 22 Sep 2020 14:22:39 +0000 Mon, 21 Sep 2020 14:01:30 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
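For what it’s worth, “cni config uninitialized” means no pod-network add-on has been applied to the cluster yet, so the kubelet keeps every node NotReady, which taints them, which is why coredns could never schedule. The usual fix is to apply a CNI manifest; a sketch (flannel chosen arbitrarily, and the URL is an assumption that may have moved since):

```shell
# Install a pod-network add-on so the kubelet's CNI config gets initialized;
# the manifest URL is an assumption, not taken from the tutorial
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```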

So, maybe try a different Vagrantfile after I’ve lost a day debugging this one.

