I’ve been diving into Kubernetes for a couple of months now. Discovering the possibilities and the philosophy behind the hype definitely changed my mind. Yes, it is huge (in every sense ;) ) and it does change the way we, ex-sysops / ops / sysadmins, do our work. Not tomorrow, not soon, now.
I’ve had my hands on various managed Kubernetes offerings like GKE (Google Kubernetes Engine), EKS (Amazon Elastic Kubernetes Service) or the more humble minikube, but I’m not happy when I don’t understand what a technology is made of. So I googled and googled (yeah, sorry Qwant and DuckDuckGo, I needed actual answers), until I found many incredibly useful resources.
Finally, after hours of reading, I decided to fire up my own k8s cluster “on premise”, or better said, under my desk ;).
With some spare hardware I had here and there, I built a good old Debian GNU/Linux 9 machine which will be my trusty home datacenter.
There’s a shitton of resources on how to build up a k8s cluster, but many pointers and the experience of friends like @kedare and @Jean-Alexis put Kubespray at the top of the list.
Long story short, Kubespray is a huge Ansible playbook with all the bits and pieces necessary to build a highly available, up-to-date Kubernetes cluster. It also comes with a Vagrantfile which helps with the creation of the needed nodes.
Contrary to the official Kubespray documentation guidelines, I used a virtualenv to install the python bits, which is cleaner.
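A minimal sketch of that virtualenv setup (the venv path is my own choice, and `requirements.txt` is assumed to sit at the root of a standard Kubespray checkout):

```shell
# Create an isolated python environment so ansible and its
# dependencies stay out of the system packages.
python3 -m venv ~/venvs/kubespray
. ~/venvs/kubespray/bin/activate

# From inside the kubespray checkout, install the pinned dependencies.
pip install -r requirements.txt
```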
Some notes about how to run or tune the playbook:
- You can’t have fewer than 3 nodes, otherwise the playbook will fail at some point
- Each node needs at least 1.5G of RAM
- CoreOS has no libvirt vagrant box, stick with …
- When the cluster is created with vagrant, the ansible inventory is available at inventory/sample/vagrant_ansible_inventory
- Even if disable_swap is set to true in roles/kubespray-defaults/defaults/main.yaml, swap remains active on the nodes, preventing kubelet from starting. journalctl showed the following:
Oct 15 06:17:36 k8s-01 kubelet: F1015 06:17:36.672113
Which seems to be caused by these earlier warnings:
Oct 15 06:17:47 k8s-01 kubelet: Flag --fail-swap-on has been deprecated,
Simply fix this by executing:
$ ansible -i inventory/sample/vagrant_ansible_inventory kube-node -a "sudo swapoff -a"
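Note that swapoff -a does not survive a reboot; the swap entries in /etc/fstab also need neutralizing. A sketch through the same inventory (the sed expression assumes classic fstab lines with a literal ` swap ` field):

```shell
# Comment out every swap line in /etc/fstab on all nodes,
# so swap stays off after the next reboot.
ansible -i inventory/sample/vagrant_ansible_inventory kube-node \
  -a "sudo sed -i '/ swap / s/^/#/' /etc/fstab"
```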
To drive the cluster from your own machine, enable Kubespray’s localhost kubectl options (kubectl_localhost and kubeconfig_localhost in the inventory’s group_vars).
This will populate the
inventory/sample/artifacts/ directory with the
kubectl binary and a proper
admin.conf file in order to use
kubectl from a client able to reach the cluster. Usually, you’d copy it like this:
$ mkdir -p $HOME/.kube
$ cp inventory/sample/artifacts/admin.conf $HOME/.kube/config
From here, you’ll be able to use kubectl on the host itself.
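A quick sanity check once admin.conf is in place ($HOME/.kube/config is kubectl’s default location, so the export below is only there to make it explicit):

```shell
# Point kubectl at the copied config (optional: this is the default path).
export KUBECONFIG=$HOME/.kube/config

kubectl cluster-info   # should print the API server URL
kubectl get nodes      # the vagrant nodes should report Ready
```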
You may want to connect from another host which has no direct route to the cluster; in that case, simply run kubectl as a proxy:
$ kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
Then, from the remote host, point kubectl at the proxy (8001 is kubectl proxy’s default port; replace <k8s-host> with the machine running the proxy):
$ kubectl get nodes -s http://<k8s-host>:8001
(yeah, keyboard needed, mandatory F2 at boot because of missing fan…)