Kubernetes under my desk

I’ve been diving into Kubernetes for a couple of months now. Discovering the possibilities and the philosophy behind the hype definitely changed my mind. Yes, it is huge (in every sense ;) ) and it does change the way we, ex-sysops / ops / sysadmins, do our work. Not tomorrow, not soon, now.

I’ve had my hands on various managed Kubernetes clusters like GKE (Google Kubernetes Engine), EKS (Amazon Elastic Kubernetes Service) or the more humble minikube, but I’m not happy when I don’t understand what a technology is made of. So I googled and googled (yeah, sorry Qwant and DuckDuckGo, I needed actual answers), until I found many incredibly useful resources.

Finally, after hours of reading, I decided to fire up my own k8s cluster “on premise”, or better said, under my desk ;).
With some hardware I had lying here and there, I built a good old Debian GNU/Linux 9 machine which will be my trusty home datacenter.
There’s a shitton of resources on how to build a k8s cluster, but many pointers and the experience of friends like @kedare and @Jean-Alexis put Kubespray at the top of the list.
Long story short, Kubespray is a huge Ansible playbook with all the bits and pieces necessary to build a highly available, up-to-date Kubernetes cluster. It also comes with a Vagrantfile which helps with the creation of the needed nodes.

By default, Vagrant uses VirtualBox as its virtual machine provider; using kvm instead makes it faster and better integrated into our Debian system.
Here is a great tutorial on setting up such a combo.
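
For reference, once libvirt is in place, the remaining pieces fit together roughly like this (a sketch; the package names are the Debian 9 ones and may differ on your system):

$ sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients libvirt-dev
$ vagrant plugin install vagrant-libvirt
$ vagrant up --provider=libvirt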

Contrary to the official Kubespray documentation guidelines, I used virtualenv to install the Python bits, which is cleaner.
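
Something along these lines, run from the Kubespray checkout (a sketch; the virtualenv location is arbitrary):

$ virtualenv ~/.venvs/kubespray
$ . ~/.venvs/kubespray/bin/activate
(kubespray) $ pip install -r requirements.txt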

Some notes on how to run or tune the Vagrantfile:

  • You can’t have fewer than 3 nodes, the playbook will fail at some point
  • Each node needs at least 1.5GB of RAM
  • CoreOS has no libvirt Vagrant box, stick with ubuntu1804
  • Once the cluster is created with Vagrant, the Ansible inventory is available at inventory/sample/vagrant_ansible_inventory
  • Even if disable_swap is set to true in roles/kubespray-defaults/defaults/main.yaml, swap remains active, preventing kubelet from starting. journalctl showed the following:
Oct 15 06:17:36 k8s-01 kubelet[2140]: F1015 06:17:36.672113    2140 server.go:262] failed to run Kubelet:
Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
/proc/swaps contained:
[Filename   Type       Size     Used  Priority
 /dev/sda2  partition  1999868  0     -2]

This seems to be caused by these earlier warnings:

Oct 15 06:17:47 k8s-01 kubelet[2369]: Flag --fail-swap-on has been deprecated,
This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.

Simply fix this by executing:

$ ansible -i inventory/sample/vagrant_ansible_inventory kube-node -a "sudo swapoff -a"
$ ansible -i inventory/sample/vagrant_ansible_inventory kube-node -a "sudo systemctl restart kubelet"
$ ansible -i inventory/sample/vagrant_ansible_inventory kube-node -a "sudo sed -i'.bak' '/swap/d' /etc/fstab"

Enable localhost kubectl in inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml

kubeconfig_localhost: true
kubectl_localhost: true

This will populate the inventory/sample/artifacts/ directory with the kubectl binary and a proper admin.conf file in order to use kubectl from a client able to reach the cluster. Usually, you’d copy it like this:

$ mkdir -p $HOME/.kube
$ cp inventory/sample/artifacts/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

From here, you’ll be able to use kubectl on the host itself.
You may want to connect from another host which has no direct route to the cluster; in that case, simply run kubectl as a proxy:

$ kubectl proxy --address='0.0.0.0' --accept-hosts='.*'
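
The Kubernetes API then becomes reachable through the proxy on its default port, 8001. From the remote host, something like this should answer (a sketch; the IP of the machine running the proxy is made up):

$ curl http://192.168.1.50:8001/api/v1/nodes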

And voila:

$ kubectl get nodes
NAME     STATUS   ROLES         AGE   VERSION
k8s-01   Ready    master,node   1d    v1.12.1
k8s-02   Ready    master,node   1d    v1.12.1
k8s-03   Ready    node          1d    v1.12.1

(photo of the home datacenter under the desk)

(yeah, keyboard needed, mandatory F2 at boot because of missing fan…)

MATE desktop fixes (updated)

Last week I upgraded my Linux Mint 18 MATE desktop distro to 18.3. With the massive progress GNU/Linux has made on the desktop, this kind of upgrade is usually a simple task and no hassle is to be expected. Except this time, when I ran into several GUI-related annoyances.

1. MATE panel transparency

I recently became addicted to /r/unixporn and made myself a shiny modern desktop out of MATE and rofi. This desktop uses the Arc-Darker theme, which used to work nicely with mate-panel version 1.14 but was messing up transparency with version 1.18, the one shipped with Mint 18.3.

I fixed this by changing:

background-color: #2b2e37;

to

background-color: transparent;

In the

.gnome-panel-menu-bar.menubar,
PanelApplet > GtkMenuBar.menubar,
PanelToplevel,
PanelWidget,
PanelAppletFrame,
PanelApplet {

section of the ~/.themes/Arc-Darker/gtk-3.0/gtk.css file.

2. Applet padding

To fix the previous issue, I first switched to the latest version of the theme, in which the new maintainer changed various small details.
Tastes differ, we all know that, and the new Arc-Darker author felt a bigger padding between panel applets was sexier. I have a different opinion. This is the value to change in the same ~/.themes/Arc-Darker/gtk-3.0/gtk.css file:

-NaTrayApplet-icon-padding: 4;

3. Weird rofi behaviour

I opened an issue about this one: weird compositing glitches like rofi not appearing until the mouse was moved (and thus the display refreshed), sometimes appearing only partially, or a translucent gnome-terminal not refreshing its content.

I suppose this issue has more to do with my graphics driver (nvidia-384), yet I found that running marco (the MATE window manager) with compositing disabled and using compton instead fixes it. This configuration is really easy to switch to, as it is a choice in the Preferences → Windows → Desktop Settings → Window Manager drop-down menu.
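
If compton isn't installed yet, grabbing it and starting it by hand is enough to try the combination before making it permanent (a sketch; the package name is the Ubuntu/Mint one):

$ sudo apt-get install compton
$ compton -b   # start the compositor as a background daemon, default settings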

4. Wrong wallpaper resize

I happen to have 4 monitors. Don’t ask, I like it like that, I see everything I need to see in one small head movement, maybe someday I’ll switch to a 4k monitor but right now this setup is like 4 times cheaper for a 4720 x 3000 resolution.

Now, about the bug: from time to time, the wallpaper gets zoomed as if it were spanning all screens, except it only covers the main one. Ugly stuff.
This bug happened randomly, often when opening the file manager, caja, and as a matter of fact, I found in the Arch wiki that it is indeed caja which actually manages the desktop, and thus the background. Also on their wiki, I found how to disable this feature:

$ gsettings set org.mate.background show-desktop-icons false
$ killall caja # Caja will be restarted by session manager

And then I set my wallpaper using the well-known feh command:

$ feh --bg-fill ~/Pictures/wallpapers/hex-abstract-material-design-ad-3840x2160.jpg

And voila, no more messing with my wallpaper.
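
A side note: feh --bg-fill records the chosen command in ~/.fehbg, so replaying that file at session start, for instance from MATE's Startup Applications, should keep the wallpaper across logins:

$ sh ~/.fehbg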

I’ll try to keep this list updated if I find anything else.

Webcam streaming with ffmpeg

I’m a bit of a stressed person, and when I’m not home, I like to have a webcam streaming what’s going on, mainly to see how my dog is doing. At my main house, I use the fabulous motion project, which has a ton of cool features like recording images and videos when there’s movement, playing a sound, handling many cameras and so on.

As I said before, I acquired an apartment intended for rental, and it has really poor Internet access: it is located on the mountainside and only weak ADSL was available.
Another point: I own a MacBook Pro for music purposes, and when I come crash here, this is the computer I take with me so I can compose if inspiration comes ;) So when I leave the apartment, this is the machine that will stream what’s happening in the house.

First thing: while motion builds on the Mac, it does not find (or at least I didn’t find how to make it find) any webcam device. Webcams are exposed via avfoundation and motion doesn’t seem to handle that framework.
After struggling a bit with the incredibly over-complicated vlc command line, I fell back to ffmpeg, whose package is available through pkgin.
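
To figure out which avfoundation device index to pass to ffmpeg (the "0" used further down), ffmpeg can list the available devices itself:

$ ffmpeg -f avfoundation -list_devices true -i ""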

Here I found a gist with a working example, except I had to considerably lower the parameters in order to meet the low-bandwidth constraint. Here’s a working, bandwidth-friendly example of an ffserver configuration:

HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 10
MaxClients 10
MaxBandWidth 1000
CustomLog -

<Feed camera.ffm>
File /tmp/camera.ffm
FileMaxSize 5000K
</Feed>

<Stream camera.mpg>
Feed camera.ffm
Format mpjpeg
VideoFrameRate 2
VideoIntraOnly
VideoSize 352x288
NoAudio
Strict -1
</Stream>

Start the server using ffserver -f ffserver.conf.

In order to stream the webcam to this server, here’s the command I use:

$ ffmpeg -f avfoundation -framerate 5 -video_size cif -i "0" http://localhost:8090/camera.ffm

Now, the stream is visible at http://192.168.4.12:8090. The command line would be very similar under GNU/Linux except you’d use -f video4linux2 and -i /dev/video0.
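
Put together, the GNU/Linux version would look something like this (an untested sketch, simply transposing the flags above):

$ ffmpeg -f video4linux2 -framerate 5 -video_size cif -i /dev/video0 http://localhost:8090/camera.ffm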

Obviously, the goal here is to watch the stream when I’m not home, with my mobile phone web browser.
As I mentioned in my previous post, I use a wrt54g router running dd-wrt which is connected to an OpenVPN hub I can reach from my network. dd-wrt actually has a NAT section, but unfortunately, it only maps internal IPs to the public interface, not taking into account any other type of network.
Nevertheless, it is possible to add very custom iptables rules using the Administration → Commands → Save firewall button. So I added the following rule:

# redirects tunnel interface (tun1), port 8090 (ffmpeg) to the Macbook
iptables -t nat -A PREROUTING -i tun1 -p tcp --dport 8090 -j DNAT --to-destination 192.168.4.12

Also make sure the MacBook gets a static DHCP lease.

Of course, you probably don’t want this window into your house left wide open to the public, and will want to hide it behind an nginx reverse proxy asking for authentication:

location /webcam2/ {
    auth_basic "Who are you?";
    auth_basic_user_file /usr/pkg/etc/nginx/htpasswd;
    proxy_pass http://10.0.1.20:8090/; # OpenVPN peer address
    proxy_redirect off;
    proxy_set_header X-Forwarded-For $remote_addr;
}
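
The htpasswd file referenced above can be created with the htpasswd utility from the Apache tools, or any equivalent (a sketch; "imil" is just an arbitrary user name):

$ htpasswd -c /usr/pkg/etc/nginx/htpasswd imil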

And voila! A poor man’s video surveillance system in place ;)

date over HTTP

I always manage to get myself into weird issues… I have this (pretty old) wrt54g router that works well with the dd-wrt v3.0-r34311 vpn release. This router is installed in an apartment intended for rental where I happen to crash every now and then. It connects to an OpenVPN hub of mine so I can monitor it and be sure guests renting the apartment have working Internet access.

The apartment is located on a small mountain and electricity is not exactly stable; from time to time power goes down and comes back up. And I noticed the OpenVPN link sometimes fails to reconnect.

After some debugging, I finally noticed that for some reason, the enabled NTP feature sometimes does not fetch the current time, and so the router’s date is stuck at the epoch.

In order to be sure the access point gets the right time, I took a different approach: when booting, it will fetch the current time online and set the date accordingly.
I was surprised not to find any online website providing some kind of strftime REST API, so I finally decided to put something up myself.
nginx’s ssi module has interesting variables for this particular use, namely date_local and date_gmt. Here’s the related nginx configuration:

location /time {
    ssi on;
    alias /home/imil/www/time;
    index index.html;
}

index.html contains the following:

<!--# config timefmt="%Y-%m-%d %H:%M:%S" -->
<!--# echo var="date_gmt" -->

This particular time format was chosen because it is the format busybox’s date -s understands, and as a matter of fact, dd-wrt uses busybox for most of its shell commands.

On the router side, in Administration → Commands, the following one-liner will be in charge of checking the current year and calling our special URL if we’re still stuck in the 70’s:

[ "$(date +%Y)" = "1970" ] && date -s "$(wget -q -O- http://62.210.38.67/time/|grep '^2')"

Yeah, this is my real server IP, use it if you want to but do not assume it will work forever ;)

And that’s it, click on Save Startup in order to save your command to the router’s nvram so it is called at next restart.

Fetch RSVPs from Meetup for further processing

I’m running a couple of demos on how and why to use AWS Athena at a Meetup event tonight, here in my hometown of Valencia. Before you start arguing about AWS services being closed source, note that Athena is “just” a hosted version of Apache Hive, like pretty much every AWS service is a hosted version of a famous FOSS project.
One of the demos is about fetching the RSVP list and processing it from its JSON source into a basic \t-separated text file to be further read by Athena.
First thing is to get your Meetup API key in order to interact with Meetup’s API. Once done, you can proceed using, for example, curl:

$ api_key="b00bf00fefe1234567890"
$ event_id="1234567890"
$ meetup_url="https://api.meetup.com/2/rsvps"
$ curl -o- -s "${meetup_url}?key=${api_key}&event_id=${event_id}&order=name"

There you will receive, in a shiny JSON format, all the information about the event attendees.
Now a little bit of post-processing: as I said, for the purpose of my demo, I’d like to make this data more human-readable, so I’ll process the JSON output with the tool we hate to love, jq:

$ curl -o- -s "${meetup_url}?key=${api_key}&event_id=${event_id}&order=name" |
jq -r '.results[] | select(.response == "yes") | .member.name + "\t" + .member_photo.photo_link + "\t"+ .venue.country + "\t" + .venue.city'

In this perfectly clear set of jq filters (cough cough), we fetch the results[] section, select only the entries whose response value is set to yes (coming to the event) and extract the name, photo link, country and city.
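
For the Athena part of the demo, the resulting tab-separated output just has to land in an S3 bucket; appending a redirection to the pipeline above and uploading the file is enough (a sketch; the bucket name is made up):

$ curl -o- -s "${meetup_url}?key=${api_key}&event_id=${event_id}&order=name" |
  jq -r '.results[] | select(.response == "yes") | .member.name + "\t" + .member_photo.photo_link + "\t" + .venue.country + "\t" + .venue.city' > rsvps.tsv
$ aws s3 cp rsvps.tsv s3://my-athena-demo/meetup/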

There you go, extract and analyse your Meetup data the way you like!

Running Debian from a USB stick on a MacBook Pro

Yeah well, it happened. In my last post I was excited to get back to a BSD UNIX (FreeBSD) for my laptop. I thought I had fought the worst when rebuilding kernel and world in order to get a working DRM module for the Intel Iris 6100 bundled with this MacBook Pro generation. But I was wrong. None of the BSDs around had support for the BCM43602 chip that provides WiFi to the laptop. What’s the point of a laptop without WiFi…

So I turned my back on FreeBSD again and, as usual, gave Debian GNU/Linux a shot.

I won’t go through the installation process, you all know it very well, and at the end of the day the only issue is, as often, suspend and resume, which seems to have been broken since kernel 4.9 and later.

As in my last post, I’ll only point out how to make a USB stick act as a bootable device used as an additional hard drive with a MacBook Pro.
The puzzle is again EFI and how to prepare the target partitions so that the Mac displays the USB stick as a bootable choice. The first thing to do is to prepare the USB drive with a first partition that will hold the EFI data. Using gparted, I created a vfat partition of 512MB, which seems to be the recommended size.


EFI partition

And pick the boot and esp flags:


boot flags
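
If you prefer the command line over gparted, something along these lines should produce the same layout (a sketch; /dev/sdc is assumed to be the USB stick, double-check before wiping anything):

# parted /dev/sdc mklabel gpt
# parted /dev/sdc mkpart ESP fat32 1MiB 513MiB
# parted /dev/sdc set 1 boot on
# parted /dev/sdc set 1 esp on
# mkfs.vfat -F 32 /dev/sdc1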

Now assuming the machine you’re preparing the key on is an Ubuntu or the like (I’m using Linux Mint), install the grub-efi-amd64-signed package, create an efi/boot directory at the root of the key, and copy the EFI loader provided by the previously installed package:

# apt-get install grub-efi-amd64-signed
# mount -t vfat /dev/sdc1 /mnt
# mkdir -p /mnt/efi/boot
# cp /usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed /mnt/efi/boot/bootx64.efi

Now the trickiest part: this grub EFI loader expects its grub.cfg to reside in an ubuntu directory inside the efi directory, because it has been built on an Ubuntu system and that is the value of its prefix parameter.

# mkdir /mnt/efi/ubuntu
# cat >/mnt/efi/ubuntu/grub.cfg<<EOF
timeout=20
default=0

menuentry "Debian MBP" {
    root=hd0,2
    linux /vmlinuz root=/dev/sdb2
    initrd /initrd.img
}
EOF

And there you go: simply install Debian using kvm or any virtualization system on the second partition, formatted as plain ext4, and you’re set. Don’t worry about the installer complaining there’s no boot or swap partition, you definitely don’t want to swap on your USB key.
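
The virtual machine trick is the same one used for FreeBSD below; a sketch, with whatever Debian 9 installer image you have at hand:

$ sudo kvm -hda /dev/sdc -cdrom debian-9.5.0-amd64-netinst.iso -boot d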


Debian 9 on the MacBook Pro

Running FreeBSD from a USB stick on a MacBook Pro

It is possible to run FreeBSD on a MacBook Pro from a USB drive.
To achieve this, we will first prepare the USB drive from a GNU/Linux machine and make it UEFI-friendly:

# apt-get install parted
# parted /dev/sdc
(parted) mklabel gpt
(parted) mkpart ESP fat32 1MiB 513MiB
(parted) set 1 boot on
(parted) quit

From there, install FreeBSD as you would, for example using the kvm virtual machine hypervisor on the GNU/Linux machine. Answer “yes” when the installer suggests creating a freebsd-boot partition.

$ sudo kvm -hda /dev/sdc -cdrom FreeBSD-11.1-RELEASE-amd64-disc1.iso -boot d

Before exiting the installer, be sure to mount the freebsd-ufs partition and modify /mnt/etc/fstab so it reflects the actual USB drive and not the emulated virtual disk. For me it contains the following:

# Device      Mountpoint  FStype  Options  Dump  Pass#
/dev/da0p3    /           ufs     rw       1     1

Lastly, fetch an EFI loader, for example from there, and dump it to the EFI partition of the pen drive:

# dd if=boot1.efifat of=/dev/sdc1

There we go: insert the USB stick in the MacBook, power it up and hold the left-Alt / option key pressed. You should be given the option to boot from the newly created device.


FreeMBP

Cash monitoring

I’m kind of back in the mining arena. Like everyone else nowadays, I’m mining Ethereum with a couple of R9 290 & 290X graphics cards I bought second-hand.
So far everything works as intended, but as a proper control freak, I need to know what’s happening in real time: what’s my firepower, how the mining is doing, etc…
Like many, I use a mining pool, ethermine to be precise, and those guys had the good taste of exposing a JSON API.
Using collectd-python capabilities, I was able to write a short python script that feeds:

  • current hashrate
  • USD per minute
  • unpaid ETH balance on the pool

to an InfluxDB database, which in turn is queried by a grafana server in order to provide this kind of graph:

ETH on grafana
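
For a quick manual check outside collectd, the same numbers can be pulled with curl and jq; a sketch based on the ethermine API as I remember it (the endpoint path and field names may have changed, and the wallet address is a placeholder):

$ wallet="0x1234567890abcdef1234567890abcdef12345678"
$ curl -s "https://api.ethermine.org/miner/${wallet}/currentStats" |
  jq -r '.data | "\(.currentHashrate) H/s, \(.usdPerMin) USD/min, \(.unpaid) unpaid (wei)"'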

The script itself is available as a GitHub gist, feel free to use and modify it.

Score!

This happened:

Congratulations! You have successfully completed the AWS Certified Solutions Architect - Professional exam and you are now AWS Certified.

[...]

Overall Score: 90%

Topic Level Scoring:
1.0 High Availability and Business Continuity: 100%
2.0 Costing: 100%
3.0 Deployment Management: 85%
4.0 Network Design: 71%
5.0 Data Storage: 90%
6.0 Security: 85%
7.0 Scalability & Elasticity: 100%
8.0 Cloud Migration & Hybrid Architecture: 85%

Not bad

Launch the AWS Console from the CLI or a mobile phone

At ${DAYJOB} I happen to manipulate quite a few AWS accounts for different customers, and I find it really annoying to log out from one web console to log into a new one, with the right credentials, account IDs and MFA.

Here you can read a good blog post on how to enable cross-account access for third parties and use a basic script to open a web browser to switch from one account to the other.
I liked this idea so I pushed it a bit further and wrote this small piece of code which allows you not only to switch accounts, but also to simply open any AWS account’s console from the command line.

Tips to remember:

  • The cross account creation process is easier than it seems
    • Create a dedicated cross account access role on the target
    • Take note of the created role ARN
    • On the source, allow the user to access the created role ARN
  • There’s no real mystery about this ExternalId thing, it’s just a password really, and it is read from the URL the client passes; echo $((${RANDOM} * 256)) will do.
  • You can AssumeRole into your own local account by simply creating a cross account role with the local account id
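
To illustrate the last two points, assuming the role by hand with the AWS CLI looks like this (a sketch; account id, role name and external id are obviously made up):

$ aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/cross-account-access \
    --role-session-name kriskross-test \
    --external-id 31337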

Update

Well, I pushed it further. Kriskross can now be launched as a tiny web service, so you can just copy & paste from your mobile MFA application directly into the mobile browser and avoid typos; the micro web server will launch the corresponding AWS session on your desktop.