Golang interfaces, a pragmatic explanation for the programmer

I’m still in the process of learning Go the right way. Yes, I have already written a few projects in the Go language (here and here), but I like to understand the real meaning of the techniques I use.
Interfaces are said to be one of Go’s most amazing features. Here’s what the official Go documentation has to say about them:

Interfaces in Go provide a way to specify the behavior of an object: if something can do this, then it can be used here. We’ve seen a couple of simple examples already; custom printers can be implemented by a String method while Fprintf can generate output to anything with a Write method. Interfaces with only one or two methods are common in Go code, and are usually given a name derived from the method, such as io.Writer for something that implements Write.

Am I just stupid? Did you actually understand this at first sight? I didn’t. Maybe my English is not good enough. Let’s try another one:

An interface type is defined by a set of methods. A value of interface type can hold any value that implements those methods.

Ok… this one maybe?

An interface type is defined as a set of method signatures. A value of interface type can hold any value that implements those methods.

To be honest, I kind of got it, but why does every tutorial need to back a hardly understandable definition with examples that feel absolutely unnatural to us programmers? An Animal interface with Dog and Cat types? Singer? Humans? Is that the kind of program you write?

So I finally came up with a “real life” example that helped me understand both of the interface’s capabilities:

  • An interface can be used for genericity (the second capability will come later): the method signatures declared in an interface type map to methods on concrete types, so one single name performs the same kind of operation using different techniques. I will use an example that we might all understand:

We programmers often have to deal with format conversions, e.g. reading the content of a file and then extracting its actual data. Let’s imagine you’re working on a project which chose to switch from XML configuration files to the JSON format (yay!). You might want to write a generic call to handle both situations the same way, and this is where an interface can be very convenient. Let’s have a look at what it might look like:

First, let’s declare the interface type itself:

type Map interface {
	Unmarsh() map[string][]int
}

Yeah, there’s no need for the func keyword here: inside an interface declaration, method signatures are the only thing that can appear. So we declare the Unmarsh() method, which will -for now- return a map suited for this kind of data: {"foo": [1, 2, 3]}

Then we declare two concrete types:

type Json struct {
	input []byte
}

type Xml struct {
	input []byte
}

Let’s create a dummy structure to receive the decoded XML; this is not related to the interface explanation:

type Xmlout struct {
	Name string `xml:"name,attr"`
	Item []int  `xml:"item"`
}

Now the real deal: we will write two methods, one for each type, Json and Xml.
But wait, something won’t behave as we would like: in our project, the json and xml Unmarshal functions don’t act the same way, json.Unmarshal() fills a map, while xml.Unmarshal() fills a struct. This means our “generic” Unmarsh() call can’t return the same type of data.
Actually, json.Unmarshal() knows how to fill a struct too, but for the sake of this demonstration, let’s just assume we decided our project would be far simpler using a map.
I’ll use this scenario to demonstrate the second useful capability of interfaces:

  • When it declares no methods, the empty interface interface{} can be used as an untyped, generic data receiver, since every type satisfies it. I like to see it as a form of C’s void *.

And here’s how it can be simply used:

type Map interface {
	Unmarsh() interface{}
}

That’s right, simply replace the return type with an empty interface and the method can now return whatever type we want.

As we’re no longer bothered by the return type, we can write both Unmarsh() methods:

func (js Json) Unmarsh() interface{} {
	m := make(map[string][]int)
	json.Unmarshal(js.input, &m)
	return m
}

func (x Xml) Unmarsh() interface{} {
	var xo Xmlout
	xml.Unmarshal(x.input, &xo)
	return xo
}

As you can see, those two methods use different techniques to unmarshal the data located in the input field of their struct; nevertheless, we will now see how this call can be leveraged:

func main() {

	var t Map

	t = Json{[]byte(`{"x": [1, 2, 3]}`)}

	fmt.Println(t.Unmarsh())

	t = Xml{[]byte(`
<list name="x">
<item>1</item>
<item>2</item>
<item>3</item>
</list>
`)}

	fmt.Println(t.Unmarsh())
}

The trick here is that the receiving variable t is of the interface type Map; the interface then transparently does its magic and picks the right struct and associated method based on the concrete type assigned to it. Here I found a pretty good explanation of how such wizardry is implemented.

Witness the output of this example:

map[x:[1 2 3]]
{x [1 2 3]}

That’s right, one call to rule them all. Now obviously this is not the final result you would want to see, and you’d probably transform the latter into a map pretty easily, but hopefully you now understand both usages of Go’s interfaces!
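As a side note, since Unmarsh() now returns an interface{}, the caller has to recover the concrete type before doing anything more useful than printing it. Here is a minimal sketch of how that could look with a type switch (assuming the same imports as above; converting Xmlout into a map here is my own illustration, not part of the original example):

switch v := t.Unmarsh().(type) {
case map[string][]int:
	// the Json case: already the map we want
	fmt.Println(v)
case Xmlout:
	// the Xml case: build the same map shape from the struct
	fmt.Println(map[string][]int{v.Name: v.Item})
}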

An Elasticsearch from the past

Here’s a procedure I came up with in order to migrate an elasticsearch 1.1 database to version 6 (actually 6.4 but probably any 6.x version).

  1. Fire up a temporary elasticsearch version 1.1

Fetch the tar.gz version from https://www.elastic.co/downloads/past-releases/filebeat-1-1-2 and untar it.

Use the following basic configuration file

$ egrep -v '^[[:space:]]*(#|$)' ~/tmp/elasticsearch-1.1.2/config/elasticsearch.yml 
http.port: 9202
transport.tcp.port: 9302
path.conf: /home/imil/tmp/elasticsearch-1.1.2/config
path.data: /var/db/elasticsearch

Note that I changed the standard ports to $((standard_port + 2)).

From the untarred directory, launch elasticsearch:

$ ES_HOME=$(pwd) ES_INCLUDE=$(pwd)/bin/elasticsearch.in.sh bin/elasticsearch -p ./es.pid

Check that it’s working correctly by listing indexes

$ curl -X GET "localhost:9202/_cat/indices?v"

Next, as pointed out by the documentation, we need to create the indexes, types and mappings in the new database. In order to do so, we will need the previous mappings

$ curl -X GET "localhost:9202/rhonrhon/_mapping/"|json_pp > rhonrhon.mapping

In my example, the original index (version 1.1) is called rhonrhon

{
  "mappings" : {
    "gcutest_infos" : {
      "properties" : {
        "date" : {
          "type" : "date",
          "format" : "dateOptionalTime"
        },
        "topic" : {
          "type" : "text"
        },
        "users" : {
          "type" : "text"
        },
        "ops" : {
          "type" : "text"
        },
        "channel" : {
          "type" : "text"
        }
      }
    },
    "gcu_infos" : {
      "properties" : {
        "topic" : {
          "type" : "text"
        },
        "ops" : {
          "type" : "text"
        },
        "users" : {
          "type" : "text"
        },
        "channel" : {
          "type" : "text"
        },
        "date" : {
          "type" : "date",
          "format" : "dateOptionalTime"
        }
      }
    },

    ...

  }
}

As you can see, this index had multiple types in it, which was perfectly legit in versions < 5, but it turns out elasticsearch is removing mapping types and, moreover, now only supports one type per index:

Indices created in Elasticsearch 6.0.0 or later may only contain a single mapping type. Indices created in 5.x with multiple mapping types will continue to function as before in Elasticsearch 6.x. Mapping types will be completely removed in Elasticsearch 7.0.0.

It was therefore mandatory to split our previous mapping by type:

{
  "mappings" : {
    "gcu_infos" : {
      "properties" : {
        "topic" : {
          "type" : "text"
        },
        "ops" : {
          "type" : "text"
        },
        "users" : {
          "type" : "text"
        },
        "channel" : {
          "type" : "text"
        },
        "date" : {
          "type" : "date",
          "format" : "dateOptionalTime"
        }
      }
    }
  }
}

  2. Now fire up the target elasticsearch 6 service as you normally would

Note that according to the official migration guide, you should set refresh_interval to -1 and number_of_replicas to 0 for faster reindexing. I didn’t do it; everything went well but took a couple of minutes.
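If you do want to follow the guide on that point, something along these lines should do it (the index name is just an example, and the settings should be reverted to sensible values once reindexing is done):

$ curl -XPUT http://localhost:9200/gcu_infos/_settings -H 'Content-Type: application/json' \
    -d '{"index": {"refresh_interval": "-1", "number_of_replicas": 0}}'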

Create the mappings for each type / index in the new elasticsearch database

$ curl -XPUT http://localhost:9200/gcutest_infos -H 'Content-Type: application/json' -d "$(cat gcutest_infos.mapping)"
$ curl -XPUT http://localhost:9200/gcu_infos -H 'Content-Type: application/json' -d "$(cat gcu_infos.mapping)"
$ ...

curl -X GET "localhost:9200/_cat/indices?v" should display one index for each type.

Now, in order to start the actual synchronization, we must tell the new cluster where to reindex from and to:

{
  "source": {
    "remote": {
      "host": "http://localhost:9202"
    },
    "index": "rhonrhon",
    "type": "gcu_infos"
  },
  "dest": {
    "index": "gcu_infos"
  }
}
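One possible catch worth mentioning: reindexing from a remote cluster requires the source host to be whitelisted in the destination cluster’s elasticsearch.yml, something like the following (adapt the port to your setup):

reindex.remote.whitelist: "localhost:9202"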

And finally trigger the migration by reindexing every index / type

$ curl -XPOST http://localhost:9200/_reindex -H 'Content-Type: application/json' -d "$(cat reindex_gcu_infos.json)"
$ curl -XPOST http://localhost:9200/_reindex -H 'Content-Type: application/json' -d "$(cat reindex_gcutest_infos.json)"
...

There we go!

$ curl -X GET "localhost:9200/_cat/indices?v"
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open gcu tr_rFfjGRDKjVtrDHK46fg 5 1 6083599 0 973.2mb 973.2mb
yellow open gcu_infos ul7-xX8AT4ugckLn_8a-gQ 5 1 87161 0 57.7mb 57.7mb
yellow open gcutest_infos n2OE75poTV-2gDms1Eteaw 5 1 43644 0 3.9mb 3.9mb
yellow open gcutest -rTMi8ZFRNicQfvvA9FDWw 5 1 931 0 212.5kb 212.5kb

See you at the next backward compatibility breakage!

OpenVPN routes dynamic NATting

Assume the following scenario: your {Open,Free}BSD pf-enabled gateway (yes, I know what’s missing and it’s a pity, I am well aware of it) connects to an OpenVPN server. This server pushes a couple of routes to your gateway that you’d like to be able to reach from within your own private network. As the routers on the other end don’t have routes back to your network(s), NAT has to be configured; but let’s also assume those routes are subject to change and that there are more than a couple of them, so some kind of dynamic rule adding should be considered.

Well, luckily, OpenVPN’s ability to call third-party scripts plus pf’s tables are a perfect match for such a task.

First up, we’ll need a script which will be passed the routes received by OpenVPN:

#!/bin/sh

PATH=/sbin:/usr/bin:/usr/local/bin

if [ "$#" -lt 1 ]; then
	echo "usage: $0 <table> [flush]"
	exit 1
fi

if [ "${2}" = "flush" ]; then
	pfctl -t ${1} -T flush
	exit 0
fi

i=1
while :
do
	eval routenet=\$route_network_${i}
	[ -z "${routenet}" ] && exit 0
	eval routemsk=\$route_netmask_${i}

	network=$(ipcalc -n ${routenet} ${routemsk}|awk '/^Network/ {print $2}')

	pfctl -t ${1} -T add ${network}

	i=$(($i + 1))
done

This script uses the route_network_{n} and route_netmask_{n} environment variables that OpenVPN exports when running its route-up command:

# openvpn configuration file
route-up "/home/imil/bin/routeup.sh myroutes"
down "/home/imil/bin/routeup.sh myroutes flush"

Then, on pf’s end, only the following rules are necessary:

tap_if="tap0"

table <myroutes> persist

nat on $tap_if from <mynet_allowed> to <myroutes> -> ($tap_if)
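Note that the <mynet_allowed> table is not defined above; I’m assuming here that you populate it with the private network(s) allowed to reach the VPN routes, for instance:

table <mynet_allowed> persist { 192.168.1.0/24 }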

There you go: once your tunnel is up and the routes are pushed, the script will add them to the myroutes pf table, and pf will NAT traffic to those destinations using our tunnel endpoint.

From GitLab CI to Docker Hub

With all the noise around those topics, I would have imagined this one had been covered thousands of times, yet I did not find a single complete resource on a subject I consider a basic building block: pushing Docker images from GitLab CI to the Docker Hub registry.

There’s actually an open issue on Docker’s GitHub that has been sitting there for 3 years, and it really feels more like a political / strategic / commercial issue than a technical one. Point being, there’s no straightforward integration between GitLab.com and Docker Hub.

I really don’t like the method used by everyone in the same situation, but I just can’t find any other one: you basically have to create variables within GitLab.com’s CI settings holding your Docker Hub login and password. It looks like this:

CI / CD Variables

One more catch: $CI_REGISTRY is the value found in docker info|grep -i registry, here being https://index.docker.io/v1/. Thanks to this post for the hint.

Once the credentials and endpoint are set, you can write a .gitlab-ci.yml file that looks like this:

image: docker:latest

services:
  - docker:dind

stages:
  - build

variables:
  IMAGE_TAG: imil/myimage:$CI_COMMIT_TAG

before_script:
  - docker login -u $CI_REGISTRY_USER -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build:
  stage: build
  script:
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG

I didn’t find GitLab’s documentation on using their own CI very impressive, so here are some pointers:

About this $IMAGE_TAG variable: you could replace myimage with $CI_PROJECT_NAME if the project has the same name on Docker Hub and GitLab (which is a good idea).
$CI_COMMIT_TAG is set when an actual git tag is pushed, so the pipeline is triggered this way:

$ git commit -m "things changed" foo .gitlab-ci.yml
$ git tag 0.27
$ git push origin 0.27

This will summon the CI / CD pipeline with a $CI_COMMIT_TAG value of 0.27, hopefully build the Docker image and push it to Docker Hub.

Do you have another solution? Please tell me in the comments!

Kubernetes under my desk

I’ve been diving into Kubernetes for a couple of months now. Discovering the possibilities and the philosophy behind the hype definitely changed my mind. Yes, it is huge (in every sense ;) ) and it does change the way we, ex-sysops / ops / sysadmins, do our work. Not tomorrow, not soon, now.

I’ve had my hands on various managed kubernetes clusters like GKE (Google Kubernetes Engine), EKS (Amazon Elastic Kubernetes Service) or the more humble minikube, but I’m not happy when I don’t understand what a technology is made of. So I googled and googled (yeah, sorry Qwant and duckduckgo, I needed actual answers), until I found many incredibly useful resources.

Finally, after hours of reading, I decided to fire up my own k8s cluster “on premise”, or better said, under my desk ;).
With some hardware I had here and there, I built a good old Debian GNU/Linux 9 machine which will be my trusty home datacenter.
There’s a shitton of resources on how to build a k8s cluster, but many pointers and the experience of friends like @kedare and @Jean-Alexis put Kubespray at the top of the list.
Long story short, Kubespray is a huge Ansible playbook with all the bits and pieces necessary to build a highly available, up-to-date kubernetes cluster. It also comes with a Vagrantfile which helps with the creation of the needed nodes.

By default, Vagrant uses VirtualBox as its virtual machine provider; using kvm makes it faster and better integrated with our Debian system.
Here is a great tutorial on setting up such a combo.

Contrary to official Kubespray documentation guidelines, I used virtualenv to install python bits, which is cleaner.
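For the record, a minimal sketch of that virtualenv setup, assuming you run it from the Kubespray checkout (which ships a requirements.txt at its root) and don’t mind the venv path I picked:

$ virtualenv ~/venvs/kubespray
$ . ~/venvs/kubespray/bin/activate
$ pip install -r requirements.txt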

Some notes about how to run or tune Vagrantfile:

  • You can’t have fewer than 3 nodes; the playbook will fail at some point otherwise
  • Each node needs at least 1.5G of RAM
  • CoreOS has no libvirt vagrant box, stick with ubuntu1804
  • When the cluster is created with vagrant, the ansible inventory is available at inventory/sample/vagrant_ansible_inventory
  • Even if disable_swap is set to true in roles/kubespray-defaults/defaults/main.yaml, swap remains active, preventing kubelet from starting. journalctl showed the following:
Oct 15 06:17:36 k8s-01 kubelet[2140]: F1015 06:17:36.672113    2140 server.go:262] failed to run
Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
/proc/swaps contained: [Filename   Type       Size     Used  Priority
                        /dev/sda2  partition  1999868  0     -2]

Which seems to be caused by these earlier warnings:

Oct 15 06:17:47 k8s-01 kubelet[2369]: Flag --fail-swap-on has been deprecated,
This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.

Simply fix this by executing:

$ ansible -i inventory/sample/vagrant_ansible_inventory kube-node -a "sudo swapoff -a"
$ ansible -i inventory/sample/vagrant_ansible_inventory kube-node -a "sudo systemctl restart kubelet"
$ ansible -i inventory/sample/vagrant_ansible_inventory kube-node -a "sudo sed -i'.bak' '/swap/d' /etc/fstab"

Enable localhost kubectl in inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml

kubeconfig_localhost: true
kubectl_localhost: true

This will populate the inventory/sample/artifacts/ directory with the kubectl binary and a proper admin.conf file in order to use kubectl from a client able to reach the cluster. Usually, you’d copy it like this:

$ mkdir -p $HOME/.kube
$ cp inventory/sample/artifacts/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

From here, you’ll be able to use kubectl on the actual host.
You may want to connect from another host which has no direct route to the cluster; in that case, simply use kubectl as a proxy:

$ kubectl proxy --address='0.0.0.0' --accept-hosts='.*'

And voila:

$ kubectl get nodes                               
NAME STATUS ROLES AGE VERSION
k8s-01 Ready master,node 1d v1.12.1
k8s-02 Ready master,node 1d v1.12.1
k8s-03 Ready node 1d v1.12.1

PA4

(yeah, keyboard needed, mandatory F2 at boot because of missing fan…)

MATE desktop fixes (updated)

Last week I upgraded my Linux Mint 18 MATE desktop to 18.3. With the massive progress GNU/Linux has made on the desktop, this kind of upgrade is usually a simple task and no hassle is to be expected. Except this time, where I ran into several GUI-related annoyances.

1. MATE panel transparency

I recently became addicted to /r/unixporn and made myself a shiny modern desktop made of MATE and rofi. This desktop uses the Arc-Darker theme, which used to work nicely with mate-panel version 1.14 but messes up transparency with version 1.18, the one shipped with Mint 18.3.

I fixed this by changing:

background-color: #2b2e37;

to

background-color: transparent;

In the

.gnome-panel-menu-bar.menubar,
PanelApplet > GtkMenuBar.menubar,
PanelToplevel,
PanelWidget,
PanelAppletFrame,
PanelApplet {

section of the ~/.themes/Arc-Darker/gtk-3.0/gtk.css file.

2. Applet padding

To fix the latter, I first switched to the latest version of the theme, in which the new maintainer changed various small details.
Tastes differ, we all know that, and the new Arc-Darker author felt that bigger padding between panel applets was sexier. I have a different opinion. This is the value to change in the same ~/.themes/Arc-Darker/gtk-3.0/gtk.css file:

-NaTrayApplet-icon-padding: 4;

3. Weird rofi behaviour

I opened an issue about this one: weird compositing glitches like rofi not appearing until the mouse was moved (and thus the display refreshed), or sometimes appearing only partially, or a translucent gnome-terminal not refreshing its content.

I suppose this issue is more about my graphics driver (nvidia-384); still, I found that running marco (the MATE window manager) with compositing disabled and using compton instead fixes it. This configuration is really easy to switch to, as it is an option in the Preferences → Windows → Desktop Settings → Window Manager drop-down menu.

4. Wrong wallpaper resize

I happen to have 4 monitors. Don’t ask, I like it like that: I see everything I need to see in one small head movement. Maybe someday I’ll switch to a 4k monitor, but right now this setup is about 4 times cheaper for a 4720x3000 resolution.

Now, about the bug: from time to time, the wallpaper gets zoomed as if it were spread across all screens, except it is zoomed only on the main one. Ugly stuff.
This bug happened randomly, often when opening the file manager, caja, and as a matter of fact, I found in the Arch wiki that it is indeed caja which actually manages the desktop, and thus the background. Also on their wiki, I found how to disable this feature:

$ gsettings set org.mate.background show-desktop-icons false
$ killall caja # Caja will be restarted by session manager

And then I set my wallpaper using the renowned feh command:

$ feh --bg-fill ~/Pictures/wallpapers/hex-abstract-material-design-ad-3840x2160.jpg

And voila, no more messing with my wallpaper.

I’ll try to keep this list updated if I find anything else.

Webcam streaming with ffmpeg

I’m a bit of a stressed person, and when I’m not home, I like to have a webcam streaming what’s going on, mainly to see how my dog is doing. At my main house, I use the fabulous motion project, which has a ton of cool features like recording images and videos when there’s movement, playing a sound, handling many cameras and so on.

As I said before, I acquired an apartment destined for rental, and it has really poor Internet access: being located on the mountainside, only weak ADSL was available.
Another point: I own a MacBook Pro for music purposes, and when I come crash here, this is the computer I take with me so I can compose if inspiration comes ;) So when I leave the apartment, this will be the machine streaming what’s happening in the house.

First thing: while motion builds on the Mac, it does not find -or at least I didn’t find how to make it find- any webcam device. Webcams are exposed via avfoundation, and motion doesn’t seem to handle that framework.
After struggling a bit with the incredibly over-complicated vlc command line, I fell back to ffmpeg, whose package is available through pkgin.

Here I found a gist with a working example, except I had to considerably lower the parameters in order to meet my low bandwidth constraints. Here’s a working, bandwidth-friendly example of an ffserver configuration:

HTTPPort 8090
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 10
MaxClients 10
MaxBandWidth 1000
CustomLog -

<Feed camera.ffm>
	File /tmp/camera.ffm
	FileMaxSize 5000K
</Feed>

<Stream camera.mpg>
	Feed camera.ffm
	Format mpjpeg
	VideoFrameRate 2
	VideoIntraOnly
	VideoSize 352x288
	NoAudio
	Strict -1
</Stream>

Start the server using ffserver -f ffserver.conf.

In order to stream the webcam to this server, here’s the command I use:

$ ffmpeg -f avfoundation -framerate 5 -video_size cif -i "0" http://localhost:8090/camera.ffm

Now, the stream is visible at http://192.168.4.12:8090. The command line would be very similar under GNU/Linux except you’d use -f video4linux2 and -i /dev/video0.
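For reference, the GNU/Linux counterpart would look something like this (untested sketch, assuming the same ffserver runs locally and your webcam is the first video4linux2 device):

$ ffmpeg -f video4linux2 -framerate 5 -video_size cif -i /dev/video0 http://localhost:8090/camera.ffm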

Obviously, the goal here is to watch the stream when I’m not home, with my mobile phone web browser.
As I mentioned in my previous post, I use a wrt54g router running dd-wrt which is connected to an OpenVPN hub I can reach from my network. dd-wrt actually has a NAT section, but unfortunately, it only maps internal IPs to the public interface, not taking into account any other type of network.
Nevertheless, it is possible to add very custom iptables rules using the Administration → Commands → Save firewall button. So I added the following rule:

# redirects tunnel interface (tun1), port 8090 (ffmpeg) to the Macbook
iptables -t nat -A PREROUTING -i tun1 -p tcp --dport 8090 -j DNAT --to-destination 192.168.4.12

This requires ensuring the MacBook has a static DHCP address.

Of course you probably don’t want this window into your house wide open to the public, and will want to hide it behind an nginx reverse proxy asking for authentication:

location /webcam2/ {
	auth_basic "Who are you?";
	auth_basic_user_file /usr/pkg/etc/nginx/htpasswd;
	proxy_pass http://10.0.1.20:8090/; # OpenVPN peer address
	proxy_redirect off;
	proxy_set_header X-Forwarded-For $remote_addr;
}

And voila! A poor man’s video surveillance system in place ;)

date over HTTP

I always manage to get myself into weird issues… I have this (pretty old) wrt54g router that works well with dd-wrt v3.0-r34311 vpn release. This router is installed in an apartment intended for rental where I happen to crash every now and then. It connects to an OpenVPN hub of mine so I can monit it and be sure guests renting the apartment have working Internet access.

The apartment is located on a small mountain and electricity is not exactly stable: from time to time the power goes down and comes back up, and I noticed the OpenVPN link sometimes fails to reconnect.

After some debugging, I finally noticed that, for some reason, the enabled NTP feature sometimes does not fetch the current time, and so the router’s date is stuck at the epoch.

In order to be sure the access point gets the right time, I took a different approach: at boot, it will fetch the current time online and set the date accordingly.
I was surprised not to find any online website providing some kind of strftime REST API, so I finally decided to put something up myself.
nginx’s ssi module has interesting variables for this particular use, namely date_local and date_gmt. Here’s the related nginx configuration:

location /time {
	ssi on;
	alias /home/imil/www/time;
	index index.html;
}

index.html contains the following:

<!--# config timefmt="%Y-%m-%d %H:%M:%S" -->
<!--# echo var="date_gmt" -->

This particular time format was chosen because it is the format supported by busybox’s date -s, and as a matter of fact, dd-wrt uses busybox for most of its shell commands.

On the router side, in Administration → Commands, the following one-liner will be in charge of checking the current year and calling our special URL if we’re still stuck in the 70s:

[ "$(date +%Y)" = "1970" ] && date -s "$(wget -q -O- http://62.210.38.67/time/|grep '^2')"

Yeah, this is my real server IP, use it if you want to but do not assume it will work forever ;)

And that’s it, click on Save Startup in order to save your command to the router’s nvram so it is called at next restart.

Fetch RSVPs from Meetup for further processing

I’m running a couple of demos on how and why to use AWS Athena at a Meetup event tonight here in my hometown of Valencia. Before you start arguing about AWS services being closed source, note that Athena is “just” a hosted version of the open source Presto query engine, like pretty much every AWS service is a hosted version of a famous FOSS project.
One of the demos is about fetching the RSVP list and turning it from a JSON source into a basic tab-separated text file to be further read by Athena.
The first thing to do is to get your Meetup API key in order to interact with Meetup’s API. Once that’s done, you can proceed using, for example, curl:

$ api_key="b00bf00fefe1234567890"
$ event_id="1234567890"
$ meetup_url="https://api.meetup.com/2/rsvps"
$ curl -o- -s "${meetup_url}?key=${api_key}&event_id=${event_id}&order=name"

There you will receive, in shiny JSON format, all the information about the event attendees.
Now for a little bit of post-processing: as I said, for the purpose of my demo, I’d like to make this data more human-readable, so I’ll process the JSON output with the tool we hate to love, jq:

$ curl -o- -s "${meetup_url}?key=${api_key}&event_id=${event_id}&order=name" |
jq -r '.results[] | select(.response == "yes") | .member.name + "\t" + .member_photo.photo_link + "\t"+ .venue.country + "\t" + .venue.city'

In this perfectly clear set of jq filters (cough cough), we fetch the results[] section, select only the entries whose response value is set to yes (coming to the event) and extract the name, photo link, country and city.

There you go, extract and analyse your Meetup data the way you like!

Running Debian from an USB stick on a MacBook Pro

Yeah well, it happened. In my last post I was excited to get back to a BSD UNIX (FreeBSD) on my laptop. I thought I had fought the worst when rebuilding kernel and world in order to get a working DRM module for the Intel Iris 6100 bundled with this MacBook Pro generation. But I was wrong. None of the BSDs around has support for the BCM43602 chip that provides WiFi to this laptop. What’s the point of a laptop without WiFi…

So I turned my back on FreeBSD again and, as usual, gave Debian GNU/Linux a shot.

I won’t go through the installation process, you all know it very well, and at the end of the day the only issue is, as often, suspend and resume, which seems to have been broken since kernel 4.9 and later.

As in my last post, I’ll only point out how to make a USB stick act as a bootable device used as an additional hard drive with a MacBook Pro.
The puzzle is again EFI and how to prepare the target partitions so that the Mac displays the USB stick as a boot choice. The first thing to do is to prepare the USB drive with a first partition that will hold the EFI data. Using gparted, I created a 512MB vfat partition, which seems to be the recommended size.


EFI partition

And pick the boot and esp flags:


boot flags

Now assuming the machine you’re preparing the key on is an Ubuntu or the like (I’m using Linux Mint), install the grub-efi-amd64-signed package, create an efi/boot directory at the root of the key, and copy the EFI loader provided by the previously installed package:

# apt-get install grub-efi-amd64-signed
# mount -t vfat /dev/sdc1 /mnt
# mkdir -p /mnt/efi/boot
# cp /usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed /mnt/efi/boot/bootx64.efi

Now the trickiest part: this grub EFI loader expects its grub.cfg to reside in an ubuntu directory inside the efi directory, because it has been built on an Ubuntu system and that is the value of its prefix parameter.

# mkdir /mnt/efi/ubuntu
# cat >/mnt/efi/ubuntu/grub.cfg<<EOF
timeout=20
default=0

menuentry "Debian MBP" {
	root=hd0,2
	linux /vmlinuz root=/dev/sdb2
	initrd /initrd.img
}
EOF

And there you go: simply install Debian, using kvm or any other virtualization system, onto the second partition formatted as plain ext4 and you’re set. Don’t worry about the installer complaining that there’s no boot or swap partition; you definitely don’t want to swap on your USB key.
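For the record, here’s a sketch of how that installation step could look with plain qemu/kvm; the device name and ISO filename are assumptions, so adjust them to your setup (and triple-check the device, it will be written to):

# qemu-system-x86_64 -enable-kvm -m 2048 \
    -cdrom debian-netinst.iso \
    -drive file=/dev/sdc,format=raw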


Debian 9 on the MacBook Pro