date over HTTP

I always manage to get myself into weird issues… I have this (pretty old) WRT54G router that works well with the dd-wrt v3.0-r34311 VPN release. The router is installed in an apartment intended for rental, where I happen to crash every now and then. It connects to an OpenVPN hub of mine so I can monitor it and make sure guests renting the apartment have working Internet access.

The apartment is located on a small mountain and electricity is not exactly stable: from time to time the power goes down and comes back up, and I noticed the OpenVPN link sometimes fails to reconnect.

After some debugging, I finally noticed that, for some reason, the enabled NTP feature sometimes fails to fetch the current time, leaving the router’s date stuck at the epoch.

In order to be sure the access point gets the right time, I took a different approach: at boot, it fetches the current time online and sets the date accordingly.
I was surprised not to find any online website providing some kind of strftime REST API, so I finally decided to put something up myself.
nginx’s SSI module has interesting variables for this particular use, namely date_local and date_gmt. Here’s the related nginx configuration:

location /time {
    ssi on;
    alias /home/imil/www/time;
    index index.html;
}

index.html contains the following:

<!--# config timefmt="%Y-%m-%d %H:%M:%S" -->
<!--# echo var="date_gmt" -->
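Once in place, a quick check should return a plain timestamp. Note the leading empty line produced by the SSI config directive, which explains the grep filter in the one-liner below (hostname and output here are illustrative):

$ curl -s http://example.org/time/

2017-08-25 09:30:00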

This particular time format was chosen because it is the format busybox’s date -s accepts, and as a matter of fact, dd-wrt uses busybox for most of its shell commands.
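For reference, busybox’s date is happy to set the clock from exactly that format (output shown is illustrative):

# date -s "2009-02-13 23:31:30"
Fri Feb 13 23:31:30 UTC 2009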

On the router side, in Administration → Commands, the following one-liner checks the current year and calls our special URL if we’re still stuck in the ’70s:

[ "$(date +%Y)" = "1970" ] && date -s "$(wget -q -O- http://62.210.38.67/time/|grep '^2')"

Yeah, this is my real server IP; use it if you want to, but do not assume it will work forever ;)

And that’s it, click on Save Startup to store the command in the router’s nvram so it runs at next restart.

Fetch RSVPs from Meetup for further processing

I’m running a couple of demos on how and why to use AWS Athena at a Meetup event tonight, here in my hometown of Valencia. Before you start arguing about AWS services being closed source, note that Athena is “just” a hosted version of Apache Hive, like pretty much every AWS service is a hosted version of some famous FOSS project.
One of the demos is about fetching the RSVP list and processing it from a JSON source into a basic tab-separated text file to be further read by Athena.
The first thing to do is to get your Meetup API key in order to interact with Meetup’s API. Once done, you can proceed using, for example, curl:

$ api_key="b00bf00fefe1234567890"
$ event_id="1234567890"
$ meetup_url="https://api.meetup.com/2/rsvps"
$ curl -o- -s "${meetup_url}?key=${api_key}&event_id=${event_id}&order=name"

There you will receive, in a shiny JSON format, all the information about the event attendees.
Now a little bit of post-processing: as I said, for the purpose of my demo, I’d like to make this data more human-readable, so I’ll process the JSON output with the tool we hate to love, jq:

$ curl -o- -s "${meetup_url}?key=${api_key}&event_id=${event_id}&order=name" |
    jq -r '.results[] | select(.response == "yes") | .member.name + "\t" + .member_photo.photo_link + "\t" + .venue.country + "\t" + .venue.city'

In this perfectly clear set of jq filters (cough cough), we fetch the results[] section, select only the entries whose response value is set to yes (coming to the event), and extract the name, photo link, country and city.
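To complete the demo pipeline, the same command can write a tab-separated file and push it to S3, where Athena can query it; the bucket and key below are examples, not the ones from my demo:

$ curl -o- -s "${meetup_url}?key=${api_key}&event_id=${event_id}&order=name" |
    jq -r '.results[] | select(.response == "yes") | .member.name + "\t" + .venue.city' > rsvps.tsv
$ aws s3 cp rsvps.tsv s3://my-athena-demo/rsvps/rsvps.tsv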

There you go, extract and analyse your Meetup data the way you like!

Running Debian from a USB stick on a MacBook Pro

Yeah well, it happened. In my last post I was excited to get back to a BSD UNIX (FreeBSD) for my laptop. I thought I had fought the worst when rebuilding kernel and world in order to have a working DRM module for the Intel Iris 6100 bundled with this MacBook Pro generation, but I was wrong: none of the BSDs around had support for the BCM43602 chip that provides WiFi to the laptop. What’s the point of a laptop without WiFi…

So I turned my back on FreeBSD again and, as usual, gave Debian GNU/Linux a shot.

I won’t go through the installation process, you all know it very well; at the end of the day, the only issue is, as often, suspend and resume, which seems to have been broken since kernel 4.9 and later.

As in my last post, I’ll only point out how to make a USB stick act as a bootable device, used as an additional hard drive, on a MacBook Pro.
The puzzle is again EFI and how to prepare the target partitions so the Mac displays the USB stick as a bootable choice. The first thing to do is to prepare the USB drive with a first partition that will hold the EFI data. Using gparted I created a vfat partition of 512MB, which seems to be the recommended size.


EFI partition

Then set the boot and esp flags:


boot flags

Now, assuming the machine you’re preparing the key on runs Ubuntu or the like (I’m using Linux Mint), install the grub-efi-amd64-signed package, create an efi/boot directory at the root of the key, and copy in the EFI loader provided by the freshly installed package:

# apt-get install grub-efi-amd64-signed
# mount -t vfat /dev/sdc1 /mnt
# mkdir -p /mnt/efi/boot
# cp /usr/lib/grub/x86_64-efi-signed/grubx64.efi.signed /mnt/efi/boot/bootx64.efi

Now the trickiest part: this grub EFI loader expects grub.cfg to reside in an ubuntu directory inside the efi directory, because it was built on an Ubuntu system and that is the value of its prefix parameter.
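If you’re curious, the embedded prefix can usually be spotted in the binary itself; this check is an assumption on my part and the exact string may vary between package versions:

# strings /mnt/efi/boot/bootx64.efi | grep -i ubuntu
/EFI/ubuntu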

# mkdir /mnt/efi/ubuntu
# cat >/mnt/efi/ubuntu/grub.cfg<<EOF
timeout=20
default=0

menuentry "Debian MBP" {
    root=hd0,2
    linux /vmlinuz root=/dev/sdb2
    initrd /initrd.img
}
EOF

And there you go: simply install Debian on the second partition, formatted as plain ext4, using kvm or any other virtualization system, and you’re set. Don’t worry about the installer complaining there’s no boot or swap partition; you definitely don’t want to swap on your USB key.
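The kvm invocation is the same trick I describe in the FreeBSD post below; the ISO file name here is only an example, use whatever Debian 9 image you have at hand:

$ sudo kvm -hda /dev/sdc -cdrom debian-9.1.0-amd64-netinst.iso -boot d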


Debian 9 on the MacBook Pro

Running FreeBSD from a USB stick on a MacBook Pro

It is possible to run FreeBSD on a MacBook Pro from a USB drive.
To achieve this, we will first prepare the USB drive from a GNU/Linux machine and make it UEFI-friendly:

# apt-get install parted
# parted /dev/sdc
(parted) mklabel gpt
(parted) mkpart ESP fat32 1MiB 513MiB
(parted) set 1 boot on
(parted) quit

From there, install FreeBSD as you would, for example using the kvm virtual machine hypervisor on the GNU/Linux machine. Answer “yes” when the installer suggests creating a freebsd-boot partition.

$ sudo kvm -hda /dev/sdc -cdrom FreeBSD-11.1-RELEASE-amd64-disc1.iso -boot d

Before exiting the installer, be sure to mount the freebsd-ufs partition and modify /mnt/etc/fstab so it reflects the actual USB drive and not the emulated virtual disk. For me it contains the following:

# Device      Mountpoint  FStype  Options  Dump  Pass#
/dev/da0p3    /           ufs     rw       1     1
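To make that edit, drop to a shell from the installer’s final dialog and mount the partition; the device name is an assumption on my part (with kvm’s default IDE emulation the virtual disk shows up as ada0), so adjust it to what your installer reports:

# mount /dev/ada0p3 /mnt
# vi /mnt/etc/fstab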

Lastly, fetch the EFI boot image (boot1.efifat), for example from there, and dump it to the EFI partition of the pen drive:

# dd if=boot1.efifat of=/dev/sdc1

There we go: insert the USB stick in the MacBook, power it up and hold the left Alt / option key pressed. You should be offered the possibility to boot from the newly created device.


FreeMBP

Cash monitoring

I’m kind of back in the mining arena. Like everyone else nowadays, I’m mining Ethereum with a couple of R9 290 & 290X graphics cards I bought second-hand.
So far everything works as intended, but as a proper control freak, I need to know what’s happening in real time: what’s my firepower, how’s the mining doing, etc.
Like many, I use a mining pool, ethermine to be precise, and those guys had the good taste of exposing a JSON API.
Using collectd-python capabilities, I was able to write a short python script that feeds:

  • current hashrate
  • USD per minute
  • unpaid Ethereums on the pool

to an InfluxDB database, which in turn, is queried by a grafana server in order to provide this kind of graphs:

ETH on grafana

The script itself is available as a GitHub gist, feel free to use and modify it.
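If you just want to poke the pool by hand before wiring up collectd, the same three metrics can be fetched with curl and jq; the endpoint and field names below are my reading of ethermine’s public API, and the wallet address is a placeholder:

$ addr="0x0000000000000000000000000000000000000000"
$ curl -s "https://api.ethermine.org/miner/${addr}/currentStats" |
    jq -r '.data | "\(.currentHashrate)\t\(.usdPerMin)\t\(.unpaid)"'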

Score!

This happened:

Congratulations! You have successfully completed the AWS Certified Solutions Architect - Professional exam and you are now AWS Certified.

[...]

Overall Score: 90%

Topic Level Scoring:
1.0 High Availability and Business Continuity: 100%
2.0 Costing: 100%
3.0 Deployment Management: 85%
4.0 Network Design: 71%
5.0 Data Storage: 90%
6.0 Security: 85%
7.0 Scalability & Elasticity: 100%
8.0 Cloud Migration & Hybrid Architecture: 85%

Not bad

Launch the AWS Console from the CLI or a mobile phone

At ${DAYJOB} I happen to manipulate quite a few AWS accounts for different customers, and I find it really annoying to log out from one web console and into another, with the right credentials, account ids and MFA.

Here you can read a good blog post on how to enable cross-account access for third parties and use a basic script to open a web browser to switch from one account to the other.
I liked this idea, so I pushed it a bit further and wrote this small piece of code, which allows you not only to switch accounts, but also to simply open any AWS account from the command line.

Tips to remember:

  • The cross account creation process is easier than it seems
    • Create a dedicated cross account access role on the target
    • Take note of the created role ARN
    • On the source, allow the user to access the created role ARN
  • There’s no mystery about this ExternalId: it’s really just a password, read from the URL the client passes; echo $((${RANDOM} * 256)) will do.
  • You can assumeRole to your own local account by simply creating a cross account role with the local account id, as shown in the sketch below
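As a sketch of what happens under the hood, here is the equivalent AWS CLI call; every ARN, id and token below is a placeholder:

$ aws sts assume-role \
    --role-arn arn:aws:iam::123456789012:role/cross-account-admin \
    --role-session-name kriskross \
    --external-id 31337 \
    --serial-number arn:aws:iam::210987654321:mfa/imil \
    --token-code 123456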

Update

Well, I pushed it further. Kriskross can now be launched as a tiny web service, so you can copy & paste from your mobile MFA application directly into the mobile browser and thus avoid typos; the micro web server will launch the corresponding AWS session on your desktop.

Tricking bash HISTTIMEFORMAT

While trying to find a clean method to remove line numbers from the history command output, I found an interesting trick using the HISTTIMEFORMAT environment variable. Here’s what bash’s man page says:

HISTTIMEFORMAT
       If this variable is set and not null, its value is used as a
       format string for strftime(3) to print the time stamp associated
       with each history entry displayed by the history builtin. If
       this variable is set, time stamps are written to the history
       file so they may be preserved across shell sessions. This uses
       the history comment character to distinguish timestamps from
       other history lines.

But it turns out you can actually put pretty much anything in there, for example an ANSI escape sequence that returns the cursor to the start of the line (\r) and erases it (\e[K):

$ HISTTIMEFORMAT="$(echo -e '\r\e[K')"

There we go, no more line numbers:

$ history |tail -1
history |tail -1

Extract data-bits from your Jenkins jobs

Another quickie.

I read here about a cool trick to convert HTML entities to plain text:

alias htmldecode="perl -MHTML::Entities -pe 'decode_entities(\$_)'"

On a Debian-based system, this requires installing libhtml-parser-perl (apt-get install libhtml-parser-perl).
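A quick sanity check of the alias:

$ echo '&lt;foo&gt; &amp;&amp; &lt;bar&gt;' | htmldecode
<foo> && <bar>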
Why bother, you may ask? Well, because the (awful) Jenkins CLI outputs text-area contents as encoded HTML entities, and I like the idea of being able to test a failing packer template standalone, for example.

Finally, here’s the full use case:

$ java -jar jenkins-cli.jar -s http://127.0.0.1:8080/ get-job "packertest" --username foo --password bar | htmldecode

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders/>
  <publishers>
    <biz.neustar.jenkins.plugins.packer.PackerPublisher plugin="packer@1.4">
      <name>Default</name>
      <jsonTemplate></jsonTemplate>
      <jsonTemplateText>{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "eu-central-1",
    "source_ami": "ami-02724d1f",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "ami_name": "jenkins-packer (RED) {{timestamp}}",
    "vpc_id": "vpc-00000000",
    "subnet_id": "subnet-00000000",
    "security_group_id": "sg-00000000",
    "associate_public_ip_address": true
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "echo 'Acquire::ForceIPv4 \"true\";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4",
      "echo 'deb http://cloudfront.debian.net/debian jessie-backports main' |sudo tee /etc/apt/sources.list.d/backports.list",
      "sudo apt-get update",
      "sudo apt-get -t jessie-backports install -y ansible"
    ]
  }, {
    "type": "ansible-local",
    "playbook_file": "{{ user `test_yml` }}",
    "command": "PYTHONUNBUFFERED=1 ansible-playbook"
  }]
}</jsonTemplateText>
      <params>-color=false</params>
      <useDebug>false</useDebug>
      <changeDir></changeDir>
      <templateMode>text</templateMode>
      <fileEntries>
        <biz.neustar.jenkins.plugins.packer.PackerFileEntry>
          <varFileName>test_yml</varFileName>
          <contents>---

- hosts: all

  tasks:
    - name: install stuff
      apt: name=vim state=installed
      become: true</contents>
        </biz.neustar.jenkins.plugins.packer.PackerFileEntry>
      </fileEntries>
    </biz.neustar.jenkins.plugins.packer.PackerPublisher>
  </publishers>
  <buildWrappers/>
</project>

Ansible playbook with packer in Jenkins

Quick one.

While working on a build chain to register home-baked AMIs, I wanted to use the ansible-local packer provisioner to set up the instance with a very basic playbook. I needed to provide ansible with a playbook but didn’t immediately find how to achieve this within the Jenkins packer plugin. Turns out it’s tricky: in the JSON Template Text (or the template file), declare the playbook_file like this:

[{
  "type": "ansible-local",
  "playbook_file": "{{ user `test_yml` }}",
  "command": "PYTHONUNBUFFERED=1 ansible-playbook"
}]

Then, in the File Entries field, the Variable Name must be test_yml and the File Contents filled with the playbook.

Jenkins packer