Extract data-bits from your Jenkins jobs

Another quickie.

I read here about this cool trick to convert HTML entities to plain text:

alias htmldecode="perl -MHTML::Entities -pe 'decode_entities(\$_)'"

On a Debian-based system, this requires an apt-get install libhtml-parser-perl. Why bother, you may ask? Well, because the (awful) Jenkins-cli outputs text area content as encoded HTML entities, and, for example, I like the idea of being able to test a failing packer template standalone.
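If you'd rather not pull in a Perl module, python3's standard library html module does the same conversion; a rough, perl-free equivalent of the alias above:

```shell
# decode HTML entities on stdin with python3's stdlib instead of HTML::Entities
echo '&lt;then&gt; &amp;&amp; echo &quot;ok&quot;' \
  | python3 -c 'import html,sys; sys.stdout.write(html.unescape(sys.stdin.read()))'
# prints: <then> && echo "ok"
```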

Finally, here’s the full use case:

Ansible playbook with packer in Jenkins

Quick one.

While working on a build chain to register home-baked AMIs, I wanted to use the ansible-local packer provisioner to set up the instance with a very basic playbook. I needed to provide ansible with a playbook but didn’t immediately find how to achieve this within the Jenkins-packer module. Turns out it’s tricky: in the JSON Template Text (or the template file), declare the playbook_file like this:

  [{
    "type": "ansible-local",
    "playbook_file": "{{ user `test_yml` }}",
    "command": "PYTHONUNBUFFERED=1 ansible-playbook"
  }]

Then in the File Entries field, the Variable Name must be test_yml and File Contents filled with the playbook.
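For completeness, the File Contents field simply holds a regular playbook. A hypothetical minimal one (the package name is arbitrary, only there for illustration) could look like:

```yaml
# illustrative playbook for the test_yml File Entry
- hosts: all
  become: yes
  tasks:
    - name: install a base package
      package:
        name: htop
        state: present
```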

30 python lines Dynamic DNS

Here in Spain, I chose Movistar as my Internet provider, and I must say I’m pretty happy with it: symmetric 300Mbps fiber optics and good service. The only annoying aspect is that they do not provide a static IP for free, something I was used to and found very convenient.

In order to reach my network from places where I can’t connect to my VPN, I wrote a very simple Dynamic DNS system using dnspython, and it turned out to be fairly easy.
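The client side of such a system boils down to: discover the public IP, compare it with the last one seen, and only push a DNS update when it changed. The actual post does the update with dnspython; this sketch only shows the change-detection part, with a hard-coded IP standing in for a real lookup:

```shell
#!/bin/sh
# minimal change-detection logic for a DIY dynamic DNS client;
# current_ip would normally come from something like `curl -s https://ifconfig.me`
current_ip="203.0.113.7"
cache="${TMPDIR:-/tmp}/ddns.last"

last=""
[ -f "$cache" ] && last=$(cat "$cache")

if [ "$current_ip" != "$last" ]; then
        # here the real client would send the DNS update (dnspython, nsupdate...)
        echo "$current_ip" > "$cache"
        echo "updated to $current_ip"
fi
```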

CPU temperature collectd report on NetBSD

pkgsrc’s collectd does not support the thermal plugin, so in order to publish thermal information I had to use the exec plugin:

LoadPlugin exec
# more plugins

<Plugin exec>
        Exec "nobody:nogroup" "/home/imil/bin/temp.sh"
</Plugin>

And write this simple script that reads CPU temperatures from NetBSD’s envstat command:

$ cat bin/temp.sh 
#!/bin/sh

hostname=$(hostname)
interval=10

while :
do
        envstat|awk '/cpu[0-9]/ {printf "%s %s\n",$1,$3}'|while read c t
        do
                echo "PUTVAL ${hostname}/temperature/temperature-zone${c#cpu} interval=${interval} N:${t%%.*}"
        done
        sleep ${interval}
done
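The PUTVAL lines follow collectd’s plain-text protocol, and the two shell parameter expansions do the formatting work: ${c#cpu} strips the cpu prefix to get the zone number, and ${t%%.*} drops the decimal part of the temperature. A standalone illustration, with sample values shaped like envstat’s output:

```shell
# sample values as the awk filter above would extract them from envstat
c=cpu0
t=52.500
echo "PUTVAL myhost/temperature/temperature-zone${c#cpu} interval=10 N:${t%%.*}"
# prints: PUTVAL myhost/temperature/temperature-zone0 interval=10 N:52
```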

I then send those values to an InfluxDB server:

Ansible and AWS ASG, a (really) dynamic inventory

I found myself searching ridiculously long to achieve what I believed was a simple task: applying an Ansible role to newly created instances… started by an Auto Scaling Group. If you’re used to Ansible you know that it relies on an inventory to apply a playbook, but obviously, when you’re firing up EC2 instances with the same playbook, you cannot know in advance what your virtual machines’ IP addresses will be, and neither can ec2.py, the recommended method for dealing with dynamic inventories.

Run CoreOS on FreeBSD's bhyve

No, I’m not following the hype; it’s just that I like to test things, plus I feel there will be a growing demand for docker at ${DAYWORK}. I read here and there that CoreOS was the Linux distribution of choice to play with docker, so while at it, I picked it to dive into the container world. Finally, I’ve been willing to put my hands on bhyve for quite a while, so I took this opportunity to learn all those (to me) new technologies at once.

Migrate FreeBSD root on UFS to ZFS

At ${DAYJOB} I’ve been using a FreeBSD workstation for quite a while. Everything goes smoothly except for the filesystem. When I first installed it, I chose UFS because the FreeBSD installer said that root-on-ZFS was “experimental”. I later learned that nobody uses UFS anymore and that root-on-ZFS is perfectly stable. Thing is, I chose UFS and I deeply regret it. Not because of ZFS’s features, which absolutely do not matter to me on the desktop, but because FreeBSD’s implementation of UFS is terribly, terribly slow when it comes to manipulating big files. When I say slow, I mean that pkg upgrade tends to FREEZE the entire machine while extracting archives. That slow. And before you ask, yes, there’s been a lot of tuning on that side.

Fetch monit status in JSON

I wanted to use monit as my desktop alerting system, meaning that when a service or a machine is unreachable on my personal network, I’d see a red dot somewhere on my desktop. Why not nagios, you’d ask? Because my needs are not worth the hassle.

Unfortunately, monit does not have simple and nice little desktop apps like nagstamon, so I decided to write my own.

It does not seem to be well known, but monit publishes a special URI that shows a status report in XML when the mini-HTTP status server is enabled. The JSON version is only available in the commercial product they sell, M/Monit, so I wrote a small utility to manipulate status values in JSON format and show a status report within your shell console.
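To give an idea of the conversion involved, here is a rough sketch using only python3’s standard library on a tiny hand-made XML fragment; monit’s real status output is much richer, so this is an illustration of the approach, not the utility itself:

```shell
# hypothetical miniature of monit's XML status, converted to JSON with
# python3's stdlib (xml.etree + json)
echo '<monit><service><name>nginx</name><status>0</status></service></monit>' |
python3 -c '
import sys, json, xml.etree.ElementTree as ET
root = ET.fromstring(sys.stdin.read())
print(json.dumps({s.findtext("name"): s.findtext("status")
                  for s in root.iter("service")}))
'
# prints: {"nginx": "0"}
```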

Letsencrypt friendly nginx configuration

So I use this great cheat sheet in order to use the letsencrypt free Certificate Authority on my own servers, but while this small doc is very straightforward, it doesn’t explain much about nginx’s configuration. So I’ll drop my own right here so your journey through TLS is even simpler:

$ cat /usr/pkg/etc/nginx/nginx.conf

# this nginx installation comes from pkgsrc for both Linux and NetBSD
# you might have to adapt paths to suit your needs... or switch to pkgsrc ;)

user   nginx  nginx;
worker_processes  2;

events {
    worker_connections  1024;
}

http {
    include       /usr/pkg/etc/nginx/mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;

    # a little bit of browser leverage doesn't hurt :)
    gzip  on;
    gzip_vary on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
    gzip_proxied any;

    server {
        # serve both IPv4 and IPv6 FWIW
        listen       [::]:80;
        listen       80;

        server_name  localhost example.com *.example.com;

        # this is where letsencrypt will drop the challenge
        location /.well-known/acme-challenge {
                default_type "text/plain";
                root /var/www/letsencrypt;
        }

        # redirect everything else to HTTPS
        location / { return 302 https://$host$request_uri; }
    }

    server {
        listen       [::]:443 ssl;
        listen       443 ssl;

        # you'll have to declare those domains accordingly in letsencrypt conf
        server_name  localhost example.com *.example.com;

        # here lie letsencrypt's PEM files
        ssl_certificate      /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key  /etc/letsencrypt/live/example.com/privkey.pem;

        # harden used protocols a little
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers  on;

        # and then include actual locations
        include sites/*;
    }
}

A very basic proxy_pass location would be:
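For example, assuming a hypothetical backend listening on 127.0.0.1:8080:

```nginx
location /app/ {
    proxy_pass         http://127.0.0.1:8080/;
    proxy_set_header   Host              $host;
    proxy_set_header   X-Real-IP         $remote_addr;
    proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;
}
```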

5 minutes collectd + facette setup

I recently added a fantastic graphing tool named facette to pkgsrc. Facette knows how to pull data sources from various backends, and among them, the famous collectd.

In this article, we will see how to set up both on NetBSD, but keep in mind it should also work on any platform supported by pkgsrc.

First up, collectd installation. It can be done either with pkgin (binary installation) or pkgsrc (source installation):

  • with pkgin
    $ sudo pkgin in collectd collectd-rrdtool
  • with pkgsrc
    $ cd /usr/pkgsrc/sysutils/collectd
    $ sudo make install clean
    $ cd ../collectd-rrdtool
    $ sudo make install clean

Tune up a minimal collectd configuration
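As a starting point, a minimal collectd.conf along those lines might look like this; the plugin selection and the DataDir path are illustrative, adjust them to your setup and pkgsrc prefix:

```
Hostname    "myhost"
Interval    10

LoadPlugin cpu
LoadPlugin memory
LoadPlugin rrdtool

<Plugin rrdtool>
    DataDir "/var/db/collectd/rrd"
</Plugin>
```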