Migrate FreeBSD root on UFS to ZFS

At ${DAYJOB} I've been using a FreeBSD workstation for quite a while. Everything has gone smoothly except for the filesystem. When I first installed it, I chose UFS because the FreeBSD installer said that root-on-ZFS was "experimental". I later learned that nobody uses UFS anymore and that root-on-ZFS is perfectly stable. Thing is, I chose UFS and I deeply regret it. Not because of ZFS's features, which absolutely do not matter to me on the desktop, but because FreeBSD's implementation of UFS is terribly, terribly slow when it comes to manipulating big files. When I say slow, I mean that pkg upgrade tends to FREEZE the entire machine while extracting archives. That slow. And before you ask, yes, there's been a lot of tuning on that side.

So I got another hard drive and migrated the system over to it.

In this memo, the disk that will become root-on-ZFS shows up as ada1 and the current UFS one as ada0. I first tried the "GPT / root on ZFS" method but later realized my machine couldn't boot from it, so I fell back to the "root on ZFS using a FreeBSD-ZFS partition in a FreeBSD MBR slice" method.

The commands listed below are all well documented in the FreeBSD wiki, in particular here: https://wiki.freebsd.org/RootOnZFS/ZFSBootPartition

gpart create -s mbr ada1
# align to 4096 bytes
gpart add -t freebsd -b 4032 ada1
gpart create -s BSD ada1s1
gpart set -a active -i 1 ada1
# I allocate 450G on a 500G disk
gpart add -s 450G -t freebsd-zfs ada1s1
gpart add -s 4G -t freebsd-swap ada1s1
zpool create zroot /dev/ada1s1a
zpool set bootfs=zroot zroot
# boot0 goes in the disk's MBR
gpart bootcode -b /boot/boot0 ada1
zpool export zroot
# the first 512 bytes of zfsboot go to the start of the slice...
dd if=/boot/zfsboot of=/tmp/zfsboot1 count=1
gpart bootcode -b /tmp/zfsboot1 /dev/ada1s1
# ...and the rest is written at sector 1024 of the ZFS partition
dd if=/boot/zfsboot of=/dev/ada1s1a skip=1 seek=1024
zpool import zroot
zfs set checksum=fletcher4 zroot
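
At this point, a quick sanity check such as the following should show the new layout and the freshly created pool:

gpart show ada1
zpool list zroot
zfs list zroot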

The simplest way of syncing the UFS disk to the ZFS disk is rsync; here's the exclude list I used:

$ cat tmp/exclude-list.txt 
/dev/*
/proc/*
/sys/*
/tmp/*
/mnt/*
/media/*
/lost+found
/usr/ports/*
/usr/src/*
/usr/home/imil/games/*
/usr/home/imil/.PlayOnLinux/*
/usr/home/imil/.wine/*
/.amd_mnt
/zroot

And the rsync command, including the X and A flags in order to keep extended attributes and ACLs:

rsync -aAXv --delete --exclude-from 'tmp/exclude-list.txt' / /zroot/

Finally:

echo 'zfs_load="YES"' > /zroot/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /zroot/boot/loader.conf
# disable geom naming by diskid to avoid swapon errors
echo 'kern.geom.label.disk_ident.enable=0' >> /zroot/boot/loader.conf

and replace the swap line in /etc/fstab:

/dev/ada1s1b    none            swap    sw      0       0
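
Once the machine is rebooted on the ZFS disk, the new swap device should be picked up automatically; it can be verified with something like:

$ swapinfo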

I'm using that migrated disk as we speak: it's blazing fast, no more freezes, plus I could add the old disk as a ZFS mirror. Not bad after all.

Update

Here's the procedure to get a mirrored setup out of this migration; no surprise, you just have to follow the steps described above on a second disk and then attach both disks to a mirror vdev:

zpool attach zroot ada0s1a ada1s1a
zpool status -v
  pool: zroot
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Apr 29 10:59:36 2016
        1.29G scanned out of 134G at 69.7M/s, 0h32m to go
        1.29G resilvered, 0.96% done
config:

        NAME         STATE     READ WRITE CKSUM
        zroot        ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            ada0s1a  ONLINE       0     0     0
            ada1s1a  ONLINE       0     0     0  (resilvering)

Fetch monit status in JSON

I wanted to use monit as my desktop alerting system, meaning that when a service or a machine is unreachable on my personal network, I'd see a red dot somewhere on my desktop. Why not Nagios, you ask? Because my needs are not worth the hassle.

Unfortunately, monit does not have a simple and nice little desktop app like nagstamon, so I decided to write my own.

It does not seem to be well known, but monit publishes a special URI that returns a status report in XML when the mini HTTP status server is enabled. The JSON version is only available with the commercial product they sell, M/Monit, so I wrote this small utility to manipulate the status values as JSON and show a status report within your shell console.

It will read a ~/.getmonitrc configuration file with the following format:

{
    "MyNet": {
        "url": "https://mynet.net/monit/_status?format=xml",
        "user": "foo",
        "passwd": "My Extreme Password"
    }
}
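
For the record, the raw XML status the script consumes can also be fetched by hand, for example with curl and the credentials above:

$ curl -s --user 'foo:My Extreme Password' 'https://mynet.net/monit/_status?format=xml'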

The script itself can be fetched from this gist URL; it is used like this:

$ python getmonit.py 
✘ global status

$ python getmonit.py v
✔ senate
✔ home
✔ coruscant
✔ starkiller
✔ tatooine
✘ sidious

$ python getmonit.py vv
✔ senate
✔ home
  ✓ icmp
  ✓ port 443/HTTP
  ✓ port 22/SSH
✔ coruscant
  ✓ icmp
  ✓ port 22/SSH
✔ starkiller
  ✓ icmp
  ✓ port 22/SSH
✔ tatooine
  ✓ icmp
✘ sidious
  ✗ icmp
  ✗ port 22/SSH

The output is actually colored, you can see it here.

Letsencrypt friendly nginx configuration

So I use this great cheat sheet in order to use the letsencrypt free certificate authority on my own servers, but while this small doc is very straightforward, it doesn't explain much about nginx's configuration. So I'll drop my own right here so your journey through TLS is even simpler:

$ cat /usr/pkg/etc/nginx/nginx.conf

# this nginx installation comes from pkgsrc for both Linux and NetBSD
# you might have to adapt paths to suit your needs... or switch to pkgsrc ;)

user nginx nginx;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    include /usr/pkg/etc/nginx/mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    # a little bit of browser leverage doesn't hurt :)
    gzip on;
    gzip_vary on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
    gzip_proxied any;

    server {
        # serve both IPv4 and IPv6 FWIW
        listen [::]:80;
        listen 80;

        server_name localhost example.com *.example.com;

        # this is where letsencrypt will drop the challenge
        location /.well-known/acme-challenge {
            default_type "text/plain";
            root /var/www/letsencrypt;
        }

        # redirect everything else to HTTPS
        location / { return 302 https://$host$request_uri; }
    }

    server {
        listen [::]:443 ssl;
        listen 443 ssl;

        # you'll have to declare those domains accordingly in letsencrypt conf
        server_name localhost example.com *.example.com;

        # here lie the letsencrypt PEM files
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        # harden used protocols a little
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers on;

        # and then include actual locations
        include sites/*;
    }
}
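
With the acme-challenge location above in place, requesting a certificate with the letsencrypt client's webroot plugin looks something like this (domains and paths are just an example, adapt them to your setup):

$ sudo letsencrypt certonly --webroot -w /var/www/letsencrypt -d example.com -d www.example.com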

A very basic proxy_pass location would be:

$ cat /usr/pkg/etc/nginx/sites/example.com
location / {
    proxy_pass http://mydomU:8080/;
    # forward real address for statistic purposes
    proxy_set_header X-Forwarded-For $remote_addr;
}

For an even more hardened configuration, you might want to check out 2*yo's own configuration.

5 minutes collectd + facette setup

I recently added a fantastic graphing tool named facette to pkgsrc.
Facette knows how to pull data sources from various backends, and among them, the famous collectd.

In this article, we will see how to set up both on NetBSD, but keep in mind it should also work on any platform supported by pkgsrc.

First up, collectd installation. It can be done either with pkgin (binary installation) or pkgsrc (source installation):

  • with pkgin
$ sudo pkgin in collectd collectd-rrdtool
  • with pkgsrc
$ cd /usr/pkgsrc/sysutils/collectd
$ sudo make install clean
$ cd ../collectd-rrdtool
$ sudo make install clean

Tune up a minimal collectd configuration

Hostname    "myname"
BaseDir "/var/db/collectd"
PIDFile "/var/run/collectd.pid"
PluginDir "/usr/pkg/lib/collectd"
LoadPlugin syslog
LoadPlugin cpu
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin rrdcached
LoadPlugin rrdtool
<Plugin rrdtool>
    DataDir "/var/db/collectd/rrd"
    CreateFilesAsync false
    CacheTimeout 120
    CacheFlush 900
    WritesPerSecond 50
</Plugin>

Enable and start collectd

# echo "collectd=YES" >> /etc/rc.conf
# /etc/rc.d/collectd start

Wait a couple of minutes for collectd to actually collect some data; you should see it appear in /var/db/collectd/rrd.
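
The rrdtool plugin stores its files in per-host and per-plugin subdirectories of DataDir, so a recursive listing is a quick way to check data is flowing:

$ ls -R /var/db/collectd/rrd/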

As I write these lines, facette is not yet available as a binary package (it will probably be in the next pkgsrc release), so we'll have to install it using pkgsrc:

$ cd /usr/pkgsrc/sysutils/facette
$ sudo make install clean
$ sudo -s
# echo "facette=YES" >> /etc/rc.conf
# /etc/rc.d/facette start

That's right, no configuration: I've set up the package so it works out of the box with a basic collectd installation.

If everything went well, you should be admiring a facette console by pointing your web browser at port 12003.
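
If you'd rather check from the command line first, a simple HTTP request against that port should get an answer, e.g.:

$ curl -sI http://localhost:12003/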


Simpler postfix + dspam

I have read a shitload of overcomplicated setups for bringing up a postfix / dspam SMTP + antispam server, and finally came to a much lighter and simpler configuration by basically reading the documentation and real-life examples.
Note this is suitable for a personal and basic environment: no database, no virtual setup. Basic stuff.

The target system is NetBSD but this short doc should apply to pretty much any UNIX / Linux.

On dspam's side, I added the following parameters:

# really postfix
TrustedDeliveryAgent "/usr/sbin/sendmail"
[...]
# add involved users
Trust dspam
Trust postfix
[...]
# declare UNIX socket
ServerDomainSocketPath "/tmp/dspam.sock"
ClientHost /tmp/dspam.sock

On postfix's main.cf side:

# don't overwhelm dspam, only one message at a time
dspam_destination_recipient_limit = 1
smtpd_client_restrictions =
    permit_sasl_authenticated
    check_client_access regexp:/etc/postfix/dspam_filter_access

Warning: I used regexp: instead of pcre: because that's what NetBSD base's postfix supports.
The dspam_filter_access map pipes the message to dspam's socket by matching everything:

$ cat /etc/postfix/dspam_filter_access
/./ FILTER dspam:unix:/tmp/dspam.sock

The only remaining piece is to declare the dspam service in postfix's master.cf file:

dspam     unix  -       n       n       -       10      pipe
  flags=Ru user=dspam argv=/usr/pkg/bin/dspam --deliver=innocent,spam -i -f ${sender} --user ${user} -- ${recipient}

The final delivery method is up to you, but I chose procmail, mostly because I wrote my rules a while ago and am too lazy to adapt them to sieve :)

mailbox_command = /usr/pkg/bin/procmail
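
Once main.cf and master.cf are modified, postfix just needs to re-read its configuration:

# postfix reload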


Start pkgsrc's nginx with systemd

Not so long ago, I wrote about using pkgsrc on Debian GNU/Linux, and assumed you'd start an installed service using rc.d. When I set up the new iMil.net server, I decided to give kvm a try, as it is easier to maintain, performs well (sometimes better than Xen) and comes with nice administration tools; plus NetBSD now has a good VirtIO driver but no PVHVM support yet.

The first thing I do when setting up a Debian Jessie server is to get rid of systemd, whose philosophy and quality don't match my personal taste; but in this case I wanted to use libvirtd so I could manage my virtual machines with virt-manager, and as a matter of fact, libvirtd has a hard dependency on systemd. There was no escape this time, I had to learn and use it.

Once nginx was installed through pkgsrc, I wrote a unit file:

$ cat /etc/systemd/system/nginx.service 
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target

[Service]
Type=forking
ExecStartPre=/usr/pkg/sbin/nginx -t
ExecStart=/usr/pkg/sbin/nginx
ExecReload=/usr/pkg/sbin/nginx -s reload
ExecStop=/usr/pkg/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target
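
If systemd was already running when the unit file was dropped in place, it may need to re-read its configuration before it sees the new unit:

$ sudo systemctl daemon-reload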

Then enabled the nginx service:

$ sudo systemctl enable nginx

And finally started it:

$ sudo systemctl start nginx

And witnessed that everything went as expected:

$ sudo systemctl status nginx -l
● nginx.service - The NGINX HTTP and reverse proxy server
   Loaded: loaded (/etc/systemd/system/nginx.service; enabled)
   Active: active (running) since Mon 2016-02-08 11:54:48 CET; 2 weeks 5 days ago
 Main PID: 23512 (nginx)
   CGroup: /system.slice/nginx.service
           ├─11453 nginx: worker proces
           └─23512 nginx: master process /usr/pkg/sbin/ngin

Feb 08 11:54:48 starkiller nginx[23508]: nginx: the configuration file /usr/pkg/etc/nginx/nginx.conf syntax is ok
Feb 08 11:54:48 starkiller nginx[23508]: nginx: configuration file /usr/pkg/etc/nginx/nginx.conf test is successful

Note that this does not prevent you from using nginx's -t or -s flags directly.

(not-so) new website!

If you're used to this website you might have noticed the layout has somewhat changed. Actually the engine itself has changed: iMil.net is no longer powered by WordPress; instead I switched to a static website generator called hexo. While it can be tricky sometimes, the tool is nicely organized and easy to handle.

On the service side, this website defaults to HTTPS and is natively IPv6 ready; it is served by an nginx server contained in a sailor ship. Of course, the virtual machine runs NetBSD, on a kvm hypervisor, hosted on Debian GNU/Linux.

Hope you like it!

GRE tunnel PREROUTING

Here's a simple solution for forwarding GRE tunnels to a server, or in this case a virtual machine, located behind a GNU/Linux gateway:

# iptables -t nat -A PREROUTING -i eth0 -p gre -j DNAT --to-destination 192.168.0.1
# modprobe nf_conntrack_proto_gre

No need for complex PREROUTING / POSTROUTING / FORWARD combinations as I could read here and there.
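
The only other thing the gateway needs, if it is not already acting as a router, is IPv4 forwarding:

# sysctl -w net.ipv4.ip_forward=1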

In my case, the virtual machine is a NetBSD domU where I created the following gre(4) interface:

# cat /etc/ifconfig.gre0
create
tunnel 1.2.3.4 192.168.0.1 up
inet 172.16.0.1 172.16.0.2 netmask 255.255.255.252

  • 1.2.3.4 is the remote public IP address
  • 192.168.0.1 is the domU private IP address
  • 172.16.0.1 and 172.16.0.2 are the tunnel endpoints

NetBSD/amd64 7.0 on kvm

If you recently tried to install NetBSD 7.0 using Linux KVM you might have run into an installation failure.

This bug has recently been fixed on the 7 branch but the official ISO images are not yet updated, so you'll have to use a NetBSD daily-build mini-ISO, which includes Christos' fix to bus_dma.c.

For the record, here’s the virt-install command I use:

sudo virt-install \
    --virt-type kvm \
    --name korriban \
    --ram 4096 --disk path=/dev/vms/korriban,bus=virtio \
    --vcpus 2 \
    --network bridge:br0,model=virtio \
    --graphics vnc \
    --accelerate \
    --noautoconsole \
    --cdrom /home/imil/iso/boot.iso \
    --cpu host
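
Once the installation is launched, the freshly created guest can be checked with the usual libvirt tools, for example:

$ sudo virsh list --all
$ sudo virsh dominfo korriban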

Performance is really good; the host is a Debian GNU/Linux 8.0 amd64 running on Online's Dedibox Classic 2015.

YAML and markdown based website rendering with AngularJS

A couple of weeks ago, Clark / @jeaneymerit told me he was digging into AngularJS, and as I’m working on a private project where a static website is involved, I thought this framework could help me make that website lightweight in terms of external dependencies.

The site I'm working on contains exclusively static content, and most of it is text. I wanted a simple and elegant method to manipulate that content easily, so I wrote a basic website generator in Python based on jinja2; for the record, it's available here.

The principle is that a basic YAML file contains the text of a page in a key / value fashion. For my own needs there are two kinds of text, simple or markdown, and the file looks like this:

welcome: 'welcome!'
md_about: |
    **Yes**! this _new_ website is powered by [markdown][1]

    [1]: http://daringfireball.net/projects/markdown/

Now, with modern frameworks popping up all around the web, I felt that "building" the website, even if it was a really simple task, was an oldschool approach; plus, I needed a pretext to get my hands on AngularJS :)

It turns out there were plenty of modules to handle the YAML / markdown mechanism; I picked js-yaml and marked after trying a couple of other similar modules which were less effective and / or buggy.

Now the interesting part. I won't explain AngularJS principles in this blog post as there is very informative and well-written documentation on the subject, among it this quick tutorial from w3schools and of course the official AngularJS documentation; instead, I'll drop here the main component of the AngularJS-based website, the controller:

var mySite = angular.module('mySite', ["ngSanitize"]);

mySite.controller('contentCtrl', function($scope, $parse, $http, $rootScope) {

    $http.get('content/index_' + $scope.lang + '.yaml')
        .success(function(response) {
            angular.forEach(jsyaml.load(response), function(v, k) {
                if (k.startsWith('md_') == true) {
                    $parse('md.' + k.substring(3)).assign($scope, marked(v));
                } else {
                    $parse(k).assign($scope, v);
                }
                if (k == 'title') {
                    $rootScope.pageTitle = v;
                }
            });
        });
})

That’s right, in less than 20 lines of code, we read our YAML file, converted it to good old JSON and passed the content of both simple and markdown values to the HTML view.

A word of explanation on this controller. I chose that the YAML file name would correspond to the language the site is displayed in; for example, if lang == 'en', content/index_en.yaml will be read. The distinction between simple and markdown content is made by reading the first 3 characters of the key: if it begins with md_, then it's markdown and it should be interpreted by marked.

There are two tricky traps here. The first is about the rendered markdown. For security reasons, AngularJS will not display the produced HTML when the expression is called with double braces {{ .. }} within the HTML view; instead, it will show the HTML code escaped. It is mandatory to reference the key variable through an ng-bind-html attribute, and as the documentation explains, you'll have to add angular-sanitize.js to the list of loaded modules.

<span ng-bind-html="md.about"></span>

Now about how the scope variables are organized. I learned that an AngularJS expression will not be interpreted as a scope variable if it is the result of a string manipulation; for example, if foo == 'about':

<span ng-bind-html="'md_' + foo"></span>

will only display the literal string md_about; it will not dereference the md_about scope variable. In order to play with dynamic naming, I was forced to gather the markdown variables under a common md object, thus the following trick in the controller:

$parse('md.' + k.substring(3)).assign($scope, marked(v));

so I can have the following code in the HTML view:

<span ng-bind-html="md[foo]"></span>

Why so much pain? Because thanks to this little trick, I’ll be able to create loops for similar blocks in the HTML view, such as:

<div class="col-md-4" ng-repeat="cat in ['foo', 'bar', 'baz']">
  <img class="center-block" src="media/images/{{cat}}.png" />
  <h3 class="text-center">{{cat}}</h3>
  <span ng-bind-html="md[cat]"></span>
</div>

I like to factorize.

I'm still a total AngularJS newbie; if something in this blog post seems awful to you, ng-guru, please feel free to contact me!