Ansible and AWS ASG, a (really) dynamic inventory

I found myself searching for ridiculously long to achieve what I believed was a simple task: applying an Ansible role to newly created instances… started by an Auto Scaling Group. If you’re used to Ansible, you know it relies on an inventory to apply a playbook, but obviously, when you fire up EC2 instances from that same playbook, you cannot know in advance what your virtual machines’ IP addresses will be, and neither can ec2.py, the recommended way of dealing with dynamic inventories.

I read that refreshing the inventory can be achieved using the following instruction:

meta: refresh_inventory

Yet I wanted to try a fully dynamic and self-contained method, without the need for an external helper.

When starting an EC2 instance with the Ansible ec2 module, you can retrieve that data dynamically via the registered ec2 variable and then add the hosts to the inventory using the add_host module. Strangely enough, the ec2_asg module does not provide any information about the created instances; this is where ec2_remote_facts comes into play.
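
For reference, here’s a minimal sketch of that ec2 + add_host pattern (variable names and parameters are placeholders, not the exact tasks I used):

- name: start an instance with the ec2 module
  ec2:
    region: "{{ region }}"
    image: "{{ ami_id }}"
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    group: "{{ security_groups }}"
    wait: yes
  register: ec2_result

- name: add the new instances to the in-memory inventory
  add_host:
    hostname: "{{ item.private_ip }}"
    groups: launched
  with_items: "{{ ec2_result.instances }}"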

Consider the following playbook:

---
# deploy.yml

- hosts: localhost
  connection: local
  gather_facts: no

  roles:
    - foo

---
# roles/foo/tasks/main.yml

- name: Create Launch Configuration
  ec2_lc:
    region: "{{ region }}"
    name: "{{ dname }}"
    image_id: "{{ ami_result.results[0].ami_id }}"
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    security_groups: "{{ security_groups }}"
  when: "{{ curstate == 'present' }}"

- name: Fire up ASG
  ec2_asg:
    region: "{{ region }}"
    name: sandbox
    launch_config_name: "{{ dname }}-lc"
    availability_zones: "{{ azs }}"
    vpc_zone_identifier: "{{ subnets }}"
    desired_capacity: 2
    min_size: 2
    max_size: 2
    state: "{{ curstate }}"
    tags:
      - "env": red
  register: asg_result

I naively thought the asg_result variable would hold the needed information, but it actually doesn’t. So I had to add the following task:

- ec2_remote_facts:
    filters:
      "tag:env": "red"
  register: instance_facts

This applies the tag filter and adds the newly created instances’ metadata to the instance_facts variable.

Here’s an example of such gathered data:

ok: [localhost] => {
    "msg": {
        "changed": false,
        "instances": [
            {
                "ami_launch_index": "0",
                "architecture": "x86_64",
                "client_token": "foobarfoobar",
                "ebs_optimized": false,
                "groups": [
                    {
                        "id": "sg-2bd06143",
                        "name": "ICMP+SSH"
                    }
                ],
                "hypervisor": "xen",
                "id": "i-845e1238",
                "image_id": "ami-02724d1f",
                "instance_profile": null,
                "interfaces": [
                    {
                        "id": "eni-0638f67a",
                        "mac_address": "01:1b:11:1f:11:a1"
                    }
                ],
                "kernel": null,
                "key_name": "foofoo",
                "launch_time": "2016-08-05T07:09:59.000Z",
                "monitoring_state": "disabled",
                "persistent": false,
                "placement": {
                    "tenancy": "default",
                    "zone": "eu-central-1b"
                },
                "private_dns_name": "ip-10-1-1-2.eu-central-1.compute.internal",
                "private_ip_address": "10.1.1.2",
                "public_dns_name": "",
                "ramdisk": null,
                "region": "eu-central-1",
                "requester_id": null,
                "root_device_type": "ebs",
                "source_destination_check": "true",
                "spot_instance_request_id": null,
                "state": "running",
                "tags": {
                    "aws:autoscaling:groupName": "sandbox",
                    "env": "red"
                },
                "virtualization_type": "hvm",
                "vpc_id": "vpc-11111111"
            },
            [...]

Thanks to this very well written blog post, I learned how to extract the information I needed, the instances’ private IP addresses:

- name: group hosts
  add_host: hostname={{ item }} groups=launched
  with_items: "{{ instance_facts.instances|selectattr('state', 'equalto', 'running')|map(attribute='private_ip_address')|list }}"

Quite a filter, huh? :) Here we select only the instances in the running state, look up their private_ip_address attribute, and turn the result into a list that can then be processed as items (more on Jinja2 filters).
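
If you want to see what the filter actually returns before adding any host, a quick debug task (purely illustrative) does the trick:

- debug:
    msg: "{{ instance_facts.instances|selectattr('state', 'equalto', 'running')|map(attribute='private_ip_address')|list }}"
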
These items are added to the host inventory via the add_host module, in a group named launched. We will use that group name in the main deploy.yml file:

- hosts: launched
  gather_facts: no

  tasks:
    - name: wait for SSH
      wait_for: port=22 host="{{ inventory_hostname }}" search_regex=OpenSSH delay=5

And voilà! The launched group now holds our freshly created instances, which you can now interact with from the rest of your playbook.
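
From there you can run whatever you need against them, for instance applying a role (the role name below is only a placeholder):

- hosts: launched
  become: yes

  roles:
    - webserver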

Another great read on the subject: Using Ansible’s in-memory inventory to create a variable number of instances

Run CoreOS on FreeBSD's bhyve

No, I’m not following the hype; I just like to test things, plus I feel there will be a growing demand for docker at ${DAYWORK}. I read here and there that CoreOS was the Linux distribution of choice to play with docker, so while at it, I picked this one to dive into the container world.
Finally, I’ve been willing to put my hands on bhyve for quite a while, so I took this opportunity to learn all of these (to me) new technologies at once.

Here I’ll write down a quick set of commands and configuration files I found, discovered or modified.

First thing, we’ll have to prepare our FreeBSD system for bhyve. Load vmm and nmdm, respectively the bhyve and the serial device modules. We’ll also need a tap device and bridging capability in order to bring networking to our future virtual machine:

$ cat /boot/loader.conf
vmm_load="YES"
nmdm_load="YES"
if_bridge_load="YES"
if_tap_load="YES"
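
If you don’t feel like rebooting, the same modules can also be loaded right away:

$ sudo kldload vmm nmdm if_bridge if_tap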

In /etc/rc.conf, we’ll bring up the tap interface and attach it to a bridge, which will be bound to our physical interface:

cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm em0 addm tap0 up"

In order to bring up the tap interface when it’s opened, add the following to /etc/sysctl.conf:

net.link.tap.up_on_open=1
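
And to apply it immediately without a reboot:

$ sudo sysctl net.link.tap.up_on_open=1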

Now that the host OS is ready, let’s fetch the CoreOS ISO image. The following link might change; check https://coreos.com/os/docs/latest/booting-with-iso.html

$ curl -O https://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso

Let’s create a ZFS volume to receive the system:

$ sudo zfs create -V16G -o volmode=dev zroot/coredisk0

To boot a Linux bhyve virtual machine, you’ll need the grub2-bhyve package; simply install it using pkg(8):

$ sudo pkg install grub2-bhyve

You’ll then have to create a device.map file:

$ cat device.map
(hd0) /dev/zvol/zroot/coredisk0
(cd0) /usr/home/imil/iso/coreos_production_iso_image.iso

Copy the commands to give to grub in a text file:

$ cat grub-cd0.cfg
linux (cd0)/coreos/vmlinuz coreos.autologin                                     
initrd (cd0)/coreos/cpio.gz                                                     
boot

And load the kernel:

$ sudo grub-bhyve -m device.map -r cd0 -M 1024M coreos < grub-cd0.cfg

You can now fire up the virtual machine:

$ sudo bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s3:0,virtio-blk,/dev/zvol/zroot/coredisk0 -l com1,stdio -c 2 -m 1024M coreos

Once the OS has booted, you should be able to interact with the virtual machine through its serial console. If there’s no DHCP server on your network, you might have to add an IP address, a default route and a nameserver:

$ sudo -s
# ip a a 192.168.1.100/24 dev eth0
# route add default gw 192.168.1.254
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf

It is now time to install the OS to the ZFS volume, which is seen as a block device from the virtual machine’s point of view. You’ll find the associated device in /dev/disk/by-id:

$ sudo coreos-install -d /dev/disk/by-id/virtio-BHYVE-580E-3135-7E05 -C stable

Once finished, just shut down the VM you booted from the ISO image:

$ sudo halt

And kill the virtual machine:

$ sudo bhyvectl --destroy --vm=coreos

Create a new set of grub commands in order to boot from the ZFS volume:

$ cat grub-hd0.cfg 
linux (hd0,gpt1)/coreos/vmlinuz-a console=ttyS0 ro root=LABEL=ROOT usr=LABEL=USR-A coreos.autologin
boot

Note the usr=LABEL=USR-A parameter; it took me a little while to figure out why the system would fail at mounting /usr, as no documentation I found mentioned this.

Load the commands using grub-bhyve:

$ sudo grub-bhyve -m device.map -r hd0 -M 1024M coreos < grub-hd0.cfg

And finally, fire up the resulting operating system:

$ sudo bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s3:0,virtio-blk,/dev/zvol/zroot/coredisk0 -l com1,stdio -c 2 -m 1024M coreos

You might want to run a headless system, in which case you’ll have to replace stdio with /dev/nmdm0A and connect to the other end of the serial device (/dev/nmdm0B) with any serial console utility like cu(1) or screen.
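
For example, something along these lines (the nmdm device names below are only the usual defaults):

$ sudo bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,/dev/zvol/zroot/coredisk0 -l com1,/dev/nmdm0A -c 2 -m 1024M coreos
$ cu -l /dev/nmdm0B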

Resources:

Migrate FreeBSD root on UFS to ZFS

At ${DAYJOB} I’ve been using a FreeBSD workstation for quite a while. Everything goes smoothly except for the filesystem. When I first installed it, I chose UFS because the FreeBSD installer said that root-on-ZFS was “experimental”. I later learned that nobody uses UFS anymore and that root-on-ZFS is perfectly stable. Thing is, I chose UFS and I deeply regret it. Not because of ZFS’s features, which absolutely do not matter to me on the desktop, but because FreeBSD’s implementation of UFS is terribly, terribly slow when it comes to manipulating big files. When I say slow, I mean that pkg upgrade tends to FREEZE the entire machine while extracting archives. That slow. And before you ask, yes, there’s been a lot of tuning on that side.

So I got another hard drive and migrated the system over it.

In this memo, the disk that will become root-on-ZFS is seen as ada1 and the current UFS one as ada0. I first tried the “GPT / root on ZFS” method but later realized my machine couldn’t boot from it. So I fell back to the “root on ZFS using FreeBSD-ZFS partition in a FreeBSD MBR Slice” method.

The commands listed below are all well documented in FreeBSD Wiki, in particular here: https://wiki.freebsd.org/RootOnZFS/ZFSBootPartition

gpart create -s mbr ada1
# align to 4096 bytes
gpart add -t freebsd -b 4032 ada1
gpart create -s BSD ada1s1
gpart set -a active -i 1 ada1
# I allocate 450G on a 500G disk
gpart add -s 450G -t freebsd-zfs ada1s1
gpart add -s 4G -t freebsd-swap ada1s1
zpool create zroot /dev/ada1s1a
zpool set bootfs=zroot zroot
gpart bootcode -b /boot/boot0 ada1
zpool export zroot
dd if=/boot/zfsboot of=/tmp/zfsboot1 count=1
gpart bootcode -b /tmp/zfsboot1 /dev/ada1s1
dd if=/boot/zfsboot of=/dev/ada1s1a skip=1 seek=1024
zpool import zroot
zfs set checksum=fletcher4 zroot

The simplest way of syncing the UFS disk to the ZFS disk is using rsync; here’s the “exclude-list” I used:

$ cat tmp/exclude-list.txt 
/dev/*
/proc/*
/sys/*
/tmp/*
/mnt/*
/media/*
/lost+found
/usr/ports/*
/usr/src/*
/usr/home/imil/games/*
/usr/home/imil/.PlayOnLinux/*
/usr/home/imil/.wine/*
/.amd_mnt
/zroot

And the rsync command, including the X and A flags in order to keep extended attributes and ACLs:

rsync -aAXv --delete --exclude-from 'tmp/exclude-list.txt' / /zroot/

Finally

echo 'zfs_load="YES"' > /zroot/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /zroot/boot/loader.conf
# disable geom naming by diskid to avoid swapon errors
echo 'kern.geom.label.disk_ident.enable=0' >> /zroot/boot/loader.conf

and replace the swap line in /etc/fstab:

/dev/ada1s1b    none            swap    sw      0       0

I’m using that migrated disk as we speak: it’s blazing fast, no more freezes, plus I could add the old disk as a mirror using ZFS. Not bad after all.

Update

Here’s an article that explains how to turn the migration above into a mirrored setup; no surprise, you just have to follow the steps described before on a new disk and then attach both disks to a mirror vdev:

zpool attach zroot ada0s1a ada1s1a
zpool status -v
  pool: zroot
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Apr 29 10:59:36 2016
        1.29G scanned out of 134G at 69.7M/s, 0h32m to go
        1.29G resilvered, 0.96% done
config:

        NAME         STATE     READ WRITE CKSUM
        zroot        ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            ada0s1a  ONLINE       0     0     0
            ada1s1a  ONLINE       0     0     0  (resilvering)

Fetch monit status in JSON

I wanted to use monit as my desktop alerting system, meaning that when a service or a machine is unreachable on my personal network, I’d see a red dot somewhere on my desktop. Why not nagios, you ask? Because my needs are not worth the hassle.

Unfortunately, monit does not have simple and nice little desktop apps like nagstamon, so I decided to write my own.

It does not seem to be well known, but monit publishes a special URI that returns a status report in XML when the mini HTTP status server is enabled. A JSON version is only available in the commercial product they sell, M/Monit, so I wrote this small utility to manipulate the status values in JSON format and show a status report within your shell console.
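
For reference, the mini HTTP status server is enabled in monitrc with something like the following (port and credentials are just examples):

# monitrc
set httpd port 2812
    allow admin:monit

The XML report can then be fetched directly:

$ curl -s 'http://admin:monit@localhost:2812/_status?format=xml'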

It will read a ~/.getmonitrc configuration file with the following format:

{
    "MyNet": {
        "url": "https://mynet.net/monit/_status?format=xml",
        "user": "foo",
        "passwd": "My Extreme Password"
    }
}

The script itself can be fetched from this gist url; it is used like this:

$ python getmonit.py 
✘ global status

$ python getmonit.py v
✔ senate
✔ home
✔ coruscant
✔ starkiller
✔ tatooine
✘ sidious

$ python getmonit.py vv
✔ senate
✔ home
✓ icmp
✓ port 443/HTTP
✓ port 22/SSH
✔ coruscant
✓ icmp
✓ port 22/SSH
✔ starkiller
✓ icmp
✓ port 22/SSH
✔ tatooine
✓ icmp
✘ sidious
✗ icmp
✗ port 22/SSH

The output is actually colored, you can see it here.

Letsencrypt friendly nginx configuration

So I use this great cheat sheet in order to use the letsencrypt free certificate authority on my own servers, but while this small doc is very straightforward, it doesn’t explain much about nginx’s configuration. So I’ll drop my own right here, so your journey through TLS is even simpler:

$ cat /usr/pkg/etc/nginx/nginx.conf

# this nginx installation comes from pkgsrc for both Linux and NetBSD
# you might have to adapt paths to suit your needs... or switch to pkgsrc ;)

user nginx nginx;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    include /usr/pkg/etc/nginx/mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    # a little bit of browser leverage doesn't hurt :)
    gzip on;
    gzip_vary on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;
    gzip_proxied any;

    server {
        # serve both IPv4 and IPv6 FWIW
        listen [::]:80;
        listen 80;

        server_name localhost example.com *.example.com;

        # this is where letsencrypt will drop the challenge
        location /.well-known/acme-challenge {
            default_type "text/plain";
            root /var/www/letsencrypt;
        }

        # redirect everything else to HTTPS
        location / { return 302 https://$host$request_uri; }
    }

    server {
        listen [::]:443 ssl;
        listen 443 ssl;

        # you'll have to declare those domains accordingly in letsencrypt conf
        server_name localhost example.com *.example.com;

        # here lie the letsencrypt PEM files
        ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        # harden used protocols a little
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
        ssl_prefer_server_ciphers on;

        # and then include actual locations
        include sites/*;
    }
}
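
For completeness, the acme-challenge location above is meant for a webroot-style challenge; with the official letsencrypt client that would look roughly like this (adapt domains and paths to your setup, and to whichever client your cheat sheet uses):

$ sudo letsencrypt certonly --webroot -w /var/www/letsencrypt -d example.com -d www.example.com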

A very basic proxy_pass location would be:

$ cat /usr/pkg/etc/nginx/sites/example.com
location / {
    proxy_pass http://mydomU:8080/;
    # forward real address for statistic purposes
    proxy_set_header X-Forwarded-For $remote_addr;
}

For an even more hardened configuration, you might want to check out 2*yo‘s own configuration.

5 minutes collectd + facette setup

I recently added a fantastic graphing tool named facette to pkgsrc.
Facette knows how to pull data sources from various backends, and among them, the famous collectd.

In this article, we will see how to set up both on NetBSD, but keep in mind it should also work on any platform supported by pkgsrc.

First up, collectd installation. It can be done either with pkgin (binary installation) or pkgsrc (source installation):

  • with pkgin
$ sudo pkgin in collectd collectd-rrdtool
  • with pkgsrc
$ cd /usr/pkgsrc/sysutils/collectd
$ sudo make install clean
$ cd ../collectd-rrdtool
$ sudo make install clean

Tune up a minimal collectd configuration

Hostname    "myname"
BaseDir "/var/db/collectd"
PIDFile "/var/run/collectd.pid"
PluginDir "/usr/pkg/lib/collectd"
LoadPlugin syslog
LoadPlugin cpu
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin rrdcached
LoadPlugin rrdtool
<Plugin rrdtool>
    DataDir "/var/db/collectd/rrd"
    CreateFilesAsync false
    CacheTimeout 120
    CacheFlush 900
    WritesPerSecond 50
</Plugin>

Enable and start collectd

# echo "collectd=YES" >> /etc/rc.conf
# /etc/rc.d/collectd start

Wait a couple of minutes for collectd to actually collect some data; you should see it appear in /var/db/collectd/rrd.
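
A quick sanity check; the exact directory names depend on your hostname, interfaces and loaded plugins, so the listing below is only indicative:

$ ls /var/db/collectd/rrd/myname
cpu-0           interface-wm0   load            memory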

As I write these lines, facette is not yet available as a binary package; it will probably be in the next pkgsrc release, so we’ll have to install it using pkgsrc:

$ cd /usr/pkgsrc/sysutils/facette
$ sudo make install clean
$ sudo -s
# echo "facette=YES" >> /etc/rc.conf
# /etc/rc.d/facette start

That’s right, no configuration: I’ve set up the package so it works out of the box with a basic collectd installation.

If everything went well, you should be admiring a facette console by pointing your web browser at port 12003.

(facette console screenshot)

Simpler postfix + dspam

I have read a shitload of overcomplicated setups to bring up a postfix / dspam SMTP + antispam server, and finally came to a much lighter and simpler configuration, basically by reading documentation and real-life examples.
Note this is suitable for a personal and basic environment: no database, no virtual setup. Basic stuff.

The target system is NetBSD but this short doc should apply to pretty much any UNIX / Linux.

On dspam‘s side, I added the following parameters:

# really postfix
TrustedDeliveryAgent "/usr/sbin/sendmail"
[...]
# add involved users
Trust dspam
Trust postfix
[...]
# declare UNIX socket
ServerDomainSocketPath "/tmp/dspam.sock"
ClientHost /tmp/dspam.sock

On postfix‘s main.cf side:

# don't overwhelm dspam, only one message at a time
dspam_destination_recipient_limit = 1
smtpd_client_restrictions =
    permit_sasl_authenticated
    check_client_access regexp:/etc/postfix/dspam_filter_access

Warning: I used regexp: instead of pcre: because that’s what NetBSD base’s postfix supports.
The dspam_filter_access file pipes the message to dspam‘s socket by matching everything:

$ cat /etc/postfix/dspam_filter_access
/./ FILTER dspam:unix:/tmp/dspam.sock

The only remaining piece is to declare the dspam service in postfix‘s master.cf file:

dspam     unix  -       n       n       -       10      pipe
  flags=Ru user=dspam argv=/usr/pkg/bin/dspam --deliver=innocent,spam -i -f ${sender} --user ${user} -- ${recipient}

The final delivery method is up to you, but I chose procmail, mostly because I wrote my rules a while ago and am too lazy to adapt them to sieve :)

mailbox_command = /usr/pkg/bin/procmail
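
If you also go the procmail route, a minimal recipe filing whatever dspam flagged as spam could look like this (the X-DSPAM-Result header is added by dspam; the target mailbox is up to you):

# ~/.procmailrc -- file dspam-flagged messages into a "spam" mbox
:0:
* ^X-DSPAM-Result: Spam
spam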

Sources:

Start pkgsrc's nginx with systemd

Not so long ago, I wrote about using pkgsrc on Debian GNU/Linux, and assumed you’d start an installed service using rc.d. When I set up the new iMil.net server, I decided to give kvm a try, as it is easier to maintain, has good performance (sometimes better than Xen) and nice administration tools, plus NetBSD now has a good VirtIO driver, although no PVHVM support yet.

The first thing I do when setting up a Debian Jessie server is getting rid of systemd, whose philosophy and quality don’t match my personal taste; but in this case I wanted to use libvirtd so I could manage my virtual machines with virt-manager, and as a matter of fact, libvirtd has a hard dependency on systemd. There was no escape this time; I had to learn and use it.

Once nginx was installed through pkgsrc, I wrote a unit file:

$ cat /etc/systemd/system/nginx.service 
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network.target

[Service]
Type=forking
ExecStartPre=/usr/pkg/sbin/nginx -t
ExecStart=/usr/pkg/sbin/nginx
ExecReload=/usr/pkg/sbin/nginx -s reload
ExecStop=/usr/pkg/sbin/nginx -s quit

[Install]
WantedBy=multi-user.target

Then enabled the nginx service:

$ sudo systemctl enable nginx

And finally started it:

$ sudo systemctl start nginx

And witnessed that everything went as expected:

$ sudo systemctl status nginx -l
● nginx.service - The NGINX HTTP and reverse proxy server
   Loaded: loaded (/etc/systemd/system/nginx.service; enabled)
   Active: active (running) since Mon 2016-02-08 11:54:48 CET; 2 weeks 5 days ago
 Main PID: 23512 (nginx)
   CGroup: /system.slice/nginx.service
           ├─11453 nginx: worker proces
           └─23512 nginx: master process /usr/pkg/sbin/ngin

Feb 08 11:54:48 starkiller nginx[23508]: nginx: the configuration file /usr/pkg/etc/nginx/nginx.conf syntax is ok
Feb 08 11:54:48 starkiller nginx[23508]: nginx: configuration file /usr/pkg/etc/nginx/nginx.conf test is successful

Note that this does not prevent you from using nginx‘s -t or -s flags.
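
For instance, you can still check the configuration by hand before asking systemd to reload the service:

$ sudo /usr/pkg/sbin/nginx -t
$ sudo systemctl reload nginx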

(not-so) new website!

If you’re used to this website, you might have noticed the layout has somewhat changed. Actually, the engine itself has changed: iMil.net is no longer powered by wordpress; instead I switched to a static website generator called hexo. While it can be tricky at times, the tool is nicely organized and easy to handle.

On the service side, this website defaults to HTTPS and is natively IPv6-ready; it is served by an nginx server contained in a sailor ship. Of course, the virtual machine runs NetBSD, on a kvm hypervisor, hosted on Debian GNU/Linux.

Hope you like it!

GRE tunnel PREROUTING

Here’s a simple solution to forward GRE tunnels to a server, or in this case a virtual machine, located behind a GNU/Linux gateway:

# iptables -t nat -A PREROUTING -i eth0 -p gre -j DNAT --to-destination 192.168.0.1
# modprobe nf_conntrack_proto_gre

No need for complex PREROUTING / POSTROUTING / FORWARD combinations as I could read here and there.

In my case, the virtual machine is a NetBSD domU where I created the following gre(4) interface:

# cat /etc/ifconfig.gre0
create
tunnel 1.2.3.4 192.168.0.1 up
inet 172.16.0.1 172.16.0.2 netmask 255.255.255.252

1.2.3.4 being the remote public IP address,
192.168.0.1 the domU private IP address,
and 172.16.0.1 and .2 the tunnel endpoints.
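
Once both ends are configured, a quick check from the domU tells you the tunnel is alive:

$ ping -c 3 172.16.0.2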