Cash monitoring

I’m kind of back in the mining arena. Like everyone else nowadays, I’m mining Ethereum with a couple of R9 290 & 290X graphics cards I bought second-hand.
So far everything works as intended, but being a proper control freak, I need to know what’s happening in real time: what my firepower is, how the mining is doing, etc.
Like many, I use a mining pool, ethermine to be precise, and those guys had the good taste of exposing a JSON API.
Using collectd-python capabilities, I was able to write a short Python script that feeds:

  • current hashrate
  • USD per minute
  • unpaid Ether on the pool

to an InfluxDB database, which in turn is queried by a grafana server in order to provide graphs like this one:

ETH on grafana

The script itself is available as a GitHub gist; feel free to use and modify it.
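
For the curious, the heart of the script is tiny. Here’s a minimal sketch of such a collectd-python plugin; the API URL and JSON field names are assumptions on my side, check the gist and the ethermine documentation for the real ones:

# minimal collectd-python sketch; API_URL and the JSON field names
# are placeholders, not the verified ethermine API
import json
import urllib2

import collectd

API_URL = 'https://api.ethermine.org/miner/0xyouraddress/currentStats'

def read_callback():
    stats = json.load(urllib2.urlopen(API_URL))['data']
    for instance, value in (('hashrate', stats['currentHashrate']),
                            ('usdperminute', stats['usdPerMin']),
                            ('unpaid', stats['unpaid'])):
        val = collectd.Values(plugin='ethermine', type='gauge',
                              type_instance=instance)
        val.dispatch(values=[value])

collectd.register_read(read_callback)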

Score!

This happened:

Congratulations! You have successfully completed the AWS Certified Solutions Architect - Professional exam and you are now AWS Certified.

[...]

Overall Score: 90%

Topic Level Scoring:
1.0 High Availability and Business Continuity: 100%
2.0 Costing: 100%
3.0 Deployment Management: 85%
4.0 Network Design: 71%
5.0 Data Storage: 90%
6.0 Security: 85%
7.0 Scalability & Elasticity: 100%
8.0 Cloud Migration & Hybrid Architecture: 85%

Not bad

Launch the AWS Console from the CLI or a mobile phone

At ${DAYJOB} I happen to juggle quite a few AWS accounts for different customers, and I find it really annoying to log out of one web console and into another with the right credentials, account IDs and MFA.

Here you can read a good blog post on how to enable cross-account access for third parties and use a basic script that opens a web browser to switch from one account to the other.
I liked this idea, so I pushed it a bit further and wrote this small piece of code which allows you not only to switch accounts, but also to simply open any AWS account from the command line (a rough sketch of the flow follows the tips below).

Tips to remember:

  • The cross-account creation process is easier than it seems
    • Create a dedicated cross-account access role on the target
    • Take note of the created role ARN
    • On the source, allow the user to access the created role ARN
  • There’s no mystery about this ExternalId: it’s really just a password, read from the URL the client passes; echo $((${RANDOM} * 256)) will do.
  • You can assumeRole to your own local account by simply creating a cross-account role with the local account ID
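
For the record, here’s a rough sketch of the console-opening flow using boto3, requests and the AWS federation endpoint; every ARN, ExternalId and token below is a placeholder, and this is not kriskross’ actual code:

import json
import webbrowser

import boto3
import requests

# assume the cross-account role; all values are placeholders
creds = boto3.client('sts').assume_role(
    RoleArn='arn:aws:iam::123456789012:role/cross-account-access',
    RoleSessionName='kriskross',
    ExternalId='31337',
    SerialNumber='arn:aws:iam::210987654321:mfa/me',
    TokenCode='123456')['Credentials']

session = json.dumps({'sessionId': creds['AccessKeyId'],
                      'sessionKey': creds['SecretAccessKey'],
                      'sessionToken': creds['SessionToken']})

# trade the temporary credentials for a console sign-in token
token = requests.get('https://signin.aws.amazon.com/federation',
                     params={'Action': 'getSigninToken',
                             'Session': session}).json()['SigninToken']

login = requests.Request('GET', 'https://signin.aws.amazon.com/federation',
                         params={'Action': 'login',
                                 'Destination': 'https://console.aws.amazon.com/',
                                 'SigninToken': token}).prepare().url

webbrowser.open(login)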

Update

Well, I pushed it further. Kriskross can now be launched as a tiny web service, so you can copy & paste from your mobile MFA application directly into the mobile browser and thus avoid typos; the micro web server will then launch the corresponding AWS session on your desktop.

Tricking bash HISTTIMEFORMAT

While trying to find a clean method to remove line numbers from the history command output, I found an interesting trick using the HISTTIMEFORMAT environment variable. Here’s what bash’s man page says:

HISTTIMEFORMAT
If this variable is set and not null, its value is used as a
format string for strftime(3) to print the time stamp associated
with each history entry displayed by the history builtin. If
this variable is set, time stamps are written to the history
file so they may be preserved across shell sessions. This uses
the history comment character to distinguish timestamps from
other history lines.

But it turns out you can actually put pretty much anything in there, for example an ANSI escape sequence that sends a carriage return and erases the line:

$ HISTTIMEFORMAT="$(echo -e '\r\e[K')"

There we go, no more line numbers:

$ history |tail -1
history |tail -1
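
If you’d rather not hijack the timestamp format, a plain filter achieves the same result without touching your environment:

$ history | sed 's/^ *[0-9]* *//'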

Extract data-bits from your Jenkins jobs

Another quickie.

I read here about a cool trick to convert HTML entities to plain text:

alias htmldecode="perl -MHTML::Entities -pe 'decode_entities(\$_)'"
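
A quick check that it does what it says:

$ echo '&lt;builders&gt;' | htmldecode
<builders>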

On a Debian-based system, this means apt-get install libhtml-parser-perl.
Why bother, you may ask? Because the (awful) Jenkins-cli outputs text area contents as encoded HTML entities, and I like the idea of being able to test a failing packer template standalone, for example.

Finally, here’s the full use case:

$ java -jar jenkins-cli.jar -s http://127.0.0.1:8080/ get-job "packertest" --username foo --password bar | htmldecode

<?xml version='1.0' encoding='UTF-8'?>
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties/>
  <scm class="hudson.scm.NullSCM"/>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers/>
  <concurrentBuild>false</concurrentBuild>
  <builders/>
  <publishers>
    <biz.neustar.jenkins.plugins.packer.PackerPublisher plugin="packer@1.4">
      <name>Default</name>
      <jsonTemplate></jsonTemplate>
      <jsonTemplateText>{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [{
    "type": "amazon-ebs",
    "access_key": "{{user `aws_access_key`}}",
    "secret_key": "{{user `aws_secret_key`}}",
    "region": "eu-central-1",
    "source_ami": "ami-02724d1f",
    "instance_type": "t2.micro",
    "ssh_username": "admin",
    "ami_name": "jenkins-packer (RED) {{timestamp}}",
    "vpc_id": "vpc-00000000",
    "subnet_id": "subnet-00000000",
    "security_group_id": "sg-00000000",
    "associate_public_ip_address": true
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "echo 'Acquire::ForceIPv4 \"true\";' | sudo tee /etc/apt/apt.conf.d/99force-ipv4",
      "echo 'deb http://cloudfront.debian.net/debian jessie-backports main' |sudo tee /etc/apt/sources.list.d/backports.list",
      "sudo apt-get update",
      "sudo apt-get -t jessie-backports install -y ansible"
    ]
  }, {
    "type": "ansible-local",
    "playbook_file": "{{ user `test_yml` }}",
    "command": "PYTHONUNBUFFERED=1 ansible-playbook"
  }]
}</jsonTemplateText>
      <params>-color=false</params>
      <useDebug>false</useDebug>
      <changeDir></changeDir>
      <templateMode>text</templateMode>
      <fileEntries>
        <biz.neustar.jenkins.plugins.packer.PackerFileEntry>
          <varFileName>test_yml</varFileName>
          <contents>---

- hosts: all

  tasks:
    - name: install stuff
      apt: name=vim state=installed
      become: true</contents>
        </biz.neustar.jenkins.plugins.packer.PackerFileEntry>
      </fileEntries>
    </biz.neustar.jenkins.plugins.packer.PackerPublisher>
  </publishers>
  <buildWrappers/>
</project>
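
From there, extracting the embedded template for a standalone run is one XPath away; a possible approach, assuming xmllint from libxml2 is available:

$ java -jar jenkins-cli.jar -s http://127.0.0.1:8080/ get-job "packertest" \
    --username foo --password bar | htmldecode | \
    xmllint --xpath 'string(//jsonTemplateText)' - > packertest.json
$ packer validate packertest.json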

Ansible playbook with packer in Jenkins

Quick one.

While working on a build chain in order to register home-baked AMIs, I wanted to use the ansible-local packer provisioner to set up the instance with a very basic playbook. I needed to provide ansible a playbook but didn’t immediately find how to achieve this within the Jenkins-packer module. Turns out it’s tricky: in the JSON Template Text (or the template file), declare the playbook_file like this:

[{
  "type": "ansible-local",
  "playbook_file": "{{ user `test_yml` }}",
  "command": "PYTHONUNBUFFERED=1 ansible-playbook"
}]

Then, in the File Entries field, the Variable Name must be test_yml and the File Contents field filled with the playbook.

Jenkins packer

30 Python lines Dynamic DNS

Here in Spain, I chose Movistar as my Internet provider, and I must say I’m pretty happy with it: symmetric 300Mbps fiber optics and good service. The only annoying aspect is that they do not provide a static IP for free, something I was used to and found very convenient.

In order to reach my network from places where I can’t connect to my VPN, I wrote a very simple Dynamic DNS system using dnspython, and it turned out to be fairly easy.

This is the code running on my public server:

$ cat ddns.py
from flask import Flask

import dns.update
import dns.query
import dns.tsigkeyring
import dns.resolver

app = Flask(__name__)

@app.route('/query/<t>/<fqdn>')
def query(t, fqdn):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ['127.0.0.1']
    # query the local nameserver instead of the system-wide resolver
    answer = resolver.query(fqdn, t)
    if t == 'a':
        return '{0}\n'.format(answer.rrset[0].address)

@app.route("/update/<domain>/<host>/<ip>", methods=['POST'])
def update(domain, host, ip):
    keyring = dns.tsigkeyring.from_text({
        'rndc-key': 'myRNDCkey=='
    })

    update = dns.update.Update('{0}.'.format(domain), keyring=keyring)
    update.replace(host, 300, 'A', ip)
    dns.query.tcp(update, '127.0.0.1', timeout=10)

    return "update with {0}\n".format(ip)

if __name__ == "__main__":
    app.run()

This code is served by gunicorn:

$ gunicorn ddns:app 

Behind an nginx reverse proxy:

location /dyndns/ {
    auth_basic "booh.";
    auth_basic_user_file /etc/nginx/htpasswd;
    proxy_pass http://private-server:8000/;
}

On the client side (home), I wrote this very simple shell script:

#!/bin/sh

PATH=${PATH}:/sbin:/usr/sbin:/bin:/usr/bin:/usr/pkg/bin

name="mybox"
domain="mydomain"
fqdn="${name}.${domain}"
auth="user:mymagicpassword"

# here retrieve your actual public IP from a website like httpbin.org
curip=$(curl -s -o- http://some.website.like.httpbin.org/ip)
# fetch recorded IP address
homeip=$(curl -u ${auth} -s -o- https://my.public.server/dyndns/query/a/${fqdn})

if [ "${curip}" != "${homeip}" ]; then
    warnmsg="/!\\ home IP changed to ${curip} /!\\"

    echo "${warnmsg}"|mail -s "${warnmsg}" me@mydomain.net

    curl -u ${auth} \
        -X POST https://my.public.server/dyndns/update/${domain}/${name}/${curip}
fi

Which is cron’ed to run every 5 minutes.
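
The crontab entry is nothing fancy; the script path is a placeholder:

*/5 * * * * /home/imil/bin/ddns-client.sh >/dev/null 2>&1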

And here I am with my own cheap Dynamic DNS system.
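
A quick manual test, using the placeholder names and credentials from above:

$ curl -u user:mymagicpassword https://my.public.server/dyndns/query/a/mybox.mydomain
192.0.2.1
$ curl -u user:mymagicpassword -X POST https://my.public.server/dyndns/update/mydomain/mybox/192.0.2.2
update with 192.0.2.2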

CPU temperature collectd report on NetBSD

pkgsrc’s collectd does not support the thermal plugin, so in order to publish thermal information I had to use the exec plugin:

LoadPlugin exec
# more plugins

<Plugin exec>
    Exec "nobody:nogroup" "/home/imil/bin/temp.sh"
</Plugin>

And write this simple script that reads CPU temperatures from NetBSD’s envstat command:

$ cat bin/temp.sh
#!/bin/sh

hostname=$(hostname)
interval=10

while :
do
    envstat|awk '/cpu[0-9]/ {printf "%s %s\n",$1,$3}'|while read c t
    do
        echo "PUTVAL ${hostname}/temperature/temperature-zone${c#cpu} interval=${interval} N:${t%%.*}"
    done
    sleep ${interval}
done
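
Run by hand, the script prints the PUTVAL lines collectd’s exec plugin expects, along these lines (hostname and temperatures made up):

$ sh bin/temp.sh
PUTVAL myhost/temperature/temperature-zone0 interval=10 N:42
PUTVAL myhost/temperature/temperature-zone1 interval=10 N:43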

I then send those values to an InfluxDB server:

LoadPlugin network
# ...

<Plugin network>
    Server "192.168.1.5" "25826"
</Plugin>

And display them using grafana:

grafana setup
NetBSD temperature in grafana

HTH!

Ansible and AWS ASG, a (really) dynamic inventory

I found myself searching ridiculously long to achieve what I believed was a simple task: applying an Ansible role to newly created instances… started by an Auto Scaling Group. If you’re used to Ansible, you know it relies on an inventory to apply a playbook; but obviously, when you’re firing up EC2 instances from the same playbook, you cannot know in advance what your virtual machines’ IP addresses will be, and neither can ec2.py, the recommended method for dealing with dynamic inventories.

I read that refreshing the inventory can be achieved using the following instruction:

meta: refresh_inventory

Yet I wanted to try a fully dynamic and self-contained method without the need for an external helper.

When starting an EC2 instance with the Ansible ec2 module, you’re able to retrieve that data dynamically via the registered ec2 variable and then add the hosts to the inventory using the add_host module, as sketched below. Strangely enough, the ec2_asg module does not provide information about the created instances; this is where ec2_remote_facts comes into play.
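
For comparison, the ec2 module flow looks roughly like this; a sketch, not part of the actual playbook:

- name: fire up an instance
  ec2:
    region: "{{ region }}"
    image: "{{ ami }}"
    instance_type: t2.micro
    wait: yes
  register: ec2

- name: add it to the in-memory inventory
  add_host:
    hostname: "{{ item.private_ip }}"
    groups: launched
  with_items: "{{ ec2.instances }}"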

Consider the following playbook:

---
# deploy.yml

- hosts: localhost
  connection: local
  gather_facts: no

  roles:
    - foo

---
# roles/foo/tasks/main.yml

- name: Create Launch Configuration
  ec2_lc:
    region: "{{ region }}"
    name: "{{ dname }}"
    image_id: "{{ ami_result.results[0].ami_id }}"
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    security_groups: "{{ security_groups }}"
  when: "{{ curstate == 'present' }}"

- name: Fire up ASG
  ec2_asg:
    region: "{{ region }}"
    name: sandbox
    launch_config_name: "{{ dname }}-lc"
    availability_zones: "{{ azs }}"
    vpc_zone_identifier: "{{ subnets }}"
    desired_capacity: 2
    min_size: 2
    max_size: 2
    state: "{{ curstate }}"
    tags:
      - "env": red
  register: asg_result

I naively thought the asg_result variable would hold the needed information, but it actually doesn’t, so I had to add the following task:

- ec2_remote_facts:
    filters:
      "tag:env": "red"
  register: instance_facts

This applies the tag filter and adds the newly created instances’ metadata to the instance_facts variable.

Here’s an example of such gathered data:

ok: [localhost] => {
    "msg": {
        "changed": false,
        "instances": [
            {
                "ami_launch_index": "0",
                "architecture": "x86_64",
                "client_token": "foobarfoobar",
                "ebs_optimized": false,
                "groups": [
                    {
                        "id": "sg-2bd06143",
                        "name": "ICMP+SSH"
                    }
                ],
                "hypervisor": "xen",
                "id": "i-845e1238",
                "image_id": "ami-02724d1f",
                "instance_profile": null,
                "interfaces": [
                    {
                        "id": "eni-0638f67a",
                        "mac_address": "01:1b:11:1f:11:a1"
                    }
                ],
                "kernel": null,
                "key_name": "foofoo",
                "launch_time": "2016-08-05T07:09:59.000Z",
                "monitoring_state": "disabled",
                "persistent": false,
                "placement": {
                    "tenancy": "default",
                    "zone": "eu-central-1b"
                },
                "private_dns_name": "ip-10-1-1-2.eu-central-1.compute.internal",
                "private_ip_address": "10.1.1.2",
                "public_dns_name": "",
                "ramdisk": null,
                "region": "eu-central-1",
                "requester_id": null,
                "root_device_type": "ebs",
                "source_destination_check": "true",
                "spot_instance_request_id": null,
                "state": "running",
                "tags": {
                    "aws:autoscaling:groupName": "sandbox",
                    "env": "red"
                },
                "virtualization_type": "hvm",
                "vpc_id": "vpc-11111111"
            },
    [...]

Thanks to this very well written blog post, I learned how to extract the information I needed, the instances’ private IP addresses:

- name: group hosts
  add_host: hostname={{ item }} groups=launched
  with_items: "{{ instance_facts.instances|selectattr('state', 'equalto', 'running')|map(attribute='private_ip_address')|list }}"

Quite a filter, huh? :) Here we select only the instances in a running state, look up their private_ip_address attributes and turn them into a list that can then be processed as items (more on Jinja2 filters).
These items are added to the host inventory via the add_host module, in a group named launched. We will use that group name in the main deploy.yml file:

- hosts: launched
  gather_facts: no

  tasks:
    - name: wait for SSH
      wait_for: port=22 host="{{ inventory_hostname }}" search_regex=OpenSSH delay=5

And voilà! The launched group now holds our freshly created instances, which you can now interact with from your playbook.

Another great read on the subject: Using Ansible’s in-memory inventory to create a variable number of instances

Run CoreOS on FreeBSD's bhyve

No, I’m not following the hype; I simply like to test things, plus I feel there will be a growing demand for docker at ${DAYWORK}. I read here and there that CoreOS was the Linux distribution of choice to play with docker, so while at it, I picked this one to dive into the container world.
Finally, I’ve been meaning to get my hands on bhyve for quite a while, so I took this opportunity to learn all those new (to me) technologies at once.

Here I’ll write down a quick set of commands and configuration files I found or modified along the way.

First thing, we’ll have to prepare our FreeBSD system for bhyve. Load vmm and nmdm, respectively the bhyve and the serial device modules. We’ll also need a tap device and bridging capability in order to bring networking to our future virtual machine:

$ cat /boot/loader.conf
vmm_load="YES"
nmdm_load="YES"
if_bridge_load="YES"
if_tap_load="YES"

In /etc/rc.conf, we’ll bring up the tap interface and attach it to a bridge, which will be bound to our physical interface:

cloned_interfaces="bridge0 tap0"
ifconfig_bridge0="addm em0 addm tap0 up"

In order to bring up the tap interface when it’s opened, add the following to /etc/sysctl.conf:

net.link.tap.up_on_open=1

Now that the host OS is ready, let’s fetch the CoreOS ISO image. The following link might change; check https://coreos.com/os/docs/latest/booting-with-iso.html

$ curl -O https://stable.release.core-os.net/amd64-usr/current/coreos_production_iso_image.iso

Let’s create a ZFS volume to receive the system:

$ sudo zfs create -V16G -o volmode=dev zroot/coredisk0

To boot a Linux bhyve virtual machine, you’ll need the grub2-bhyve package; simply install it using pkg(8):

$ sudo pkg install grub2-bhyve

You’ll then have to create a device.map file:

$ cat device.map
(hd0) /dev/zvol/zroot/coredisk0
(cd0) /usr/home/imil/iso/coreos_production_iso_image.iso

Copy the commands to give to grub in a text file:

$ cat grub-cd0.cfg
linux (cd0)/coreos/vmlinuz coreos.autologin
initrd (cd0)/coreos/cpio.gz
boot

And load the kernel:

$ sudo grub-bhyve -m device.map -r cd0 -M 1024M coreos < grub-cd0.cfg

You can now fire up the virtual machine:

$ sudo bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,/dev/zvol/zroot/coredisk0 -l com1,stdio -c 2 -m 1024M coreos

Once the OS has booted, you should be able to interact with the virtual machine serial console. If there’s no DHCP server on your network, you might have to add an IP address, default route and nameserver:

$ sudo -s
# ip a a 192.168.1.100/24 dev eth0
# route add default gw 192.168.1.254
# echo 'nameserver 8.8.8.8' > /etc/resolv.conf

It is now time to install the OS to the ZFS volume, seen as a block device from the virtual machine’s point of view. You’ll find the associated device in /dev/disk/by-id:

$ sudo coreos-install -d /dev/disk/by-id/virtio-BHYVE-580E-3135-7E05 -C stable

Once finished, just shut down the VM you booted from the ISO image:

$ sudo halt

And kill the virtual machine:

$ sudo bhyvectl --destroy --vm=coreos

Create a new set of grub commands in order to boot from the ZFS volume:

$ cat grub-hd0.cfg 
linux (hd0,gpt1)/coreos/vmlinuz-a console=ttyS0 ro root=LABEL=ROOT usr=LABEL=USR-A coreos.autologin
boot

Note the usr=LABEL=USR-A parameter: it took me a little while to figure out why the system would fail to mount /usr, as no documentation I found mentioned this.

Load the commands using grub-bhyve:

$ sudo grub-bhyve -m device.map -r hd0 -M 1024M coreos < grub-hd0.cfg

And finally, fire up the resulting operating system:

$ sudo bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,/dev/zvol/zroot/coredisk0 -l com1,stdio -c 2 -m 1024M coreos

You might want to run a headless system, in which case you’ll have to replace stdio with /dev/nmdm0A and connect to the other end of the serial device (/dev/nmdm0B) with any serial console utility like cu(1) or screen.
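
That would look something like this, the nmdm device numbering being arbitrary:

$ sudo bhyve -A -H -P -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap0 -s 3:0,virtio-blk,/dev/zvol/zroot/coredisk0 -l com1,/dev/nmdm0A -c 2 -m 1024M coreos &
$ cu -l /dev/nmdm0B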

Resources: