Using pkgsrc on Debian GNU/Linux


While I tend to appreciate Debian GNU/Linux, its tendency to lag behind on software versioning is sometimes annoying. Also, as a pkgsrc developer, I am used to having greater control over the packages I install, for example regarding the options I’d like to include.

For these reasons and a couple more, I sometimes choose to use pkgsrc alongside apt to deal with particular packages. In this article, I’ll show how to achieve that.

First, install the build prerequisite packages:

# apt-get install cvs libncurses5-dev gcc g++ zlib1g-dev zlib1g libssl-dev

Then fetch pkgsrc:

# cd /usr && cvs -d anoncvs@anoncvs.NetBSD.org:/cvsroot co pkgsrc

Export the SH environment variable, pointing it to /bin/bash:

# export SH=/bin/bash

And bootstrap pkgsrc:

# cd /usr/pkgsrc/bootstrap
# ./bootstrap

From now on, you’ll have a /usr/pkg directory filled with the necessary bits for building packages from pkgsrc.

If you intend to install services from pkgsrc packages, you’ll have to copy NetBSD‘s /etc/rc.subr to Debian‘s /etc directory:

# wget -O/etc/rc.subr ""

Create an ad-hoc rc.d directory:

# mkdir /usr/pkg/etc/rc.d

Let’s say you’d like to install nginx from pkgsrc, possibly because Debian‘s version is outdated or it does not contain your favorite module. Add the desired options to pkgsrc's options file, /usr/pkg/etc/mk.conf:

PKG_OPTIONS.nginx+= naxsi spdy

Build the software:

# cd /usr/pkgsrc/www/nginx
# /usr/pkg/bin/bmake install clean clean-depends

Copy the startup script:

# cp /usr/pkg/share/examples/rc.d/nginx /usr/pkg/etc/rc.d/

Enable the service:

# echo "nginx=YES" >> /etc/rc.conf

And start it:

# /usr/pkg/etc/rc.d/nginx start

Now, how you integrate service startup with your favorite init system is up to you!
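For instance, if systemd is your init system, a thin wrapper unit delegating to the rc.d script could look like this (a sketch only; the unit name, paths and Type=forking behaviour are assumptions, adjust them to how the rc.d script actually daemonizes):

```ini
# /etc/systemd/system/pkgsrc-nginx.service (hypothetical name)
[Unit]
Description=nginx from pkgsrc
After=network.target

[Service]
Type=forking
ExecStart=/usr/pkg/etc/rc.d/nginx start
ExecStop=/usr/pkg/etc/rc.d/nginx stop

[Install]
WantedBy=multi-user.target
```

Then a `systemctl daemon-reload` followed by `systemctl enable --now pkgsrc-nginx` would wire it in.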

EC2 VPN connection information (updated)


For some mysterious reason, EC2 VPN connection information is stored as XML within the JSON data retrieved by either boto or the awscli command line tool.

Here’s a quick Python snippet to convert that data into a convenient, easily parsable dict:

#!/usr/bin/env python

import sys
import boto3
import xmltodict

profile = sys.argv[1]

s = boto3.Session(profile_name=profile)
ec2 = s.client('ec2')

vpn = ec2.describe_vpn_connections()
x = vpn['VpnConnections'][0]['CustomerGatewayConfiguration']

d = xmltodict.parse(x)

# ...

Combining this piece of code with jinja2 could help you generate racoon (or whatever IPsec software you use) configuration on the fly.
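To give an idea of what xmltodict has to chew on, here is a minimal, stdlib-only sketch: it uses xml.etree instead of xmltodict, on a made-up and heavily truncated sample of the CustomerGatewayConfiguration payload (the element names follow the usual layout of that document, but treat them as assumptions):

```python
import xml.etree.ElementTree as ET

# Made-up, truncated CustomerGatewayConfiguration sample; the real
# payload contains two <ipsec_tunnel> entries with ASNs, inside
# addresses and more.
SAMPLE = """
<vpn_connection id="vpn-12345678">
  <ipsec_tunnel>
    <vpn_gateway>
      <tunnel_outside_address>
        <ip_address>203.0.113.10</ip_address>
      </tunnel_outside_address>
    </vpn_gateway>
    <ike><pre_shared_key>EXAMPLEKEY</pre_shared_key></ike>
  </ipsec_tunnel>
</vpn_connection>
"""

def tunnels(xml_doc):
    """Return one (remote_ip, psk) pair per IPsec tunnel."""
    root = ET.fromstring(xml_doc)
    out = []
    for t in root.findall('ipsec_tunnel'):
        ip = t.find('vpn_gateway/tunnel_outside_address/ip_address').text
        psk = t.find('ike/pre_shared_key').text
        out.append((ip, psk))
    return out

print(tunnels(SAMPLE))  # → [('203.0.113.10', 'EXAMPLEKEY')]
```

Each (remote_ip, psk) pair maps naturally onto a `remote {}` block in racoon.conf, which is exactly what a template engine can stamp out for you.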


Here‘s a complete example of automatically generating racoon / IPsec configuration files using the previous snippet, along with jinja2.

Latency based Alias DNS record in Route53


Yes, I know I write a lot about AWS these days, but you know, obsession is my thing.

So, as I wrote earlier, I generate my CloudFormation templates using troposphere, and the one thing I had to finish today was registering a latency-based Alias record on Route53 for an ELB. While the Route53 GUI is fairly easy to use, I’ve been stuck on its programmatic emanation for quite a while, so here’s a troposphere definition of such a CloudFormation object:

if scheme == 'internal':
    # internal ELBs do not expose CanonicalHostedZoneName,
    # so fall back to DNSName
    canonzn = 'DNSName'
else:
    canonzn = 'CanonicalHostedZoneName'

name = 'foo'
profile = 'eu-west-1'

fooDNSRecord = t.add_resource(RecordSetType(
    '{0}DNSRecord'.format(name),
    HostedZoneName = Join('', [Ref('SubZone'), '.']),
    Comment = '{0} DNS Name'.format(name),
    Name = Join('', ['{0}.'.format(name), Ref('SubZone'), '.']),
    Type = 'A',
    Region = region,
    SetIdentifier = '{0}-{1}'.format(name, profile),
    AliasTarget = AliasTarget(
        GetAtt('{0}LoadBalancer'.format(name), 'CanonicalHostedZoneNameID'),
        GetAtt('{0}LoadBalancer'.format(name), canonzn)
    )
))
Note the catch: you can’t use Ref('AWS::Region') for the Region parameter, or your CloudFormation stack will fail at the DNS entry creation with an Invalid request error. Also, do not forget to declare the SetIdentifier parameter, which is mandatory for a latency-based record.
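For reference, CloudFormation should end up with a resource roughly like the following JSON (a hand-written sketch of the AWS::Route53::RecordSet shape; the zone name, region and logical IDs are placeholders):

```json
"fooDNSRecord": {
  "Type": "AWS::Route53::RecordSet",
  "Properties": {
    "HostedZoneName": "example.org.",
    "Name": "foo.example.org.",
    "Type": "A",
    "Region": "eu-west-1",
    "SetIdentifier": "foo-eu-west-1",
    "AliasTarget": {
      "HostedZoneId": { "Fn::GetAtt": ["fooLoadBalancer", "CanonicalHostedZoneNameID"] },
      "DNSName": { "Fn::GetAtt": ["fooLoadBalancer", "CanonicalHostedZoneName"] }
    }
  }
}
```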

Rock your CloudFormation with troposphere and boto


So you’re using AWS CloudFormation in order to bring up complex infrastructures; haven’t you already told yourself that instead of writing down all those JSON lines by hand, you could bring more fun to your architect life?
I did, and I found a way to programmatically design a whole architecture using troposphere and boto3.
Simply put, troposphere gives you bindings to generate CloudFormation‘s JSON templates, but hey, it’s Python, meaning that you can create loops, use conditions and even dynamically build objects.

Let me give a simple example of how this will change your cloud builder life. Let’s assume you’d like to create a couple of default SecurityGroups; we’ll create a convenient dict that will hold the needed information:

sgs = [{
    'name': 'SSHAny',
    'tag': 'ssh-any',
    'cidr': '',
    'proto': 'tcp',
    'port': 22
}, {
    'name': 'ICMPAny',
    'tag': 'icmp-any',
    'cidr': '',
    'proto': 'icmp',
    'port': -1
}, {
    'name': 'HTTPAny',
    'tag': 'http-any',
    'cidr': '',
    'proto': 'tcp',
    'port': 80
}, {
    'name': 'HTTPSAny',
    'tag': 'https-any',
    'cidr': '',
    'proto': 'tcp',
    'port': 443
}, {
    'name': 'SSHHQAny',
    'tag': 'ssh-hq',
    'cidr': Ref('HQCIDR'),
    'proto': 'tcp',
    'port': 22
}, {
    'name': 'SSHOver2000',
    'tag': 'ssh-over-2000-any',
    'cidr': '',
    'proto': 'tcp',
    'port': '2000-2200'
}, {
    'name': 'SMTPAny',
    'tag': 'smtp-any',
    'cidr': '',
    'proto': 'tcp',
    'port': 25
}]
Note that you could use any structure you like; I just found this one practical. Now, dynamically build all the associated SecurityGroups:

def mksg(cidr, proto, ports):
    sg = []
    for port in ports:
        if type(port) is not int and '-' in port:
            # a string like '2000-2200' describes a port range
            [fromport, toport] = port.split('-')
        else:
            fromport = port
            toport = port
        sg.append(SecurityGroupRule(
            IpProtocol = proto,
            FromPort = fromport,
            ToPort = toport,
            CidrIp = cidr
        ))
    return sg

# ...

for sg in sgs:
    sgname = '{0}SecurityGroup'.format(sg['name'])
    vars()[sgname] = t.add_resource(SecurityGroup(
        sgname,
        GroupDescription = 'Enable port(s) {0} from {1}'.format(
            sg['port'], sg['cidr']
        ),
        SecurityGroupIngress = mksg(sg['cidr'], sg['proto'], [sg['port']]),
        VpcId = Ref('VPC'),
        Tags = Tags(Name = '{0}'.format(sg['tag']))
    ))

Now, to bring a bit more awesomeness, consider adding the power of Amazon‘s own python module, boto3. Haven’t you struggled writing AMI mappings directly in the JSON file? Well, consider the following function:

import sys
import boto3

s = boto3.session.Session(profile_name = sys.argv[1])
ec2 = s.resource('ec2')

def getami(glob):
    # return the most recently created AMI whose name matches the glob
    return sorted(
        ec2.images.filter(
            Filters=[{'Name': 'name', 'Values': [glob]}]
        ),
        key=lambda i: i.creation_date
    )[-1]
And later, in your JSON generator, using troposphere wrappers:

MyInstance = t.add_resource(Instance(

    ImageId = getami('amzn-ami-vpc-nat-hvm*').id,

    InstanceType = 't2.micro',
    KeyName = Ref('KeyName'),
    SecurityGroupIds = [Ref('SecurityGroup')],
    SubnetId = Ref('MySubnet'),

The troposphere website has an examples section with plenty of useful snippets for manipulating all the well-known AWS products.


Reserved Instances mystery solved


AWS is an amazing piece of cloud, but the documentation is not always clear. I’ve been scratching my head trying to understand how Reserved Instances pricing was applied to actual instances. First I was searching for a “Launch a Reserved Instance” button, or even an “Associate this Reserved Instance” action, but no, nothing. I found the official documentation to be quite evasive, so I took my chance on the ##aws IRC channel; there I found a very friendly community that explained to me (and to many others after me) the simple truth: it’s all automagic!

In short: once you have purchased a Reserved Instance, any running instance matching that RI’s properties will be billed at the reserved rate. That simple.

aws cli and jq filtering


Long time no see huh? ;)

I’ve been diving into Amazon Web Services for some months now, and I must say I’m pretty impressed by the overall quality. Compared to the other “clouds” I’ve played with, it’s by far the most mature and comprehensive.

While writing a couple of tools to make my life easier, there’s one piece that took me longer: filtering the output of the aws ec2 describe-instances command. The output is in JSON, which you might say is quite nice, and it is; but when it comes to interacting with JSON on the command line, things can get a little messy.

There’s a fantastic tool around to ease the process: its name is jq. While its basic usage is pretty straightforward, things get way more complicated when ordering and filtering the data coming out of the aws command line.

Here are a couple of examples that I would have loved someone to have written before me:

aws ec2 describe-instances | jq '.Reservations[].Instances[]'

That command will output the full list of instances and their properties.

aws ec2 describe-instances | jq -r '.Reservations[].Instances[].InstanceId'

This will output your instance id list in raw (-r) format.

Now what about listing your instance ids, followed by the names you gave in the Name tag? Sounds easy, right? Well, let’s see:

aws ec2 describe-instances | jq -r '.Reservations[].Instances[]|.InstanceId + " " + (.Tags[]|select(.["Key"] == "Name")|.Value)'

Yeah, right, now you get what I meant. You might want to crawl jq’s official documentation, but honestly it served me very little compared to this fantastic post written by a guy with much more patience than I have :)
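When the jq one-liner gets unwieldy, the same extraction is a few readable lines of Python on top of the same JSON (a sketch; the sample payload below is made up and heavily truncated):

```python
import json

# Made-up, minimal sample of `aws ec2 describe-instances` output.
sample = json.loads("""
{"Reservations": [{"Instances": [
  {"InstanceId": "i-0abc", "Tags": [{"Key": "Name", "Value": "web-1"}]},
  {"InstanceId": "i-0def", "Tags": [{"Key": "env", "Value": "prod"}]}
]}]}
""")

def id_and_name(doc):
    """Mimic the jq filter: instance id plus the value of its Name tag.

    Like the jq select(), instances without a Name tag are skipped.
    """
    out = []
    for r in doc['Reservations']:
        for i in r['Instances']:
            name = next((t['Value'] for t in i.get('Tags', [])
                         if t['Key'] == 'Name'), None)
            if name is not None:
                out.append('{0} {1}'.format(i['InstanceId'], name))
    return out

print('\n'.join(id_and_name(sample)))  # → i-0abc web-1
```

Feeding it the real output is just `aws ec2 describe-instances | ./script.py` with a `json.load(sys.stdin)` instead of the sample.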

Hope this helps!
