Building a Jekyll Environment with NixOS

So there is this idea with NixOS to install only the very base system in the global environment and augment it using development environments. And as I’m creating this blog using GitHub Pages aka Jekyll, writing in Markdown, and would like to be able to preview any changes locally, I of course need Jekyll running locally. Jekyll is even on nixpkgs, … but there are Jekyll plugins which aren’t bundled with this package and are essential for correct rendering of e.g. @-mentions and source code blocks.

… so the obvious step was to create such a NixOS development environment, which has Ruby 2.2, Jekyll and all the required plugins installed. Turns out there even is a github-pages gem, so we just need to “package” that. Packaging Ruby gems is actually pretty straightforward, …

so first let’s create a minimal Gemfile:

source 'https://rubygems.org'
gem 'github-pages'

The GitHub page adds , group: :jekyll_plugins to that line, … I had to remove it, otherwise nix-shell complains that it cannot find the Jekyll gem file once you try to run it (later).

Then we need to create Gemfile.lock by running bundler (from within a nix-shell that has bundler):

$ nix-shell -p bundler
$ bundler package --no-install --path vendor
$ rm -rf .bundler vendor
$ exit  # leave nix-shell

… and derive a Nix expression from Gemfile.lock like so (be sure not to accidentally run this command from within the other nix-shell, as it would otherwise fail with strange SSL errors):

$ $(nix-build '<nixpkgs>' -A bundix)/bin/bundix
$ rm result   # nix-build created this (linking to bundix build)

… and last but not least we need a default.nix file which actually triggers the environment creation and also automatically starts jekyll serve after build:

with import <nixpkgs> { };

let jekyll_env = bundlerEnv rec {
    name = "jekyll_env";
    ruby = ruby_2_2;
    gemfile = ./Gemfile;
    lockfile = ./Gemfile.lock;
    gemset = ./gemset.nix;
  };
in
  stdenv.mkDerivation rec {
    name = "jekyll_env";
    buildInputs = [ jekyll_env ];

    shellHook = ''
      exec ${jekyll_env}/bin/jekyll serve --watch
    '';
  }

Take note of the exec in shellHook, which replaces the shell that nix-shell is about to start with Jekyll itself, so once you stop it by pressing C-c the environment is immediately closed as well.

So we’re now ready to just start it all:

[stesie@faulobst:~/Projekte/]$ nix-shell 
Configuration file: /home/stesie/Projekte/
            Source: /home/stesie/Projekte/
       Destination: /home/stesie/Projekte/
 Incremental build: enabled
                    done in 0.147 seconds.
 Auto-regeneration: enabled for '/home/stesie/Projekte/'
Configuration file: /home/stesie/Projekte/
    Server address:
  Server running... press ctrl-c to stop.

On Replacing Ubuntu with NixOS (part 1)

After I heard a great talk (at FrOSCon) given by @fpletz on NixOS, which is a Linux distribution built on top of the purely functional Nix package manager, … and I am on holiday this week … I decided to give it a try.

So I backed up my homedir and started replacing my Ubuntu installation, without much of a clue about NixOS … just being experienced with other more or less custom Linux installations (like Linux From Scratch back in the day, various Gentoo boxes, etc.)

Here’s my report and a collection of first experiences with my own very fresh installation, underlining findings which seem important to me. This is the first part (and I intend to add at least two more: one on package customisation and one on development environments)…

Requirements & Constraints

  • my laptop: Thinkpad X220, 8 GB RAM, 120 GB SSD
  • NixOS replacing Ubuntu (preserving nothing)
  • fully encrypted root filesystem & swap space (LUKS)
  • the i3 improved tiling window manager, along with screen locking et al.

Starting out

First read NixOS’ manual, at least the first major chapter (Installation) and the Wiki page on Encrypted Root on NixOS.

On NixOS’ download page there are effectively two choices, a graphical live CD and a minimal installation image. The Thinkpad X220 doesn’t have a CD-ROM drive, and the only USB stick I could find was just a few megabytes too small to fit the graphical live CD … so I went with the minimal installation …

The first steps are fairly common: use fdisk to create a new partition table, add a small /boot partition and another big one taking the rest of the space (for LUKS). Then configure LUKS like this:

$ cryptsetup luksFormat /dev/sda2
$ cryptsetup luksOpen /dev/sda2 crypted

… and create an LVM volume group from it, plus three logical volumes (swap, root filesystem and /home):

$ pvcreate /dev/mapper/crypted
$ vgcreate cryptedpool /dev/mapper/crypted
$ lvcreate -n swap cryptedpool -L 8GB
$ lvcreate -n root cryptedpool -L 50GB
$ lvcreate -n home cryptedpool -L 20GB

… last but not least format those partitions …

$ mkfs.ext2 /dev/sda1
$ mkfs.ext4 /dev/cryptedpool/root
$ mkfs.ext4 /dev/cryptedpool/home
$ mkswap /dev/cryptedpool/swap

… and finally mount them …

$ mount /dev/cryptedpool/root /mnt
$ mkdir /mnt/{boot,home}
$ mount /dev/sda1 /mnt/boot
$ mount /dev/cryptedpool/home /mnt/home
$ swapon /dev/cryptedpool/swap

Initial Configuration

So we’re now ready to install … with other distributions we would now just launch the installer application. Not so with NixOS however; it expects you to create a configuration file first … to simplify this it provides a small generator tool:

$ nixos-generate-config --root /mnt

… which generates two files in /mnt/etc/nixos:

  • hardware-configuration.nix which mainly lists the required mounts
  • configuration.nix, which after all is the config file you’re (primarily) expected to edit

The installation image comes with nano pre-installed, so let’s use it to modify the hardware-configuration.nix file and amend some filesystem options:

  • configure root filesystem to not store access times and enable discard
  • configure /home to support discard as well

… so let’s add option blocks (and leave the rest untouched):

  fileSystems."/" =
    { device = "/dev/disk/by-uuid/9d347599-a960-4076-8aa3-614bb9524322";
      fsType = "ext4";
      options = [ "noatime" "nodiratime" "discard" ];
    };

  fileSystems."/home" =
    { device = "/dev/disk/by-uuid/d4057681-2533-41b0-9175-18f134d7401f";
      fsType = "ext4";
      options = [ "discard" ];
    };

I have enabled (online) discard on the encrypted filesystems as well as on the LUKS device (see below), also known as TRIM support. TRIM tells the SSD hardware which parts of the filesystem are unused and hence benefits wear leveling. However using discard in combination with encrypted filesystems makes some information (which blocks are unused) leak through the full disk encryption … an attacker might e.g. guess the filesystem type from the pattern of unused blocks. For me this doesn’t matter much but YMMV ;-)

So let’s continue and edit (read: add more stuff to) configuration.nix. First some general system configuration:

  # Define on which hard drive you want to install Grub.
  boot.loader.grub.device = "/dev/sda";

  # Tell initrd to unlock LUKS on /dev/sda2
  boot.initrd.luks.devices = [
    { name = "crypted"; device = "/dev/sda2"; preLVM = true; allowDiscards = true; }
  ];

  networking.hostName = "faulobst"; # Define your hostname.

  # create a self-resolving hostname entry in /etc/hosts
  networking.extraHosts = " faulobst";

  # Let NetworkManager handle Network (mainly Wifi)
  networking.networkmanager.enable = true;

  # Select internationalisation properties.
  i18n = {
    consoleFont = "Lat2-Terminus16";
    defaultLocale = "en_US.UTF-8";

    # keyboard layout for your Linux console (i.e. off X11), dvp is for "Programmer Dvorak",
    # if unsure pick "us" or "de" :)
    consoleKeyMap = "dvp";
  };

  # Set your time zone.
  time.timeZone = "Europe/Berlin";

  # Enable the X11 windowing system.
  services.xserver.enable = true;
  services.xserver.layout = "us";
  services.xserver.xkbVariant = "dvp";  # again, pick whichever layout you'd like to have
  services.xserver.xkbOptions = "lv3:ralt_switch";

  # Use i3 window manager
  services.xserver.windowManager.i3.enable = true;


Of course our Linux installation should have some software installed. Unlike other distributions, where you typically install software every now and then and thus gradually mutate system state, NixOS allows a more declarative approach: you just list all system packages you’d like to have. Once you don’t want to have a package anymore you simply remove it from the list and Nix will arrange that it’s not available any longer.

Long story short, you have this central list of system packages (which you can also modify any time later) you’d like to have installed and NixOS will ensure they are installed:

… here is my current selection:

  environment.systemPackages = with pkgs; [
    libxml2 # xmllint
    psmisc # pstree, killall et al

    i3 i3lock i3status dmenu
    networkmanagerapplet networkmanager_openvpn

    # gtk icons & themes
    gtk gnome.gnomeicontheme hicolor_icon_theme shared_mime_info

    dunst libnotify
  ];


… and we need to configure our X session startup …

  • I went with xfsettingsd, the settings daemon from Xfce (inspiration from here)
  • NetworkManager applet, sitting in the task tray
  • xautolock to lock the screen (using i3lock) after 1 minute (including a notification 10 seconds before actually locking the screen)
  • xss-lock to lock the screen on suspend (including keyboard hotkey)

  services.xserver.displayManager.sessionCommands = ''
    # Set GTK_PATH so that GTK+ can find the Xfce theme engine.
    export GTK_PATH=${pkgs.xfce.gtk_xfce_engine}/lib/gtk-2.0

    # Set GTK_DATA_PREFIX so that GTK+ can find the Xfce themes.
    export GTK_DATA_PREFIX=${config.system.path}

    # Set GIO_EXTRA_MODULES so that gvfs works.
    export GIO_EXTRA_MODULES=${pkgs.xfce.gvfs}/lib/gio/modules

    # Launch xfce settings daemon.
    ${pkgs.xfce.xfce4settings}/bin/xfsettingsd &

    # Network Manager Applet
    ${pkgs.networkmanagerapplet}/bin/nm-applet &

    # Screen Locking (time-based & on suspend)
    ${pkgs.xautolock}/bin/xautolock -detectsleep -time 1 \
                -locker "${pkgs.i3lock}/bin/i3lock -c 000070" \
                -notify 10 -notifier "${pkgs.libnotify}/bin/notify-send -u critical -t 10000 -- 'Screen will be locked in 10 seconds'" &
    ${pkgs.xss-lock}/bin/xss-lock -- ${pkgs.i3lock}/bin/i3lock -c 000070 &
  '';

User Configuration

… and our system needs a user account of course :)

NixOS allows for “mutable users”, i.e. you are allowed to create, modify and delete user accounts at runtime (including changing a user’s password). Alternatively you can disable mutable users and control user accounts from configuration.nix. As NixOS is about system purity I went with the latter approach, so some more statements for the beloved configuration.nix file:

  users.mutableUsers = false;

  users.extraUsers.stesie = {
    isNormalUser = true;
    home = "/home/stesie";
    description = "Stefan Siegl";
    extraGroups = [ "wheel" "networkmanager" ];
    hashedPassword = "$6$VInXo5W.....$dVaVu.....cmmm09Q26r/";
  };

… and finally we’re ready to go:

$ nixos-install

… if everything went well, just reboot and say hello to your new system. If you’ve mis-typed something, fear not: simply fix it and re-run nixos-install.

After you’ve finished the installation and rebooted into your new system you can always come back and further modify the configuration.nix file; just run nixos-rebuild switch to apply the changes.

Starting a local developer meetup

As Ansbach (and the region around it) has neither a vibrant developer community nor a (regular) meetup to attract people to share their knowledge, mainly @niklas_heer and I felt we had to get active…

Therefore we came up with the idea to host a monthly meetup named /dev/night at the @Tradebyte office (from August on, regularly every 2nd Tuesday evening), give a short talk to provide food for thought and afterwards tackle a small challenge together.

… looking for some initial topics we noticed that patterns are definitely useful to stay on track, and that there are many good ones beyond the good old GoF patterns. And as both of us work for an eCommerce middleware provider we came to eCommerce patterns … and finally decided to go with transactional patterns for the first meeting.

So yesterday @niklas_heer gave a small presentation on what ACID really means and why it is useful beyond database management system design (ever thought of implementing an automated teller machine? or, to stick with eCommerce, what about fetching DHL labels from a web service where you’re immediately charged for them? You definitely want to make sure you don’t fetch them twice if two requests hit your system simultaneously). Besides that he showed how to use two-phase commit to construct composite transactions from multiple smaller ACID-compliant transactions, and how this can aid (i.e. simplify) your system’s architecture.

As a challenge we thought of implementing a fictitious, distributed Club Mate vending machine … where you’ve got one central “controller” service that drives another (remote) service doing the cash handling (collecting money and providing change as needed), as well as a Club Mate dispensing service (that of course also tracks its stock). Obviously it is the controller’s task to make sure that no Mate is dispensed if money collection fails, nor should the customer be charged if there’s not enough stock left.

… this story feels a bit contrived, but it fits the two-phase commit idea well and also suits the microservice bandwagon :-)
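
To make the two-phase commit idea a bit more tangible, here’s a toy sketch in JavaScript (all names invented for illustration, this is not the meetup code): the coordinator first asks every participant to prepare, and only commits once all of them have voted “yes”:

```javascript
// Toy two-phase commit coordinator (illustration only, names invented).
// Phase 1: every participant must "prepare" (reserve resources, vote yes/no).
// Phase 2: commit everywhere, or roll back everything prepared so far.
function twoPhaseCommit(participants) {
    const prepared = [];

    for (const p of participants) {
        if (!p.prepare()) {
            // one participant voted "no" -> undo all earlier reservations
            prepared.forEach(q => q.rollback());
            return false;
        }
        prepared.push(p);
    }

    prepared.forEach(p => p.commit());
    return true;
}

// e.g. a cash handler that only votes "yes" if enough money was inserted
function makeCashHandler(inserted, price) {
    return {
        prepare:  () => inserted >= price,
        commit:   () => { /* keep the money, dispense change */ },
        rollback: () => { /* return the coins */ },
    };
}
```

The Club Mate dispenser would be a second participant, reserving one bottle in prepare and only decrementing its stock on commit; that way neither service has to know about the other’s failure modes.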


Our learnings:

  • the challenge we came up with was (again) too large – quite like last Thursday when I was hosting the Pig Latin kata in Nuremberg … team forming and getting the infrastructure working took way longer than expected (one team couldn’t even start to implement the transaction handling, as they got lost in details earlier on)
  • after all, implementing a distributed system was fun, even though we couldn’t do a final test drive together (as not all of the services were feature complete)
  • … and it’s a refreshing difference to “just doing yet another kata”
  • the chosen topic Transactional Patterns turned out to be a good one, @sd_alt told us that he recently implemented some logic which would have benefitted from this pattern
  • one participant was new to test-driven development (hence his primary takeaway was how to do that with PHP and phpspec/codeception)
  • this also emphasises that we should address developers not familiar with TDD in our invitation (and should try not to scare them away by asking to bring laptops with an installed TDD-ready environment with them)
  • for visitors from Nuremberg 6:30 was too early, they asked to start at 7 o’clock
  • all participants want us to carry on :-)

… so the next /dev/night is going to take place on September 13, 2016 at 7:10 p.m. The topic is going to be the Command Query Responsibility Segregation pattern and Event Sourcing.

Pig Latin Kata

Yesterday I ran the Pig Latin kata at the local software craftsmanship meetup in Nuremberg. Picking Pig Latin as the kata to do was more a coincidence than planned, but it turned out to be an interesting choice.

So what I prepared were four user stories (of which we only tackled two; one team did three), going like this:

(if you’d like to do the kata refrain from reading ahead and do one story after another)

Pig Latin is an English language game that alters each word of a phrase/sentence, individually.

Story 1:

  • a phrase is made up of several words, all lowercase, split by a single space
  • if the word starts with a vowel, the transformed word is simply the input + “ay” (e.g. “apple” -> “appleay”)
  • in case the word begins with a consonant, the consonant is first moved to the end, then the “ay” is appended likewise (e.g. “bird” -> “irdbay”)
  • test case for a whole phrase (“a yellow bird” -> “aay ellowyay irdbay”)

Story 2:

  • handle consonant clusters “ch”, “qu”, “th”, “thr”, “sch” and any consonant + “qu” at the word’s beginning like a single consonant (e.g. “chair” -> “airchay”, “square” -> “aresquay”, “thread” -> “eadthray”)
  • handle “xr” and “yt” at the word’s beginning like vowels (“xray” -> “xrayay”)

Story 3:

  • uppercase input should yield uppercase output (i.e. “APPLE” -> “APPLEAY”)
  • also titlecase input should be kept intact, the first letter should still be uppercase (i.e. “Bird” -> “Irdbay”)

Story 4:

  • handle commas, dashes, fullstops, etc. well

The End. Don’t read on if you’d like to do the kata yourself.
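
(Spoiler warning.) For comparison, here’s a compact JavaScript sketch of stories 1 and 2, roughly how I solved them myself — certainly not the only way to do it:

```javascript
// Pig Latin, stories 1 and 2 (my own quick take, not a reference solution).
const clusters = ['sch', 'thr', 'ch', 'qu', 'th'];

function translateWord(word) {
    // story 2: "xr" and "yt" behave like vowels
    if (/^([aeiou]|xr|yt)/.test(word)) {
        return word + 'ay';
    }

    // story 2: clusters (incl. consonant + "qu", e.g. "squ") move as one unit;
    // story 1: otherwise just the single leading consonant moves
    const match = word.match(/^([^aeiou]qu)/) ||
                  word.match(new RegExp('^(' + clusters.join('|') + ')')) ||
                  word.match(/^([^aeiou])/);
    const head = match[1];
    return word.slice(head.length) + head + 'ay';
}

function translatePhrase(phrase) {
    return phrase.split(' ').map(translateWord).join(' ');
}
```

translatePhrase('a yellow bird') yields 'aay ellowyay irdbay', matching the test case from story 1.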


When I ran this kata at the Softwerkskammer meetup we had eight participants, who interestingly formed three groups (mostly three people each) instead of, say, four pairs. The chosen languages were Java, JavaScript (ES6) and (thanks to Gabor) Haskell :-)

… the Haskell group unfortunately didn’t do test-first development, but I think even if they had, they’d still have been the fastest team. Since the whole kata is about data transformation, the functional aspects really pay off here. What I found really interesting about their implementation of story 3 was that they kept their transformation function for lowercase words unmodified (as I would have expected), but beforehand detected the word’s case and built a pair consisting of the lowercase word plus a transformation function to restore the casing afterwards. When I did the kata on my own I kept the case in a variable and then used some conditionals (which I think is a bit less elegant) …
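
Transcribed to JavaScript, their trick looks roughly like this (my own reconstruction of the idea, not their Haskell code):

```javascript
// Split a word into its lower-case form plus a function that restores
// the original casing afterwards (idea borrowed from the Haskell group).
function splitCase(word) {
    if (word.length > 1 && word === word.toUpperCase()) {
        return [word.toLowerCase(), w => w.toUpperCase()];                  // "APPLE"
    }
    if (word[0] === word[0].toUpperCase()) {
        return [word.toLowerCase(), w => w[0].toUpperCase() + w.slice(1)];  // "Bird"
    }
    return [word, w => w];                                                  // "bird"
}

// usage: run the plain lower-case transform, then re-apply the casing
const [lower, restore] = splitCase('Bird');   // lower === 'bird'
const result = restore('irdbay');             // 'Irdbay'
```

The lower-case transformation function never needs to know about casing at all, which is exactly why their story 1/2 code could stay untouched.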

Besides that feedback was positive and we had a lot of fun doing the kata.

… and as a facilitator I underestimated how long it takes the pairs/teams to form, choose a test framework and get started. I had actually done the kata myself with a stopwatch beforehand, measuring how long each step would take, as I was nervous that my four stories wouldn’t be enough :-) … turns out we spent more time exercising and didn’t even finish all stories.


Serverless JS-Webapp Pub/Sub with AWS IoT

I’m currently very interested in serverless (aka no dedicated backend required) JavaScript web applications … with AWS S3, Lambda & API Gateway you can actually get pretty far.
Yet there was one thing I didn’t know how to do: Pub/Sub, or “realtime messaging”.

Realtime messaging allows you to build web applications that can instantly receive messages published by another application (or the same one running in a different person’s browser). There even are cloud services offering exactly this, e.g. Realtime Messaging Platform and PubNub Data Streams.

However, having recently played with AWS Lambda and S3, I was wondering how this could be achieved on AWS … and at first it seemed like it really isn’t possible. Especially the otherwise very interesting article Receiving AWS IoT messages in your browser using websockets by @jtparreira misled me, as it claims this isn’t possible. The article was published in Nov 2015, … not so long ago. But it turns out it’s outdated anyway …

Enter AWS IoT

While reading I stumbled over AWS IoT, which allows connecting “Internet of Things” devices to the AWS cloud and furthermore provides messaging between those devices. It has a message broker (aka Device Gateway) sitting in the middle and “things” around it that connect to it. It’s based on the MQTT protocol and there are SDKs for the Raspberry Pi (Node.js), Android & iOS … sounds interesting, but not at all like “web browsers”.

MQTT over Web Sockets

Then I found an announcement: AWS IoT Now Supports WebSockets published Jan 28, 2016.
Brand new, but sounds great :)

… so even if IoT still sounds like a strange place to do Pub/Sub, it looks like the way to go.

Making it work

For the proof of concept I didn’t mind publishing AWS IAM user keys with the web application (of course this is a smell to be fixed before production use). So I went to “IAM” in the AWS management console and created a new user first, attaching the pre-defined AWSIoTDataAccess policy.

So the proof of concept should involve a simple web page that allows establishing a connection to the broker and features a text box where a message can be typed, plus a publish button. If two browsers are connected simultaneously, both should immediately receive messages published by either of them.

Required parts: we of course need an MQTT client, and we need to do AWS-style request signing in the browser. NPM modules to the rescue:

  • aws-signature-v4 does the signature calculation
  • crypto helps it along, plus does some extra hashing we need
  • mqtt has an MqttClient

… all of them have browser support through webpack. So we just need some more JavaScript to string everything together. To set up the connection:

let client = new MqttClient(() => {
    const url = v4.createPresignedURL(
        'GET',
        AWS_IOT_ENDPOINT_HOST.toLowerCase(),
        '/mqtt',
        'iotdevicegateway',
        crypto.createHash('sha256').update('', 'utf8').digest('hex'),
        {
            'key': AWS_ACCESS_KEY,
            'secret': AWS_SECRET_ACCESS_KEY,
            'protocol': 'wss',
            'expires': 15
        }
    );

    return websocket(url, [ 'mqttv3.1' ]);
});

… here createPresignedURL from aws-signature-v4 does the heavy lifting for us. We tell it the IoT endpoint address and protocol, plus AWS credentials, and it provides us with the signed URL to connect to.

There was just one stumbling block for me: I had upper-case letters in the hostname (as output by the aws iot describe-endpoint command); the module however doesn’t convert these to lower case as expected by AWS’ V4 signing process … and because of that, access was denied at first.

Having the signed URL we simply pass it on to a websocket-stream and create a new MqttClient instance around it.

Connection established … time to subscribe to a topic. Turns out to be simple:

client.on('connect', () => client.subscribe(MQTT_TOPIC));

Handling incoming messages … also easy:

client.on('message', (topic, message) => console.log(message.toString()));

… and last but not least publishing messages … trivial again:

client.publish(MQTT_TOPIC, message);

… that’s it :-)

My proof of concept

here’s what it looks like:

screenshot of demo web page

… the last incoming message was published from another browser running the exact same application.

I’ve published my source code as a Gist on GitHub, feel free to re-use it.

To try it yourself:

  • clone the Gist
  • adjust the constants declared at the top of main.js as needed
    • create a user in IAM first, see above
    • for the endpoint host run aws iot describe-endpoint CLI command
  • run npm install
  • run ./node_modules/.bin/webpack-dev-server --colors

Next steps

This was just the first (big) part. There’s more stuff left to be done:

  • hard-coding AWS credentials into the application source is not the way to go, and neither is publishing the secret key at all
  • … one possible approach would be to use the API Gateway + Lambda to create pre-signed URLs
  • … this could be further limited by using IAM roles and temporary identity federation (through STS, the Security Token Service)
  • there’s no user authentication yet, this should be achievable with AWS Cognito
  • … with that, publishing/subscribing could be limited to identity-related topics (depends on the use case)

Heroku custom platform repo for V8Js

Yesterday @dzuelke poked me to migrate the old PHP buildpack adjusted for V8Js to the new custom platform repo infrastructure. The advantage is that the custom platform repo now only contains the v8js extension packages; the rest (i.e. Apache and PHP itself) is pulled from the lang-php bucket, aka the normal PHP buildpack.

As I already had that on my TODO list, I just immediately did that :-)

… so here’s the new heroku-v8js GitHub repository that has all the build formulas. Besides that there now is an S3 bucket heroku-v8js that stores the pre-compiled V8Js extensions for PHP 5.5, 5.6 and 7.0, along with the packages.json file.

To use with Heroku, just run


with Dokku:


replacing Huginn with λ

I used to self-host the Ruby application Huginn, which is some kind of IFTTT on steroids. That is, it allows you to configure so-called agents that perform certain tasks online, automatically. One of those tasks was to regularly scrape the Firefox website for the latest Firefox version number (which happens to be a data attribute on the html element, by the way), take only the major version number, compare it to the most recent known value (aka the last crawl cycle) and send an email notification if it changed. I wanted that notification so I could test, update & release Geierlein.

The thing is that this worked really well (I had it running for almost a year), … nevertheless I decided to cut down on (many) self-hosted projects (saving time on hosting, constant updating, etc. to have more time for honing my software development skills). But I still needed those notifications, so I had to find an alternative … and I found it in AWS Lambda.

(actually I’ve been interested in Lambda since they had it in private beta, I even applied for the beta program, … but never really used it as I had no idea what to do with it back then)

So my all-AWS approach involves

  • a CloudWatch scheduler event that triggers AWS Lambda
  • AWS Lambda doing the web scraping & flow control
  • S3 to persist the last known major version number
  • SES (Simple Email Service) to send the e-mail notification

I’ve used S3 and configured stuff with IAM before, and SES is really straightforward, so actually only Lambda was new to me. The learning curve is okay-ish, as the AWS documentation guides you in the right direction and Google + StackOverflow help with the rest. If you’ve never used AWS services before, the learning curve might be a bit steeper (mainly because of IAM) …

All in all I got it working within two or maybe three hours … and it just works now :)
… with nothing left for me to host anymore
… and actually everything for free (as Lambda & SES stay within the free usage quota and the single S3 object’s cost is negligible)

In case you want to follow along, here’s my …

step by step guide

under IAM service …

  • create an AWS user with API keys to do local development (using the AWS root account is undesirable)
  • grant that user the necessary permissions
    • managed policy AWSLambdaFullAccess (that includes full access to logs & S3)
    • yet it doesn’t include the right to send e-mails via SES, therefore create a user policy like

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1459031930000",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

under S3 service …

  • create a new Bucket to be used with Lambda, I picked lambdabrain (so pick something else)

again under IAM service …

  • create an AWS role, to be used by our lambda function later on
  • choose AWS Lambda from AWS Service Roles in Step 2 of the assistant, then attach AWSLambdaBasicExecutionRole policy
  • do not attach the AWSLambdaExecute managed policy, as it includes read/write access to all objects of all your S3 buckets
  • last but not least add a custom role policy to grant rights on the newly created S3 bucket + ses:SendEmail with

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::lambdabrain"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::lambdabrain/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

… it turns out s3:ListBucket is actually needed to initially create the persistence object.

under AWS SES

  • validate your mail domain (so you can send mails to yourself)
  • if you would like to send mails to other domains you also need to request a limit increase

After setting up the AWS CLI it’s finally time to (locally) create a Node.js application (the Lambda function to be).

  • create a new folder
  • … and an initial package.json file like this:

{
  "name": "firefox-version-notifier",
  "version": "0.0.1",
  "description": "firefox version checker & notifier",
  "main": "index.js",
  "dependencies": {
    "promise": "^7.1.1",
    "scrape": "^0.2.3"
  },
  "devDependencies": {
    "node-lambda": "^0.7.1",
    "aws-sdk": "^2.2.47"
  },
  "author": "Stefan Siegl <>",
  "license": "MIT"
}

I used promises throughout my code, and scrape to do the web scraping.

  • aws-sdk is actually needed in production as well; still I declared it under devDependencies, as it is available globally on AWS Lambda and hence need not be included in the ZIP archive upload later on.
  • node-lambda is a neat tool to assist development for AWS Lambda

  • run npm install and ./node_modules/.bin/node-lambda setup
  • configure node-lambda through the newly created .env file as needed
    • AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY of the IAM user from above
    • AWS_ROLE_ARN is the full role ARN (from above)
    • AWS_HANDLER=index.handler (index because of the index.js file name, handler will be the exported name in there)

Here’s my straightforward code, … it definitely deserves some more love, but it’s just a glorified shell script …
Adapt the name of the S3 bucket and the e-mail addresses (sender and receiver) of course.

var Promise = require('promise');
var AWS = require('aws-sdk');
var scrape = Promise.denodeify(require('scrape').request);

var brain = new AWS.S3({ params: { Bucket: 'lambdabrain' }});
var ses = new AWS.SES();

function getCurrentFirefoxVersion() {
	return scrape('')
		.then(function($) {
			var currentFirefoxVersion = $('html')[0].attribs['data-latest-firefox'].split(/\./)[0];
			console.log('current firefox version: ', currentFirefoxVersion);
			return currentFirefoxVersion;
		});
}

function getBrainValue(key) {
	return new Promise(function(resolve, reject) {
		brain.getObject({ Key: key })
		.on('success', function(response) {
			resolve(;
		})
		.on('error', function(error, response) {
			if(response.error.code === 'NoSuchKey') {
				resolve(null);
			} else {
				reject(error);
			}
		})
		.send();
	});
}

function setBrainValue(key, value) {
	return new Promise(function(resolve, reject) {
		brain.putObject({ Key: key, Body: value })
		.on('success', function(response) {
			resolve(response);
		})
		.on('error', function(error) {
			reject(error);
		})
		.send();
	});
}

function sendNotification(subject, message) {
	return new Promise(function(resolve, reject) {
		ses.sendEmail({
			Source: '',
			Destination: { ToAddresses: [ '' ] },
			Message: {
				Subject: { Data: subject },
				Body: {
					Text: { Data: message }
				}
			}
		})
		.on('success', function(response) {
			resolve(response);
		})
		.on('error', function(error, response) {
			console.log(error, response);
			reject(error);
		})
		.send();
	});
}

exports.handler = function(event, context) {
	Promise.all([ getCurrentFirefoxVersion(), getBrainValue('last-notified-firefox') ])
	.then(function(results) {
		if(results[0] === results[1]) {
			console.log('Firefox versions remain unchanged');
		} else {
			return sendNotification('New Firefox version!', 'Version: ' + results[0])
				.then(function() {
					return setBrainValue('last-notified-firefox', results[0]);
				});
		}
	})
	.then(function(results) {
		context.succeed();
	})
	.catch(function(error) {;
	});
};

  • exports.handler function initially creates an all-promise that (in parallel)
    • scrapes the Firefox website
    • fetches the S3 object
  • then compares the two and (if different) …
    • creates another promise to send a notification
    • … (if successful) then updates the S3 object
  • and finally marks the lambda function as successful (via context.succeed)

I really like how promises make it easy to parallelize things as well as make them depend on one another (S3:PutObject on SES:SendMail).

Run ./node_modules/.bin/node-lambda run to test the script locally. If it works, run ./node_modules/.bin/node-lambda deploy to upload it.

Back in the AWS console, now under “Lambda”

  • you should see the new function; click it and hit “Test” to try it on AWS
  • if it works, choose “Publish new version” from the “Actions” menu
  • under “Event sources” add a new event source, choose “CloudWatch Events - Schedule” and pick an interval (I chose daily)

V8Js: improved fluent setter performance

After fixing V8Js’ behaviour of not retaining the object identity of passed-back V8Object instances (i.e. always re-wrapping them instead of re-using the already existing object), I looked into how V8Js handles fluent setters (those that return $this at the end).

Unfortunately they weren’t handled well; V8Js always wrapped the same object again and again (in both directions). Functionality-wise that doesn’t make a big difference, since the underlying object is the same and further setters can still be called.
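
To illustrate what is meant by a fluent setter, here’s a minimal plain-JavaScript analogue (not V8Js code): every setter returns the receiver, and that returned value is exactly what V8Js used to re-wrap on every single call:

```javascript
// Minimal fluent setters: each one returns `this`, so calls can be chained.
class Point {
    setX(x) { this.x = x; return this; }
    setY(y) { this.y = y; return this; }
}

// chained calls all operate on the very same object
const p = new Point().setX(3).setY(4);
```

With object identity preserved, p.setX(5) === p holds; with the old behaviour each call across the PHP/JS boundary handed back a fresh wrapper around the same underlying object.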

But the wrapping code still takes some time – with simple “just store that” setters it’s approximately half of the total time. Here is a performance comparison of calling 200,000 minimalist fluent setters one after another:

performance comparison of old & new handling

Besides the performance gain it also keeps object identity intact; however, I assume no one ever stores the result of such a setter in a variable and compares it against another object, so that isn’t a big deal by itself.

The behaviour is changed with pull requests #220 and #221.