Life & much, much more

2018 New Year's resolutions

The first half of January’s resolution probation period has ended, and so it’s perhaps safe to post the goals for the year. In no particular order, here they are.

  • Blog more. There’s a lot that’s been happening, and very little of it sees the light of day online. There are plenty of projects to provoke, reflect upon, or just answer your organic search query. I used to blog a couple of times a week, but that slowly died down as life took over. It certainly shows in the analytics dashboard. By the end of the year, monthly sessions should equal the numbers seen in 2015. This means content creation, content creation, and more content creation. You can probably already see the refreshed mobile-friendly theme, the new categories, and a few posts already published.
  • Diversify. Financially, investors in their 20s can take a long-term view. This is the time to build up investing habits and experience different markets. By the end of the year, I would like to have invested in 20 different markets and begun to understand my risk profile. Last year I experienced managed funds and blue-chip stocks, and rode the cryptocurrency roller coaster. This year will bring more.
  • Consume intelligently. The environment is changing. Now is as good a time as any to build habits to be a more ethical consumer. We vote with our dollars, and it is our responsibility to support supply chains that promote good values in our society. Once we’ve consumed, we should break the disposable habit that arose sometime in the previous generation and move towards zero waste.
  • Improve digital security. The crypto boom is the public’s first taste of moving more traditional assets into a decentralised network. Unlike centralised systems, decentralised systems are very hard to kill. I foresee more of our digital lives being interconnected, even if we don’t realise it. It is important that we promote wider usage of privacy practices, such as password managers, secure protocols, self-hosted infrastructure, encryption, and signing.
  • Begin longer-term work and life. I’ve been in the architecture industry for a year and a half now after being primarily in software. It’s probably time to take the training wheels off and start specialising in an area of architecture that is socially beneficial. Similarly, despite the prohibitive housing costs here in Sydney, the ongoing market correction suggests it’s time to revisit settling down in the more traditional sense.

Until 2019, then.

Technical

Basic rendering tutorial with Radiance

Radiance is the most authoritative validated rendering engine out there. Unlike other rendering engines, which focus more on artistic license, Radiance focuses on scientific validation — that is, the results are not just physically based, they match what a physical optical sensor would measure from the equivalent real-world setup. This is great if you’d like to produce an image that not only looks photo-realistic, but actually matches what a similar setup in real life would look like. As you’d expect, this appeals to scientists, and to designers who work with light and materials.

In addition, Radiance is open-source, completely free, and follows the Unix philosophy. If you’ve used other tools that claim to do all of the above, they probably use Radiance under the hood anyway and rebrand it with a friendlier interface. However, working with Radiance directly gives you finer-grained control over the image you produce and, as I will likely write about in the future, lets you scale up to highly complex renders. Today, we’re going to dive into the guts of the tool itself and render a simple object. This can be a bit scary to those who are not particularly technical, and there’s not a lot of friendly material out there that doesn’t look like it’s from a 1000-page technical manual. Hopefully this walkthrough will focus on the more practical aspects without getting too bogged down in technicalities.

To begin, I’m going to assume you have Radiance installed and know how to open up a terminal window in your operating system of choice. If you haven’t got that far yet, go and install something simple like Ubuntu Linux and/or install Radiance. Radiance is not a program you double-click on to see a window with buttons and menus. Radiance is a collection of programs that work by typing in commands.

Let’s create a model first. Start with a simple mesh with a minimum of polygons. I am using Blender, which is another open-source, free, and Unix-friendly program. In this case, I have started with the default scene and arbitrarily replaced the default cube with a mesh of the Blender monkey mascot. I have also given the mesh a material, named white.

Default scene with Blender monkey

Using Blender is optional, of course, and you can use whatever 3D program you like. Radiance works with the OBJ format, which is an open format, plain text, and beautifully simple. So export the mesh to get yourself an OBJ file, which I have named model.obj. The accompanying exported model.mtl file is largely unnecessary right now: we will define our own materials in physical units, which the .mtl format is not designed for. When exporting, take care to export only the mesh, and ensure that the proper axes are facing up.
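To give you an idea of just how simple the format is, here is a sketch of what the top of an exported OBJ file might look like (the names and numbers here are made up for illustration):

# Blender v2.79 OBJ File
mtllib model.mtl
o Suzanne
v 0.437500 0.164062 0.765625
v -0.437500 0.164062 0.765625
v 0.500000 0.093750 0.687500
vn 0.6652 -0.2006 0.7192
usemtl white
f 1//1 2//1 3//1

Each v line is a vertex, each vn is a normal, and each f line stitches them together into a face.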

In the same directory that you have your model.obj and your model.mtl, let’s create a new file which will hold all the materials for your model. In this case, there is only one material, called white. So let’s create a new plain text file, called materials.rad and insert the following in it:

void plastic white
0
0
5 1 1 1 0 0

It’s the simplest possible material definition (and rather unrealistic, as it defines an RGB reflectance value of 1, 1, 1), but it’ll do for now. You can read about how “plastic” (i.e. non-metallic) materials are defined in the Radiance reference manual. In short, the first line says we are defining a plastic material called white, the two zeroes say that there are no string or integer arguments, and the last line says that there are 5 real parameters, with the values 1, 1, 1, 0, 0 respectively. The first three parameters are the R, G, and B reflectance of the material, and the last two are its specularity and roughness. This is all described in the Radiance manual, and so in the future it will serve you well to peruse the manual.
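As a slightly more plausible sketch (the numbers here are arbitrary, not measured values), a 50% grey with a touch of specularity and roughness could be defined as:

void plastic grey
0
0
5 0.5 0.5 0.5 0.05 0.05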

Now, open up a terminal window in the same folder where you have the model.obj and materials.rad files. We are going to run a Radiance program called obj2mesh, which will combine our OBJ with the material definitions we have provided in materials.rad and spit out a Radiance triangulated mesh .rtm file. Execute the following command:

$ obj2mesh -a materials.rad model.obj model.rtm

If it succeeds, you will see a new file in that same directory called model.rtm. You may see a few lines pop up with warnings, but as long as they are not fatal, you may safely disregard them. This .rtm file is special to Radiance, as Radiance does not work directly with the OBJ format.

Now, we will create a scene in Radiance and place our mesh within it. There will be no other objects in the scene. Let’s call it scene.rad, a simple text file with the following contents:

void mesh model
1 model.rtm
0
0

The first line simply defines a new mesh in the scene called model. The second line says that there is one string argument: the model.rtm file where the mesh can be found. The two zeroes say that there are no integer or real arguments for this mesh.
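As an aside, the mesh primitive also accepts extra string arguments, which are interpreted as a transform in the style of the xform program. So, as a hypothetical sketch (the rotation value is arbitrary), placing the same mesh rotated 30 degrees about the Z axis would look like:

void mesh model
3 model.rtm -rz 30
0
0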

Now, we will convert our scene into an octree, which is an efficient binary format (as opposed to all the simple text files we’ve been writing) that Radiance uses to do its calculations. We will run another Radiance program called oconv to do this. So open up your terminal window again and execute:

$ oconv scene.rad > scene.oct

You should now find a scene.oct file appear in the same folder as the rest of your files. This is the final file we send off to render. But before we do this final step, we will need to decide where our camera is. A camera in Radiance is defined by three parameters. The first parameter, vp, or view position, is the XYZ coordinate of the camera. The second parameter, vd, or view direction, is the XYZ vector that the camera is facing. The third parameter, vu, or view up, is the XYZ vector of where “up” is, so it knows if the camera is rotated or not. When specifying a parameter to Radiance, you will prefix the parameter name with a hyphen, followed by the parameter value. So, for a camera at the origin facing east (where +X is east and +Z is up), I can tell Radiance this by typing -vp 0 0 0 -vd 1 0 0 -vu 0 0 1.

Radiance camera definition

Calculating these vectors is a real pain unless your camera is in a really simple location and is aligned with the world axes, like in my previous example. However, here’s a fancy script you can run in Blender which will calculate the values for the camera named Camera.

import bpy
from mathutils import Vector

# Assumes your camera object is named 'Camera' (the Blender default)
cam = bpy.data.objects['Camera']
location = cam.location
# Rotate the camera's local up (+Y) and forward (-Z) axes into world space
up = cam.matrix_world.to_quaternion() * Vector((0.0, 1.0, 0.0))
direction = cam.matrix_world.to_quaternion() * Vector((0.0, 0.0, -1.0))

print(
    '-vp ' + str(location.x) + ' ' + str(location.y) + ' ' + str(location.z) + ' ' +
    '-vd ' + str(direction.x) + ' ' + str(direction.y) + ' ' + str(direction.z) + ' ' +
    '-vu ' + str(up.x) + ' ' + str(up.y) + ' ' + str(up.z)
)

The output will be in the Blender console window. For those on other programs, you’ve got homework to do.

Once you know your coordinates and vectors for vp, vd, and vu, let’s use the rpict Radiance program to render from that angle. Please replace my numbers given to the three camera parameters with your own in the command below. We will also specify -av 1 1 1, which tells Radiance to render with an ambient RGB light value of 1, 1, 1. Of course, in real life we don’t have this magical ambient light value, but as we haven’t specified any other lights in our scene, it’ll have to do. We will also specify -ab 2, which allows for 2 ambient bounces of light, just so that we have a bit of shading (if we didn’t have any light bounces, we would have a flat silhouette of our monkey).

$ rpict -vp 7.481131553649902 -6.5076398849487305 5.34366512298584 -vd -0.6515582203865051 0.6141704320907593 -0.44527149200439453 -vu -0.32401347160339355 0.3054208755493164 0.8953956365585327 -av 1 1 1 -ab 2 scene.oct > render.pic

Great, after the render completes, you should see a new file called render.pic in your folder. Let’s look at this image using the Radiance ximage program.

$ ximage render.pic

You should see something like the following:

Final Radiance render

One final step. It’s quite irksome and technical to run all of the commands for rpict, oconv and such, and so it’s much better to use the executive control program rad. rad allows you to write the intention of your render in simple terms, and it’ll work out most of the technical details for you. Of course, everything can be overridden. The rad program parses a .rif configuration file. I’ve included a sample one below, saved as scene.rif:

# Specify where the compiled octree should be generated
OCTREE=scene.oct
# Specify an (I)nterior or (E)xterior scene, along with the bounding box of the scene, obtainable via `getbbox scene.rad`
ZONE=E  -2.25546   4.06512  -3.15161   3.16896  -2.94847    3.3721
# A list of the rad files which make up our scene
scene=scene.rad
# Camera view options
view=-vp 7.481131553649902 -6.5076398849487305 5.34366512298584 -vd -0.6515582203865051 0.6141704320907593 -0.44527149200439453 -vu -0.32401347160339355 0.3054208755493164 0.8953956365585327
# Option overrides to specify when rendering
render=-av 1 1 1
# Choose how many bounces of indirect lighting to calculate (cf. -ab earlier)
INDIRECT=2
# Choose the quality of the image, from LOW, MEDIUM, or HIGH
QUALITY=HIGH
# Choose the resolution of mesh detail, from LOW, MEDIUM, or HIGH
DETAIL=HIGH
# Choose how much the light levels vary across the scene, from LOW, MEDIUM, or HIGH
VARIABILITY=MEDIUM
# Where to output the raw render
RAWFILE=output_raw.pic
# Where to output a filtered version of the render (scaled down for antialiasing, exposure correction, etc)
PICTURE=output.pic
# The time duration in minutes before reporting a status update of the render progress
REPORT=0.1

Execute rad scene.rif to get the results. If you’d like to interactively render it, on an X server you can run rad -o x11 scene.rif. I used the above .rif file and ran it against a higher resolution mesh, and I’ve included the results below.

Rad rendered image

All done! We’ve learned about bringing in an OBJ mesh with Radiance materials, placing them in a scene, and rendering it from a camera. Hope it’s been useful. Of course, our final image doesn’t look exactly great – this is because the material and lighting we have set are basically physically impossible. Similarly, the simulation we’ve run has been quite rudimentary. In the future, we’ll look at specifying a much more realistic environment.

Life & much, much more

Brand new Gentoo desktop computer

It’s 2018, and my trusty 5-year-old Thinkpad 420i has decided to overheat for the last time. After more than 10 years of laptops, I decided to go for a desktop. I spoke to a fellow at Linux Now, which supplies custom boxes with Linux preinstalled and is located in Melbourne, Australia (as of writing, no complaints with their service at all). A week later, I was booting up, and my old laptop was headed to the nearest e-waste recycling centre. Here’s the obligatory Larry cowsay:

$ cowsay `uname -a`
 _______________________________________ 
/ Linux dooby 4.12.12-gentoo #1 SMP Tue \
| Nov 28 09:55:21 AEDT 2017 x86_64 AMD  |
| Ryzen 5 1600X Six-Core Processor      |
\ AuthenticAMD GNU/Linux                /
 --------------------------------------- 
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Being a desktop machine, it lacks portability, but that is mitigated by the fact that you can run Gentoo on your phone. Combine your phone with a Bluetooth keyboard and mouse, and you have a full-on portable workstation. Meanwhile, your desktop will be much more powerful than your laptop, at half the price.

Tower machine

And of course, here are the hardware specs.

  • AMD Ryzen 5 1600X CPU (5 times faster than my laptop). As of writing, these Ryzens are experiencing some instability related to kernel bug 196683, but the workarounds in the bug report seem to solve it.
  • NVIDIA GeForce GTX 1050Ti GPU (16 times faster than my laptop). Yes, proprietary blob drivers are in use.
  • 16GB DDR4 2400MHz RAM
  • 250GB SSD
  • 23.6in 1920×1080 16:9 LCD Monitor
  • Filco Majestouch-2, Tenkeyless keyboard. If you’ve ever wanted a clean-cut, professional mechanical keyboard that isn’t as bulky as the IBM Model M, I’d highly recommend this one.

Filco Majestouch-2, Tenkeyless keyboard

Software-wise, it is running Gentoo Linux with KDE. The backup server is hosted by rsync.net. The worldfile is largely the same as my old laptop’s, with the addition of newsbeuter for RSS feeds. Happy 2018!

Technical

Stitching high resolution photos from NSW SIX Maps

Have you ever wanted to save out a super high resolution satellite photo from Google Maps or similar? Perhaps you’ve screenshotted satellite photos from your browser and then merged them together in your favourite photo editor like Photoshop or The GIMP. Well, in New South Wales, Australia, the government provides a GIS (geographic information system) service known as NSW SIX Maps.

A beautiful satellite photo from NSW SIX Maps of the Sydney Harbour area

Doing this manually is quite an annoying task, but here’s an automatic script that will do it for you right in your browser, saving up to a 10,000 by 10,000 pixel image to your computer. Yep, no need to download any further software. There are a couple of prerequisites, but it should work on almost any computer. Just follow the steps below to merge the tiles together. First, make sure you have Google Chrome as your browser.

  • Go to http://maps.six.nsw.gov.au/
  • Open up the browser inspector (Press ctrl-shift-i on Chrome), and click on the “Console” tab. This is where you will copy, paste, and run the scripts.
  • Zoom in to your desired resolution level (there are 20 stops available) at the top left of the bounding rectangle of the region you’d like to stitch.
  • Copy, paste and hit enter to run the code below.

This snippet increases the browser’s resource timing buffer, so that the URLs of all the tiles you load while panning are recorded (the stitching script later reads the tile coordinates back out of this buffer).

performance.setResourceTimingBufferSize(1000);

You should see “undefined” if it completes successfully.

  • Pan a little to the top-left to load another tile; this’ll set the top-left boundary coordinate.
  • Pan to the bottom-right coordinate, triggering a tile-load in the process. This’ll set the bottom-right boundary coordinate.
  • Copy, paste, and hit enter to run the code below.

This will give your browser the ability to save the resulting image to your computer. For the curious, it’s called a polyfill, and here is an explanation by the FileSaver author about what it does.

/* FileSaver.js
 * A saveAs() FileSaver implementation.
 * 1.3.2
 * 2016-06-16 18:25:19
 *
 * By Eli Grey, http://eligrey.com
 * License: MIT
 *   See https://github.com/eligrey/FileSaver.js/blob/master/LICENSE.md
 */

/*global self */
/*jslint bitwise: true, indent: 4, laxbreak: true, laxcomma: true, smarttabs: true, plusplus: true */

/*! @source http://purl.eligrey.com/github/FileSaver.js/blob/master/FileSaver.js */

var saveAs = saveAs || (function(view) {
    "use strict";
    // IE <10 is explicitly unsupported
    if (typeof view === "undefined" || typeof navigator !== "undefined" && /MSIE [1-9]\./.test(navigator.userAgent)) {
        return;
    }
    var
          doc = view.document
          // only get URL when necessary in case Blob.js hasn't overridden it yet
        , get_URL = function() {
            return view.URL || view.webkitURL || view;
        }
        , save_link = doc.createElementNS("http://www.w3.org/1999/xhtml", "a")
        , can_use_save_link = "download" in save_link
        , click = function(node) {
            var event = new MouseEvent("click");
            node.dispatchEvent(event);
        }
        , is_safari = /constructor/i.test(view.HTMLElement) || view.safari
        , is_chrome_ios = /CriOS\/[\d]+/.test(navigator.userAgent)
        , throw_outside = function(ex) {
            (view.setImmediate || view.setTimeout)(function() {
                throw ex;
            }, 0);
        }
        , force_saveable_type = "application/octet-stream"
        // the Blob API is fundamentally broken as there is no "downloadfinished" event to subscribe to
        , arbitrary_revoke_timeout = 1000 * 40 // in ms
        , revoke = function(file) {
            var revoker = function() {
                if (typeof file === "string") { // file is an object URL
                    get_URL().revokeObjectURL(file);
                } else { // file is a File
                    file.remove();
                }
            };
            setTimeout(revoker, arbitrary_revoke_timeout);
        }
        , dispatch = function(filesaver, event_types, event) {
            event_types = [].concat(event_types);
            var i = event_types.length;
            while (i--) {
                var listener = filesaver["on" + event_types[i]];
                if (typeof listener === "function") {
                    try {
                        listener.call(filesaver, event || filesaver);
                    } catch (ex) {
                        throw_outside(ex);
                    }
                }
            }
        }
        , auto_bom = function(blob) {
            // prepend BOM for UTF-8 XML and text/* types (including HTML)
            // note: your browser will automatically convert UTF-16 U+FEFF to EF BB BF
            if (/^\s*(?:text\/\S*|application\/xml|\S*\/\S*\+xml)\s*;.*charset\s*=\s*utf-8/i.test(blob.type)) {
                return new Blob([String.fromCharCode(0xFEFF), blob], {type: blob.type});
            }
            return blob;
        }
        , FileSaver = function(blob, name, no_auto_bom) {
            if (!no_auto_bom) {
                blob = auto_bom(blob);
            }
            // First try a.download, then web filesystem, then object URLs
            var
                  filesaver = this
                , type = blob.type
                , force = type === force_saveable_type
                , object_url
                , dispatch_all = function() {
                    dispatch(filesaver, "writestart progress write writeend".split(" "));
                }
                // on any filesys errors revert to saving with object URLs
                , fs_error = function() {
                    if ((is_chrome_ios || (force && is_safari)) && view.FileReader) {
                        // Safari doesn't allow downloading of blob urls
                        var reader = new FileReader();
                        reader.onloadend = function() {
                            var url = is_chrome_ios ? reader.result : reader.result.replace(/^data:[^;]*;/, 'data:attachment/file;');
                            var popup = view.open(url, '_blank');
                            if(!popup) view.location.href = url;
                            url=undefined; // release reference before dispatching
                            filesaver.readyState = filesaver.DONE;
                            dispatch_all();
                        };
                        reader.readAsDataURL(blob);
                        filesaver.readyState = filesaver.INIT;
                        return;
                    }
                    // don't create more object URLs than needed
                    if (!object_url) {
                        object_url = get_URL().createObjectURL(blob);
                    }
                    if (force) {
                        view.location.href = object_url;
                    } else {
                        var opened = view.open(object_url, "_blank");
                        if (!opened) {
                            // Apple does not allow window.open, see https://developer.apple.com/library/safari/documentation/Tools/Conceptual/SafariExtensionGuide/WorkingwithWindowsandTabs/WorkingwithWindowsandTabs.html
                            view.location.href = object_url;
                        }
                    }
                    filesaver.readyState = filesaver.DONE;
                    dispatch_all();
                    revoke(object_url);
                }
            ;
            filesaver.readyState = filesaver.INIT;

            if (can_use_save_link) {
                object_url = get_URL().createObjectURL(blob);
                setTimeout(function() {
                    save_link.href = object_url;
                    save_link.download = name;
                    click(save_link);
                    dispatch_all();
                    revoke(object_url);
                    filesaver.readyState = filesaver.DONE;
                });
                return;
            }

            fs_error();
        }
        , FS_proto = FileSaver.prototype
        , saveAs = function(blob, name, no_auto_bom) {
            return new FileSaver(blob, name || blob.name || "download", no_auto_bom);
        }
    ;
    // IE 10+ (native saveAs)
    if (typeof navigator !== "undefined" && navigator.msSaveOrOpenBlob) {
        return function(blob, name, no_auto_bom) {
            name = name || blob.name || "download";

            if (!no_auto_bom) {
                blob = auto_bom(blob);
            }
            return navigator.msSaveOrOpenBlob(blob, name);
        };
    }

    FS_proto.abort = function(){};
    FS_proto.readyState = FS_proto.INIT = 0;
    FS_proto.WRITING = 1;
    FS_proto.DONE = 2;

    FS_proto.error =
    FS_proto.onwritestart =
    FS_proto.onprogress =
    FS_proto.onwrite =
    FS_proto.onabort =
    FS_proto.onerror =
    FS_proto.onwriteend =
        null;

    return saveAs;
}(
       typeof self !== "undefined" && self
    || typeof window !== "undefined" && window
    || this.content
));
// `self` is undefined in Firefox for Android content script context
// while `this` is nsIContentFrameMessageManager
// with an attribute `content` that corresponds to the window

if (typeof module !== "undefined" && module.exports) {
  module.exports.saveAs = saveAs;
} else if ((typeof define !== "undefined" && define !== null) && (define.amd !== null)) {
  define("FileSaver.js", function() {
    return saveAs;
  });
}

Now that you have the SaveAs polyfill, your browser will be able to save the results to your hard drive.

  • Finally, copy, paste and run the code below.

This code does the actual work, and will stitch the tiles together with a progress notification. It will save as “output.png” in your Downloads folder when complete.

// Collect every map tile request recorded in the resource timing buffer
var tiles = performance.getEntriesByType('resource').filter(item => item.name.includes("MapServer/tile"));
var resolution = null;
var maxX = null;
var maxY = null;
var minX = null;
var minY = null;
var tileSize = 256;

// Tile URLs end in .../MapServer/tile/{resolution}/{y}/{x}, so the
// coordinates can be parsed out of the URL. parseInt matters here:
// comparing the raw strings would sort "9" above "10".
// First pass: find the highest resolution level that was loaded.
for (var i = 0; i < tiles.length; i++) {
    var tileUrlTokens = tiles[i].name.split('?')[0].split('/');
    var tileResolution = parseInt(tileUrlTokens[tileUrlTokens.length - 3]);
    if (tileResolution > resolution || resolution == null) {
        resolution = tileResolution;
    }
}
// Second pass: find the bounding box of the tiles at that resolution.
for (var i = 0; i < tiles.length; i++) {
    var tileUrlTokens = tiles[i].name.split('?')[0].split('/');
    var tileResolution = parseInt(tileUrlTokens[tileUrlTokens.length - 3]);
    var x = parseInt(tileUrlTokens[tileUrlTokens.length - 1]);
    var y = parseInt(tileUrlTokens[tileUrlTokens.length - 2]);
    if (tileResolution != resolution) {
        continue;
    }
    if (x > maxX || maxX == null) {
        maxX = x;
    }
    if (y > maxY || maxY == null) {
        maxY = y;
    }
    if (x < minX || minX == null) {
        minX = x;
    }
    if (y < minY || minY == null) {
        minY = y;
    }
}

// Create a canvas to stitch the tiles onto
var canvas = document.createElement('canvas');
canvas.id = "sixgis";
canvas.width = 10240;
canvas.height = 10240;
canvas.style.position = "absolute";
var body = document.getElementsByTagName("body")[0];
body.appendChild(canvas);
var ctx = canvas.getContext("2d");

var currentTileIndex = 0;

// Fetch and draw one tile at a time, walking column by column. Once the
// last tile is drawn, save the canvas out as a PNG.
function renderTile(resolution, x, y) {
    var img = new Image();
    img.crossOrigin = 'Anonymous';
    img.onload = function() {
        currentTileIndex++;
        console.log('Rendering ' + currentTileIndex + ' / ' + ((maxX - minX + 1) * (maxY - minY + 1)));
        ctx.drawImage(img, (x - minX) * tileSize, (y - minY) * tileSize);
        if (y < maxY) {
            renderTile(resolution, x, y + 1); // next tile in this column
        } else if (x < maxX) {
            renderTile(resolution, x + 1, minY); // start the next column
        } else {
            canvas.toBlob(function(blob) {
                saveAs(blob, 'output.png');
            });
        }
    }
    img.src = 'https://maps3.six.nsw.gov.au/arcgis/rest/services/sixmaps/LPI_Imagery_Best/MapServer/tile/' + resolution + '/' + y + '/' + x + '?blankTile=false';
}
renderTile(resolution, minX, minY);

All done! One final thing to consider is that their terms of service probably have something to say on the matter of stitching photos together, but I am not a lawyer, so go figure :)

Creative

LearnMusicSheets – download PDFs of music sheet exercises

Today I’d like to talk about a brand new project: LearnMusicSheets. If you are a music teacher, or are learning music, you have no doubt searched the internet looking for music score PDFs. Examples of music scores are major and minor scales, arpeggios, or even some blank manuscript. I’ve searched before for scores to present in my lessons, but have so far been unable to find scores of a suitable quality. Specifically, I’m looking for scores with no copyright notices, no bad notation, and comprehensive coverage of exercises. Instead, I often find scores with horrible jazz Sibelius music fonts, inconsistent naming, or obscene fingering notation. On the rare occasion that I find a sheet typeset half-decently, it is often incomplete and doesn’t contain all the relevant exercises.

So I have taken the time to notate various music scores for learning piano and music in general. These music scores are all typeset beautifully using the Lilypond notation software. I haven’t put any copyright notices on them, and have provided a variety of paper sizes and combinations of exercises. I’ve used these myself in my lessons for many years and they work great! Hopefully, you’ll enjoy them as much as I have!

Beautiful music exercises to download

Today I would like to launch LearnMusicSheets, a website where you can download music sheets for learning music. Currently, it offers blank manuscript paper, major and minor scales, major and minor arpeggios, and intervals and cadences. All these scores are available as an instant PDF download. More exercises, such as jazz scales, will come soon. If you’re curious, go and download music sheets now!

Example PDF of major and minor arpeggios

If you have any comments or suggestions, please feel free to send me a message. Or if you use them yourself, let me know how they are!

Life & much, much more

Practical Abhidhamma Course for Theravāda Buddhists

Today, I’d like to briefly introduce a project for Theravāda Buddhists. Buddhism, like most religions, has a few sacred texts that describe its teachings. One of these texts, the “Abhidhamma”, is rather elusive and complicated to understand. My dad has been teaching this difficult topic for the past 15 years, and over the past year and a half has written a 200-page introductory book for those who want to see what all the fuss is about. It’s chock-full of diagrams, references, and bad jokes.

To quote from the actual page:

There are eight lessons in this course covering selected topics from the Abhidhamma that are most practical and relevant to daily life. Though it is called a “Practical Abhidhamma Course,” it is also a practical Dhamma course using themes from the Abhidhamma. The Dhamma and the Abhidhamma are not meant for abstract theorizing; they are meant for practical application. I hope you approach this course not only to learn new facts, but also to consider how you can improve yourself spiritually.

So, click to go ahead and learn about the Abhidhamma.

Practical-Abhidhamma

I had the pleasure of helping on various technical and visual aspects, and I’m happy to launch PracticalAbhidhamma.com which will serve the book as well as any future supplementary content. For those interested, the book was typeset with LaTeX, with diagrams provided by Inkscape with LaTeX rendering for text labels.

Life & much, much more

Space architecture – a history of space station designs

To quote the beginning of the full article:

This article explores different priorities of human comfort and how these priorities were satisfied in standalone artificial environments, such as space stations.

If you’re impatient and just want to read the full article, click to read A history of design and human factors in Space Stations.

… or if you want a bit more background, read on …

Last year, I began investigating the field of space architecture in more detail. Although I had a bit of experience from the ISSDC, I was much more interested in real, current designs as opposed to hypothetical scenarios.

Space architecture, like its parent field of design, is a broad field. It’s an engineering challenge, an economic challenge, a logistical challenge, a political challenge, you name it. For an architect, the priorities of space station or settlement designs lie with the people that inhabit them. Simply put, you don’t call an architect to build a rocket, but when a person is living inside that rocket, especially if they’re stuck there for a while, that’s when you call your architect.

This means that when an architect looks at designing a space station, although they need to be aware of the technical constraints of the environment (gravity, air, temperature, structure, radiation, transport, health), their true expertise lies in understanding how to make people comfortable and productive within that space. Space architects therefore need to understand, in incredible detail, how we perceive and are affected by our environment. Much more so than Earth architects, who have the advantage of the natural world, which is usually much nicer than whatever is indoors, as well as existing social and urban infrastructure. Space architects don’t have this benefit, and so the entire “world” is limited to what they can fit inside a large room.

This point, that space architects are responsible for the happiness of humans, is an absolutely vital one, and it is unfortunately often missed. Too many architects are instead enraptured by the technological pornography of the environment, the intricate constraints, or even the ideological ability to “reimagine our future”. No. The reality is much more humble: space architecture is about rediscovering what humans hold dear in the world. You cannot claim to reinvent a better future if you do not yet understand what we already appreciate in the present.

And so if my point has made any impact, please go ahead and read A history of design and human factors in Space Stations, where I walk through the history of space station designs, their priorities, and what architects are looking at now.

Space architecture - how cosy

Cosy, isn’t it? Also, a TED Talk on How to go to space, without having to go to space shares many of my thoughts, and would be worth watching.

Technical

Clean code, and how to write it

Note: the article was originally circulated on #cleancode and #kohana on Freenode and is now recorded here as an archive. It seems very useful as something to link people to on IRC when they have questions, so feel free to share as well.

At SevenStrokes, we practice Clean Code. Although code speaks louder than words, at the moment my public repositories are heavily outdated. What isn’t as outdated, however, is a short introductory guide I wrote on Clean Code for the internal use of SevenStrokes. Although it focuses on the basics, it does assume the reader has some knowledge of programming. You’ll notice that the examples are primarily written in PHP, but the principles are applicable to all languages.

Clean code architectures

The article answers why good code matters and what good code is, and covers the three pillars of good code: syntax, architecture, and workflow. It shows examples of how to write good code, introduces the more abstract architectural jargon, and surveys the different tools and processes out there.

Without further ado, please click to read: SevenStrokes: Learn how to write Good Code.

Creative

Blender artwork: something’s not quite right


It’s not often that I show my Blender artwork nowadays, but here are three samples that I hope you’ll appreciate.

Oh look. A tree with a rock.

Hmm

No wait. That’s not a rock. It’s a heart.

Yep, definitely a heart.

Those leaves aren’t right. What is it?

But wait, there’s more!

Ahum

I’m pretty sure these are hands. Perhaps more than one.

Yeah, more than one.

How about something lighter?

Bubbles

I didn’t say this post would make sense :) But such is the nature of this type of artwork.

Yadda yadda yadda.

Life & much, much more

Things I should’ve done earlier.

On Linux, there are tools you know are better, but you don’t switch because you’re comfortable where you are. Here’s a list of the changes I’ve made over the past year that I really should’ve made earlier.

  • screen -> tmux
  • irssi/quassel -> weechat + relay
  • apache -> nginx
  • dropbox -> owncloud
  • bash -> zsh
  • bootstrapping vim-spf -> my own tailored and clean dotfiles
  • phing -> make
  • sahi -> selenium
  • ! mpd -> mpd (oh why did I ever leave you)
  • ! mutt -> mutt (everything else is severely broken)
  • a lot of virtualbox instances -> crossbrowsertesting.com (much less hassle, with support for selenium too!)

… would be interested to know what else I could be missing out on! :)

Life & much, much more

Competitive weight loss with WeightRace.net

So last year (or perhaps even the year before, time flies!) two people close to me participated in a friendly weight-loss competition. To do this, they used WeightRace.net.

WeightRace is a small web application I built a while ago for fun, which allows up to four contestants to compete towards a weight goal that they set themselves. They would be prompted daily for weight updates, and would set a reward for the winner. It also used some lightweight gamification, so contestants could earn bonus “wobbly bits” for achievements like reaching their ideal BMI.

But enough talking about the application — applications are boring! Much more interesting are the results! Let’s see:

WeightRace - competitive weight loss

The two contestants — whom we shall refer to as Rob and Julie, which may or may not be their real names — are shown in the graph above. Julie is red, Rob is blue, and their linear trajectories towards their weight goals are shown via the corresponding coloured dotted lines.

If I could hear a sped-up commentary of the results, it would truly be exciting! Rob gets off to an excellent head-start, well ahead of his trajectory, whereas Julie has trouble beginning. As we near the holiday (Christmassy) season, we see Rob’s progress plateauing, whereas Julie gets her game on and updates with rigorous discipline. Also great to notice are the regular upward hikes in Julie’s weight – those correspond with weekends! As the holidays pass, Rob makes gains of the wrong sort and is unable to recover.

In the end, although Julie wins the race, neither Julie nor Rob met their weight goal (note that in terms of absolute figures, Rob actually wins). However, this was all not in vain. Almost another year has passed since the race finished, and Rob’s weight is now well under control; he has indeed achieved his goal. I’d like to think that the WeightRace played a role.

In particular, the WeightRace helped raise daily awareness. I believe that it was this daily awareness of the current weight that helped most in the long-term. In addition, the WeightRace helped Rob’s body to stabilise around 90kg for half a year! I suspect his body figured out that it could manage at that weight, which made it easier for him to (after the WeightRace) continue to lose weight at a healthy pace.

For those interested in playing with the WeightRace, you can check it out online at WeightRace.net. Note though that it is not actually complete, but it works well enough for a competition. For those interested in the source, it’s up on my GitHub.

Creative

Architectural visualisation renders with Blender

It’s been a while since I’ve made a post. Although there are posts in the queue, I figured I might post this as it’s a quick one. Let’s see the pictures first.

Visualisation 1

… and the other …

Visualisation 2

The images were done with Blender and Cycles. The piano in the second render was done by RegusTtef. These images are 50% of actual size; together they took a day and a half. The second image is a panoramic shot.

The building is a proposal for the Sydney Museum of Profligate Steel Welders. The rest writes itself :)

Technical

Building REST APIs with auto-discoverable auto-tested code

For the past few months, one of the projects I’ve been working on with SevenStrokes involves building a REST API for a service. REST APIs are tricky things to get right: they’re deceptively simple to describe, yet play host to plenty of interesting topics to delve into, such as statelessness, resource scope, authentication, and hypermedia representation.

However I’m going to only talk about the very basics (which many people overlook), and demonstrate how the Richardson Maturity Model can help with automated testing and documentation. If you haven’t heard of RMM yet, I recommend you stop reading and go through it now (especially if you’ve built a REST-like API before).

Let’s say our REST API conforms to level 3 of the RMM: we have a set of standardised verbs, we query logical resources, we receive standardised status codes, and we can navigate the entire system via links. We’ve got a pretty good setup so far. All these items in the RMM help our REST API scale better. However, what it doesn’t yet help with is keeping our documentation up to date. This is vital, because we know that the holy grail for a REST API is auto-generated, always up-to-date, stylish documentation that promotes your site/product API. There are a bunch of tools that help you do this right now, but I think they’re all rather half-baked and used as a bolt-on rather than a core part of your application.

To solve this, I’m going to recommend one more addition: every resource must have the OPTIONS verb implemented. When invoked, it will respond with the following:

  1. An Allow header, specifying all the other verbs available on the invoked resource.
  2. A response body containing the verbs, and under each of them in the hierarchy of the body (in whatever format), a description of:
    • Their input parameters, including their type and whether they are required
    • A list of example requests and responses, detailing what headers, parameters, and body are included in the request, and what headers, status code, and body are included in the response.
  3. A list of assumptions that are being made for each example scenario (if applicable)
  4. A list of effects on the system for each example scenario (if applicable)
  5. A list of links to any subresources with descriptions

Let’s see a brief example:

# OPTIONS /user/

{
    "GET": {
        "title": "Get information about your user",
        "parameters": {
            "foobar": {
                "title": "A description of what foobar does",
                "type": "string",
                "required": false
            },
            [ ... snip ... ]
        },
        "examples": [
            {
                "title": "View profile information successfully",
                "request": { "headers": { "Authentication": "{usersignature}" } },
                "response": {
                    "status": 200,
                    "data": {
                        "id": "1",
                        "username": "username1",
                        [ ... snip ... ]
                    }
                }
            },
            [ ... snip ... ]
        ]
    },
    [ ... snip ... ]
    "_links": {
        "self": {
            "href": "\/makkoto-api\/user"
        },
        [ ... snip ... ]
    }
}

Sound familiar? That’s right. It’s documentation. Better than that, it’s embedded documentation. Oh, and better still, it’s auto-discoverable documentation. And if that isn’t great enough, it’s documentation in a format identical to the requests and responses that API clients will be working with.

Sure, it’s pretty nifty. But that’s not all! Let’s combine this with TDD/BDD. I’ve written a quick test here:

Feature: Discover
    In order to learn how the REST API works
    As an automated, standards-based REST API client
    I can auto-discover and auto-generate tests for the API

    Scenario: Generate all tests
        Given that I have purged all previously generated tests
        Then I can generate all API tests

That’s right. This test crawls the entire REST API resource tree (starting at the top-level resource, of course), invokes OPTIONS for each resource, and generates tests based on the documentation that you’ve written.
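As a rough sketch of the idea in PHP (this is not the actual implementation; fetchOptions() and generateTest() are hypothetical helpers standing in for an HTTP client and a Behat scenario writer), the crawler boils down to a recursive walk over each resource’s _links:

// Hypothetical sketch: walk the API, read the OPTIONS documentation,
// and emit one test per documented example.
function crawl($url, array &$seen = array())
{
    if (isset($seen[$url])) {
        return; // don't visit a resource twice
    }
    $seen[$url] = true;

    $doc = fetchOptions($url); // perform an OPTIONS request, JSON-decode the body

    foreach ($doc as $verb => $spec) {
        if ($verb === '_links' || !isset($spec['examples'])) {
            continue;
        }
        foreach ($spec['examples'] as $example) {
            generateTest($url, $verb, $example); // write out a Behat scenario
        }
    }

    // Follow the HAL links down to subresources
    if (isset($doc['_links'])) {
        foreach ($doc['_links'] as $rel => $link) {
            if ($rel !== 'self') {
                crawl($link['href'], $seen);
            }
        }
    }
}

crawl('/'); // start at the top-level resource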

Let’s see a quick demo in action.

Auto-documentation for REST APIs in action

It’s a really great workflow: write documentation first, generate tests from it, and then zone in on your tests in detail. This ensures that your code, tests, and documentation are always in sync.

I hope someone finds this useful :) For the curious, the testing tool is Behat, and the output format used is application/hal+json, using the HAL specification for linking, along with link URI templates.
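As a final aside, a URI-templated link in HAL (the link relation name here is invented) looks like this:

"_links": {
    "find": { "href": "\/makkoto-api\/user\/{id}", "templated": true }
}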

Technical

Using Sahi, Mink and Behat to test HTML5 drag and drop file uploads

For those who don’t know, Behat is an excellent tool for testing the business expectations of an application. In other words, it’s a behavior-driven approach towards full-stack application acceptance testing. Mink is a browser abstraction layer, allowing you to easily control different browser emulators through a common interface. Combining the two brings us a mean bag of tricks when it comes to testing web applications.

This morning I had set myself the task of writing the tests for a spiffy HTML5 drag and drop file upload script that is all the rage nowadays. Needless to say it took far longer than I had thought it would. Let’s get started.

Testable elements of the HTML5 drag and drop

Drag and drop uploads work by triggering the drop event of an element. This drop event contains a list of files in a format defined by the HTML5 FileAPI. The Javascript can loop over these file objects and perform client-side file validation checks. The data is then posted via AJAX to another URL. After the server-side processing is done, we get a response object with the results, and we parse it to give the user feedback on whether the upload finally succeeded. As you can see, there are various places we can begin to test.
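To make that concrete, here’s a hypothetical sketch of the kind of handler under test (the element ID and upload URL are made up):

// Hypothetical example of the script under test
$('#dropbox').on('drop', function(e) {
    e.preventDefault();
    // In a real browser drop this lives on e.originalEvent.dataTransfer;
    // the test at the end of this post sets it directly on the jQuery event.
    var files = e.dataTransfer.files;
    var data = new FormData();
    for (var i = 0; i < files.length; i++) {
        // client-side FileAPI validation checks could go here
        data.append('files[]', files[i]);
    }
    // POST the files via AJAX, then report the server's verdict
    $.ajax({url: '/upload', type: 'POST', data: data, processData: false, contentType: false})
        .done(function(response) { /* give feedback to the user */ });
});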

Attempt 1: Just test the AJAX POST

Because the data is finally POSTed via AJAX, one option is to just test that and leave the rest to manual QA. In fact, we can forgo AJAX altogether and use PHP with cURL to make the request and check the response. Easy. Actually, too easy – we’re ignoring what makes our app cool: the drag and drop!

Attempt 2: Test the legacy file input fallback

Bah. This isn’t why you’re reading this post. You know how to do this already. And anyway, you’ve probably already got a legacy test but now you want to test the spiffy HTML5 one. Moving on.

Attempt 3: Use Sahi to run your test

Hello Sahi! Sahi is a web test automation tool with a fully fledged GUI. But more relevant is that it supports Javascript, unlike its faster headless relatives (yes, there’s PhantomJS, but I wouldn’t mind seeing what’s going on in a drag-and-drop widget).

Before we even hit Mink and Behat, try recording the events to turn into a Sahi script. You’ll quickly notice that Sahi (unsurprisingly) doesn’t properly record the event of dropping a file onto the page.

The issue here is that Sahi has no concept of files outside the emulated browser window. There’s a sneaky trick around this. In our Behat definition, we’ll run evaluateScript to dynamically add a file input field, then attach our image file to that field. Now we can grab the file object from that!

$session = $this->getSession();
$session->evaluateScript('$("body").after("<input type=\"file\" id=\"sahibox\">")');
$session->getPage()->attachFileToField('sahibox', '/home/dion/image.png');
$session->evaluateScript('myfile = $("#sahibox").get(0).files[0]');

If we run the Javascript manually, it works fine. It also creates a good opportunity to stop and peek at exactly what your File object is built from. However, in Sahi, we don’t have the file object. Why? Because input file field values cannot be manipulated by Javascript for security reasons. But then why does Sahi even provide a function for this? Because “Sahi intercepts the request in the proxy, reads the file off the file system and inserts it into the multipart request”. So Sahi just does a sneaky slide into the form submit at the end.

Taking a peek at Sahi’s setFile documentation, they note they have a _setFile2 function – which essentially converts the input field into a text field in the process. This isn’t going to work either, because we actually need the file object to test.

Finally, Sahi provides a third alternative for selecting files to upload, by emulating native events in the process of selecting a file. It’s at the bottom of their setFile documentation. It basically walks through the steps of opening up the file browse dialogue, typing in the file path with keystrokes … on and on until we get what we want. It’ll work!

Yes, it’ll work. But not nicely. It’s slow. It’s littered with _waits(). Wouldn’t it be nicer if we could create the file object ourselves rather than emulate browsing our filesystem?

Attempt 4: Grab a file object from an image already on the server

Aha! We’ve already got images in our app, let’s just try to upload one of those. We’ll need two things: an image source, and a way to create a file.

For an image source, we’ll grab one with an XMLHttpRequest() in Javascript. We need to make sure that this image source is within Sahi’s proxy, though. This is because otherwise we’d run into cross-domain issues. That’s fine, we’ll upload the Sahi logo as our test image.

To create a File, we’ll create a Blob instead. Files inherit from Blobs, so we can swap them in and out. Right, let’s see.

var xhr = new XMLHttpRequest();
xhr.open( "GET", "http://sahi.example.com/_s_/spr/images/sahi_os_logo1.png", true );
xhr.responseType = "arraybuffer";
xhr.onload = function( e ) {
    var arrayBufferView = new Uint8Array( this.response );
    window.myfile = new Blob( [ arrayBufferView ], { type: "image/png" } );
};
xhr.send();

Great! So window.myfile will be populated with our file object now. But a test that relies on the existence of a Sahi image? Nasty.

Attempt 5: Create our file object from a base64 string

Simple but effective, and none of that extra-request messing around. Let’s create an image first. I made a black 100px square image for testing. The simpler the image the better, as it’ll make your base64 string smaller. Now let’s turn that image into base64:

$ base64 image.png 
iVBORw0KGgoAAAANSUhEUgAAAGQAAABkCAAAAABVicqIAAAACXBIWXMAAAsTAAALEwEAmpwYAAAA
B3RJTUUH3gIYBAEMHCkuWQAAAB1pVFh0Q29tbWVudAAAAAAAQ3JlYXRlZCB3aXRoIEdJTVBkLmUH
AAAAQElEQVRo3u3NQQ0AAAgEoNN29i9kCh9uUICa3OtIJBKJRCKRSCQSiUQikUgkEolEIpFIJBKJ
RCKRSCQSiUTyPlnSFQER9VCp/AAAAABJRU5ErkJggg==

Great. Now as it turns out, the folks at Mozilla have already worked out how to decode a base64 string into Uint8Array. Steal their functions and we’re good to go :)
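For reference, a minimal equivalent of that decoder might look like the snippet below. Mozilla’s base64DecToArr is more thorough; this simplified sketch leans on the browser’s built-in atob() and just strips whitespace (such as the newlines in the base64 output above) first:

function base64DecToArr(sBase64) {
    // atob() turns base64 into a binary string; copy its bytes into a
    // typed array so the Blob constructor can consume them.
    var byteString = atob(sBase64.replace(/\s/g, ''));
    var bytes = new Uint8Array(byteString.length);
    for (var i = 0; i < byteString.length; i++) {
        bytes[i] = byteString.charCodeAt(i);
    }
    return bytes;
}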

So our test script will:

  1. Convert a base64 image into a Uint8Array
  2. Use that Uint8Array to construct a Blob with the mimetype of image/png
  3. Set various metadata that file uploaders need, such as file name and last modified date
  4. Create a new list of files, and put our Blob in there
  5. Create a new “drop” event.
  6. Add our list of files to the dataTransfer attribute of that drop event
  7. Trigger our on-page element with the drop event
  8. Wait for the AJAX call and server-side processing to be done

And here is the full script in action from our Behat definition (with the base64 string snipped out because it’s very long):

$session = $this->getSession();
$session->evaluateScript('myfile = new Blob([base64DecToArr("...snip...")], {type: "image/png"})');
$session->evaluateScript('myfile.name = "myfile.png"');
$session->evaluateScript('myfile.lastModifiedDate = new Date()');
$session->evaluateScript('myfile.webkitRelativePath = ""');
$session->evaluateScript('sahiFileList = Array()');
$session->evaluateScript('sahiFileList.push(myfile)');
$session->evaluateScript('e = jQuery.Event("drop")');
$session->evaluateScript('e.dataTransfer = { files: sahiFileList }');
$session->evaluateScript('$("#dropbox").trigger(e)');
$session->wait(1000);

Great! It’s testable!