
Basic rendering tutorial with Radiance

Radiance is the most authoritative validated rendering engine out there. Unlike other rendering engines, which focus more on artistic license, Radiance focuses on scientific validation: the results are not just physically based, they are meant to match what a physical optical sensor would measure in an equivalent real-world setup. This is great if you'd like to produce an image that not only looks photo-realistic, but actually shows what a similar setup in real life would look like. As you'd expect, this appeals to scientists and to designers who work with light and materials.

In addition, Radiance is open-source, completely free, and at home on Unix-like systems. If you've used other tools that claim to do all of the above, they probably use Radiance under the hood anyway and rebrand it with a friendlier interface. However, working with Radiance directly gives you much finer control over the image you produce and, as I will likely write about in the future, lets you scale up to highly complex renders. Today, we're going to dive into the guts of the tool itself and render a simple object. This can be a bit scary for those who are not particularly technical, and there's not a lot of friendly material out there that doesn't look like it's from a 1000-page technical manual. Hopefully this walkthrough will focus on the more practical aspects without getting too bogged down in technicalities.

To begin, I'm going to assume you have Radiance installed and know how to open a terminal window in your operating system of choice. If you haven't got that far yet, go and install something simple like Ubuntu Linux, then install Radiance. Note that Radiance is not a program you double-click to get a window full of buttons and menus: it is a collection of programs that you run by typing commands.

Let's create a model first. Start with a simple mesh with a minimum of polygons. I am using Blender, which is another open-source, free, and Unix-friendly program. In this case, I have started with the default scene and arbitrarily replaced the default cube with a mesh of the Blender monkey mascot. I have also given the mesh a material, named white.

Default scene with Blender monkey

Using Blender is optional, of course, and you can use whatever 3D program you like. Radiance works with the OBJ format, which is an open format, plain text, and beautifully simple. So export the mesh to get yourself an OBJ file, which I have named model.obj. The accompanying model.mtl file from the export is largely unnecessary right now: we will define our own materials in physical units, which the .mtl format is not designed to express. When exporting, take care to export only the mesh, and ensure that the proper axes are facing up.

In the same directory that you have your model.obj and your model.mtl, let’s create a new file which will hold all the materials for your model. In this case, there is only one material, called white. So let’s create a new plain text file, called materials.rad and insert the following in it:

void plastic white
0
0
5 1 1 1 0 0

It's the simplest possible material definition (and a rather unrealistic one, as it defines an RGB reflectance of 1, 1, 1), but it'll do for now. You can read about how "plastic" (i.e. non-metallic) materials are defined in the Radiance reference manual. In short, the first line says we are defining a plastic material called white, the two zeroes say it has no string or integer arguments, and the last line says that there are 5 real parameters, with values 1, 1, 1, 0, 0 respectively. The first three parameters are the R, G, and B reflectance of the material, and the final two are its specularity and roughness. This definition is provided in the Radiance manual, and so in the future it will serve you well to peruse the manual.
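For comparison, a slightly more believable matte white paint might look like the following, with reflectance below 1 and a touch of specularity and roughness (these numbers are illustrative, not measured values):

void plastic white_paint
0
0
5 0.7 0.7 0.7 0.05 0.05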

Now, open up a terminal window in the same folder where you have the model.obj and materials.rad file. We are going to run a Radiance program called obj2mesh which will combine our OBJ with the material definitions we have provided in our materials.rad, and spit out a Radiance triangulated mesh .rtm file. Execute the following command:

$ obj2mesh -a materials.rad model.obj model.rtm

If it succeeds, you will see a new file in that same directory called model.rtm. You may see a few lines pop up with warnings, but as long as they are not fatal, you may safely disregard them. The .rtm format is specific to Radiance, which does not work directly with the OBJ format.

Now, we will create a scene in Radiance and place our mesh within it. There will be no other objects in the scene. Let’s call it scene.rad, a simple text file with the following contents:

void mesh model
1 model.rtm
0
0

The first line defines a new mesh in the scene called model. The second line says there is one string argument, and that the mesh can be found in the model.rtm file. The final two zeroes say that there are no integer or real arguments for this mesh.

Now, we will convert our scene into an octree, which is an efficient binary format (as opposed to all the simple text files we’ve been writing) that Radiance uses to do its calculations. We will run another Radiance program called oconv to do this. So open up your terminal window again and execute:

$ oconv scene.rad > scene.oct

You should now find a scene.oct file appear in the same folder as the rest of your files. This is the final file we send off to render. But before we do this final step, we will need to decide where our camera is. A camera in Radiance is defined by three parameters. The first parameter, vp, or view position, is the XYZ coordinate of the camera. The second parameter, vd, or view direction, is the XYZ vector that the camera is facing. The third parameter, vu, or view up, is the XYZ vector of where “up” is, so it knows if the camera is rotated or not. When specifying a parameter to Radiance, you will prefix the parameter name with a hyphen, followed by the parameter value. So, for a camera at the origin facing east (where +X is east and +Z is up), I can tell Radiance this by typing -vp 0 0 0 -vd 1 0 0 -vu 0 0 1.

Radiance camera definition

Calculating these vectors is a real pain unless your camera is in a really simple location and orthogonal to the world axes, as in my previous example. However, here's a handy script you can run inside Blender which will calculate the values for the camera named Camera.

import bpy
from mathutils import Vector

cam = bpy.data.objects['Camera']
location = cam.location
# Rotate the camera's local up (+Y) and forward (-Z) axes into world space.
# Note: Blender 2.8+ uses the @ operator instead of * for these products.
up = cam.matrix_world.to_quaternion() * Vector((0.0, 1.0, 0.0))
direction = cam.matrix_world.to_quaternion() * Vector((0.0, 0.0, -1.0))

print(
    '-vp ' + str(location.x) + ' ' + str(location.y) + ' ' + str(location.z) + ' ' +
    '-vd ' + str(direction.x) + ' ' + str(direction.y) + ' ' + str(direction.z) + ' ' +
    '-vu ' + str(up.x) + ' ' + str(up.y) + ' ' + str(up.z)
)

The output will appear in the Blender system console (or the terminal you launched Blender from). For those on other programs, you've got homework to do.

Once you know your coordinates and vectors for vp, vd, and vu, let's use the rpict Radiance program to render from that angle. Please replace the numbers given to the three camera parameters in the command below with your own. We will also specify -av 1 1 1, which tells Radiance to render with an ambient RGB light value of 1, 1, 1. Of course, real life has no such magical ambient light, but as we haven't specified any other light sources in our scene, it'll have to do. We will also specify -ab 2, which allows for 2 ambient bounces of light, just so that we have a bit of shading (without any light bounces, we would get a flat silhouette of our monkey).

$ rpict -vp 7.481131553649902 -6.5076398849487305 5.34366512298584 -vd -0.6515582203865051 0.6141704320907593 -0.44527149200439453 -vu -0.32401347160339355 0.3054208755493164 0.8953956365585327 -av 1 1 1 -ab 2 scene.oct > render.pic

Great, after the render completes, you should see a new file called render.pic in your folder. Let’s look at this image using the Radiance ximage program.

$ ximage render.pic

You should see something like the following:

Final Radiance render
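By the way, ximage needs a display to run; if you'd rather convert the picture into a conventional image format, Radiance ships converters such as ra_ppm, which turns the .pic into a PPM that any image tool can open:

$ ra_ppm render.pic render.ppm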

All done! We’ve learned about bringing in an OBJ mesh with Radiance materials, placing them in a scene, and rendering it from a camera. Hope it’s been useful. Of course, our final image doesn’t look exactly great – this is because the material and lighting we have set are basically physically impossible. Similarly, the simulation we’ve run has been quite rudimentary. In the future, we’ll look at specifying a much more realistic environment.


Stitching high resolution photos from NSW SIX Maps

Have you ever wanted to save out a super high resolution satellite photo from Google Maps or similar? Perhaps you've screenshotted satellite photos from your browser and then merged them together in your favourite photo editor, like Photoshop or The GIMP. Well, New South Wales, Australia has a government GIS (geographic information service) known as NSW SIX Maps.

A beautiful satellite photo from NSW SIX Maps of the Sydney Harbour area

Doing this manually is quite an annoying task, so here's a script that will do it for you right in your browser and save up to a 10,000 by 10,000 pixel image to your computer. Yep, no need to download any further software. There are a couple of prerequisites, but it should work on almost any computer. Just follow the steps below to merge the tiles together. First, make sure you are using Google Chrome as your browser.

  • Go to http://maps.six.nsw.gov.au/
  • Open up the browser inspector (Press ctrl-shift-i on Chrome), and click on the “Console” tab. This is where you will copy, paste, and run the scripts.
  • Zoom in to your desired resolution level (there are 20 stops available) at the top-left corner of the bounding rectangle of the region you'd like to stitch.
  • Copy, paste and hit enter to run the code below.

This snippet raises the browser's resource timing buffer size, so that it can record the URLs of all the tiles you are about to load:

performance.setResourceTimingBufferSize(1000);

You should see “undefined” if it completes successfully.

  • Pan a little to the top-left to load another tile; this will set the top-left boundary coordinate.
  • Pan to the bottom-right corner, triggering a tile load in the process; this will set the bottom-right boundary coordinate.
  • Copy, paste, and hit enter to run the code below.

This will give your browser the ability to save the resulting image to your computer. For the curious, it's called a polyfill, and here is an explanation by the FileSaver author about what it does.

/* FileSaver.js
 * A saveAs() FileSaver implementation.
 * 1.3.2
 * 2016-06-16 18:25:19
 *
 * By Eli Grey, http://eligrey.com
 * License: MIT
 *   See https://github.com/eligrey/FileSaver.js/blob/master/LICENSE.md
 */

/*global self */
/*jslint bitwise: true, indent: 4, laxbreak: true, laxcomma: true, smarttabs: true, plusplus: true */

/*! @source http://purl.eligrey.com/github/FileSaver.js/blob/master/FileSaver.js */

var saveAs = saveAs || (function(view) {
    "use strict";
    // IE <10 is explicitly unsupported
    if (typeof view === "undefined" || typeof navigator !== "undefined" && /MSIE [1-9]\./.test(navigator.userAgent)) {
        return;
    }
    var
          doc = view.document
          // only get URL when necessary in case Blob.js hasn't overridden it yet
        , get_URL = function() {
            return view.URL || view.webkitURL || view;
        }
        , save_link = doc.createElementNS("http://www.w3.org/1999/xhtml", "a")
        , can_use_save_link = "download" in save_link
        , click = function(node) {
            var event = new MouseEvent("click");
            node.dispatchEvent(event);
        }
        , is_safari = /constructor/i.test(view.HTMLElement) || view.safari
        , is_chrome_ios =/CriOS\/[\d]+/.test(navigator.userAgent)
        , throw_outside = function(ex) {
            (view.setImmediate || view.setTimeout)(function() {
                throw ex;
            }, 0);
        }
        , force_saveable_type = "application/octet-stream"
        // the Blob API is fundamentally broken as there is no "downloadfinished" event to subscribe to
        , arbitrary_revoke_timeout = 1000 * 40 // in ms
        , revoke = function(file) {
            var revoker = function() {
                if (typeof file === "string") { // file is an object URL
                    get_URL().revokeObjectURL(file);
                } else { // file is a File
                    file.remove();
                }
            };
            setTimeout(revoker, arbitrary_revoke_timeout);
        }
        , dispatch = function(filesaver, event_types, event) {
            event_types = [].concat(event_types);
            var i = event_types.length;
            while (i--) {
                var listener = filesaver["on" + event_types[i]];
                if (typeof listener === "function") {
                    try {
                        listener.call(filesaver, event || filesaver);
                    } catch (ex) {
                        throw_outside(ex);
                    }
                }
            }
        }
        , auto_bom = function(blob) {
            // prepend BOM for UTF-8 XML and text/* types (including HTML)
            // note: your browser will automatically convert UTF-16 U+FEFF to EF BB BF
            if (/^\s*(?:text\/\S*|application\/xml|\S*\/\S*\+xml)\s*;.*charset\s*=\s*utf-8/i.test(blob.type)) {
                return new Blob([String.fromCharCode(0xFEFF), blob], {type: blob.type});
            }
            return blob;
        }
        , FileSaver = function(blob, name, no_auto_bom) {
            if (!no_auto_bom) {
                blob = auto_bom(blob);
            }
            // First try a.download, then web filesystem, then object URLs
            var
                  filesaver = this
                , type = blob.type
                , force = type === force_saveable_type
                , object_url
                , dispatch_all = function() {
                    dispatch(filesaver, "writestart progress write writeend".split(" "));
                }
                // on any filesys errors revert to saving with object URLs
                , fs_error = function() {
                    if ((is_chrome_ios || (force && is_safari)) && view.FileReader) {
                        // Safari doesn't allow downloading of blob urls
                        var reader = new FileReader();
                        reader.onloadend = function() {
                            var url = is_chrome_ios ? reader.result : reader.result.replace(/^data:[^;]*;/, 'data:attachment/file;');
                            var popup = view.open(url, '_blank');
                            if(!popup) view.location.href = url;
                            url=undefined; // release reference before dispatching
                            filesaver.readyState = filesaver.DONE;
                            dispatch_all();
                        };
                        reader.readAsDataURL(blob);
                        filesaver.readyState = filesaver.INIT;
                        return;
                    }
                    // don't create more object URLs than needed
                    if (!object_url) {
                        object_url = get_URL().createObjectURL(blob);
                    }
                    if (force) {
                        view.location.href = object_url;
                    } else {
                        var opened = view.open(object_url, "_blank");
                        if (!opened) {
                            // Apple does not allow window.open, see https://developer.apple.com/library/safari/documentation/Tools/Conceptual/SafariExtensionGuide/WorkingwithWindowsandTabs/WorkingwithWindowsandTabs.html
                            view.location.href = object_url;
                        }
                    }
                    filesaver.readyState = filesaver.DONE;
                    dispatch_all();
                    revoke(object_url);
                }
            ;
            filesaver.readyState = filesaver.INIT;

            if (can_use_save_link) {
                object_url = get_URL().createObjectURL(blob);
                setTimeout(function() {
                    save_link.href = object_url;
                    save_link.download = name;
                    click(save_link);
                    dispatch_all();
                    revoke(object_url);
                    filesaver.readyState = filesaver.DONE;
                });
                return;
            }

            fs_error();
        }
        , FS_proto = FileSaver.prototype
        , saveAs = function(blob, name, no_auto_bom) {
            return new FileSaver(blob, name || blob.name || "download", no_auto_bom);
        }
    ;
    // IE 10+ (native saveAs)
    if (typeof navigator !== "undefined" && navigator.msSaveOrOpenBlob) {
        return function(blob, name, no_auto_bom) {
            name = name || blob.name || "download";

            if (!no_auto_bom) {
                blob = auto_bom(blob);
            }
            return navigator.msSaveOrOpenBlob(blob, name);
        };
    }

    FS_proto.abort = function(){};
    FS_proto.readyState = FS_proto.INIT = 0;
    FS_proto.WRITING = 1;
    FS_proto.DONE = 2;

    FS_proto.error =
    FS_proto.onwritestart =
    FS_proto.onprogress =
    FS_proto.onwrite =
    FS_proto.onabort =
    FS_proto.onerror =
    FS_proto.onwriteend =
        null;

    return saveAs;
}(
       typeof self !== "undefined" && self
    || typeof window !== "undefined" && window
    || this.content
));
// `self` is undefined in Firefox for Android content script context
// while `this` is nsIContentFrameMessageManager
// with an attribute `content` that corresponds to the window

if (typeof module !== "undefined" && module.exports) {
  module.exports.saveAs = saveAs;
} else if ((typeof define !== "undefined" && define !== null) && (define.amd !== null)) {
  define("FileSaver.js", function() {
    return saveAs;
  });
}

Now that you have the saveAs polyfill, your browser will be able to save the results to your hard drive.

  • Finally, copy, paste and run the code below.

This code does the actual work: it stitches the tiles together, logging progress to the console as it goes, and saves the result as "output.png" to your Downloads folder when complete.

var tiles = performance.getEntriesByType('resource').filter(item => item.name.includes("MapServer/tile"));
var resolution = null;
var coords = [];
var maxX = null;
var maxY = null;
var minX = null;
var minY = null;
var tileSize = 256;

for (var i = 0; i < tiles.length; i++) {
    // Tile URLs end in .../MapServer/tile/{resolution}/{y}/{x}
    var tileUrlTokens = tiles[i].name.split('?')[0].split('/');
    var tileResolution = parseInt(tileUrlTokens[tileUrlTokens.length - 3]);
    // Track the deepest zoom level seen (compare as numbers, not strings)
    if (resolution == null || tileResolution > resolution) {
        resolution = tileResolution;
    }
}
for (var i = 0; i < tiles.length; i++) {
    var tileUrlTokens = tiles[i].name.split('?')[0].split('/');
    var tileResolution = parseInt(tileUrlTokens[tileUrlTokens.length - 3]);
    var x = parseInt(tileUrlTokens[tileUrlTokens.length - 1]);
    var y = parseInt(tileUrlTokens[tileUrlTokens.length - 2]);
    if (tileResolution != resolution) {
        continue;
    }
    if (maxX == null || x > maxX) { maxX = x; }
    if (maxY == null || y > maxY) { maxY = y; }
    if (minX == null || x < minX) { minX = x; }
    if (minY == null || y < minY) { minY = y; }
}

var canvas = document.createElement('canvas');
canvas.id = "sixgis";
canvas.width = 10240;
canvas.height = 10240;
canvas.style.position = "absolute";
var body = document.getElementsByTagName("body")[0];
body.appendChild(canvas);
sixgis = document.getElementById("sixgis");
var ctx = canvas.getContext("2d");

var currentTileIndex = 0;

function renderTile(resolution, x, y) {
    var img = new Image();
    img.crossOrigin = 'Anonymous';
    img.onload = function() {
        currentTileIndex++;
        console.log('Rendering ' + currentTileIndex + ' / ' + ((maxX - minX + 1) * (maxY - minY + 1)));
        // Draw this tile at its offset from the top-left boundary tile
        ctx.drawImage(img, (x - minX) * tileSize, (y - minY) * tileSize);
        if (y < maxY) {
            renderTile(resolution, x, y + 1); // next tile down this column
        } else if (x < maxX) {
            renderTile(resolution, x + 1, minY); // start the next column
        } else {
            canvas.toBlob(function(blob) {
                saveAs(blob, 'output.png');
            });
        }
    };
    img.src = 'https://maps3.six.nsw.gov.au/arcgis/rest/services/sixmaps/LPI_Imagery_Best/MapServer/tile/' + resolution + '/' + y + '/' + x + '?blankTile=false';
}
renderTile(resolution, minX, minY);

All done! One final thing to consider: their terms of service probably have something to say on the matter of stitching photos together, but I am not a lawyer, so go figure :)


Clean code, and how to write it

Note: this article was originally circulated on #cleancode and #kohana on Freenode and is now recorded here as an archive. It seems very useful as something to link people to on IRC when they have questions, so feel free to share it as well.

At SevenStrokes, we practice Clean Code. Although code speaks louder than words, at the moment my public repositories are heavily outdated. What isn’t as outdated, however, is a short introductory guide I wrote on Clean Code for the internal use of SevenStrokes. Although it is a guide which focuses on the basics, it does make some assumptions on the reader having some knowledge about programming. You’ll notice that the examples are primarily written in PHP, but are applicable in all languages.

Clean code architectures

The article answers the question of why good code matters, explains what good code is, and covers the three pillars of good code: syntax, architecture, and workflow. It shows examples of how to write good code, introduces the more abstract architectural jargon, and surveys the different tools and processes out there.

Without further ado, please click to read: SevenStrokes: Learn how to write Good Code.


Building REST APIs with auto-discoverable auto-tested code

For the past few months, one of the projects I've been working on with SevenStrokes involves building a REST API for a service. REST APIs are tricky things to get right: they're deceptively simple to describe, yet play host to plenty of interesting topics to delve into, such as statelessness, resource scope, authentication, and hypermedia representation.

However I’m going to only talk about the very basics (which many people overlook), and demonstrate how the Richardson Maturity Model can help with automated testing and documentation. If you haven’t heard of RMM yet, I recommend you stop reading and go through it now (especially if you’ve built a REST-like API before).

Let's say our REST API conforms to level 3 of the RMM: we have a set of standardised verbs querying logical resources, we receive standardised status codes, and we can navigate the entire system via links. We've got a pretty good setup so far. All these items in the RMM help our REST API scale better. However, what it doesn't yet help with is keeping our documentation up to date. This is vital, because the holy grail for a REST API is auto-generated, always up-to-date, stylish documentation that promotes your site/product API. There are a bunch of tools that help you do this right now, but I think they're all rather half-baked and used as a bolt-on rather than as a core part of your application.

To solve this, I’m going to recommend one more addition: every resource must have the OPTIONS verb implemented. When invoked, it will respond with the following:

  1. An Allow header, specifying all the other verbs available on the invoked resource.
  2. A response body containing each verb and, nested beneath it (in whatever format), a description of:
    • Its input parameters, including their type and whether they are required
    • A list of example requests and responses, detailing the headers, parameters and body included in the request, and the headers, status code and body included in the response.
  3. A list of assumptions that are being made for each example scenario (if applicable)
  4. A list of effects on the system for each example scenario (if applicable)
  5. A list of links to any subresources with descriptions

Let’s see a brief example:

# OPTIONS /user/

{
    "GET": {
        "title": "Get information about your user",
        "parameters": {
            "foobar": {
                "title": "A description of what foobar does",
                "type": "string",
                "required": false
            },
            [ ... snip ... ]
        },
        "examples": [
            {
                "title": "View profile information successfully",
                "request": { "headers": { "Authentication": "{usersignature}" } },
                "response": {
                    "status": 200,
                    "data": {
                        "id": "1",
                        "username": "username1",
                        [ ... snip ... ]
                    }
                }
            },
            [ ... snip ... ]
        ]
    },
    [ ... snip ... ]
    "_links": {
        "self": {
            "href": "\/makkoto-api\/user"
        },
        [ ... snip ... ]
    }
}

Sound familiar? That's right: it's documentation. Better than that, it's embedded documentation. Oh, and better still, it's auto-discoverable documentation. And if that isn't great enough, it's documentation identical in format to the requests and responses that API clients will be working with.

Sure, it’s pretty nifty. But that’s not all! Let’s combine this with TDD/BDD. I’ve written a quick test here:

Feature: Discover
    In order to learn how the REST API works
    As an automated, standards-based REST API client
    I can auto-discover and auto-generate tests for the API

    Scenario: Generate all tests
        Given that I have purged all previously generated tests
        Then I can generate all API tests

That’s right. This test crawls the entire REST API resource tree (starting at the top-level resource, of course), invokes OPTIONS for each resource, and generates tests based on the documentation that you’ve written.
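To make the idea concrete, here's a minimal sketch of what that generation step could look like inside a Behat context. The HTTP client and the writeScenario() helper are hypothetical stand-ins, not the actual implementation:

/**
 * @Then /^I can generate all API tests$/
 */
public function iCanGenerateAllApiTests()
{
    $this->crawlResource('/'); // start at the top-level resource
}

private function crawlResource($uri)
{
    // Hypothetical HTTP client that returns the response body as a string
    $body = $this->client->request('OPTIONS', $uri);
    $spec = json_decode($body, true);
    foreach ($spec as $verb => $definition) {
        if ($verb === '_links') {
            continue;
        }
        foreach ($definition['examples'] as $example) {
            // Hypothetical helper that writes a .feature scenario from
            // the documented request/response pair
            $this->writeScenario($uri, $verb, $example);
        }
    }
    foreach ($spec['_links'] as $rel => $link) {
        if ($rel !== 'self') {
            $this->crawlResource($link['href']); // recurse into subresources
        }
    }
}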

Let’s see a quick demo in action.

Auto-documentation for REST APIs in action

It's a really great workflow: write documentation first, generate tests from it, and then zone in on your tests in detail. This ensures that your code, tests and documentation are always in sync.

I hope someone finds this useful :) For the curious, the testing tool is Behat, and the output format is application/hal+json, using the HAL specification for linking, and link URI templates.


Using Sahi, Mink and Behat to test HTML5 drag and drop file uploads

For those that don't know, Behat is an excellent tool for testing the business expectations of an application. In other words, it's a behaviour-driven approach towards full-stack application acceptance testing. Mink is a browser abstraction layer, allowing you to easily control different browser emulators through a common interface. Combining the two brings us a mean bag of tricks when it comes to testing web applications.

This morning I set myself the task of writing the tests for one of those spiffy HTML5 drag-and-drop file uploads that are all the rage nowadays. Needless to say, it took far longer than I thought it would. Let's get started.

Testable elements of the HTML5 drag and drop

Drag and drop works by triggering the drop event of an element. This drop event contains a list of files in the format defined by the HTML5 File API. The Javascript can loop over these file objects and perform client-side validation checks, and the data is then POSTed via AJAX to another URL. After the server-side processing is done, we get a response object with the results, which we parse to give the user feedback on whether the upload finally succeeded. As you can see, there are various places we can begin to test.
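As a rough sketch of the client-side flow we want to test (the element ID and upload URL are made up for illustration):

var dropbox = document.getElementById('dropbox');
// dragover must also be cancelled, or the browser will never fire drop
dropbox.addEventListener('dragover', function(e) { e.preventDefault(); });
dropbox.addEventListener('drop', function(e) {
    e.preventDefault();
    var files = e.dataTransfer.files; // a FileList, per the HTML5 File API
    for (var i = 0; i < files.length; i++) {
        if (files[i].type !== 'image/png') continue; // client-side validation
        var data = new FormData();
        data.append('upload', files[i]);
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/upload', true); // hypothetical endpoint
        xhr.onload = function() {
            console.log('Server said: ' + this.responseText);
        };
        xhr.send(data);
    }
});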

Attempt 1: Just test the AJAX POST

Because the data is finally POSTed via AJAX, one option is to just test that and leave the rest to manual QA. In fact, we can forego AJAX altogether and use PHP with cURL to make the request and check the response. Easy. Actually, too easy – we're ignoring what makes our app cool – the drag and drop!

Attempt 2: Test the legacy file input fallback

Bah. This isn’t why you’re reading this post. You know how to do this already. And anyway, you’ve probably already got a legacy test but now you want to test the spiffy HTML5 one. Moving on.

Attempt 3: Use Sahi to run your test

Hello Sahi! Sahi is a web test automation tool with a fully fledged GUI. More relevant here is that it supports Javascript, unlike its faster headless relatives (yes, there's PhantomJS, but I wouldn't mind actually seeing what's going on in a drag-and-drop widget).

Before we even hit Mink and Behat, try recording the events to turn into a Sahi script. You’ll quickly notice that Sahi (unsurprisingly) doesn’t properly record the event of dropping a file onto the page.

The issue here is that Sahi has no concept of files outside the emulated browser window. There's a sneaky trick around this: in our Behat definition, we'll run evaluateScript to dynamically add a file input field, then attach our image file to that field. Now we can grab the file object from that!

$session = $this->getSession();
$session->evaluateScript('$("body").after("<input type=\"file\" id=\"sahibox\">")');
$session->getPage()->attachFileToField('sahibox', '/home/dion/image.png');
$session->evaluateScript('myfile = $("#sahibox").get(0).files[0]');

If we run the Javascript manually, it works fine. It also creates a good opportunity to stop and peek at exactly what your File object is built from. In Sahi, however, we don't get the file object. Why? Because file input values cannot be manipulated by Javascript, for security reasons. But then why does Sahi even provide a function for this? Because "Sahi intercepts the request in the proxy, reads the file off the file system and inserts it into the multipart request". So Sahi just does a sneaky slide into the form submit at the end.

Taking a peek at Sahi’s setFile documentation, they note they have a _setFile2 function – which essentially converts the input field into a text field in the process. This isn’t going to work either, because we actually need the file object to test.

Finally, Sahi provides a third way of selecting files to upload: emulating the native events involved in selecting a file. It's at the bottom of their setFile documentation. It basically walks through the steps of opening up the file browse dialogue, typing in the file path with keystrokes … on and on until we get what we want. It'll work!

Yes, it’ll work. But not nicely. It’s slow. It’s littered with _waits(). Wouldn’t it be nicer if we could create the file object ourselves rather than emulate browsing our filesystem?

Attempt 4: Grab a file object from an image already on the server

Aha! We’ve already got images in our app, let’s just try to upload one of those. We’ll need two things: an image source, and a way to create a file.

For an image source, we'll grab one with an XMLHttpRequest() in Javascript. We need to make sure that this image source is served from within Sahi's proxy, though, because otherwise we'd run into cross-domain issues. That's fine: we'll upload the Sahi logo as our test image.

To create a File, we'll create a Blob instead. Files inherit from Blobs, and so we can swap them in and out. Right, let's see.

var xhr = new XMLHttpRequest();
xhr.open( "GET", "http://sahi.example.com/_s_/spr/images/sahi_os_logo1.png", true );
xhr.responseType = "arraybuffer";
xhr.onload = function( e ) {
    var arrayBufferView = new Uint8Array( this.response );
    window.myfile = new Blob( [ arrayBufferView ], { type: "image/png" } );
};
xhr.send();

Great! So window.myfile will be populated with our file object now. But a test that relies on the existence of a Sahi image? Nasty.

Attempt 5: Create our file object from a base64 string

Simple but effective and none of that extra request messing around. Let’s create an image first. I made a black 100px square image for testing. The simpler the image the better, as it’ll make your base64 string smaller. Now let’s turn that image into base64:

$ base64 image.png 
iVBORw0KGgoAAAANSUhEUgAAAGQAAABkCAAAAABVicqIAAAACXBIWXMAAAsTAAALEwEAmpwYAAAA
B3RJTUUH3gIYBAEMHCkuWQAAAB1pVFh0Q29tbWVudAAAAAAAQ3JlYXRlZCB3aXRoIEdJTVBkLmUH
AAAAQElEQVRo3u3NQQ0AAAgEoNN29i9kCh9uUICa3OtIJBKJRCKRSCQSiUQikUgkEolEIpFIJBKJ
RCKRSCQSiUTyPlnSFQER9VCp/AAAAABJRU5ErkJggg==

Great. Now, as it turns out, the folks at Mozilla have already worked out how to decode a base64 string into a Uint8Array. Steal their functions and we're good to go :)
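If you'd rather not paste in Mozilla's full implementation, a simpler (if less strict) equivalent using the browser's built-in atob() looks like this:

function base64DecToArr(b64) {
    var binary = atob(b64); // decode base64 into a binary string
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return bytes;
}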

So our test script will:

  1. Convert a base64 image into a Uint8Array
  2. Use that Uint8Array to construct a Blob with the mimetype of image/png
  3. Set various metadata that file uploaders need, such as file name and last modified date
  4. Create a new list of files, and put our Blob in there
  5. Create a new “drop” event.
  6. Add our list of files to the dataTransfer attribute of that drop event
  7. Trigger our on-page element with the drop event
  8. Wait for the AJAX call and server-side processing to be done

And here is the full script in action from our Behat definition (with the base64 string snipped out because it’s very long):

$session = $this->getSession();
$session->evaluateScript('myfile = new Blob([base64DecToArr("...snip...")], {type: "image/png"})');
$session->evaluateScript('myfile.name = "myfile.png"');
$session->evaluateScript('myfile.lastModifiedDate = new Date()');
$session->evaluateScript('myfile.webkitRelativePath = ""');
$session->evaluateScript('sahiFileList = Array()');
$session->evaluateScript('sahiFileList.push(myfile)');
$session->evaluateScript('e = jQuery.Event("drop")');
$session->evaluateScript('e.dataTransfer = { files: sahiFileList }');
$session->evaluateScript('$("#dropbox").trigger(e)');
$session->wait(1000);

Great! It’s testable!


A short and simple beginners look at Markdown

At SevenStrokes, we forego email support and go straight to a forum / discussion-based system built on Vanilla. This is great: we can organise client discussions much better, focus discussions on certain topics, and split and merge topics as they spin off from the originals, all through an intuitive interface that takes no time to learn. Best of all, we escape those badly formatted client emails with the annoying 10-line signatures and get straight to the point. That's the reason our discussion post formatting is based on Markdown.

Too bad it’s not obvious enough how to use Markdown.

I wrote this very short, basic, and purposely sparse guide to What is Markdown? – I hope you like it :)


Installing Gentoo on Android with chroot

Note: last edited 8th Nov 2014

Installing Gentoo in a chroot alongside Android is easy, so if you already use Gentoo and have an Android phone, there’s really no reason why you shouldn’t do it. With a ginormous phablet like the Samsung Galaxy Note 2 and a bluetooth keyboard, you can get a super-mobile full Linux workstation everywhere you go.

Before we begin, let’s see the pretty pictures. Here’s Larry saying hello :) (Installing a talking cow should be the top priority once the base system is up and running)

Larry saying hello on Android

… and of course a shot of emerging stuff …

Gentoo on Android compiling stuff

… and finally we’re running Enlightenment 17 with the Webkit-based Midori browser with X, accessed via (Tight)VNC …

E17 on Android with Gentoo Linux

Installing Gentoo on Android

Prerequisites first: you'll need a rooted device, and a terminal with busybox. I recommend Android Terminal Emulator and busybox by stericson. I would also recommend installing Hacker's Keyboard, which gives you a full key layout.

Installing is rather straightforward: modern Android phones usually run on ARMv7, so just follow the appropriate handbook. If you are installing onto your internal storage (not an external SD), you can skip to chapter 5 :)

You will need to be root to install, so run su - in your terminal emulator of choice. Similarly, remount Android read-write so that you can create the necessary files for Gentoo: mount -o remount,rw /. Finally, remember to install into /data/gentoo instead of /mnt/gentoo so as not to conflict with Android's mounting preferences.

Since we’re only installing a chroot and not booting alongside android, you can safely skip configuring the kernel, configuring fstab, configuring networking, and setting up the bootloader.

When mounting, you will need to do so as the root user, and use the busybox implementation of mount for --rbind support, like so:

$ su -
[ ... superuser access granted ... ]
$ cd /
$ mount -t proc proc /data/gentoo/proc
$ busybox mount --rbind /dev /data/gentoo/dev
$ busybox mount --rbind /sys /data/gentoo/sys
$ chroot /data/gentoo /bin/bash
[ ... now in the chroot ... ]
$ source /etc/profile

This is assuming you’ve put Gentoo in /data/gentoo

Android quirks

There doesn’t seem to be a /dev/fd on Android, so let’s fix that:

[ ... in Gentoo chroot ... ]
$ cd /dev
$ ln -s /proc/self/fd

Portage won’t be able to download files as it doesn’t download as root, but instead as another user by default. No problem:

[ ... in /etc/portage/make.conf ... ]
FEATURES="-userfetch"

Sometimes I’ve noticed that on bad reboots the /etc/resolv.conf can get reset. This will cause host resolving issues. Resolving is as easy as:

[ ... in /etc/resolv.conf ... ]
nameserver 8.8.4.4
nameserver 8.8.8.8

It is a good idea to set your main user to the same UID as the normal Android user. Also, running id -a in Android will show you that your user is part of various reserved Android groups. To fix issues such as your Gentoo user's (in)ability to go online or use bluetooth, create these groups in your Gentoo install with matching GIDs, and add your user to them. Here's a list of Android UIDs and GIDs. For example, I needed to add my Gentoo user to the groups with GIDs 3003 and 3004 before it could successfully go online.
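For example, something like this inside the chroot should do it (the group names are arbitrary since only the GIDs matter, and youruser is a placeholder):

[ ... in Gentoo chroot ... ]
$ groupadd -g 3003 aid_inet
$ groupadd -g 3004 aid_net_raw
$ gpasswd -a youruser aid_inet
$ gpasswd -a youruser aid_net_raw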

If you want an X server, VNC will do the trick. I recommend android-vnc-viewer: 24-bit colour seems to work, and changing the input method to touchpad rather than touchscreen makes it relatively usable.

Finally, with no fan and no big heatsink, a mobile phone can run hot. So even though monsters like the Galaxy Note 2 have 4 cores, I recommend sticking to MAKEOPTS="-j2".


vtemplate 1.0.0 released

Today I'd like to release vtemplate 1.0.0. I've blogged about vtemplate before, but I am now confident enough in its maturity to make an official release. Jeff Atwood has spoken about The Rule of Three in Reusable Software, and I'm happy to say that I've used vtemplate in far more than three sites since then. Oh, and if you are not a web developer, this post is probably not for you.

What is vtemplate?

The Git repository says it well enough: vtemplate is a boilerplate setup for starting new [web] projects which combines various industry standards. There are a few tweaks here and there, but otherwise it’s loyal to upstream. You’ll recognise many favourite technologies used in vtemplate, ranging from Stylus to Kohana to Behat. But before I run through these technologies, I’d like to highlight the ideals behind vtemplate:

  1. Favour industry standards over proprietary and personal hacks
  2. Promote standards that emphasise readable, maintainable, and simple code
  3. Don’t restrict or govern your desired code architecture

Let’s take a brief tour over the awesome stuff we get with vtemplate.

HTML5Boilerplate

You really can’t go wrong with HTML5Boilerplate. It’s a great piece of collaboration by very experienced frontend web coders and very well documented. This is a great first step to writing responsive, cross-browser, HTML5-valid code. This also brings in so many other frontend joys like HumansTXT, Modernizer, JQuery and so on.

Stylus

If you're using another CSS preprocessor, this'll show you just how clean your CSS can really be. If you're not yet using a preprocessor … well, you'll soon find out why it's so awesome. Admittedly Stylus isn't as popular as the big boys like LESS, but it has a very lenient syntax and is easy to swap out.

Mustache

Learn why writing logic in your templates is evil! Stop writing Model-View-Controller and start writing Model-View-Template-Controller. Don’t let the backend team hold up the frontend or vice versa.

WYMEditor CMS

Why are your clients modifying their site with bloated CMSes and complex, unsemantic rich text WYSIWYGs? Keep things simple, and let your code govern the CMS, not the other way around. WYMEditor reads and writes directly to clean segments of frontend files and understands your CSS. Best of all, it makes it easy to review changes with version control. Read more about the simple CMS approach here.

Kohana

Modular (quite literally split into many Git submodules), PSR-0 compatible, forward-thinking PHP web delivery mechanism with high coding standards, extremely flexible routing, and an HMVC architecture.

Composer

Composer is everything you wished PEAR could’ve been, and more.

Swiftmailer

Most webapps need a mailer library. Swiftmailer is a great one.

PHPSpec2

We all love TDD, right? BDD is even better because it’s semantic TDD! PHPSpec2 provides a really simple (but powerful) BDD tool complete with clean mocking and stubbing.

Behat and Mink

Another great tool from the same guys who brought us PHPSpec2. Whereas PHPSpec covers all your unit testing, Behat is excellent for full-stack and UI testing, with the beauty of the semantic Gherkin syntax. Mink is the icing on the cake, giving you an easy-to-use abstraction for emulating browsers.

Phing

Test. Configure. Build. Deploy. Do it again!

So if you’re developing a web application with PHP, check out vtemplate :)

What’s new in vtemplate 1.0.0?

Well, apart from it being the first release, where by definition everything is new, there have been a few more tweaks since my last post on it:

  • Phing building, deploying and quality control with all sorts of goodies
  • UTF8 and URL replacement bugs fixed in CMS
  • Sahi comes to Behat
  • New “Photoshopper” driver for image manipulation needs
  • More Behat feature definitions, as described in my post about Behat
  • Improved HumansTXT
  • Default encourages variable page titles and metas
  • moult/contact form bundled
  • Kohana bumped to develop version
  • Simplified installation / configuration

Feel free to fork :)


Motion tracking with Javascript, HTML5 and a webcam

Why would you use the web for motion tracking? Simple: HTML5 Canvas is exciting, Javascript is (pretty) cool, and combined with a lazy afternoon, we can create an ultra-simple hand motion tracking and colour recognition system.

This isn't entirely true. It doesn't track the hand; it tracks a bright blue bottle cap I found on the floor. Even more truthfully, it tracks anything bright blue. But enough chat, here's a demonstration. Just click the small button with the dash in it to get started, grab something blueish and wave it in front of your camera. It won't be as good as we got it, since we adjusted it for specific lighting conditions, but it's enough as a proof of concept.
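The core idea is just a per-frame scan of the webcam image drawn onto a canvas, averaging the positions of sufficiently blue pixels. Here's a minimal sketch; the thresholds are illustrative, as ours were tuned to the room's lighting:

function trackBlue(ctx, width, height) {
    var pixels = ctx.getImageData(0, 0, width, height).data; // RGBA bytes
    var sumX = 0, sumY = 0, count = 0;
    for (var i = 0; i < pixels.length; i += 4) {
        var r = pixels[i], g = pixels[i + 1], b = pixels[i + 2];
        if (b > 150 && b > r * 1.5 && b > g * 1.5) { // "bright blue" test
            var p = i / 4;
            sumX += p % width;
            sumY += Math.floor(p / width);
            count++;
        }
    }
    // Centroid of all blue-ish pixels, or null if nothing blue is visible
    return count ? { x: sumX / count, y: sumY / count } : null;
}

Call it once per frame (say, from a requestAnimationFrame loop that draws the video element onto the canvas first) and you have a crude tracker.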

We’ve also got pretty pictures to show. Here’s one of the quick n’ dirty strap we used to embed the bottle cap in.

The quick n' dirty bottle-cap strap

And here is one of it in action.

The motion tracker in action

You can see the (bad, hackish, thrown together) code for it in my playground repository on GitHub.


In order to discuss BDD, as a blogger, I need to talk about Behat

If you’re developing a web application, especially one that uses PHP, you should know about Behat.

Behat introduces itself as “a php framework for testing your business expectations”. And it does exactly that. You write down your business expectations of the application, and it automatically tests whether or not your application achieves them.

You begin every feature description with a three-liner following this form:

Feature: Foo bar
In order to … (achieve what goal?)
As a … (what target audience?)
I need to … (use what feature?)

This is then split up into individual scenarios of using the feature, all of which are described in natural English following the Gherkin syntax. Behat then uses Mink, a browser abstraction layer, to run these tests.

I’ve been enjoying Behat for quite some time now, and I’ve noticed certain tests I need to write that come up again and again which aren’t included in the default Mink definitions.

The first is to check whether or not an element is visible. These days, Javascript-heavy UIs use a lot of hiding and showing, and often this is vital to the business expectations of how the website should work. These sorts of tests need a non-headless browser emulator, such as Sahi. Simply prefix your test with the line @mink:sahi, and now we can use the following definition:

/**
 * @Then /^"([^"]*)" should be visible$/
 */
public function shouldBeVisible($selector)
{
    $element = $this->getSession()->getPage()->find('css', $selector);
    if (empty($element))
        throw new Exception('Element "'.$selector.'" not found');

    $display = $this->getSession()->evaluateScript(
        'jQuery("'.$selector.'").css("display")'
    );

    if ($display === 'none')
        throw new Exception('Element "'.$selector.'" is not visible');
}

… so you can now write …

Then "div" should be visible

Worth highlighting is the ->evaluateScript() function being called: anything you can check with jQuery can be tested. This is pretty much everything.

Another useful set of definitions deals with images. Modern web apps handle image uploading quite a bit, and often this comes with resizing or cropping (for avatars, keeping to layout widths, thumbnails). Wouldn't it be great if you could just write…

Given I have an image with width "500" and height "400" in "/tmp/foo.png"
Then the "img" element should display "/tmp/foo.png"
And the "img" element should be "500" by "400" pixels

… and of course, now you can. All this code is included in vtemplate under the FeatureContext file.

Happy testing!


VTemplate: a web project boilerplate which combines various industry standards

You’re about to start setting up the delivery mechanism for a web-based project. What do you do?

First, let’s fetch ourselves a framework. Not just any framework, but one which supports PSR-0 and encourages freedom in our domain code architecture. Kohana fits the bill nicely.

Let’s set up our infrastructure now: add Composer and Phing. After setting them up, let’s configure Composer to pull in PHPSpec2 and Behat along with Mink so we can do BDD. Oh yes, and Swiftmailer too, because what web-app nowadays doesn’t need a mailing library?

Still not done yet, let's pull in Mustache so that we can do sane frontend development, and merge it in with KOstache. Now we can pull in the latest HTML5Boilerplate and shift its files to the appropriate template directories.

Finally, let’s set up some basic view auto loading and rendering for rapid frontend development convenience, and various drivers to hook up to our domain logic. As a finishing touch let’s convert those pesky CSS files into Stylus.

Phew! Wouldn’t it be great if all this was done already for us? Here’s where I introduce vtemplate – a web project boilerplate which combines various industry standards. You can check it out on GitHub.

It's a little setup I use myself and is project-agnostic enough that I can safely use it as a starting point for any of my current projects. It's fully open-source, and built on the work of hundreds of good frontend designers and PHP developers – so go ahead and check it out!


PHP CMSes done right: how to enable clients to edit content appropriately

In the previous post, I talked about how CMSes harm websites. I debunked the oft-used selling points of faster, cheaper, and client empowerment, and explained how CMSes butcher semantic markup, code decoupling, tasteful style, speed optimisations, ease of maintenance, and code freedom.

Now I want to mention a few ways how a CMS can be appropriately used to build a website. There are two scenarios I want to cover: using pre-built scripts and prioritising custom code first.

Pre-built scripts

By pre-built, I mean all you really want is an off-the-shelf setup and don’t care for customisations. So grab an out-of-the-box CMS (Joomla, Drupal, WordPress, etc), install an associated theme and several modules from the CMS’s ecosystem and glue them together. With this sort of set-up, you could have yourself a complex website system such as an e-commerce or blog site running within a day, looking good, and costing zilch if you have the know-how.

In this scenario, a CMS should be your top choice. The benefit of speed and set-up far outweighs the extremely costly alternative of custom coding such a complex system. It is for this reason that thinkMoult runs on WordPress: I just wanted a blog to run on the side with minimal fuss.

As the complexity of the system grows, this benefit also grows. It would be rare to recommend that the average client build a blog, an e-commerce system, a forum, or even a ticketing system from scratch.

However once you plan on doing lots of customisations, you’re stuck.

Did that really solve anything?

Not yet; we've simply outlined a scenario where the benefit far outweighs the effort required to invest in a custom product. Unfortunately, all the issues still exist.

So how do we build a CMS for products which don't fit those requirements – either small, tailored "static poster" sites where first impressions are key, or customised systems?

Sceptics might question why building a CMS now is any different from the CMS giants of the past. My answer is that the PHP ecosystem is maturing and the industry is standardising (see PSR-0, Composer, and the latest PHP changelogs). Previously we relied on CMSes mainly because they defined a set of conventions we could live with, but now we have proper conventions industry-wide.

Place custom code first!

VTemplate CMS

The answer is simple: the CMS should not govern your code! Markup-generating, style-generating, logic-modifying systems should be completely decoupled at the very least, if not thrown away entirely. The trick to doing this quickly is to isolate exactly what a CMS needs to do, and that is to allow the client to edit content.

That’s right: edit content. Not glue modules, not define routing, not to restyle the page, and never, ever, to touch anything to do with logic.

If they ever need anything more complex than editing content, make a module for it. Make that custom module on top of your existing code, and link it to a config repository – nothing else. All it should do is flick switches, not act as a heavyweight crane.

Now, for editing content – I have five strategies to fix the “butchering” aspect of CMSes:

  1. Start by ensuring your frontend code is completely decoupled from all logic. I like to do this by using Mustache as a templating language. It’s simple by design. If your frontend developers can’t break the site’s logic, your client can’t either.

  2. Write your markup and styles perfectly. Writing perfect markup and styles means your editor won't have to care about whether that extra <div id="does_things" class="active wrapper_l12"> was actually vital to the page operating properly. Everything is simple and only uses standard markup tags.

  3. Use a semantic editor. A semantic editor preserves the goodness of point 2. I use WYMEditor, which has been around for a while. Not only does it stick to the basic tags, it reads extra styles from documented CSS. This way you won't have clients with no taste inventing new styles and layouts; they can only use what you've provided.

  4. Beautify the code! PHP Tidy is built in and can produce well-indented, cleanly styled markup (see the sketch after this list). Don't have faith in automatic beautifiers? With your perfect markup and complete style/markup separation from points 2 and 3, all your beautifier deals with is the most basic of markup – which probably only needs indenting before it's called classy code (no pun intended)!

  5. Whitelist editable elements, don't blacklist. The default state for content should be non-editable. Don't give them more than they ask for, because otherwise they will touch it, and inevitably break it. This means custom-isolating segments of editable content for the client (I move them into a Mustache partial), and testing them before handing the baton to the client. It also means you can monitor things much more easily, such as by inserting an update notifier so that you can run git diff and verify they didn't somehow still bork things due to Murphy's Law.
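As a sketch of point 4, PHP's built-in Tidy extension can repair and re-indent an edited fragment before it is written back; here $editedHtml is a placeholder for whatever the editor produced:

$clean = tidy_repair_string($editedHtml, array(
    'indent' => true,          // re-indent nested elements
    'wrap' => 120,             // keep lines readable
    'show-body-only' => true,  // we store a fragment, not a whole page
), 'utf8');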

Et voila! Your client can now edit the content, not break the logic, keep it semantic, keep the code beautiful, and only touch what he wants. He also has a series of switches for the more complex areas of the site. You're also keeping watch via that update notifier I mentioned (with a monthly fee, of course).

Back-end wise, you've lost nothing of the modular ecosystem that CMSes also advertise, because now we're coding to the PSR-0 standard and can pull in the various packages that people offer.

What did we lose? Upfront development speed. What did we gain? Everything.

Note: the picture of the CMS is from a custom solution I developed for Omni Studios. Underneath it’s powered by Mustache, PHP Tidy, and WYMEditor, and good markup/styles, all mentioned in this post. So by custom, I mean rebranding a series of industry tools.


Content Management Systems harm websites

Yes, you read that right! Customers looking to build a web application are often wooed by the many ‘benefits’ of using a Content Management System. But before we begin: What is a content management system (abbreviated CMS)?

When a web site is built, complicated code is written to allow it to function. Some of this code builds what we see on the surface of each page: the design of the site, the layout, and its content.

Content management systems harm websites

(Oh dear, we’ll explain that screenshot later!)

Web developers have built systems which allow clients to edit content themselves and have it updated instantly, without having to go through experienced web developers. These systems are called Content Management Systems, and they supposedly offer these benefits:

  • Site content changes are seen instantly as the client thinks it up
  • Clients feel more ‘in control’ of the site
  • No need to pay web developers to make small and frequent edits

Sounds excellent, right? Cheaper, faster, and you’re in control. Well, unfortunately, it’s not the entire story.

What most clients don't realise is that editing a website is not like editing a Word document. CMSes provide a familiar, easy-to-use editing interface, but it causes serious side effects:

  1. The CMS editors don’t know how to cleanly separate content and style. This is the difference between what is being displayed, and how it should look like. This cruft builds up over time, making your page load slower and making it increasingly hard to make changes in the future.
  2. The CMS editors only allow you to change what things look like on the surface. Although you might not notice the difference, search engines are less likely to be able to understand your pages, and this will negatively affect your search engine rankings.
  3. They don’t discipline against bad media practice. These editors will let you upload any type of media without any considerations of how to optimise them for the web. Unoptimised images and videos mean slower website loading, more server loads (and thus server costs), and often ugly looking content.
  4. They add a lot of unnecessary code. This is another invisible side effect which leads to slower page loads and poorer search rankings.
  5. The editors don’t refer to the underlying engine when linking pages. This means that should you want to rename pages for SEO, or move site, your links are almost guaranteed to break.
  6. There is no version control. It becomes much harder to track series of changes to a single page and undo edits when problems occur.
  7. It gives you the illusion that you are an interface designer. Experienced interface designers pay attention to details such as padding, ratios, consistency, and usability that clients simply cannot match. A well designed site will slowly degrade in usability and aesthetics until it has to be redone from scratch.
  8. It lets anybody change anything. It doesn't stop you if you're changing a vital system page, butchering crafted copy that has undergone hours of SEO, or editing text you don't have the authority to touch. It becomes a human single point of failure.
  9. It exposes you to system internals. If you’re a client, all you really want to do is edit some text on your page. Modifying forms and dealing with modules is out of your league, and likely out of your contract scope. You’ll have to learn how to use a complex system just to change what is often just a simple site.
  10. You’re stuck with it. CMSes are walled gardens. They lock you into the system you’ve chosen and when you want something customised in the future, don’t be surprised when you get billed extra.

With the site almost fully in the client's hands, clients can unknowingly break the system internals or, worse, install third-party low-grade modules which can compromise the site's security. With the power to edit now fully in the hands of clients, these changes no longer pass through the developer's eyes. Over time, they accumulate, and you end up with a broken site.

It isn't all cheaper, either: to try to prevent some of these effects, developers have to spend extra time developing custom modules for you to manage sections of the site. These costs, of course, have to be passed on to you.

CMSes are also rapidly changing and constantly targeted by hackers. Not only does this mean you’re open to security breaches, the server will likely be under extra load by hackers and bots attempting to crack your site. You’re then pushed into a maintenance race to constantly update modules and your system that quickly gets forgotten: until you’re left with an outdated, unable-to-upgrade system that’s a sitting duck for hackers, even if you’ve never needed to make a single change to your content.

Did you receive training for how to use a CMS to edit your site? Bad news. You’re the only one who knows how, and probably not for long. CMSes change very rapidly – so your training will become outdated. There also isn’t much of a standard when it comes to CMSes, so you’re restricted to development firms who specialise in your CMS should you ever need professional help in the future.

Funnily enough, using a CMS is no picnic for developers, either. All CMSes cause developers to build things not the way they should be built, but the way the CMS forces them to build it. This may save time in the short-term, but often leads to costly maintenance nightmares in the long-term.

Together, these effects turn the craftsmanship of your site from a costly investment in experienced developers into a cheap, ineffective website. You're practically throwing away the money you spent going through detailed design revisions, search engine optimisation, training, website optimisation, responsive design, and even choosing the firm you hired to begin with. And given the accumulative nature of these adverse effects, you can be guaranteed that any changes you need in the future will become much, much more costly.

These aren’t one-off improbable horror stories. These are things I have witnessed again and again with CMS-run sites. It is practically guaranteed to happen: the only question is when. The industry knows this, too – it’s just that CMSes are good at the short term and the prospect of self-editing content is alluring as a selling point. But it’s time to spend your money properly: get an expert craftsman to manufacture it right the first time, and keep the quality you paid for.

… coming up next: CMSes done right.


Separating the core application from the delivery framework with Kohana

This post is about developing web applications that don’t depend on the web.

Kohana MVC - A separated delivery mechanism architecture

MVC is probably the most popular architecture for web applications. But what’s interesting about MVC is that it’s not actually an architecture meant for your core application. It is merely a delivery mechanism.

With this in mind, a well-developed application treats the delivery mechanism as a plugin and cleanly separates the core application from the web. It should be possible to remove the internet, with all of its methods of interaction (e.g. its HTTP request/response interface), and still have a working "core" application which you can use elsewhere, say, to make a desktop or mobile application.

In short: your business logic doesn’t rely on the internet to exist.

With Kohana, or really any modern MVC framework which supports the PSR-0 standard, this is surprisingly easy to do. I’d like to share the convention I use.

In Kohana, all logic goes in application/classes/ (or the equivalent location in its cascading filesystem). This directory will contain all your delivery logic: controllers, views, models, and perhaps some factories to wire your app together with dependency injection.

Your actual core logic is kept in a separate repository, to force yourself to remove all dependencies. When combined, I like to store the core logic in application/vendor/; with Git this can be done with a submodule. This way, MVC and insert-your-architecture-here are cleanly separated.
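For instance, something like this (the repository URL and path are placeholders):

$ git submodule add git://example.com/myapp-core.git application/vendor/MyApp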

You can then add your core application to Kohana's autoloader (in application/bootstrap.php for convenience):

spl_autoload_register(function($class) {
    Kohana::auto_load($class, 'vendor/Path/To/App/src');
});

And that’s it! With a little discipline we suddenly get a massive benefit of future flexibility.

If you are interested in a project which uses this separation, please see my WIPUP project. (Disclaimer: WIPUP is a side-project and is still in progress). More reading by Bob Martin here.