Technical

Clean code, and how to write it

Note: this article was originally circulated on #cleancode and #kohana on Freenode and is now recorded here as an archive. It seems very useful as something to link people to on IRC when they have questions, so feel free to share it as well.

At SevenStrokes, we practice Clean Code. Although code speaks louder than words, at the moment my public repositories are heavily outdated. What isn’t as outdated, however, is a short introductory guide I wrote on Clean Code for internal use at SevenStrokes. Although the guide focuses on the basics, it does assume the reader has some knowledge of programming. You’ll notice that the examples are primarily written in PHP, but they are applicable to all languages.

Clean code architectures

The article answers the questions of why good code matters and what good code is, and covers the three pillars of good code: syntax, architecture, and workflow. It shows coding examples of how to write good code, introduces you to the more abstract architectural jargon, and surveys the different tools and processes out there.

Without further ado, please click to read: SevenStrokes: Learn how to write Good Code.

Life & much, much more

Competitive weight loss with WeightRace.net

So last year (or perhaps even the year before, time flies!) two people close to me participated in a friendly weight-loss competition. To do this, they used WeightRace.net.

WeightRace is a small web application I built a while ago for fun, which allows up to four contestants to compete towards a weight goal which they would set. They would be prompted daily for weight updates, and would set a reward for the winner. It also used some lightweight gamification, so contestants could earn bonus “wobbly bits” when achieving milestones like their target BMI.

But enough talking about the application — applications are boring! Much more interesting are the results! Let’s see:

WeightRace - competitive weight loss

The two contestants — whom we shall refer to as Rob and Julie, which may or may not be their real names — and their results are shown in the graph above. Julie is red, Rob is blue, and their linear trajectories towards their weight goals are shown via the corresponding coloured dotted lines.

If I could hear a sped-up commentary of the results, it would truly be exciting! Rob makes an excellent head-start, well ahead of his trajectory, whereas Julie has trouble getting started. As we near the holiday (Christmassy) season, we see Rob’s progress plateauing, whereas Julie gets her game on and updates with rigorous discipline. Also great to notice are the regular upward hikes in Julie’s weight – those correspond with weekends! As the holidays pass, Rob gains weight and is unable to recover.

In the end, although Julie wins the Race, neither Julie nor Rob met their weight goal (note that in terms of absolute figures, Rob actually wins). However, this was not all in vain. Almost another year has passed since this race finished, and I can see that Rob’s weight is now well under control and that he has indeed achieved his goal, so I’d like to think that the WeightRace played a role.

In particular, the WeightRace helped raise daily awareness. I believe that it was this daily awareness of the current weight that helped most in the long-term. In addition, the WeightRace helped Rob’s body to stabilise around 90kg for half a year! I suspect his body figured out that it could manage at that weight, which made it easier for him to (after the WeightRace) continue to lose weight at a healthy pace.

For those interested in playing with the WeightRace, you can check it out online at WeightRace.net. Note though that it is not actually complete, but it works well enough for a competition. For those interested in the source, it’s up on my GitHub.

Technical

Building REST APIs with auto-discoverable auto-tested code

For the past few months, one of the projects I’ve been working on with SevenStrokes involves building a REST API for a service. REST APIs are tricky things to get right: they’re deceptively simple to describe, yet play host to plenty of interesting topics to delve into, such as statelessness, resource scope, authentication, and hypermedia representation.

However I’m going to only talk about the very basics (which many people overlook), and demonstrate how the Richardson Maturity Model can help with automated testing and documentation. If you haven’t heard of RMM yet, I recommend you stop reading and go through it now (especially if you’ve built a REST-like API before).

Let’s say our REST API conforms to level 3 of the RMM: we have a set of standardised verbs, querying logical resources, receiving standardised status codes, and being able to navigate the entire system via links. We’ve got a pretty good setup so far. All these items in the RMM help our REST API system scale better. However, what it doesn’t yet help with is keeping our documentation up to date. This is vital, because we know that the holy grail for a REST API is auto-generated, always up-to-date, stylish documentation that promotes your site/product API. There’s a bunch of tools that help you do this right now, but I think they’re all rather half-baked and used as a bolt-on rather than a core part of your application.

To solve this, I’m going to recommend one more addition: every resource must have the OPTIONS verb implemented. When invoked, it will respond with the following:

  1. An Allow header, specifying all the other verbs available on the invoked resource.
  2. A response body, containing the verbs, and under them in the hierarchy of the body (in whatever format), a description of:
    • Their input parameters, including their type and whether they are required
    • A list of example requests and responses, detailing what headers, parameters and body are included in the request, and what headers, status code and body are included in the response.
  3. A list of assumptions that are being made for each example scenario (if applicable)
  4. A list of effects on the system for each example scenario (if applicable)
  5. A list of links to any subresources with descriptions

Let’s see a brief example:

# OPTIONS /user/

{
    "GET": {
        "title": "Get information about your user",
        "parameters": {
            "foobar": {
                "title": "A description of what foobar does",
                "type": "string",
                "required": false
            },
            [ ... snip ... ]
        },
        "examples": [
            {
                "title": "View profile information successfully",
                "request": { "headers": { "Authentication": "{usersignature}" } },
                "response": {
                    "status": 200,
                    "data": {
                        "id": "1",
                        "username": "username1",
                        [ ... snip ... ]
                    }
                }
            },
            [ ... snip ... ]
        ]
    },
    [ ... snip ... ]
    "_links": {
        "self": {
            "href": "\/makkoto-api\/user"
        },
        [ ... snip ... ]
    }
}

Sound familiar? That’s right. It’s documentation. Better than that, it’s embedded documentation. Oh, and better still, it’s auto-discoverable documentation. And if that isn’t great enough, it’s documentation identical to the format of requests and responses that API clients will be working with.

Sure, it’s pretty nifty. But that’s not all! Let’s combine this with TDD/BDD. I’ve written a quick test here:

Feature: Discover
    In order to learn how the REST API works
    As an automated, standards-based REST API client
    I can auto-discover and auto-generate tests for the API

    Scenario: Generate all tests
        Given that I have purged all previously generated tests
        Then I can generate all API tests

That’s right. This test crawls the entire REST API resource tree (starting at the top-level resource, of course), invokes OPTIONS for each resource, and generates tests based on the documentation that you’ve written.
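To give a rough idea of how such a crawler could work, here’s a heavily simplified sketch in plain PHP. The base URL and the generate_feature() helper are made up for illustration – the real implementation lives in Behat context classes:

<?php
// A simplified, hypothetical sketch of OPTIONS-based test discovery.
function crawl($base, $path, &$seen = array())
{
    if (isset($seen[$path])) return;
    $seen[$path] = true;

    // Invoke OPTIONS on the resource and decode its embedded documentation.
    $context = stream_context_create(array('http' => array('method' => 'OPTIONS')));
    $docs = json_decode(file_get_contents($base . $path, false, $context), true);

    foreach ($docs as $verb => $spec) {
        if ($verb === '_links') continue;
        // Every documented example becomes a generated test scenario.
        foreach ($spec['examples'] as $example) {
            generate_feature($path, $verb, $example); // hypothetical helper
        }
    }

    // Follow the hypermedia links to subresources and repeat.
    foreach ($docs['_links'] as $rel => $link) {
        if ($rel !== 'self') crawl($base, $link['href'], $seen);
    }
}

crawl('http://localhost', '/makkoto-api/user');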

Let’s see a quick demo in action.

Auto-documentation for REST APIs in action

It’s a really great workflow: write documentation first, generate tests from it, and then zone in on your tests in detail. This ensures that your code, tests and documentation are always in sync.

I hope someone finds this useful :) For the curious, the testing tool is Behat, and output format used is application/hal+json, using the HAL specification for linking, and link URI templates.

Technical

In order to discuss BDD, as a blogger, I need to talk about Behat

If you’re developing a web application, especially one that uses PHP, you should know about Behat.

Behat introduces itself as “a php framework for testing your business expectations”. And it does exactly that. You write down your business expectations of the application, and it automatically tests whether or not your application achieves them.

You begin every feature description with a three liner following the form:

[sourcecode]
Feature: Foo bar
In order to … (achieve what goal?)
As a … (what target audience?)
I need to … (use what feature?)
[/sourcecode]

This is then split up into individual scenarios of using this feature, all of which are described in natural English following the Gherkin syntax. It then uses Mink, a browser abstraction layer, to run these tests.

I’ve been enjoying Behat for quite some time now, and I’ve noticed certain tests I need to write that come up again and again which aren’t included in the default Mink definitions.

The first is to check whether or not an element is visible. These days, JavaScript-heavy UIs use a lot of hiding and showing, and often this is vital to the business expectations of how the website should work. These sorts of tests need a non-headless browser emulator, such as Sahi. Simply prefix your scenario with the line @mink:sahi, and now we can use the following definition:

[php]
/**
 * @Then /^"([^"]*)" should be visible$/
 */
public function shouldBeVisible($selector)
{
    $element = $this->getSession()->getPage()->find('css', $selector);
    if (empty($element))
        throw new Exception('Element "'.$selector.'" not found');

    $display = $this->getSession()->evaluateScript(
        'jQuery("'.$selector.'").css("display")'
    );

    if ($display === 'none')
        throw new Exception('Element "'.$selector.'" is not visible');
}
[/php]

… so you can now write …

[sourcecode]
Then "div" should be visible
[/sourcecode]

Worth highlighting is the ->evaluateScript() function that is being called. This means that anything you can check with jQuery can be tested – which is pretty much everything.
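In fact, you can generalise this into a catch-all step definition. Here’s a minimal sketch (my own addition, not part of the default Mink definitions):

[php]
/**
 * @Then /^the jQuery expression "([^"]*)" should be true$/
 */
public function theJQueryExpressionShouldBeTrue($expression)
{
    // Whatever the expression evaluates to in the browser comes back to PHP
    $result = $this->getSession()->evaluateScript($expression);

    if (!$result)
        throw new Exception('jQuery expression "'.$expression.'" was not true');
}
[/php]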

Another useful set of steps deals with images. Modern web apps have to handle image uploading quite a bit, and often this comes with resizing or cropping (for avatars, keeping to layout widths, thumbnails). Wouldn’t it be great if you could just write…

[sourcecode]
Given I have an image with width "500" and height "400" in "/tmp/foo.png"
Then the "img" element should display "/tmp/foo.png"
And the "img" element should be "500" by "400" pixels
[/sourcecode]

… and of course, now you can. All this code is included in vtemplate under the FeatureContext file.
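If you’d rather not dig through vtemplate, here’s roughly what two of those definitions could look like – a simplified sketch using GD and jQuery, not the exact vtemplate code:

[php]
/**
 * @Given /^I have an image with width "([^"]*)" and height "([^"]*)" in "([^"]*)"$/
 */
public function iHaveAnImageIn($width, $height, $path)
{
    // Create a blank test image with GD and save it as a PNG
    $image = imagecreatetruecolor((int) $width, (int) $height);
    imagepng($image, $path);
    imagedestroy($image);
}

/**
 * @Then /^the "([^"]*)" element should be "([^"]*)" by "([^"]*)" pixels$/
 */
public function theElementShouldBePixels($selector, $width, $height)
{
    $actual_width = $this->getSession()->evaluateScript(
        'jQuery("'.$selector.'").width()'
    );
    $actual_height = $this->getSession()->evaluateScript(
        'jQuery("'.$selector.'").height()'
    );

    if ($actual_width != $width || $actual_height != $height)
        throw new Exception('Element "'.$selector.'" is '.$actual_width.' by '.$actual_height.' pixels');
}
[/php]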

Happy testing!

Technical

VTemplate: a web project boilerplate which combines various industry standards

You’re about to start setting up the delivery mechanism for a web-based project. What do you do?

First, let’s fetch ourselves a framework. Not just any framework, but one which supports PSR-0 and encourages freedom in our domain code architecture. Kohana fits the bill nicely.

Let’s set up our infrastructure now: add Composer and Phing. After setting them up, let’s configure Composer to pull in PHPSpec2 and Behat along with Mink so we can do BDD. Oh yes, and Swiftmailer too, because what web-app nowadays doesn’t need a mailing library?
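For illustration, the Composer side of that setup might look something like the following – the package names are the ones on Packagist, and the version constraints are placeholders, so check what’s current:

{
    "require": {
        "swiftmailer/swiftmailer": "*"
    },
    "require-dev": {
        "phpspec/phpspec": "*",
        "behat/behat": "*",
        "behat/mink": "*"
    }
}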

Still not yet done, let’s pull in Mustache so that we can do sane frontend development, and merge it in with KOstache. Now we can pull in the latest HTML5 Boilerplate and shift its files to the appropriate template directories.

Finally, let’s set up some basic view autoloading and rendering for rapid frontend development convenience, and various drivers to hook up to our domain logic. As a finishing touch, let’s convert those pesky CSS files into Stylus.

Phew! Wouldn’t it be great if all this was done already for us? Here’s where I introduce vtemplate – a web project boilerplate which combines various industry standards. You can check it out on GitHub.

It’s a little setup I use myself, and it’s project agnostic enough that I can safely use it as a starting point for any of my current projects. It’s fully open-source, built on components battle-tested by hundreds of frontend designers and good PHP developers – so go ahead and check it out!

Technical

PHP CMSes done right: how to enable clients to edit content appropriately

In the previous post, I talked about how CMSes harm websites. I debunked the oft-used selling points of faster, cheaper, and client empowerment over websites, and explained how CMSes butcher semantic markup, code decoupling, tasteful style, speed optimisations, maintenance ease and code freedom.

Now I want to mention a few ways a CMS can be appropriately used to build a website. There are two scenarios I want to cover: using pre-built scripts, and prioritising custom code first.

Pre-built scripts

By pre-built, I mean all you really want is an off-the-shelf setup and don’t care for customisations. So grab an out-of-the-box CMS (Joomla, Drupal, WordPress, etc), install an associated theme and several modules from the CMS’s ecosystem and glue them together. With this sort of set-up, you could have yourself a complex website system such as an e-commerce or blog site running within a day, looking good, and costing zilch if you have the know-how.

In this scenario, a CMS should be your top choice. The benefit of speed and set-up far outweighs the extremely costly alternative of custom coding such a complex system. It is for this reason that thinkMoult runs on WordPress: I just wanted a blog to run on the side with minimal fuss.

As the complexity of the system grows, this benefit grows with it. It would be rare to recommend that the average client build a blog, an e-commerce system, a forum, or even a ticketing system from scratch.

However once you plan on doing lots of customisations, you’re stuck.

Did that really solve anything?

Not yet; we’ve simply outlined a scenario where the cost benefit far outweighs the effort required to invest in a custom product. Unfortunately, all the issues still exist.

So how do we build a CMS for products which don’t fit those requirements – either small, tailored “static poster” sites where first impressions are key, or customised systems?

Sceptics might question why building a CMS now is any different from the CMS giants of the past. My answer is that the PHP ecosystem is maturing and the industry is standardising (see PSR-0, Composer, and the latest PHP changelogs). Previously we relied on CMSes mainly because they defined a set of conventions we could live with, but now we have proper industry-wide ones.

Place custom code first!

VTemplate CMS

The answer is simple. The CMS should not govern your code! Markup-generating, style-generating, logic-modifying systems should at the very least be completely decoupled, if not thrown away entirely. The trick to doing this quickly is to isolate exactly what a CMS needs to do: allow the client to edit content.

That’s right: edit content. Not glue modules, not define routing, not restyle the page, and never, ever, touch anything to do with logic.

If they ever need anything more complex than editing content, make a module for it. Make that custom module on top of your existing code, and link it to a config repository – nothing else. All it should do is flick switches, not act as a heavyweight crane.
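To make that concrete, a hypothetical “switch-flicking” config repository could be as simple as this:

<?php
// config/switches.php – a hypothetical example. The client-facing module
// reads and writes these flags, and nothing else: no routing, no markup,
// no logic.
return array(
    'show_testimonials' => true,
    'newsletter_signup' => false,
    'products_per_page' => 12,
);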

Now, for editing content – I have five strategies to fix the “butchering” aspect of CMSes:

  1. Start by ensuring your frontend code is completely decoupled from all logic. I like to do this by using Mustache as a templating language. It’s simple by design. If your frontend developers can’t break the site’s logic, your client can’t either. (See the sketch after this list.)

  2. Write your markup and styles perfectly. Writing perfect markup and styles means your editor won’t have to care about whether that extra <div id="does_things" class="active wrapper_l12"> was actually vital to the page operating properly. Everything is simple and only uses standard markup tags.

  3. Use a semantic editor. A semantic editor preserves the goodness of point 2. I use WYMEditor, which has been around for a while. Not only does it stick to the basic tags, it reads extra styles from documented CSS. This way you won’t have clients with no taste inventing new styles and layouts; they can only use what you’ve provided.

  4. Beautify the code! PHP Tidy is built-in and can produce well-indented, cleanly styled markup. Don’t have faith in automatic beautifiers? With your perfect markup and complete style/markup separation in points 2 and 3, all your beautifier deals with is the most basic of markup – which probably only needs indenting before it’s called classy code (no pun intended)!

  5. Whitelist editable elements, don’t blacklist. The default state for whether content should be editable should be off. Don’t give them more than they ask for. Because otherwise they will touch it, and inevitably break it. This means you’re custom isolating segments of editable content for the client (I move it into a Mustache partial), and testing it before handing the baton to the client. It also means you can monitor it much more easily – such as inserting an update notifier so that you can run git diff and verify they didn’t somehow still bork things due to Murphy’s Law.
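To illustrate point 1, here’s a minimal sketch of logic-less Mustache templating using the mustache/mustache library – the template and data are made up:

<?php
// The template contains no logic at all, so neither the frontend
// developer nor the client can break the application through it.
require 'vendor/autoload.php';

$mustache = new Mustache_Engine;

$template = '<h1>{{title}}</h1>'
          . '<ul>{{#articles}}<li>{{name}}</li>{{/articles}}</ul>';

echo $mustache->render($template, array(
    'title' => 'Latest articles',
    'articles' => array(
        array('name' => 'Clean code, and how to write it'),
        array('name' => 'Building REST APIs'),
    ),
));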

Et voila! Your client can now edit the content, not break the logic, keep it semantic, keep the code beautiful, and only touch what he wants. He also has a series of switches for the more complex areas of the site. You’re also keeping watch via that update notifier I mentioned (with a monthly fee, of course).

Back-end wise, you’ve lost nothing of the modular ecosystem that CMSes also advertise, because now we’re coding to the PSR-0 standard and can pull in the various packages that people offer.

What did we lose? Upfront development speed. What did we gain? Everything.

Note: the picture of the CMS is from a custom solution I developed for Omni Studios. Underneath it’s powered by Mustache, PHP Tidy, and WYMEditor, and good markup/styles, all mentioned in this post. So by custom, I mean rebranding a series of industry tools.

Technical

Content Management Systems harm websites

Yes, you read that right! Customers looking to build a web application are often wooed by the many ‘benefits’ of using a Content Management System. But before we begin: What is a content management system (abbreviated CMS)?

When a web site is built, complicated code is written to allow it to function. Some of this code builds what we see on the surface on each page. For example: the design of the site, the layout, and its content.

Content management systems harm websites

(Oh dear, we’ll explain that screenshot later!)

Web developers have built systems which now allow clients to edit the content themselves and have instantly updated content without having to go through experienced web developers. These systems are called Content Management Systems, and they supposedly pose these benefits:

  • Site content changes are seen instantly as the client thinks it up
  • Clients feel more ‘in control’ of the site
  • No need to pay web developers to make small and frequent edits

Sounds excellent, right? Cheaper, faster, and you’re in control. Well, unfortunately, it’s not the entire story.

What most clients don’t realise is that editing a website is not like editing a Word document. CMSes create a deceptively similar interface which is easy to use, but causes serious side effects:

  1. The CMS editors don’t know how to cleanly separate content and style. This is the difference between what is being displayed and how it should look. This cruft builds up over time, making your page load slower and making it increasingly hard to make changes in the future.
  2. The CMS editors only allow you to change what things look like on the surface. Although you might not notice the difference, search engines are less likely to be able to understand your pages, and this will negatively affect your search engine rankings.
  3. They don’t discipline against bad media practice. These editors will let you upload any type of media without any considerations of how to optimise them for the web. Unoptimised images and videos mean slower website loading, more server loads (and thus server costs), and often ugly looking content.
  4. They add a lot of unnecessary code. This is another invisible side effect which leads to slower page loads and poorer search rankings.
  5. The editors don’t refer to the underlying engine when linking pages. This means that should you want to rename pages for SEO, or move sites, your links are almost guaranteed to break.
  6. There is no version control. It becomes much harder to track series of changes to a single page and undo edits when problems occur.
  7. It gives you the illusion that you are an interface designer. Experienced interface designers pay attention to details such as padding, ratios, consistency, and usability that clients simply cannot match. A well designed site will slowly degrade in usability and aesthetics until it has to be redone from scratch.
  8. It lets anybody change anything. It doesn’t stop you if you’re changing a vital system page, butchering crafted copy that has undergone hours of SEO, or even editing text you don’t have the authority to touch. It becomes a human single point of failure.
  9. It exposes you to system internals. If you’re a client, all you really want to do is edit some text on your page. Modifying forms and dealing with modules is out of your league, and likely out of your contract scope. You’ll have to learn how to use a complex system just to change what is often just a simple site.
  10. You’re stuck with it. CMSes are walled gardens. They lock you into the system you’ve chosen and when you want something customised in the future, don’t be surprised when you get billed extra.

With the site almost fully in the client’s hands, clients can unknowingly break the system internals or, worse, install third-party low-grade modules which can compromise the site’s security. With the power to edit now fully in the hands of clients, these system changes do not pass through the developers’ eyes. Over time, these accumulate and you end up with a broken site.

It isn’t all cheaper, either – to attempt to prevent some of these effects, developers have to spend extra time developing custom modules for you to manage sections of the site. These costs, of course, have to be passed on to you.

CMSes are also rapidly changing and constantly targeted by hackers. Not only does this mean you’re open to security breaches, but the server will likely be under extra load from hackers and bots attempting to crack your site. You’re then pushed into a maintenance race to constantly update modules and your system – one that quickly gets forgotten, until you’re left with an outdated, unable-to-upgrade system that’s a sitting duck for hackers, even if you’ve never needed to make a single change to your content.

Did you receive training for how to use a CMS to edit your site? Bad news. You’re the only one who knows how, and probably not for long. CMSes change very rapidly – so your training will become outdated. There also isn’t much of a standard when it comes to CMSes, so you’re restricted to development firms who specialise in your CMS should you ever need professional help in the future.

Funnily enough, using a CMS is no picnic for developers, either. CMSes force developers to build things not the way they should be built, but the way the CMS dictates. This may save time in the short-term, but often leads to costly maintenance nightmares in the long-term.

Together, these effects turn the craftsmanship of your site from the costly investment you poured into experienced developers into a cheap, ineffective website. You’re practically throwing away the money you spent going through detailed design revisions, search engine optimisation, training, website optimisation, responsive design, and even choosing the firm you hired to begin with. And given the accumulative nature of these adverse effects, you can be guaranteed that any changes you need in the future will become much, much more costly.

These aren’t one-off improbable horror stories. These are things I have witnessed again and again with CMS-run sites. It is practically guaranteed to happen: the only question is when. The industry knows this, too – it’s just that CMSes are good at the short term and the prospect of self-editing content is alluring as a selling point. But it’s time to spend your money properly: get an expert craftsman to manufacture it right the first time, and keep the quality you paid for.

… coming up next: CMSes done right.

Technical

Separating the core application from the delivery framework with Kohana

This post is about developing web applications that don’t depend on the web.

Kohana MVC - A separated delivery mechanism architecture

MVC is probably the most popular architecture for web applications. But what’s interesting about MVC is that it’s not actually an architecture meant for your core application. It is merely a delivery mechanism.

With this in mind, a well developed application treats the delivery mechanism as a plugin and cleanly separates the core application from the web. It should be possible to remove the internet with all of its methods of interaction (eg: its HTTP Request/Response interface) and still have a working “core” application which you can use elsewhere – say, to make a desktop or mobile application.

In short: your business logic doesn’t rely on the internet to exist.

With Kohana, or really any modern MVC framework which supports the PSR-0 standard, this is surprisingly easy to do. I’d like to share the convention I use.

In Kohana, all logic goes in application/classes/ (or the equivalent in its cascading filesystem). This directory will contain all your delivery logic: Controllers, Views, any Models, and perhaps some Factories to wire up dependency injection for your app.

Your actual core logic is kept in a separate repository to force yourself to remove all dependencies. When combined, I like to store the core logic in application/vendor/. With Git this can be done with a submodule. This way MVC and insert-your-architecture-here are cleanly separated.

You can then add your core application to Kohana’s autoloader (in application/bootstrap.php for convenience):

spl_autoload_register(function($class) {
    Kohana::auto_load($class, 'vendor/Path/To/App/src');
});
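With that in place, a controller becomes a thin adapter between HTTP and the core. Here’s a rough sketch – the core class name is hypothetical, living in application/vendor/ where the autoloader above can find it:

<?php defined('SYSPATH') or die('No direct script access.');
// application/classes/Controller/Report.php – delivery logic only.
// The controller knows about the core; the core knows nothing about HTTP.
class Controller_Report extends Controller {

    public function action_view()
    {
        // Translate the HTTP request into a plain method call on the core...
        $service = new \MyApp\ReportService;
        $report = $service->generate($this->request->param('id'));

        // ...and translate the core's plain answer back into an HTTP response.
        $this->response->body(json_encode($report));
    }
}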

And that’s it! With a little discipline we suddenly get a massive benefit of future flexibility.

If you are interested in a project which uses this separation, please see my WIPUP project. (Disclaimer: WIPUP is a side-project and is still in progress). More reading by Bob Martin here.

Technical

A DCI architecture implementation in PHP

For those unfamiliar with DCI, it stands for Data, Context and Interactions. It’s a way to fill the gap in OOP between what an object is and what an object does. It also gives use-case enactment first-class status to improve the readability of the code. It was proposed by Trygve Reenskaug (the man behind MVC) and James O. Coplien.

Although DCI implementations have been done in other languages, it’s a bit lacking in PHP. I am only aware of two other implementations. The first is phpcore-dci, by Joe Chrzanowski. Although it hits first on Google, I believe its implementation is a little backwards and far too restrictive. For example, it injects roles (and their interactions) into data objects rather than the other way around, i.e. casting data objects into roles. It also requires a rather silly convention to follow which may not fit your style.

The second is by Jeremy Bush (lead developer of Kohana), as part of his Auto-Modeler-Demo project, which demonstrates quite a few technologies and practices. It’s definitely very good, and in fact has inspired this implementation, but I was not convinced by the casting technique used (via lambda functions).

Without further ado, here’s the implementation:

<?php
class Validation_Exception extends Exception {
    public $errors = array();
    public function __construct($errors) {
        parent::__construct('Multiple exceptions thrown.');
        $this->errors = $errors;
    }
    public function as_array() {
        return $this->errors;
    }
}

/**
 * A dumb data object for a person.
 */
class Person {
    public $name;
    public function __construct(Array $properties) {
        foreach ($properties as $property_name => $property_value) {
            $this->{'set_'. $property_name}($property_value);
        }
    }
    public function get_name() {
        return $this->name;
    }
    public function set_name($n) {
        $this->name = $n;
    }
}

/**
 * Interfaces allows us to specify what data objects can play this role.
 */
interface Actor_Requirements {
    public function get_name();
    public function set_name($n);
}

/**
 * The class that casts the data object as the role
 */
abstract class Cast_Actor extends Person implements Actor_Requirements {
    use Cast_Interactions;

    public function __construct(Person $p) {
        parent::__construct(get_object_vars($p));
    }
}


/**
 * What the role is able to do
 */
trait Cast_Interactions {
    public function link($roles) {
        foreach ($roles as $role_name => $role_instance) {
            $this->$role_name = $role_instance;
        }
    }
}

trait Romeo_Interactions {
    public function call_juliet() {
        echo $this->get_name(), ': Hey Juliet!', "\n";
        $this->juliet->reject_romeo();
    }

    public function leave() {
        echo $this->get_name(), ': Fine then. Goodbye.', "\n";
        //throw new Exception('The play ended unexpectedly.');
    }
}

trait Juliet_Interactions {
    public function reject_romeo() {
        echo $this->get_name(), ': Not now, sorry.', "\n";
        // Not really anything to do for validation, but just for demonstration
        //throw new Validation_Exception(array('Juliet isn\'t following her script.', 'Juliet rejected Romeo.'));
        $this->romeo->leave();
    }
}

/**
 * Inject role interactions into the casting to make our final roleplayer.
 * Separating the Cast_Foo object and the final roleplaying object allow for 
 * reusing generic casts.
 */
class Romeo extends Cast_Actor {
    use Romeo_Interactions;
}

class Juliet extends Cast_Actor {
    use Juliet_Interactions;
}

/*
// An example of how using traits can be useful
class Director extends Cast_Director {
    use Director_Interactions;
    use Romeo_Interactions;
    use Juliet_Interactions;
}
 */

/**
 * Use case: enact Romeo & Juliet
 */
class Context {
    private $romeo;
    private $juliet;

    public function __construct(Person $p1, Person $p2) {
        // Cast objects into roles
        $this->romeo = new Romeo($p1);
        $this->juliet = new Juliet($p2);

        // Defines connections between roles.
        $this->romeo->link(array(
            'juliet' => $this->juliet
        ));
        $this->juliet->link(array(
            'romeo' => $this->romeo
        ));
    }

    public function execute() {
        try {
            $this->romeo->call_juliet();
        } catch (Validation_Exception $e) {
            $errors['validation'] = $e->as_array();
        } catch (Exception $e) {
            $errors['misc'] = $e->getMessage();
        }

        if (isset($errors)) {
            return array(
                'status' => 'failure',
                'errors' => $errors
            );
        } else {
            return array('status' => 'success');
        }
    }
}

$person1 = new Person(array('name' => 'Romeo'));
$person2 = new Person(array('name' => 'Juliet'));

$context = new Context($person1, $person2);
$result = $context->execute();
print_r($result);

Feel free to refactor this for your own architecture – this setup most definitely should not all be in one single file but should be split up as appropriate for autoloading, semantics or organisation.

I hope somebody finds this useful. It’s licensed under the IANAL license.

Life & much, much more

Presenting the Nagger

Over Christmas, one of my more humorous gifts to my parents was to allow them to remotely nag each other electronically. Since my dad is often overseas, this actually has some practical use.

The idea was to create a remotely synchronised dynamic wallpaper with text that could be set by another person. Person A would type in some text, a wallpaper with the text formatted would be generated, and Person B’s computer would detect that there was an update, download the wallpaper and set it immediately. (I originally wanted to make a pop-up message, but realised that having "Go and exercise!" pop up during a PowerPoint presentation with your boss wasn’t the best thing.)

The system would operate as such: I would create an HTML form on my webserver to allow somebody to type in text. PHP would take the text and use GD to generate a .jpg image with the text overlaid on top. A batch file on the Windows computer would download the .jpg (either on startup, or via cronw) using URL2FILE. The batch file would then call ImageMagick, installed on the Windows computer, to convert the .jpg to a .bmp, because apparently that’s what Windows likes for wallpaper formats, and converting on the server would mean an ultra big file download. Finally, the batch file would tweak the registry to change the wallpaper and "refresh" it so that it changes immediately.

Here’s an example :)

PHP code:

<?php
if (isset($_POST['submit']) && isset($_POST['nag']) && !empty($_POST['nag'])) {
    $width = 1280;
    $height = 800;
    $imgname = "wallpaper_blank.jpg"; # The empty blue background template
    $im = imagecreatefromjpeg($imgname);
    $text = $_POST['nag'];
    $textcolor = ImageColorAllocate($im, 255, 255, 255);
    $font = 20;
    $font_width = ImageFontWidth($font);
    $font_height = ImageFontHeight($font);
    $font_width = 10; # Override with a rough fixed estimate for the TTF font
    $text_width = $font_width * strlen($text);
    // Position to align in center
    $position_center = ceil(($width - $text_width) / 2);
    $text_height = $font_height;
    // Position to align in abs middle
    $position_middle = ceil(($height - $text_height) / 2);
    imagettftext($im, 15, 0, $position_center, $position_middle, $textcolor,
        '/path/to/ttf/fontfile/AllOverAgainAllCaps.ttf', $text); # We're offsetting this a little to give space for desktop icons
    Imagejpeg($im, '/path/to/final/image/wallpaper.jpg', 100);
    chmod('/path/to/final/image/wallpaper.jpg', 0644); # Ensure we can download it (depending on server setup)
    echo 'Nag done!';
} else {
    echo '<form action="" method="post">';
    echo '<textarea name="nag" rows="10" cols="50"></textarea><br />';
    echo '<input type="submit" name="submit" value="Nag!">';
    echo '</form>';
}

Batchfile code:

C:\path\to\URL2FILE.EXE http://mysite.com/wallpaper.jpg C:\path\to\save\wallpaper.jpg
C:\path\to\imagemagick\convert.exe C:\path\to\save\wallpaper.jpg C:\path\to\save\wallpaper.bmp
REG ADD "HKCU\Control Panel\Desktop" /V Wallpaper /T REG_SZ /F /D "C:\path\to\save\wallpaper.bmp"
REG ADD "HKCU\Control Panel\Desktop" /V WallpaperStyle /T REG_SZ /F /D 2
REG ADD "HKCU\Control Panel\Desktop" /V TileWallpaper /T REG_SZ /F /D 0
%SystemRoot%\System32\RUNDLL32.EXE user32.dll, UpdatePerUserSystemParameters

I thought it was cute; my parents loved it.

P.S. If anybody knows a sane way to input code into WordPress/Blogilo and have it immediately embedded in <code> tags as well as not lose whitespace, give me a poke.

Uncategorized

WIPUP 27.06.10a released!

It’s super, it’s amazing, and it’s released. It’s WIPUP 27.06.10a. For the uninitiated, WIPUP is a flexible and easy way for people to share, critique, and track works-in-progress.

To quote some random person, this release truly brings out the “hey, it’s like a working site now”. This release sports super fancy upgrades courtesy of my schedule, which is now free from exams and school. Check out the WIPUP website now, and read the release notes.

Of course it’s also open-source, so not only do we welcome new users, but developers too! This is hopefully the last “alpha” release, so feel free to join.

Uncategorized

Plans for E2-Productions.com to turn into a personal cloud?

Alert! Alert! Buzzword! Yes, before we start, let’s clear up what I mean when I say "personal cloud". A personal cloud is a web-accessible system which centralises the functions of common web 2.0 services, which may or may not be social. For those that aren’t familiar with this jargon, web 2.0 services are those such as web email clients like GMail, photo sharing and management sites like Flickr, online radios like Last.FM, and even blogs just like this one. So your personal cloud is a system on your very own website, with a web interface for your very own emails, PIM (calendar, notes, todos), images, music, etc. Note that the social attribute is optional. Clouds do not necessarily have automatic synchronisation, nor do they have to have the ability to easily share your data with the public.

A little history first. E2-Productions.com used to be the center of attention – thriving with the latest adolescent community fads such as animation, art and music "portals". It later saw the rise of the Blender Model Repository, a personal portfolio, several forums (trendy, weren’t they?), and finally ended with the death of the original animation portal. thinkMoult then emerged somewhere on the Blogspot blogosphere and went on to become a moderately-hacked install of WordPress on the E2 server. E2-Productions had become, and still is now, a dead site.

I’ve been toying with the idea of turning E2-Productions into a personal cloud for quite some time, and at one point it actually happened. Even though I’m a PHP developer myself, it would have taken too long to create my very own cloud, so instead I implemented existing free scripts. In the end I had created a network of individual PHP scripts giving me a web-based RSS reader, filebin, imagebin, and a proxy. With the helpful addition of several rsync scripts and sshfs, it was usable and offered all the functionality. However, of course, it was very hacked together and didn’t offer the sort of integration I wanted.

Today I decided to look into other cloud services. One currently in development is ownCloud by Frank Karlitschek, the guy responsible for the openDesktop websites. Unfortunately the results were disappointing. Taking into account that it’s still under construction (and therefore incomplete and buggy), like the openDesktop websites themselves, ownCloud is unfortunately yet another developer service that underestimates usability. Whilst it had some nifty features in the works, its priorities were skewed away from the rightful mentality that design maketh a website, not function alone. Further probing into the code revealed some serious problems with the structure of the coding that didn’t look very well thought out. Needless to say, ownCloud is not for me.

The other famous personal cloud is the open-source EyeOS. This one goes all the way, completely replicating the desktop interface in the browser. Most of the design of EyeOS 1.x, their stable version, treats the system as a desktop interface, but the canvas of a webpage is not suited to a desktop interface. The two have different strengths and weaknesses, and unfortunately most of these uncanny designs don’t play to the web browser’s strengths. Despite the overblown interface (which excusably is amended quite a bit in their unstable 2.0 version), it’s quite featureful and its extensibility in terms of developing it yourself is quite attractive. However, the lag which accompanies such client-side interactive bloat (and server-side too!) doesn’t exactly make it the most practical of choices. It’s definitely worth keeping an eye on, though (excuse the pun).

Further searching yielded an exemplary system called Tonido. Unfortunately, due to its proprietary nature, it’s not for me either. However, it does provide a fine example of the potential of a well-executed cloud service. This motivated me to reconsider creating a cloud. I began by considering the basic user objectives for the web interface:

  • Ability to dump a file online with a unique private URL, and easily share it via public URL (or a single obfuscated URL).
  • Private browsing of my files with support for subdirectories.
  • Browsing of images will be presented in user-friendly thumbnail form.
  • Browsing of PIM data (vcard, ical) will be parsed and displayed in an appropriate format.
  • Web-based uploader and/or form of synchronisation technique.

You see, the "cloud" I had described (in terms of its most basic user-side functionality) is pretty much just a smart web-based file browser with mini-webapp additions for (mainly) PIM data. Our latest newcomer to the fad, Ubuntu One, actually provides these needs very well – leaving the browser to do what it does best, and the rest to the desktop. However it falls short in a vital area: proprietariness. This isn’t a question of evangelism; it’s one of the simple requirements of a personal cloud. If we analyse the 5 basic user-side needs to their roots, we get a shorter list of what actually makes a personal cloud, personal:

  • Fine control of private, limited, and public files.
  • Convenience – data should be searchable by tags, and there should be no limit on the filestructure or methods of access.
  • Timelessness – data should not be locked into any vendor.

The first issue is tackled quite well in existing cloud providers, with probably the best implementation (in my opinion) being Ubuntu One. Convenience is another easily satisfied need, with wonderful tools like rsync, sshfs, and version control software (though again, most providers lock you into their own system, and convenience ends once you leave it). However the key feature that in my eyes hasn’t yet been solved by any provider is timelessness. Any proprietary client or syncing software is instantly disqualified due to dependency on the vendor. Now, even if the software was completely open and extensible, many so called personal clouds are simply connecting to existing external services. Whilst consistent with the definition of the personal cloud, the service centralises in what is hoped to be seen as the path of least resistance – that is, only for people actually already using those services. What services am I talking about? Oh, things like Flickr, Google, Facebook and Twitter. Social is a hot topic, but not for everybody. Services don’t even have to be online – dependency on, for example Tomboy Notes or Evolution in Ubuntu One qualifies as a dependency and thus means the service is not timeless.

So how is this overcome? Not easily, for sure. I want to propose that the ideal personal cloud be one that focuses simply on file management and synchronisation. It should stop there – the actual display of files and searching of files should be handled by plugins. Plugins can decide how to properly format an image gallery. Plugins can decide how to display PIM-data. These plugins should be accomodating for the most timeless format ever, plaintext, as well as industry standard formats. The user then only applies the plugins that fits their exact workflow, if necessary writing their own for interpreting their own files. This satisfies the 3 criteria of a personal cloud.
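To make the plugin idea concrete, here’s a purely hypothetical sketch of what such a plugin contract could look like in PHP:

<?php
// Hypothetical sketch: the core only manages and synchronises files;
// plugins decide how a given file is displayed in the web interface.
interface Cloud_Plugin {
    // Can this plugin display the given file?
    public function supports($path);
    // Return the HTML representing the file.
    public function render($path);
}

class Image_Gallery_Plugin implements Cloud_Plugin {
    public function supports($path) {
        $ext = strtolower(pathinfo($path, PATHINFO_EXTENSION));
        return in_array($ext, array('jpg', 'jpeg', 'png', 'gif'));
    }
    public function render($path) {
        return '<img src="thumbnail.php?file='.urlencode($path).'" />';
    }
}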

I’m still coming up with a few plans and extra ideas on how it’ll be – but meanwhile I’ve got exams to get over. If anybody knows something similar that exists, let me know, otherwise this’ll be a fun project after WIPUP reaches stable.

Uncategorized

Make a category not considered as a post in WordPress

In other words, how do you make posts that are in a certain category not count towards total page post count in WordPress?

A while back I set up Asides on this blog. The problem was that previously I was displaying 5 posts per page. Now with asides it still displayed 5 posts per page, but as asides are probably one sentence long at most I personally don’t consider them to be blog posts. This meant that it didn’t display 5 “real” posts per page. So, how do I fix this?

Disclaimer: I’m not experienced in the under-the-hood of WordPress and as a result some of this solution might be hackery. However it works for me, and that’s what counts.

Problem 1: displaying 5 real posts per page regardless of how many asides there are.

WordPress loops through a series of posts per page and displays them one by one. Initially I thought it would increment a counter, which I could easily change so that if the post was in the “asides” category, it would not increment the counter. However, there were a couple of flaws: 1) there was no counter, or I completely missed it, 2) the database query sets the LIMITs from the administrator settings right at the very beginning, and 3) pagination would be completely messed up.

The solution was pretty simple. Firstly, we set the database query LIMIT to an obscenely large amount – more posts than we ever think we’ll need on a page. This can be done in the administrator panel: change “display posts per page” to a large value. I chose 15 because it seems pretty realistic that real posts + aside posts < 15 for 99.9% of the time.

The second step is to manually change the criteria for when the loop terminates. This way it will not actually show 15 posts, but instead up to 15 posts. What we’ll do is create a new counter, where every time we display a post that isn’t an “aside”, we increment the counter, until it hits 5 posts (since I want 5 real posts per page) – at which point it’ll terminate the loop.

This can be done in the wp-includes/query.php file. To begin with we’ll need a new variable in the class for our counter. So below class WP_Query { we should add:

var $counting_up = 0;

Just to make sure that $counting_up resets itself as it should, we’ll add this to the init() function:

$this->counting_up = 0;

Now the next step is to modify the the_post() function. When the loop has started and the category is not an aside, we’ll increment our counter. In this example my aside category ID is 429. This will be different for you, so change it accordingly. Simply add this to the the_post() function:

if ( !in_category(429) && $this->current_post != -1 ) {
    $this->counting_up++;
}

Now that we’ve got our counter, we’ll set up the loop to terminate correctly. This can be done in the have_posts() function. Notice this is the have_posts() function inside the WP_Query class, not the one outside it. We can modify the if statement to terminate when our counter hits 4 posts (as the first isn’t counted – therefore in effect we’ll display 5 real posts), but only when a do_not_terminate variable hasn’t been set on the WP_Query. This do_not_terminate variable is important in case we ever need to override this behaviour, as I’ll explain later when we look at pagination issues. Here is my completed modified if statement:

if ($this->counting_up == 4 && !$this->query_vars['do_not_terminate']) {
    $this->in_the_loop = false;
    do_action_ref_array('loop_end', array(&$this));
    $this->rewind_posts();
    return false;
} elseif ($this->current_post + 1 < $this->post_count) {
    return true;
} elseif ($this->current_post + 1 == $this->post_count && $this->post_count > 0) {
    do_action_ref_array('loop_end', array(&$this));
    $this->rewind_posts();
}

Now we’ve solved problem 1, and 5 real posts are displaying on our front page, let’s move on to problem 2.

Problem 2: previous page, or going to older posts will no longer work.

Since pages are pretty much obsolete at this point, we’ll switch to using offsets. This is because each page will no longer display a fixed number of posts; each may display a variable number of posts, the minimum being 5 (that we set just now) and the maximum being 15 (that we set at the very beginning). So to start, we’ll hop over to the index.php in our theme, grab the offset from the URL, and pass it through to our post loop. Here goes:

<?php if (isset($_GET['offset']) && is_numeric($_GET['offset'])) {
    query_posts('offset='. $_GET['offset']);
    $offsetting = $_GET['offset'];
} else { $offsetting = 0; } ?>

So with that code in index.php, we can now visit myblogsite.com/?offset=20 and offset our posts by 20. To determine how many posts to offset by for previous pages, we simply take how much we’re currently offset by and add all the posts we’ve displayed on the page, regardless of whether each is an aside or a real post. To do this we need another counter, so we’ll initialise it near the beginning of index.php:

<?php $on_page = 0; ?>

… then within our while (have_posts()) { loop, (or whatever equivalent loop your theme uses), we’ll just increment it:

<?php $on_page++; ?>

So then we recode our “previous posts” link to go to:

<a href="http://yourblog.com/?offset=<?php echo $offsetting + $on_page; ?>" >Previous posts</a>

That was simple, eh? This brings us to problem 3.

Problem 3: newer posts don’t work, for obvious reasons.

Going forward in time is a little more complex: we want to calculate how much less we should offset by. To do this we’ll create a function. The function needs to know how much we’re currently offset by. Based on that, it’ll query 15 posts into the future, then loop through those posts in reverse order. If it can’t go 15 posts into the future (eg: on the first page, and perhaps the second), it’ll go as far into the future as it can. When looping through, it’ll record the category of each of the posts. Whenever it hits a post, it’ll increment the count we want to offset less by. Whenever it hits a post whose category isn’t an aside (category 429 in my example), it’ll increment a counter that tracks how many real posts we’ve hit so far. So we have two counters. When the real-post counter hits 6, we stop incrementing the offset counter. This is because I want 5 real posts per page, and based on how we coded problem 1, we know that the last post of any page must be a real post, not an aside.

We can place this function in the functions.php file of our theme. Here is the function, which lazy people can copy and paste:

function back_to_the_future($offset = 0) {
    $new_offset = $offset - 15;
    if ($new_offset < 0) {
        $new_offset = 0;
    }
    $diff_offset = $offset - $new_offset;
    $future_query = new WP_Query(array(
        'showposts' => $diff_offset,
        'order' => 'DESC',
        'offset' => $new_offset,
        'do_not_terminate' => TRUE
    ));

    $post_data = array();

    while ($future_query->have_posts()) {
        if ($diff_offset == 0) {
            break;
        } else {
            $diff_offset--;
        }
        $future_query->the_post();
        $cat_id = get_the_category();
        $cat_id = $cat_id[0]->cat_ID;
        $post_data[] = $cat_id;
    }

    $post_data = array_reverse($post_data);
    $count_posts = 0;
    $count_total = 0;

    foreach ($post_data as $post_cat) {
        if ($post_cat != 429) {
            $count_posts++;
        }
        if ($count_posts == 6) {
            break;
        } else {
            $count_total++;
        }
    }

    return $offset - $count_total;
}

People who looked through the code will realise that we passed the do_not_terminate variable to WP_Query that we set up when addressing Problem 1. This is required because if we didn’t, we wouldn’t get 15 posts into the future; instead we’d just get however many posts, starting from 15 posts into the future, include 5 real posts – which is totally useless.

To finish off nicely, we’ll edit our “newer posts” link in index.php to use this calculated offset, displaying it only when we have a proper offset to show and we’re not on the first page:

<?php $future_offset = back_to_the_future($offsetting); // calculate how much less to offset by ?>
<?php if ($future_offset != 0 || !empty($offsetting)) { ?>
<a href="http://yourblog.com/?offset=<?php echo $future_offset; ?>">Newer Posts</a>
<?php } ?>

Tada – all done! I hope that helped somebody out there, but if not at least I have it for archival purposes. If you’re using it, let me know how it goes!

Uncategorized

FrogCMS: a simple, clean CMS.

The other day I was looking for a CMS to run E2-Productions on. People who exclaim “what? You’re not going to engineer your own solution?” clearly have their priorities in the wrong places. If there’s one thing I learned in programming, it’s never, ever to reinvent the wheel. (unless you don’t know how wheels work yet)

I wanted to make E2 easy to maintain, easy to update, and most of all easy to switch between different ideas of what the site should contain. A CMS was a clear solution to this problem. E2 is the site I plan to use to showcase my portfolio in greater detail, WIPUP is free to host my works-in-progress, and thinkMoult is and always should be just a blog.

Being such a simple site with mainly static pages (perhaps a contact form), it was clear that the big poisons like Joomla, Mambo, Drupal etc. were out of the picture. WordPress is a CMS, but it’s a pain when you look at the source, and it’s mainly targeted at blogs. Some poking around led me to discover FrogCMS.

I downloaded the tarball, and had it up and running on my localhost within seconds. A quick tour around the administrator backend was enough to tell me it had all my needs covered. It supports markdown, templating, page hierarchies, and I think I glimpsed some out-of-the-box SEO. Browsing their website also unearthed some useful plugins, although I think I will code for my needs manually.

The next step is to come up with a design. However, that’s not stopping me from using WIPUP to start tracking the upcoming upgrades on E2-Productions!