Technical

VTemplate: a web project boilerplate which combines various industry standards

You’re about to start setting up the delivery mechanism for a web-based project. What do you do?

First, let’s fetch ourselves a framework. Not just any framework, but one which supports PSR-0 and encourages freedom in our domain code architecture. Kohana fits the bill nicely.

Let’s set up our infrastructure now: add Composer and Phing. After setting them up, let’s configure Composer to pull in PHPSpec2 and Behat along with Mink so we can do BDD. Oh yes, and Swiftmailer too, because what web-app nowadays doesn’t need a mailing library?
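In composer.json terms, that dependency pull looks something like this (the package names and version constraints here are illustrative – check Packagist for the current ones):

{
    "require": {
        "swiftmailer/swiftmailer": "*"
    },
    "require-dev": {
        "phpspec/phpspec2": "*",
        "behat/behat": "*",
        "behat/mink": "*"
    }
}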

Still not done yet: let’s pull in Mustache for sane frontend development, and merge it in with KOstache. Now we can grab the latest HTML5 Boilerplate and move its files into the appropriate template directories.

Finally, let’s set up some basic view auto loading and rendering for rapid frontend development convenience, and various drivers to hook up to our domain logic. As a finishing touch let’s convert those pesky CSS files into Stylus.

Phew! Wouldn’t it be great if all this was done already for us? Here’s where I introduce vtemplate – a web project boilerplate which combines various industry standards. You can check it out on GitHub.

It’s a little setup I use myself, and it’s project-agnostic enough that I can safely use it as a starting point for any of my current projects. It’s fully open source, built on tools maintained by hundreds of frontend designers and by good PHP developers – so go ahead and check it out!

Technical

PHP CMSes done right: how to enable clients to edit content appropriately

In the previous post, I talked about how CMSes harm websites. I debunked the oft-used selling points – faster, cheaper, and client empowerment – and explained how CMSes butcher semantic markup, code decoupling, tasteful style, speed optimisations, ease of maintenance and code freedom.

Now I want to mention a few ways a CMS can be used appropriately to build a website. There are two scenarios I want to cover: using pre-built scripts, and prioritising custom code first.

Pre-built scripts

By pre-built, I mean all you really want is an off-the-shelf setup and don’t care for customisations. So grab an out-of-the-box CMS (Joomla, Drupal, WordPress, etc), install an associated theme and several modules from the CMS’s ecosystem and glue them together. With this sort of set-up, you could have yourself a complex website system such as an e-commerce or blog site running within a day, looking good, and costing zilch if you have the know-how.

In this scenario, a CMS should be your top choice. The speed and ease of setup far outweigh the extremely costly alternative of custom coding such a complex system. It is for this reason that thinkMoult runs on WordPress: I just wanted a blog to run on the side with minimal fuss.

As the complexity of the system grows, this benefit grows with it. It would be rare to recommend that the average client build a blog, an e-commerce system, a forum, or even a ticketing system from scratch.

However once you plan on doing lots of customisations, you’re stuck.

Did that really solve anything?

Not yet – we’ve simply outlined a scenario where the cost benefit far outweighs the effort required to invest in a custom product. Unfortunately, all the issues still exist.

So how do we build a CMS for products which don’t fit those requirements – either small, tailored "static poster" sites where first impressions are key, or customised systems?

Sceptics might question why building a CMS now is any different from the CMS giants of the past. My answer is that the PHP ecosystem is maturing and the industry is standardising (see PSR-0, Composer, and the latest PHP changelogs). Previously we relied on CMSes largely because they defined a set of conventions we could live with; now we have proper conventions industry-wide.

Place custom code first!

VTemplate CMS

The answer is simple: the CMS should not govern your code! Markup-generating, logic-modifying systems should be completely decoupled at the very least, if not thrown away entirely. The trick to doing this quickly is to isolate exactly what a CMS needs to do – and that is to allow the client to edit content.

That’s right: edit content. Not glue modules, not define routing, not restyle the page, and never, ever touch anything to do with logic.

If they ever need anything more complex than editing content, make a module for it. Build that custom module on top of your existing code, and link it to a config repository – nothing else. All it should do is flick switches, not act as a heavyweight crane.
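To make the "flick switches" idea concrete, here’s a minimal sketch of such a module – all class, method and key names here are hypothetical:

<?php
// A hypothetical "switch" module: it reads and writes flags in a
// config repository, and the domain code decides what to do with them.
interface Config_Repository {
    public function get($key);
    public function set($key, $value);
}

class Module_News_Settings {
    private $config;

    public function __construct(Config_Repository $config) {
        $this->config = $config;
    }

    // The client flicks a switch...
    public function enable_comments($enabled) {
        $this->config->set('news.comments_enabled', (bool) $enabled);
    }

    // ...and the domain logic merely reads it. No logic gets edited.
    public function comments_enabled() {
        return (bool) $this->config->get('news.comments_enabled');
    }
}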

Now, for editing content – I have five strategies to fix the “butchering” aspect of CMSes:

  1. Start by ensuring your frontend code is completely decoupled from all logic. I like to do this by using Mustache as a templating language. It’s simple by design. If your frontend developers can’t break the site’s logic, your client can’t either.

  2. Write your markup and styles perfectly. Writing perfect markup and styles means your editor won’t have to care about whether that extra <div id="does_things" class="active wrapper_l12"> was actually vital to the page operating properly. Everything is simple and only uses standard markup tags.

  3. Use a semantic editor. A semantic editor preserves the goodness of point 2. I use WYMEditor, which has been around for a while. Not only does it stick to the basic tags, it reads extra styles from documented CSS. This way you won’t have tasteless clients inventing new styles and layouts; they’ll only use what you’ve provided.

  4. Beautify the code! PHP Tidy is built in and can produce well-indented, cleanly styled markup (see the sketch after this list). Don’t have faith in automatic beautifiers? With your perfect markup and complete style/markup separation from points 2 and 3, all your beautifier deals with is the most basic of markup – which probably only needs indenting before it’s called classy code (no pun intended)!

  5. Whitelist editable elements, don’t blacklist. The default state for content should be non-editable. Don’t give them more than they ask for, because otherwise they will touch it, and inevitably break it. This means custom-isolating segments of editable content for the client (I move each into a Mustache partial), and testing it before handing the baton to the client. It also means you can monitor things much more easily – such as inserting an update notifier so that you can run git diff and verify they didn’t somehow still bork things thanks to Murphy’s Law.
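To make points 1 and 4 concrete, here’s a minimal sketch using the mustache/mustache library and PHP’s built-in tidy extension (the template, context values and tidy options are illustrative):

<?php
// Point 1: render a logic-less Mustache template. The client-editable
// content lives in the template; no application logic is reachable here.
$mustache = new Mustache_Engine();
$html = $mustache->render(
    '<article><h1>{{title}}</h1>{{{body}}}</article>',
    array('title' => 'About us', 'body' => '<p>Client-edited content.</p>')
);

// Point 4: beautify the rendered markup with PHP Tidy before serving it.
$tidy = new tidy();
$tidy->parseString($html, array(
    'indent'         => true,
    'show-body-only' => true,
), 'utf8');
$tidy->cleanRepair();
echo $tidy;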

Et voila! Your client can now edit the content, not break the logic, keep it semantic, keep the code beautiful, and only touch what he wants. He also has a series of switches for the more complex areas of the site. You’re also keeping watch via that update notifier I mentioned (with a monthly fee, of course).

Back-end wise, you’ve lost nothing of the modular ecosystem that CMSes also advertise, because now we’re coding to the PSR-0 standard and can pull in the various packages that people offer.

What did we lose? Upfront development speed. What did we gain? Everything.

Note: the picture of the CMS is from a custom solution I developed for Omni Studios. Underneath, it’s powered by Mustache, PHP Tidy, WYMEditor, and good markup/styles – all mentioned in this post. So by custom, I mean rebranding a series of industry tools.

Technical

Content Management Systems harm websites

Yes, you read that right! Customers looking to build a web application are often wooed by the many ‘benefits’ of using a Content Management System. But before we begin: What is a content management system (abbreviated CMS)?

When a web site is built, complicated code is written to allow it to function. Some of this code builds what we see on the surface on each page. For example: the design of the site, the layout, and its content.

Content management systems harm websites

(Oh dear, we’ll explain that screenshot later!)

Web developers have built systems which now allow clients to edit content themselves and see updates instantly, without having to go through experienced web developers. These systems are called Content Management Systems, and they supposedly offer these benefits:

  • Site content changes are seen instantly, as soon as the client thinks them up
  • Clients feel more ‘in control’ of the site
  • No need to pay web developers to make small and frequent edits

Sounds excellent, right? Cheaper, faster, and you’re in control. Well, unfortunately, it’s not the entire story.

What most clients don’t realise is that editing a website is not like editing a Word document. CMSes provide an interface that looks deceptively similar, and while it’s easy to use, it causes serious side effects:

  1. The CMS editors don’t know how to cleanly separate content and style – the difference between what is being displayed and how it should look. This cruft builds up over time, making your page load slower and making it increasingly hard to make changes in the future.
  2. The CMS editors only allow you to change what things look like on the surface. Although you might not notice the difference, search engines are less likely to be able to understand your pages, and this will negatively affect your search engine rankings.
  3. They don’t discipline against bad media practice. These editors will let you upload any type of media without any consideration of how to optimise it for the web. Unoptimised images and videos mean slower website loading, more server load (and thus server costs), and often ugly-looking content.
  4. They add a lot of unnecessary code. This is another invisible side effect which leads to slower page loads and poorer search rankings.
  5. The editors don’t refer to the underlying engine when linking pages. This means that should you want to rename pages for SEO, or move sites, your links are almost guaranteed to break.
  6. There is no version control. It becomes much harder to track series of changes to a single page and undo edits when problems occur.
  7. It gives you the illusion that you are an interface designer. Experienced interface designers pay attention to details such as padding, ratios, consistency, and usability that clients simply cannot match. A well designed site will slowly degrade in usability and aesthetics until it has to be redone from scratch.
  8. It lets anybody change anything. It doesn’t stop you from changing a vital system page, butchering crafted copy that has undergone hours of SEO, or editing text you don’t have the authority to touch. It becomes a human single point of failure.
  9. It exposes you to system internals. If you’re a client, all you really want to do is edit some text on your page. Modifying forms and dealing with modules is out of your league, and likely out of your contract scope. You’ll have to learn how to use a complex system just to change what is often just a simple site.
  10. You’re stuck with it. CMSes are walled gardens. They lock you into the system you’ve chosen and when you want something customised in the future, don’t be surprised when you get billed extra.

With the site almost fully in the client’s hands, clients can unknowingly break the system internals or, worse, install third-party low-grade modules which can compromise the site’s security. With the power to edit now fully in the hands of clients, these system changes no longer pass through the developer’s eyes. Over time the damage accumulates, and you end up with a broken site.

It isn’t all cheaper, either: to prevent some of these effects, developers have to spend extra time developing custom modules for you to manage sections of the site. These costs, of course, have to be passed on to you.

CMSes are also rapidly changing and constantly targeted by hackers. Not only does this mean you’re open to security breaches; the server will likely be under extra load from hackers and bots attempting to crack your site. You’re then pushed into a maintenance race – constantly updating modules and the system itself – that quickly gets forgotten, until you’re left with an outdated, unable-to-upgrade system that’s a sitting duck for hackers, even if you’ve never needed to make a single change to your content.

Did you receive training for how to use a CMS to edit your site? Bad news. You’re the only one who knows how, and probably not for long. CMSes change very rapidly – so your training will become outdated. There also isn’t much of a standard when it comes to CMSes, so you’re restricted to development firms who specialise in your CMS should you ever need professional help in the future.

Funnily enough, using a CMS is no picnic for developers, either. All CMSes cause developers to build things not the way they should be built, but the way the CMS forces them to. This may save time in the short term, but often leads to costly maintenance nightmares in the long term.

Taken together, using a CMS turns the craftsmanship of your site – the costly investment you poured into experienced developers – into a cheap, ineffective website. You’re practically throwing away the money you spent on detailed design revisions, search engine optimisation, training, website optimisation, responsive design, and even on choosing the firm you hired to begin with. And given the cumulative nature of these adverse effects, you can be guaranteed that any changes you need in the future will become much, much more costly.

These aren’t one-off improbable horror stories. These are things I have witnessed again and again with CMS-run sites. It is practically guaranteed to happen: the only question is when. The industry knows this, too – it’s just that CMSes are good at the short term and the prospect of self-editing content is alluring as a selling point. But it’s time to spend your money properly: get an expert craftsman to manufacture it right the first time, and keep the quality you paid for.

… coming up next: CMSes done right.

Technical

Separating the core application from the delivery framework with Kohana

This post is about developing web applications that don’t depend on the web.

Kohana MVC - A separated delivery mechanism architecture

MVC is probably the most popular architecture for web applications. But what’s interesting about MVC is that it’s not actually an architecture meant for your core application. It is merely a delivery mechanism.

With this in mind, a well-developed application treats the delivery mechanism as a plugin and cleanly separates the core application from the web. It should be possible to remove the internet, with all of its methods of interaction (eg: its HTTP Request/Response interface), and still have a working “core” application which you can use elsewhere – say, to make a desktop or mobile application.

In short: your business logic doesn’t rely on the internet to exist.

With Kohana, or really any modern MVC framework which supports the PSR-0 standard, this is surprisingly easy to do. I’d like to share the convention I use.

In Kohana, all logic goes in application/classes/ (or the equivalent in its cascading filesystem). This directory will contain all your delivery logic: controllers, views, any models, and perhaps some factories to wire up dependency injection for your app.

Your actual core logic is kept in a separate repository, to force yourself to remove all framework dependencies. When combined, I like to store the core logic in application/vendor/ – with Git this can be done with a submodule. This way, MVC and insert-your-architecture-here are cleanly separated.
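Concretely, the combined layout looks something like this (everything under vendor/ is the separate core repository – the path names are just an example):

application/
    bootstrap.php
    classes/            <- delivery logic: controllers, views, models, factories
    vendor/
        MyApp/
            src/        <- core domain code, pulled in as a Git submodule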

You can then add your core application to Kohana’s autoloader (in application/bootstrap.php for convenience):

spl_autoload_register(function($class) {
    Kohana::auto_load($class, 'vendor/Path/To/App/src');
});
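With that in place, any class in the core’s src/ directory is autoloaded on demand, and the core repository never needs to know that Kohana exists.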

And that’s it! With a little discipline we suddenly get a massive benefit of future flexibility.

If you are interested in a project which uses this separation, please see my WIPUP project. (Disclaimer: WIPUP is a side-project and is still in progress). More reading by Bob Martin here.

Technical

A DCI architecture implementation in PHP

For those unfamiliar with DCI: it stands for Data, Context and Interactions. It’s a way to fill the gap in OOP between what an object is and what an object does. It also gives use-case enactment first-class status, improving the readability of the code. It was proposed by Trygve Reenskaug (the man behind MVC) and James O. Coplien.

Although DCI implementations exist in other languages, they’re a bit lacking in PHP. I am only aware of two others. The first is phpcore-dci, by Joe Chrzanowski. Although it hits first on Google, I believe its implementation is a little backwards and far too restrictive. For example, it injects roles (and their interactions) into data objects rather than the other way around, i.e. casting data objects into roles. It also imposes a rather silly convention which may not fit your style.

The second is by Jeremy Bush (lead developer of Kohana), as part of his Auto-Modeler-Demo project, which demonstrates quite a few technologies and practices. It’s definitely very good – in fact it inspired this implementation – but I was not convinced by the casting technique used (via lambda functions).

Without further ado, here’s the implementation:

<?php
class Validation_Exception extends Exception {
    public $errors = array();
    public function __construct($errors) {
        parent::__construct('Multiple exceptions thrown.');
        $this->errors = $errors;
    }
    public function as_array() {
        return $this->errors;
    }
}

/**
 * A dumb data object for a person.
 */
class Person {
    public $name;
    public function __construct(Array $properties) {
        foreach ($properties as $property_name => $property_value) {
            $this->{'set_'. $property_name}($property_value);
        }
    }
    public function get_name() {
        return $this->name;
    }
    public function set_name($n) {
        $this->name = $n;
    }
}

/**
 * Interfaces allow us to specify which data objects can play this role.
 */
interface Actor_Requirements {
    public function get_name();
    public function set_name($n);
}

/**
 * The class that casts the data object as the role
 */
abstract class Cast_Actor extends Person implements Actor_Requirements {
    use Cast_Interactions;

    public function __construct(Person $p) {
        parent::__construct(get_object_vars($p));
    }
}


/**
 * What the role is able to do
 */
trait Cast_Interactions {
    public function link($roles) {
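        // Give this role named references to its collaborators (e.g.
        // $this->juliet) via dynamic properties. Traits require PHP 5.4+;
        // note that dynamic properties are deprecated as of PHP 8.2.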
        foreach ($roles as $role_name => $role_instance) {
            $this->$role_name = $role_instance;
        }
    }
}

trait Romeo_Interactions {
    public function call_juliet() {
        echo $this->get_name(), ': Hey Juliet!', "\n";
        $this->juliet->reject_romeo();
    }

    public function leave() {
        echo $this->get_name(), ': Fine then. Goodbye.', "\n";
        //throw new Exception('The play ended unexpectedly.');
    }
}

trait Juliet_Interactions {
    public function reject_romeo() {
        echo $this->get_name(), ': Not now, sorry.', "\n";
        // Not really anything to do for validation, but just for demonstration
        //throw new Validation_Exception(array('Juliet isn\'t following her script.', 'Juliet rejected Romeo.'));
        $this->romeo->leave();
    }
}

/**
 * Inject role interactions into the casting to make our final roleplayer.
 * Separating the Cast_Foo object and the final roleplaying object allows for
 * reusing generic casts.
 */
class Romeo extends Cast_Actor {
    use Romeo_Interactions;
}

class Juliet extends Cast_Actor {
    use Juliet_Interactions;
}

/*
// An example of how using traits can be useful
class Director extends Cast_Director {
    use Director_Interactions;
    use Romeo_Interactions;
    use Juliet_Interactions;
}
 */

/**
 * Use case: enact Romeo & Juliet
 */
class Context {
    private $romeo;
    private $juliet;

    public function __construct(Person $p1, Person $p2) {
        // Cast objects into roles
        $this->romeo = new Romeo($p1);
        $this->juliet = new Juliet($p2);

        // Defines connections between roles.
        $this->romeo->link(array(
            'juliet' => $this->juliet
        ));
        $this->juliet->link(array(
            'romeo' => $this->romeo
        ));
    }

    public function execute() {
        try {
            $this->romeo->call_juliet();
        } catch (Validation_Exception $e) {
            $errors['validation'] = $e->as_array();
        } catch (Exception $e) {
            $errors['misc'] = $e->getMessage();
        }

        if (isset($errors)) {
            return array(
                'status' => 'failure',
                'errors' => $errors
            );
        } else {
            return array('status' => 'success');
        }
    }
}

$person1 = new Person(array('name' => 'Romeo'));
$person2 = new Person(array('name' => 'Juliet'));

$context = new Context($person1, $person2);
$result = $context->execute();
print_r($result);
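For reference, running the script as-is (with the exception lines left commented out) should print:

Romeo: Hey Juliet!
Juliet: Not now, sorry.
Romeo: Fine then. Goodbye.
Array
(
    [status] => success
)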

Feel free to refactor this for your own architecture – this setup most definitely should not all be in one single file but should be split up as appropriate for autoloading, semantics or organisation.

I hope somebody finds this useful. It’s licensed under the IANAL license.

Technical

GitList – a great way to browse Git repos on your personal server.

Git is a very popular version control (or source control management) system. It’s incredibly easy to use, really lightweight, and has a hassle-free workflow. Even when I’m working on projects without other contributors, I often still commit the code to a git repository just for its benefits.

If you want to run a git server somewhere, you have a few choices, such as the social GitHub, Gitorious, or a custom install on your own server with something like Gitolite + Gitweb. When it comes to non-open-source applications, your choices become slightly more limited. Do you fork out (no pun intended) the cash for GitHub’s high-quality visualisation but pricey hosting schemes, battle with dependency hell on Gitorious’ massive requirements list, or play sysadmin as you cook up a home-brew repo with access rules and security considerations?

Admittedly the access-rule bit is getting better with projects like Gitolite, but one thing that has always annoyed me is how aesthetically ugly and awkward it is to browse the repos and move around the code in general. Luckily, I’ve now discovered GitList.

GitList is, as of writing, still a very immature project (it seems to have started a mere two months ago), and I have no idea how it fares against massively complex repositories, but it sets up in under a minute and takes a few hints from some of GitHub’s better UI decisions.

Give it a spin!

Technical

Showing your activity: a plasma widget hack

I like activities. However, there are a couple of gripes I have with their implementation.

The first is how to switch from one activity to another. Apparently there are at least seven ways to switch activities already, but all of them fail to simultaneously satisfy two criteria: 1) being accessible via a keyboard shortcut, and 2) visually displaying an activity list during the switch. The closest implementation is Meta-Q, but that doesn’t show an activity list during the switch like KWin’s switcher does, meaning exactly what order you’re flipping through is anybody’s guess. The Activities widget also comes close, providing a very comprehensive view to manage activities.

Luckily, you can combine the two to somewhat solve this problem in a Meta-Q, Meta-(Shift)-Tab, Meta-Q sequence, but it’s clunky and slow.

The second gripe is that it’s very difficult to tell which activity you’re actually on. The only place which says so clearly is a tab on the desktop, and if you’re busy actually using your computer, that tab is going to be hidden most of the time. Another hint might be the change of hue underneath your panel, if you use a translucent panel with different wallpapers for each activity – but I personally don’t use different wallpapers. The final hint is seeing which windows crop up when you switch activities, but this is slow to process. It isn’t a problem when switching between only two activities, but three and up becomes an issue.

It’s vital to be able to always see which activity you’re on. After coming back from a ten-minute break, you might start up an app that’s irrelevant to the current activity. Or you might make detours from your current line of work, which means you want to switch quickly between several activities – and Meta-Q’s guesswork doesn’t make this efficient at all. It has to be always in front of you, too, not auto-hidden in a separate panel – especially if you’re doing a lot of typing with keyboard shortcuts, so you don’t waste time looking for it or having to remember the state from the last time you opened an activity switcher.

Long story short, I got fed up and decided to make a plasma widget for it. Only barely knowing Python and never having opened the Qt or KDE/Plasma docs in my life didn’t help, but I was shocked at how easy it was. After a few hours I had something both functional and somewhat aesthetically decent.

kde activities plasma widget

Code is available here – download my plasma widget whichactivity. It’s guaranteed to make real programmers cry.

To install, just plasmapkg -i whichactivity.zip. To uninstall, plasmapkg -r whichactivity. Code is in contents/code/whichactivity.py – you might want to change the colours / icon depending on your theme.

Technical

Tech tip #11: How to have animated wallpapers in KDE

I’ve seen many people asking how to have animated wallpapers in KDE. The current options include specialised plasma widgets, or the rather limited array of specialised animated effects, such as the desktop globe, seasonal changes, or virus simulations.

Unfortunately there isn’t a native way to accomplish this, but KDE being KDE, there’s always a workaround.

The idea is to use mplayer to play a constantly looping, muted, fullscreen video and tell it to play on all desktops, underneath all apps, and not show up as a window in the taskbar, switcher, or pagers.

Here’s the snippet:

#!/bin/bash
mplayer -fixed-vo -loop 0 -nosound -fs -name 'animbg' /path/to/yourvideo.avi

Save it as a whatever.sh file and chmod +x whatever.sh (not required, but convenient).

The -fixed-vo flag prevents mplayer from reopening a new window every time the video loops (courtesy of the -loop 0 flag). -nosound and -loop are self-explanatory, and -fs is fullscreen. The -name flag allows us to set a specific window class, which will be picked up by the KDE window rule we’re about to create.

A quick note: mplayer also has a -title flag, which we should be able to target with a KDE window rule, but it seems that either mplayer creates the window and only afterwards changes the title, or KDE has a bug – either way, the window rule doesn’t match at runtime.

We can then go into System settings -> Window Behaviour -> Window Rules and press “New” to create a new window rule. Set the window class to an exact match as shown below. For more information you can view the KDE Userbase page on window rules.

In the Size & Position tab, check Desktop, and set it to Force All Desktops. In the Arrangement & Access tab, check Keep below, Skip taskbar, Skip pager, and Skip switcher, and set them all to Force Yes. Hit OK, and Apply your settings. For more information you can again see the KDE Userbase page on window attributes.

Tada! Now you have an animated wallpaper! You can set KDE System Settings -> Startup & Shutdown -> Autostart to run your .sh file.

Technical

Migration from GoDaddy

No doubt anybody with a domain name online has heard of the infamous GoDaddy. Whilst I’ve been dutifully spending time away from the computer this holiday, I’ve kept a close eye on the SOPA act, as well as registrars’ reactions towards it. GoDaddy already has horrible customer service, horrible products, a horrible interface, horrible advertising… and they also support SOPA.

Unfortunately I do have a couple of domains left on GoDaddy for legacy reasons and laziness on my part, but this was the final straw to migrate away (to Namecheap, if anybody’s interested). If you own a domain with them, spend the extra 5 minutes to migrate. It’s worth it.

Technical

Toronto’s “mini-sprint”, and Sydney’s KDE/FOSS Community

During the holidays I met up with Eugene Trounev (aka it-s), one of KDE’s awesome artists, to discuss our reorganisation of KDE.org and its design aspects (coming soon in the series). It was a two-day meeting, and my first time meeting another KDE enthusiast face-to-face – given my inconvenient geographical location in Malaysia, I don’t know anybody else here. I won’t post the outcome of the sprint here; it will be released periodically with the rest of my kde-www war series. It was extremely useful and awesome of course (and yes, lots did get done), and since no photos were taken, here is one of a random conifer tree to make up for it:

I’ve just arrived in Sydney, Australia to get ready for my upcoming year of university, so I wanted to quickly throw out the question: does anybody in the KDE / Blender / FOSS crowd live there? If you do, throw me an email/comment, and if there isn’t an existing community, let’s start one :D

Technical

The kde-www war: part 2

Before I begin this (delayed) post, I would like to re-emphasise that a sub-agenda of these blog posts is to raise community awareness about design issues in KDE. The website is certainly not the only area with design flaws, and over my Christmas holiday I was very happy to read a couple of blog posts, here and here, by Aurélien Gâteau about design issues within applications. I hope we see even more of these :)

In the initial post, we talked about the elephant in the room: the wall of text that is KDE.org. No solutions were presented, but symptoms were outlined. Then, in part 1, we discovered that the wall of text was partially a side effect of a deeper problem within KDE – the structure, or lack of it. We discussed KDE’s marketing objectives, and the corresponding misalignments within KDE’s website. We finished off with outlining the ideal situation in the future. Today, we are going to talk about achieving it.

KDE has a lot to offer. Our goal is to filter down what it offers based on relevance to our target user groups. So before we start, let’s look at the current state of KDE’s immediate “sitemap” – this is what the visitor is presented with when they first look at the site. I’ve divided the pages into the columns they belong in, and briefly described in bullet points what each page does.

Yes, that was long.

Too long. In fact, let me break down the issues here:

Too much choice.

This is the biggest problem here. KDE has a lot to talk about, but newcomers don’t want to be slammed with all of it in one go. For websites belonging to smaller services, each navigational item can highlight a different issue without overwhelming the visitor, because they all wrap nicely around a single point of focus. KDE has multiple points of focus. Thus, it should only provide navigation items which hit each topic, not each sub-topic. Two other websites deal with the same problem very effectively: Mozilla.org and Opera.com. As you can see, Mozilla forgoes submenus altogether, and Opera has a very clear breakdown of the topics it deals with. All in all, nobody should ever be presented with 44 navigational choices.

Imbalance in choices.

Not only is the glut of items under “About KDE” confusing to the user; some areas in the Community section seem like page filler that didn’t need to be included, and others are just a massive list of items. In contrast, the Workspaces section has only three links – which, combined, really fail to deliver what they could. The user is left with a “is that it? Pages upon pages of history and verbose description about KDE’s past, and only a couple of screenshots of what it’s like now?“.

Double entries in the navigation.

A big problem here is that the navigation headers themselves are links to pages, instead of the plain dividers they are meant to be. For example, “About KDE” is a link, and “Community”, “Workspaces”, “Applications” etc. are also links. Often the resulting page is simply a summary of all of its sub-pages, which means information repeats itself, two pages have to be maintained in the future instead of one, and users get confused about where the “official” source of information on a topic is. The summaries often seem half-motivated, there just to fill up a page – the only exception being the Dev. Platform page.

Ambiguity in categorisation.

The most immediate ambiguity that shouts out at me is the “Support” category. I immediately thought “how to support KDE”, as is the norm on most other sites, but it turns out to actually be end-user docs/help. Apparently I was not the first to be misled, as seen by the later-added “Join the Game” link, which is therefore miscategorised. Similarly, a lot of the “Community” which I identified in part 1 is nonexistent in the Community section, which is instead filled with links about “KDE: The Foundation”. The Workspaces and Applications categories are also separated, even though they need not be – they are often bundled together when presented to the user. The result is a half-assed Workspaces section which really undersells what we have to offer.

So, what now?

We have to completely reorganise the website, obviously. The new navigation has to:

  • Provide a smaller number of choices
  • Properly categorise navigational items
  • Remove stub pages that are unnecessary
  • Remove “summary” pages that are unnecessary
  • Hide pages that “only those looking for it should find it” (eg: About Free Qt Foundation)
  • Expose more of KDE’s community, (forums/planets/irc/mail lists/social sites/ocs/etc)
  • Guide users through our outlined optimum navigational route which is aligned with our marketing efforts (as identified in part 1)

Now that we have a clear list of goals, I spent a few days brainstorming and designing a new structure along with the kde-www folks. Here’s the finished product:

Less choice, less confusion.

This new structure narrows the number of items down to 29. However, we’ve decided not to present all 29 to the user at once, instead settling for showing only four items – Community, Software, Development and Support – with the About items hidden in the footer (only those searching for those pages should find them; we shouldn’t showcase them). We’ve merged the Workspaces and Applications sections into Software, which is essentially a visual tour through KDE, instead of splitting it up into single, solitary pages. The Community section actually does have community links this time, and we’ve narrowed the Development items down to the bare essentials (open for debate, as the -www folk aren’t desktop devs), since in general the devs know where information is kept. Ideally, the Development section’s objective is to make it easy for new coders to join.

It is a little hard to describe, but many pages have been merged and some even completely removed, and I won’t go into details describing why every choice was made.

Points of focus

I’ve highlighted several points of focus with a blue square. These are in general the more important navigational items, as they represent key sub-topics in each section. Later on, in the design phase, we shall discuss how these can be emphasised visually.

Aligned with marketing’s optimum navigational route

I’ve drawn two arrows in the diagram above, one blue and the other red. The blue represents the optimum path for our new users. It starts them off with “We are KDE”, to answer their question “What is KDE?”, then guides them through the Software section – a visual mosaic of pretty colours, screenshots and beautifully presented features – to persuade them “Why is KDE awesome and why should I use it?”. Finally, we land them at the “Get KDE Software” page once they’ve said “OK, you’ve had me convinced. Let’s get started.”

The red arrow is slightly more complex, for people who already use KDE. Their landing page is the “Get Involved” page, whose objective is to answer the question “Where do I fit in?”. When that’s answered, we direct them to one of our many community outlets within the Community section and help them start their journey with KDE. Alternatively, should they be interested in the technical aspects of KDE, they can learn about the Dev Platform and get redirected to the Techbase, which should turn them into super geeks in no time.

That’s it for part 2.

Thanks for reading and I hope you’re enjoying this series. There’s still a long way to go, and you can actually keep up to date on it via the WIPUP project here.

Technical

The kde-www war: part 1

In my initial post, I talked about the wall of text. I described some of the symptoms of the wall of text, and proclaimed that kde.org is terrible. I listed some of the basics of cleaning up text, and gathered some information about the “why” of kde.org.

Unfortunately, KDE.org is representative of a very large and vibrant community, and although formatting and eyecandy insertions will come in good time, we have to first understand the site’s structure to make informed decisions before tidying up small details. KDE.org’s wall of text problem is not simply due to a few bad aesthetic choices, but instead a side-effect of a more fundamental problem in KDE-www’s structure.

When I defined the wall of text issue, I described the problem as boiling down to the essence of what you’re trying to communicate to the audience, and how to present it. So let’s look at what we are trying to communicate to the KDE audience – of which there are essentially two parties:

The uninitiated potential KDE user

The new user is interested in the single question “What is KDE?“. They will want to understand that KDE is a community, and that its product is the KDE SC – a multidimensional beast full of wonders for both end-users and developers. When this has been answered, we want to tell them “Why is KDE right for me?“, and finally, when they’re convinced, “How do I start?“.

New users have a very specific workflow, and so we should recognise this, tailor it to them, and remove any potential “sidetracking” factoids.

The existing KDE user

The existing KDE user knows what KDE is and is currently using it, but most importantly, the existing user IS KDE. The rebranding effort was not about changing KDE to KDE SC, but instead about separating product from people. Technically, open-source is simply a business model, but in reality, open-source is a philosophy constructed by people. KDE’s challenge is how to turn one of open-source’s most intangible qualities into an axiom for all users.

So let’s talk a bit about KDE instead of KDE: SC. It has a “magazine” of sorts, the Dot, which gives “official” news on ongoing events in KDE. It has an active blogosphere via PlanetKDE, populated basically by the people behind KDE: SC, who report upcoming features, discussions on KDE-related topics, ongoing physical events, and ongoing virtual events. It has a micro-blogosphere, buzz.kde, which highlights recent Flickr and Picasa activity, YouTube videos, Tweets, and Dents. KDE’s community also has the Forums, which act as discussion, support and brainstorming space all at once. There is a multitude of wikis: Userbase, by and for users; Techbase, by and for developers; and Community, used to organise community activities. There is KDE e.V., which does awesome stuff that isn’t publicised enough, and a variety of groups on social networks such as Facebook and Linux.com. Freenode’s network has a collection of IRC channels where KDE enthusiasts hang out. There is a variety of regional communities which all hold their own KDE-specific events, an entire network of community-contributed KDE resources through the OpenDesktop API, and various other KDE connections through the SocialDesktop.

For your convenience, I’ve bolded what is KDE in the above paragraph. KDE-www, being representative of KDE, must stress that this is what KDE is – firstly by presenting the amazing influx of activity from all of those sources in a digestible form, and secondly by making it easy for any KDE user, old or new, to find out where they belong and how they can add to the community. If you look at KDE-www from this perspective, it’s not hard to come to the conclusion that KDE.org is terrible.

But where do we start?

Given such a complex problem, let’s start by mapping out the ideal routes for each user. Here’s the proposal:

When looking at the chart above, notice how we clearly separate KDE from KDE:SC. I would like to highlight that the two final goals for existing users are not mutually exclusive. You can contribute to KDE:SC and at the same time contribute to KDE – as long as you communicate your activity.

Now that we have identified the ideal paths for our target audiences, we can start making informed decisions about restructuring KDE.org. But before I get to that in part 2, feel free to add your opinion.

P.S. Some wrong terminology is used when it comes to KDE:SC – it should be referred to as KDE Software, as SC is more of a technical term used to describe a specific subset of packages within KDE Software.

Technical

Help KDE.org defeat the wall of text.

Everybody knows that effective design is very important to any successful interface – be it an application, a website, a product, or a physical structure. There are lots of reasons behind this, but the one I’m going to talk about today is how design combats the most dreaded wall of text, of which KDE.org is a victim.

(Note: if you’re not interested in reading this post, just skip to the last paragraph where you can help give your 0.02 cents)

Somebody famous once said that it’s very easy to write. So easy, in fact, that the problem isn’t finding things to write about – it’s finding things not to write about. The question is how to write concisely: boiling down to the essence of what you’re trying to communicate to the audience, and how to present it.

But why is the wall of text so terrible? Despite what literature students tell you, people do not like to read. Ideally, information should enter their brains without any conscious effort whatsoever. As interfaces are all about sharing maximum functionality with the user without sacrificing usability, knowing how to minimise (or present differently) the use of text is very important. Here are a few points to consider when critiquing – the list is by no means complete, nor applicable in all scenarios.

You shouldn’t need explanatory paragraphs in your interface.

If the explanation is about your product, it’s OK to have it, but it shouldn’t be as long as a paragraph. If the explanation is about how to use your interface – that is the ultimate evil. The easiest way to remove these is to isolate the most relevant element of the interface to which the explanation belongs, then only show the explanation if the user is interested in that single option. Another way is to split your interface into multiple interfaces, reducing the complexity of what the user has to absorb in one go.

Don’t have more than 10 items in your main navigation.

Unless you expect a lot of repeat visitors who know exactly what they’re doing, of course. The point is that newcomers don’t like choice. They like the illusion of choice, but it is your job as the designer to secretly guide them through the optimum “first impression” route. If you want to sell a product, you want them to be intrigued by X, then introduced to Y, then amazed by Z – in that order. If you offer a service, think about what your target user’s daily functions are, and make sure those are in your main navigation. The rest? Stuff it elsewhere.

Icons help. They really do.

Icons speak for themselves. A red X means more than a “No”. A greyed out X means more than a “Not available”. An “i” in a circle means more than “More information”. You can forego the word “Profile” altogether if you use an icon of a person. Plus, icons make your interface look prettier. If anybody isn’t sure what an icon does, they can just hover over it.

Be careful of how you present dates.

Dates are the easiest way to reduce the readability of your interface. Given the date 04/05/06, Americans will read it as 5th April 2006, Europeans as 4th May 2006, and Chinese/Japanese as 6th May 2004. The entire string “04/05/06” looks like code, and your brain has to do an awful lot of deciphering to understand it. It’s often best to give the full string “4th May 2006” in archives where dates are important, “last month” if the user is only interested in relative dates, or “4 May” (omitting the year if possible) if space is tight. The date is rarely the most important piece of information in a system, so hide it somewhere only interested people will see it.
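In PHP, for instance, the difference is a one-line formatting choice – a quick sketch:

<?php
// The same moment rendered three ways, per the advice above.
$t = mktime(0, 0, 0, 5, 4, 2006);  // 4 May 2006
echo date('d/m/y', $t), "\n";      // "04/05/06" - the ambiguous code
echo date('jS F Y', $t), "\n";     // "4th May 2006" - for archives
echo date('j M', $t), "\n";        // "4 May" - when space is tight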

Present your text semantically.

On computer systems it’s easy to think of text as lines with line breaks. Instead, get back to thinking of text blocked into paragraphs, presenting one point per paragraph. If you have a list, use a list. On the internet, CSS makes this easy to do.

Create consistent visual format indicators.

Bold text is good for emphasis, colours signify extra information, italics hint at “quoted” text, font sizes represent importance, and alignment influences the workflow. It’s harder to do this in desktop applications, but still possible.

Over the next few weeks I will slowly document exactly how we can put this into practice through a live sample of KDE’s website. I will analyse each page and document it here. My objective is to not only beautify and improve KDE’s website (not only defeating the wall of text, but also improving it all around), but to also increase awareness about this in all of KDE’s applications.

Before I start, I need to collect some qualitative data from you, the community. Simply leave a comment to this post answering the following question:

Do you use KDE.org? (as in www.kde.org, not any subdomains such as the techbase, userbase, dot, etc)

If yes, was it a one-time “tour” use, or do you go to it regularly? If it’s a one-time, what do you expect kde.org to offer you? If you go to it regularly, what do you check most often?

Cheers, and until next time!

Technical

The dust has settled.

A while back, I got myself a VPS from JohnCompanies. Previously I had been on a shared hosting account with OpticEmpire (I still use them for some of my sites). I chose a VPS for the convenience of having a personal server to run my little life-hacks, for the learning experience as I inch towards independence with my web services, for the flexibility with current and future web services, and obviously because, like most people, I like to have control over my own stuff.

I ended up with Debian. I found Debian to be quite a decent distribution to work with, but all in all it actually strengthened my attachment to Gentoo (except for the long compilation times!). With absolutely no knowledge whatsoever about running a server or all the magical voodoo that goes on behind internet servers, I managed – within a week – to learn about and set up my very own DNS server for a few of my domains, learn about and set up a postfix+dovecot(+squirrelmail) mailserver, set up the usual PHP security modules and webapps (eg: phpmyadmin), and migrate thinkmoult.com and my email (oh, and of course put up a Quassel core!). Well, well. *pats self on back* Oh, and along the way I learned how to use Debian (as the only distro I’ve ever used at any mentionable length is Gentoo).

I have to throw some kudos at JohnCompanies’ tech support: because I wasn’t familiar with setting this stuff up, they helped recommend packages and pointed me at a few documentation pages to look over. There were some bumps along the way – I had half set up an email account (to migrate to), so their emails got sent to the half-created account instead of my existing one, and a few messages were lost – but otherwise things were great.

As for the installation procedure, I ran into a few problems when Debian insisted on installing webapps into /usr/share/* and chowning them root:root. PHP modules such as mod_suphp don’t quite like this, so I had to rechown them and assign them to their own virtualhost (and add the docroot to suphp’s config too). Debian’s "solution" of running both mod_suphp and mod_php5 at the same time is, sadly, quite stupid.

So yes, the dust has settled and things should be working awesomely. In the meantime, I did also have some chance to play with photography which I will be adding regularly to, and you can view them here. Here’s one of the photos just to spice up this entry.

Technical

Walkthrough of a CSS3 website design slice.

Slicing is a sign of a terrible golfer. Slicing is also the process of cutting up an image design into smaller images and writing markup code to turn it into a living, breathing website. I recently got a request from a friend to slice their portfolio website. Here is the original design he sent to me (and dumped on WIPUP as well).

It is a fixed-width, fixed-height website design. Technically speaking, it’s a rather simple design. Most website frontend coders would just launch right into slicing, but this time I wanted to have some fun. I wanted the freedom that any slicer and designer yearns for – perfect separation between presentation and content, and complete disregard for browser compatibility.

Yes, if you haven’t already guessed, I built this site with CSS3. The only images I used in the end were the green background image, and the splash screen background image (oh, and the leaf icons for the navigation, but those don’t really count).

Most of the layout was straightforward using things like the new border-radius and box-shadow properties. However, the lump in the navigation bar posed some complications. In the end I was able to recreate it using a three-layered solution (via the z-index property). The first layer held the navigation strip with shadow effects. The second layer (above the first) created the lump’s shape and shadow. A third layer mimicked the second, except with a slightly decreased width, a slight offset at the top, and a shadow of the same colour as the background to create a "fading" effect for the shadow on the sides. With position: relative and some offsetting to place them, I managed to recreate the effect pretty darn well, if I might say so myself.

Finally, I used Google’s Font API to choose a more appropriate font, applied text-shadows (with a different colour in my a:hover rules to create a nice glow effect), and stuck it up online for my friend to see. Here’s the result (output via the Gecko renderer):

This multi-tab bar is a common web design element, so this trick might help other CSS3-yearning developers. Here’s the code for those who are interested. The design works in Firefox, Opera, and Safari. Chrome does not render rounded shadows correctly but otherwise works fine. It fails in IE8 and below. I haven’t tested IE9.