A history of rendering engines and modern trends

When working in the architecture industry, we often look at rendering engines from many angles, if you’ll pardon the pun. We use simple renderings for diagrams, realistic rendering for architectural visualisations, real-time rendering for virtual reality prototyping, point cloud renderings for landscape and heritage scans, and so on. As the ultimate goal in archviz is both to represent abstractly and to lower the costs of realistic prototyping, it pays to see what the future of rendering holds for the industry. But first, I’d like to briefly look at rendering’s history.

When the CG industry first considered how to render an image on the screen, they were mostly concerned with the hidden surface determination problem: given a set of polygons, which surfaces are visible from the camera’s point of view, and which are not? This mentality of thinking about how to “colour in an object”, as opposed to how to simulate light, led to the development of one of the first rendering techniques: flat shading.

In flat shading, the rendering engine would consider the surface normal of each surface with respect to a light source. The more face-on a surface was to the light, the lighter it was shaded, and the more grazing the angle, the darker. If the path between the surface and a light source was blocked by another surface, it was shaded black. I’ve attached an example of flat shading in the first image below. This is roughly analogous to an actual physical phenomenon – that the angle of incidence to a material matters.
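To make the idea concrete, here is a tiny illustrative sketch (not taken from any particular engine) of how that brightness can be computed for a single face, using the cosine between the face normal and the direction towards the light:

# Illustrative only: flat-shading brightness for one face (the vectors are made-up examples)
def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def flat_shade(face_normal, light_direction):
    n = normalize(face_normal)
    l = normalize(light_direction)
    facing = sum(a * b for a, b in zip(n, l))  # cosine of the angle of incidence
    return max(facing, 0.0)  # 1.0 when face-on, falling to 0.0 as the light grazes the face

print(flat_shade((0.0, 0.0, 1.0), (0.0, 0.5, 1.0)))  # mostly face-on, so roughly 0.89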

This was very simple, and also not very realistic. Flat shading was then combined with specular shading, which was essentially the same but heavily biased towards the angle of the surface normal, with another parameter to control the falloff of the highlight. Although this created a convincing metallic glint (see monkey two in the image below), it was again just an artistic trick and wasn’t based on an actual physical phenomenon. Nevertheless, it stuck, even to this day.

Shading techniques improved when Bui Tuong Phong invented the famous Phong shader. He had the idea of interpolating the vertex normals across each face to give a gradient of colour through the face. This created much more realistic results (see monkey three), but again, had no real-world equivalent.

The next improvement to the shading model came when people noticed that shadows were rendering as completely black. In real life, global illumination and ambient light bounces mean that almost everything can be very effectively indirectly lit. There was no computationally efficient solution to the problem at the time, and so an ambient light constant was added to simply bump up the global lighting (see monkey four). This more or less formed the segue into modern-day rendering, and thus ends our history lesson.

Flat, specular, Phong, and ambient light shading

The moral of the story is that almost all the shading approaches had no real-life equivalent, and all the subsequent improvements were built upon a method that considered how to colour in a shape from the point of view of the shape itself. This is fundamentally incorrect – in the physical world, how an object looks (at human scales; forget quantum-mechanical scales) depends on rays of light emitted from photon-producing objects (e.g. hot objects) bouncing around and losing energy. Energy is absorbed and reflected by materials in very different ways depending on the microsurface imperfections and the chemical properties of the material.

Luckily, in parallel with these artistic shaders, physically-based “ray-tracing” rendering engines were also being developed. These ray-tracers traced rays of photons to and from cameras and light sources in the same way the real world works. Back then, they were cool technical demos, but were always too inefficient for any practical work. However, they proved that, theoretically, if you throw enough computing power at the problem, you can get photo-realistic results. Nowadays, of course, everybody knows about ray-tracing and it’s practically the norm in the market. I’ve shown an example of a chrome monkey below reflecting the environment – the textbook example of what ray-tracing can achieve that traditional shaders could not (well, not without hacks and light maps and whatnot). You can see another example of photo-realistic rendering with Cycles that I’ve done too.

Glossy ray tracing render

Almost every popular rendering engine nowadays – Blender Cycles, V-Ray, Maxwell, RenderMan, Arnold – is a ray-tracer. They are getting faster, and now combine GPU and CPU to provide almost real-time rendering. In recent years, Physically Based Rendering, better real-world scanners, and improvements in texture-painting tools are three among many advances that make photo-realistic rendering easier and easier.

Basically, photo-realism is becoming really easy. A subtler trend worth noting is that rendering is also becoming more scientifically based. In the past, these ray-tracers, although somewhat physically based, made so many approximations that real-world units were ignored in favour of arbitrary values.

This matters because ultimate photorealism comes from scanning in real-world data at increasing levels of fidelity. Engines, no matter how physically based they are, will find it hard to use this information if it cannot be easily linked back to physical units and measurable scientific values.

Thankfully, this is actually improving. Simple things like using IES profiles in lighting, or producing falsecolour luminance images, are starting to be possible with mainstream renderers. The popularisation of the Disney shader is slowly getting engines working on interoperability, and ultimate interoperability, much like ultimate photorealism, depends on scientific values.

At the very least, we know that if we throw more computing power at the problem, the render will eventually converge and leave us with a beautiful, real image.

This is great news for architecture – the industry I’m in. Architecture is no stranger to smoke and mirrors when it comes to renders and a trend towards scientific rendering makes it easier to both cheaply prototype and still promise the same results to eager clients.

Until then, let’s play with photoreal game engines and VR while the hype lasts.


Clean meshes automatically in Blender with Python

I wrote a little Python script to clean up imported meshes (OBJs, DXFs, etc.) in Blender. It’s quite useful if you often process meshes from other sources, in particular IFCs. Even better, Blender can be run headlessly and invoke the script automatically, so you can clean meshes server-side even before you open them up on your own computer.

From my initial script, Paul Spooner at the BlenderArtists forums was kind enough to rewrite it with improvements. For the record, here it is. Simply copy and paste into the text editor and hit the Run Script button. It will only impact selected objects.

import bpy
checked = set()
for selected_object in bpy.context.selected_objects:
    if selected_object.type != 'MESH':
        continue
    meshdata =
    if meshdata in checked:
        continue
    checked.add(meshdata)
    bpy.context.scene.objects.active = selected_object
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.remove_doubles()           # weld vertices
    bpy.ops.mesh.tris_convert_to_quads()    # convert tris to quads
    bpy.ops.mesh.normals_make_consistent()  # recalculate normals
    bpy.ops.object.mode_set(mode='OBJECT')

Although it is pretty self-explanatory, what it does is weld vertices, convert tris to quads, and recalculate normals.
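As for running it headlessly, as mentioned earlier, something along these lines should do the trick; the file names are placeholders, and it assumes the objects to be cleaned are already selected in the .blend file:

blender --background model.blend --python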

Life & much, much more

Australian electrical infrastructure maps

Today we’re going to take a brief look at Australia’s electrical infrastructure. The dataset is available publicly from Geoscience Australia, but for those who don’t dabble in GIS software it can be a little hard to get to. I’ve put it all together in QGIS, and here’s a few snapshots for the curious. Now you can pretend you’re an expert electrical engineer and say judgemental things about Australia!

These maps cover major national power stations, electricity transmission lines, and substations. If you’ve ever wondered where your electricity comes from, or how it gets to your house, this may give you a brief idea of how it all fits together.

Let’s start with Australia as a whole. Translucent circles represent electrical power stations, and their size is weighted by generation capacity. For convenience, any power station with a generation capacity greater than 250MW is labeled. Non-renewable sources are shaded black, and renewables are shaded red. Transmission power lines are in red, and substations are the small red dots. Transmission lines are weighted by their voltage capacity – thicker means more kV, thinner means less. Dotted transmission lines run underground; the rest are overhead.

You can click any map for a high resolution version.

Australian electrical infrastructure map

Detailed analysis aside, we can see that Australia is still largely based on non-renewables. This is unsurprising. Similarly unsurprising is the south-east coast concentration, and proximity to densely populated areas. Tasmania is mostly devoid of non-renewables, which is great, but what’s that large red circle in the south east? Let’s take a look in more detail.

NSW electrical infrastructure map

Zooming into NSW, we can see Talbingo Dam, which services the Tumut hydroelectric power station. Tumut is special as it is the highest-capacity renewable power station, and according to the list of major power stations by the Department of Resources & Energy, it has a capacity of 2,465MW. Put into context, this is just under the 2,800MW capacity of the Bayswater coal plant, the second largest non-renewable power plant in NSW.

All this talk about capacity is really important because most renewable power stations have a capacity of less than 100MW. So you would have to build, say, 20-40 renewable power stations to equal the capacity of a single coal plant. If you excluded Tumut and Murray (the next highest-capacity hydro after Tumut), and added up every single renewable power plant in NSW (wind, solar, hydro, geothermal, biofuel, and biogas), you would only then equal the capacity of your average NSW coal plant. Snowy Hydro, which runs the show, is damn successful, and the secret is in the name: snow makes for good hydro! All that melting and sudden runoff is great for electricity.

Sydney electrical infrastructure map

Zooming further into the Sydney region shows coal around the perimeter, as well as the local contender which is the Warragamba dam hydro. Despite the promise of the Warragamba dam hydro, it is important to note that it is disconnected from the grid and only provides power when the dam is at a certain level. This is quite a rare occasion for Warragamba, which provides 80% of the potable water of Sydney. On my recent visit to Warragamba, I was actually told that the hydro is being shut down due to high operating costs. Simply put, the dam is better as a reservoir instead of a hydro source.

Sydney electrical infrastructure map zoomed in

Let’s take a closer look at the Sydney region. We see a smattering of renewables and non-renewables. Still, the non-renewables outweigh the renewables – we’ll take a closer look at insolation and local solar capacity in a future blog post, but right now the only renewables of note are the biogases. In short, these stem from landfills (Eastern Creek, Lucas Heights, and Spring Farm) and industrial wastelands (Camellia). Also interesting to note is that just like the Warragamba dam hydro, all of these landfills are already shut down or close to shutting down. We’ll talk about the waste issue and landfill capacities in a future blog post too.

In summary, Australia has a little bit more work to do. Of course, the issue is a lot deeper than these maps, but we can’t cram everything into one blog post, so hopefully it’s enough to whet your appetite.


Show who modified an element last in Revit

In Revit, don’t you ever wish you could find out who was guilty of screwing over your completely pristine BIM model? In software, we run special tracking software so that we can monitor the history of every single line of code, and blame whoever messed it up (literally, the program is called git blame). Although in the architecture industry we don’t quite have this same fidelity of tracking (well, sort of, more on that later), it’s still possible to find out who touched any Revit element last so we can interrogate them.

Finding out who last modified an element or created an element is actually a native Revit feature, but it is not very exposed on the user interface. First, I’ll show you how to check it via the interface, and then I’ll show you how to create a macro to check it from any view. I’ll then also show you how to check the history of less obvious Revit elements, like who last modified the view template.

To do this, we are assuming there is a central Revit file and people are checking out local copies of it. We are also assuming that everybody has different Revit usernames. You can check your Revit username by going to Menu->Options->General->Username.

Revit username option

Then, turn on a worksharing mode. Any of the four available modes have this feature, so pick any that you’d like.

Revit worksharing display mode options

Once the mode is enabled, just hover over any element in your view, and a Revit tooltip will appear showing various information about who created it, who owns it, and who touched it last. I’ve censored it so you can’t see who’s guilty.

Revit last updated tooltip

This is great and really easy. However, to make things even easier, I’ve written a macro that will let you click on any element without having to first switch display modes, and then it’ll tell you who touched it last.

Go into the Manage tab and click on Macro Manager. Create a new Module in Python, and dump the following code:

# These imports are normally already present in the auto-generated macro module
from Autodesk.Revit.DB import WorksharingUtils, ElementId
from Autodesk.Revit.UI import TaskDialog
from Autodesk.Revit.UI.Selection import ObjectType

def Blame(self):
    select = self.Application.ActiveUIDocument.Selection
    el = self.Application.ActiveUIDocument.Document.GetElement(select.PickObject(ObjectType.Element, 'Get element'))
    info = WorksharingUtils.GetWorksharingTooltipInfo(self.Application.ActiveUIDocument.Document, el.Id)
    TaskDialog.Show('Blame', 'Created by: ' + str(info.Creator) + '\nLast changed by: ' + str(info.LastChangedBy))

Press F8 to compile the macro, then run it in the macro manager. After clicking any element, you’ll see a dialog box pop up. I like to assign a keyboard shortcut to the macro manager to make this very quick to do.

If you feel the need to see the history of another, less obvious / clickable element (say, a view template), you will need to first get its element ID. This is an integer that all elements in Revit have (note: it is not the GUID, which is a related but different thing). Tools that let you query or browse the BIM database, such as the plugins provided by Ideate, will allow you to find out these element IDs.

Once you have the element ID, you can substitute the element acquisition line in the code above with the below snippet, where XXXXXXXX is your element ID:

el = self.Application.ActiveUIDocument.Document.GetElement(ElementId(XXXXXXXX))

There you have it – it’s all fun and games until you realise that half the screw-ups are your own fault :)

Life & much, much more

Digital privacy is important, even though you think it doesn’t impact you

The average person (or business entity) publicly shares their personal information on the internet. If you search with Google, send email with Gmail, talk with Facebook Messenger, and browse the Web with Chrome, you are being tracked. These free services, and many more, store and analyse your personal messages, search history, cloud photos, and the websites you visit. This information is readily available to governments, hackers, or really any business or person who is interested and willing to pay (law firms, journalists, advertisers, etc).

This is not news to most people. You have perhaps experienced an advertisement pop up suddenly related to a website you visited that you thought was private. You have probably had Facebook recommend new friends who you just met a week ago. However, these are all rather benign examples that don’t warrant paranoia over your digital security.

As part of my 2018 New Year’s resolutions I have been taking a closer look at my online privacy. Many people have questioned me on it, and so I thought I would address it in a blog post. To begin with, I’d like to refer you to a great TED Talk on Why Privacy Matters. Take 20 minutes to watch it and come back.

Glenn Greenwald - TED - Why Privacy Matters

For those too lazy to click, Glenn Greenwald makes the point that we don’t behave the same way in the physical world and the virtual world. In the physical world, we lock our houses, cover our PIN at the ATM, close the curtains, don’t talk about business secrets in public, and use an empty room when having a private conversation. This is largely because we understand that in the physical world, we can open unlocked doors, glance at PIN keypads, peek through curtains, listen to company gossip, and overhear conversations.

In the virtual world, we are unfortunately uneducated about how easily others can snoop on our private information. We assume that sending an email on Gmail is private, or that opening an incognito-mode browser hides everything. This is far from the truth: mass surveillance is relatively cheap and easy, and there are many organisations that are well invested in knowing how to snoop. However, most of us only experience this through tailored advertising. As a result, there is little motivation to care about privacy.

In this post, I will not talk about how you are tracked, or how to secure yourself. These are deep topics that deserve more discussion by themselves. However, I do want to talk about why privacy matters.

The right to privacy is a basic human right. Outside the obvious desire to hide company secrets, financial and medical information, we behave differently when we are being watched. You can watch adult videos if you close the door, buy different things if you don’t have a judgmental cashier, and talk about different things on the phone if you aren’t sitting on a train in public.

Again, these are benign and socially accepted norms. However, there are people living in countries where the norms are largely biased against them. Global issues like corruption and political oppression exist, even though many of us are lucky enough to be able to turn a blind eye. Victims in these countries are censored, incarcerated, and killed. See for yourself where your country ranks in the list of freedom indices.

In these societies, a greater percentage of the population start to be impacted by the poor digital security that we practice. We can see this in the following graph, which shows the usage of The Tor Project, a tool that anonymises Internet traffic, correlating with political oppression (read the original study).

Correlation of Tor usage and political repression

Further investigation shows that Tor usage (see how Tor statistics are derived) similarly correlates with politically sensitive events. As of writing this post, I rewound the clock to the three most recent political events that occurred in countries which experience censorship and political oppression.

First, we have the 19th National Congress of the Communist Party of China. You can see the tripling in activity as this event occurred. The red dots show potential censorship.

Chinese Tor usage spikes during the 19th National Congress of the Communist Party of China

Similarly, we can see a turbulent doubling in usage during the blocking of social media and TV channels in Pakistan.

Pakistan Tor usage during the social media block

Finally, a spike of usage and statistically relevant censorship / release of censorship events during the anti-government protests in Iran.

Iran Tor usage spikes during Protests in Iran, blocking of various services including Tor

These three events were simply picked as the three most recent political events. Whether they are good or bad is largely irrelevant and I hold no opinion on them whatsoever. However, it is clear that others do have an opinion, and are using services like Tor as a reaction. Of course, it’s not just Tor. For example, a couple of weeks ago, 30,000 Turks were incorrectly accused of treason based on a 1×1 tracking pixel. This resulted in jobs, houses, and innocent lives being lost. In the US, governors are still signing orders in support of Net Neutrality.

Despite these issues, there are those that believe that as long as we do not do anything bad, there is nothing to hide. Privacy tools are used by criminals, not the common population. This is also untrue. The definition of “bad” changes depending on who is in power, and criminals are motivated individuals who have much better privacy tools than most will ever have. Statistically, increasing the basic awareness of privacy does not increase criminal activity, but does increase protection of the unfairly oppressed.

Those who are fortunate enough to live a complacent digital life tend to drag down the average awareness of digital privacy. Just as we donate relief aid to countries that experience wars or natural disasters, we should promote awareness of digital freedom on behalf of those who do not have it. Nurturing a more privacy-aware generation (a generation born with a tablet in their hands) is a responsibility we carry to ensure that social justice and the expression of marginalised populations remain possible.

Next up, I’ll talk a bit about what tracking does occur, and what privacy tools are out there.


Breakdown of a photo-realistic image in Blender Cycles

Recently, I wanted to produce a sample photo-realistic 3D scene with Blender’s Cycles engine that I could attempt to recreate in other rendering engines. I took an almost random photo of a street and kerb junction of the kind that is prolific throughout Sydney’s suburbs. Here’s that photo below. You can see incredible features that we take for granted, such as the viscous bulging of the asphalt as it hits the kerb, dead eucalyptus leaves, a groove between two concrete blocks, and so on. It’s a slightly over-exposed shot, hence the unnaturally bright grass.

Source image

The resultant 3D equivalent is below, all modeled, textured, and rendered in Blender. I’ve thrown in a glossy Suzanne and sphere, as well as a creative oil slick on the asphalt. You can click on the images to see a high-resolution version.

Rendered image

The modeling itself is ridiculously easy. Excluding the particle systems and dummy meshes, the road and kerb add up to 5 polygons. The split in the middle of the kerb is there because I suspected the kerb rose in level a bit, although I ended up ignoring it. This is typically the level of detail you can expect from an architectural scene where only the road level and sidewalk level matter.

You’ll notice there are no lights. The photo was taken under an overcast sky, and so an overcast sky environment map (±4 EV) was used for lighting. The environment map was largely untouched: as it was an overcast sky, we don’t need to worry about the sun’s impact on the EV range.

Off to one side are some of the meshes used in the particle systems. This spot was below a eucalyptus tree, and so various eucalyptus leaves and other debris needed to be placed. The leaves, grass, and mulch are dumb planes, and only the leaves actually have a texture applied. The leaf texture was not a photo, and instead was from a beautiful eucalyptus leaf painting by a talented artist.

OpenGL render

The basic texture layer adds the first layer of realism. These are all pretty standard, such as using this seamless asphalt texture. I have assigned a diffuse and normal map, and did minor colour correction on the textures. What gives them that bit of realism is the dirt map I have painted for worn edges, which darkens the values to represent the collection of dirt around edges, the gradient of dirt as water falls towards the kerb, and the evaporation of dirt as it washes up against the edge of the kerb before it finally spills over. Unlike its relative, the occlusion map (which fakes a lighting phenomenon), this dirt map actually represents deposition of dirt, and therefore a contrast between the sun-bleached material and the darkened, dirty material. There is no specular map in this case, though there usually is for roads. The map is shown below.

Road dirt map
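For those curious how the dirt map is wired up, a minimal sketch of the Cycles node setup in Python is below (the material and image node names are placeholders, and in practice this is usually done by hand in the node editor): the dirt map is simply multiplied over the diffuse texture before it feeds the shader.

import bpy

# Minimal sketch: multiply a painted dirt map over the diffuse texture (names are placeholders)
mat =['Asphalt']
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

diffuse_tex = nodes.new('ShaderNodeTexImage')   # the seamless asphalt diffuse map
dirt_tex = nodes.new('ShaderNodeTexImage')      # the hand-painted dirt map
mix = nodes.new('ShaderNodeMixRGB')
mix.blend_type = 'MULTIPLY'                     # dirt darkens the base colour
mix.inputs['Fac'].default_value = 1.0
shader = nodes.new('ShaderNodeBsdfDiffuse')

links.new(diffuse_tex.outputs['Color'], mix.inputs['Color1'])
links.new(dirt_tex.outputs['Color'], mix.inputs['Color2'])
links.new(mix.outputs['Color'], shader.inputs['Color'])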

To show the contrast between the effect a dirt map applies and a flat texture, I’ve attached a work in progress screenshot below. You can see the road which has a dirt map applied in contrast to the very fake looking kerb.

Work in progress screenshot

The particle systems are what really give this scene a bit of life. There are 5 particle systems in total: dead eucalyptus leaves, mulch, long weedy Bermuda grass, short Bermuda grass, and dead grass fragments. They are all weight-painted to place them on the scene, with a noise texture to add colour variation to represent patchiness. An example of the weight paint for mulch, and dead grass is seen below.

Mulch weight paint

This gives a particle distribution which can be seen in the AO-pass below.

AO pass

That’s pretty much it! During compositing, an AO pass was multiplied in, colour correction applied, a sharpen filter added, as well as a slight lens distortion just for fun. A fully sized render takes about 10 minutes on my Gentoo machine.


How to obfuscate and protect BIM IFC files

Within the architecture, engineering, and construction industry, we often share IFC files to transfer building information to others. IFC (Industry Foundation Classes) is an open standard for storing BIM information. This IFC file can be produced by software such as Revit, ArchiCAD, or SketchUp.

On occasion, however, we would like to strip out information from the IFC. Perhaps there is some classified information stored in one of the IFC properties, or perhaps you just need a clean set of geometry in the respective classes to work with. In this article, we will assume that you want to obfuscate the IFC such that all geometric information is retained (specifically, IfcBuildingElement subtypes and IfcShapeRepresentation), but other BIM information is mangled.

This is a relatively easy task, and can be done with a standard text editor. IFC files are plain text files which, although they generally do not lend themselves well to hand-editing, are quite easy to modify with regex. For this exercise, you will need Vim, the world’s most advanced text editor, which also happens to support regex commands and doesn’t choke on files that are many gigabytes in size. If you are not using Vim, you will need to convert the regex below to your own (probably PCRE-compliant) flavour.

We’ll take a look at stripping three types of data in the IFC: strings, non-strings, and the element classes themselves.

In the IFC format, apart from the file header, all strings represent user data, with the exception of those hardcoded in the IfcShapeRepresentation (which really should be constants, ideally, but hey). The one other slight exception is the IfcGloballyUniqueId string, which is recommended to be a unique 128-bit number, but since parsers generally ignore it, it is safe to throw this baby out with the bathwater. All string information is enclosed in single quotes, which makes stripping with regex trivial.

Non-string information is a bit trickier, but generally we only need to care about RGB codes and transparency codes.

Subtypes which share the same EXPRESS specification can be swapped out interchangeably with one another, such as columns and beams. Others may exist, but I haven’t looked too deeply.

As such, the file can be stripped by:


(note: maintain file headers)
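As a rough illustration (not necessarily the exact commands originally used), a Vim substitution along the following lines empties every quoted string on the entity data lines, which all start with #, leaving the header section untouched:

:g/^#/s/'[^']*'/''/g

A similar substitution can be aimed at the numeric arguments of IfcColourRgb and at transparency values if you also want the colours gone.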

As a bonus, we should also strip material information. Material metadata is already stripped by virtue of stripping all strings, but even though every material has no data, the material assignment information is still there. Unfortunately, some IFC outputs split the IfcRelAssociatesMaterial into multiple lines. Multi-line regexes should be treated with caution as it is easy to cause lexical errors. Here’s my attempt:


Where 00000 is the ID of the IfcMaterial you’d like to reset it to.

The result is shown below, where all layer information and property set information is trashed, all names are trashed, all materials are reset, and it thinks a column is a beam, and so on. I am viewing the results with the Solibri IFC Model Checker.

Obfuscated IFC in Solibri

Keep in mind that despite all the best intentions of IFC and the buildingSMART folks behind the standard, the implementation is rather spotty in various software, especially Revit. So, your mileage may vary.

As a closing note, IFC is meant for interoperability. Stripping information in this manner is not particularly condoned by myself. You may want to consider alternatives such as… Well, just giving them the data :)

Life & much, much more

Gentoo Linux blogs on Planet Larry

If you use Gentoo Linux, you probably know that you can find Gentoo Linux blogs on Planet Gentoo. If you haven’t heard of a planet before, a planet is a website that aggregates a series of blog feeds, and most open-source communities have one. For example, there is also Planet KDE and Planet GNOME. Planet Gentoo, however, is limited to the topic of Gentoo Linux itself, and only aggregates content by Gentoo developers. In the past, Steve Dibb (beandog) started up Planet Larry, named after the Gentoo mascot “Larry“, which hosted blogs of Gentoo users. Naturally, Gentoo users get up to all sorts of interesting endeavours, and so began a slightly less technical, less stringently project-specific blog feed. Here’s a picture of Larry below.

Larry the cow mascot

Unfortunately, after recently checking back on my old feed-reader list, I noticed that Planet Larry had gone AWOL, and so I decided to recreate it. It was never an official Gentoo project, Steve Dibb didn’t seem to be around, and the domain name ( at the time seemed to be squatted on by advertisers. If you visit it now, despite a half-baked attempt at a Gentoo-ish theme, it is filled with “laptop recommendations”. Instead, I registered and started up a new aggregator. The concept is the same as the original. In short:

  • If you use Gentoo Linux and write a blog which has a feed, you can add your blog to Planet Larry
  • You can write about anything you want, as often as you want. It doesn’t necessarily need to be related to Gentoo Linux at all — although I did find that most Gentoo Linux Blogs seem to have more technical content.

So, go ahead and check out If you contact me I will add your blog.

Credits for the Larry the cow male graphic go to Ethan Dunham and Matteo Pescarin, licensed under CC-BY-SA/2.5.

Life & much, much more

2018 New years resolutions

The first half of January’s resolution probation period has ended, and so it’s perhaps safe to post the goals for the year. So in no particular order, here they are.

  • Blog more. There’s a lot that’s been happening, and very little of it sees the light of day online. There are plenty of projects to provoke, reflect upon, or just answer your organic search query. My blogging habit used to be a couple of times a week, and it slowly died down as life took over. It certainly shows in the analytics dashboard. By the end of the year, monthly sessions should equal the same numbers seen in 2015. This means content creation, content creation, and more content creation. You can probably already see that a mobile-friendly theme has been refreshed, new categories added, and a few posts already published.
  • Divest. Financially, investors in their 20s can take a long-term view. This is the time to build up investing habits and experience different markets. By the end of the year, I would like to have invested in 20 different markets and to start understanding my risk profile. Last year I experienced managed funds, blue-chip stocks, and rode the cryptocurrency roller coaster. This year will be more.
  • Consume intelligently. The environment is changing. Now is as good a time as any to build habits to be a more ethical consumer. We vote with our dollars, and it is our responsibility to support supply chains that promote good values in our society. Once consumed, we should break the disposable habit that arose sometime in the previous generation, and go towards zero-waste.
  • Improve digital security. The crypto boom is the public’s first taste of moving more traditional assets into a decentralised network. Unlike centralised systems, decentralised systems are very hard to kill. I foresee more of our digital lives being interconnected, even if we don’t realise it. It is pertinent that we promote more usage of privacy practices, such as password managers, secure protocols, self-hosted infrastructure, encryption, and signing.
  • Begin longer-term work and life. I’ve been in the architecture industry for a year and a half now after being primarily in software. It’s probably time for the training wheels to come off, and to start specialising in an area of architecture that is socially beneficial. Similarly, despite the prohibitive housing costs here in Sydney, the ongoing market correction suggests it’s time to revisit settling down in the more traditional sense.

Until 2019, then.


Basic rendering tutorial with Radiance

Radiance is the authoritative validated rendering engine out there. Unlike other rendering engines, which focus more on artistic license, Radiance focuses on scientific validation — that is, the results are not just physically based, they will produce the exact same output as measured by a physical optical sensor. This is great if you’d like to produce an image that not only looks photo-realistic, but actually matches what a similar setup in real life would look like. As you’d expect, this appeals to scientists, and designers who work with light and materials.

In addition, Radiance is open-source, completely free, and is Unix-like. If you’ve used other tools that claim to do all of the above, it probably uses Radiance under the hood anyway and rebrands itself with a more friendly interface. However, working with Radiance directly will give you a finer grain of control over the image you produce, and as I will likely write about in the future, scale up to do highly complex renders. Today, we’re going to dive into the guts of the tool itself and render a simple object. This can be a bit scary to those who are not particularly technical, and there’s not a lot of friendly material out there that doesn’t look like it’s from a 1000-page technical manual. Hopefully this walkthrough will focus on the more practical aspects without getting too bogged down in technicalities.

To begin, I’m going to assume you have Radiance installed, and know how to open up a terminal window in your operating system of choice. If you haven’t got that far yet, go and install something simple like Ubuntu Linux and / or install Radiance. Radiance is not a program you double click on and see a window with buttons and menus that you can click on. Radiance is a collection of programs that work by typing in commands.

Let’s create a model first. Start with a simple mesh with a minimum of polygons. I am using Blender, which is another open-source, free, and Unix-friendly piece of software. In this case, I have started with a default scene, and arbitrarily replaced the default cube with a mesh of the Blender monkey mascot. I have also given the mesh a material, named white.

Default scene with Blender monkey

Using Blender is optional, of course, and you can use whatever 3D program you like. Radiance works with the OBJ format, which is an open format, plain text, and beautifully simple. As such, export the mesh to get yourself a resultant OBJ file, which I have named model.obj. The exported accompanying model.mtl file is largely unnecessary right now: we will define our own materials with physical units, which the .mtl file is not designed to hold. When exporting, take care to only export the mesh, and ensure that the proper axes are facing up.

In the same directory that you have your model.obj and your model.mtl, let’s create a new file which will hold all the materials for your model. In this case, there is only one material, called white. So let’s create a new plain text file, called materials.rad and insert the following in it:

void plastic white
0
0
5 1 1 1 0 0

It’s the simplest possible material definition (and rather unrealistic, as it defines an RGB reflectance value of 1, 1, 1), but it’ll do for now. You can read about how “plastic” (i.e. non-metallic) materials are defined in the Radiance reference manual. In short, the first line says we are defining a plastic material called white, the two zeroes say there are no string or integer arguments, and the last line says that there are 5 real parameters, with values 1, 1, 1, 0, and 0 respectively. The first three parameters are the R, G, and B reflectance of the material. This definition is described in the Radiance manual, and so in the future it will serve you well to peruse the manual.
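For comparison, here is what a slightly more plausible material might look like (made up purely for illustration, and not needed for this tutorial): a 50% reflective grey with a touch of specularity and roughness.

void plastic grey
0
0
5 0.5 0.5 0.5 0.05 0.05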

Now, open up a terminal window in the same folder where you have the model.obj and materials.rad file. We are going to run a Radiance program called obj2mesh which will combine our OBJ with the material definitions we have provided in our materials.rad, and spit out a Radiance triangulated mesh .rtm file. Execute the following command:

$ obj2mesh -a materials.rad model.obj model.rtm

If it succeeds, you will see a new file in that same directory called model.rtm. You may see a few lines pop up with warnings, but as long as they are not fatal, you may safely disregard them. This .rtm file is special to Radiance, as it does not work directly with the OBJ format.

Now, we will create a scene in Radiance and place our mesh within it. There will be no other objects in the scene. Let’s call it scene.rad, a simple text file with the following contents:

void mesh model
1 model.rtm
0
0

The first line simply defines a new mesh in the scene called model. The second line tells it that it can find the mesh in the model.rtm file. The final lines (the zeroes) say that there are no integer or real parameters for this mesh.

Now, we will convert our scene into an octree, which is an efficient binary format (as opposed to all the simple text files we’ve been writing) that Radiance uses to do its calculations. We will run another Radiance program called oconv to do this. So open up your terminal window again and execute:

$ oconv scene.rad > scene.oct

You should now find a scene.oct file appear in the same folder as the rest of your files. This is the final file we send off to render. But before we do this final step, we will need to decide where our camera is. A camera in Radiance is defined by three parameters. The first parameter, vp, or view position, is the XYZ coordinate of the camera. The second parameter, vd, or view direction, is the XYZ vector that the camera is facing. The third parameter, vu, or view up, is the XYZ vector of where “up” is, so it knows if the camera is rotated or not. When specifying a parameter to Radiance, you will prefix the parameter name with a hyphen, followed by the parameter value. So, for a camera at the origin facing east (where +X is east and +Z is up), I can tell Radiance this by typing -vp 0 0 0 -vd 1 0 0 -vu 0 0 1.

Radiance camera definition

Calculating these vectors is a real pain unless your camera is in a really simple location and is orthogonal to the world axes like in my previous example. However, here’s a fancy script you can run in Blender which will calculate the values for the camera named Camera.

import bpy
from mathutils import Vector

cam =['Camera']
location = cam.location
up = cam.matrix_world.to_quaternion() * Vector((0.0, 1.0, 0.0))
direction = cam.matrix_world.to_quaternion() * Vector((0.0, 0.0, -1.0))

print(
    '-vp ' + str(location.x) + ' ' + str(location.y) + ' ' + str(location.z) + ' ' +
    '-vd ' + str(direction.x) + ' ' + str(direction.y) + ' ' + str(direction.z) + ' ' +
    '-vu ' + str(up.x) + ' ' + str(up.y) + ' ' + str(up.z)
)

The output will be in the Blender console window. For those on other programs, you’ve got homework to do.

Once you know your coordinates and vectors for vp, vd, and vu, let’s use the rpict Radiance program to render from that angle. Please replace my numbers given to the three camera parameters with your own in the command below. We will also specify -av 1 1 1, which tells Radiance to render with an ambient RGB light value of 1, 1, 1. Of course, in real life we don’t have this magical ambient light value, but as we haven’t specified any other lights in our scene, it’ll have to do. We will also specify -ab 2, which allows for 2 ambient bounces of light, just so that we have a bit of shading (if we didn’t have any light bounces, we would have a flat silhouette of our monkey).

$ rpict -vp 7.481131553649902 -6.5076398849487305 5.34366512298584 -vd -0.6515582203865051 0.6141704320907593 -0.44527149200439453 -vu -0.32401347160339355 0.3054208755493164 0.8953956365585327 -av 1 1 1 -ab 2 scene.oct > render.pic

Great, after the render completes, you should see a new file called render.pic in your folder. Let’s look at this image using the Radiance ximage program.

$ ximage render.pic

You should see something like the following:

Final Radiance render

One final step. It’s quite irksome and technical to run all of the commands for rpict, oconv and such, and so it’s much better to use the executive control program rad. rad allows you to write the intention of your render in simple terms, and it’ll work out most of the technical details for you. Of course, everything can be overridden. The rad program parses a .rif configuration file. I’ve included a sample one below, saved as scene.rif:

# Specify where the compiled octree should be generated
OCTREE=scene.oct
# Specify an (I)nterior or (E)xterior scene, along with the bounding box of the scene, obtainable via `getbbox scene.rad`
ZONE=E  -2.25546   4.06512  -3.15161   3.16896  -2.94847    3.3721
# A list of the rad files which make up our scene
scene=scene.rad
# Camera view options
view=-vp 7.481131553649902 -6.5076398849487305 5.34366512298584 -vd -0.6515582203865051 0.6141704320907593 -0.44527149200439453 -vu -0.32401347160339355 0.3054208755493164 0.8953956365585327
# Option overrides to specify when rendering
render=-av 1 1 1
# Choose how indirect the lighting is
INDIRECT=2
# Choose the quality of the image, from LOW, MEDIUM, or HIGH
QUALITY=HIGH
# Choose the resolution of mesh detail, from LOW, MEDIUM, or HIGH
DETAIL=HIGH
# Choose the light value variance variability, from LOW, MEDIUM, or HIGH
VARIABILITY=MEDIUM
# Where to output the raw render
RAWFILE=output_raw
# Where to output a filtered version of the render (scaled down for antialiasing, exposure correction, etc)
PICTURE=output
# The time duration in minutes before reporting a status update of the render progress
REPORT=0.1

Execute rad scene.rif to get the results. If you’d like to interactively render it, on an X server you can run rad -o x11 scene.rif. I used the above .rif file and ran it against a higher resolution mesh, and I’ve included the results below.

Rad rendered image

All done! We’ve learned about bringing in an OBJ mesh with Radiance materials, placing them in a scene, and rendering it from a camera. Hope it’s been useful. Of course, our final image doesn’t look exactly great – this is because the material and lighting we have set are basically physically impossible. Similarly, the simulation we’ve run has been quite rudimentary. In the future, we’ll look at specifying a much more realistic environment.

Life & much, much more

Brand new Gentoo desktop computer

It’s 2018, and my trusty 5-year-old Thinkpad 420i has decided to overheat for the last time. After more than 10 years of laptops, I decided to go for a desktop. I spoke to a fellow at Linux Now, who supplies custom boxes with Linux preinstalled and is located in Melbourne, Australia (as of writing, no complaints with their service at all). A week later, I was booting up, and my old laptop was headed to the nearest e-waste recycling centre. Here’s the obligatory Larry cowsay:

$ cowsay `uname -a`
/ Linux dooby 4.12.12-gentoo #1 SMP Tue \
| Nov 28 09:55:21 AEDT 2017 x86_64 AMD  |
| Ryzen 5 1600X Six-Core Processor      |
\ AuthenticAMD GNU/Linux                /
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Being a desktop machine, it lacks portability but this is mitigated as you can run Gentoo on your phone. Combine your phone with a Bluetooth keyboard and mouse, and you have a full-on portable workstation. Your desktop will be much more powerful than your laptop, at half the price.

Tower machine

And of course, here are the hardware specs.

  • AMD Ryzen 5 1600X CPU (5 times faster than my laptop). As of writing, these Ryzens are experiencing some instability related to kernel bug 196683, but the workarounds in the bug report seem to solve it.
  • NVIDIA GeForce GTX 1050Ti GPU (16 times faster than my laptop). Yes, proprietary blob drivers are in use.
  • 16GB DDR4 2400Mhz RAM
  • 250GB SSD
  • 23.6in 1920×1080 16:9 LCD Monitor
  • Filco Majestouch-2 Tenkeyless keyboard. If you’ve ever wanted a clean-cut, professional mechanical keyboard that isn’t as bulky as the IBM Model M, I’d highly recommend this one.

Filco Majestouch-2, Tenkeyless keyboard

Software-wise, it is running Gentoo Linux with KDE. Backup server is hosted by The worldfile is largely the same as my old laptop, with the addition of newsbeuter for RSS feeds. Happy 2018!


Stitching high resolution photos from NSW SIX Maps

Have you ever wanted to save out a super high-resolution satellite photo from Google Maps or similar? Perhaps you’ve screenshotted satellite photos from your browser and then merged them together in your favourite photo editor, like Photoshop or The GIMP. Well, in New South Wales, Australia, there’s a government GIS (geographic information system) service known as NSW SIX Maps.

A beautiful satellite photo from NSW GIS SIX Maps of the Sydney Harbour area

Doing this manually is quite an annoying task, but here’s an automatic script that will do it for you right in your browser, and will save up to a 10,000 by 10,000 pixel resolution image to your computer. Yep, no need to download any further software. There are a couple of prerequisites, but it should work on almost any computer. Just follow the steps below to merge the tiles together. First, make sure you have Google Chrome as your browser.

  • Go to
  • Open up the browser inspector (Press ctrl-shift-i on Chrome), and click on the “Console” tab. This is where you will copy, paste, and run the scripts.
  • Zoom to the desired resolution level (there are 20 stops available) at the top-left of the bounding rectangle of the region you’d like to stitch.
  • Copy, paste and hit enter to run the code below.

This code snippet will allow your browser to use more system resources required to perform this task.
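Given that the stitching script further below reads the tile URLs from the browser’s resource timing entries, a likely candidate (an assumption on my part, not necessarily the exact original snippet) is to raise the resource timing buffer size so that every tile request stays recorded:

// Assumption: enlarge the resource timing buffer so that every loaded tile is recorded
performance.setResourceTimingBufferSize(10000);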


You should see “undefined” if it completes successfully.

  • Pan a little top-left to load another tile, this’ll set the top-left boundary coordinate.
  • Pan to the bottom-right coordinate, triggering a tile-load in the process. This’ll set the bottom-right boundary coordinate.
  • Copy, paste, and hit enter to run the code below.

This will give your browser permission to save the resulting image to your computer. For the curious, it’s called a Polyfill and here is an explanation by the FileSaver author about what it does.

/* FileSaver.js
 * A saveAs() FileSaver implementation.
 * 1.3.2
 * 2016-06-16 18:25:19
 * By Eli Grey,
 * License: MIT
 *   See
 */

/*global self */
/*jslint bitwise: true, indent: 4, laxbreak: true, laxcomma: true, smarttabs: true, plusplus: true */

/*! @source */

var saveAs = saveAs || (function(view) {
    "use strict";
    // IE <10 is explicitly unsupported
    if (typeof view === "undefined" || typeof navigator !== "undefined" && /MSIE [1-9]\./.test(navigator.userAgent)) {
          doc = view.document
          // only get URL when necessary in case Blob.js hasn't overridden it yet
        , get_URL = function() {
            return view.URL || view.webkitURL || view;
        , save_link = doc.createElementNS("", "a")
        , can_use_save_link = "download" in save_link
        , click = function(node) {
            var event = new MouseEvent("click");
        , is_safari = /constructor/i.test(view.HTMLElement) || view.safari
        , is_chrome_ios =/CriOS\/[\d]+/.test(navigator.userAgent)
        , throw_outside = function(ex) {
            (view.setImmediate || view.setTimeout)(function() {
                throw ex;
            }, 0);
        , force_saveable_type = "application/octet-stream"
        // the Blob API is fundamentally broken as there is no "downloadfinished" event to subscribe to
        , arbitrary_revoke_timeout = 1000 * 40 // in ms
        , revoke = function(file) {
            var revoker = function() {
                if (typeof file === "string") { // file is an object URL
                } else { // file is a File
            setTimeout(revoker, arbitrary_revoke_timeout);
        , dispatch = function(filesaver, event_types, event) {
            event_types = [].concat(event_types);
            var i = event_types.length;
            while (i--) {
                var listener = filesaver["on" + event_types[i]];
                if (typeof listener === "function") {
                    try {
              , event || filesaver);
                    } catch (ex) {
        , auto_bom = function(blob) {
            // prepend BOM for UTF-8 XML and text/* types (including HTML)
            // note: your browser will automatically convert UTF-16 U+FEFF to EF BB BF
            if (/^\s*(?:text\/\S*|application\/xml|\S*\/\S*\+xml)\s*;.*charset\s*=\s*utf-8/i.test(blob.type)) {
                return new Blob([String.fromCharCode(0xFEFF), blob], {type: blob.type});
            return blob;
        , FileSaver = function(blob, name, no_auto_bom) {
            if (!no_auto_bom) {
                blob = auto_bom(blob);
            // First try, then web filesystem, then object URLs
                  filesaver = this
                , type = blob.type
                , force = type === force_saveable_type
                , object_url
                , dispatch_all = function() {
                    dispatch(filesaver, "writestart progress write writeend".split(" "));
                // on any filesys errors revert to saving with object URLs
                , fs_error = function() {
                    if ((is_chrome_ios || (force && is_safari)) && view.FileReader) {
                        // Safari doesn't allow downloading of blob urls
                        var reader = new FileReader();
                        reader.onloadend = function() {
                            var url = is_chrome_ios ? reader.result : reader.result.replace(/^data:[^;]*;/, 'data:attachment/file;');
                            var popup =, '_blank');
                            if(!popup) view.location.href = url;
                            url=undefined; // release reference before dispatching
                            filesaver.readyState = filesaver.DONE;
                        filesaver.readyState = filesaver.INIT;
                    // don't create more object URLs than needed
                    if (!object_url) {
                        object_url = get_URL().createObjectURL(blob);
                    if (force) {
                        view.location.href = object_url;
                    } else {
                        var opened =, "_blank");
                        if (!opened) {
                            // Apple does not allow, see
                            view.location.href = object_url;
                    filesaver.readyState = filesaver.DONE;
            filesaver.readyState = filesaver.INIT;

            if (can_use_save_link) {
                object_url = get_URL().createObjectURL(blob);
                setTimeout(function() {
                    save_link.href = object_url;
           = name;
                    filesaver.readyState = filesaver.DONE;

        , FS_proto = FileSaver.prototype
        , saveAs = function(blob, name, no_auto_bom) {
            return new FileSaver(blob, name || || "download", no_auto_bom);
    // IE 10+ (native saveAs)
    if (typeof navigator !== "undefined" && navigator.msSaveOrOpenBlob) {
        return function(blob, name, no_auto_bom) {
            name = name || || "download";

            if (!no_auto_bom) {
                blob = auto_bom(blob);
            return navigator.msSaveOrOpenBlob(blob, name);

    FS_proto.abort = function(){};
    FS_proto.readyState = FS_proto.INIT = 0;
    FS_proto.WRITING = 1;
    FS_proto.DONE = 2;

    FS_proto.error =
    FS_proto.onwritestart =
    FS_proto.onprogress =
    FS_proto.onwrite =
    FS_proto.onabort =
    FS_proto.onerror =
    FS_proto.onwriteend =

    return saveAs;
       typeof self !== "undefined" && self
    || typeof window !== "undefined" && window
    || this.content
// `self` is undefined in Firefox for Android content script context
// while `this` is nsIContentFrameMessageManager
// with an attribute `content` that corresponds to the window

if (typeof module !== "undefined" && module.exports) {
  module.exports.saveAs = saveAs;
} else if ((typeof define !== "undefined" && define !== null) && (define.amd !== null)) {
  define("FileSaver.js", function() {
    return saveAs;

Now that you have the SaveAs polyfill, your browser will be able to save the results to your hard drive.

  • Finally, copy, paste and run the code below.

This code does the actual work, and will stitch the tiles together with a progress notification. It will save as “output.png” in your Downloads folder when complete.

var tiles = performance.getEntriesByType('resource').filter(item =>"MapServer/tile"));
var resolution = null;
var maxX = null;
var maxY = null;
var minX = null;
var minY = null;
var tileSize = 256;

// First pass: find the highest (most detailed) resolution level among the loaded tiles
for (var i = 0; i < tiles.length; i++) {
    var tileUrlTokens = tiles[i].name.split('?')[0].split('/');
    var tileResolution = tileUrlTokens[tileUrlTokens.length-3];
    if (tileResolution > resolution || resolution == null) {
        resolution = tileResolution;
    }
}

// Second pass: work out the bounding tile coordinates at that resolution
for (var i = 0; i < tiles.length; i++) {
    var tileUrlTokens = tiles[i].name.split('?')[0].split('/');
    var tileResolution = tileUrlTokens[tileUrlTokens.length-3];
    var x = tileUrlTokens[tileUrlTokens.length-1];
    var y = tileUrlTokens[tileUrlTokens.length-2];
    if (tileResolution != resolution) {
        continue;
    }
    if (x > maxX || maxX == null) {
        maxX = parseInt(x);
    }
    if (y > maxY || maxY == null) {
        maxY = parseInt(y);
    }
    if (x < minX || minX == null) {
        minX = parseInt(x);
    }
    if (y < minY || minY == null) {
        minY = parseInt(y);
    }
}

// Assumption: reuse one of the recorded tile URLs as a template for fetching arbitrary tiles
// (the last three path segments are resolution/y/x)
var urlTemplate = tiles[0].name.split('?')[0].split('/');

var canvas = document.createElement('canvas'); = "sixgis";
canvas.width = 10240;
canvas.height = 10240; = "absolute";
var body = document.getElementsByTagName("body")[0];
body.appendChild(canvas);
var sixgis = document.getElementById("sixgis");
var ctx = canvas.getContext("2d");

var currentTileIndex = 0;

function renderTile(resolution, x, y) {
    var img = new Image();
    img.onload = function(response) {
        currentTileIndex++;
        console.log('Rendering '+currentTileIndex+' / '+((maxX-minX)*(maxY-minY)));
        var renderX = (response.path[0].tileX - minX) * tileSize;
        var renderY = (response.path[0].tileY - minY) * tileSize;
        ctx.drawImage(response.path[0], renderX, renderY);
        if (y < maxY) {
            renderTile(resolution, x, y + 1);
        } else if (x < maxX) {
            renderTile(resolution, x + 1, minY);
        } else {
            canvas.toBlob(function(blob) {
                saveAs(blob, 'output.png');
            });
        }
    };
    img.tileX = x;
    img.tileY = y;
    var tokens = urlTemplate.slice();
    tokens[tokens.length-3] = resolution;
    tokens[tokens.length-2] = y;
    tokens[tokens.length-1] = x;
    img.src = tokens.join('/');
}
renderTile(resolution, minX, minY);

All done! One final thing to consider is that their terms of service probably have something to say on the matter of stitching photos together, but I am not a lawyer, so go figure :)


LearnMusicSheets – download PDFs of music sheet exercises

Today I’d like to talk about a brand new project: LearnMusicSheets. If you are a music teacher, or are learning music, you have no doubt searched the internet looking for music score PDFs. Examples of music scores are major and minor scales, arpeggios, or even some blank manuscript. I’ve searched before for scores to present in my lessons, but have so far been unable to find scores of a suitable quality. Specifically, I’m looking for scores with no copyright notices, no badly notated passages, and comprehensive coverage. Instead, I often find scores with horrible jazz Sibelius music fonts, inconsistent naming, or obscene fingering notation. On the rare occasion that I find a sheet typeset half-decently, it is often incomplete and doesn’t contain all the relevant exercises.

So I have taken the time to notate various music scores for learning piano and music in general. These music scores are all typeset beautifully using the Lilypond notation software. I haven’t put any copyright notices, and have provided a variety of paper sizes and combination of exercises. I’ve used these myself in my lessons for many years and they work great! Hopefully, you’ll enjoy them as much as I have!

Beautiful music exercises to download

Today I would like to launch LearnMusicSheets. LearnMusicSheets is a website where you can download music sheets for learning music. Currently, it offers blank manuscript paper, major and minor scales, major and minor arpeggios, and intervals and cadences. All these scores are available as an instant PDF download. More exercises, such as jazz scales, will come soon. If you’re curious, go and download the music sheets now!

Example PDF of major and minor arpeggios

If you have any comments or suggestions, please feel free to send me a message. Or if you use them yourself, let me know how they are!

Life & much, much more

Practical Abhidhamma Course for Theravāda Buddhists

Today, I’d like to briefly introduce a project for Theravāda Buddhists. Buddhism, like most religions, has a few sacred texts that describe its teachings. One of these texts, the “Abhidhamma”, is rather elusive and complicated to understand. My dad has been teaching this difficult topic for the past 15 years, and over the past year and a half has written a 200-page introductory book for those who want to see what all the fuss is about. It’s chock-full of diagrams, references, and bad jokes.

To quote from the actual page:

There are eight lessons in this course covering selected topics from the Abhidhamma that are most practical and relevant to daily life. Though it is called a “Practical Abhidhamma Course,” it is also a practical Dhamma course using themes from the Abhidhamma. The Dhamma and the Abhidhamma are not meant for abstract theorizing; they are meant for practical application. I hope you approach this course not only to learn new facts, but also to consider how you can improve yourself spiritually.

So, click to go ahead and learn about the Abhidhamma.


I had the pleasure of helping on various technical and visual aspects, and I’m happy to launch which will serve the book as well as any future supplementary content. For those interested, the book was typeset with LaTeX, with diagrams provided by Inkscape with LaTeX rendering for text labels.

Life & much, much more

Space architecture – a history of space station designs

To quote the beginning of the full article:

This article explores different priorities of human comfort and how these priorities were satisfied in standalone artificial environments, such as space stations.

If you’re impatient and just want to read the full article, click to read A history of design and human factors in Space Stations.

… or if you want a bit more background, read on …

I began investigating the field of space architecture in more detail last year. Although I had a bit of experience from the ISSDC, I was much more interested in real, current designs as opposed to hypothetical scenarios.

Space architecture, and its parent field of design, is a broad field. It’s an engineering challenge, an economic challenge, a logistical challenge, a political challenge, you name it. As an architect, the priorities of space station/settlement designs lie with the people that inhabit them. Simply put, you don’t call an architect to build a rocket, but when a person is living inside that rocket, especially if they’re stuck there for a while, that’s when you call your architect.

This means that when an architect looks at designing a space station, although they need to be aware of the technical constraints of the environment (gravity, air, temperature, structure, radiation, transport, health), their true expertise lies in understanding how to make people comfortable and productive within that space. This means that space architects need to understand, in an incredible amount of detail, how we perceive and are affected by our environment. Much more so than Earth architects, who have the advantage of the natural world, which is usually much nicer than whatever is indoors, as well as existing social and urban infrastructure. Space architects don’t have this benefit, and so the entire “world” is limited to what they can fit inside a large room.

This point – that space architects are responsible for the happiness of humans – is an absolutely vital one, and unfortunately often missed. Too many architects are instead enraptured by the technological pornography of the environment, the intricate constraints, or even the ideological ability to “reimagine our future”. No. The reality is much more humble: space architecture is about rediscovering what humans hold dear in the world. You cannot claim to reinvent a better future if you do not yet understand what we already appreciate in the present.

And so if my point has made any impact, please go ahead and read A history of design and human factors in Space Stations, where I walk through the history of space station designs, their priorities, and what architects are looking at now.

Space architecture - how cosy

Cosy, isn’t it? Also, a TED Talk on How to go to space, without having to go to space shares many of my thoughts, and would be worth watching.