Creative

A history of rendering engines and modern trends

When working in the architecture industry, we often look at rendering engines from many angles, if you’ll pardon the pun. We use simple renderings for diagrams, realistic rendering for architectural visualisations, real-time rendering for virtual reality prototyping, point cloud renderings for landscape and heritage scans, and so on. As the ultimate goal in archviz is both to represent designs abstractly and to lower the cost of realistic prototyping, it pays to see what the future of rendering holds for the industry. But first, I’d like to briefly look at rendering’s history.

When the CG industry first considered how to render an image on the screen, it was mostly concerned with the hidden surface determination problem: in short, given a set of polygons, which surfaces are visible from the camera’s POV, and which are not? This mentality of thinking about how to “colour in an object”, as opposed to how to simulate light, led to the development of one of the first rendering techniques: flat shading.

In flat shading, the rendering engine would consider the surface normal of each face with respect to a light source. The more face-on a surface was to the light, the lighter it was shaded, and the more oblique the angle, the darker it was. If the path between the surface and the light source was blocked by another surface, it was shaded black. I’ve attached an example of flat shading in the first image below. This is roughly analogous to an actual physical phenomenon – that the angle of incidence to a material matters.

This was very simple, and also not very realistic. Flat shading was then combined with specular shading, which was essentially the same but heavily biased towards the angle of the surface normal, with another parameter to control the falloff of the highlight. Although this created a convincing metallic glint (see monkey two in the image below), it was again just an artistic trick and wasn’t based on an actual physical phenomenon. Nevertheless, it stuck, even to this day.

Shading techniques improved when a Vietnamese computer scientist, Bui Tuong Phong, invented the now-famous Phong shader. He had the idea of interpolating the vertex normals across each face to give a gradient of colour through the face. This created much more realistic results (see monkey three) but, again, had no real-world equivalent.

The next improvement to the shading model came from the observation that shadows were rendering completely black. In real life, global illumination and ambient light bounces mean that almost everything is quite effectively indirectly lit. There was no computationally efficient solution to the problem at the time, and so an ambient light constant was added to simply bump up the global lighting (see monkey four). This formed the segue into modern-day rendering, and thus ends our history lesson.

Flat, specular, interpolated, and ambient light shading
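For the technically curious, here’s a toy sketch in Python of that classic recipe: a diffuse term from the surface normal, a specular glint with a falloff exponent, and a constant ambient bump. The vectors and constants are made up purely for illustration; this is how the idea works, not any particular engine’s code.

# Toy version of the classic shading recipe: ambient + diffuse + specular.
# Vectors are hypothetical and assumed to be normalised.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def classic_shade(normal, to_light, to_camera,
                  diffuse=0.8, specular=0.5, shininess=32, ambient=0.1):
    # Flat/diffuse term: the more face-on the surface is to the light, the brighter it gets
    d = max(dot(normal, to_light), 0.0) * diffuse

    # Specular term: a highlight around the mirror direction,
    # with an exponent controlling how tight the glint is
    reflected = [2 * dot(normal, to_light) * n - l for n, l in zip(normal, to_light)]
    s = (max(dot(reflected, to_camera), 0.0) ** shininess) * specular

    # Ambient term: a constant so shadows are never completely black
    return ambient + d + s

print(classic_shade((0.0, 0.0, 1.0), (0.0, 0.6, 0.8), (0.0, 0.0, 1.0)))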

The moral of the story is that almost none of these shading approaches had a real-life equivalent, and all the subsequent improvements were built upon a method that considered how to colour in a shape from the point of view of the shape itself. This is fundamentally incorrect – in the physical world (at human scales; forget quantum-mechanical scales), how an object looks depends on rays of light emitted from photon-producing objects (e.g. hot objects) bouncing around and losing energy. That energy is absorbed and reflected by materials in very different ways depending on the microsurface imperfections and the chemical properties of the material.

Luckily, in parallel with the development of these artistic shaders, physically-based “ray-tracing” rendering engines were also being developed. These ray-tracers traced rays of photons to and from cameras and light sources in the same way the real world works. Back then, they were cool technical demos, but were always too inefficient for any practical work. However, they proved in principle that if you throw enough computing power at the problem, you can get photo-realistic results. Nowadays, of course, everybody knows about ray-tracing and it’s practically the norm in the market. I’ve shown an example of a chrome monkey below reflecting its environment – the textbook example of what ray-tracing can achieve that traditional shaders could not (well, not without hacks and light maps and whatnot). You can see another example of photo-realistic rendering with Cycles that I’ve done too.

Glossy ray tracing render

Almost every popular rendering engine nowadays, such as Blender Cycles, V-Ray, Maxwell, RenderMan, and Arnold, is a ray-tracer. They are getting faster, and now combine both GPU and CPU to provide almost real-time rendering. In recent years, Physically Based Rendering workflows, better real-world scanners, and improved texture-painting tools are three among many advances that make photo-realistic rendering easier and easier.

Basically, photo-realism is becoming really easy. A subtler trend worth noting is that we are also becoming more scientifically based. In the past, these ray-tracers, although somewhat physically based, made so many approximations that real-world units were ignored in favour of arbitrary values.

The reason this is important is that ultimate photorealism comes from scanning in real-world data at increasing levels of fidelity. Engines, no matter how physically based they are, will find it hard to use this information if it cannot be easily linked back to physical units and measurable scientific values.

Thankfully, this is actually improving. Simple things like using IES profiles for lighting, or producing falsecolour luminance images, are starting to be possible with mainstream renderers. The popularisation of the Disney shader is slowly getting engines working on interoperability, and ultimate interoperability, much like ultimate photorealism, depends on scientific values.

At the very least, we know that if we throw more computers at the problem it will eventually converge and leave us with a beautiful real image.

This is great news for architecture – the industry I’m in. Architecture is no stranger to smoke and mirrors when it comes to renders, and a trend towards scientific rendering makes it easier to prototype cheaply while still promising the same results to eager clients.

Until then, let’s play with photoreal game engines and VR while the hype lasts.

Technical

Clean meshes automatically in Blender with Python

I wrote a little Python script to clean up imported meshes (OBJs, DXFs, etc) in Blender. It’s quite useful if you often process meshes from other sources, in particular IFCs. Even better, Blender can be run headlessly and invoke the script automatically, so you can clean meshes server-side before you even open them up on your own computer.

From my initial script, Paul Spooner at the BlenderArtists forums was kind enough to rewrite it with improvements. For the record, here it is. Simply copy and paste it into Blender’s text editor and hit the Run Script button. It will only affect selected objects.

import bpy

# Track mesh datablocks that have already been cleaned, since multiple objects can share one mesh
checked = set()
selected_objects = bpy.context.selected_objects
for selected_object in selected_objects:
    if selected_object.type != 'MESH':
        continue
    meshdata = selected_object.data
    if meshdata in checked:
        continue
    else:
        checked.add(meshdata)
    # Make the object active so the edit-mode operators below apply to it
    bpy.context.scene.objects.active = selected_object
    bpy.ops.object.editmode_toggle()
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.remove_doubles()  # weld duplicate vertices
    bpy.ops.mesh.tris_convert_to_quads()  # convert tris to quads where possible
    bpy.ops.mesh.normals_make_consistent()  # recalculate normals consistently
    bpy.ops.object.editmode_toggle()

Although it is pretty self-explanatory, what it does is weld duplicate vertices, convert tris to quads, and recalculate normals.
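If you want to run it headlessly as mentioned above, the invocation looks roughly like this (the filenames are hypothetical, and the script works on whatever objects were selected when the .blend file was last saved):

$ blender yourfile.blend --background --python clean_meshes.py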

Creative

Breakdown of a photo-realistic image in Blender Cycles

Recently, I wanted to produce a sample photo-realistic 3D scene with Blender’s Cycles engine that I could attempt to recreate in other rendering engines. I took an almost random photo of a street and kerb junction of the kind found throughout Sydney’s suburbs. Here’s that photo below. You can see incredible features that we take for granted, such as the viscous bulging of the asphalt as it hits the kerb, dead eucalyptus leaves, a groove between two concrete blocks, and so on. It’s a slightly over-exposed shot, hence the unnaturally bright grass.

Source image

The resultant 3D equivalent is below, all modeled, textured, and rendered in Blender. I’ve thrown in a glossy Suzanne and sphere, as well as a creative oil slick on the asphalt. You can click on the images to see a high-resolution version.

Rendered image

The modeling itself is ridiculously easy. Excluding the particle systems and dummy meshes, the road and kerb add up to 5 polygons. The split in the middle of the kerb is there because I suspected the kerb rose in level a bit, although I ended up ignoring it. This is typically the level of detail you can expect from an architectural scene, where only the road level and sidewalk level matter.

You’ll notice there are no lights. The photo was taken under an overcast sky, and so an overcast sky environment map (±4 EV) was used for lighting. The environment map could be left largely untouched: being overcast, there was no need to worry about the sun’s impact on the EV range.

Off to one side are some of the meshes used in the particle systems. This spot was below a eucalyptus tree, so various eucalyptus leaves and other debris needed to be placed. The leaves, grass, and mulch are dumb planes, and only the leaves actually have a texture applied. The leaf texture was not a photo; it came from a beautiful eucalyptus leaf painting by a talented artist.

OpenGL render

The basic texture layer adds the first layer of realism. These are all pretty standard, such as this seamless asphalt texture. I assigned a diffuse and a normal map, and did minor colour correction on the textures. What gives them that bit of realism is the dirt map I painted for worn edges: it darkens the values to represent the collection of dirt around edges, the gradient of dirt as water runs towards the kerb, and the evaporation marks left where dirty water washes up against the edge of the kerb before it finally spills over. Unlike its relative, the occlusion map (which fakes a lighting phenomenon), this dirt map genuinely represents the deposition of dirt, and therefore the contrast between the sun-bleached material and the darkened, dirty material. There is no specular map in this case, though there usually is for roads. The map is shown below.

Road dirt map

To show the contrast between a texture with a dirt map applied and a flat texture, I’ve attached a work-in-progress screenshot below. You can see the road, which has a dirt map applied, in contrast to the very fake-looking kerb.

Work in progress screenshot

The particle systems are what really give this scene a bit of life. There are 5 particle systems in total: dead eucalyptus leaves, mulch, long weedy Bermuda grass, short Bermuda grass, and dead grass fragments. They are all weight-painted to place them in the scene, with a noise texture adding colour variation to represent patchiness. An example of the weight paint for mulch and dead grass is shown below.

Mulch weight paint

This gives a particle distribution which can be seen in the AO-pass below.

AO pass

That’s pretty much it! During compositing, an AO pass was multiplied in, colour correction was applied, along with a sharpen filter and a slight lens distortion just for fun. A full-sized render takes about 10 minutes on my Gentoo machine.
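For the curious, a minimal sketch of that sort of compositing setup via Blender’s Python API might look roughly like the following. The node names are from the 2.7x-era API, the values are placeholders rather than what I actually used, and I’ve omitted the sharpen and colour correction for brevity.

import bpy

scene = bpy.context.scene
scene.render.layers[0].use_pass_ambient_occlusion = True  # make sure the AO pass exists
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Bring in the render layers, multiply the AO pass over the image,
# then add a touch of lens distortion before the final composite
rl = tree.nodes.new('CompositorNodeRLayers')
mix = tree.nodes.new('CompositorNodeMixRGB')
mix.blend_type = 'MULTIPLY'
lens = tree.nodes.new('CompositorNodeLensdist')
lens.inputs['Distort'].default_value = 0.01  # placeholder amount
out = tree.nodes.new('CompositorNodeComposite')

tree.links.new(rl.outputs['Image'], mix.inputs[1])  # MixRGB has two 'Image' inputs, so index them
tree.links.new(rl.outputs['AO'], mix.inputs[2])
tree.links.new(mix.outputs['Image'], lens.inputs['Image'])
tree.links.new(lens.outputs['Image'], out.inputs['Image'])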

Technical

Basic rendering tutorial with Radiance

Radiance is the most authoritative, validated rendering engine out there. Unlike other rendering engines, which focus more on artistic license, Radiance focuses on scientific validation — that is, the results are not just physically based, they aim to produce the same output as would be measured by a physical optical sensor. This is great if you’d like to produce an image that not only looks photo-realistic, but actually matches what a similar setup in real life would look like. As you’d expect, this appeals to scientists, and to designers who work with light and materials.

In addition, Radiance is open-source, completely free, and Unix-like in design. If you’ve used other tools that claim to do all of the above, they probably use Radiance under the hood anyway and rebrand it with a friendlier interface. However, working with Radiance directly gives you finer-grained control over the image you produce and, as I will likely write about in the future, lets you scale up to highly complex renders. Today, we’re going to dive into the guts of the tool itself and render a simple object. This can be a bit scary for those who are not particularly technical, and there’s not a lot of friendly material out there that doesn’t look like it’s from a 1000-page technical manual. Hopefully this walkthrough will focus on the more practical aspects without getting too bogged down in technicalities.

To begin, I’m going to assume you have Radiance installed and know how to open up a terminal window in your operating system of choice. If you haven’t got that far yet, go and install something simple like Ubuntu Linux and/or install Radiance. Radiance is not a program you double-click on to get a window with buttons and menus; it is a collection of programs that work by typing in commands.

Let’s create a model first. Start with a simple mesh with a minimum of polygons. I am using Blender, which is another open-source, free, and Unix-friendly piece of software. In this case, I have started with the default scene and arbitrarily replaced the default cube with a mesh of the Blender monkey mascot. I have also given the mesh a material, named white.

Default scene with Blender monkey

Using Blender is optional, of course, and you can use whatever 3D program you like. Radiance works with the OBJ format, which is an open format, plain text, and beautifully simple. So export the mesh to get yourself a resultant OBJ file, which I have named model.obj. The accompanying exported model.mtl file is largely unnecessary right now: we will define our own materials with physical units, which the .mtl format is not designed for. When exporting, take care to export only the mesh, and ensure that the proper axes are facing up.
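If you are exporting from Blender like I am, the bundled OBJ exporter can also be driven from a script. Here’s a quick sketch, assuming a 2.7x-era Blender, that the monkey is the only thing selected, and that Z-up / Y-forward is what your scene needs (check your own axes):

import bpy

# Export only the selected mesh, keeping Z up and Y forward so the model arrives the right way up
bpy.ops.export_scene.obj(
    filepath="model.obj",
    use_selection=True,
    axis_up='Z',
    axis_forward='Y',
)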

In the same directory that you have your model.obj and your model.mtl, let’s create a new file which will hold all the materials for your model. In this case, there is only one material, called white. So let’s create a new plain text file, called materials.rad and insert the following in it:

void plastic white
0
0
5 1 1 1 0 0

It’s the simplest possible material definition (and rather unrealistic, as it defines an RGB reflectance of 1, 1, 1), but it’ll do for now. You can read about how “plastic” (i.e. non-metallic) materials are defined in the Radiance reference manual. In short, the first line says we are defining a plastic material called white, and the last line says that there are 5 parameters for this material, with values 1, 1, 1, 0, 0 respectively. The first three parameters are the R, G, and B reflectance of the material, and the last two are its specularity and roughness. This definition is provided in the Radiance manual, and so in the future it will serve you well to peruse the manual.
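For comparison, and purely as an illustrative guess at more plausible numbers rather than anything tied to this scene, a matte mid-grey with a touch of roughness might look like this:

void plastic grey_matte
0
0
5 0.5 0.5 0.5 0 0.05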

Now, open up a terminal window in the same folder where you have the model.obj and materials.rad file. We are going to run a Radiance program called obj2mesh which will combine our OBJ with the material definitions we have provided in our materials.rad, and spit out a Radiance triangulated mesh .rtm file. Execute the following command:

$ obj2mesh -a materials.rad model.obj model.rtm

If it succeeds, you will see a new file in that same directory called model.rtm. You may see a few lines pop up with warnings, but as long as they are not fatal, you may safely disregard them. The .rtm format is specific to Radiance, which does not work directly with the OBJ format.

Now, we will create a scene in Radiance and place our mesh within it. There will be no other objects in the scene. Let’s call it scene.rad, a simple text file with the following contents:

void mesh model
1 model.rtm
0
0

The first line simply defines a new mesh in the scene called model. The second line tells it that it can find the mesh in the model.rtm file. The final two lines (the zeros) say that there are no further parameters for this mesh.

Now, we will convert our scene into an octree, which is an efficient binary format (as opposed to all the simple text files we’ve been writing) that Radiance uses to do its calculations. We will run another Radiance program called oconv to do this. So open up your terminal window again and execute:

$ oconv scene.rad > scene.oct

You should now find a scene.oct file appear in the same folder as the rest of your files. This is the final file we send off to render. But before we do this final step, we will need to decide where our camera is. A camera in Radiance is defined by three parameters. The first parameter, vp, or view position, is the XYZ coordinate of the camera. The second parameter, vd, or view direction, is the XYZ vector that the camera is facing. The third parameter, vu, or view up, is the XYZ vector of where “up” is, so it knows if the camera is rotated or not. When specifying a parameter to Radiance, you will prefix the parameter name with a hyphen, followed by the parameter value. So, for a camera at the origin facing east (where +X is east and +Z is up), I can tell Radiance this by typing -vp 0 0 0 -vd 1 0 0 -vu 0 0 1.

Radiance camera definition

Calculating these vectors is a real pain unless your camera is in a really simple location and is orthogonal to the world axes like in my previous example. However, here’s a fancy script you can run in Blender which will calculate the values for the camera named Camera.

import bpy
from mathutils import Vector

cam = bpy.data.objects['Camera']
location = cam.location
# Rotate the camera's local up axis (+Y) and view direction (-Z) into world space
up = cam.matrix_world.to_quaternion() * Vector((0.0, 1.0, 0.0))
direction = cam.matrix_world.to_quaternion() * Vector((0.0, 0.0, -1.0))

print(
    '-vp ' + str(location.x) + ' ' + str(location.y) + ' ' + str(location.z) + ' ' +
    '-vd ' + str(direction.x) + ' ' + str(direction.y) + ' ' + str(direction.z) + ' ' +
    '-vu ' + str(up.x) + ' ' + str(up.y) + ' ' + str(up.z)
)

The output will be in the Blender console window. For those on other programs, you’ve got homework to do.

Once you know your coordinates and vectors for vp, vd, and vu, let’s use the rpict Radiance program to render from that angle. Please replace my numbers given to the three camera parameters with your own in the command below. We will also specify -av 1 1 1, which tells Radiance to render with an ambient RGB light value of 1, 1, 1. Of course, in real life we don’t have this magical ambient light value, but as we haven’t specified any other lights in our scene, it’ll have to do. We will also specify -ab 2, which allows for 2 ambient bounces of light, just so that we have a bit of shading (if we didn’t have any light bounces, we would have a flat silhouette of our monkey).

$ rpict -vp 7.481131553649902 -6.5076398849487305 5.34366512298584 -vd -0.6515582203865051 0.6141704320907593 -0.44527149200439453 -vu -0.32401347160339355 0.3054208755493164 0.8953956365585327 -av 1 1 1 -ab 2 scene.oct > render.pic

Great, after the render completes, you should see a new file called render.pic in your folder. Let’s look at this image using the Radiance ximage program.

$ ximage render.pic

You should see something like the following:

Final Radiance render

One final step. It’s quite irksome and technical to run all of the commands for rpict, oconv, and so on, so it’s much better to use the executive control program rad. rad allows you to describe the intention of your render in simple terms, and it’ll work out most of the technical details for you. Of course, everything can be overridden. The rad program parses a .rif configuration file. I’ve included a sample one below, saved as scene.rif:

# Specify where the compiled octree should be generated
OCTREE=scene.oct
# Specify an (I)nterior or (E)xterior scene, along with the bounding box of the scene, obtainable via `getbbox scene.rad`
ZONE=E  -2.25546   4.06512  -3.15161   3.16896  -2.94847    3.3721
# A list of the rad files which make up our scene
scene=scene.rad
# Camera view options
view=-vp 7.481131553649902 -6.5076398849487305 5.34366512298584 -vd -0.6515582203865051 0.6141704320907593 -0.44527149200439453 -vu -0.32401347160339355 0.3054208755493164 0.8953956365585327
# Option overrides to specify when rendering
render=-av 1 1 1
# Choose how indirect the lighting is
INDIRECT=2
# Choose the quality of the image, from LOW, MEDIUM, or HIGH
QUALITY=HIGH
# Choose the resolution of mesh detail, from LOW, MEDIUM, or HIGH
DETAIL=HIGH
# Choose the variability of light values in the scene, from LOW, MEDIUM, or HIGH
VARIABILITY=MEDIUM
# Where to output the raw render
RAWFILE=output_raw.pic
# Where to output a filtered version of the render (scaled down for antialiasing, exposure correction, etc)
PICTURE=output.pic
# The time duration in minutes before reporting a status update of the render progress
REPORT=0.1

Execute rad scene.rif to get the results. If you’d like to interactively render it, on an X server you can run rad -o x11 scene.rif. I used the above .rif file and ran it against a higher resolution mesh, and I’ve included the results below.

Rad rendered image

All done! We’ve learned about bringing in an OBJ mesh with Radiance materials, placing it in a scene, and rendering it from a camera. I hope it’s been useful. Of course, our final image doesn’t look great – this is because the material and lighting we have set are basically physically impossible. Similarly, the simulation we’ve run has been quite rudimentary. In the future, we’ll look at specifying a much more realistic environment.

Life & much, much more

A Beaglebone, a Blender, a Board, and a Swarm.

Hardware isn’t generally my thing. When it comes to software, I like to break and create. But in my opinion, hardware should just work. That’s another story altogether, but it does explain my apprehension when I greeted the UPS guy one morning delivering a BeagleBone Black.

beagleboneblack

Let’s begin with the BBB. It’s a computer the size of a credit card, which isn’t that impressive once you realise that your phone is a computer too. I find the best way to explain it is in terms of two other products, the Arduino and the Raspberry Pi. The Arduino is a similarly sized controller (it comes in multiple sizes, though) where you upload scripts, plug in a hardware circuit (wires and a lightbulb, that sort of thing), and have it control the circuit. Despite its power in hardware control, it only has a small scripting interface for you to do your programming. The Raspberry Pi is the opposite. It’s a full Linux computer (based off Debian), but does not have proper hardware controls out of the box. The BBB provides the best of both worlds: a full Linux system (Angstrom Linux by default, though of course you can flash your own), and a ridiculous number of IO pins to control circuits. All this awesome power at 45 USD.

The next step upon receiving this wonderboard was obvious. Let’s build a swarm of robots. Along with two university friends, Lawrence Huang and Gloria Nam, we set out planning the system.

world

The base was to be constructed from a 1200x1200mm plywood board, cut into a circle with a hole in the middle. This would be the “world” that the robot swarm would live on. This world would operate like a Lazy Susan, and would have two depots filled with some sort of resource: one at the center, and one at the perimeter. This gave the colony a purpose: it would need to collect resources. Above the board was where we would put the computer, BBB, power supply, and cables to hook up to all the bots below.

We then had to determine the behavior and movement capabilities of the swarm. It had to act as one, but its members still had to remain separate entities. It also had to disperse to discover where the rotated resource depots were, and the swarm as a whole had a set of goals and quota limitations. Five movement types (along with the math) were worked out to allow the bots smooth and flexible movement across the terrain.

rules

The overmind was next. We would use Blender’s very flexible boid simulator, along with custom Python scripts using Blender’s Python API, to simulate the swarm behavior on the computer and set swarm goals. At the same time, a real-time top-down view could be generated and displayed. For budget reasons, we couldn’t build the entire swarm of robots, and instead settled on building just one bot, having it track the motions of a single bot on the computer screen while still behaving as part of the full 32-robot swarm on screen. Viewers could then see the full swarm behavior on the screen, and physically see a single bot’s behavior in front of them.

swarmscreenshot

The car itself was then built. It was relatively small, barely big enough to fit the two continuous-rotation servo motors required to power its left and right treads. It had a little tank on its top to hold resources, a depositing mechanism at its front, and it dragged along a massive conveyor belt behind it to collect resources.

car

Now for the fun part: calibrating the simulated swarm against the actual physical swarm behavior, and building all the physical PWM circuits. Many sleepless nights later, it was a success. Here we see the bot doing a weird parking job into the depot, collecting resources, going back to the center, and depositing them. Apologies for the lack of video.

collect

And there we have it. A swarm of robots. Did it curb my fear of hardware? Not entirely.

frontshot

For those interested in the actual system, here’s a macro overview:

system

A few extra fun things from the project:

  • Calibration was not easy. Actually, it was very hard. No, it was stupidly hard. It was ridiculously hard. Real life has so many uncertainties.
  • Each bot is tethered to the overmind via 8 wires (3 per tread, 2 for conveyor belt). Could it be made into a wireless swarm? Yes. Did we have the money? No.
  • Could it be modified to move in 3D XYZ space like a swarm of helicopters? Yes. Would I do the math for it? No.
  • The actual simulation was done on the computer via Blender + custom Python scripts. The computer was then connected via a persistent master SSH connection, which was reused to send simple signals to the pins’ embedded controller. So all in all, the BBB didn’t do much work. It was just a software-to-hardware adapter.
  • Because the computer was doing all the work, it wasn’t hard to add network hooks. This meant we could actually control the system via our phones (which we did).
  • Weirdest bug? When (and only when) we connected the computer to the university wifi, flicking a switch 10 meters away in a completely separate circuit (seriously, completely separate) would cause the BBB to die. Still completely confused and will accept any explanation.
  • Timeframe for the project? 4 weeks along with other obligations.
  • Prior hardware and circuit experience: none. Well. Hooking up a lightbulb to a battery. Or something like that.
  • Casualties included at least three bot prototypes, a motor, and at least 50 Styrofoam rabbits (don’t ask)
  • Why are all these diagrams on weird old paper backgrounds? Why not?
  • At the time of the project, the BBB was less than a month old. This meant practically no documentation, and lack of coherent support in their IRC channels. As expected, this was hardly a good thing.

Project success. I hope you enjoyed it too :)

Creative

My latest architectural renders

Now that I’ve finished my second year of architecture, I’ve developed a much faster workflow for churning out architectural renders. From being asked to make animations within really tight schedules, to having to produce presentation-ready drawings in a short period of time, being able to do the graphical equivalent of rapid development in programming was very important to me. Fortunately, unlike programming, where the product has a 20% build time and 80% maintenance time, most graphics are presented and then discarded.

I have started to collect some of my renders together and release them on WIPUP. Some of the better ones were shared on the Blenderartists forums, as naturally they were produced using Blender.

Wheelchair house - Blender architectural visualisation

I was happy to hear that the above render was featured as a render of the week on Blendernews :) Although Blendernews is hardly an official news source for Blender, it was quite nice.

You can view the full set of renders below (click to go to the WIP update and view full-res images). My personal favourite is the forest one :) I find it makes a nice phone wallpaper.

Wheelchair house - blender architectural visualisation

A lift to make my world - blender architectural visualisation

Lift off into the clouds - blender architectural visualisation

Schematics - blender architectural visualisation

I am releasing the four above renders under CC-by. A link to thinkMoult along with my name will suffice.

Creative

Makerbotting beavers

A while back, I started modeling a 3D beaver. No – this wasn’t the beaver I modeled for my animation “Big Rat” at least 5 years ago; this is a more recent one. In fact, after printing Suzanne, I had so much fun I decided I would print a beaver next.

Unfinished Makerbot beaver

Whoops. Wrong picture. It does, however, show you what goes on inside a 3D printed beaver, for those unfamiliar with Makerbot’s honeycombing.

Makerbot beaver print

… and …

Makerbot beaver print

Modeled in Blender, printed with white translucent ABS plastic. You might notice it’s always propped up with something – I got the center of mass wrong, so it has a mischievous habit of falling on its face. It seems to be one of those objects which look nicer in real life than in a photograph – perhaps because of the translucency of the plastic.

Creative

Game of Homes opening sequence animation

This week, and to be more specific, yesterday, today, and tomorrow, the Architecture Revue Club from the University of Sydney will present Game of Homes, the 2012 annual performance.

Architecture revue Game of Homes official poster

As mentioned before, apart from being musical director, I also did some AV work – such as this opening sequence. Check it out :)

It was essentially a one-man rush job. Blender was used for most of it, except for adding the credit names, which was done in Adobe Premiere. The few image textures that were used were done in the GIMP. Total time taken including rendering was ~4 days.

Rendering was done with Blender Internal, averaging ~20 seconds per frame at a silly arbitrary resolution of roughly 1100x500px. BI was the obvious choice for speed. The Blender VSE was used for sequencing and sound splicing.

The workflow was a little odd – essentially, post-processing was done first, followed by basic materials, and then camera animation. Based on the camera animation, modelling and material tweaking would be done as necessary.

Comments welcome :)

Creative

Blender 3D printed Suzanne model

Hot smokes, it’s been two months. Hopefully a monkey head will fix it!

Blender suzanne 3d printer

It’s Suzanne, Blender’s mascot monkey, 3D printed with a Makerbot. It’s about 45x40x50mm, from a 3mm black plastic spool, and it sits on your desk or on your keyboard staring at you with its docile eyes.

It’s a little lumpy, mainly due to the capabilities of the printer, but a bit more planning could’ve improved it – as seen below:

blender suzanne 3d print

You can see where the scaffolding was attached as well as where plastic dripped down, causing some unevenness in the layering.

For the technically-inclined, the model was exported to OBJ, then re-exported to STL in Rhino (because Blender’s STL export is broken), and then the rest is standard, as documented here.

Life & much, much more

Back in Malaysia, and other things I have dabbled in.

Blog posting has been slow lately. This is mostly due to real life and connectivity issues, but despite this I have had some time to dabble in the various public projects I juggle. The pace is not rapid enough to keep up the alternate-day posting schedule I used to have, but it is suitable for a summary post, such as this one.

The ThoughtScore Project

The first project is my ever-incomplete ThoughtScore animated movie. The highlight this time is an animation update with a few extra shots added. You can view the ThoughtScore Blender animation here, or click the screenshot below.

You may view more feedback on its BlenderArtists forum thread (page 4).

The project also got awarded its own domain with some content I pulled together quickly in about an hour. See ThoughtScore.org.

I do have a couple more scenes prepped and awaiting animation & rendering, so more updates will be popping up.

live.WIPUP

WIPUP, a way to share works in progress, has experienced the yearly dip in content due to the holiday season, but live.WIPUP (the bleeding-edge iteration of WIPUP) has received experimental design changes and slight SEO updates.

live.WIPUP, like the projects it was built to showcase, is also a work in progress. It’s incomplete, but as always, hopefully a step in the right direction. Check out live.WIPUP and share your works in progress there.

Real Life

Apart from badminton, taking a break from learning Chinese, globetrotting, and client work, this picture says it all.

Well, that’s it for a brief summary of what I’ve been up to. I hope everybody has also had a great Christmas, New Year, upcoming Chinese New Year, and awesome holiday.

Uncategorized

Free textures from CarriageWorks

Firstly, some apologies for the lack of life on this blog. Things are trickling in, and you can keep an eye on WIPUP for incoming updates on the projects that I won’t write about on thinkMoult.

Just down the road from my faculty building is a site called CarriageWorks – an abandoned train station that was remodeled into a contemporary art site. It’s quite a charming place, and it makes perfect sense for the upcoming TEDxSydney to be hosted there. Being an abandoned train station, it is also quite the goldmine in terms of textures for budding CG artists.

Last Wednesday I snuck around and snapped a bunch of photos of the mouldy, rotting, and disintegrating surfaces, as well as the lovely mechanical details here and there. I haven’t done any modifications or clean-up on the photos, but they’re sufficient to be used as references or for basic texture work.

Licensed as CC BY-SA.

If you use them, please link me to your work, as I love to see others’ creations!

Uncategorized

Cinematic Perception film entry

Hey folks, I haven’t posted anything since the SLUG meeting for (what should be) obvious reasons. University has started, and it takes a little time for me to adjust back into a schedule after almost a year without one. I’m still working out the kinks of juggling university, freelancing, family time, ThoughtScore, WIPUP, sports, music, time well wasted (reading, TV, internet), household work, and socialising. It’s a little tricky, but I should have it sorted out soon (there’s no such thing as not enough time). Not enough motivation, maybe, but never not enough time.

I am still doing things, which you can see trickling slowly into WIPUP, but one of the more interesting ones, and one that warrants a blog post, was a short film-making competition I took part in a week ago. First off, I know nothing about filmmaking. 3D animation a little, perhaps, but not filmmaking. So I initially went in just to have some fun. Walking away with first prize was definitely not what I had expected. Here’s the entry, after several hours of filming and splicing the video clips together.

Enjoy!

P.S. Yes, of course, Blender was used as the VSE.

Technical

Toronto’s “mini-sprint”, and Sydney’s KDE/FOSS Community

During the holidays I met up with Eugene Trounev (aka it-s), one of KDE’s awesome artists, to discuss our reorganisation of KDE.org and the design aspects of it (which is coming soon in the series). It was a 2-day meeting, and it was my first time meeting another KDE enthusiast face-to-face, as given my inconvenient geographical location in Malaysia I don’t know anybody else here. I won’t post the outcome of the sprint here, as it will be released periodically with the rest of my kde-www war series. It was extremely useful and awesome of course (and yes, lots did get done), and since no photos were taken, here is one of a random conifer tree to make up for it:

I’ve just arrived in Sydney, Australia to get ready for my upcoming year of university, and so I wanted to quickly throw out the question to see if anybody in the KDE / Blender / FOSS crowd lives there. If you do, throw me an email/comment and if there isn’t an existing community, let’s start one :D

Creative

One project finishes construction, another starts.

Blog posts are getting a little r- wait a minute- oh, there haven’t been many blog posts! This of course doesn’t mean I’ve been lounging around doing nothing; it probably means I’m doing more, since I’m not clearing out the random gunk that accumulates in that little hole in my brain.

Anyway, the project that has recently been “finished” (well, technically I’m still waiting for a little bit here and there to fill up the rest of the pages, but the bulk is done anyway) is the grand web portfolio of Erik Kylen. The web system uses Kohana, of course, and behind the scenes there is a very simple administrator panel to CRUD portfolio items.


Also, after so many empty sentences saying I will get back to the ThoughtScore Project, I finally have. A lovely bonus is that this time the update includes both a riveting storyline update and pretty pictures (or I think they’re pretty, anyway).

Clicky

More awesome in the oven. Temperature set on high. Very high.