Life & much, much more

Medical animations and surgical visualisation with MedFilm

You’re sitting across the table from your doctor in a hospital. The doctor has just spent the past half hour explaining the procedure you will undergo to fix a medical problem you have. It sounds complicated — there are a few things you have to do to prepare, some foods to watch out for, and a recovery process of a few months afterwards. You will later come home, only to be bombarded with a series of questions from your friends and family, who are all curious and have somehow managed to ask the questions you didn’t think of asking earlier. It also doesn’t help that their native language isn’t English.

Four years ago, this was exactly the problem that Erik Kylen, a small team in Sweden, and I (working remotely) set out to solve. The solution was a series of animated videos explaining various medical issues in simple terms. A doctor could use these videos to help guide patients, and patients could then watch them from the comfort of their own home. This is MedFilm.

MedFilm Logo

Each video starts with a gentle description of the various body parts involved in the procedure to introduce the required medical terminology. This is followed by an explanation of how these parts relate to the ailment at hand. The patient is then reminded of the various preparatory steps they need to take before the procedure, such as fasting, or drinking fluids. The surgical procedure is then shown, heavily tested to maintain medical accuracy whilst ensuring that the patient does not see anything gruesome. Finally, the video describes the recovery process, and the steps the patient can take to expedite it.

These videos are simple to understand, accessible with subtitles and translations into many languages, and tailored to the specific practices of local hospitals and countries. Each hospital and country has its own preferred way of doing things, and these videos accommodate that fact.

A doctor from a participating hospital can share a link with their patient, or use the video interactively on a tablet during the briefing. The patient can later watch it again to refresh their memory, or reshare it with friends and family.

MedFilm surgical videos on various tablets

Let’s see a demonstration video (and yes, videos can be embedded with custom branding into a hospital or clinic’s website!). Below is the video created for an appendectomy. Usually, getting your appendix removed is a safe, standard procedure, and happens pretty soon after you find out you have a problem. Most people have also heard of it, which makes it a great procedure to demonstrate.

Here’s the video! Click play below and learn about an appendectomy!

MedFilm is steadily growing and now has a repository of 40 videos covering topics from cardiology to otorhinolaryngology (I’m not a doctor, so to me that’s a very complicated word!), used in clinics across Scandinavia. I’m proud of the service, and happy that it is able to help patients. If you’re interested or are involved in the health industry, you can contact MedFilm here and we can explore opportunities!


A history of rendering engines and modern trends

When working in the architecture industry, we often look at rendering engines from many angles, if you’ll pardon the pun. We use simple renderings for diagrams, realistic renderings for architectural visualisations, real-time rendering for virtual reality prototyping, point cloud renderings for landscape and heritage scans, and so on. As the ultimate goal in archviz is both to represent abstractly and to lower the cost of realistic prototyping, it pays to see what the future of rendering holds for the industry. But first, I’d like to look briefly at rendering’s history.

When the CG industry first considered how to render an image on the screen, it was mostly concerned with the hidden surface determination problem: in short, given a bunch of polygons, which surfaces are visible from the camera’s POV, and which are not. This mentality of thinking about how to “colour in an object”, as opposed to how to simulate light, led to the development of one of the first rendering techniques: flat shading.

In flat shading, the rendering engine considers the surface normal of each face with respect to a light source. The more face-on a surface is, the lighter it is shaded; the more grazing the angle, the darker. If the path between the surface and the light source intersects another surface (i.e., is blocked), it is shaded black. I’ve attached an example of flat shading in the first image below. This is roughly analogous to an actual physical phenomenon – that the angle of incidence to a material matters.
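To make that concrete, here is a minimal sketch in Python (my own illustration; the function names and values are invented, not from any engine): shading a face reduces to the cosine between its normal and the light direction, clamped at zero.

```python
import math

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def flat_shade(face_normal, light_dir):
    """Lambertian flat shading: the face's brightness depends only on
    the angle between its normal and the direction to the light.
    Faces pointing away from the light come out black (zero)."""
    n = normalize(face_normal)
    l = normalize(light_dir)
    cos_angle = sum(a * b for a, b in zip(n, l))
    return max(0.0, cos_angle)  # 1.0 = face-on, 0.0 = edge-on or facing away
```

A face looking straight at the light shades to 1.0, while a face at 60° to the light shades to 0.5: exactly the angle-of-incidence behaviour described above.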

This was very simple, and also not very realistic. Flat shading was then combined with specular shading, which was essentially the same but heavily biased the angle of the surface normal, with another parameter to control the falloff of the highlight. Although this created a convincing metallic glint (see monkey two in the image below), it was again just an artistic trick, not based on an actual physical phenomenon. Nevertheless, it stuck, even to this day.

Shading techniques improved when a Vietnamese gentleman, Bui Tuong Phong, invented the famous Phong shader. He had the idea of interpolating the vertex normals across each face to give a smooth gradient of shading. This created much more realistic results (see monkey three) but, again, had no real-world equivalent.
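As a toy sketch of those two ideas (my own illustrative Python; the names, coefficients, and parameter values are invented, not from any real engine): interpolate the vertex normals across the face, then evaluate a diffuse term plus a falloff-controlled specular highlight at each point.

```python
import math

def _normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def interp_normal(n0, n1, t):
    """The Phong-shading trick: linearly interpolate two vertex
    normals, then renormalize, giving a smooth gradient of normals
    (and hence shading) across the face."""
    return _normalize(tuple((1 - t) * a + t * b for a, b in zip(n0, n1)))

def phong(normal, light_dir, view_dir, shininess=32, kd=0.7, ks=0.5,
          ambient=0.0):
    """Phong reflection at one point: a Lambertian diffuse term plus
    a specular highlight whose falloff is controlled by the shininess
    exponent (the extra parameter mentioned above)."""
    n, l, v = _normalize(normal), _normalize(light_dir), _normalize(view_dir)
    diff = max(0.0, _dot(n, l))
    # reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * _dot(n, l) * nc - lc for nc, lc in zip(n, l))
    spec = max(0.0, _dot(r, v)) ** shininess if diff > 0 else 0.0
    return ambient + kd * diff + ks * spec
```

Evaluating `phong` per pixel with an interpolated normal is what produces monkey three's smooth look, even though nothing physical is being simulated.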

The next improvement to the shading model came when people noticed that shadows rendered completely black. In real life, global illumination and ambient light bounces mean that almost everything is quite effectively indirectly lit. There was no computationally efficient solution to the problem at the time, so an ambient light constant was added to simply bump up the global lighting (see monkey four). This formed the segue into modern-day rendering, and thus ends our history lesson.

Flat, specular, Phong-interpolated, and ambient light shading

The moral of the story is that almost none of these shading approaches had a real-life equivalent, and all the subsequent improvements were built upon a method that considered how to colour in a shape from the point of view of the shape itself. This is fundamentally backwards – in the physical world, how an object looks (at human scales; forget quantum-mechanical scales) depends on rays of light emitted from photon-emitting objects (e.g. hot objects) bouncing around and losing energy. Energy is absorbed and reflected by materials in very different ways depending on the microsurface imperfections of the material, and on its chemical properties.

Luckily, in parallel with these artistic shaders, physically-based “ray-tracing” rendering engines were also being developed. These ray-tracers traced rays of light between cameras and light sources in the same way the real world works. Back then they were cool technical demos, but always too inefficient for any practical work. However, they proved in theory that if you throw enough computing power at the problem, you can get photo-realistic results. Nowadays, of course, everybody knows about ray-tracing and it’s practically the norm in the market. I’ve shown an example below of a chrome monkey reflecting its environment – the textbook example of what ray-tracing can achieve that traditional shaders could not (well, not without hacks and light maps and whatnot). You can see another example of photo-realistic rendering with Cycles that I’ve done too.
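To give a flavour of how small the core idea is, here is a toy sketch (mine, not code from any of these engines) of the ray-sphere intersection that a tracer solves millions of times per frame before bouncing reflection rays:

```python
import math

def ray_sphere(origin, direction, centre, radius):
    """Distance along a ray (with unit-length direction) to the
    nearest intersection with a sphere, or None on a miss. Solves
    the quadratic |o + t*d - c|^2 = r^2 for the smallest positive t."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # a = 1 because direction is unit length
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None
```

A camera ray fired down the z axis from the origin hits a unit sphere centred at (0, 0, 5) at t = 4, the front surface; a real tracer would then shade that point and recursively fire a reflection ray, which is how the chrome monkey picks up its environment.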

Glossy ray tracing render

Almost every popular rendering engine nowadays, such as Blender Cycles, V-Ray, Maxwell, RenderMan, and Arnold, is a ray-tracer. They are getting faster, and now combine GPU and CPU to provide almost real-time rendering. In recent years, Physically Based Rendering, better real-world scanners, and improved texture painters are three among many advances that make photo-realistic rendering easier and easier.

Basically, photo-realism is becoming really easy. A subtler trend worth noting is that we are also becoming more scientifically based. In the past, these ray-tracers, although somewhat physically based, made so many approximations that real-world units were ignored in favour of arbitrary values.

The reason this is important is that ultimate photorealism comes from scanning in real-world data at increasing levels of fidelity. Engines, no matter how physically based, will find it hard to use this information if it cannot be easily linked back to physical units and measurable scientific values.

Thankfully, this is actually improving. Simple things like IES lighting profiles and false-colour luminance images are starting to be possible with mainstream renderers. The popularisation of the Disney shader is slowly getting engines working on interoperability, and ultimate interoperability, much like ultimate photorealism, depends on scientific values.

At the very least, we know that if we throw more computers at the problem, it will eventually converge and leave us with a beautiful, realistic image.

This is great news for architecture – the industry I’m in. Architecture is no stranger to smoke and mirrors when it comes to renders, and a trend towards scientific rendering makes it easier both to prototype cheaply and to still promise the same results to eager clients.

Until then, let’s play with photoreal game engines and VR while the hype lasts.


Breakdown of a photo-realistic image in Blender Cycles

Recently, I wanted to produce a sample photo-realistic 3D scene with Blender’s Cycles engine that I could attempt to recreate in other rendering engines. I took an almost random photo of a street and kerb junction of the sort found throughout Sydney’s suburbs. Here’s that photo below. You can see incredible features that we take for granted, such as the viscous bulging of the asphalt as it hits the kerb, dead eucalyptus leaves, a groove between two concrete blocks, and so on. It’s a slightly over-exposed shot, hence the unnaturally bright grass.

Source image

The resultant 3D equivalent is below, all modeled, textured, and rendered in Blender. I’ve thrown in a glossy Suzanne and sphere, as well as a creative oil slick on the asphalt. You can click on the images to see a high-resolution version.

Rendered image

The modeling itself is ridiculously easy. Excluding the particle systems and dummy meshes, the road and kerb add up to 5 polygons. The split in the middle of the kerb is there because I suspect the kerb rises in level a bit, although I ended up ignoring it. This is typically the level of detail you can expect from an architectural scene, where only the road level and sidewalk level matter.

You’ll notice there are no lights. The photo was taken under an overcast sky, so an overcast sky environment map (±4 EV) was used for lighting. The environment map was left largely untouched: as the sky was overcast, we don’t need to worry about the sun’s impact on the EV range.

Off to one side are some of the meshes used in the particle systems. This spot was below a eucalyptus tree, and so various eucalyptus leaves and other debris needed to be placed. The leaves, grass, and mulch are dumb planes, and only the leaves actually have a texture applied. The leaf texture was not a photo, and instead was from a beautiful eucalyptus leaf painting by a talented artist.

OpenGL render

The basic texture layer adds the first touch of realism. These are all pretty standard, such as this seamless asphalt texture. I assigned a diffuse and a normal map, and did minor colour correction on the textures. What gives them that bit of realism is the dirt map I painted for worn edges: it darkens the values to represent the collection of dirt around edges, the gradient of dirt as water runs towards the kerb, and the dirt that washes up against the edge of the kerb before finally spilling over. Unlike its relative the occlusion map (which fakes a lighting phenomenon), this dirt map actually represents the deposition of dirt, and therefore a contrast between the sun-bleached material and the darkened, dirty material. There is no specular map in this case, though there usually is for roads. The map is shown below.
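In shading terms the dirt map is nothing exotic: it is simply multiplied over the base colour, so painted-dark areas read as grime while white areas keep the sun-bleached diffuse colour. A minimal sketch (illustrative Python with made-up colour values; in Cycles this is just a Multiply mix node):

```python
def apply_dirt(base_rgb, dirt):
    """Multiply a dirt-map value over a texel: dirt = 1.0 leaves the
    sun-bleached base colour untouched, lower values darken it to
    represent deposited grime."""
    return tuple(channel * dirt for channel in base_rgb)

clean_asphalt = (0.35, 0.34, 0.33)        # made-up mid-grey diffuse colour
near_kerb = apply_dirt(clean_asphalt, 0.6)  # darker where dirt collects
```

The point of painting the map by hand, rather than generating it, is that the multiply factor then follows the story of where water and dirt actually travel.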

Road dirt map

To show the contrast between the effect a dirt map applies and a flat texture, I’ve attached a work in progress screenshot below. You can see the road which has a dirt map applied in contrast to the very fake looking kerb.

Work in progress screenshot

The particle systems are what really give this scene a bit of life. There are 5 particle systems in total: dead eucalyptus leaves, mulch, long weedy Bermuda grass, short Bermuda grass, and dead grass fragments. They are all weight-painted to place them on the scene, with a noise texture to add colour variation to represent patchiness. An example of the weight paint for mulch, and dead grass is seen below.

Mulch weight paint

This gives a particle distribution which can be seen in the AO-pass below.

AO pass

That’s pretty much it! During compositing, an AO pass was multiplied in, colour correction and a sharpen filter were applied, and a slight lens distortion was added just for fun. A full-sized render takes about 10 minutes on my Gentoo machine.
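Those compositing steps amount to something like the following rough sketch (numpy, with invented parameter values; the real pass was done in Blender's node compositor, and I've left the lens distortion out):

```python
import numpy as np

def composite(beauty, ao, gain=1.05, sharpen=0.3):
    """Multiply the AO pass over the beauty pass, apply a simple gain
    as a stand-in for colour correction, then an unsharp-mask style
    sharpen. Arrays are HxWx3 (beauty) and HxW (AO), floats in [0, 1]."""
    img = beauty * ao[..., None]           # AO pass multiplied
    img = np.clip(img * gain, 0.0, 1.0)    # crude colour correction
    # unsharp mask: push the image away from a 4-neighbour box blur
    blurred = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return np.clip(img + sharpen * (img - blurred), 0.0, 1.0)
```

On a uniform image the blur equals the image and only the AO multiply and gain survive, which is a handy sanity check that the sharpen step only touches edges.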


Zygomatic Studios design: an experiment in one-page animated layouts

Recently I did a front-end proposal for Zygomatic Studios. They’re an animation company started up by Erik Kylen and I’ll be maintaining their website.

Given that I knew them, I had some freedom to experiment. For an animation firm, the website itself had to be graphically showy somehow. I ended up making the entire page animate on page-load: it presents itself in a showy way without interrupting the user once they’re actually using the page. “Slick” was what I was going for.

Another idea I wanted to play with was the one-page site concept, which displays the highlights of each “sub page”, each of which can be expanded if you’re interested.

You can check it out in my alpha playground.


Designed with GIMP, with a little help from Blender. Personally quite happy with the experiment.


My latest architectural renders

Now that I’ve finished my second year of architecture, I’ve started to develop a much faster workflow when it comes to churning out architectural renders. From being asked to make animations on really tight schedules, to having to produce presentation-ready drawings in a short period of time, being able to do the graphical equivalent of rapid development in programming was very important to me. Fortunately, unlike programming, where a product is roughly 20% build time and 80% maintenance time, most graphics are presented once and then discarded.

I have started to collect some of my renders together and release them on WIPUP. Some of the better ones were shared on the Blenderartists forums, as naturally they were produced using Blender.

Wheelchair house - Blender architectural visualisation

I was happy to hear that the above render was featured as a render of the week on Blendernews :) Although Blendernews is hardly an official news source for Blender, it was quite nice.

You can view the full set of renders below (click to go to the WIP update and view full-res images). My personal favourite is the forest one :) I find it makes a nice phone wallpaper.

Wheelchair house - blender architectural visualisation

A lift to make my world - blender architectural visualisation

Lift off into the clouds - blender architectural visualisation

Schematics - blender architectural visualisation

I am releasing the four above renders under CC-by. A link to thinkMoult along with my name will suffice.


Makerbotting beavers

A while back, I started modeling a 3D beaver. No – this wasn’t the beaver I modeled for my animation “Big Rat” at least 5 years ago; this is a more recent one. In fact, after printing Suzanne, I had so much fun I decided I would print a beaver next.

Unfinished Makerbot beaver

Whoops. Wrong picture. It does, however, show you what goes on inside a 3D printed beaver, for those unfamiliar with Makerbot’s honeycombing.

Makerbot beaver print

… and …

Makerbot beaver print

Modeled in Blender, printed in white translucent ABS plastic. You might notice it’s always propped up with something – I got the centre of mass wrong, so it has a mischievous habit of falling on its face. It seems to be one of those objects that look nicer in real life than in a photograph – perhaps because of the translucency of the plastic.


Game of Homes opening sequence animation

This week, and to be more specific, yesterday, today, and tomorrow, the Architecture Revue Club from the University of Sydney will present Game of Homes, the 2012 annual performance.

Architecture revue Game of Homes official poster

As mentioned before, apart from musical director, I also did some AV stuff – such as this opening sequence. Check it out :)

It was essentially a one-man rush job. Blender was used for most of it, except for adding the credit names, which was done in Adobe Premiere. The few image textures that were used were done in the GIMP. Total time taken including rendering was ~4 days.

Rendering was done with Blender Internal, with an average of ~20 seconds per frame at some silly arbitrary resolution ~1100x~500px. BI was the obvious choice for speed. Blender VSE was used for sequencing and sound splicing.

The workflow was a little odd – essentially, post-processing was done first, followed by basic materials, and then camera animation. Based on the camera animation, modelling and material tweaking were done as necessary.

Comments welcome :)


Blender 3D printed Suzanne model

Hot smokes, it’s been two months. Hopefully a monkey head will fix it!

Blender suzanne 3d printer

It’s Suzanne, Blender‘s mascot monkey, 3D printed with a Makerbot. It’s about 45x40x50mm, from a 3mm black plastic spool, and sits on your desk or on your keyboard staring at you with its docile eyes.

It’s a little lumpy, mainly due to the capabilities of the printer, but a little more planning could’ve improved it – as seen below:

blender suzanne 3d print

You can see where the scaffolding was attached as well as where plastic dripped down, causing some unevenness in the layering.

For the technically-inclined, the model was exported to OBJ, then re-exported to STL in Rhino (because Blender’s STL export is broken), and the rest is standard, as documented here.
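For the curious, the conversion itself is mechanical, since ASCII STL is just a flat list of triangles. Here is a toy converter (my own sketch, assuming a triangles-only OBJ and writing zero normals for downstream tools to recompute; this is of course not what Rhino does internally):

```python
def obj_to_stl(obj_text, name="mesh"):
    """Convert a triangles-only Wavefront OBJ string to ASCII STL.
    Normals are written as zero vectors, which most slicers recompute."""
    verts, lines = [], ["solid " + name]
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # OBJ faces are 1-indexed and may carry /uv/normal suffixes
            tri = [verts[int(p.split("/")[0]) - 1] for p in parts[1:4]]
            lines.append("  facet normal 0 0 0")
            lines.append("    outer loop")
            for v in tri:
                lines.append("      vertex %g %g %g" % v)
            lines.append("    endloop")
            lines.append("  endfacet")
    lines.append("endsolid " + name)
    return "\n".join(lines)

one_triangle = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
stl_text = obj_to_stl(one_triangle)
```

The real headache is never the format but the geometry: non-manifold edges and flipped normals are what actually break prints.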


Free textures from CarriageWorks

Firstly, some apologies for the lack of life on this blog. Things are trickling in and you can actually keep an eye on WIPUP for incoming updates on my projects that I won’t write about in thinkMoult.

Just down the road from my faculty building is a site called CarriageWorks – an abandoned train station remodeled into a contemporary art site. It’s quite a charming place, and it makes perfect sense for the upcoming TEDxSydney to be hosted there. Being an abandoned train station, it’s also quite the goldmine of textures for budding CG artists.

Last Wednesday I snuck around and snapped a bunch of photos of the mouldy, the rotting, and the disintegrating, as well as the lovely mechanical details here and there. I haven’t done any modifications or clean-up on the photos, but they’re sufficient to be used as references or for basic texture work.

Licensed as CC BY-SA.

If you use them, please link me to your work, as I love to see others’ creations!


Rigging a machine.

Things have been going absurdly slowly lately. No commits to WIPUP. No new ThoughtScore models (though a few more seconds of video have been added). Nothing open-source related (except trying out the Ubuntu beta1 on a box). Even schoolwork has slowed.

Because I fully understand that people with a grip on things wouldn’t give a rat’s ass about my life, I decided to show some pictures of the trouble I’ve been having trying to rig Taras, one of the main characters in The ThoughtScore Project. Here are two statically posed shots of Taras:

The left shows him in his unrigged pose. The pose he was modeled in. The right shows him "looking" down, slightly bent forward with his left arm reaching towards you. Disregarding the fact that the lighting is completely faked (what is that suit reflecting, anyway?), we have two other major problems to deal with.

Problem Number One: his arm was not built to be in that pose. Nor was any other part of his anatomy. When standing straight, his arms are abnormally squashed in order to look natural in that one pose… and when in a dark environment. In any other scenario you’d see two spindly arms sticking out of a hunk of metal. The way it was designed, his shoulder "ball and socket" joint is more of a "plank of wood stuck on a block of wood" joint. It doesn’t fit together nicely like a joint should.

Put simply, all of his joints (legs included) will have to be remodelled so that there are no gaping holes or bits of the suit intersecting when limbs are moved to their extremities. Not an easy task.

Problem Number Two: the torso. The torso is made up of several different meshes. Each part fits together nicely in one way and one way only. If you look at the picture, you’ll see that when he leans forward, the upper torso covers the middle torso, which largely remains stationary; the groin panel shifts outwards slightly; and the piping all has to move to accommodate this change without randomly sticking out where it shouldn’t. Think of it like the parts of a steam engine.

Long story short, it’s going to be a PITA to rig that guy just to bend over. Heck, I don’t think you can bend over in a suit like that.

Normally I stubbornly plod down the road of "create first, learn later, fix and redo even later", but this time I think I’d better buy some of Blender’s training DVDs before continuing on ThoughtScore.


Blender 2.5 Features Video

Hello everybody, I’m back from my 5 day jungle trek and I’m just catching up on what I’ve missed throughout the week. I was initially going to reward you all with a post about the trek itself, but it turns out Jonathan Williamson from Montage Studio (the very same who does the Blender screencasts and gave some good tips for ThoughtScore) has got himself a Blender build for Windows 7 and has recorded a short screencast demoing the development.

I am truly amazed by what has been going on, and I will definitely throw myself back into Blender this holiday. It’s stuff like this that really shows what open-source is capable of. Blender is one serious threat to the huge commercial monopoly in the 3D industry. Here is a short list of the features he describes:

  • New design/look
  • Panel splitting/deleting/management
  • Not limited to one window only
  • Massive reorganisation of features that make it more intuitive
  • Real-time playback animation while editing
  • Real-time playback animation while rendering
  • Every single value in Blender can now be animated
  • Support for macro options
  • New transform panel
  • Search option for features

Without further ado:

Clicky here to watch the video.



The ThoughtScore project is another gypsy on my to-do list along with the BMR. It seems as though the world of 3D graphics and I are drifting slowly apart. It’ll be such a pity to let it go, so I want to make a serious effort and continue the amazing progress I once had on ThoughtScore.

You can see the pitiful post I made after scrolling through history on this page:

Revive ThoughtScore! I need a plan, a design, something huge! Grab me some pencil and paper, and let’s bring my vision into a reality! I have a holiday coming up, and I hope I can approach this through another angle which should allow me to continue production.

The next post will be in a week’s time because I am going to be stuck in a jungle throughout next week. You will then receive posts in this order: 1) The Trek, 2) New Perspective Magazine Released, and 3) What is becoming of Eadrax.


The Blender Model Repository and BlenderNation: open-source merger?

As some might know, Blender is an open-source 3D content creation application – it’s cross-platform, a pioneer in the free 3D application market, and I use it. Not only do I use it, love it, and hang out in the #blenderchat IRC channel on freenode, I also host the Blender Model Repository, having taken over from Andrew Kator a long time ago when he suffered legal issues. It’s been running stably for the past year or so, every so often getting new model submissions, with users finding it a useful resource.

Even if you know nothing about Blender, please read on and help me with this open-source dilemma.

Recently, Bart Veldhuizen over at BlenderNation started beta-testing a new resource sharing system known as BlenderNation Links. BlenderNation, for those that don’t know, is the central news website for all things Blender related. It’s the central hub that Blender development and community news goes through – outside the official website, which is a bit more boring and just says “hey guys, new version” – as do most official websites. (Just joking!)

I was recently pleased to be given the opportunity to beta-test the new system. This new BN Links system categorises things as “individual” items – but a model repository, as one might expect, is not just one individual item; it’s a whole resource system of its own. The thing I’m wondering about is: how do I make the repository’s resources just as accessible through the BN Links system?

A while back I wrote the second part of my open-source analysis article, called “The Open-Source Market – Limitless and Forever Expanding?” (click it to read the article – it might interest you). One of the conclusions I came to there was that in the short term, open-source can afford plenty of choice and competition, but in the long term it must realise that synergy is what is needed to ensure its survival and continued growth. This is a perfect example of that concept in real life. There are two resource sites, one obviously much larger and more popular than the other, originally offering slightly different things. BlenderNation focuses on news, with a small tutorials/resources section, whereas the BMR (Blender Model Repository) focuses on… hosting models and tutorials. Now BlenderNation wants to increase its focus on tutorials and resources, thus duplicating the BMR’s function somewhat. Is this, perhaps, the time to synergise?

Firstly, let’s get the facts down:

  • BlenderNation is much more popular and well known than the BMR. It also has a cooler name.
  • The BMR is a hub for models. I have no legal right to give all my models/let them be published on BN Links.
  • Competition is good, but function replication is not.
  • I do have the legal right to “link” to each individual model, but such manual addition is tiresome, and will have to be constantly updated as new models come in.
  • The BMR does have a built-up reputation among those that know it. It’s not very nice to say “hey guys, we’ve uh, disappeared – check out this cooler site”.
  • The BMR is running on deprecated technology – sad but true. Whoops, did I just say that? But hey, if it ain’t broke, don’t fix it.
  • The BMR is a bit like a music collection with some missing metadata. Some files are hosted elsewhere, some don’t have preview pictures. This means that links die out.
  • The BN Links system, from what I’ve seen, seems a lot more flexible and makes it much easier for users to find what they want, which is great for the community.
  • I juggle a lot of projects. BMR maintenance is somewhat of a gypsy on my todo list.
  • I’m human – try asking someone else to delete a section of their site so somebody else can run it. (OK, that sounded very selfish and attached.)

Well. Here’s where you guys come in. To what extent can I realistically share resources, how should this be done, and tell me – is this the time to synergise?

Please leave a comment. Even if you know nothing about Blender.


Back from SIGGRAPH Asia 2008

Well, I’ve been pretty inactive in terms of blog posts, given the hectic series of events that were lined up. Now, however, things are a lot calmer (in the loosest sense of the word) and I’ll be back to my post-every-2-days schedule. Lots of fun fun fun in store.

For those who have been following my Twitter feed, you would know that I’ve recently finished my music exam, then flown over to Singapore to attend the last day of SIGGRAPH Asia 2008. What is SIGGRAPH? Well, it’s basically to 3D modelers, animators, graphic artists and the like what E3 is to gamers (or if you don’t know what E3 is, it’s like what “Palace Erotica” is to a pervert). Firstly, being an absolute klutz, I didn’t snap any photos or cool pictures of the exhibitions and whatnot. Secondly, devoid of any income, I couldn’t exactly cough up 850 Singapore Dollars for the full registration which entitles you to view everything – so I went for free, attending only the (limited) exhibition. Thirdly, I’ll have you know that I’m all for open-source, so any demonstrations there (all on proprietary software) could not be replicated exactly: the most I can do is check out the concepts and apply them the best I can.

Again, my apologies for no happy pictures (apart from that logo I ripped off up there), so all I’ll give are boring, vague descriptions of what I experienced. I didn’t get to view the Computer Animation Festival (shucks!) due to my “Exhibitions Only” free pass – probably one of the best things I missed. There were, obviously, the NVidia and ATI stalls competing against each other (go NVidia!), showcasing how their awesomely powerful hardware can do such awesome real-time rendering. Ooooh. There was a huge Autodesk desk (now huge both physically and virtually) doing Autodesk-ish presentations throughout the whole day. Yeah, that’s right: 3DS, XSI, Toxic, and whatever other stuff they’ve got in their pigsty of a product portfolio. The talks were interesting, though – they covered quite a bit on Mudbox, including interesting sculpting techniques which I’m itching to try out.

There was what looked like a permanently unmanned Lucasfilm stall, advertising their new Clone Wars animation as well as some “Jedi’s Course to 3D Stuff” via a row of TV screens. There were also the folks from Industrial Light & Magic (ILM) and other companies, who showed in detail how things were created for The Golden Compass. Now that’s some pretty amazing graphics work: how each layer was done, how it was composited, how it was matched into the stock footage, how the “little green men” (green-suits) did their jobs, the process of making the Yeti character (from concept art, to turntable, fur, animation, etc.), and so on. Other visual effects shown were from some Harry Potter films (pretty awesome scene recreation there), some Warcraft, Bioshock and various game trailers, as well as a variety of mini videos (ever seen Iron Man, Spider-Man and The Hulk fight together? Spider-Man pretty much sucks.)

What interested me most was explanations of industry workflows. Workflow is one of the most important aspects in film and movie creation, as that’s pretty much how things get done. I daresay I might either humiliate myself due to lack of knowledge or confuse a lot of people if I tried to explain everything I learnt here (especially because workflows change slightly depending on the situation). So I’m just going to leave it as “yes, it was very interesting, and you’ll definitely see a bit on workflow when I post some work-in-progress posts for ThoughtScore in the future”. Happy? Rhetorical? Yes.

Of course, there was the usual plethora of shady-looking universities and organisations offering related courses. I sifted through the lot and found some that might interest me for personal guidance, given my no-income situation. But that’s all future investment, so other than “I got myself a collection of namecards”, there isn’t much else I can announce in my post here.

So yeah, other than the indescribable knowledge I absorbed, the motivation (you know, that sudden urge to create something amazing), and the nametag and 3D glasses I stole, that’s really all I can say about SIGGRAPH. The next SIGGRAPH Asia will be in Seoul, which I most definitely will not be attending. I must say, I’m a bit disappointed by how much is restricted for people who aren’t willing to break the bank. Other than that, a very worthwhile trip indeed.


ThoughtScore Updated!

Ahh! ThoughtScore! The finest of all my projects (the slowest, too). Well, it’s been updated!

View the ThoughtScore update!

You know you want to check it out.

Note: the update is my last post on that page. The first post was a pretty old update. You can also check out pages 1 and 2 of that forum thread to see how far the project has progressed.

Well, a picture is worth a thousand words, so here are some wallpaper-sized renders (also available in the thread) to entice you to click that link up there. If you’re too lazy to register an account to comment on that thread, just leave a comment on this post ;)

One of Cicero:

And one of Taras:

And one of the station:

In more unfortunate news, I will be overseas and there will not be an article until the 16th of August. However, I promise that sometime in very early September there will be another really huge release by me … something so big it might even overshadow ThoughtScore. Now that’s just scary.