Creative

A history of rendering engines and modern trends

When working in the architecture industry, we often look at rendering engines from many angles, if you’ll pardon the pun. We use simple renderings for diagrams, realistic renderings for architectural visualisations, real-time rendering for virtual reality prototyping, point cloud renderings for landscape and heritage scans, and so on. As the ultimate goal in archviz is both to represent abstractly and to lower the costs of realistic prototyping, it pays to see what the future of rendering holds for the industry. But first, I’d like to look briefly at rendering’s history.

When the CG industry first considered how to render an image on the screen, it was mostly concerned with the hidden surface determination problem: in short, given a polygonal model, which surfaces are visible from the camera’s point of view, and which are not. This mentality of thinking about how to “colour in an object”, as opposed to how to simulate light, led to the development of one of the first rendering techniques: flat shading.

In flat shading, the rendering engine considers the surface normal of each face with respect to a light source. The more face-on a surface is to the light, the lighter it is shaded, and the more oblique it is, the darker. If the path between the surface and a light source intersects another surface (i.e., is blocked), it is shaded black. I’ve attached an example of flat shading in the first image below. This is roughly analogous to an actual physical phenomenon – that the angle of incidence of light on a material matters.
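As a sketch, flat shading boils down to a dot product between the face normal and the light direction; the function below is a hypothetical illustration of mine, not code from any particular engine:

```python
import math

def flat_shade(normal, light_dir, blocked=False):
    """Flat shading: one brightness value per face, taken from the cosine
    of the angle between the face normal and the light direction."""
    if blocked:
        return 0.0  # the path to the light is obstructed: shade black
    # Dot product of unit vectors = cosine of the angle of incidence
    cos_angle = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, cos_angle)  # face-on is bright, oblique is dark

# A face pointing straight at the light is fully lit,
# while one tilted 60 degrees away receives half the intensity.
print(flat_shade((0, 0, 1), (0, 0, 1)))                             # 1.0
print(flat_shade((0, 0, 1), (0, math.sin(math.radians(60)), 0.5)))  # 0.5
```

Note that every point on the face gets the same value, which is exactly why flat-shaded models look faceted.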

This was very simple, and also not very realistic. Flat shading was then combined with specular shading, which was essentially the same but heavily biased the angle of the surface normal, and had another parameter to control the falloff of the highlight. Although this created a convincing metallic glint (see monkey two in the image below), it was again just an artistic trick and wasn’t based on an actual physical phenomenon. Nevertheless, it stuck, even to this day.
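The specular trick can be sketched as raising that same angular term to a power, where the exponent is the extra falloff parameter mentioned above. This is a toy function of my own using a Blinn-style half-vector, not any specific engine’s formula:

```python
def specular(normal, light_dir, view_dir, falloff=32):
    """Toy specular highlight: bias the angle term heavily by raising it
    to a power; a higher falloff gives a tighter, more metallic glint."""
    # Half-vector between the light and view directions
    half = [l + v for l, v in zip(light_dir, view_dir)]
    length = sum(h * h for h in half) ** 0.5
    half = [h / length for h in half]
    cos_angle = max(0.0, sum(n * h for n, h in zip(normal, half)))
    return cos_angle ** falloff

# Looking straight along the reflection gives a full-strength glint...
print(specular((0, 0, 1), (0, 0, 1), (0, 0, 1)))            # 1.0
# ...and the highlight dies off rapidly as the view tilts away.
print(specular((0, 0, 1), (0, 0, 1), (0, 0.8, 0.6)) < 0.1)  # True
```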

Shading techniques improved when Bui Tuong Phong, a Vietnamese computer scientist, invented the famous Phong shader. His idea was to interpolate the vertex normals across each face to give a gradient of colour through the face. This created much more realistic results (see monkey three) but, again, had no real-world equivalent.
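The interpolation idea can be sketched in a few lines: blend the three vertex normals of a triangle with barycentric weights for each point on the face, so brightness varies smoothly instead of jumping at face edges. The function and weights below are illustrative only:

```python
def interpolated_normal(n0, n1, n2, w):
    """Phong-style shading: mix the triangle's three vertex normals with
    barycentric weights w = (w0, w1, w2), then renormalise, so every
    pixel gets its own normal and hence its own shade."""
    mixed = [w[0] * a + w[1] * b + w[2] * c for a, b, c in zip(n0, n1, n2)]
    length = sum(m * m for m in mixed) ** 0.5
    return tuple(m / length for m in mixed)

# At a vertex, the interpolated normal is just that vertex's normal;
# at the centroid it is a smooth blend of all three.
print(interpolated_normal((1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 0, 0)))
print(interpolated_normal((1, 0, 0), (0, 1, 0), (0, 0, 1), (1/3, 1/3, 1/3)))
```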

The next improvement to the shading model came when people noticed that shadows were being rendered completely black. In real life, global illumination and ambient light bounces mean that almost everything is quite effectively indirectly lit. There was no computationally efficient solution to the problem at the time, so an ambient light constant was added to simply bump up the global lighting (see monkey four). This formed the segue into modern-day rendering, and thus ends our history lesson.

Flat, specular, Phong-interpolated, and ambient light shading
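The ambient fix really is just a constant floor added to the lighting. A minimal sketch, where the 0.1 default and the clamp are arbitrary choices of mine rather than any standard:

```python
def shade(diffuse, ambient=0.1):
    """Add a constant ambient term so unlit surfaces are lifted out of
    pure black, then clamp to the valid brightness range."""
    return min(1.0, ambient + diffuse)

print(shade(0.0))   # 0.1 - a formerly pure-black shadow now shows faintly
print(shade(0.95))  # 1.0 - fully lit areas are clamped at white
```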

The moral of the story is that almost none of these shading approaches had a real-life equivalent, and all the subsequent improvements were built upon a method that considered how to colour in a shape from the point of view of the shape itself. This is fundamentally incorrect – in the physical world, how an object looks (at human scales; forget quantum-mechanical scales) depends on rays of light, emitted by photon-producing objects (e.g. hot objects), bouncing around and losing energy. Energy is absorbed and reflected by materials in very different ways depending on the microsurface imperfections of the material and its chemical properties.

Luckily, in parallel with these artistic shaders, physically based “ray-tracing” rendering engines were also being developed. These ray-tracers traced rays of light between cameras and light sources in the same way the real world works. Back then, they were cool technical demos, but always too inefficient for any practical work. However, it had been shown that, in theory, if you throw enough computing power at the problem, you can get photo-realistic results. Nowadays, of course, everybody knows about ray-tracing and it’s practically the norm in the market. I’ve shown an example below of a chrome monkey reflecting its environment – the textbook example of what ray-tracing can achieve that traditional shaders could not (well, not without hacks, light maps, and whatnot). You can see another example of photo-realistic rendering with Cycles that I’ve done too.

Glossy ray tracing render
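At its core, a ray-tracer repeatedly asks one geometric question: does this ray hit this object, and if so, where? A minimal sketch of the classic ray-sphere test, which engines evaluate millions of times per frame (an illustrative helper of mine, assuming a unit-length ray direction, not code from any engine):

```python
def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit,
    or None on a miss, by solving the quadratic |o + t*d - c|^2 = r^2."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c  # a = 1 because the direction is unit length
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - disc ** 0.5) / 2.0
    return t if t > 0 else None

# A ray fired from z = -5 towards a unit sphere at the origin
# hits the near surface at a distance of 4; a parallel ray misses.
print(intersect_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # 4.0
print(intersect_sphere((0, 5, -5), (0, 0, 1), (0, 0, 0), 1.0))  # None
```

A full ray-tracer then recurses: from each hit point it fires new rays towards lights and reflections, which is where the photo-realism (and the computational cost) comes from.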

Almost every popular rendering engine nowadays, such as Blender Cycles, V-Ray, Maxwell, RenderMan, and Arnold, is a ray-tracer. They are getting faster, and now combine GPU and CPU to provide almost real-time rendering. In recent years, physically based rendering, better real-world scanners, and improved texture painters are three among many advances that make photo-realistic rendering easier and easier.

Basically, photo-realism is becoming really easy. A subtler trend worth noting is that rendering is also becoming more scientifically based. In the past, these ray-tracers, although somewhat physically based, made so many approximations that real-world units were ignored in favour of arbitrary values.

This is important because ultimate photorealism comes from scanning in real-world data at increasing levels of fidelity. Engines, no matter how physically based, will find it hard to use this information if it cannot be easily linked back to physical units and measurable scientific values.

Thankfully, this is improving. Simple things, like wider support for IES lighting profiles or falsecolour luminance images, are starting to be possible with mainstream renderers. The popularisation of the Disney shader is slowly getting engines working on interoperability, and ultimate interoperability, much like ultimate photorealism, depends on scientific values.

At the very least, we know that if we throw more computers at the problem it will eventually converge and leave us with a beautiful real image.

This is great news for architecture – the industry I’m in. Architecture is no stranger to smoke and mirrors when it comes to renders, and a trend towards scientific rendering makes it easier to prototype cheaply while still promising the same results to eager clients.

Until then, let’s play with photoreal game engines and VR while the hype lasts.

Life & much, much more

Architecture’s existential crisis pt 3: Goals, ethics, and the people element

In part 1: Architecture is not a Profession, I outlined architecture’s preoccupation with competing theories rather than acting as a professional discipline and serving society.

In part 2: The Foundations of Architecture, I talked about what architecture is currently based upon, and how to unify these foundations into a governing framework, based on Vitruvius, that encompasses all architectural ideologies.

In this part, I will extend the framework to hint at the goals and responsibilities of a professional discipline.

The role of a framework in determining goals

Whilst outlining the framework gives us a foundation as a profession, it says nothing about the precedence of the welfare, health and safety of the community, as required of a profession. To do this, we need to refine the architectural framework to pinpoint goals in society.

I use the phrase “pinpoint goals in society” because a framework does not prescribe goals. It is descriptive. It doesn’t claim that the profession knows everything about the world and is authorised to make decisions for it. However, by outlining the elemental considerations when people decide on a goal, it is able to influence these goals. (This is still the case even if an element is marked as unimportant.)

The current framework has three elements: structure/firmness, commodity/function, and delight/design/beauty. The first tackles built form itself, whereas the latter two tackle community reactions towards built form. Whilst these latter two elements may address some aspects of the welfare, health and safety of the community, I believe the addition of a fourth, community element may pinpoint this. (Unlike previous attempts to extend Vitruvius’ statement[1], this adds a new element rather than providing detail about existing elements, because providing detail converts the framework into a theory.)

A new element: People are as important as built form

Architecture is a discipline where it is impossible to escape values. It’s radically value-laden. I think it’s possible that you can become an architect and see it as somewhat autonomous and not as a political act, which is incredibly naive. I try to make students aware of the radical, political, cultural, social nature of our work and how it’s impossible to escape those responsibilities.

-Thom Mayne, Morphosis

The element outlined above comes in the form of values and responsibilities. What governs our values, and how we respond to these responsibilities, is ethics (ethics, morality, and ethos (the original Greek) can be used interchangeably). As ethics also fits the requirements of a framework, I propose that ethics be added as a fourth element:

  • It is encompassing. It is based upon people, which is a universal constant for all built forms. Whilst Vitruvius already targets the aesthetic judgement (when people react to firmness), the moral judgement (when people react to commodity) is left unconsidered. Whether or not we are consciously making decisions based on ethics, it will have effects nonetheless.
  • It is descriptive. It does not dictate the alignment of the moral compass but instead just highlights its presence as a quality of an architecture.
  • It is agnostic. All cultures have a moral compass, and so the element applies to all of them. Ethics also covers the relation between groups and individuals, which does not exclude individualistic cultures or the third architectural body, who do what they please.

To further support ethics as an element, we can list some theories that highlight their consideration of ethics: sustainability, where the primary value is that our decisions should not inhibit the opportunities of the future; modernism, where the moral value of truth was translated into an aesthetic quality; and post-modernism, where the populist ethic was rejected[2]. As for older examples, any theory governed by religious or political ideas has by definition shown consideration of ethics.

Ethics is also a useful addition as it fills a gap left by the original three elements (Vitruvius did mention aspects of ethics[3], such as relationships between men, politics, and precepts, but treated them in the form of a prescribed theory, not a framework). The original three elements consider either the built form itself or the relations between people and built form. Noticeably missing are people themselves. A recognition of people themselves is needed to highlight the distinction between the roles buildings play and the roles people play. Ethics covers both people themselves and their relationships to built form.

This coverage of people themselves and their relationships to architecture covers societal aspects: politics, environment and sustainability, humanitarian needs, urban planning, right down to individual clients. Including ethics as an element clearly strengthens the link to the welfare, health and safety of the community – one step closer to an architectural profession.

All architects have two clients whenever they work – one is the person that actually pays the bills, and the other is society in general. I think an architect that doesn’t see they are working for society in general doesn’t know his job.

-Joseph Rykwert, Architectural Historian[4]

Understanding of the interests of society is a prerequisite for ethics to be considered. This means that adding ethics as an element helps encourage consideration of our actions in the interests of others.

Although it is not the job of a framework to govern the application of its elements, it’s important to make sure that it can be applied in the first place, i.e. to ensure that ethics is not “good in theory but not in practice”. This allows the element to be carried into architectural theories, and then implemented in architectural styles. We can demonstrate this by citing religion, as well as agnostic hierarchies of ethical systems[5]. This practical side of the element means that not only can it seed theories, it can also fulfil the framework’s goal as a measurement tool.

Ethics is also complex. Humanity’s historical inability to create a set of simple, non-conflicting rules to govern ethics[5] suggests an NP-complete nature. This means not only can it be applied in practice, it can also take many different forms that will continue to change over time.

This consideration of the interests of others, nature, welfare, health, and safety changes Vitruvius’ framework into a professional framework, i.e. a framework pinpointed in society. We now have a framework consisting of firmness, commodity, delight, and ethics.

  • [1] Watkin, D, 2005, A History of Western Architecture, Laurence King Publishing, London, UK
  • [2] Boje, D. M., Toward a Narrative Ethics for Modern and Postmodern Organization Science, viewed 10 October 2012, http://business.nmsu.edu/~dboje/papers/toward_a_narrative_ethics_for_mo.htm
  • [3] Wotton, H, 1651, The elements of architecture (translated from De Architectura, Vitruvius), Thomas Maxey, London, UK
  • [4] Barbican Five Points for An Ethical Architecture, Architecture Foundation London, viewed 4 October 2012, http://vimeo.com/29281095
  • [5] Singer, P, 2011, Practical Ethics, Cambridge University Press, New York, USA
Life & much, much more

Architecture’s existential crisis pt 1: Architecture is not a profession

A while back (half a year ago), I planned to attempt to solve architecture’s long-lasting “existential crisis”. I thought of creating a framework through which people could understand what a theory is and how to generate new ones. The more overarching goal was to look at architectural theories in a positive and constructive light, rather than as points of debate. However, in this first part, I just want to highlight the symptoms of the problem.

In architecture’s 8,000 or more years of existence, it has had about 112 distinct architectural styles (not counting regional differences)[1]. Each style represents a theory or a subset of one. More than half of these theories were formed in the past 250 years – a mere 3% of architecture’s lifespan. At this rate, you will encounter 10 more architectural theories during your average career. Simply put, architecture has an existential crisis.

But what is an existential crisis? It’s a stage of development at which an individual questions the very foundations of their life: whether it has any meaning, purpose or value.

An existential crisis creates two problems. The first is that we are unable to define goals unanimously as a profession. Without goals, our efforts become divided and ineffective in serving society.

Uncertainty has spilled over into our schools of architecture. Thirty years ago Christian Norberg-Schulz charged that “the schools have shown themselves incapable of bringing forth architects able to solve the actual tasks.” Things are no different today although we are more likely to meet with challenges to the very notion of “the actual tasks”. Do we know what these tasks are?
– Karsten Harries[2]

The second problem is that we lose a professional foundation. We become unable to be disciplined in our actions, to measure standards of success, and to focus on the needs of society. These are all professional traits that a foundation provides.

[A profession is] A disciplined group of individuals who adhere to high ethical standards and uphold themselves to, and are accepted by, the public as possessing special knowledge and skills in a widely recognised, organised body of learning derived from education and training at a high level, and who are prepared to exercise this knowledge and these skills in the interest of others.

Inherent in this definition is the concept that the responsibility for the welfare, health and safety of the community shall take precedence over other considerations.
– Australian Council of Professions[3]

As a result of undefined goals and a non-existent foundation, we get many theories vying for the industry’s attention. However, at any one time only a few theories are marketed as relevant, each describing a certain type of society. Not only does this limit our ability to serve all types of society, but it creates a schism in the architectural body into:

  • Those who apply theory as a discipline to the relevant group in society with shared interests, i.e. part of a profession.
  • Those who apply theory as a discipline without understanding which society it was meant for. Exercising knowledge without considering the interests of others is not part of a profession.
  • Those who disregard theory and do what they please. This lack of discipline is also not part of a profession.

With architectural fame dominated by theoreticians who build, architects are encouraged to critically observe the previous generation’s philosophy and debunk it with their own[4]. This is childish bickering, creating a dog-eat-dog industry where we aggressively defend our individuality and treat it as a good thing.

This lack of discipline and the resulting schism is why I propose that either the current state of the architectural profession is a short-lived movement waiting to be debunked, or we do not have a profession at all. Extending this movement into something timeless and bound by the definition of a profession is how we can solve the existential crisis.

Some might argue that continuously questioning our approach is a sign of dedication to remaining relevant to society, and see it as a good thing. This, however, misses the point: it isn’t about the details of each theory or how they are formed. It is about how theories are marketed.

Architectural theories are marketed as the be-all and end-all of the architectural approach. Although hindsight continuously proves this wrong, our impression of the current theory renders past theories outdated and somehow irrelevant. Our resistance to change then fixates our attention on the theoretical differences between past and present, leading to arguments. This hinders our ability to see larger goals.

Additionally, despite increasingly and continuously questioning our foundations, we are still unable to outline goals or a foundation. If we continue generating theories at a rate of one every 3-4 years, without being able to identify any one of them as correct or still relevant today, then perhaps we are searching in the wrong place.

Coming in part 2: What are the foundations of architecture and why are they inappropriate?

  • [1] Timeline of architectural styles, Wikipedia, viewed 11 September 2012, http://en.wikipedia.org/wiki/Timeline_of_architectural_styles
  • [2] Harries, K, 2000, The Ethical Function of Architecture, Massachusetts Institute of Technology, USA
  • [3] Definition of a Profession, Australian Council of Professions, viewed 4 October 2012, http://www.accc.gov.au/content/index.phtml/itemId/277772
  • [4] Breitschmid, M, Architecture & Philosophy: Thoughts on Building, Blacksburg, Virginia, USA
Uncategorized

History of the Internet

A short note by Dion Moult: Hello there readers! Today I present to you a guest article by none other than a guest writer – namely Nathan from Inkweaver Review (link below). If you want your article to be published on thinkMoult, simply drop me an email and we’ll have a chat :)

Today the Internet is such a seamless service that it seems to fit into human life as if it had always been there, but the truth is that it is only about forty years old. The Internet began in the 1960s, when early packet switching networks were created. Packet switching is the process of dividing data into chunks that are sent one at a time. Prior to this concept it was very difficult to route data through a network. For example, if a computer were to send a file, it would be sent as a continuous stream of data, usually in analog format. Computers would have a hard time determining where the stream began and ended, which made it nearly impossible to route the stream through a network. With packet switching, however, it becomes much easier to package data and then address it, so that computers can tell where it should be sent.
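The idea can be sketched in a few lines: chop a byte stream into fixed-size chunks, tag each with a sequence number so it can travel through the network independently, and reassemble the chunks in order at the destination. This is a toy illustration only; real protocols add addresses, checksums, and retransmission:

```python
import random

def packetize(data, payload_size=4):
    """Split a byte stream into (sequence number, payload) packets."""
    return [(seq, data[i:i + payload_size])
            for seq, i in enumerate(range(0, len(data), payload_size))]

def reassemble(packets):
    """Packets may arrive in any order; sort by sequence number first."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"HELLO, ARPANET"
packets = packetize(message)
random.shuffle(packets)  # simulate packets taking independent routes
print(reassemble(packets) == message)  # True
```

Because each packet carries its own sequence number and (in real protocols) its own address, any node can forward it without seeing the whole stream — which is exactly the resilience property described below.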

In 1964 the advantages of digital packet switching were first explored, with the purpose of creating a communications network that had wide connectivity and could survive the failure of any of its nodes. This was visualized in a series of papers titled On Distributed Communications. The idea was very attractive for military purposes, as it promised a network that could easily survive attack: messages could be forwarded around any computer or server that had failed or been destroyed. The first internet application was therefore a military network called ARPANET, developed during the height of the Cold War to make communication possible in the case of a widespread nuclear attack.

In 1979, however, a new network called USENET emerged, initially carried over dial-up UUCP links between major universities so that they could share resources and data from experiments. This network was also used for bulletin board systems and email. College students found that they could use USENET for their own purposes. Although the ARPANET administrators forbade the use of their servers for discussions on subjects such as drugs and sex, innovative students found ways to set up their own servers to host such discussions. The development of personal computers made it possible for a much larger group of people to connect to the developing internet.

The Internet as we know it today began at CERN with the innovation of Tim Berners-Lee. He wanted to set up a system that would work well between different types of computers. He built on the idea of hypertext: text documents with links and other active content that paper documents could not have. This system, called the World Wide Web, works on top of the infrastructure that is the Internet. Today it has absorbed such a large portion of internet use that most people confuse the World Wide Web with the Internet itself.

To summarize, the Internet began as a military network designed to distribute information in the case of an enemy attack. From there it expanded to include educational institutions that wanted a good way to share data. Then college students began using the network for bulletin boards, chat, and other applications, paving the way for the Internet that we know today.

For more information see:

Guest Post by NathanKP of Inkweaver Review.