Doing a Blender panoramic render isn’t easy, because you can’t see what you’re going to create in the regular OpenGL-shaded 3D view. Even when looking through the Blender camera, you won’t see what you’ll end up with.
Blender panoramic renders with Blender Internal renderer
The technique itself is rather simple: select the camera, then in the object data settings panel click the Panoramic button and play with the focal length until you get something you like. You can see an example of a simple Blender panoramic render setup in the screenshot below:
You’ll soon discover that you can only see what you’re going to get when you hit the render button. If you’re using the internal Blender renderer, this becomes a pain very quickly, because you have to constantly hit render and wait for the image to appear.
Blender panoramic renders with Cycles renderer
However, if you switch the scene to the Cycles renderer, things become easier, because Cycles provides a real-time rendered preview. Just set one window to rendered view and you’ll see (roughly) what you’re going to get in real time. You’ll also notice that Cycles opens up new options for what your panoramic render should look like: the type of panorama – whether it’s fisheye, equidistant, or equirectangular – and how many degrees the field of view covers.
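If you prefer to set this up from a script, the same camera settings can be driven through Blender’s Python API. The helper below is a sketch: the attribute names follow `bpy`’s camera datablock (in Blender you’d pass `bpy.context.scene.camera.data`), and the default panorama type and field of view are just example values.

```python
import math

def make_panoramic(cam_data, pano_type="FISHEYE_EQUIDISTANT", fov_degrees=180.0):
    """Configure a Blender camera datablock for a Cycles panoramic render.

    Sketch only: attribute names follow Blender's Python API; inside
    Blender you'd call make_panoramic(bpy.context.scene.camera.data).
    """
    cam_data.type = 'PANO'                     # switch from perspective to panoramic
    cam_data.cycles.panorama_type = pano_type  # e.g. FISHEYE_EQUIDISTANT, EQUIRECTANGULAR
    cam_data.cycles.fisheye_fov = math.radians(fov_degrees)  # Cycles wants radians
```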
Now comes the fun part, actually building your scene. The best strategy is to have planned out your scene beforehand on paper so you know its composition. With panoramic renders, if you haven’t already got a scene built, now is your chance to bend reality in a Dali-esque manner and have fun with perspective.
Once your composition is planned out, create simple cubes and planes as proxy objects for the rough locations of where your scene objects are going to be. Then slowly replace them with the more detailed meshes of what you want to make.
Another useful tip is to use Blender’s border render. In Cycles, when in camera view, just press Shift-B and drag to select a portion of the view, and only that area will be rendered. This keeps rendering times low. There are plenty of other tips on reducing rendering times, so I won’t repeat them all here.
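The same border can be set from a script. Again a sketch following Blender’s render settings API (you’d pass `bpy.context.scene`); coordinates are fractions of the camera frame in [0, 1].

```python
def set_render_border(scene, min_x, min_y, max_x, max_y):
    """Render only a sub-rectangle of the camera view, like Shift-B does
    interactively. Sketch: attribute names follow Blender's render API;
    coordinates are fractions in [0, 1]."""
    scene.render.use_border = True
    scene.render.border_min_x = min_x
    scene.render.border_min_y = min_y
    scene.render.border_max_x = max_x
    scene.render.border_max_y = max_y
```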
Below you can see a panoramic render I did a few months ago which uses the above techniques. As you can see, I’ve blended several scenes together with varying perspectives, which was only possible with a panoramic render.
Good luck, and create those awesome renders!
This isn’t entirely true. It doesn’t track the hand; it tracks a bright blue bottle cap I found on the floor. More truthfully still, it tracks anything bright blue. But enough chat, here’s a demonstration. Just click the small button with the dash in it to get started, grab something blueish and wave it in front of your camera. It won’t work as well as it did for us, since we adjusted it for specific lighting conditions, but it’s enough as a proof of concept.
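The core of “track anything bright blue” is just colour thresholding plus a centroid. Here’s a minimal sketch of that idea in plain Python; the threshold values are assumptions (the real demo was hand-tuned to our lighting), and the frame is a simple list-of-rows of RGB tuples rather than a camera capture.

```python
def track_blue(frame, ratio=1.5):
    """Return the (x, y) centroid of 'bright blue' pixels, or None.

    frame: list of rows, each row a list of (r, g, b) tuples, 0-255.
    A pixel counts as blue when its blue channel is bright and dominates
    red and green by `ratio`. Thresholds are illustrative assumptions.
    """
    xs = ys = n = 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if b > 100 and b > ratio * max(r, g, 1):
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None          # nothing blue in view
    return (xs / n, ys / n)  # average position of the blue blob
```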
We’ve also got pretty pictures to show. Here’s one of the quick n’ dirty strap we used to embed the bottle cap in.
And here is one of it in action.
You can see the (bad, hackish, thrown together) code for it in my playground repository on GitHub.
A few months ago, Chris Paplinski, Nathan Charrois, Kaushal Inna, Andre Brokman, Kelsie Rose and I, Dion Moult, gathered to create a company. Today, we would like to present to the world: SevenStrokes.
SevenStrokes is a web development company but with a few key differences.
- Firstly, we see websites as a service, not a product. We don’t just build a website, we treat it as part of your larger corporate strategy.
- We build systems that mirror your day-to-day domain. We use a combination of behavior-driven development and a code architecture that employs the same daily language that you do. This ensures our system makes sense not just in the software world but in real life, and thus always moves a step towards achieving your corporate goals.
- We follow many aspects of the open-source business model, ensuring that we assign the most motivated staff that want your site to succeed just as much as you do, and that full inspection guarantees your system integrity.
- We push for the latest industry standards and keep on pushing, even after launch. Websites are usually short-lived, but we’re changing that with a system architecture that maximises long-term life.
So what are you waiting for? Do you need a website built? Do you need somebody to help spearhead your latest online initiative? Check out SevenStrokes: Building websites … a little differently
Whilst Linux is an excellent system for programmers, it’s certainly a little wanting for people who deal with creative graphics. There are tools like Krita, GIMP, Inkscape, Blender and Digikam and so on to help fill this gap, but one area which isn’t talked about so often is CAD tools. As an architecture student and a Linux user, I can safely say that the options are disappointing. It certainly is possible to have a complete graphics workflow on Linux, but it’s not as good as on Windows.
There were always CAD packages around such as FreeCAD and QCAD (I believe rebranded to LibreCAD) and its various derivatives, but they were all slow and not particularly powerful. However for the past few years, I’ve enjoyed DraftSight.
Firstly, a disclaimer: DraftSight is not open-source. It is certainly free to use and works very well on all platforms, but it is backed by a commercial company (Dassault Systemes), is financed through an enterprise license, and certainly has no obligation to the community.
However, perhaps the reason DraftSight is so much more powerful than the open-source alternatives is that it has a very clear goal: it wants to clone AutoCAD. Unlike GIMP, which has tried to define a new paradigm for itself, DraftSight keeps users in a familiar environment.
If you are on Gentoo Linux, I am maintaining the DraftSight package and as of May earlier this year, it is available in the betagarden overlay. For more information, you can see this bug.
Hardware isn’t generally my thing. When it comes to software, I like to break and create, but in my opinion, hardware should just work. That’s another story altogether, but it explains my apprehension when I greeted the UPS guy one morning delivering a BeagleBone Black.
Let’s begin with the BBB. It’s a computer the size of a credit card, which isn’t that impressive if you realise that your phone is a computer. I find the best way to explain it is in terms of two other products, the Arduino and the Raspberry Pi. The Arduino is a similarly sized controller (it comes in several sizes) where you can upload scripts, plug in a hardware circuit (wires and lightbulb, that sort of thing), and have it control the circuit. Despite its power in hardware control, it only has a small scripting interface for you to do your programming. The Raspberry Pi is the opposite. It’s a full Linux computer (based on Debian), but does not have proper hardware controls out of the box. The BBB provides the best of both worlds: a full Linux system (Angstrom Linux, but of course you can flash your own), and a ridiculous number of IO pins to control circuits. All this awesome power for 45 USD.
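Since the BBB runs full Linux, those IO pins can be driven from ordinary code through the kernel’s sysfs GPIO interface, which the stock Angstrom image exposes. A minimal sketch (pin number is an example, and it assumes the pin has already been exported and set as an output):

```python
import os

def set_gpio(pin, high, gpio_root="/sys/class/gpio"):
    """Drive a GPIO pin via the standard Linux sysfs GPIO interface.

    Sketch only: assumes the pin was already exported
    (echo <pin> > /sys/class/gpio/export) and configured as an output.
    """
    path = os.path.join(gpio_root, "gpio%d" % pin, "value")
    with open(path, "w") as f:
        f.write("1" if high else "0")  # "1" drives the pin high, "0" low
```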
The next step upon receiving this wonderboard was obvious. Let’s build a swarm of robots. Along with two university friends, Lawrence Huang and Gloria Nam, we set out planning the system.
The base was to be constructed out of a 1200x1200mm plywood board cut into a circle with a hole in the middle. This would be the “world” the robot swarm would live on. This world would operate like a Lazy Susan, and would have two depots filled with some sort of resource: one at the center, and one at the perimeter. This gave the colony a purpose: it would need to collect resources. Above the board was where we would put the computer, BBB, power supply, and cables to hook up all the bots below.
We then had to determine the behavior and movement capabilities of the swarm. It had to act as one, but still remain separate entities. It also had to disperse to discover where the rotated resource depots were, and the swarm as a whole had a set of goals and quota limitations. Five movement types (along with the math) were worked out to allow the bots smooth and flexible movement across the terrain.
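I won’t reproduce all five movement types here, but for a twin-tread bot the underlying math boils down to differential-drive kinematics: pick a forward speed and a turn rate, and the two tread speeds fall out. A minimal sketch (the track width and units are assumptions, not our actual bot’s dimensions):

```python
def tread_speeds(v, omega, track_width):
    """Differential-drive kinematics sketch.

    v           -- desired forward speed (m/s)
    omega       -- desired turn rate (rad/s, positive = counter-clockwise)
    track_width -- distance between the two treads (m)

    Returns (left, right) tread speeds: turning is produced by the
    speed difference between the treads.
    """
    left = v - omega * track_width / 2.0
    right = v + omega * track_width / 2.0
    return left, right
```

Driving straight means equal speeds; spinning in place means equal and opposite ones.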
The overmind was next. We would use Blender’s very flexible boid simulator along with custom Python scripts using Blender’s Python API to simulate the swarm behavior on the computer and set swarm goals. At the same time, a real-time top-down view could be generated and displayed. Due to budget reasons, we couldn’t build the entire swarm of robots, but instead settled on building just one bot in the swarm, and having this bot track the motions of a single bot on the computer screen, but still behave as part of the full 32-robot swarm on the screen. Viewers could then see the full swarm behavior on the screen, and physically see a single bot’s behavior in front of them.
The car itself was then built. It was relatively small and was barely enough to fit the two continuous-rotation servo motors that were required to power its left and right treads. It had a little tank on its top to hold resources, a depositing mechanism at its front, and dragged along a massive conveyor belt to collect resources behind it.
Now the fun part – calibrating the simulated swarm with the actual physical swarm behavior, and doing all the physical PWM circuits. Many sleepless nights later it was a success. Here we see the bot doing a weird parking job into the depot and collecting resources, going back to the center, and depositing it. Apologies for the lack of video.
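For the PWM side, continuous-rotation servos follow the common hobby-servo convention: a 1.5 ms pulse means stop, shorter pulses spin one way and longer the other. The mapping below is a sketch of that convention; the endpoints are the textbook values, and real servos need per-unit calibration (hence the sleepless nights).

```python
def servo_pulse_ms(speed):
    """Map a speed in [-1, 1] to a pulse width for a continuous-rotation
    servo: 1.0 ms = full reverse, 1.5 ms = stop, 2.0 ms = full forward.
    Textbook endpoints; real servos drift and need calibration."""
    speed = max(-1.0, min(1.0, speed))  # clamp to the valid range
    return 1.5 + 0.5 * speed
```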
And there we have it. A swarm of robots. Did it curb my fear of hardware? Not entirely.
For those interested in the actual system, here’s a macro overview:
A few extra fun things from the project:
- Calibration was not easy. Actually, it was very hard. No, it was stupidly hard. It was ridiculously hard. Real life has so many uncertainties.
- Each bot is tethered to the overmind via 8 wires (3 per tread, 2 for conveyor belt). Could it be made into a wireless swarm? Yes. Did we have the money? No.
- Could it be modified to move in 3D XYZ space like a swarm of helicopters? Yes. Would I do the math for it? No.
- The actual simulation was done on the computer via Blender + custom Python scripts. The computer was then connected via a persistent master SSH connection, which was reused to send simple signals that toggled the BBB’s pins. So all in all the BBB actually didn’t do much work. It was just a software-to-hardware adapter.
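The persistent-connection trick is OpenSSH’s ControlMaster. Roughly, it looks like this (the hostname, user, socket path, and pin number here are all assumptions, not our actual setup):

```shell
# Open one persistent master connection to the BBB and background it
ssh -M -S /tmp/bbb.sock -fN debian@beaglebone.local

# Reuse the master for each signal: no new handshake per command
ssh -S /tmp/bbb.sock debian@beaglebone.local \
    'echo 1 > /sys/class/gpio/gpio60/value'

# Tear the master down when done
ssh -S /tmp/bbb.sock -O exit debian@beaglebone.local
```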
- Because the computer was doing all the work, it wasn’t hard to add network hooks. This meant we could actually control the system via our phones (which we did).
- Weirdest bug? When (and only when) we connected the computer to the university wifi, flicking a switch 10 meters away in a completely separate circuit (seriously, completely separate) would cause the BBB to die. Still completely confused and will accept any explanation.
- Timeframe for the project? 4 weeks along with other obligations.
- Prior hardware and circuit experience: none. Well. Hooking up a lightbulb to a battery. Or something like that.
- Casualties included at least three bot prototypes, a motor, and at least 50 Styrofoam rabbits (don’t ask).
- Why are all these diagrams on weird old paper backgrounds? Why not?
- At the time of the project, the BBB was less than a month old. This meant practically no documentation, and lack of coherent support in their IRC channels. As expected, this was hardly a good thing.
Project success. I hope you enjoyed it too :)