
PTEX & OpenCL - or How Steve Jobs' Companies Are Changing 3D

Something amazing came through my ticker today - something that is a game changer and, together with another technology, will change the way I work and make it much more enjoyable.

First, some basics for those who have no clue what I am talking about. There are basically six steps to get to a final 3d picture.

1. Modelling: Multiple approaches get you to a mesh model that consists - in the end, mostly - of polygons. You can scan, you can push little points around in 3d space, you can use mathematical formulas to create, subtract or add simple forms, or other formulas to round edges, revolve lines or extrude them. The end product is almost always a polygon mesh with x amount of polygons - the more there are, the higher the resolution of the model and the closer you can look at it. About ten years ago a really nice way to model high resolution meshes came into existence, called SubDivision Surfaces, which lets you model a coarse low resolution model - much easier to understand and alter - and then generate a highres model out of it. That was the first real game changer in the 3d industry and the reason why character modelling became so "easy" and so many people are doing so many great models.
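To give a feel for why subdivision is such a win: you only push around a small number of coarse control points, but the face count of the smooth result grows geometrically with each subdivision level. A minimal sketch (assuming a quad mesh and Catmull-Clark style subdivision, where every quad becomes four quads per level):

```python
# Rough sketch: how the polygon count explodes when you subdivide.
# Catmull-Clark style subdivision turns every quad into 4 quads per
# level, so a coarse cage of a few hundred faces yields a smooth
# highres mesh without anyone modelling the detail by hand.

def subdivided_face_count(coarse_faces: int, levels: int) -> int:
    """Face count of a quad mesh after `levels` subdivision steps."""
    return coarse_faces * 4 ** levels

# A hypothetical 500-quad character cage after 4 subdivision levels:
print(subdivided_face_count(500, 4))  # -> 128000
```

So the artist edits 500 faces and the renderer sees 128,000 - that is the whole trick.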

2. UV Preparation: Now, a model made out of triangles looks less than realistic of course, so you need to tell the program what kind of material is on the model. A lot of options are available here, but especially for film work and characters you want something realistic, and you get that by taking something realistic - like a photo - and altering it so that it fits on your model (or you paint it from scratch). For such a picture to be put onto the model, you need to flatten the model out into a two dimensional surface. You can imagine this like taking a dead animal, skinning it and then making the skin lie flat. Like so:
(There is actually a program that stretches the "hides" pretty similarly to this very analog process.) It's a very dull process on a complex model - mostly you have to take your nice model apart and do all kinds of voodoo to get it artifact free. No fun, and certainly not really creative.
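A minimal sketch of what a UV map actually is, assuming the simplest possible "unwrap": project every vertex straight onto the XY plane and squash the result into the 0-1 texture square. Real unwrapping tools have to cut seams and fight stretching and distortion, which is exactly the tedious part - this only shows the core idea:

```python
# Hypothetical minimal "unwrap": planar projection of 3d vertices onto
# the XY plane, normalized into the 0..1 UV square. Real UV tools must
# cut seams and minimize stretching - this is only the core idea.

def planar_uv_map(vertices):
    """vertices: list of (x, y, z) tuples -> list of (u, v) in [0, 1]."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0
    span_y = (max(ys) - min_y) or 1.0
    return [((x - min_x) / span_x, (y - min_y) / span_y)
            for x, y, _ in vertices]

# A single slanted quad in 3d space:
quad = [(0, 0, 5), (2, 0, 5), (2, 1, 4), (0, 1, 4)]
print(planar_uv_map(quad))  # corners land on (0,0), (1,0), (1,1), (0,1)
```

The moment the surface curves back on itself, a single projection like this produces overlaps - and that is where the manual pain begins.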

3. Texturing: Once you have your nice model with a more or less nice UV map, you start to apply your texture - photo, programmatic, or a mixture of both. There is a lot of "fun" to be had here as you add little bumps and tell the software how shiny, reflective and refractive the model is, plus lots of other things I don't really want to go into - but it's a nice step in general.
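Under the hood, "applying" a texture just means: for every surface point, use its (u, v) coordinate from step two to look up a color in the image. A sketch with a made-up 2x2 "texture" and the simplest lookup there is (nearest neighbour; real renderers filter):

```python
# Sketch of the texturing step: once a surface point has a (u, v)
# coordinate, the renderer samples the texture image there. This is a
# nearest-neighbour lookup on a tiny made-up 2x2 "texture".

texture = [["red", "green"],
           ["blue", "white"]]  # texture[row][col], row 0 is v near 0

def sample(texture, u, v):
    """Nearest-neighbour texture lookup for u, v in [0, 1]."""
    h, w = len(texture), len(texture[0])
    col = min(int(u * w), w - 1)
    row = min(int(v * h), h - 1)
    return texture[row][col]

print(sample(texture, 0.9, 0.1))  # -> green  (right side, near v = 0)
```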

4. Light & Camera: Without light there wouldn't be anything visible. So you set up some virtual lights which act and react just like the different kinds of light sources you find in reality, plus some other tricks that don't exist in reality but can add to a realistic picture. You also set up a camera - your virtual eye - which again acts just like a photographic camera in real life (almost). Both a creative and fun process.
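"Acts like a real light" boils down to simple geometry in the end. The classic Lambert model - a sketch, not what any particular renderer ships - says a matte surface gets brighter the more directly the light hits it, following the cosine of the angle between surface normal and light direction:

```python
# Sketch of how a virtual light "acts like" a real one: the classic
# Lambert diffuse model. Brightness falls off with the cosine of the
# angle between the surface normal and the direction to the light.

def lambert(normal, light_dir, intensity=1.0):
    """Diffuse brightness for unit-length normal and light direction."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return intensity * max(0.0, n_dot_l)  # no light from behind

up = (0.0, 0.0, 1.0)
print(lambert(up, (0.0, 0.0, 1.0)))   # light from straight above -> 1.0
print(lambert(up, (0.0, 0.0, -1.0)))  # light from below -> 0.0 (shadow side)
```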

5. Animation: Then you animate your model - push objects around, apply physics, deform the model. You can either do that by hand or get animation data from Motion Capture - you might have seen those people in black suits with pingpong balls attached to them, or faces with dots all over them. This step is both fun and frustrating, with hand made and captured data alike. The human eye is so susceptible to small problems in movement that not even a certain 500 million dollar production can fully perfect this step and get it realistically convincing.

6. Render: Then comes the process that is mostly free of human intervention but not free of hassles and frustration: the rendering. On Avatar it could take up to 50 hours per frame on a stock normal computer. At 24-25 frames per second (or double that for stereo 3d) you get an idea how much processing power is needed. And if you make a mistake - render it all over again. Rendering is also a complex mathematical problem and there are bound to be errors in the software, so prepare for the worst here.
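The arithmetic behind "you get an idea how much processing power is needed", using only the numbers above (50 hours per frame, 24 frames per second, doubled for stereo):

```python
# Back-of-the-envelope render budget at the numbers quoted above:
# up to 50 hours per frame, 24 frames per second of footage, and
# twice the frames for stereo 3d.

HOURS_PER_FRAME = 50
FPS = 24

def render_hours(seconds_of_footage, stereo=False):
    """Total machine-hours to render the given amount of footage."""
    frames = seconds_of_footage * FPS * (2 if stereo else 1)
    return frames * HOURS_PER_FRAME

# A single second of stereo footage at the worst-case frame time:
print(render_hours(1, stereo=True))  # -> 2400 machine-hours
```

2400 machine-hours for one second of film is why these productions run on render farms with thousands of machines rather than on anyone's workstation.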

Now why am I telling you all this? Well, one step, it seems, has just been eliminated. Progress in the field of visual effects is very erratic - you have 2-4 years of no progress at all, and then all of a sudden a floodgate opens and something changes dramatically, or multiple things do. I would say we had a quiet period the last 2-4 years, mostly because the development of the really cool stuff was "inhouse": really smart programmers were hired by the big VFX companies to build certain things for certain needs and problems. A lot of problems in the above pipeline have, I think, already been solved but have never seen the light of the broader world - they stayed, and sometimes even died, within certain companies. It's really frustrating: the software companies struggled with the most basic problems (plagued by slow sales and a bad economy), and then you see Pirates of the Caribbean, for example, where they completely figured out how to motion capture live actors (record their movement) on set with no real special equipment - and that technology is still only available behind the locked doors of Industrial Light & Magic. For me as an artist, a valuable tool has been created that I could do cool stuff with, but I can't get my hands on it because of corporate policies.
So it's REALLY amazing to see that Disney - the intellectual property hoarding company for whom copyright law has been rewritten at least once - is releasing a software/API/file standard as open source as of today. Code that no less promises to completely eliminate step two of my list above. In their own words, they have already produced one short animation and are in the middle of one full feature animation completely without doing any UV mapping. I can only try to explain the joy this brings me. UV mapping has been my biggest hurdle to date - I never really mastered it - I hated it. It's such a painstakingly long, tedious process; I normally used every workaround I could find to avoid it. It's crazy to think they have finally figured out a way to get there without it, and I think this will drop like a bomb into every 3d app and supporting 3d app on the market within a year (a bit of wishful thinking here) - at least I can hope it does, and I hope that Blender, Autodesk and SideFX are listening very closely.
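The core idea behind Ptex, as I understand it from the white paper (this is a conceptual sketch, NOT the real Ptex API): instead of one big texture addressed through a hand-made UV map, every face of the mesh carries its own small texture, addressed by a face index plus local coordinates on that face - so there is simply nothing left to unwrap:

```python
# Conceptual sketch of the per-face texturing idea behind Ptex
# (NOT the actual Ptex library API): each face stores its own small
# texel grid, looked up by (face_id, u, v) with u, v local to the
# face. No global UV atlas, hence no unwrapping step at all.

class PerFaceTextures:
    def __init__(self):
        self.faces = {}  # face_id -> 2d list of texels

    def paint_face(self, face_id, texels):
        self.faces[face_id] = texels

    def lookup(self, face_id, u, v):
        texels = self.faces[face_id]
        h, w = len(texels), len(texels[0])
        return texels[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

tex = PerFaceTextures()
tex.paint_face(7, [[0.2, 0.8], [0.5, 1.0]])  # a 2x2 patch on face 7
print(tex.lookup(7, 0.9, 0.9))  # -> 1.0
```

The real system also handles filtering across face borders so no seams show - that is the hard part the white paper is actually about.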
Combine that with the recent advancement in render technology of using OpenCL (developed and released as part of Snow Leopard by Apple and made an open standard, with ports for Linux and Windows now available) to render partially on the graphics card (GPU) - which speeds up rendering by up to 50 times. That means a frame from Avatar takes only one hour to render instead of 50 - or, in a more realistic case: the current render time for an HD shot here is 2-5 minutes on average, and that's cut down to somewhere between 10 seconds and a minute, which would actually make rendering a fun part of the process.
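The speedup numbers above, spelled out - with "up to 50x" as the best case and a more modest factor for the everyday HD shot:

```python
# The claimed GPU speedups applied to the render times quoted above.
# "Up to 50 times" is a best case; a more modest factor is shown too.

def sped_up(minutes, factor):
    """Render time in minutes after dividing by a speedup factor."""
    return minutes / factor

print(sped_up(50 * 60, 50) / 60)  # 50 h Avatar frame at 50x -> 1.0 hour
print(sped_up(5, 10) * 60)        # 5 min HD shot at a modest 10x -> 30.0 s
```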
Now we all know who is behind both companies releasing and opening all that up: the mighty Steve Jobs. You could almost say there is an agenda behind it to make 3d a much more pleasurable, creative kind of work than it currently is - maybe Mr. Jobs wants us all to model and render amazing virtual worlds to inhabit where he can play god ;)
Good times indeed.

What's left? Well, animation is still not completely worked out, but with muscle simulation and easy face and bone setups it has become easier over the past years - still a hideously tedious process to make it look right, and I don't know if there will ever be a solution for it as revolutionary as PTEX. Motion sensors might help a bit in the near future, as might techniques that make models physically accurate, so that things can't intersect each other and gravity is applied automatically. High quality texture maps that hold up to very, very close scrutiny are still memory hogs and burn down the most powerful workstations. The rest will get better with faster, bigger, better computers, as always (like all the nice lighting models that are almost unusable in production to date because they render too long). Generally, with the UV mapping and rendering problems out of the picture, we are so much further along that I might get back into 3d much, much more.

ptex.us - the official website
The PTEX white paper
PTEX sample objects and a demo movie

Disclaimer: I have been doing 3d since 1992, when I rendered a 320x240 scene of two lamps on an Amiga 2000 with raytracing - it took 2 days to render. My first animation in 1993 took a month to render. Then I switched to Macintosh (exclusively) in 1995 and did 3d on it for a while. It was so frustrating that I never made a serious effort to get really good at it - now I still do it alongside compositing / VFX supervision, but rather as an add-on and for previz than as main work.

