
16.01.10

PTEX & OpenCL - or How Steve Jobs' Companies Are Changing 3D

Something amazing came through my ticker today - something that is a game changer and that, together with another technology, will change the way I work and make it much more enjoyable.

First, some basics for those who have no clue what I am talking about. There are basically six steps to get to a final 3d picture.

1. Modelling: Multiple approaches get you to a mesh model that consists - in the end, mostly - of polygons. You can scan, you can push little points around in 3d space, you can use mathematical formulas to create, subtract or add simple forms, or use other formulas to round edges, revolve lines or extrude them. The end product is almost always a polygon mesh with x amount of polygons - the more polygons, the higher the resolution of the model and the closer you can look at it. About ten years ago a really nice way to model high resolution meshes came into existence, called subdivision surfaces, which lets you model a coarse resolution model that is much easier to understand and alter and then generate a highres model out of it - that was the first real game changer in the 3d industry and the reason why character modelling became so "easy" and so many people are making so many great models.

2. UV Preparation: Now, a model made out of triangles looks less than realistic, of course, so you need to tell the program what kind of material is on the model. A lot of options are available here, but especially for film work and characters you want something realistic, and you get that by taking something realistic - like a photo - and altering it so that it fits on your model, or by painting it from scratch. For such a picture to be put onto the model, you need to flatten the model out into a two dimensional surface. You can imagine this like taking a dead animal, skinning it and then making the hide flat. Like so:
(image: hide.jpg)
(There is actually a program that stretches the "hides" in a way pretty similar to this very analog process.) It's a very dull process on a complex model - mostly you have to take your nice model apart and do all kinds of voodoo to get it artifact free. No fun, and certainly not really creative.

3. Texturing: Once you have your nice model with a more or less nice UV map, you start to apply your texture - a photo, something programmatic, or a mixture of the two. There is a lot of "fun" to be had here as you add little bumps and tell the software how shiny, how reflective and how refractive the model will be, plus lots of other things I don't really want to go into - but it's a nice step in general. (The first sketch after this list illustrates the mesh, UV map and texture-lookup idea behind steps 1-3.)

4. Light & Camera: Without light there wouldn't be anything visible. So you set up some virtual lights, which act and react just like the different kinds of light sources you find in reality, plus some other tricks that don't exist in reality but can add to a realistic picture. You also set up a camera - your virtual eye - which again acts just like a photographic camera in real life (almost). Both a creative and fun process.

5. Animation: Then you animate your model - push objects around, apply physics, deform your model. You can either do that by hand or get animation data from motion capture - you might have seen those people in black suits with pingpong balls attached to them, or faces with dots all over them, for example. This step is both fun and frustrating - with hand made and captured data alike. The human eye is so sensitive to small problems in movement that not even a certain 500 million dollar production can fully perfect this step to the point of being realistically convincing. (A minimal keyframe sketch follows after this list.)

6. Render: Then comes the process that is mostly free of human intervention but not free of hassles and frustration: the rendering. On Avatar this could take up to 50 hours per frame on a stock, normal computer. At 24-25 frames per second (or double that in the case of stereoscopic 3d) you get an idea how much processing power is needed - some back-of-envelope math follows below. And if you make a mistake - render it all over again. Rendering is also a complex mathematical problem and there are bound to be errors in software, so prepare for the worst here.
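
For the technically curious, here is a minimal Python sketch of what steps 1-3 boil down to under the hood - a mesh as vertices plus faces, a UV coordinate per vertex, and a texture lookup. All names and numbers are made up for illustration; real packages are vastly more involved:

```python
import numpy as np

# Step 1: a minimal mesh - one square face made of four vertices.
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = [(0, 1, 2, 3)]  # each face is a tuple of vertex indices

# Step 2: a UV map gives every vertex a 2d coordinate in [0, 1].
# Trivial for one flat quad - the pain starts when hundreds of faces
# must be unfolded without overlaps or stretching.
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# Step 3: texturing - look up a color in an image via a UV coordinate.
texture = np.random.rand(256, 256, 3)  # stand-in for a painted/photo texture

def sample_texture(u, v):
    """Nearest-neighbor texture lookup at UV coordinate (u, v)."""
    h, w, _ = texture.shape
    return texture[int(v * (h - 1)), int(u * (w - 1))]

print(sample_texture(0.5, 0.5))  # color at the center of the quad
```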
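
And a tiny sketch of what "animating by hand" (step 5) means at the lowest level - keyframes plus interpolation. I am using plain linear interpolation for simplicity; real packages use spline curves and easing:

```python
# Step 5 at its lowest level: keyframes plus interpolation.
keys = {0: 0.0, 24: 10.0}  # frame -> x position (keys one second apart at 24fps)

def x_at(frame):
    """Linearly interpolate the x position between the two keyframes."""
    (f0, v0), (f1, v1) = sorted(keys.items())
    if frame <= f0:
        return v0
    if frame >= f1:
        return v1
    t = (frame - f0) / (f1 - f0)
    return v0 + t * (v1 - v0)

for f in (0, 6, 12, 24):
    print(f, x_at(f))  # 0.0, 2.5, 5.0, 10.0
```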
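
To put the render numbers from step 6 into perspective, some back-of-envelope math using the 50 hours per frame figure quoted for Avatar's worst frames:

```python
# Rough render budget for one minute of footage at Avatar's worst-case rate.
hours_per_frame = 50             # quoted worst case on a single stock machine
fps = 24
frames = fps * 60                # one minute of final footage

machine_hours = frames * hours_per_frame
print(machine_hours)             # 72000 machine-hours
print(machine_hours / 24 / 365)  # ~8.2 machine-years - hence the render farms
```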

Now why am I telling you all this? Well, one step, it seems, has just been eliminated. Progress in the field of visual effects is very erratic - you have 2-4 years with no progress at all and then all of a sudden a floodgate opens and something changes dramatically, or multiple things do. I would say we had a quiet period over the last 2-4 years - mostly because the development of the really cool stuff happened "inhouse", meaning that really smart programmers were hired by the big VFX companies to program certain things for certain needs and problems. A lot of problems in the above pipeline are already solved, I think, but have never seen the light of the broader world and instead stayed - and sometimes even died - within certain companies. It's really frustrating: the software companies struggle with the most basic problems (plagued by slow sales and a bad economy), and then you see Pirates of the Caribbean, for example, where they completely figured out how to motion capture live actors (record their movement) on set with no real special equipment - and that technology is still only available behind the locked doors of Industrial Light & Magic. For me as an artist, that is a valuable tool that has been created, and I could do cool stuff with it, but I can't get my hands on it because of corporate policies.
So it's REALLY amazing to see that Disney - the intellectual property hoarding company for whom copyright law has been rewritten at least once - is releasing a software/API/file standard as open source as of today. Code that no less promises to completely eliminate step two of my list above. In their own words, they have already produced one short animation and are in the process of making one full feature animation completely without doing any UV mapping. I can only try to explain the joy that this brings me. UV mapping has been my biggest hurdle to date - I never really mastered it - I hated it. It's such a painstakingly long, tedious process. I normally used every workaround I could find to avoid doing UV mapping. It's crazy to think they finally figured out a way to get there without it, and I think this will drop like a bomb into every 3d app and supporting 3d app on the market within a year (a bit of wishful thinking here) - at least I can hope it does, and I hope that Blender, Autodesk and SideFX are listening very closely.
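
From my reading of the white paper, the core idea is roughly this: instead of one global UV atlas, every face of the mesh carries its own little texture grid (plus adjacency information for seamless filtering). A toy sketch of that concept - emphatically not the real Ptex file format or API:

```python
import numpy as np

# Toy illustration of per-face texturing - each face stores its own small
# texture grid, so no global UV unwrap is needed. NOT the real Ptex API.
class PerFaceTexture:
    def __init__(self, num_faces, res=16):
        # one res x res RGB tile per face; real Ptex allows per-face resolutions
        self.tiles = [np.zeros((res, res, 3)) for _ in range(num_faces)]

    def paint(self, face_id, u, v, color):
        """Paint in a face's local (u, v) in [0, 1] - no unwrap step needed."""
        tile = self.tiles[face_id]
        h, w, _ = tile.shape
        tile[int(v * (h - 1)), int(u * (w - 1))] = color

tex = PerFaceTexture(num_faces=6)  # e.g. the six faces of a cube
tex.paint(face_id=2, u=0.5, v=0.5, color=(1.0, 0.0, 0.0))
```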
Combine that with the recent advancement in render technology of using OpenCL (developed and released as part of Snow Leopard by Apple and made an open standard, with ports for Linux and Windows now available) to render partially on the graphics card (GPU) - which speeds up rendering up to 50 times. That means a frame from Avatar would take only one hour to render instead of 50 - or, in a more realistic case: the current render time for an HD shot here is 2-5 minutes on average, and that gets cut down to 10 seconds - 1 minute, which would actually make rendering a fun part of the process.
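
To give a flavor of what "render on the graphics card" means: OpenCL lets you run a small C-like kernel across thousands of GPU cores at once. A minimal sketch using the pyopencl bindings (assuming you have them and a working OpenCL driver installed) - a real renderer would of course evaluate shading samples instead of this toy brightness tweak:

```python
import numpy as np
import pyopencl as cl  # pip install pyopencl; needs an OpenCL driver

ctx = cl.create_some_context()  # picks a GPU (or CPU) device
queue = cl.CommandQueue(ctx)

# toy "image": a million pixel intensities to brighten in parallel
pixels = np.random.rand(1000000).astype(np.float32)
mf = cl.mem_flags
src = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=pixels)
dst = cl.Buffer(ctx, mf.WRITE_ONLY, pixels.nbytes)

# the kernel body runs once per pixel, spread across all compute units
prg = cl.Program(ctx, """
__kernel void brighten(__global const float *src, __global float *dst) {
    int i = get_global_id(0);
    dst[i] = fmin(src[i] * 1.5f, 1.0f);
}
""").build()

prg.brighten(queue, pixels.shape, None, src, dst)
out = np.empty_like(pixels)
cl.enqueue_copy(queue, out, dst)  # copy the result back to the host
```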
Now we all know who is behind both companies releasing and opening all that up: the mighty Steve Jobs. You could almost say there is an agenda behind it to make 3d a much more pleasurable, creative kind of work than it currently is - maybe Mr. Jobs wants us all to model and render amazing virtual worlds to inhabit where he can play god ;)
Good times indeed.

What's left? Well, animation is still not completely worked out, but with muscle simulation and easy face and bone setups it has become easier over the past years - it is still a hideously tedious process to make it look right, and I don't know if there will ever be a solution for it as revolutionary as PTEX. Motion sensors might help a bit in the near future, as might techniques that make models physically accurate, so that things can't intersect each other and gravity is applied automatically. High quality texture maps that hold up to very, very close scrutiny are still memory hogs and burn down the most powerful workstations. The rest will get better with faster, bigger, better computers, as always (like all the nice lighting models that are almost unusable in production to date because they take too long to render). Generally, with the UV mapping and rendering problems out of the picture, we are so much further along that I might get back into 3d much, much more.

ptex.us - the official website
The PTEX white paper
PTEX sample objects and a demo movie

Disclaimer: I have been doing 3d since 1992, when I rendered a 320x240 scene of two lamps on an Amiga 2000 with raytracing - it took 2 days to render. My first animation, in 1993, took a month to render. Then I switched to Macintosh (exclusively) in 1995 and did 3d there for a while. It was so frustrating that I never made a serious effort to get really good at it - I still do it alongside compositing / VFX supervision, but rather as an add-on and for previz than as main work.

24.03.09

3d in the browser - is it really finally coming?

VRML was once said to be the future of the web - everyone who tried it out back in the good old days will agree with me that it was doomed to failure right from the beginning. It went under and was never seen again after the second generation of browsers. Modern browsers had other stuff to worry about - like passing Acid tests and such - so 3d was never a main concern. Now word from the Game Developers Conference hits the street that the Khronos Group is working together with the Mozilla Foundation to bring accelerated 3d graphics inside the browser window. The Khronos Group is responsible for OpenGL and OpenGL ES (iPhone is all I say here) and Mozilla, of course, for Firefox. They formed an "accelerated 3d on the web" working group that will create a royalty free standard for browser makers to implement and web developers to use. Hallelujah - now it might take some eons for a) a standard to form, b) browsers to adopt the standard and c) 3d programs to let you export stuff in the right format, but the prospects for real 3d in the browser in a 3-5 year time frame are exciting to say the least. Personally, for me this is bigger than vector (as it hopefully includes vector) - the possibilities are endless and truly exciting. Be sure to hear back from me at the earliest indication of any beta or even alpha warez to try this out.

via internetnews.com

11.03.09

Using a DJ MIDI controller for 3d Previz

Quang from Exozet, whom I have written about before, has finally come around and made himself a blog - grainy fx (reminding me to update my blogroll). In his third entry on the blog he talks about a current previsualization project for a Polylux subseries trailer and how he used my trusty Faderfox DJ2 MIDI controller to let the cameraman of the show position his camera where he wants it. With just 128 steps per fader it is not really precise enough, it seems - but it worked sufficiently from what I heard, and the team was happy after the meeting. I will have a look at the actual shooting on Saturday, because they are shooting with a RED camera (the RED One, that is) on greenscreen and I would love to see how the cam performs in real life - I will report back here.
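
For context on the precision complaint: a standard MIDI control change carries only 7 bits, so a fader gives you exactly 128 discrete positions. A quick sketch with a hypothetical 10 meter camera track shows why that feels coarse:

```python
# A MIDI control change value is 7 bits: integers 0..127.
track_min, track_max = 0.0, 10.0   # hypothetical 10 m camera track

def midi_to_position(cc_value):
    """Map a 7-bit MIDI CC value (0-127) onto the track, in meters."""
    return track_min + (cc_value / 127.0) * (track_max - track_min)

step = midi_to_position(1) - midi_to_position(0)
print(round(step, 3))  # ~0.079 m - almost 8 cm per fader tick
```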

Here is the video of the controller in action that Quang shared with the world.



7.11.08

Free Stock Footage

Ugh, right on the heels of "free 3d tutorials" I can post something that is on par with them - free stock footage. If you have ever needed a special cloud backdrop, falling rain to comp over or something similar, and have looked at the sites that sell these, you know you can never afford them (we are talking about $200 for ONE 30 second clip in HD resolution). So the great free internet whispered to me today that there is now a site giving you free access to their royalty free stock footage collection - not as pro and complete yet as the for-pay options, but on the right track. And yes, you get the footage in HD quality too - just in 30fps (frucking NTSC)...

23.08.08

Mudbox - the end of rendering is near

(image: mudbox2009realtime.jpg)

I promised to post some more of the cool stuff that was announced at Siggraph, because I think Siggraph this year has awoken the sleeping Motion-Graphics-Visual-Effects-Dragon(tm). Some extremely unexpected wow factor came from a video presentation of the new Autodesk Mudbox 2009. It is a competitor to the much acclaimed ZBrush (which has been lacking development lately, so competition is good) but was never really head to head with the latter 3d sculpting tool (coming to the party three years later, too). Basically this category of programs is about "realtime sculpting and texturing of organic forms with subdivision models". Great innovation has come from this approach, like an extended focus on normal and displacement maps and easier UV transfer between models. ZBrush dropped a bomb just last year when they showed ultra high precision modelling in realtime with some lighting added during sculpting - making an unprecedented modelling process possible that has already generated a lot of highly detailed, weird and real looking characters (some of the Lord of the Rings characters were modelled in ZBrush). But this year Autodesk, who bought Mudbox about a year ago, is upping the ante with modelling in ultra ultra high resolution - we are talking about 15+ million polygons to add detail that floors everyone, and the modelling is still realtime, and yes, you are actually working with the quad polygons and not some normal map trick. The guy in the video subdivides the model he works on again and again - to about level 7 or 8 (quadrupling the polygon count at every step), whereas most programs up till now crash out at level 4. But when you think it couldn't get any better, he turns on his texture, light with shadows, realtime HDRI lighting, realtime ambient occlusion and realtime depth of field. WTF. Looking at the model (see the picture above) you would think it was rendered with RenderMan and took about 4 hours - yet he continues to draw details on this fully lit, fully textured model as if it were a low poly, untextured model. The light updates, the viewport updates, he can reposition the main lights to see how the crinkles he just painted work - absolutely stunning.
See the video and more over at Autodesk's Area user platform (you have to be registered to see the video).
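
To put those subdivision levels into numbers: each level quadruples the quad count, so the growth is brutal. A quick sketch with a made-up base mesh of 1000 quads:

```python
# Each subdivision level quadruples the quad count.
base_quads = 1000  # made-up base mesh
for level in range(9):
    print(level, base_quads * 4 ** level)
# level 4: 256,000 quads (roughly where most apps used to choke)
# level 6: ~4 million, level 7: ~16 million - the "15 million plus" range
```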

16.08.08

Using photographs to enhance Videos

These last two weeks have been amazing when it comes to new technology in the realm of visual fx production. I will show some of it over the next few days (as I discover videos that show what each one is about). I would even say that the advancement of VFX and 3D over the last year surpasses the advancements of the 5 years before that combined (read: there were almost none). The first thing I want to show is not available in a commercial product yet, but it could revolutionize the hardest tasks of compositing - basically cutting compositing time for 75% of complex shots by 90%.
Apparently you take a couple of photographs of a scene without moving elements, in addition to your video. The software then matches the photographs onto the video and generates a depth map. Every compositor knows that when you have a depth map you are in heaven, because you can isolate objects and make them disappear, or enhance parts of the image. But this software goes further: it reconstructs the image and blends it cleverly with the original footage using motion estimation and difference masks. If it works as shown, compositing will start to be fun. Curiously, they are aiming the demonstration video at consumers who want to upres their footage or stabilize it - but the market that should be much more interested is pro compositors and VFX houses, because as a VFX supervisor you shoot photos on set anyway, so this would not even add another step to the pipeline. Amazing - if it is really as clean and artifact-free as seen in the video.
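
Why a depth map makes compositors so happy: once every pixel has a distance value, pulling a matte is a simple threshold instead of frame-by-frame hand rotoscoping. A toy numpy sketch (made-up data, just the principle):

```python
import numpy as np

# Once every pixel has a depth value, pulling a matte is one threshold.
h, w = 4, 4
depth = np.random.rand(h, w) * 10.0   # stand-in depth map, in meters
image = np.random.rand(h, w, 3)       # stand-in footage

matte = (depth < 3.0).astype(float)   # 1.0 where closer than 3 m
isolated = image * matte[..., None]   # foreground kept, background black
print(matte)
```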


Using Photographs to Enhance Videos of a Static Scene from pro on Vimeo.

The PDF paper on the technology and the slides of the talk can be found here