
23.10.10

The How and What and Why of 3D Conversion


I recently worked for a very large corporation on a very large scale project that involves magicians and sorcerers and a fantasy world. This film was supposed to go to the big screen - like the really, really big screen - in 3D - now it will only be in 2D. Sorry for the mangled description, I am still under a somewhat restrictive Non-Disclosure Agreement.

Let me assure you that this particular movie franchise is so big that any misstep likely costs the production company a couple million bucks. The decision not to release the movie in 3D was made not long before its release, after a lot of very good work had already gone into the conversion. Yes, conversion. I am not here to spill the beans on how this all happened on the management side - I also can't talk in much detail about how their particular approach to conversion was done. I just want to answer a couple of more general questions about 2D-to-3D conversions. I feel that this is public knowledge and can be talked about (looks over his shoulder to see if an army of lawyers is on his back).

The first question I get asked over and over again is why a movie that big was not shot in 3D in the first place - since most conversions done so far have looked horrible, were eye-hurting and generally bad for the overall experience (Alice in Wonderland, for example).
I asked this question myself in the team over and over again. There is one simple answer I got back: it's not easier for a visual-effects-heavy film to be produced entirely in 3D than it is for the film to be converted to 3D. Some even said it's more expensive and more difficult to run a complete 3D pipeline from shoot to delivery than to do a conversion.
Why is that? For one, shooting 3D is not perfect. There are pretty much three different ways to shoot 3D, and all have a number of problems. The first is parallel: two cameras (or lenses, like on the new Panasonic camera) side by side, perfectly parallel. For wide shots with no near foreground this works OK - but forget any and all closeup action - you will leave the movie theater cross-eyed. You can mimic what happens here by holding a finger very close to your eyes in the middle of your head.
That can be enhanced by angling the cameras slightly toward each other (making them cross-eyed), converging on the point of interest. Since this is all just tricking our eyes into believing we see a 3D picture when in fact we only see a 2D picture on the screen, this opens a can of worms. What happens with the far-distance background? How can one quickly and precisely change the angles of both camera bodies to adjust for a character coming from far back to the front? Another thing to consider for both techniques is that the further you spread the cameras apart, the deeper the image becomes - it's really interesting how just half a millimeter can make the picture totally unbelievable and break the 3D illusion, or just make things look comical.
The third option is the mirror rig: two cameras at an angle of 90 degrees to each other (A 3D Rig overview), with the image split between left and right eye by a polarizing mirror. While this approach is more precise - the angulation of both cameras can be set more finely - and is mostly used for closeups, it has one big drawback: if you polarize, you lose reflections, meaning one eye will be missing reflections that later have to be painstakingly added back by hand in post (or the viewer gets a headache because the two eyes aren't seeing the same picture).
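To get a feel for how touchy these numbers are, here is a minimal Python sketch of the usual thin-lens parallax approximation for a converged rig (toe-in approximated as a horizontal image shift). The function name, parameters and example values are mine, not from any production tool:

```python
def screen_parallax_mm(interaxial_mm, focal_mm, conv_m, depth_m,
                       sensor_w_mm=24.0, screen_w_m=10.0):
    """Approximate on-screen parallax of a point at depth_m for a rig
    converged at conv_m. Positive = behind the screen plane."""
    # on-sensor disparity: e * f * (1/C - 1/Z), everything in millimetres
    d_sensor = interaxial_mm * focal_mm * (1.0 / (conv_m * 1000.0)
                                           - 1.0 / (depth_m * 1000.0))
    # blow it up from sensor width to a 10 m cinema screen
    return d_sensor * (screen_w_m * 1000.0 / sensor_w_mm)

# background at 50 m, converged at 5 m, 35 mm lens:
print(screen_parallax_mm(65.0, 35.0, 5.0, 50.0))   # ~170.6 mm
print(screen_parallax_mm(65.5, 35.0, 5.0, 50.0))   # ~171.9 mm
# half a millimetre of interaxial moved the background
# by over a millimetre on screen - eyes notice that
```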
Precision is the key to all of these approaches - it's crazy how well our brain actually works in 3D. It is not easily fooled, and if it is fooled imperfectly you will have an audience that walks out, takes 10 aspirin and likely never comes back to a 3D movie in their lifetime.
Now couple this with the fact that no two lenses on this planet are exactly the same, that every camera has a billion settings that make the pictures differ, skewed camera sensors, filters, dust, wind gusts, water (like rain) etc. etc., and you start getting an idea of what can actually go wrong in shooting. On top of all that, there is not so extremely much you can do with the picture once it's shot. There is an $8,000 plugin that lets you adjust some things (and helps a tiny bit with the VFX problems in the next paragraph), but it's not magic and it surely is not a cure-all.

On top of these problems comes the next. Doing visual effects for 2D is hard enough - especially if you are working on films that are finished at 4K (4096×3072 pixels or the like, depending on the widescreen format used). You trick your way around to make cables invisible, paint back hairs where the green screen didn't work, manually draw masks for 24 frames of every second of film to make a robot stand behind a person instead of in front, adjust colors to perfection so things look right and fit together. Now in 3D you have two pictures that are pretty much the same but aren't. If you think it's just a matter of copying work from one side over to the other, forget about it. Talking to a fellow compositor who has worked on a 3D movie, the full scope came to light - he said (and I might add he is a pure genius of a young compositor - way better than me) that he needs TEN times as long to finish a shot as he would in 2D - ugh.
Summing up: the reason a lot of films are still converted from 2D to 3D rather than shot in 3D comes down to no really good shooting options on set, highly difficult compositing and not much leverage to adjust depth after the shoot. That makes it more expensive (and not necessarily better) to shoot than to convert.

So how does a conversion work? Well, I think I can say this much: it's A LOT of manual labour. You basically build the scene that you want to convert roughly in 3D, cut out the objects in your scene based on depth, project those objects onto their respective 3D geometry, then render the whole thing from a camera that is off to the side. That leaves you with an occlusion edge that then has to be filled in and corrected mostly by hand. It's largely a job of making up what would be behind objects. Crazy stuff, really - make a one-pixel mistake in this and it all stops working, giving you headaches or looking outright laughable - mostly headaches, though. Bad conversions have characters' hair sticking not to their head but to the back wall, see-through objects looking like cardboard, and weird depth changes when one object swaps with another in depth (like two people dancing and swiveling around each other).
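For the mechanically minded, here is a toy Python/NumPy sketch of the core idea - shift every pixel sideways in proportion to its depth and keep track of the holes that open up. The function name is mine, and a real conversion pipeline projects onto actual geometry rather than doing this naive per-pixel shift:

```python
import numpy as np

def shift_view(image, depth, max_disparity_px=20):
    """Fake a second eye from one image plus a depth map.
    image: (H, W, 3) floats; depth: (H, W) in [0, 1], 1 = nearest.
    Returns the new view and a mask of the occlusion holes that a
    conversion artist would have to paint in by hand."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    disparity = (depth * max_disparity_px).astype(int)  # near moves most
    for y in range(h):
        for x in np.argsort(depth[y]):   # far pixels first, so near ones win
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
    return out, ~filled                  # ~filled marks the occlusion edges
```

One pixel of bad depth puts a pixel in the wrong eye - which is exactly the one-pixel-mistake failure mode described above.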

Pros of a conversion: it's cheaper for FX-heavy material, and the director and his team (of stereographers) can adjust depth on the fly in the editing room - a HUGE plus for making the movie coherent (so your eyes don't have to constantly adjust to a different depth in every scene) and for tuning individual scenes to make them look perfect or to isolate one character or object in 3D space. (This is where 3D shines from a filmmaker's perspective.)

Do I like 3D? I watched Avatar and I didn't think it had merit - I really thought it was a neat gimmick but nothing more. Having worked on a conversion, stared at single scenes for quite some time, made small adjustments here and there and looked at other scenes, I now think that yes, as a filmmaker's tool it has its place across all genres of film. I actually think that "talking heads" movies can benefit from it almost more than action movies with fast edits. While the latter will have the one or other "shock" moment, it will break your head with all those fast edits that change depth, your eyes trying to adjust every half second (but having nothing to adjust to, because they are looking at a 2D image after all). With talking-heads movies you can isolate characters (in a very minimal way, but it's beautiful actually), big far landscapes will evoke more emotion than ever before, and you will be more immersed in the feeling of the film in general. (I still like the gimmick at points, but it's really not where 3D shines.)

Oh, and don't judge 2D-to-3D conversion by movies like Alice in Wonderland or Saw 3D, because those were bad conversions - it can look much, much better. I have seen it, and I think next year there will be some very good conversions coming. Even the biggest FX houses are still struggling to put up a 3D pipeline that works - it's all tough, and it eats money for breakfast like no other new technology, manpower for lunch and directors for dinner.

If you have any other questions, I can try to answer them or ask people more in the know than me - just leave a comment.

16.01.10

PTEX & OpenCL - or How Steve Jobs' Companies Are Changing 3D

Something amazing came through my ticker today - something that is a game changer and, together with another technology, will change the way I work and make it much more enjoyable.

First, some basics to understand what I am talking about, for those who have no clue about it all. There are basically the following six steps to get to a final 3D picture.

1. Modelling: Multiple approaches get you to a mesh model that consists - in the end, mostly - of polygons. You can scan, you can push little points around in 3D space, you can use mathematical formulas to create, subtract or add simple forms, or other formulas to round edges, revolve lines or extrude them. The end product is almost always a polygon mesh with some number of polygons - the more there are, the higher the resolution of the model and the closer you can look at it. About ten years ago a really nice way to model high-resolution meshes came into existence, called subdivision surfaces, which lets you model a coarse-resolution cage that is much easier to understand and alter and then generates a high-res model out of it. That was the first real game changer in the 3D industry and the reason character modelling became so "easy", with so many people making so many great models.
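A minimal Python sketch of the subdivision idea - split every quad of a coarse cage into four. Real subdivision surfaces (Catmull-Clark) also smooth the new points; this linear version only shows where the extra resolution comes from. All names are mine:

```python
import numpy as np

def subdivide_quads(verts, faces):
    """One linear subdivision step: every quad becomes four.
    verts: list of 3-vectors; faces: list of 4-tuples of vertex ids."""
    verts = [np.asarray(v, dtype=float) for v in verts]
    cache = {}

    def midpoint_id(i, j):
        key = (min(i, j), max(i, j))      # shared edges get one midpoint
        if key not in cache:
            cache[key] = len(verts)
            verts.append((verts[i] + verts[j]) / 2)
        return cache[key]

    out = []
    for a, b, c, d in faces:
        m = [midpoint_id(a, b), midpoint_id(b, c),
             midpoint_id(c, d), midpoint_id(d, a)]
        center = len(verts)
        verts.append((verts[a] + verts[b] + verts[c] + verts[d]) / 4)
        out += [(a, m[0], center, m[3]), (b, m[1], center, m[0]),
                (c, m[2], center, m[1]), (d, m[3], center, m[2])]
    return verts, out
```

Run it a few times and a blocky cage turns into a dense mesh - the smoothing rules are what turn that density into organic-looking surfaces.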

2. UV Preparation: Now, a model made of triangles looks less than realistic, of course, so you need to tell the program what kind of material is on the model. A lot of options are available here, but especially for film work and characters you want something realistic, and you get that by using something realistic - like a photo - and altering it in such a way that it fits on your model (or you paint one from scratch). For such a picture to be put onto the model, you need to flatten the model out into a two-dimensional surface. You can imagine this like skinning a dead animal and then making the hide lie flat. Like so:
hide.jpg
(There is actually a program that stretches the "hides" in a way pretty similar to this very analog process.) It's a very dull process on a complex model - mostly you have to take your nice model apart and do all kinds of voodoo to get the result artifact-free. No fun and certainly not very creative.
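As a tiny illustration of why automatic unwrapping is hard: the simplest possible "unwrap" just projects the model flat along one axis, which works only for nearly flat geometry and distorts everything else. A Python sketch (names mine):

```python
import numpy as np

def planar_uvs(vertices, drop_axis=2):
    """Crudest UV unwrap: flatten along one axis, normalize to [0, 1].
    Anything with overhangs ends up with overlapping, stretched UVs -
    which is why real unwrapping means cutting the model apart."""
    v = np.asarray(vertices, dtype=float)
    flat = np.delete(v, drop_axis, axis=1)     # throw away one coordinate
    lo, hi = flat.min(axis=0), flat.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)     # avoid divide-by-zero
    return (flat - lo) / span
```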

3. Texturing: Once you have your nice model with a more or less nice UV map, you start to apply your texture - photographic or programmatic or a mixture of the two. There is a lot of "fun" to be had here as you add little bumps and tell the software how shiny the model should be, how reflective, how refractive, and lots of other things I don't really want to go into - but it's a nice step in general.
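The "programmatic" half of texturing can be as simple as a function of the UV coordinates. A toy example (mine, not from any package):

```python
def checker(u, v, tiles=8):
    """Procedural checkerboard over UV space: 0 or 1 per tile.
    Swap in noise functions and you get bump, dirt or specular masks."""
    return (int(u * tiles) + int(v * tiles)) % 2
```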

4. Light & Camera: Without light there wouldn't be anything visible. So you set up some virtual lights, which act and react just like the different kinds of light sources you find in reality, plus some other tricks that don't exist in reality but can add to a realistic picture. You also set up a camera - your virtual eye - which again acts just like a photographic camera in real life (almost). Both a creative and fun process.
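The "acts like real light" part boils down to shading math; the oldest rule of them all is Lambert's cosine law, sketched here as a deliberately minimal single-light version:

```python
import numpy as np

def lambert(normal, to_light, light_color, albedo):
    """Diffuse (Lambert) shading: brightness falls off with the cosine of
    the angle between the surface normal and the direction to the light."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(to_light, dtype=float)
    n, l = n / np.linalg.norm(n), l / np.linalg.norm(l)
    return np.asarray(albedo) * np.asarray(light_color) * max(float(n @ l), 0.0)

# a white light straight above a reddish surface
print(lambert([0, 0, 1], [0, 0, 1], [1, 1, 1], [1, 0.2, 0.2]))
```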

5. Animation: Then you animate your model - push objects around, apply physics, deform the mesh. You can either do that by hand or get animation data from motion capture - you might have seen those people in black suits with ping-pong balls attached to them, or faces with dots all over them. This step is both fun and frustrating, with hand-made and captured data alike. The human eye is so susceptible to small problems in movement that not even a certain 500-million-dollar production can fully perfect this step.
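Hand animation ultimately means setting keyframes and letting the software interpolate between them. A bare-bones linear interpolator (real packages use splines and ease curves; names are mine):

```python
def sample_channel(keys, t):
    """keys: sorted (time, value) pairs for one animation channel."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)   # 0 at the first key, 1 at the next
            return v0 + f * (v1 - v0)

# a value keyed at frames 0, 12 and 24, sampled at frame 6:
print(sample_channel([(0, 0.0), (12, 1.0), (24, 0.0)], 6))   # 0.5
```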

6. Rendering: Then comes the process that is mostly free of human intervention but not free of hassles and frustration: the rendering. On Avatar it could take up to 50 hours per frame on a stock, normal computer. At 24-25 frames per second (or double that for 3D) you get an idea of how much processing power is needed. And if you make a mistake - render it all over again. Rendering is also a complex mathematical problem, and there are bound to be errors in the software, so prepare for the worst here.
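To put the Avatar figure in perspective, a back-of-the-envelope calculation (using the 50-hours-per-frame number from the text and an assumed two-hour running time):

```python
hours_per_frame = 50            # the worst-case figure quoted above
fps = 24
runtime_min = 120               # assumed feature length

frames = runtime_min * 60 * fps           # 172,800 frames
cpu_hours = frames * hours_per_frame      # 8,640,000 machine-hours
print(cpu_hours / (24 * 365))             # ~986 years on a single computer
# stereo 3D doubles all of this - hence render farms with thousands of nodes
```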

Now why am I telling you all this? Well, one step, it seems, has just been eliminated. Progress in the field of visual effects is very erratic - you have two to four years of no progress at all, and then all of a sudden a floodgate opens and something changes dramatically, or multiple things do. I would say we had a quiet period the last two to four years - mostly because development of the really cool stuff happened "in house", meaning that really smart programmers were hired by the big VFX companies to build certain things for certain needs and problems. A lot of problems in the pipeline above have already been solved, I think, but have never seen the light of the broader world and instead stayed - and sometimes even died - within certain companies. It's really frustrating: the software companies struggle with the most basic problems (plagued by slow sales and a bad economy), and then you see Pirates of the Caribbean, for example, where they completely figured out how to motion-capture live actors (record their movement) on set with no real special equipment - and that technology is still only available behind the locked doors of Industrial Light & Magic. For me as an artist, that is a valuable tool that has been created, one I could do cool stuff with, but I can't get my hands on it because of corporate policies.
So it's REALLY amazing to see that Disney - the intellectual-property-hoarding company for whom copyright law has been rewritten at least once - is releasing a software library/API/file standard as open source as of today. Code that, no less, promises to completely eliminate step two of my list above. In their own words, they have already produced one short animation and are in the process of making one full feature animation entirely without doing any UV mapping. I can only try to explain the joy this brings me. UV mapping has been my biggest hurdle to date - I never really mastered it - I hated it. It's such a painstakingly long, tedious process; I normally used every workaround I could find to avoid it. It's crazy to think they have finally figured out a way to get there without it, and I think this will drop like a bomb into every 3D app and supporting 3D app on the market within a year (wishful thinking, maybe) - at least I can hope it does, and I hope that Blender, Autodesk and SideFX are listening very closely.
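The core trick that makes this possible: instead of one shared UV layout, every face gets its own small texture, addressed by face id plus a local (u, v) inside that face. A toy Python sketch of that idea - my illustration of the concept, not the actual Ptex API; the real library also handles per-face resolutions and filtering across face edges:

```python
import numpy as np

class FaceTextures:
    """Per-face texturing: a little texel grid for every face, no unwrap."""
    def __init__(self, n_faces, res=16):
        self.res = res
        self.tiles = [np.zeros((res, res, 3)) for _ in range(n_faces)]

    def paint(self, face, u, v, color):
        r = self.res - 1
        self.tiles[face][int(v * r), int(u * r)] = color

    def lookup(self, face, u, v):
        r = self.res - 1
        return self.tiles[face][int(v * r), int(u * r)]

tex = FaceTextures(n_faces=6)          # e.g. a cube
tex.paint(face=3, u=0.5, v=0.5, color=(1.0, 0.2, 0.2))
print(tex.lookup(3, 0.5, 0.5))
```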
Combine that with the recent advancement in render technology of using OpenCL (developed and released as part of Snow Leopard by Apple and made an open standard, with ports for Linux and Windows now available) to render partially on the graphics card (GPU) - which speeds up rendering by up to 50 times. That would mean a frame from Avatar takes only one hour to render instead of 50 - or, in a more realistic case, the current render time for an HD shot here of 2-5 minutes on average gets cut down to 10 seconds to a minute, which would actually make rendering a fun part of the process.
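For the curious, this is roughly what handing work to the GPU via OpenCL looks like, here through the pyopencl bindings - a trivial kernel, not a renderer, but the data-shuffling pattern (host buffers in, kernel over a million elements in parallel, result back out) is the one GPU renderers build on:

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# one work item per element, all running in parallel on the GPU
prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
```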
Now we all know who is behind both companies releasing and opening this up: the mighty Steve Jobs. You could almost say there is an agenda to make 3D much more pleasurable creative work than it currently is - maybe Mr. Jobs wants us all to model and render amazing virtual worlds to inhabit, where he can play god ;)
Good times indeed.

What's left? Well, animation is still not fully worked out, but with muscle simulation and easy face and bone setups it has become easier over the past years - it's still a hideously tedious process to make it look right, and I don't know if there will ever be a solution for it as revolutionary as PTEX. Motion sensors might help a bit in the near future, as might techniques that make models physically accurate, so that things can't pass through each other and gravity is applied automatically. High-quality texture maps that hold up to very, very close scrutiny are still memory hogs and bring down the most powerful workstations. The rest will get better with faster, bigger, better computers, as always (like all the nice lighting models that are almost unusable in production today because they take too long to render). Generally, with UV mapping and rendering problems out of the picture, we are so much further along that I might get back into 3D much, much more.

ptex.us - the official website
The PTEX white paper
PTEX sample objects and a demo movie

Disclaimer: I have been doing 3D since 1992, when I raytraced a 320x240 scene of two lamps on an Amiga 2000 - it took two days to render. My first animation, in 1993, took a month to render. Then I switched to Macintosh (exclusively) in 1995 and did 3D there for a while. It was so frustrating that I never made a serious effort to get really good at it - I still do it alongside compositing / VFX supervision, but rather as an add-on and for previz than as main work.

24.03.09

3d in the browser - is it really finally coming?

VRML was once said to be the future of the web - everyone who tried it back in the good old days will agree with me that it was doomed to failure right from the beginning. It went under and was never seen again after the second generation of browsers. Modern browsers had other stuff to worry about - like passing acid tests and such - so 3D was never a main concern. Now word from the Game Developers Conference hits the street that the Khronos Group is working together with the Mozilla Foundation to bring accelerated 3D graphics inside the browser window. The Khronos Group is responsible for OpenGL and OpenGL ES (iPhone is all I say here), Mozilla of course for Firefox. They have formed an "accelerated 3D on the web" working group that will create a royalty-free standard for browser makers to implement and web developers to use. Hallelujah - now it might take some eons for a) a standard to form, b) browsers to adopt the standard and c) 3D programs to let you export stuff in the right format, but the prospects for real 3D in the browser in a three-to-five-year time frame are exciting, to say the least. Personally, for me this is bigger than vector (as it hopefully includes vector) - the possibilities are endless and truly exciting. Be sure to hear back from me at the earliest indication of any beta or even alpha warez to try this out.

via internetnews.com

22.03.09

3d Scanner with Lego

Things that used to cost around a million and one bucks just a short time ago seem to be available for almost nothing these days - especially if you roll your own. Such is the case with 3D scanners, it seems. While only five years ago it was inconceivable to even dream about owning one, things started to get interesting two years ago when prices dropped into the sub-$5,000 range and the first DIY open-source 3D scanner efforts appeared on the ether. Now we drop into the sub-$300 range with a laser 3D scanner made out of the Lego NXT system. Oh, good are the times.

via make blog.

11.03.09

Using a DJ midi controller for 3d Previz

Quang from Exozet, whom I have written about before, has finally come around and made himself a blog - grainy fx (reminding me to update my blogroll). In his third entry he talks about a current previsualization project for a Polylux sub-series trailer and how he used my trusty Faderfox DJ2 MIDI controller to let the cameraman of the show position his camera where he wants it. With just 128 units of resolution it is not really precise enough, it seems - but it worked sufficiently, as I heard, and the team was happy after the meeting. I will have a look at the actual shoot on Saturday, because they are shooting with a RED camera (the RED One, that is) on greenscreen and I would love to see how the cam performs in real life - I will report back here.
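The precision problem is easy to quantify: a MIDI continuous controller is 7-bit, so a fader only ever sends 128 distinct values. Mapped over a useful camera range, that gets coarse fast - a quick sketch with assumed numbers (a 10 m travel range; names mine):

```python
def cc_to_position(cc, lo=-5.0, hi=5.0):
    """Map a 7-bit MIDI CC value (0-127) onto one camera axis in metres."""
    return lo + (cc / 127.0) * (hi - lo)

step = (5.0 - (-5.0)) / 127
print(step)   # ~0.079 m: adjacent fader ticks jump the camera almost 8 cm
```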

Here is the video of the controller in action that Quang shared with the world.



8.03.09

Red Camera seen in the wild

Quang pointed me to a picture of the new RED Scarlet camera in its "I emulate a photo camera" configuration (dubbed DSMC by RED) as a real preproduction model - not a 3D rendering. 4K goodness (15 megapixels) at 150 fps, here we come.


25.02.09

60 sites with free 3d models - 150+ sites with videotools

Via Twitter come two really valuable links that are very much in my scope. One has links to and explanations of 150+ pages of video tools that you can use online - ranging from video sharing, editing and streaming to commenting and hoarding. The other is personally even more useful - it contains 60 sites that sport free-as-in-beer 3D models. Countless times I have needed a very generic object just to populate a scene and have looked on the three lousy sites known to me without luck.

150+ online videotools and resources

60 free 3d object sites

via @dollars5 & @LisaTorres.

17.02.09

Procedural City Building

Introversion, the makers of the great game Defcon, have released some videos of their upcoming game Subversion. Man, this procedural city building absolutely rocks - I hope it can somehow be used to export cities for other 3D scenes.


6.02.09

Good bye 4:3

As I wrap up my current project, I think I am also wrapping up the last project I will ever do in 4:3. It has been a long time coming, and now, as resolutions go up, 4:3 goes away. I am not shedding a single tear. 16:9 and beyond is so much nicer to lay out for. You can have split screens and whatnot. Needless to say, I am also going to stop VJing in 4:3. It just never immerses you. It is always this oddly shaped window that does not adhere to any emotional feeling. When the screen stands out, people are not immersed, and the fact of the matter is that we humans just have greater horizontal than vertical perception (clouds are nice to look at but never threatening - the same can be said of moles). Interestingly enough, the medium the 4:3 aspect ratio came from was film - which stopped using it when 8mm was abandoned, about 30 trillion years ago. TV was born when film still primarily used 4:3 itself. TV needed 35 years to make the switch to at least 16:9 - needless to say, film, even in its digital form, has moved beyond even that, but then a 3:1 monitor would probably look strange.

Good bye, 4:3 - on to the good (fully immersive) things to come.

Have a look at the Wikipedia entry for the trazillion different aspect ratios out there.

19.01.09

Autodesk cuts jobs, loses executive

Oh, this is so not good. After buying up all the serious 3D programs on the market and trying to take over the whole VFX sector Adobe-style (meaning: create a monopoly), they are now facing financial struggles and cutting 750 jobs (that's a lot for a company that size). Let me guess that most of these are talented people making 3D apps, who have become abundant at the company since it brought in two full additional teams on top of the three it already had. Oh, and they lost their executive chairman to Yahoo. I really hope this does not affect certain 3D software Autodesk just bought - now that Maya is finally usable on the Mac and a homogeneous VFX production is on the horizon with Toxik and Mudbox.

The full story, ironically enough, is on Yahoo Finance.

7.01.09

Toxik on OSX - Bye bye Shake?

Autodesk is going OSX big time - and it's about time. The company that for such a long time neglected the OSX platform almost completely (the exception being Combustion) and only continued bought-in packages on OSX (Maya) has been making some big announcements. Toxik - their compositing package, with a pretty awesome keyer and an interface that enhances the node-tree interface concept (it's really nice) - is coming. Now that the market for compositing packages on OSX had narrowed to only The Foundry's Nuke after the long neglect of Shake by Apple itself (what a shame) - and I am not counting After Effects as a serious compositing package - Autodesk has seen the light and is bringing over Toxik. I won't complain at all - competition is great - even if Autodesk could use some of that themselves, especially in the 3D market, where they bought up all the major 3D packages over the last two years (they now possess 3ds Max (it's always been theirs), Maya and Softimage). Speaking of 3D: Mudbox is also coming to the Mac, and I really, really am happy about that announcement. ZBrush abandoned the Mac platform and lost all its Mac users in the process once and for all by promising to port version 3 forever (two years and counting) and never delivering on that promise; just last year Mudbox was catching up to ZBrush and even surpassing it in some features (real-time displacement and real-time HDR are just the bomb). Now it's coming to my current work platform, and it will probably be a nice addition to the toolchain.
As for ImageModeler - I never really used that program and can't really comment on it, but the more the merrier, I would say.

1.12.08

Deep from the Adobe Labs - Stuff we will likely never see?

It's rare that you get a peek into the research development of a big software company, but Adobe has been pursuing a more open path lately, with a high-profile blog that spills some inside beans, and now some researchers have been allowed to put their pet projects on video-sharing sites. The stuff they are developing is insanely great, but the guy ends with a total letdown of a statement: "I can't guarantee that any of the stuff you are seeing is making it into Adobe products." Anyway, they seem to have found the holy grail of motion tracking, if anything the movie shows is an indication. The part where the two people in the woods have a million trackers totally stuck to the creases of their grey sweaters must be the best motion estimation out there, and it seems to work in almost real time; it can differentiate between different objects and is at least 2.5D (I would venture to guess that to separate the different objects it actually has to be a 3D tracker). Adobe would be well advised to put a well-working, easy-to-use, real-time 3D tracker on the market - but then they have to watch out - they might be bought out by the Autodesk 3D monster.
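You can get a (much more modest) taste of this kind of point tracking with off-the-shelf tools today - for example pyramidal Lucas-Kanade optical flow in OpenCV, sketched below. The input filename is hypothetical, and this is plain 2D feature tracking, nowhere near the object-aware 2.5D/3D system in the video:

```python
import cv2

cap = cv2.VideoCapture("clip.mov")               # hypothetical input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# strong corners - creases on a grey sweater are exactly this kind of feature
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # follow every point from the previous frame into this one
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = new_pts[status.ravel() == 1].reshape(-1, 1, 2)
    prev_gray = gray
```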


Interactive Video Object Manipulation from Dan Goldman on Vimeo.

18.11.08

New MacBook(Pro) broken by design

Word is out that the new Apple MacBooks and MacBook Pros, with their new DisplayPort technology, are lovely-looking, environmentally "friendly" trojan horses of the MPAA (that is, the movie mafia of America). When you buy a movie through the iTunes Store (not only music anymore) and play it on these new machines, you have to have a copyright-safe monitor attached to actually view the movie - that is, a mafia-approved monitor. If you do not, you get a nice little warning saying that you are not allowed to play the movie.

Now this raises an interesting question: is the lack of a Mini DisplayPort-to-analog-video adapter something that will never be rectified, because it would breach HDCP copy protection? Has anyone pondered what this means for Apple-using VJs? Not to mention that the movie mafia is making exactly the same mistake the music mafia did - actually forcing people to crack movies just to watch them on their devices, while at the same time teaching millions of people how to crack a movie, who then find it so easy, and enjoy the freedom so much, that they don't ever buy one again.

This decision by Apple and the movie mafia is wrong on so many levels that it's mind-boggling such mistakes are still being made in the current environment. What is Apple thinking? They are not above the god-line just yet - they must know these things create a huge backlash on the internet, right after their movie store was heralded as something great.
Watch the blog storm about this in the coming days, with citizens demanding DRM-free movies, Apple apologizing and putting out a software patch within three months, and the movie industry realising its mistake in less than a year (when they need a bailout).

To the people thinking this stuff up I can only say one thing: stupid fucking idiots - you must have won your Harvard degrees in the lottery.

To quote a commenter on the Engadget thread:

I'm not really down with stealing, but when they decide they are going to control how I watch my movies and listen to music they can go die in a fire.

13.11.08

Red Announces 28k camera - and a trillion other options

So last night was the big announcement of the new RED digital cameras - and did they announce. They basically turned the whole movie-making industry on its head, poured pure caffeine over it, then splashed on some more cold water - rinsed and repeated a couple of times. Some users are speculating that they raided Area 51 in the night, stole a camera chip from the UFO there and reverse-engineered it. Why? Well, for one there is the insanely expensive, far-off big bambooza of a 28K Monstro chip. You know, the highest-definition films going through post-production today are 4K, with, I think, Batman for IMAX topping that at 6K. How you would sufficiently post-produce a 28K movie is beyond me - also beyond me is why you would do such a foolish thing - but, you know, you can crop, zoom and pan later and still have 4K. What I am much more excited about is the modularity approach, and this might be a game changer not just in the camera business but maybe even in the computer biz. Imagine Apple sold two or three different enclosures for laptops and you could fit a wide range of gear in them - pretty much plug and play: new motherboard, new processor, new graphics card, new display, different keyboard. Well, this is pretty much what RED is doing with their cameras - yes, you can take out a 2/3-inch, 12-bit, 11+-stop sensor and replace it with a Super 35mm, 16-bit, 13+-stop sensor later, when you have made the money or whatever. And of course you can change everything else: batteries, outputs, lenses, handles, screens, shoulder mounts, whatever. Oh, and as an added bonus: it can do 3D too (with two cams that fit together nicely for that option). Now they are not quite hitting the original goal of a 3K camera that works out of the box for $3,000 - but the smallest brain (or body) is $2,500, and for about $5,000 you should be able to assemble a working unit. It's very revolutionary stuff they are doing over at RED, and since delivering the RED One nobody doubts their ability to deliver these two new camera systems either. It's just a bit late - don't expect to hold any of the new ones in your hands before fall next year (that's a year away). Now it's going to be more than interesting to see what the big shots like Sony, Panasonic, Canon and Nikon (and Hasselblad) do - this invades their turf big time.

For more info (or info overload, more likely), head over to the REDdot.

7.11.08

2009 - the year of digital cinema

Take a guy with a very successful sunglasses business, some hardware geniuses and a friend who is an indie movie producer - shake everything and let it sit for three years. What's the outcome? A completely changed landscape of digital cinema recording. I am of course talking about the RED cameras. Not only has RED proven they can produce a camera that rivals cameras 10-20 times the price, they also threaten to overtake every other market that has "pro" in it - prosumer video, pro photography. The big camera manufacturers are shocked and are pumping out their own versions of video-capable photo cameras (because the video cameras have already lost, with their lousy 1/3-inch sensors and tape mentality). So, the newest competitor on the block? The Canon EOS 5D Mark II. It's probably the best pro photo camera you can get short of medium format, but that's it. While it does not inherit the embarrassing shearing effect of the Nikon pro cam (whose name I forget) when you move the camera "too fast", it shares the same problem as all the other photo cams with video functions out there: H.264 only. I understand that most photo-camera design teams do not get that people actually do something with the recordings in post - emblematic is that all the "demo" videos have been "not post-processed because the companies want to show you the raw footage" - and while stunning, that is about as far as this footage gets you, because doing ANYTHING - even a slight color correction - on H.264 footage, even at an insanely high bitrate, will look like crap, especially if you have to recompress it afterwards for web delivery. H.264 is a time-based (interframe) compressor, and the real-time H.264 in these cameras is even worse, because it allocates the same bitrate across the whole movie, leaving you extremely lacking when it comes to fast-motion video. Heck, up to 8x8 pixels can be combined into one in the color channels of H.264; that leaves you with something like a 320x240 color resolution when you shoot HD. It's a nice gimmick, and for news-style stuff it's probably all right (but I wouldn't want to design the post pipeline trying to even just edit the stuff - non-I-frame codecs just don't edit well, real-time scrubbing is almost impossible, etc.).
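The color-resolution arithmetic behind that claim, spelled out - the exact numbers depend on frame size, but the point (color detail collapses to sub-SD) stands:

```python
luma_w, luma_h = 1920, 1080                 # full-HD luma resolution

# standard 4:2:0 subsampling: one chroma sample per 2x2 luma pixels
print(luma_w // 2, luma_h // 2)             # 960 x 540 colour samples

# the 8x8 worst case quoted above
print(luma_w // 8, luma_h // 8)             # 240 x 135 - roughly the colour
                                            # detail of a sub-SD frame
```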
So Nikon and Canon will just have to try again - this was a miss when it comes to video capabilities.
Until then, everyone is looking at RED on November 13th, when they show their revised Scarlet 3K camera, which will cost around the same as the Canon photo cam (that's around 2,500 by around 4,500 pixels - "only" 11 megapixels - VIDEO, and RAW, and I-frame) and brings better photo functionality on top of exceptionally post-friendly video functionality.

Oh and if you run a company that makes consumer goods - this is how you should run it:

With 11 days to go, I want to take a moment to thank all of you. We have listened. At 1st, it was difficult. We kept saying to ourselves "don't these guys know how difficult this stuff is?"... and "if they want that, they should try and build one themselves!". Then we really began to get comfortable with the tiger we had grabbed hold of. And many of the suggestions actually seemed within reach. So we decided to really stretch ourselves. The results of that stretch will become apparent on the 13th. Many of you have had input that pushed us to reach higher.

Of course there have been many suggestions that are just out of the question. They are either impossible or not realistic at a reasonable price. If those suggestions were serious (I sincerely hope they were not)... a few might actually be disappointed. But I doubt it. Especially if everything is considered.

Long way around saying that many of you have made a difference. That really is what this forum is supposed to be about. If we responded harshly early on, it was just because we didn't know how to do it all. Now we do. Well, at least much of it.

Put your helmet on. The 13th is near. It is only fitting that I should post here 1st, then on red.com. You guys matter.

Jim