HTML5 Video - Introduction and Commentary on Video Codecs

There is a raging discussion out there on the web about the upcoming HTML5 standard and the inclusion of the video tag. Not about the tag itself, but about the codec used by videos played inside that tag.
There is a firestorm from free software advocates who want the only codec used inside this tag to be the (largely) patent-free, open source Theora codec - the other side wants the ubiquitously used, high quality H264 video codec. I think I can weigh in on that debate. If you don't care about, or already know, the history of codecs and containers, jump down to "my take on the codec war".

I am a content producer and have been following video on the web since the very, very beginning. I have advised firms on how to handle video on the web, have struggled countless hours trying to find the best solution to put video on the web, and have so far refused to use Flash to display it. I always believed that the web should be fundamentally free of technology that is owned by one company which can then take the whole web hostage to its world domination plans. I had hoped that the video tag would be introduced much earlier in the game, and have watched with horror as YouTube & co. made Adobe - a company that basically stopped innovating 10 years ago - the ruler of the web when it comes to moving pixels.
Now this is finally about to change - or at least that is the intention of Google, Apple, Mozilla and others who are pretty fed up with Flash for very obvious reasons (it's slow, development for it sucks, it's proprietary, the source code of the creations is always closed, it's slow, it's slow as fuck, it eats processor energy like nothing else). It never really made any sense to put a container inside a container inside a container to display a video - the second most power-hungry thing you as a consumer can do on your computer (the first being 3d/gaming).
Yet a video is not a video. To fit through the net a video needs to be compressed - heavily. Compression technology is nothing new, but it evolves over years and years. It's always a tradeoff between size, quality and processing power. The "Video" codec by Apple - probably the first "commercial" codec available to a wider audience - looks rubbish, but is insanely fast (it uses almost no processor on a modern machine) and the file size is pretty alright. It was capable of playing video on an 8 MHz processor, mind you.
Over the years lots and lots of codecs have sprung up - some geared toward postproduction and some toward media delivery - and there is a fine line between them. For the postproduction codecs you need full quality and only try to save a bit of storage. These are videos that still need work, so you want mostly uncompressed or losslessly compressed video. Processing power for decompression is an issue because you need to scrub through the video, and compression (making the video) shouldn't take ages either, because you want to work in realtime and not wait for your computer to re-encode a video just because you clicked on a pixel.

The other codecs are the "end of line" codecs - delivery codecs - made to compress the crap out of the video while "visually" losing the least amount of quality and producing the smallest possible file sizes. Here it doesn't matter how long the compression side takes, as long as decompression is fast enough to work on low end computers to reach the largest available audience.

While production codecs are fairly fluid - people switch as soon as something better becomes available, and it takes less than 5 months for a new codec to get established (recently Apple released the ProRes4444 codec and most postproduction companies are already using it; those that don't use single-picture sequences, but that's a whole different story) - the delivery codecs are here to stay for a very, very long time, because on the web people just don't re-encode their stuff and re-upload it - if it's there, it's there.

Now before I go into the format war and my take on it, there is one more concept I need to explain quickly - containers. Flash is a container for a video with a certain codec displayed inside it. So is QuickTime, so is Windows Media, so is Real Media. It gets confusing because MP4 can be a container and a codec at the same time. A container just holds the video and adds some metadata to it - the raw video can be ripped out of the container and put into another without re-encoding. This is what Apple is doing with YouTube on the iPhone. Adobe's last "great" innovation (or best market move ever) was to enable the Flash container to play H264 (a codec) encoded videos. Since Apple (among everybody else who isn't a flashy designer) thinks that Flash sucks, they pull the video out of the Flash container and put it into the (now equally bad) QuickTime container, and so you can enjoy Flash-free YouTube on your iPhone.
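The container-versus-codec point can be sketched in code. This is a toy model - the magic bytes and layout are made up for illustration, real FLV and MP4 parsing is far messier - but it shows the key idea: remuxing moves the identical codec bitstream from one wrapper to another, bit for bit, with no re-encoding.

```python
import struct

def wrap(container_magic: bytes, stream: bytes) -> bytes:
    """Wrap a raw codec bitstream in a (toy) container: magic + length + payload."""
    return container_magic + struct.pack(">I", len(stream)) + stream

def unwrap(data: bytes) -> bytes:
    """Pull the raw codec bitstream back out of the (toy) container."""
    length = struct.unpack(">I", data[4:8])[0]
    return data[8:8 + length]

# a stand-in for a real H264 bitstream (the bytes themselves are arbitrary)
h264_stream = b"\x00\x00\x00\x01-fake-nal-units-"

flv_file = wrap(b"FLV\x01", h264_stream)    # the "Flash" wrapper
mp4_file = wrap(b"ftyp", unwrap(flv_file))  # remux: new wrapper, no re-encode

assert unwrap(mp4_file) == h264_stream      # bitstream survived untouched
```

The quality is fixed at encode time; the wrapper is just packaging, which is why swapping it is cheap and lossless.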
Now, with the technicalities out of the way, what's all the fuss about?
HTML5 promises - once it becomes a standard - to advance the web into a media rich one without bolted-on add-ons and plugins that differ from platform to platform and browser to browser - a pretty awesome development for most people ever developing anything on the web. Part of the process of making this the new standard is to involve everybody who has something to say and is a shaker and mover on the web to set the direction this standard is going. It's a tough, rough ride - everybody and their mother wants to put in their tech, their knowledge, their thinking - I really would not want to be the decision maker in this process if you gave me a trillion.
The biggest and most awesome change in HTML5 - and the one most obvious to the end user - will be the inclusion of media content without a freaking container that needs a plugin - one that only half or less of the internet population has - to display that content. To make this happen, at least all the big browser makers need to agree on what can be played inside the new tags (video & audio).
This is where the debate heats up. I really don't understand why audio doesn't spur the same public debate as video does - but it's probably because Google is involved in the video debate and can change the direction completely on its own with whatever it chooses to support on YouTube.
The two competing codecs are Ogg Theora and H264. Now I am less familiar with the Ogg codec (but have tried it), so first a small history of H264. Back around 2000-2001 a company called Sorenson developed the first video codec that was actually usable on the web - there were different ones before, but they all sucked balls in at least one of the departments that make a great delivery codec. Sorenson made a lot of video people who wanted to present their work on the web very happy. Apple bought in and shipped QuickTime with the Sorenson codec and the ability to encode (make) video with it - albeit with a catch. To really get the full quality of Sorenson you had to buy a pro version - which cost a lot of money. The version that Apple included could play Sorenson (pro or non-pro) just fine, but the encoder was crippled to one-pass encoding only. The real beauty and innovation was in two-pass encoding - basically the computer looks at the video first and decides where it can get away with more compression and where with less.
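The two-pass idea is simple enough to sketch. This is purely illustrative rate control - real encoders use far more sophisticated complexity measures - but the shape is the same: pass one scores how much each frame changes, pass two splits a fixed bit budget in proportion to those scores, so static shots get squeezed hard and busy shots keep their quality.

```python
def two_pass_allocate(frames, total_bits):
    """Toy two-pass rate control: analyze first, allocate second."""
    # pass 1: score each frame's complexity (here: difference from the
    # previous frame; the first frame is intra-coded at full cost)
    complexity = [1.0]
    for prev, cur in zip(frames, frames[1:]):
        diff = sum(abs(a - b) for a, b in zip(prev, cur))
        complexity.append(max(diff, 1e-6))  # avoid zero-bit frames

    # pass 2: split the budget in proportion to complexity
    total = sum(complexity)
    return [total_bits * c / total for c in complexity]

# three "frames" of pixel values: a static shot, then a hard cut
frames = [[10, 10, 10], [10, 10, 10], [200, 0, 90]]
bits = two_pass_allocate(frames, total_bits=1000)

assert abs(sum(bits) - 1000) < 1e-6  # the budget is respected exactly
assert bits[2] > bits[1]             # the hard cut gets far more bits
```

A one-pass encoder has to guess the budget split as it goes, which is exactly why the crippled free Sorenson encoder looked so much worse at the same file size.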
Apple and the users were not really happy with this situation at all. So for a long time (in web terms) there was no real alternative to that codec. The situation was even worse because to play Sorenson you had to have QuickTime installed - before the advent of the iPod a losing prospect - I think they had a 15% installed user base on the web. It was the days of the video format wars - Microsoft hit back with Windows Media (which sucked balls quality-wise but had a huge installed user base) and on top of that there was Real Media (which was the only really viable solution for streaming video back then).
In the meantime another phenomenon happened on the audio side - mp3 became the de facto standard in the audio field, and a high quality one at that (back then). We the video people looked over with envy. When producing audio you could just encode it to mp3 pretty much for free with shareware apps, upload it to the web, and everybody could listen to it. There was nothing even close happening on the video side. The irony is of course that mp3 is MPEG-1 Layer 3 - part of a video codec - but the video side of MPEG-1 sucked really, really, really bad. Quality: shit. Size: not really small. Only processor use was alright-ish, but not great.

Jumping forward a couple of years (and jumping over the creation of MPEG-2 - the codec used for media delivery on DVDs, totally unsuitable for web delivery), the Moving Picture Experts Group - a consortium of industry companies and experts that developed (and bought in) MPEG-1 and MPEG-2 - decided to do something for cross-platform standard video delivery and created the MPEG-4 standard (skipping over MPEG-3 for various reasons, mostly the confusion with mp3 (MPEG-1 Layer 3)). MPEG-4 is mostly a container format, but it allowed for reference codecs, and the first of these was H263. This was already on par quality-wise with Sorenson, yet in a container playable by both QuickTime and Windows Media - the two last standing titans of media playback (by this time Real Media had already lost any format war). Great, you think - well, not quite. Microsoft wasn't enormously happy and created its own standard (as they do) based on MPEG-4 H263, called VC1 (I am not really familiar with this side of the story, so I leave you to Wikipedia to look that up yourself if you are so inclined). Web video delivery was sadly still not cross-platform, and the format war became a codec war - but there was now a standard underlying all of this, and the quality - oh my, the quality was getting good. Then the MPEG group enhanced the H263 codec and called it H264, and oh my, this codec was a pure godsend in the media delivery world - it looked great scaled up to huge resolutions and could be used online, for streaming and on high-def DVDs - and in the beginning it all was pretty much for free.
It looked like an Apple comeback in web video delivery, because QuickTime was for a while the only container that could play H264 without problems. Around that time Flash started to include a function in its web plugin to play video - interestingly enough they chose to include Sorenson video as the only supported codec. Word on the street was that Sorenson was very unhappy with Apple's decision to ditch them as codec of choice and instead push H263/H264. Now the story could have ended with Apple winning the format war right there, and all computers would have QuickTime installed by default - but it didn't, because out of nowhere YouTube emerged. YouTube used Flash, YouTube scaled big time, and it made it - for the first time ever - really easy for Joe the Plumber and anybody else to upload a video to the web and share it with the rest of the world family. It changed the landscape in less than 6 months (I watched it, it was crazy). As a content producer you finally had a really good codec to upload video in very good quality, but the world chose the worse quality, inside a player that sucked up 90% processing power, with the codec of choice needing another 90%, and all that came out was shit looking video that everybody was happy to be over - but the user experience of hitting an upload button and having everybody watch your video was just unbeatable. Eventually, just when people realised how bad these videos looked compared to some Hollywood trailers that still used QuickTime and H264, Adobe included H264 in Flash and thereby prolonged their death once again (without innovating at all, it must be said).
Now fast forward to today - again a group of clever people, big companies and such have sat down to bring us HTML5 and the video tag. That tag, as said, is going to rid us of any plugins and containers and instead just play pure video as fast as possible right inside any browser that supports HTML5. The problem is that people can not agree on the codec to be used. Why, you ask, if H264 is so great? Because H264 was developed by a for-profit group of people, they want to make money, and they have freaking patents on it - not that this has hampered web video in any way to this day - but for the future standard lots of people have taken offense to that. There is in fact a whole ecosystem of alternative codecs (audio and video) in the open source world, and the most prominent is Ogg and its video incarnation Theora. They are mostly patent-free because the company that originally developed these codecs gave the patents to the open source community (yet it's still not clear if that covers the whole codec). What happens when patents enter the WorldWideWeb could be seen with GIFs. The GIF graphic (moving or non-moving) was once a cornerstone of the web - a more popular choice for graphics than anything else (small, readable by anything, blahblah). Then a company found out that they held the patents on it (luckily just shortly before they ran out) and sued a lot of big websites for patent infringement, wanting royalties of $5000 from every website that used GIFs. They would have killed the web with that move (and they were in the right, law-wise) had the big companies they sued first not dragged out the court cases until the patents ran out - now the GIF file format is in the public domain.
Now it's understandable that this lesson should be learned, BUT - and here is

my take on the codec war:

Flash is a MUCH bigger threat than patents on the codecs used. Because not only does it use the patent-encumbered codec inside its container, but the container is totally owned by one company - and a company that has shown often (PDF) that it will do everything to take control of anybody using its technology, even if that is the whole world.
Now 95% of all web videos are delivered through Flash these days, and to change that a lot of things need to happen. First Google needs to drop it on YouTube - they just announced a beta which does exactly that - but even with Google's might that's just not enough - content producers need to hop on board as well. And here is where the chain breaks for the "free and open codecs of OGG". As you can see from the history above, H264 has been the industry standard on a wide range of devices, including the web, for years now. The whole backend has settled on it, and there are really good workflows to create H264 video. With the video tag, YouTube is less relevant than it was at its beginning, because all of a sudden it's easy to incorporate video into your own webpage. Now if Google were to say "we use Theora only", high quality content producers would just say "fuck you" and post their videos on their own sites in much better quality, without the hassle of finding a workflow to produce Theora videos (for non-terminal-using people there still just isn't an easy way to do that which could be used in a professional, non-fiddly environment - we like to create, not code for a delivery, sorry).
But that's not enough - almost ALL consumer cameras released over the last 2 years, including the hot shit DSLRs with video functionality, produce H264 that can be "just" uploaded to the web without re-encoding. That saves YouTube and Vimeo a lot of processing capacity - and with their lousy revenues they sure don't want to add another server farm just to re-encode each and every video they have into a codec with worse quality. You know, 90% of all videos on the web are already encoded in H264 as of now (and Theora maybe has 0.2% of the other 10% that are left over). It's uneconomical and not sensible to re-encode all of that, any way you look at it - especially since the quality is not surpassed by any other codec out there, patent-free or not.
I would say go H264 now, and have a new consortium of browser developers and other companies develop a new codec from scratch (or build upon Theora) that is patent-free AND high quality AND has a good workflow (meaning it is supported by hardware vendors and OS vendors across the board). That can then take over from H264 (just like PNG took over from GIF in less than 2 years following the patent threat). Leave the codec question open for now and let the web sort it out for itself (for the moment) - like with the img tag: it doesn't matter if you put PNGs, GIFs or JPGs in it (or any of the other plethora of image formats); as long as the browsers support it, it's viewable - and so far that has shaken out a good road to take (see the switch to PNGs with transparency, which in my opinion also helped a lot to bring down IE5, which didn't fully support them - so the market sorted it out quickly (as in 5-years quickly)).
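The img-tag-style "let the web sort it out" approach is in fact what the draft video tag already supports: a page can list several sources and the browser plays the first one it can decode, falling back down the list. A sketch of that selection logic (codec names abbreviated for illustration):

```python
def pick_source(sources, supported):
    """Return the first listed source whose codec the browser supports,
    mirroring how the HTML5 video tag falls back through <source> entries."""
    for url, codec in sources:
        if codec in supported:
            return url
    return None  # no playable source: the tag shows its fallback content

# the page author lists their preferred encoding first
sources = [("movie.mp4", "h264"), ("movie.ogv", "theora")]

assert pick_source(sources, {"h264", "theora"}) == "movie.mp4"
assert pick_source(sources, {"theora"}) == "movie.ogv"
assert pick_source(sources, {"something-else"}) is None
```

So the codec question can stay open per-page exactly the way image formats did - each browser simply skips sources it cannot play.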
BTW the only browser maker that just does not want to go down that route (and would rather cripple itself with Flash in the meantime) is the oh so open minded Firefox. Sorry, I fail to see your point, dear Mozilla developers - you are not gonna make a lot of friends that way outside of the very, very small open source community (and even here your approach is not liked universally, for example by those who can not install a Flash plugin because Flash is not supported on their platform (PPC Linux, for example)).

Get rid of Flash first, then get rid of H264 later, when you have something equally good on all accounts. Going backward with technology is just never the way forward - open source or not.


PTEX&OpenCL - or How Steve Jobs Companies Are Changing 3D

Something amazing came through my ticker today - something that is a game changer and that, together with another technology, will change the way I work and make it much more enjoyable.

First some basics, for those that have no clue about all this, to understand what I am talking about. There are basically the following six steps to get to a final 3d picture.

1. Modelling: Multiple approaches get you to a mesh model that consists - in the end, mostly - of polygons. You can scan, you can push little points around in 3d space, you can use mathematical formulas to create, subtract or add simple forms, or other formulas to round edges, revolve lines or extrude other lines. The end product is almost always a polygon mesh with x amount of polygons - the more, the higher the resolution of the model and the closer you can look at it. About ten years ago a really nice way to model high resolution meshes came into existence, called SubDivision Surfaces, which lets you model a coarse resolution model - much easier to understand and alter - and then generate a highres model out of it. That was the first real game changer in the 3d industry, and the reason why character modelling became so "easy" and so many people are making so many great models.

2. UV Preparation: Now a model made out of triangles looks less than realistic of course, so you need to tell the program what kind of material is on the model. Here a lot of options are available, but especially for film work and characters you want something realistic, and you get that by starting from something realistic - like a photo - and altering it in such a way that it fits on your model (or you paint it from scratch). For such a picture to be put onto the model, you need to flatten the model out into a two dimensional surface. You can imagine this like taking a dead animal, skinning it, then making the skin of the animal flat. Like so:
(there is actually a program that stretches the "hides" pretty similarly to this very analog process). It's a very dull process on a complex model - mostly you have to take your nice model apart and do all kinds of voodoo to get it artifact-free. No fun, and certainly not really creative.

3. Texturing: Once you have your nice model with a more or less nice UV map, you start to apply your texture - photographic or programmatic or a mixture of the two. Here a lot of "fun" is to be had as you add little bumps and tell the software how shiny the model should be, how reflective, how refractive, and lots of other things I don't really want to go into - but it's a nice step in general.

4. Light & Camera: Without light there wouldn't be anything visible. So you set up some virtual lights, which act and react just like the different kinds of light sources you find in reality, plus some more tricks that don't exist in reality but can add to a realistic picture. You also set up a camera - your virtual eye - which again acts just like a photographic camera in real life (almost). Both a creative and fun process.

5. Animation: Then you animate your model - push objects around, apply physics, deform the model. You can either do that by hand or get animation data from motion capture - you might have seen people in black suits with pingpong balls attached to them, or faces with dots all over them, for example. This step is both fun and frustrating - with hand made and captured data alike. The human eye is so susceptible to small problems in movement that not even a certain 500 million dollar production can fully perfect this step to make it realistically convincing.

6. Render: Then comes the process that is mostly free of human intervention, but not free of hassles and frustration: the rendering. On a normal stock computer this could take up to 50 hours per frame for Avatar. At 24-25 frames per second (or double that in the case of stereoscopic 3d) you get an idea how much processing power is needed. And if you make a mistake - render it all over again. Rendering is also a complex mathematical problem, and there are bound to be errors in the software, so prepare for the worst here.
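To make step 2 a bit more concrete: the very simplest UV mapping imaginable is a planar projection - drop one axis of each vertex and normalize what is left into the 0..1 texture square. Real unwrapping (the painful part) has to do this for curved, closed surfaces without stretching or overlaps, which is where all the voodoo comes in. A toy sketch:

```python
def planar_uv(vertices):
    """Project 3D vertices onto the XY plane and normalize into the
    0..1 UV square - the most naive possible "unwrap" (step 2 above)."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0   # guard against degenerate spans
    span_y = (max(ys) - min_y) or 1.0
    return [((x - min_x) / span_x, (y - min_y) / span_y)
            for x, y, _z in vertices]

# four corners of a tilted quad floating in 3d space
quad = [(0.0, 0.0, 2.0), (4.0, 0.0, 2.5), (4.0, 4.0, 3.0), (0.0, 4.0, 3.5)]
uvs = planar_uv(quad)

assert all(0.0 <= u <= 1.0 and 0.0 <= v <= 1.0 for u, v in uvs)
assert uvs[0] == (0.0, 0.0) and uvs[2] == (1.0, 1.0)
```

Anything more complex than a flat quad needs the model cut into pieces and each piece flattened separately - exactly the tedium PTEX promises to eliminate.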

Now why am I telling you all this? Well, one step, it seems, has just been eliminated. Progress in the field of visual effects is very erratic - you have 2-4 years of no progress at all, and then all of a sudden a floodgate opens and something changes dramatically, or multiple things do. I would say we had a quiet period the last 2-4 years - mostly because the development of the really cool stuff happened "in house", meaning that really smart programmer people were hired by the big VFX companies to program certain things for certain needs and problems. A lot of problems in the above pipeline are already solved, I think, but have never seen the light of the broader world and instead stayed - and sometimes even died - within certain companies. It's really frustrating: the software companies struggle with the most basic problems (plagued by slow sales and a bad economy), and then you see Pirates of the Caribbean, for example, where they completely figured out how to motion capture live actors (record their movement) on set with no real special equipment - and that technology is still only available behind the locked doors of Industrial Light & Magic. For me as an artist, that is a valuable tool that has been created, and I could do cool stuff with it, but I can't get my hands on it because of corporate policies.
So it's REALLY amazing to see that Disney - the intellectual property hoarding company for whom copyright law has been rewritten at least once - is releasing a software/API/file standard as open source as of today. Code that no less promises to completely eliminate step two of my list above. In their own words, they have already produced one short animation and are in the process of making one full feature animation completely without doing any UV mapping. I can only try to explain the joy this brings me. UV mapping has been my biggest hurdle to date - I never really mastered it - I hated it. It's such a painstakingly long, tedious process. I normally used every workaround I could find to avoid doing UV mapping. It's crazy to think they finally figured out a way to get there without it, and I think this will drop like a bomb into every 3d app and supporting 3d app on the market within a year (a bit of wishful thinking here) - at least I can hope it does, and I hope that Blender, Autodesk and SideFX are listening very closely.
Combine that with the recent advancement in render technology of using OpenCL (developed and released as part of Snow Leopard by Apple and made an open standard, with ports for Linux and Windows now available) to render partially on the graphics card (GPU) - which speeds up rendering up to 50 times. That means a frame from Avatar takes only one hour to render instead of 50 - or, in a more realistic case, the current render time here for an HD shot of 2-5 minutes on average gets cut down to somewhere between 10 seconds and a minute, which would actually make rendering a fun part of the process.
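The back-of-the-envelope math behind those numbers is worth spelling out (the 50x figure is the rough assumed GPU speedup from above, and the film length is a guess for illustration):

```python
def render_hours(minutes_of_film, fps, hours_per_frame, speedup=1):
    """Total machine-hours needed to render a film at a given per-frame cost."""
    frames = minutes_of_film * 60 * fps
    return frames * hours_per_frame / speedup

# Avatar-style numbers: 50 h/frame on a single stock CPU, ~160 min of film
cpu = render_hours(minutes_of_film=160, fps=24, hours_per_frame=50)
gpu = render_hours(minutes_of_film=160, fps=24, hours_per_frame=50, speedup=50)

assert cpu == 11_520_000   # 11.5 million machine-hours on one CPU
assert gpu == cpu / 50     # the assumed 50x OpenCL/GPU cut
```

Which is exactly why these films render on farms of thousands of machines - and why a 50x per-machine speedup changes what a small shop can even attempt.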
Now we all know who is behind both companies releasing and opening this stuff up: the mighty Steve Jobs. You could almost say there is an agenda behind it to make 3d a much more pleasurable creative pursuit than it currently is - maybe Mr. Jobs wants us all to model and render amazing virtual worlds to inhabit, where he can play god ;)
Good times indeed.

What's left? Well, animation is still not fully worked out, but with muscle simulation and easy face and bone setups it has become easier over the past years - still a hideously tedious process to make it look right - I don't know if there will ever be a solution for it as revolutionary as PTEX. Motion sensors might help a bit in the near future, as might techniques that make models physically accurate, so that things can't pass through each other and gravity is automatically applied. High quality texture maps that hold up to very, very close scrutiny are still memory hogs and burn down the most powerful workstations. The rest will get better with faster, bigger, better computers, as always (like all the nice lighting models that are almost unusable in production to date because they render too long). Generally, with UV mapping and rendering problems out of the picture, we are so much further along that I might get back into 3d much, much more.
PTEX - the official website
The PTEX white paper
PTEX sample objects and a demo movie

Disclaimer: I have been doing 3d since 1992, when I rendered a 320x240 scene of two lamps on an Amiga 2000 with raytracing - it took 2 days to render. My first animation in 1993 took a month to render. Then I switched to Macintosh (exclusively) in 1995 and did 3d there for a while. It was so frustrating that I never made a serious effort to get really good at it - now I still do it alongside compositing / VFX supervision, but rather as an add-on and for previz than as main work.


26c3 - Here be Dragons!

The congress for the crazy ones, the wild ones, the good ones, the ones defending freedom and digital liberties, keeping the information flowing unhindered and without borders, the last of their kind - the real dragons. Those who share and know will meet and talk and copy and paste, and with so much knowledge brought to a boil the outcome is unknown.

There be Dragons! Dragons Everywhere. I will be one and so should you.

26c3 Official Website

And on the 28th around 23:00 in the smokers' lounge the dragons will undertake a special journey through time:
Indian Timetravels - an audiovisual performance by Das Kraftfuttermischwerk & protobeamaz:fALK (hey, that's me ;)


Watch the Watchers Do Their Dirty Work

TASER International has teamed up with a partner to create the AXON, a secure police recording system that is fully portable and strapped to the head. Now there are multiple things going on here that make this noteworthy. For one, it sends (streams?) the data directly to central servers in an encrypted, secure way. That means nobody (not even the cops themselves) can tamper with the footage. It could therefore be an effective tool to watch the watchers and hold them accountable for "mistakes" that might happen when an arrest or house search or such is being made. That would of course require the videos to be made available to the public at large, but I think this might even be a business model for them, so I actually do think this is likely. Now there needs to be a law that prohibits the nice policemen from turning these things off in the heat of the moment - make it a felony punishable with a prison sentence, or immediate suspension from the job, to avoid the "oh, I forgot to turn it back on" kind of moments that these fellow friends would likely have at times.
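How a recording can be made tamper-evident at all is worth a sketch. One standard approach (an illustration of the general technique, not necessarily what TASER actually built) is a hash chain: each uploaded chunk of footage carries a hash that commits to everything before it, so editing any chunk after the fact breaks every link that follows.

```python
import hashlib

def chain(chunks):
    """Hash-chain video chunks: each link commits to all footage before it."""
    links, prev = [], b""
    for chunk in chunks:
        prev = hashlib.sha256(prev + chunk).digest()
        links.append(prev)
    return links

def verify(chunks, links):
    """True only if no chunk was altered since the chain was recorded."""
    return chain(chunks) == links

footage = [b"frame-block-1", b"frame-block-2", b"frame-block-3"]
links = chain(footage)  # stored server-side at upload time

assert verify(footage, links)                      # untouched footage checks out
tampered = [footage[0], b"edited!", footage[2]]
assert not verify(tampered, links)                 # any edit breaks the chain
```

As long as the chain of hashes lands on a server the officer cannot touch, even the camera's wearer cannot quietly rewrite history afterwards.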

Now that this is covered, I would like to raise a voice for this whole system making it to the general market, for everyone and the next best VJ to wear all day. Imagine not having to worry about how much capacity the memory stick in your pocket has; instead you record your whole day onto a server of your choosing (probably your own home server for the more intimate moments), all wirelessly, and you wear just a black blob over your ear - no fugly sunglasses or baseball caps or fake eyes or such. Yes, that clearly means I WANT ONE :) but I am sure the bright people at Taser Int. have figured that out already and will present us with a mass market version (probably sans encryption, to give back to big brother what has been taken from him).

via engadget


E-Voting in Germany was illegal

The Federal Constitutional Court (Bundesverfassungsgericht) of Germany has made it clear that the electronic voting equipment currently in use is against the law. The last local election was also against the law. That means Germans are going to cast their ballots on paper from here on out, until e-voting machines comply with the law - which demands that the transfer and storage of the votes can be verified by EVERYONE, not just an "expert" panel.

A big win for democracy and paper freedom fighters in Germany - and an especially big win for the Chaos Computer Club, which spearheaded the campaign and made sure that the court was well informed of the dangers e-voting poses. That's the third judgment the court has handed down in the last 12 months that is in favor of the people, showing the German government the boundaries of what it can do in its urge to overthrow the constitution and go straight to a police dictator state.

More at the source (de).


E-Paper at last

The thing you see in sci-fi movies - moving wallpapers and newspapers that update as the news happens. It has been a long time coming - the first mention of a working e-paper technology was at least 10 years ago. While there have been first generation ebook readers, and magic ones with unicorn hairs embedded, the first e-paper that really knocks me off my feet is the one shown here at a Taiwanese book show. Impressively big, clear, crisp, paper-like, and even the CMYK colors are there. Now just holographic storage, quantum computing and some solution to the pesky energy problem, and we are all set for a sci-fi future…

via Engadget


crowdSpring - the end of the design business

It had to happen, and I like how it is happening. The business I trained for over long years will never be the same. CrowdSpring is a website aimed at destroying the "professional" design market once and for all - that is, the multi-thousand dollar design projects that produced circular logos in Helvetica. What the website does is let a company make an offer for designs - logos, websites etc. A logo, for example, goes for between $200 and $1000 at the moment - that's about the money a professional design firm asks just to come to the first meeting. Then everyone and their dog can submit logos, the company can select the one they want, and the winner gets the money - no matter where he or she is, in bed or in a highly decorated office with the $2000 lean-back chair. I am more than certain that this will catch on big time and in the process will probably completely wipe out the "pro" design market with its (wo)men in black. This is especially true at a time when firms want to cut costs, and when more firms are doing in-house design for the more complex stuff (like supervising a CI).
So if you need a logo, go to crowdSpring, offer $300, and get 1000 custom-made logos to choose from in return.

And for the snooty designers out there - it's gonna hit everybody.


Apple Display Port and the Analog hole

The more I read and think about it, the more I think there is a big issue with the new display port that Apple is trying to sweep under the rug. They want to plug the analog hole once and for all. I think it's not only to do with copyright shit - though I reasonably believe that is one of the bigger underlying issues - it's also the floppy drive issue. Huh? Yes, I am actually of the generation where Apple removed the floppy drive without asking anybody. A huge outcry, and then a year later people applauded them for the move - you know, there was nothing really nice about floppy drives, CD burners were just better, and looking back it was one of the smarter moves for Apple - pushing the whole industry forward. The few that needed a floppy drive could just attach one through USB - sure, for them it was the more expensive road, but it must have been a tiny minority - I know for myself that I never looked back to the floppy era again.
Fast forward eight years and Apple brings out a laptop that for the first time is missing any way to connect analog video to it. Mind you, I am a VJ and the only way to do our thing is through analog video cables - but I am also a vfx professional, and go into any professional video studio these days, have a hard look around, and I promise you - you won't find any kind of analog video anywhere anymore. The projectors that do not have a digital video input option of some sort also seem to have died out. So in general I would say that yes, it's a move that from a very, very far perspective makes sense on Apple's part - ditch analog video and the world sees innovation and crisper pictures and maybe soon wireless digital pictures. Now there is only one big, big problem with that thinking, and I can see that Apple is not seeing that angle - because they rarely come into contact with it: the whole event business needs to bridge ultra long distances with video cables, and as of right now the only digital standard that can bridge 50+ meters is digital SDI - a very professional option that almost no beamer under $5000 on the market supports, nor does Apple have a video-out solution for it - especially not on their portable laptop line. I am sure with budgets above 10,000 Euro per event you can cook up an all-digital solution already, but this is not the market where 95% of events are - and the whole backend has not switched to a digital format either. Unlike the floppy, which was already replaced by the CD burner and shortly later by USB flash drives, there is no standard in the video world for digital video. It starts with the cables, goes through compression and codecs to framerates and resolutions - there are about a trillion combinations and no market standard has prevailed so far.
So forcing a switch-over from analog to digital video at the moment is not a clever idea - no matter what your motivations are - and when half your motivations are bad already (DRM, evil evil DRM) then this switch is going to alienate a LOT of people. Even if the event market might only make up 0.5% of Apple's laptop-buying population, I think the people still needing analog video at this moment approach more like 5% (old beamers, old TV sets, bars wanting to show a video etc. pp.) and that is a huge chunk. I am quite sure Apple has to come out with a native adapter at some point. The current solution when you ask an Apple employee? Get an old DVI-to-video adapter and a Mini DisplayPort-to-DVI adapter and chain them together - very elegant, Apple - very professional - very reliable, such an adapter chain. BTW - just 3 years ago you could plug an SVHS cable right into your PowerBook without any kind of adapter. Apple used to be about simplicity - those days seem to be waning.


Red Announces 28k camera - and a trillion other options

So last night was the big announcement of the RED digital cameras, and did they announce. They basically turned the whole movie-making industry on its head, poured pure caffeine on them, then splashed some more cold water - rinsed and repeated a couple of times. Some users are speculating that they raided Area 51 in the night, stole a camera chip from the UFO there and reverse engineered it. Why? Well, there is for one the insanely expensive, far-off big bambooza of a 28k Monstro chip. You know, the most high-definition films going through post production are 4k, with I think Batman for the IMAX topping it at 6k. Now how you would sufficiently post produce a 28k movie is beyond me - also beyond me is why you would do such a foolish thing - but you know, you can crop, zoom and pan later and still have 4k. But what I am much more excited about is the modularity approach, and this might not just be a game changer in the camera business but maybe even the computer biz. Imagine Apple sold 2-3 different enclosures for laptops and you could fit a wide range of gear in them - pretty much plug and play. New motherboard, new processor, new graphics card, new display, different keyboard. Well, this is pretty much what RED is doing for their cameras - yes, you can take out a 2/3", 12-bit, 11+ stops sensor and replace it with a Super 35mm, 16-bit, 13+ stops sensor later, when you have made the money or whatever. Also of course you can change anything else: batteries, outputs, lenses, handles, screens, shoulder mounts, whatever. Oh, and for the added bonus: it can do 3d too (with two cams that fit together nicely for that option). Now they are not quite hitting the original goal of a 3k camera that works out of the box for $3,000, but the smallest brain (or body) is $2,500 - for about $5,000 you should be able to assemble a working unit. It's very revolutionary stuff they do over at RED.
Since delivering the REDone nobody doubts their ability to deliver these two new camera systems either. It's just a bit late - don't expect to hold any of the new ones in your hands before fall next year (that's a year away). Now it's going to be more than interesting what the big shots like Sony, Panasonic, Canon and Nikon (and Hasselblad) are doing - this invades their turf big time.

For more info (or info overload might be better) head over to the REDdot.


Stereoskope - A Blinkenlights Installation


The word is officially out so I can talk about it publicly. Project Blinkenlights is awakening again and it's bigger and more badass than ever. The Blinkenlights crew will turn Toronto City Hall in Toronto/Canada into the biggest-analog-lowres-giant-dual-screen on earth. As seen before in Berlin and Paris, every window will turn into a pixel with 16 steps of brightness, sustaining an impressive 30 frames per second picture feed across two buildings. Classic computer games, crazy controllers (*cough*iPhone*cough*Wii*cough*), love letters and a custom tailored live VJ performance on the houses are all in play.

For the VJs out there it is maybe of interest that a lot of the picture-making process is driven through Quartz Composer - there will even be an official Blinkenlights Quartz Composer plugin soon. That means not only can you preview your graphics in Quartz Composer to see how they look on the house (3d is in the works), but you can actually use Quartz Composer to stream videos to the blinkenserver that is then serving them to the house (30fps, I mentioned, right?).

The installation will light up mid-to-end September and stay on for about 2-3 weeks.
As mentioned I will do a live performance on the houses on the 4th of October - the night of the Nuit Blanche - standing on Nathan Phillips Square in front of City Hall, with music and all.

There will be a live stream of the feed that goes to the house, and stay tuned here as I will be "live blogging" as much of the installation setup and tryout as my time permits. It's wicked and monumental and I am proud to be on the Blinkenlights team.

(thanks Tim for putting trust in me).

The official website.
The official blog

FaderFox - the ultimate Midi Controller Solution

I have been looking for years to get a "perfect" midi controller for my live visual endeavors. That is no exaggeration. I have high expectations of a midi controller. I want to lug it around the world - so it has to be small enough. I want a plethora of functions on it so it actually helps me perform and is not just a gimmick. It has to be robust enough to withstand outdoor gigs with sandy gusts and water spray as well as indoor gigs with sweat dropping from the ceiling and 80°C heat from the lamps above. It has to be scalable and adaptable to new software coming along, and it somehow has to fit my own style of performing - which means the knobs need to be properly soldered on, the faders have to have a solid feel and the buttons need to be tested to withstand 1,000,000 presses.

I was impressed with the Mawzer concept two or three years ago because it was poised to deliver on that promise. Yet the final product they came up with is much less extensible, much more expensive and much bigger than what was originally proposed - the midi controller for everyone it was not. So I looked into building my own. There are kits out there, and with access to lasercutters through the web that looked like a good alternative. There was only one problem - someone would have needed to shower me with time to make such a project feasible. Soldering, testing, building and rebuilding a midi controller is - no matter how good the kit you are buying - a task of months. Sure, you have the perfect controller for your needs in the end, but the time to get there I would rather spend on creating content that looks wicked.

Then along came a project (next blog entry in a minute) that requires me to leave my trusty videomixer (an Edirol V4 1st generation) at home, because it ain't going through the old route of outputting digital video through an analog cable back to a digital projector - instead it goes digitally over the air directly to the very analog output. So the only piece of hardware that gave me direct access to my output outside of the laptop I have to toss out for that event - and since the event is quite big and a lot of focus will be on what I am doing, a replacement for that loss had to come.

But let's make a little detour. Some might ask "oh, you have the laptop as input controller". Yes, that's true, and while I use the keyboard of the laptop extensively and can feel blind where the cursor is when using the trackpad, there are still some things that are not fast enough using the computer. For example changing the speed of a video clip. There is just nothing that allows you to do that quickly - quickly as in "make the decision in one beat of a 120bpm clip and have the speed change at the latest at the 4th beat or next downbeat". For that, direct control is absolutely necessary.
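To put a number on that window (a quick back-of-the-envelope calculation on my part, not a spec from any gear):

```python
# At 120 beats per minute one beat lasts 60/120 = 0.5 seconds.
bpm = 120
beat_seconds = 60.0 / bpm

# Decide during beat 1, land the change by beat 4 at the latest:
# that leaves three beat intervals of reaction time.
reaction_window = 3 * beat_seconds

print(beat_seconds, reaction_window)  # 0.5 1.5
```

One and a half seconds to notice, decide and execute - mouse-and-menu interfaces simply cannot keep up with that.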

So I went on another information-scavenging hunt on the interwebs to see if there was any solution to my controller need. I looked at all the ones that are used by VJ friends. Cheap ones, expensive ones. I tried to imagine using them and always came up with shortcomings. Most are just too big with too little functionality - not one would fulfill all my needs in one piece. I need trigger pads, sliders, knobs, rotaries and lots of them - like LOADS of them - and, as said, in a small package to travel. I almost gave up, seeing myself trying to move the mouse cursor in unmatched fashion to some crappy output.
Then I stumbled over Faderfox. I don't know how I got there, and it's even a German company - I had missed them all those years. Apparently they are very well known in the Traktor/Live world of musicians - I had just never seen someone using one of the controllers they offer, nor had I ever stumbled across their website.

Their website at first put me off a bit - as you might have noticed in the opening paragraph, I need some serious professional gear. I just don't want anything to break halfway through a performance. Things need to be sturdy, and the knobs and faders and buttons of exceptional quality, so as not to lose that great break in the music that might make a difference for the feel of a whole performance. So a website that looks like it was done in the mid 90s and never updated makes me suspicious about the professionalism - yet in my midi research I had already found out that the midi hardware guys don't have a great sense of aesthetics when it comes to websites (Doepfer, anyone?). Also on their front page was something so ultimately intriguing that no bad design could ever throw me off before I figured out more.

What I saw was a modular midi controller concept of ultra small units made for ultra portability. And on top, I could figure out instantly that all the control I will ever need is there in a maximum of five units - with three units already controller heaven. I googled a bit more and found out that people who bought them liked them and were impressed with their professional feel. They even run battery powered, and three of them are not wider than a typical Apple pro lappy. Also, the DJ2 unit they offer pretty much replaces all the controls of my Edirol V4 videomixer. So I ordered one.

The order was processed with human kindness (yes, a person on the other end - seemingly the developer himself) and delivered at outstanding speed (the day after the money arrived). But what I got blew away all my expectations. Not only was the controller everything that was advertised - lightweight, professional build quality, faders as solid as they can be - it was way more versatile than I ever thought. You know, you look at a controller and see all the buttons and your mind maps them to functions, but this controller crams more functions into its small space than most other controllers I have looked at ten times its size. Buttons are all dual-configurable with the shift button - that means there are 36 button messages you can get out of the controller; the xy joystick (which is sooo smooth) can send on four different channels - switchable in a nanosecond; the two rotary controllers send out continuous data on 12 different channels and notes on four, and have a "push down" event on eight channels. All sliders and knobs can be muted by holding down the shift key, to avoid jumps in the controlled parameters. Everything works flawlessly after a week of testing (though the rotaries are not supported by VDMX yet (nor by Quartz Composer), which sucks) and it's a lot of fun to use. There is a power adapter available as an extra add-on (chainable in case you get more controllers from them), and it all supports midi chaining and midi through. Batteries are included and are supposed to last 80 hours (so through the longest club session ever, if you are so inclined). I can recommend this wholeheartedly to anyone out there - especially VJs - looking for that perfect transportable midi controller with more control than the mind allows. The price - though it does seem steep for such a "small" thing - is fully warranted.
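The shift-button trick is simple to reason about: every physical button sends one message per shift layer, so 18 buttons times 2 layers gives the 36 distinct messages. A minimal sketch of that kind of mapping (hypothetical code of mine, not Faderfox firmware):

```python
class ShiftLayerMapper:
    """Maps physical button presses to message ids across two shift layers."""

    def __init__(self, n_buttons=18):
        self.n_buttons = n_buttons
        self.shift = False  # state of the dedicated shift button

    def set_shift(self, held):
        self.shift = held

    def message_id(self, button):
        # Layer 0 while shift is up, layer 1 while shift is held down.
        layer = 1 if self.shift else 0
        return layer * self.n_buttons + button

mapper = ShiftLayerMapper()
plain = mapper.message_id(5)    # button 5, plain layer -> id 5
mapper.set_shift(True)
shifted = mapper.message_id(5)  # same button, shift held -> id 23
```

The same doubling idea is why a handful of tiny boxes can carry more distinct controls than a mixing-desk-sized controller.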


Whats wrong with Adobe Apps?

A website lets users speak out about what frustrates them with Adobe apps, and the new, open Adobe is responding at a depth and length that I have not seen from any other company ever - I think that's quite great.

Read the following part about a Linux Port of Photoshop in the discussion:

Linux - there are a lot of people there wanting Linux versions of your leading apps. And yet that's been glossed over time and again. And while it wasn't me that added that particular gripe, Photoshop and Lightroom really ARE the one and only reasons why I can't ditch this POS Windows operating system for Ubuntu.

[I can't speak for other products, nor do I want to give you false hope. Having said that, the architectural investments we're making will make the Photoshop codebase more flexible and portable over time. The fundamental problems with moving to Linux are A) sales to Linux users don't represent growth, they represent replacements of Windows units, and B) Linux use is heavily based in antipathy towards non-open-source commercial software. --J.]

I mean, this is some reasonable thought on the issue that even Linux users might understand.

There is much more juice, with questions and answers about prices of the Creative Suite and user interface consistency between apps. While some answers are a bit inward-looking instead of outward-looking, there are so many details in there it's hard to recount them all here, so head over first to the website and add your gripe, then go to the dear adobe top 25 problems (it's a must read, and if you ever come into contact with Adobe apps I am sure you agree with 26 of the 25 points being made), and finally go to the official Adobe Photoshop insider blog of John Nack to see a huge company open up to the world in a way I have not seen before. Especially follow the discussion after the blog entry, where there is a healthy back and forth between users and developers - I hope some other big-time developers out there are taking a cue.


Mudbox - the end of rendering is near

I promised to put out some more cool stuff that was announced at Siggraph, because I think that Siggraph this year has awoken the sleeping Motion-Graphics-Visual-Effects-Dragon(tm). Some extremely unexpected wow factor came from a video presentation about the new Autodesk Mudbox 2009. It's a competitor to the much acclaimed ZBrush (which has been lacking development lately, so competition is good) but was never really head-to-head with the latter 3d sculpting tool (coming three years later to the party). Basically this category of programs is "realtime sculpting and texturing of organic forms with subdivision models". Great innovation has come from this approach, like an extended focus on normal and displacement maps and easier UV transfer between models. ZBrush dropped the bomb like last year that they can now do ultra-high-precision modelling in realtime and add some lighting in the process of sculpting - making an unprecedented modelling process possible that has already generated a lot of highly detailed, weird and real-looking characters (some of the Lord of the Rings characters were modelled in ZBrush). But this year Autodesk, who bought Mudbox like a year ago, is upping the ante with modelling in ultra ultra high resolution - we are talking about 15 million polygons and more, adding detail that floors everyone - and the modelling is still realtime, and yes, you are actually working with the quad polygons and not some normal map trick. The guy in the video is subdividing the model he works on again and again - to like level 7 or 8 (quadrupling the polygon count at every step) - most programs crash out at level 4 up til now. But when you think it couldn't get any better, he turns on his texture, light with shadows, realtime HDRI lighting, realtime ambient occlusion and realtime depth of field. WTF.
Looking at the model (see the picture above) you would think it was rendered with RenderMan and took about 4 hours to render - yet he continues to draw details on this fully lit, fully textured model as if it were a low-poly, un-textured model - the light updates, the viewport updates, he can reposition the main lights to see how the crinkles he just painted work - absolutely stunning.
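That quadrupling adds up fast - each subdivision level multiplies the quad count by four, so even a modest base mesh explodes past 15 million polygons around level 7. A sketch of the arithmetic (the 1,000-quad base mesh is my own assumption, just for illustration):

```python
def quads_at_level(base_quads, level):
    """Each subdivision step splits every quad into four."""
    return base_quads * 4 ** level

# A hypothetical 1,000-quad base mesh:
for level in range(8):
    print(level, quads_at_level(1000, level))
# Level 4 - the old crash point - is a comparatively tame 256,000 quads,
# while level 7 already gives 16,384,000 - the "15 million+" territory.
```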
See the video and more over at Autodesk's Area user platform (though to see the video you have to be registered).


Brain-Computer Interface

OCZ - a modder company normally selling overclocking and cooling devices - has introduced a computer brain input interface. No, it's not a "coming soon" product - it is apparently available to buy pretty much now for a mere US $147.00. The device detects brainwaves, facial muscle movements and eye muscle movements to enhance your input possibilities. There is a great writeup from hothardware using it, and reading it makes me very itchy. This could very well be a great additional solution to some input woes with complex programs - especially complex programs that need fast reaction times, or programs where you need to push more buttons at once than you have fingers. Like VJ programs, for example... Sadly the device is Windows-only at the moment and mostly aimed at gamers. But I expect this to become a common form of input if it really works as described in the article.


Safari 4 - the end of the browser

A long, long time ago I wrote an article here on this very blog claiming that the browser is not the future of the internet. I was saying that modern webpages provide services, that services are best served with their own interface and their own usability, and that most of the time the browser as the interface framework is not the best presentational model. Well, in a stunning move two years after I wrote this, Apple is moving us closer to this reality with Safari 4. As I read today, Safari 4 can save a web application as its own application, without the need for the traditional browser interface around it. Technology-wise this is nothing new for Apple, as Dashboard has essentially been doing that since its introduction - but making this way of accessing webapps mainstream is sure dropping like a bomb. For reference, I also read that Mozilla is providing such a service already - but I had never found this information before, nor have I found an easy-to-use button for the user to turn a webpage into an application. I cannot say how glad I am to soon be freed from the square-box browser interface and to write apps for clients that really interface them with us - it brings usability to a much higher level. It makes the representation of data that much more focused. I will test this technology soon and report how it went later on - surely exciting, and I predict this will change the way we see the web rather sooner than later.


The Qingzang Tibet Railway earthquake prediction system failed?

I have been mulling over a problem for the last week since hearing about the China Sichuan Province earthquake. China has just completed the most ambitious railway line in the world, going from Golmud in Qinghai Province to Lhasa in Tibet. Ambitious, because it is the highest-running railway in the world and needs pressurized cabins so you don't get altitude sickness too badly (when going by bus from Golmud it is very certain that you get altitude sickness, and this route is not recommended for first-time visitors to Tibet who have never experienced altitude sickness).

From wikipedia:
The line includes the Tanggula Pass, at 5,072 m (16,640 feet) above sea level the world's highest rail track. The 1,338 m Fenghuoshan tunnel is the highest rail tunnel in the world, at 4,905 m above sea level. The 3,345-m Yangbajing tunnel is the longest tunnel on the line. It is 4,264 m above sea level, 80 kilometres north-west of Lhasa.
More than 960 km, or over 80% of the Golmud-Lhasa section, is at an altitude of more than 4,000 m. There are 675 bridges, totalling 159.88 km, and over half the length of the railway is laid on permafrost.

Now, when I first read about this railroad line I also read about an earthquake prediction system built into it (and reading the above paragraph you know why that is needed). Qinghai is a neighboring province of Sichuan, and the railroad actually uses the "fault line" mountains to climb to Lhasa. It seems the earthquake prediction system for that railroad utterly failed to predict the biggest earthquake China has seen in the last 50 years. Makes you feel all better riding on that train. I also have not heard any reports on whether the train line is still functioning - because when I was in Tibet I heard that Swiss engineers had turned down the offer to build the train, saying that even small tectonic shifts would be disastrous. There are no reports anywhere that I could find - either about the failed brand-new high-tech earthquake prediction system for the railway or about the fate of the railway itself - I would really like to know.

More on the earthquake prediction system here


Autodesk buys Realviz

Well, everyone ever complaining about the lack of development on the 3D tracking market can rejoice. Autodesk just bought out Realviz - known mostly for their Matchmover 3D tracking software, but recently also for their Stitcher VR panoramic stitching application (you know, shoot photos all around and get a QTVR stitch - the new hype in 3d animation at the moment) and their pretty new Movimento motion capturing solution that works with real camera input (instead of the expensive other mocap solution called Vicon that uses high-speed infrared cameras to capture body movement).
Seeing that Matchmover will not come as a standalone product anymore under the Autodesk brand gives hope that we will see this technology integrated into the two flagship Autodesk 3d programs - Max and Maya. For Maya that would finally replace the "Maya Live!" tool, which has not been updated for over 5 years (and is about the worst solution you can find if you want to do any serious 3d tracking).
I really think this is one of the first acquisitions I have read about in a long time that actually makes a lot of sense and will actually help move things forward - especially since Matchmover's interface and core program also lacked development, even though the underlying algorithms are said to be some of the best out there. I really look forward to having a 3d tracking solution that works inside Maya - the confusion with scale, different interpretations of how 3d cameras work, 3d data exchange that still has no real standard that actually works etc. have been driving me nuts. Integrated, uncomplicated 3d tracking working together with a simple and cheap motion capturing solution is one of the last core things missing before the whole 3d world can move to realtime rendering ;)

Read the press release here.

Bosch Spirit Level (Wasserwaage) for MacBook and MacBookPro

This is the greatest software invention since the word processor - well, almost. Firstly, why am I so excited? Well, I am REALLY bad with tools of any kind - I tend to be their arch nemesis. I lose them, I break them, I misuse them for lack of the right one. Tools and me just don't get along - add my two left hands, glue, paint, oil and sand to the mix and you know what the few World War II tools that I inherited from my granddad look like. But salvation has come at last - the era of fully digital tools - tools that are virtual but interface with the real world - kind of like "mind robots". I was extremely delighted to find out today that the German tool maker BOSCH has released an application (as in software application - a thing that I never lose, even over 15 years and beyond, those things that rarely break on my always-running computers). The application is a fully functioning spirit level - or Wasserwaage (water scale) as we Germans call it. It uses the sudden motion sensor in a MacBook or MacBookPro and has a display that looks like your good old level, with simulated air bubbles as well. And yes, it works - the installation is also surprisingly simple - via a Java web app that you can then instruct to write out a normal .app onto your local computer (because you ain't having much internet on most construction sites).
Now Apple just needs to release a rugged MacBook that withstands an actual construction site day without a scratch and we have a tool that ain't going missing so easily :)
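Under the hood such an app presumably just reads the accelerometer and turns the gravity vector into a tilt angle - here is a minimal sketch of the math (how Bosch actually reads the sensor I don't know; only the trigonometry is certain):

```python
import math

def tilt_degrees(ax, ay, az):
    """Tilt of the laptop's long axis against level, from one gravity sample.

    ax, ay, az are accelerometer readings in g. At rest the measured vector
    is gravity pointing straight down, so the angle of the ax component
    against the rest of the vector is the pitch of the surface.
    """
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

print(tilt_degrees(0.0, 0.0, -1.0))  # flat on the table: 0.0 degrees
print(tilt_degrees(1.0, 0.0, 0.0))   # standing on its edge: 90.0 degrees
```

Drawing the simulated air bubble then just means sliding it along the vial proportionally to that angle.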

You can get it at the official Bosch site (in German, but the download link is called "download", and a level is universal in language I guess ;). Oh, and it's free as in beer.


The End of the (traditional) Cellphone

Oh god, another "the end of" article from fALk, you might think... Well, here is the deal, and I have not seen this elsewhere yet, but I think I stand on good ground. Apple's iPhone SDK, as of yesterday, allows the development of applications that do Voice over IP - just not over the cellular network. You know what this means? Apple is slowly transitioning people away from the cellphone business into the (free) wifi world by allowing Voice over IP wherever possible - also to other iPhone users. Now imagine the iPhone has about the success that the iPod has, and all of a sudden every second person in the non-third-world is running around with a device that - when within reach of a wifi spot - can call the others (with all the great things that chat programs do - like seeing people's statuses etc.) - for free. Now, we all know free wifi is spotty at the moment, and I am not saying this will change right on the spot - but imagine, instead of wifi, wimax with a much bigger reach, and say FON hotspots with wimax, and you get the picture that in the not so distant future there might be no need for a cellphone network anymore - at least in rural areas this is already happening (wifi at home, wifi at work, wifi in the coffee shop). Heck, I even bet that Apple will see a HUGE demand for the iPod touch with the 2.0 firmware, because people could now CALL on the freaking thing (if it had a microphone).
This is an industry-moving change that will cost the carriers a lot of customers in the near future (I predict a move similar to the one from landline to cellphone, this time from cellphone to wiPhone(tm) - or, with all the gaming capabilities, maybe even the wiiPhone ;).


Enter the Holographic century

There have been hundreds of devices out in the last 100 years that all attempt to bring true 3d visuals to the masses. Now I tripped over the Cheoptics360 and I think this could be the real deal. While the website and the net in general are sparse on details of how it actually works, it surely looks like an R2D2-style holographic projection is what this machine does - but watch the videos (and remember the videos you see are 2d, so the 3d effect is not sooo visible - yet you can make out the floating object in front of all the backgrounds, which is a good indicator that this is actually working, and quietly enough (read: no rotating mirrors) to be installed in an airport lounge). And large enough to eventually give us almost immersive environments... I still suspect this to be the old mirror trick, but since it's visible from 360 degrees this can't be. Expect to see life-size game characters appear holographically in your living room in the next 15 years - until then enjoy the video... or visit the developer vizoo's website.


Prosthetic arms shape up

The Iraq war, with its tens of thousands of lost limbs, is creating a huge demand for prosthetics, and it seems with lots of money to be made the artificial limb industry is working hard to get this bionic arm thing working. The inventor of the failed Segway - the two-wheeled nobody-needs-it personal transportation device - tries his genius anew and creates the first robotic arm that can pick up grapes and bricks without destroying either. It gives sensory feedback (although crude) and has a plethora of control mechanisms (foot taps, nerve endings and muscles). Don't let the prosthetic overlords take over our earth!

Watch the video at spectrum online.

HoloDeckCube coming near you

A Dutch company called HoloCube is starting to market a similarly named device that lets you project movies in thin air. It's about 20 inches and geared towards the 3d advertising market (why? because it probably costs lots of money and therefore the advertising market is the only market it could survive on). Expect StarWars-HD-quality holograms at your next fair visit. But don't put your hand in one of these or it might get beamed to a 3rd dimension. (Oh, and of course in the video below you can NOT see the 3d effect, as the video itself is only 2d! So just imagine the videos floating in air - if you believe the company. It must be the hardest thing to market anything "3d" with 2d mediums.)


3rd Party WiiMote with location capable gyroscopes

My love for the WiiMote goes way beyond Rayman Raving Rabbids and other fun games, as I embrace the gyroscope-filled controller for next generation interfaces and 3d tracking in various forms. 3d tracking with the WiiMote has but one big problem: while it is quite great at getting the bank and roll data out of the white brick, it is only possible to know where it is by using infrared lights. Infrared lights tend to behave like real lights - they can get obscured, or not be in the right place when you need them. Sure, you can try to get the location of the WiiMote by integrating the accelerometers and gyroscopes, but the result - and I have tried that - is quite bad. You get what people call drift, very very badly, especially when you move around the room with the thing and attach a camera to it.
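Why that drift is unavoidable with cheap sensors is easy to demonstrate: position comes from integrating acceleration twice, so even tiny zero-mean sensor noise accumulates into an ever-growing position error. A toy simulation of mine (the noise level is an assumption, purely illustrative):

```python
import random

def position_drift(seconds, rate_hz=100, noise_g=0.01, seed=0):
    """Double-integrate pure sensor noise. The simulated controller is not
    moving at all, so any position other than 0 is accumulated error."""
    random.seed(seed)
    dt = 1.0 / rate_hz
    velocity = position = 0.0
    for _ in range(int(seconds * rate_hz)):
        accel = random.gauss(0.0, noise_g * 9.81)  # noise in m/s^2
        velocity += accel * dt
        position += velocity * dt
    return position

# The average error magnitude over a few runs keeps growing with time:
for t in (1, 10, 30):
    err = sum(abs(position_drift(t, seed=s)) for s in range(20)) / 20
    print(f"after {t:2d}s: ~{err:.3f} m of drift")
```

Orientation from gyros drifts too, but only with single integration - which is why the rotation data is usable while the position estimate wanders off within seconds unless an external reference (like the infrared sensor bar) pins it down.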
Now the Boston-based company Motus Corporation claims to have developed a WiiMote of its own that can be used without the infrared sensor bar and still get the location right. They developed it in the form of a samurai sword handle and made it primarily for golfers (they are known for golf training gear), but if any of their claims hold true this is the revolution within the sensor revolution. The gyro package - which sends new data every 30 milliseconds - is called Darwin (probably because the software to read out WiiMote data on the computer is DarwiinRemote?) and seems semi-compatible with the WiiMote (the protocol is the same but different data is available). The firmware has error correction built in, and my uneducated guess is that they slapped another five or so accelerometers in there to get a more accurate average reading. Surprisingly, the price is in the range of not being overly gold encrusted (it's going to be somewhere between $79 and $99), and if the claims hold up it's a bargain, because then you can build a real realtime camera tracking solution and, with a couple of these, a motion capturing solution that rivals about everything out there in the $10k+ price range. Let's hope they find a distributor soon (or figure out that these days you can self distribute this and still be successful).
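Just how badly accelerometer-only position estimates drift is easy to demonstrate. Here is a minimal Python sketch (my own illustration, not related to any actual WiiMote or Darwin code) that double-integrates simulated readings from a perfectly stationary sensor with a tiny calibration bias - the position error grows quadratically with time:

```python
import random

def integrate_position(accels, dt):
    """Double-integrate acceleration samples into a position estimate."""
    velocity, position = 0.0, 0.0
    trajectory = []
    for a in accels:
        velocity += a * dt
        position += velocity * dt
        trajectory.append(position)
    return trajectory

# A stationary sensor should report zero acceleration, but a tiny
# calibration bias (here 0.01 m/s^2) plus noise is typical of cheap MEMS parts.
random.seed(0)
dt = 0.01  # 100 Hz sampling
samples = [0.01 + random.gauss(0, 0.05) for _ in range(1000)]  # 10 seconds
traj = integrate_position(samples, dt)

print(f"error after 1s:  {abs(traj[99]):.4f} m")
print(f"error after 10s: {abs(traj[999]):.4f} m")
```

Even a hundredth of a g of uncorrected bias puts the estimate roughly half a metre off after ten seconds, which is why extra reference signals (infrared dots, or whatever Motus is doing) are needed to keep the estimate pinned down.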


stat pr0n: Apple's Awesome Market Share Gains

Only believe statistics that you skewed yourself, but... you know I love statistics somehow... and while I was looking around for the current market share of browsers (just to see a trend when it comes to JavaScript) I stumbled across a little tidbit in the "market share by Net Applications" report (the article can not be linked directly - stupid of them, so they don't get any traffic from here - but you can google them and find it easily).

Market share all MacLines (excluding iPhone):
November 2007: 6.80%
December 2007: 7.31%

not bad (when you think they were down to less than 3% just six years ago), but the astonishing thing comes next:

December 30-31: 8.01%

that's a market share rise of 0.7% over the holiday season. It also means Apple is closing in on the big 10% very soon - I thought I would never see that day (hey, I was an Apple user when there was no hope left, so please bear with me and my over-enjoyment at seeing the better platform get its big bite out of the computer industry fruit).

Record high quality MP4 without a Computer for cheap

I normally don't like to comment on products in general, but I know that some fellow VJs have this big problem and this is just the perfect solution. The problem? Recording a 1+ hour set.

There are some solutions already available, but they are all bad. Let's recap them:

1. Recording to a DV Camera
The Good:
• Good quality
• small formfactor

The Bad:
• Expensive equipment
• Camera record head has a limited life span, and if you record a two hour set every week the camera will last about 4 months
• maximum record time per tape is 120 minutes at reduced quality, 90 minutes at full DV quality
• for further distribution you have to digitize the footage back in (losing 90 minutes in the process)

2. Recording with a DVD Recorder
The Good:
• DVD recorders are quite cheap
• you have a DVD in your hand at the end of it

The Bad:
• I have yet to see one DVD recorder that just works
• heavy bassy music will make the lens jump and leave you with a broken DVD and mostly no way of recovering your footage - this happens VERY frequently
• putting it on the web (gooTube) requires you to rip the DVD - a long process.

3. Recording to a spare Laptop
The Good:
• Full Quality/Codec/Format Control
• you have it on a hard drive and can just copy it, move it around, recode it, whatever

The Bad:
• EXPENSIVE - who has earned the money to get a spare laptop just to do video recording? A show of hands, please
• Takes up lots of space
• cable mess
• spilled drinks

As we can see, all these solutions are far from perfect and mostly stop gap measures on the way to a truly portable, cheap recording medium. Well, a company that I once had a lot of love for - but that then got bought up by Avid and since then has not put out much of interest - has the perfect device. The company I am talking about is Pinnacle; people who did video editing in the late 90s probably know it. They used to produce professional capture cards and computer/video gear of high quality. Their new product is called - rather uninspiringly - the PINNACLE VIDEO TRANSFER. It's geared towards consumers but is just the perfect thing for the VJ warrior. It has video/audio inputs (S-Video, composite video and stereo audio) and a USB port. You feed it the video, connect a USB2 hard drive, a PSP, an iPod or a memory stick to the USB port, and it records your video to the mass storage of your choice as MPEG-4, in a user customizable quality of up to 720x576 at 25fps (full PAL video, or NTSC if that is what you need). The thing without a drive costs a kidney saving 129,00 Euro, or as a bundle with a Western Digital USB2 hard drive you get it for 199,00 Euro.
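A quick back-of-envelope capacity calculation in Python - note that the ~2 Mbit/s MPEG-4 video bitrate is my own assumption for decent SD quality, since the device's actual bitrates aren't stated here:

```python
def recording_hours(disk_gb, video_kbps, audio_kbps=128):
    """Rough recording capacity for a given disk size and combined bitrate."""
    total_bits_per_sec = (video_kbps + audio_kbps) * 1000
    disk_bits = disk_gb * 1e9 * 8  # drive makers count in decimal gigabytes
    return disk_bits / total_bits_per_sec / 3600

# Assuming roughly 2 Mbit/s video plus 128 kbit/s audio at 720x576:
hours = recording_hours(1000, 2000)
print(f"1 TB at ~2 Mbit/s: {hours:.0f} hours (~{hours / 24:.0f} days)")
```

Under that assumption a 1 TB drive holds on the order of a thousand hours - so "a week of footage" is, if anything, a very conservative estimate.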

The Good:

• Storage only limited by the connected hard drive (1TB of MPEG-4 should get you about a week of footage)
• Small formfactor
• relatively cheap
• portable
• relatively high quality (better than MPEG-2 DVD for sure! Not quite DV, but close)
• low failure rate (especially when used with an SSD or memory stick -> no bass-shaken heads jumping around)

The Bad (oh there always has to be one):
• Converting it to something else is about as painful as converting a DVD
• Converting it to something else brings a noticeable quality loss

But since no one does DVDs anymore anyway, you just set it to the quality you need for your website and there you go - no need for conversion ;)

This is what a VJ should have in their bag if they want to record their sets - the ultimate solution for now.

You can get more information here and buy it here


Contact Lenses as Screens - a first step in the lab

Again one of those "wow yeah" moments from just seeing the picture. After all these ultra ugly "screens inside sunglasses" products coming out in the last weeks, it's refreshing that technology still holds the potential of actually hiding itself in the long run. The University of Washington has developed a process to print small circuitry onto a flexible, eye-safe material. The mother-eye-board can also emit light.
Good great god - imagine having an ultra small spy cam that can see in the dark (or two for stereo vision) while you just wear these contact lenses. The only giveaway would be that your eyes would glow an eerie bluish from the OLEDs - you might need to wear sunglasses after all so you aren't accidentally shot by people who think you are an alien.
The press release forgets to mention how the things are powered (inductive power right on your head can't be that healthy) or how a video feed could be fed to them, but I am sure we will see this down the road - the potential (also military, hence big funding I guess) is overwhelmingly cool.
Have a small scale spy fly hovering over you and providing you with a fly's view of your surrounding area, or quickly search for a missing item while sitting at your desk... ah, endless possibilities...

read more here:


Nvidia buys Mental Images -> Realtime realistic rendering to come soon.

Ugh... Berlin company Mental Images - mostly known for its Mental Ray rendering technology, which is included in the three biggest 3D applications out there (Max, Maya, Softimage) and used in a lot of major film productions throughout the world - has been bought by NVIDIA, one of the two big graphics card companies (not only that, but you get the point).

The combination of mental images and NVIDIA united some of the greatest talents in the visual computing industry. This strategic combination enables the development of tools and technologies to advance the state of visualization. These solutions are optimized for next generation computing architectures and create new product categories for both hardware and software solutions.

This means only one thing: realtime raytracing - or, less technically, realtime super duper photorealism - is around the corner very soon now; otherwise this takeover would not make sense. I love Mental Ray for its realism - the rendertimes are unbelievably high at the moment, even for small scenes. Now that NVIDIA has snatched the code, I expect they will put the turbo into the rendering software via their graphics cards, which have lately become more general purpose processing devices than graphics cards.
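At its core a raytracer does nothing more exotic than intersecting rays with geometry, millions of times per frame, which is exactly the kind of embarrassingly parallel workload graphics cards are built for. A minimal ray-sphere intersection test in Python (purely illustrative, nothing to do with Mental Ray's actual code):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along the ray, or None."""
    # Solve |o + t*d - c|^2 = r^2, a quadratic in t.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Ray from the origin down the z axis toward a unit sphere at z=5:
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(t)  # nearest intersection at t=4.0
```

A full renderer repeats a test like this for every pixel, every object and every bounce - independent work per pixel, which is why moving it onto GPU hardware promises such a speedup.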

This is good. :) Hail the day when there are no rendering times anymore.

NVidia Press release


Movable Type is free

As noted before, Movable Type has been put under the GPL (GNU General Public License), which means it's open source and free as in speech, and the bare bones version is also free as in beer. We have used Movable Type from the beginning and just couldn't - wouldn't want to - migrate to a different platform, as it always seemed too much hassle. We also coughed up the small fee for a multisite blog (that was a point where we almost switched). This is all past us now, and it seems we have bet on a long distance horse with our blogging software.
Now I think a company developing the software and basing its business model not on selling the software but on selling distributions and support will be the business model for all software rather sooner than later (quote me in 10 years). With a truly open source approach you have tons of helping hands in your code to make it better, more stable, faster and more secure, and to saturate it with plugged-in features at no cost. You as a company still know the software best (you coded it from the start), so you are probably the best to help big corporations install it. Meanwhile the free nature of your software allows anyone to get used to it and train themselves on it, generating a legion of enthusiasts who in return will advocate your software over closed source alternatives, giving your company a sustainable income - everyone is happy. This approach is also, in my opinion, much better than the leaderless community approach of free software (for example WordPress), with no real direction and too many side roads leading to stagnation or confusion among users (Drupal is also one of those beasts, but that's about to change too).
As said, I think sooner or later most of the software industry will come around to this business model. I am very happy that I don't need to think about migrating the two active blogs.


Cheap Motion Capture Device to hit market soon

Wow. Sensors are SOOOOO ubercool. Now the motion capture monopoly that is Vicon and its associated studios might fall - very soon. Researchers from MIT (USA) and MERL have created a system using off-the-shelf motion sensors, such as the ones found in the Wii controller, plus ultrasonic sound to create a $3,000 motion capturing system that already rivals the Vicon - and it's only an alpha alpha version. They say the price could easily fall to a "couple hundred bucks". For comparison: a Vicon system that is actually usable costs upward of $250,000 - and even that is in no way close to perfect, as every visual motion tracking system has to deal with "blind spots" - tracking points that are invisible for a couple of frames because they are obscured by body parts or other bodies. The problems come when these points then pop up in a different place after having been obscured, leading to absolutely unusable tracking data that has to be cleaned in a very tedious manual labour process.
The Vicon system has 5+ high-resolution, high-speed infrared cameras (we are talking about 2000 lines of resolution and 200 frames per second here - you know where the price comes from), so this system will not fall in price anytime soon.
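The ultrasonic half of such a system rests on a very simple principle: sound travels through air at a known speed, so the time a ping needs to reach a receiver directly gives a distance, and distances to several receivers at known positions pin down a location that can correct the accelerometer drift. A trivial sketch (the numbers are illustrative, not taken from the MIT/MERL system):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def distance_from_ping(time_of_flight_s):
    """Convert a one-way ultrasonic time of flight into a distance in metres."""
    return SPEED_OF_SOUND * time_of_flight_s

# A ping that takes 5 ms to reach a receiver implies a range of about 1.7 m:
print(f"{distance_from_ping(0.005):.3f} m")
```

With three or more such ranges to known reference points you can trilaterate an absolute position, which is exactly the kind of drift-free anchor that pure inertial sensing lacks.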

So, as a 3D artist, I cannot stress enough how great it would be to just slap a <$500 motion capture system on any actor, have them do their dance and then have ready-to-use data. Especially since this can be employed outside a studio - on set, so to say!

This would also make the Wii look like a stone age gaming device. Imagine playing a boxing game with one of these.....

read more about it at the new scientist

or watch the YouTube video explaining the system.