Something amazing came through my ticker today - something that is a game changer and that, together with another technology, will change the way I work and make it much more enjoyable.
First some basics for those who have no clue what I am talking about. There are basically six steps to get to a final 3D picture.
1. Modelling: Multiple approaches get you to a mesh model that - in the end - mostly consists of polygons. You can scan, you can push little points around in 3D space, you can use mathematical formulas to create, subtract or add simple forms, or use other formulas to round edges, revolve lines or extrude them. The end product is almost always a polygon mesh with x amount of polygons - the more you have, the higher the resolution of the model and the closer you can look at it. About ten years ago a really nice way to model high-resolution meshes came into existence, called subdivision surfaces: you model a coarse-resolution model, which is much easier to understand and alter, and then generate a high-res model out of it. That was the first real game changer in the 3D industry and the reason character modelling became so "easy" and so many people started making so many great models.
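If you want to feel the coarse-to-fine subdivision idea in code, here is a tiny sketch of mine (not from any actual 3D package) using Chaikin corner cutting, the one-dimensional cousin of subdivision surfaces: each pass cuts the corners off a coarse control polygon, and the result converges toward a smooth curve - just like a coarse cage mesh converges toward a smooth surface:

```python
# Chaikin corner cutting: each pass replaces every edge of the control
# polygon with two new points at 1/4 and 3/4 along it, so a coarse
# polygon gets refined toward a smooth curve - the 1-D analogue of
# refining a coarse cage mesh into a smooth subdivision surface.
def chaikin(points, passes=1):
    for _ in range(passes):
        refined = []
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = refined
    return points

# Three coarse control points become a dense, smooth-looking polyline:
coarse = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
print(len(chaikin(coarse, passes=3)))  # -> 10
```

Each pass roughly doubles the point count, which is exactly the trade the modeller makes: edit few points, render many.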
2. UV Preparation: A bare model made of triangles looks less than realistic, of course, so you need to tell the program what kind of material covers the model. A lot of options are available here, but especially for film work and characters you want something realistic, and you get that by starting from something realistic - like a photo - and altering it so that it fits on your model (or you paint it from scratch). For such a picture to be put onto the model, you need to flatten the model out into a two-dimensional surface. Imagine taking a dead animal, skinning it, and laying the hide out flat. Like so:
(there is actually a program that stretches the "hides" pretty similarly to this very analog process). It's a very dull process on a complex model - mostly you have to take your nice model apart and do all kinds of voodoo to get it artifact-free. No fun and certainly not creative.
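To give you an idea what the mapping actually is, here is the simplest possible "unwrap" - a planar projection I sketched up (purely illustrative; real unwrappers for complex models are far more involved): drop one axis of each vertex and normalize the remaining two into the 0..1 texture square. Anything more curved than a wall distorts badly under this, which is where the manual voodoo begins:

```python
# Planar projection: the most primitive UV "unwrap" imaginable.
# Drop the z axis, then normalise x and y into the 0..1 texture square.
# Works for a flat wall; distorts horribly on anything curved.
def planar_uv(vertices):
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    return [((x - min_x) / (max_x - min_x),
             (y - min_y) / (max_y - min_y)) for x, y, _ in vertices]

verts = [(0.0, 0.0, 1.0), (2.0, 0.0, 0.5), (2.0, 2.0, 0.0), (0.0, 2.0, 1.0)]
print(planar_uv(verts))  # -> [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```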
3. Texturing: Once you have your nice model with a more or less nice UV map, you start to apply your texture - photographic, programmatic or a mixture of both. There is a lot of "fun" to be had here as you add little bumps and tell the software how shiny, reflective and refractive the model should be, plus lots of other things I don't really want to go into - but it's a nice step in general.
4. Light & Camera: Without light there wouldn't be anything visible. So you set up some virtual lights, which act and react just like the different kinds of light sources you find in reality, plus some other tricks that don't exist in reality but can add to a realistic picture. You also set up a camera - your virtual eye - which again acts (almost) just like a photographic camera in real life. Both a creative and fun process.
5. Animation: Then you animate your model - push objects around, apply physics, deform the mesh. You can either do that by hand or get animation data from motion capture - you might have seen those people in black suits with ping-pong balls attached to them, or faces with dots all over them. This step is both fun and frustrating, with hand-made and captured data alike. The human eye is so susceptible to small problems in movement that not even a certain $500 million production can fully perfect this step.
6. Rendering: Then comes the process that is mostly free of human intervention but not free of hassles and frustration: the rendering. On Avatar this took up to 50 hours per frame on an ordinary computer. At 24-25 frames per second (or double that for stereoscopic 3D) you get an idea how much processing power is needed. And if you make a mistake - render it all over again. Rendering is also a complex mathematical problem and there are bound to be errors in the software, so prepare for the worst here.
Now why am I telling you all this? Well, one step, it seems, has just been eliminated. Progress in the field of visual effects is very erratic - you have 2-4 years of no progress at all, and then all of a sudden a floodgate opens and something (or several things) changes dramatically. I would say we had a quiet period over the last 2-4 years, mostly because the development of the really cool stuff happened "in house": really smart programmers were hired by the big VFX companies to build certain things for certain needs and problems. A lot of problems in the above pipeline have already been solved, I think, but have never seen the light of the broader world and instead stayed - and sometimes even died - within certain companies. It's really frustrating when the software companies struggle with the most basic problems (plagued by slow sales and a bad economy), and then you see Pirates of the Caribbean, for example, where they completely figured out how to motion capture live actors (record their movement) on set with no real special equipment - and that technology is still only available behind the locked doors of Industrial Light & Magic. For me as an artist, that is a valuable tool I could do cool stuff with, but I can't get my hands on it because of corporate policies.
So it's REALLY amazing to see that Disney - the intellectual-property-hoarding company for whom copyright law has been rewritten at least once - is releasing a software/API/file standard as open source as of today. Code that promises no less than to completely eliminate step two of my list above. In their own words, they have already produced one short animation, and are in the middle of one full feature animation, completely without doing any UV mapping. I can only try to explain the joy this brings me. UV mapping has been my biggest hurdle to date - I never really mastered it - I hated it. It's such a painstakingly long, tedious process. I normally used every workaround I could find to avoid doing UV mapping. It's crazy to think they finally figured out a way to get there without it, and I think this will drop like a bomb into every 3D app and supporting 3D app on the market within a year (wishful thinking, I know) - at least I can hope it does, and I hope that Blender, Autodesk and SideFX are listening very closely.
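I have no insight into Disney's actual code, but the core idea as I understand it can be sketched in a few lines: give every face of the mesh its own tiny texture grid, addressed by (face id, local u, v). With no shared UV atlas there is nothing to unwrap. The class below is my own toy illustration of that addressing scheme, not their API:

```python
# Per-face texturing (sketch of the idea, not Disney's actual API):
# each face owns its own small grid of texels, and a sample is looked
# up by (face_id, u, v) with u and v local to that face - so the model
# never needs to be flattened into a shared 2-D UV layout.
class PerFaceTexture:
    def __init__(self, num_faces, res=4):
        self.res = res
        # one res x res grid of texels per face, initialised to 0.0
        self.faces = [[[0.0] * res for _ in range(res)]
                      for _ in range(num_faces)]

    def _cell(self, u, v):
        i = min(int(u * self.res), self.res - 1)
        j = min(int(v * self.res), self.res - 1)
        return i, j

    def write(self, face_id, u, v, value):
        i, j = self._cell(u, v)
        self.faces[face_id][j][i] = value

    def read(self, face_id, u, v):
        i, j = self._cell(u, v)
        return self.faces[face_id][j][i]

tex = PerFaceTexture(num_faces=2)
tex.write(0, 0.1, 0.9, 1.0)      # paint on face 0, in its own local coords
print(tex.read(0, 0.1, 0.9))     # -> 1.0
```

A painter (or a paint tool) writes straight onto faces; the neighbouring-face filtering that makes seams invisible is the clever part of the real system and is left out here.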
Combine that with the recent advancement in render technology of using OpenCL (developed and released as part of Snow Leopard by Apple and made an open standard, with ports for Linux and Windows now available) to render partially on the graphics card (GPU) - which speeds up rendering by up to 50 times. That means a frame from Avatar would take only one hour to render instead of 50 - or, in a more realistic case, the current render time for an HD shot here of 2-5 minutes on average gets cut down to a handful of seconds up to a minute, which would actually make rendering a fun part of the process.
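The arithmetic behind those numbers, assuming the full claimed 50x factor applies (real-world gains vary a lot by renderer and scene):

```python
# Back-of-the-envelope check of the GPU speedup numbers quoted above,
# assuming the full 50x factor.
def gpu_render_time(cpu_seconds, speedup=50):
    return cpu_seconds / speedup

avatar_frame = 50 * 3600                       # ~50 CPU-hours per frame
print(gpu_render_time(avatar_frame) / 3600)    # -> 1.0 (hour per frame)

hd_frame = 5 * 60                              # a 5-minute HD frame here
print(gpu_render_time(hd_frame))               # -> 6.0 (seconds per frame)
```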
Now we all know who is behind both companies releasing and opening this up: the mighty Steve Jobs. You could almost say there is an agenda to make 3D a far more pleasurable creative craft than it currently is - maybe Mr. Jobs wants us all to model and render amazing virtual worlds to inhabit where he can play god ;)
Good times indeed.
What's left? Well, animation is still not fully worked out, but with muscle simulation and easy face and bone setups it has become easier over the past years - it's still a hideously tedious process to make it look right, and I don't know if there will ever be a solution as revolutionary as Ptex. Motion sensors might help a bit in the near future, as might techniques that make models physically accurate, so that things can't pass through each other and gravity is applied automatically. High-quality texture maps that hold up to very, very close scrutiny are still memory hogs and burn down the most powerful workstations. The rest will get better with faster, bigger computers, as always (like all the nice lighting models that are almost unusable in production today because they take too long to render). Generally, with UV mapping and rendering problems out of the picture, I might get back into 3D much, much more.
Disclaimer: I have been doing 3D since 1992, when I raytraced a 320x240 scene of two lamps on an Amiga 2000 - it took 2 days to render. My first animation in 1993 took a month to render. Then I switched (exclusively) to Macintosh in 1995 and did 3D there for a while. It was so frustrating that I never made a serious effort to get really good at it - I still do it alongside compositing / VFX supervision, but rather as an add-on and for previz than as main work.
I love original ideas that circumvent the traditional structures, and if you do it with style and a message I am all-out behind it.
There is this screenwriter who seems very fed up with Hollywood and "The Industry" and wants to make his own movie. To finance his endeavor he sells self-made lemonade with a great "disclaimer" text on the label that explains what he wants to do and how much he hates the corporate Hollywood structure.
THANK YOU FOR INVESTING IN MY MOVIE!
My name is Matthew and I am one of the best screenwriters in Hollywood. Unfortunately, the television networks and movie studios don't know that yet. As it stands, the decision of which films get produced are left in the hands of emotionally-immature, substance-abusing ex-lawyers who live in dread paranoia that everyone in the universe is out to get them. They spend the bulk of their time spying on their fellow executives, composing nasty counter-intelligence rumors and spreading them through their network of FA-BU-LOUS, yet cunning assistants.
Much of the actual work, like "reading" is left to a gaggle of twenty-something interns who are all the product of George W. Bush's "No Child Left Behind" policy. To these bimbos, nothing in the world existed before 1995, and the most reading they've done has been through text messages. They believe that good writing is something that fits into 160 characters, all performed with the thumbs. :)LOL!
Needless to say, I'm making my own damn movie and you just helped! All of the profits from this amazingly refreshing drink are going into my independent film. Why? Because I believe in the spirit of America - CONSUME AND DESTROY! POOR=BAD/RICH=GOOD! WAR IS PEACE! YOU-ESS-AY! YOU-ESS-AY! YEE-HAW!
Any-hoo, if you work in "THE INDUSTRY" as a common below-the-line slob and would like to work on my film for less than you're worth for no other reason but to satisfy my giant ego, send your resume to: firstname.lastname@example.org.
If you're a producer with a distribution deal, somewhat sober, and capable of actually reading a screenplay by yourself, shoot an email to me as well. I'll be happy to send a script to you along with your stupid submission release agreement boilerplate wank-rag.
If you are an actor, congratulations on making it this far. It's a lot of words. Who's a good boy? You! And you are very special. Plus, you serve specials at the restaurant. Special food served by special people to special people. Okay, I admit it. I'm just jealous because you are better looking than me and get all the hotties. Girls who go for me are all smart 'n' junk. Plus, they sag. And you're in SAG. Isn't that special?!
Agents, entertainment lawyers, managers and all other Pimps of The Antichrist can do us all a favor by simply killing yourselves. If you can, try to attempt a single moment of original, creative thought by finding an entertaining way to do it. Like performing seppuku with a champagne flute during the lunch rush at The Ivy. Or hang yourself from one of "O's" in the Hollywood sign with a noose made from your Kabbalah strings and rubber cancer-awareness bracelets. Either way, die bloodsucker! Die!
His lemonade is made up of the following:
A REFRESHING BLEND OF WATER, CERTIFIED ORGANIC LIMES, CERTIFIED ORGANIC CANE SUGAR, AND CERTIFIED ORGANIC BASIL.
Sounds delicious. You can buy a 1-quart bottle for $5 or a 10-bottle crate for $30.
Things that used to cost around a million bucks just a short time ago seem to be available for almost nothing these days - especially if you roll your own. Such is the case with 3D scanners, it seems. While only 5 years ago it was inconceivable to even dream about owning one, things started to get interesting 2 years ago when prices dropped into the sub-$5000 range and the first DIY open source 3D scanners appeared on the ether. Now we drop into the sub-$300 range with a laser 3D scanner made out of the Lego NXT system. Oh, good are the times.
via make blog.
Because I am a Berliner and I love subculture, especially when it is at its best - and it is at its best when it's illegal, wild and makes a statement. I must have missed the Reclaim Jannowitzbrücke event last year, but it looks like a lot of fun. Flashmob meets the Berlin electronic music scene meets "we make everything shiny and expensive" conservatism. I love it - it's rumored to happen again this year - I'll keep my ears wide open.
You know, the state where I live saw the death and the rebirth of one of the coolest vehicles ever built by humans (besides the bicycle). When the Hindenburg crashed, the era of airships came to an unforeseen end - even though the concepts for future airships had everything you could ever want: slow, quiet, low-fuel flight across the world. The problem with the Hindenburg was of course the wrong filling - you don't put huge quantities of a highly explosive gas above people, wrap it in flammable fabric and install sparking engines beneath it. The consequence was that the only airships you have seen since then are the small blimps mainly used for advertisements and low-altitude tourist tours (and very few at that). About 10 years ago, someone here secured lots of funds from the government to revive the airship in all its glory. Sadly the guy was not good at business, and besides building the biggest hall mankind ever built - for the construction of the airships - nothing came out of the endeavor except the dream that airships are actually a really nice way to travel and transport goods (if you leave out the highly explosive hydrogen, of course). The concepts even showed it was feasible to transport as much freight as trains do, but at much lower environmental cost. Now the construction hall is a fake rainforest, burning through so much energy that you would need another rainforest to offset the CO2 - and it's horribly sterile, from what I heard.
So by now you should be aware that I love airships. To my delight, Inhabitat - my favorite "green" blog - has posted an article about a Zeppelin that could be straight out of a sci-fi movie - except that it exists and roams the earth pedal-powered (so it combines the two most amazing vehicles ever created by mankind ;) in a zen, slow, smooth kind of way. It can land anywhere, with no infrastructure needed.
The caveat? Helium - the gas that fills the envelope.
Although the gas is the second most abundant element in the universe, it is a relatively rare and finite resource on Earth and must be extracted via low temperature gas liquefaction or recovered from natural gas.
There always is a problem... Nonetheless, there is hope that this might catch on and that we find a synthetic gas that can be had more easily!
More pictures here.
I have voiced a lot of disdain for the closed-source, proprietary, slow, resource-wasting crap that is Flash. Flash is not good for video playback (for a number of reasons everyone but Google would admit to), and it's also quite dumb to jail images inside Flash containers - yet both are common practice around the net (Flickr, YouTube). Both will be part of an ugly past once HTML5 hits the street and moves toward the 50% usage point. SVG and the video, animation and audio tags are just too cool to pass up as a developer - and it's all open and standard and compliant and such.
Yet there was one thing where Flash was until recently considered the only viable option: browser-based games and interaction projects. Well, in the offline world there is a programming language called Processing that is used to make games, physics simulations, interaction models and so on. It's a breed of its own among programming languages - quite easy to learn and follow, and very powerful in what you can create - but unless you are a Java fetishist there was no way to run Processing stuff inside the browser.
Take a small aircraft and a good camera, fly to a high altitude, tip the nose of your aircraft downward 90°, let go of your steering wheel, slowly take out your camera and dial in all the right settings (while the plane accelerates downward to higher and higher speeds), and at just the right moment - when the framing is right and the sun is in the right position - breathe in and push the trigger. Then put the camera safely away, take back the steering wheel and pull up the plane just shortly before crashing headlong into an amusement park. This is the life of architect and photographer Alex MacLean (attention: heavy Flash site with no deep links - go to "preview new book" to see the pics), who might need some very good meditation techniques to endure this stress at any level. But the outcome is really cool photos of American cities and landscapes and the destruction the American population brings to its own land (purple earth?). The overgrown cars in the forest are interesting not from an aesthetic standpoint but because this is a common thing to do in the US. When I was in South Dakota, the forest behind the house was littered with really great 70s cars that actually still worked if you brought along a full battery to start them. This is not good for the environment - at some point the oil will leak into the water system, especially in the one shown here. Anyway, the photos are mesmerizing and amazing and a warning sign (the suburbs - omfg - who wants to live there?).
Brought to my attention by my parents (yes this makes me proud of them).
The first electronic musical instrument ever, and the only instrument played without touching it? The Theremin. Now, in the future, you won't need antennas - you can make it wireless and use fancy light - brought to you, of course, by the awesomeness of the Wii controller (has there ever been a piece of computing hardware this versatile while being that cheap?)
Is it not the coolest interface ever for a 303?
Alex Dragulescu is making automatically generated architecture - his data input is spam email. Keywords in these spam mails are used to generate planes in three-dimensional space.
Well, I have a thing for bamboo paper umbrellas - not that I would carry one myself, but I like them for being a canvas with practical use, and girls tend to look chic with them. Now, on top of all that, they can apparently act as an ultra-loud, big badass speaker for your iPod or other portable music player - immersing you and your friends in a rainless cloud of music and disturbing passersby with sounds that probably just don't want to fit under a 2000-year-old invention, creating that culture-shock moment for the granny next door.
Oh, and wouldn't it be lovely to have an outdoor party with these? Your guests stay dry, the noise does not spread to the neighbors, and everyone has the best musical position. I guess permanent fixtures would be needed so that you have both hands free for other stuff.
If there is enough interest it's going to be produced - for $100... via Gizmodo
John Williams is the composer of what are arguably the most successful film scores in Hollywood of the last 40 years (Star Wars, Indiana Jones, Jurassic Park etc. pp.). This guy, who goes by the YouTube name Corey Vidal, made an a cappella tribute with the help of the comedy group moosebutter. Watch it, it's an amazing idea.
via Tim's twitterstream.
I almost passed over it when it appeared on mogreens blog, but then I clicked the play link and was blown away. This must rank as one of the best mash-up videos of all time. And it's more - this has the potential to be played live. If you are a VJ and you are using samples, take notice of this video. It would be Live Cinema at its best: message, story line, style, energy. Did I say it fucking rocks?
The word is officially out, so I can talk about it publicly. Project Blinkenlights is awakening again, and it's bigger and more badass than ever. The Blinkenlights crew will turn Toronto's City Hall in Canada into the biggest analog low-res giant dual screen on earth. As seen before in Berlin and Paris, every window turns into a pixel with 16 steps of brightness, sustaining an impressive 30-frames-per-second picture feed across two buildings. Classic computer games, crazy controllers (*cough*iPhone*cough*Wii*cough*), love letters and a custom-tailored live VJ performance on the buildings are all in play.
For the VJs out there, it is maybe of interest that a lot of the picture-making process is driven through Quartz Composer - there will even be an official Blinkenlights Quartz Composer plugin soon. That means not only can you preview your graphics in Quartz Composer to see how they look on the building (3D is in the works), but you can actually use Quartz Composer to stream videos to the blinkenserver, which then serves them to the building (30 fps, I mentioned, right?).
The installation will light up in mid-to-late September and stay on for about 2-3 weeks.
As mentioned, I will do a live performance on the buildings on the 4th of October - the night of the Nuit Blanche - standing on Nathan Phillips Square in front of City Hall, with music and all.
There will be a live stream of the feed that goes to the building, and stay tuned here, as I will be "live blogging" as much of the installation setup and tryout as my time permits. It's wicked and monumental, and I am proud to be on the Blinkenlights team.
(thanks Tim for putting trust in me).
There have been hundreds of devices over the last 100 years that all attempted to bring true 3D visuals to the masses. Now I tripped over the Cheoptics360, and I think this could be the real deal. While the website - and the net in general - is sparse on details of how it actually works, it surely looks like an R2-D2-style holographic projection is what this machine does. Watch the videos (and remember that the videos you see are 2D, so the 3D effect is not sooo visible - yet you can make out the floating object in front of all the backgrounds, which is a good indicator that this is actually working). It is quiet enough (read: no rotating mirrors) to be installed in an airport lounge, and large enough to eventually give us almost immersive environments... I would suspect this to be the old mirror trick, but since it's visible from 360 degrees, that can't be it. Expect to see life-size game characters appear holographically in your living room within the next 15 years - until then enjoy the video... or visit the developer vizoo's website.
Well, if you didn't like the iPhone until now, you might start liking it, as someone has hacked the thing to record video - at the full resolution of the camera - that's 2.9 megapixels of video, or roughly HDV resolution. No word on the compression algorithm or codec, but at the moment the app can record at 10 fps, and when finished, at 45 fps! That's more than any camera below 5000 Euro can do at the moment (at that resolution with that framerate). Somehow the iPhone just became cheap.
The restraining factor is the onboard memory - with just 8 GB it's only 5 seconds of fun at the moment.
http://hollywoodstory.tistory.com/ has the link to the Drunkenbass app for your unlocked iPhone to record video at extreme resolutions. If anyone has an iPhone, please tell me whether this is uncompressed?!!
UPDATE: Link above links to correct article now!
Wow. Sensors are SOOOOO ubercool. Now the motion capture monopoly that is Vicon and its associated studios might fall - very soon. Researchers from MIT and MERL have created a system using off-the-shelf motion sensors, like the one found in the Wii controller, plus ultrasonic sound, to build a $3000 motion capturing system that already rivals the Vicon - and it's only an alpha-alpha version. They say the price could easily fall to a "couple hundred bucks". For comparison: a Vicon system that is actually usable costs upward of $250,000 - and even that is in no way close to perfect, as every visual motion tracking system has to deal with "blind spots" - tracking points that are invisible for a couple of frames because they are obscured by body parts or other bodies. The problems come when these points pop up in a different place after they were obscured, leading to absolutely unusable tracking data that has to be cleaned up in a very tedious manual labour process.
The Vicon system has 5+ high-resolution, high-speed infrared cameras (we are talking about 2000 lines of resolution and 200 frames per second here - you see where the price comes from), so that system will not fall in price anytime soon.
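I don't know the researchers' actual code, but the blending idea behind sensor-plus-ultrasound capture can be sketched with a toy one-dimensional complementary filter (my own illustration, with made-up numbers): the inertial estimate is smooth but drifts as you integrate it, the ultrasonic fix is absolute but coarse, and mixing the two keeps the drift bounded instead of growing without limit:

```python
# Toy complementary filter: blend a drifting inertial position estimate
# with an absolute (but coarse) ultrasonic fix. The "trust" weight says
# how strongly each fix pulls the estimate back toward ground truth.
def fuse(inertial_estimate, ultrasonic_fix, trust=0.1):
    return (1 - trust) * inertial_estimate + trust * ultrasonic_fix

position = 0.0        # our running estimate
true_position = 1.0   # what the ultrasonic ranging keeps reporting
for _ in range(100):
    position += 0.02                       # inertial integration drifts
    position = fuse(position, true_position)

# Without the fixes the drift alone would have added 2.0 units of error;
# with them the estimate settles near the truth with a small fixed bias.
print(round(position, 2))  # -> 1.18
```

The real system fuses full 3D accelerometer/gyro data with time-of-flight ranges, but the principle - cheap drifting sensors corrected by an absolute reference - is the same, and it is why there are no optical "blind spots" to clean up.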
So as a 3D artist, I cannot stress enough how great it would be to just slap a sub-$500 motion capture system on any actor, have them do their dance, and then have ready-to-use data - especially since this can be employed outside a studio, which is to say: on set!
But this would also make the Wii look like a stone-age gaming device. Imagine playing a boxing game with one of these.....
Or watch the YouTube video explaining the system.
Immersive environments get more and more press over the years as they finally become richer and more interactive. It's not only the "goggle 3D thing" anymore - it's also black boxes. One of the funkier ones I have seen - one that teaches children a bit about tree growth (namely that trees need water to grow ;) and is generally great looking - is Funky Forest. Children can create trees with their bodies and divert water with hand movements, all in an immersive four-walled environment that has some style.
If you have ever tried to lay out a scientific paper (once I had to), you noticed that just when you had chosen a nice font, you had to pull in two dozen other fonts just to get all the special characters your specific scientific project needs - that is, if you find them at all. No more of that, says STI - publisher of lots of scientific journals - which has released a free, open source scientific font package.
What you get is 30 different OpenType fonts - mostly with a "Times" appearance, but also in Fraktur (!), monospace and sans-serif - with about every scientific symbol on the planet, all free for your downloading pleasure. Still at an "open beta" stage at the moment, but already available for download here:
One of those "why haven't I thought of it" moments occurred to me today (even if I had, I probably would not have been able to pull it off, but that's beside the point). Take A LOT of photos of one and the same object/building, possibly from different angles, and generate a 3D polygon model out of them.
Now where do you find tons of photos of the same thing? Flickr, the pixel rubbish tip.
So that's what researchers from TU Darmstadt and the University of Washington, with help from the usual suspects (Microsoft, Adobe and other "we want to own the world" types), have done. Take 200+ photos of highly varying quality of any one building and let some pattern recognition magic figure out the point of view of each photographer automatically. Then interpolate the edges of the patterns, put them into a 3D grid, shake it all, fix the polygons, and out comes a 3D model representation of the building.
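The geometric heart of that pipeline - once the photographers' positions are recovered, every matched feature is located by intersecting the viewing rays that see it - can be shown with a toy two-camera, flat-world triangulation (my own sketch, nothing from the actual paper):

```python
# Toy triangulation in the plane: two cameras at known positions each
# see the same point along a known bearing; intersecting the two rays
# recovers the point's position. The photo-to-3D pipeline does a full
# 3-D version of this for every matched feature across 200+ photos.
import math

def triangulate(cam_a, angle_a, cam_b, angle_b):
    ax, ay = cam_a
    bx, by = cam_b
    dax, day = math.cos(angle_a), math.sin(angle_a)   # ray direction A
    dbx, dby = math.cos(angle_b), math.sin(angle_b)   # ray direction B
    # Solve cam_a + t * dir_a == cam_b + s * dir_b for t (2x2 system).
    denom = dax * dby - day * dbx
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)

# Two cameras on the ground, both sighting a landmark at (1, 1):
point = triangulate((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4)
print(round(point[0], 6), round(point[1], 6))  # -> 1.0 1.0
```

The hard research problem is everything before this step: recovering each camera's position and orientation from nothing but the photos themselves.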
Now, I thought this was enormously cool until I saw the actual output. To get that quality of polygonal modeling, it would take less than 2 hours for a human with only very basic knowledge of ZBrush to make a shoddy model like this. I mean, these are architectural marvels with fine details and sharp edges - presenting a 3D model that looks like a bad cast of souvenir soap does not speak well for your research. So as cool as the concept sounds, this is ways off from being used for anything productive, other than - maybe - having a proxy object with roughly exact measures (off by 2 inches for the tower of Pisa, they say), but that I could also get by just using a background photo.
It does, however, seem to work quite nicely for organic objects (one would have to see the point cloud of one of these objects to make a really good observation about this - something they (deliberately?) have not made public), as can be seen on the fabric of the Statue of Liberty. So rather than pulling 200+ photos of a building off Flickr, maybe they should focus on taking 50+ instant photos of a person and getting rid of laser scanners once and for all? Or maybe they will use this tech to make a 3D representation of humans from 200+ surveillance camera photos of one and the same person and then clone them?
In what seems like one of the coolest projects I have read about in a long time, a group of people is building a fleet of boats - or better, floating objects - from scrap junk found in dumpsters and abandoned construction sites. These floats are powered by vegetable oil, solar power and wind, and will sail down the Mississippi, stopping in bigger towns to hold workshops and theater performances. They carry a library along and invite groups along the route to jam with them. The floats actually look extremely cool (note to self: build a boat with this design approach for the waterway down at the house, and make it float on discarded foam in wooden containers). An environment-friendly pirate street art project par excellence. I wish them the best of luck in their endeavor.
For more information see the missrockaway.org webpage.