The Mill’s shapeshifting Blackbird can mimic any car

Securing exotic, high-performance vehicles for a video shoot can be an expensive and arduous ordeal. Between dealing with vehicle availability, locations, and filming, setting up the perfect shot for movies or commercials is extremely difficult...

Lytro’s first pro movie camera is designed for visual effects magic

While there are plenty of advanced digital movie cameras, most of them aren't really designed for the modern realities of movie making, where computer-generated effects are seemingly ubiquitous. You'll still have to bust out the green screen if you...

Changing Faces in Videos in Real Time: Literally Putting Words in Someone Else’s Mouth

Last October, we checked out a tracking and animation system that creates a realistic computer-animated face based on one person's face and another person's facial movements. Most of the researchers behind that system continued working on it, taking it to its logical conclusion: it can now composite the resulting animation into a video with very convincing results.

[Image: Real-time face capture and reenactment of RGB videos by Thies et al.]

Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner’s system creates photometric maps of the source and target faces, and uses those maps to quickly and accurately create a hybrid computer-animated face. What takes this hack over the top is that it accounts for the target video’s lighting, such that the CGI face seamlessly blends with the rest of the target video when it’s pasted over the target actor’s real face. The target videos in the demo below are all YouTube streams.
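
To make the lighting trick a bit more concrete, here's a minimal sketch of my own, not the authors' code, of one common way such blending is done: estimate the target frame's illumination as low-order spherical-harmonic coefficients from the face's surface normals and observed pixel intensities, then shade the synthesized face under those same coefficients before pasting it in. All names and data below are toy stand-ins.

```python
# Toy illustration of lighting-matched compositing (not the authors' code):
# recover the target's illumination from its face pixels, then render the
# reenacted face under that same light so the composite blends in.
import numpy as np

def sh_basis(normals):
    """First 4 spherical-harmonic basis functions at unit normals (Nx3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([np.ones_like(x), x, y, z], axis=1)

def estimate_lighting(normals, intensities):
    """Least-squares SH lighting coefficients from target-face pixels."""
    coeffs, *_ = np.linalg.lstsq(sh_basis(normals), intensities, rcond=None)
    return coeffs

def shade(normals, albedo, coeffs):
    """Shade the synthesized face with the estimated target lighting."""
    return albedo * (sh_basis(normals) @ coeffs)

# Toy data: random unit normals, a "true" light, noisy observed intensities.
rng = np.random.default_rng(0)
n = rng.normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
true_light = np.array([0.8, 0.1, 0.3, 0.5])
observed = sh_basis(n) @ true_light + rng.normal(scale=0.01, size=500)

light = estimate_lighting(n, observed)
relit = shade(n, albedo=0.9, coeffs=light)  # face rendered under target light
```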

That is cray.

[via Matthias Nießner via Gizmodo]

Disney FaceDirector Combines Two Takes Into One: Scene, Take 1, Version 1.1

Last October we checked out a fascinating animation technique that transfers facial movements from one person to another in real time. Disney Research's FaceDirector, on the other hand, blends two different takes of an actor's face into one customizable performance.

[Image: Disney Research FaceDirector]

FaceDirector synchronizes two source performances based on facial cues – the eyebrows, eyes, nose and lips – and the actor's dialogue. The resulting hybrid face can then be customized in terms of which of the two source performances is visible at any given time. For example, in the image above, the synthesized performance shows the actress switching multiple times between an angry and a happy expression, even though she actually only recorded a happy-only take and an angry-only take. The idea is for filmmakers to save on resources by using post-production to achieve their desired performance with fewer reshoots. As you'll see in the video below, FaceDirector can also be used to overdub both the audio and the video of erroneous dialogue.
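
Here's a rough sketch of how I read the synchronization-and-blending idea; it's a hypothetical illustration, not Disney's implementation. Two takes are aligned with dynamic time warping on a per-frame facial-cue signal (a 1-D stand-in for the eyebrow, eye, nose and lip features), and a director-chosen weight curve cross-fades between the aligned takes.

```python
# Hypothetical sketch of the FaceDirector idea (not Disney's code): align
# two takes with dynamic time warping, then blend with a weight curve.
import numpy as np

def dtw_path(a, b):
    """Classic DTW over two 1-D cue signals; returns matched index pairs."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i-1, j], cost[i, j-1], cost[i-1, j-1])
    # Backtrack from the corner to recover the frame-to-frame alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i-1, j-1], cost[i-1, j], cost[i, j-1]])
        i, j = (i-1, j-1) if step == 0 else (i-1, j) if step == 1 else (i, j-1)
    return path[::-1]

# Two toy takes of the same line, slightly out of sync in time.
t = np.linspace(0, 1, 80)
happy = np.sin(2 * np.pi * 2 * t)            # take 1 cue signal
angry = np.sin(2 * np.pi * 2 * (t - 0.05))   # take 2, time-shifted

pairs = dtw_path(happy, angry)
w = np.linspace(0.0, 1.0, len(pairs))        # director's blend curve
hybrid = [(1 - wk) * happy[i] + wk * angry[j] for (i, j), wk in zip(pairs, w)]
```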

The researchers acknowledge that FaceDirector is far from perfect. For instance, it has trouble blending performances where the facial cues are drastically different, e.g. one has the actor's lips closed while the other has them wide open. It's also hampered by items that cover the actor's face, such as eyeglasses or even hair. You can download their paper from Disney Research's website.

[via Reddit]

Real-Time Facial Expression Transfer: Virtual Face/Off

In the near future, you may be able to make yourself speak in any language, or have video proof of your friend saying he loves to eat poop. It's all thanks to a new tracking and animation system that can transfer the facial movements of one person into a photorealistic CGI rendering of another person's face, all in real time. In other words, it can make you, or rather an animation of your face, express or say anything.

[Image: Real-time expression transfer for facial reenactment (1)]

The jaw-dropping technique was developed by Justus Thies, Michael Zollhöfer, Matthias Nießner, Levi Valgaerts, Marc Stamminger and Christian Theobalt. The group developed custom software that creates parametric models of the source face and the target face with the help of a depth sensor such as the Kinect. Their program also takes into account the real-time lighting conditions of the target face to make the resulting animation more realistic.
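
To give a rough idea of what "creating a parametric model" involves, here's a minimal toy sketch, my own and not the group's software: the face is represented as a neutral mesh plus weighted blendshape offsets, and the expression weights are solved by least squares against points observed by a depth sensor. Every name and number here is a made-up stand-in.

```python
# Toy sketch of fitting a parametric face model to depth data. The real
# system jointly optimizes identity, expression, pose, and lighting; this
# solves only the expression weights with everything else held fixed.
import numpy as np

rng = np.random.default_rng(1)
V, K = 300, 5                          # vertices, blendshapes
neutral = rng.normal(size=(V, 3))      # neutral face geometry
basis = rng.normal(size=(K, V, 3))     # per-blendshape vertex offsets

def render_geometry(weights):
    """Face mesh = neutral shape + weighted sum of blendshape offsets."""
    return neutral + np.tensordot(weights, basis, axes=1)

def fit_expression(observed):
    """Least-squares expression weights from observed (depth) vertices."""
    A = basis.reshape(K, -1).T          # (3V, K) design matrix
    b = (observed - neutral).reshape(-1)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

true_w = np.array([0.5, -0.2, 0.0, 0.8, 0.1])
depth_scan = render_geometry(true_w) + rng.normal(scale=0.01, size=(V, 3))
print(fit_expression(depth_scan))       # recovers approximately true_w
```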

[Image: Real-time expression transfer for facial reenactment (2)]

Before it can work its magic, the system must first analyze the source and target faces so that it can calibrate itself. When that’s done, anything that the source face does will be mimicked in real time by a computer animation that looks just like the target face. Note that the resulting virtual face will still mimic the target’s head movement.
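
In code terms, the runtime flow might look something like the following toy sketch, with hypothetical structures rather than the researchers' actual API: a one-time calibration fixes each actor's identity parameters, after which the live loop copies only the source's expression parameters while leaving the target's identity and head pose alone, which is why the output still follows the target's head movement.

```python
# Hypothetical sketch of the calibrate-then-transfer loop (not the
# authors' API): expressions come from the source, pose from the target.
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceState:
    identity: np.ndarray    # fixed after the calibration phase
    expression: np.ndarray  # varies per frame
    pose: np.ndarray        # head rotation/translation

def transfer(source: FaceState, target: FaceState) -> FaceState:
    # Copy the source's expression; keep the target's identity and pose,
    # so the virtual face still follows the target's head movement.
    return FaceState(target.identity, source.expression.copy(), target.pose)

# Calibration (toy): identity vectors for both actors are fixed once.
src_id, tgt_id = np.ones(8), np.full(8, 2.0)

for frame in range(3):  # stand-in for the live capture loop
    src = FaceState(src_id, np.random.rand(5), np.zeros(6))
    tgt = FaceState(tgt_id, np.random.rand(5), np.array([0.1 * frame] * 6))
    out = transfer(src, tgt)
    print(frame, out.pose[0], out.expression[:2])
```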

Aside from this "facial reenactment," the system can also make the virtual face appear to be wearing makeup or different clothing, or to be under different lighting conditions.

It’s an insanely useful invention, but obviously it can also be used for nefarious purposes. Now even your face can be hacked. You can download the group’s full paper from Stanford University’s website.

[via Digg]

Ford’s Immersive Cinematic Engineering Tech Makes Virtual Cars Look Like Real Cars

Over the last few decades, computer graphics have become a more and more essential part of industrial design – especially in the automotive industry. The ability to visualize designs digitally has given designers and engineers the freedom to test and refine concepts and functionality prior to expending time and money on physical prototypes.

[Image: Ford ICE (1)]

Ford is definitely on the leading edge of using virtual reality techniques to visualize new vehicles. I've spent some time checking out the automaker's virtual reality environments, and they're truly impressive, allowing you to walk around and step inside of virtual vehicles. Now the company has announced it's using a new technology called ICE (Immersive Cinematic Engineering), which goes one step further than the systems I've seen.

[Image: Ford ICE (3)]

Developed under the leadership of Ford virtual reality and advanced visualization specialist Elizabeth Baron, ICE tech allows digital vehicles to be observed in truly photorealistic detail, letting designers and engineers see their vehicles with the same material properties they would have in reality. This can be helpful for reducing reflections that distract those in the cabin or other drivers on the road, as well as for seeing how light plays with the various materials used in the vehicle.

[Image: Ford ICE (2)]

Cinematic ray tracing and global illumination techniques help produce images so realistic that every surface, crease, and crevice plays with light just as it does in the real world. Previous tech used precomputed shadows and reflections, while ICE renders these in real time. This way, everything from how the headlights and interior lights will work to how light will pass through the windshield can be realistically rendered.
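
To make the precomputed-versus-real-time distinction concrete, here's a toy sketch of mine, not anything from Ford's ICE renderer: instead of looking up a shadow that was baked ahead of time, the renderer casts a shadow ray from each shaded point toward the light on every frame, so shadows stay correct as the geometry moves. All names and numbers are made up for illustration.

```python
# Toy shadow-ray test (not Ford's renderer): one sphere, one point light,
# Lambertian ground. Shadows are recomputed each frame, not baked.
import numpy as np

def hits_sphere(origin, direction, center, radius):
    """Ray-sphere intersection test for the shadow ray (unit direction)."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False
    t = (-b - np.sqrt(disc)) / 2.0
    return t > 1e-4  # occluder must sit in front of the shaded point

def shade_point(p, normal, light_pos, sphere_center):
    to_light = light_pos - p
    wi = to_light / np.linalg.norm(to_light)
    if hits_sphere(p, wi, sphere_center, 1.0):  # cast every frame
        return 0.0                              # point is in shadow
    return max(np.dot(normal, wi), 0.0)         # Lambertian term

light = np.array([0.0, 5.0, 0.0])
ground_pt = np.array([0.0, 0.0, 0.0])
up = np.array([0.0, 1.0, 0.0])

for frame in range(3):
    # The sphere drifts across the light; its shadow updates with it.
    sphere = np.array([-2.0 + 2.0 * frame, 2.5, 0.0])
    print(frame, shade_point(ground_pt, up, light, sphere))
```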

[Image: Ford ICE (4)]

I can only imagine that these improvements will have an even more profound impact on the freedom that designers have when creating vehicles, as they’ll be able to refine even the tiniest details of how light affects a car long before the car goes into production.

Watch As Mario Travels Through Tokyo

This video is sort of a snapshot of a typical day for our Italian plumber friend Mario. On this day, he has to make his way through Tokyo, or else he will be late for a very important event.

[Image: Mario in Tokyo]

This charming fan film was made by visual effects artist Dean Wright, and it is pretty cool. Mario is always a delight to watch in action, but in this video Wright seems to capture his personality and spirit really well.

We join Mario as he is looking at some plumbing supplies and then his alarm goes off…

And so he has to make his way across Tokyo so that he is not late for the Super Smash Bros. tournament. Mamma mia!

[via Kotaku]

Heavenly Sword Movie Starring Fringe’s Anna Torv

Once upon a time, it seemed like Heavenly Sword might end up being a pretty substantial PlayStation franchise. But that time was 2007, back when the game was released for PS3, and we’ve heard little...