Real-Time Facial Expression Transfer: Virtual Face/Off

In the near future, you may be able to make yourself speak in any language, or have video proof of your friend saying he loves to eat poop. It's all thanks to a new tracking and animation system that can transfer the facial movements of one person onto a photorealistic CGI rendering of another person's face in real time. In other words, it can make you, or rather an animation of your face, express or say anything.


The jaw-dropping technique was developed by Justus Thies, Michael Zollhöfer, Matthias Nießner, Levi Valgaerts, Marc Stamminger and Christian Theobalt. The group developed custom software that creates parametric models of the source face and the target face with the help of a depth sensor such as the Kinect. Their program also takes into account the real time lighting conditions of the target face to make the resulting animation more realistic.


Before it can work its magic, the system must first analyze the source and target faces so that it can calibrate itself. Once that's done, anything the source face does is mimicked in real time by a computer animation that looks just like the target face. Note that the resulting virtual face keeps the target's own head movements; only the facial expressions are taken from the source.
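To make the general idea more concrete, here is a minimal Python sketch of blendshape-style expression transfer: fit expression coefficients for the source face, then replay them on the target's identity. The linear model, the fitting step and every name below are illustrative assumptions made for this sketch, not the researchers' actual code.

```python
import numpy as np

# A loose sketch of parametric expression transfer, not the group's method.
# Assumption: each face is a linear blendshape model,
#   face = neutral + basis @ expression_coefficients,
# and the depth data has already been registered to a vector of vertex
# positions ("observed") during the calibration step described above.

def fit_expression(observed, neutral, basis):
    # Estimate the expression coefficients that best explain the observation.
    coeffs, *_ = np.linalg.lstsq(basis, observed - neutral, rcond=None)
    return coeffs

def transfer_expression(observed_source, source_model, target_model):
    # Fit the source's expression, then replay it on the target's identity.
    src_neutral, src_basis = source_model
    tgt_neutral, tgt_basis = target_model
    coeffs = fit_expression(observed_source, src_neutral, src_basis)
    return tgt_neutral + tgt_basis @ coeffs  # reenacted target face

# Toy usage with random stand-in data (3 vertices x 3 coordinates, 2 blendshapes).
rng = np.random.default_rng(0)
source_model = (rng.normal(size=9), rng.normal(size=(9, 2)))
target_model = (rng.normal(size=9), rng.normal(size=(9, 2)))
observed = source_model[0] + source_model[1] @ np.array([0.5, -0.2])
print(transfer_expression(observed, source_model, target_model))
```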

Aside from this “facial reenactment”, the system can also be used to make it so that the virtual face is wearing makeup or different clothing, or is under different lighting conditions.

It’s an insanely useful invention, but obviously it can also be used for nefarious purposes. Now even your face can be hacked. You can download the group’s full paper from Stanford University’s website.

[via Digg]

Microsoft SemanticPaint Creates Labeled 3D Models of Real Objects Splatoon-style

Microsoft Research’s Shahram Izadi and Philip Torr of the University of Oxford have come up with an intuitive way of teaching computers the names of objects while simultaneously creating 3D models of said objects. Their SemanticPaint system learns the names of objects by simple voice commands and can then automatically identify similar objects.

Microsoft SemanticPaint prototype (image)

SemanticPaint uses a Kinect or a similar depth sensor. You simply walk around and aim the sensor at the environment or objects you wish to scan. When you want to name an object, you either touch it or draw a border around it, then say its name. On its visual interface, SemanticPaint marks each named object with a unique color. The example above shows chairs as green, bananas as blue, the floor as red and tables as yellow.

Once you’ve named a few objects, you can order SemanticPaint to enter a test mode, where it will attempt to scan and identify unnamed objects that it thinks are similar to the ones you’ve already named. The beauty of SemanticPaint is that you can interrupt this automated process and correct any errors in real time.
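Here is a rough Python sketch of that labeling and test-mode loop, assuming the scanned scene is stored as a set of labeled elements (think voxels); every function and name below is a placeholder for illustration, not the actual SemanticPaint API.

```python
# Placeholder label-to-color mapping, matching the example colors above.
LABEL_COLORS = {"chair": "green", "banana": "blue", "floor": "red", "table": "yellow"}
labels = {}  # scene element id -> object name

def name_object(selected_elements, spoken_name):
    """The user touches an object (or outlines it) and says its name."""
    for element in selected_elements:
        labels[element] = spoken_name
    color = LABEL_COLORS.get(spoken_name, "a new color")
    print(f"Marked {len(selected_elements)} elements as '{spoken_name}' ({color})")

def test_mode(unlabeled_elements, classify, correct=None):
    """The system guesses labels for unnamed geometry; the user may interrupt
    and override any guess in real time via the optional `correct` hook."""
    for element in unlabeled_elements:
        guess = classify(element)
        labels[element] = correct(element, guess) if correct else guess

# Toy usage: label two elements as chairs, then let a trivial classifier guess.
name_object(["voxel_1", "voxel_2"], "chair")
test_mode(["voxel_3", "voxel_4"], classify=lambda element: "chair")
print(labels)
```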

You can read Shahram and Philip’s full paper on Microsoft Research’s website. The researchers hope that SemanticPaint could someday help guide robots and visually impaired people. Then there’s the potential for new augmented reality and other interactive experiences. Perhaps someday we’ll be able to easily spot specific persons in a crowd, extrapolate the appearance of archeological finds or make accurate mockups of unexplored areas. I hope we make sure to mark humans as friends.

[via TechSpot]

 

Kinect-Powered PomPom Mirror Highlights Fluffiness


Snow White’s stepmother would have a hard time finding out who’s the fairest of them all, as everyone’s reflection in this Kinect-powered mirror looks equally fluffy.

New York artist Daniel Rozin seems to have developed an obsession with mirrors, considering that most of his portfolio focuses on them. His latest creation, simply titled PomPom Mirror, might convince you that Rozin is not a big fan of sharpness. The contraption makes use of a Kinect sensor, motors and plenty of faux fur pom poms to create fluffy reflections of whoever stands in front of it.

Obviously, the effect couldn't be achieved with just a handful of faux fur pom poms and motors. To pull it off, Rozin employed 464 motors that flip 928 spherical puffs between beige and black, depending on the motion captured by the Kinect sensor.

Reactions to the concept have been predominantly positive, with some people even wanting to sleep on the mirror. The current surface definitely wouldn't suffice, but maybe that's something to suggest for Rozin's future projects: a full-length mirror that doubles as a fluffy mattress when not in use. After all, the ladies may want to admire their dresses, and the current PomPom Mirror can't reflect them in full.

The Kinect sensor tracks the people standing in front of the mirror and sends the data to a microcontroller that, in turn, flips the motors to change each puff's color. The reflection appears in real time, but don't expect instant reactions. After all, something as fluffy as this mirror shouldn't make sudden moves, as they would contradict the whole concept. One might argue that the fluffy mirror has a life of its own, and considering its appearance and motion, you wouldn't be far from the truth.
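For a sense of what that pipeline could look like in software, here is a speculative Python sketch that turns a Kinect depth frame into one flip command per motor. The grid layout, depth range and threshold are assumptions made for the sketch, not details from Rozin's actual build.

```python
import numpy as np

GRID_W, GRID_H = 29, 16      # assumed layout for the 464 motors (29 x 16)
NEAR_MM, FAR_MM = 500, 2500  # assumed "someone is standing here" depth range

def depth_to_motor_states(depth_frame_mm):
    """Turn a depth frame (in millimeters) into one flip command per motor."""
    silhouette = (depth_frame_mm > NEAR_MM) & (depth_frame_mm < FAR_MM)
    h, w = silhouette.shape
    states = np.zeros((GRID_H, GRID_W), dtype=bool)
    for row in range(GRID_H):
        for col in range(GRID_W):
            # Each motor covers a block of depth pixels; flip to the dark
            # side if enough of that block is occupied by a person.
            block = silhouette[row * h // GRID_H:(row + 1) * h // GRID_H,
                               col * w // GRID_W:(col + 1) * w // GRID_W]
            states[row, col] = block.mean() > 0.5
    return states  # True means show black, False means show beige

# Toy usage with a fake 640x480 depth frame and a "person" in range.
fake_frame = np.full((480, 640), 4000)
fake_frame[100:400, 250:400] = 1200
print(depth_to_motor_states(fake_frame).astype(int))
```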

In case the above video isn't enough for you, and you happen to be in the Big Apple these days, don't hesitate to visit the Descent with Modification exhibition. Rozin's PomPom Mirror is currently showcased there, and visitors are free to try it out. The exhibition runs through July 1, 2015, so don't miss your chance to witness a completely unique form of art. Unique and fluffy, that is.


Computational Hydrographics Applies Patterns Precisely: Copy Paste Connoisseur

Hydrographics, or water transfer printing, lets you apply graphics to objects in one pass. It gives a uniform finish and can be used even with large objects. And watching it is a great timewaster. But it's not ideal for items with complex shapes, and it can only be used to apply graphics with repeating patterns, because it's hard to align the film that holds the pattern with the substrate object. Until now.

Computational hydrographic printing (image 1)

Students from Columbia University and Zhejiang University presented their computational hydrographic printing system (pdf) at SIGGRAPH 2015. To make it easier to understand this breakthrough, compare the image above with the one below. The patterns on the objects above were printed using the students’ setup, while the ones below were applied using traditional hydrographics. Note how the latter only has simple patterns, while the mask and the car above have a variety of details that are right where they should be – the eyes, nose, headlights, wheels etc – as if they were painted on.

Computational hydrographic printing (image 2)

The solution is a combination of hardware made from off-the-shelf parts and the group's custom software. The students' hydrographics machine has a depth sensor (e.g. a Kinect) that analyzes the object's shape as well as its orientation and location relative to the machine's vat. They combine this data with the group's virtual simulation of how the film holding the pattern will behave. The simulation predicts exactly how the film will stretch and distort when an object of any given shape is dipped into it.

From there, they can print a pattern that fits exactly on the substrate object (assuming the printing machine dips and holds the object in the recorded position). Because the system can record an object’s shape and print patterns with extreme precision, it can even print multiple patterns separately on the same object.
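Here is a simplified Python sketch of the pre-distortion idea: if the simulation tells you where each point of the floating film ends up on the object's surface, you can paint each film pixel with the color that spot should have, so the pattern lands in the right place after dipping. The mapping function below is a made-up stand-in for the group's physical simulation.

```python
import numpy as np

def simulated_film_to_surface(u, v):
    """Stand-in for the simulation: film coordinates (u, v) in [0, 1] map to
    surface coordinates. Here the film is assumed to stretch near its center."""
    stretch = 1.0 + 0.3 * np.exp(-((u - 0.5) ** 2 + (v - 0.5) ** 2) / 0.05)
    return (u - 0.5) * stretch + 0.5, (v - 0.5) * stretch + 0.5

def predistort_pattern(desired_surface_color, film_resolution=256):
    """Build the image to print on the film so that, after dipping, each
    surface point (x, y) receives desired_surface_color(x, y)."""
    film = np.zeros((film_resolution, film_resolution, 3))
    for i in range(film_resolution):
        for j in range(film_resolution):
            u, v = i / film_resolution, j / film_resolution
            x, y = simulated_film_to_surface(u, v)
            film[i, j] = desired_surface_color(x, y)
    return film

# Toy usage: a checkerboard we want to appear undistorted on the dipped object.
checker = lambda x, y: np.array([1.0, 1.0, 1.0]) * ((int(x * 8) + int(y * 8)) % 2)
print(predistort_pattern(checker, film_resolution=16).shape)
```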

As noted in the video, this has a lot of applications, but one that’s particularly interesting is its potential to serve as an alternative to colored 3D printers. If you want the colors to only be on the surface of a printed object, perhaps this will be more economical and precise. Download the students’ paper from Columbia University (pdf) for more on their project.

[via Wired]