Auto-Tracking Desk Lamp: Luxo Jr., is That You?

Because who can be bothered to manually adjust their desk lamp (what are we, peasants?), the Werobot Pino Lamp on Kickstarter is an intelligent robotic auto-tracking lamp that can follow book pages, your face, or certain colors to ensure they’re always well illuminated. The future, ladies and gentlemen, this is it! Forget flying cars; we’ve got book-tracking desk lamps.

The lamp has 160 degrees of object tracking movement and can auto-adjust its beam color and brightness based on its artificial intelligence or your pre-set preferences. Its base serves as a wireless charging pad. It also has an optional breathing effect so that the lamp can offer a level of emotional companionship. Granted, not a very high level of emotional companionship, but every little bit counts when you’re lonely.

https://www.kickstarter.com/projects/450423506/werobot-pino-lamp-intelligent-robot-auto-tracking-lamp
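For the tinkerers wondering how this sort of thing actually works: Werobot hasn't published the Pino's firmware, so the sketch below is just a guess at the general recipe, a webcam face detector nudging a pan servo to keep the face centered. The camera loop, servo interface, and gain value are all assumptions.

```python
# Bare-bones sketch of camera-based auto-tracking in the spirit of the
# Pino Lamp. Not Werobot's actual firmware; the servo is stubbed out.
import cv2

PAN_RANGE = 160            # the lamp advertises 160 degrees of movement
pan_angle = PAN_RANGE / 2  # start centered

def set_servo(angle):
    """Stand-in for real servo hardware (e.g. a PWM pin on an MCU)."""
    print(f"pan servo -> {angle:.1f} deg")

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, 1.1, 5)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face
        error = (x + w / 2) - frame.shape[1] / 2            # pixels off-center
        pan_angle = max(0, min(PAN_RANGE, pan_angle - 0.02 * error))
        set_servo(pan_angle)
    cv2.imshow("lamp cam", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
cap.release()
```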

For those uninterested in all the AI bells and whistles, the lamp’s features can also be fully adjusted via its smartphone app; that way, you can set it up just the way you want to without it moving or adjusting its light on its own. You know, kind of like a regular lamp. I just bought one of these for every room of the house, including the garage and laundry room.

Guy Builds Creepy Facial Recognition Robot with ‘Brain’ of a Furby

Seen here looking like a typical nuclear family of the future, this is YouTuber LOOK MUM NO COMPUTER showing off a facial recognition robot (the one on the right). It appears to be powered by a Furby inside the clear plastic computer case, which is modeled after a classic Apple Macintosh. Fingers crossed it never recognizes my face because I will snub it publicly.

The facial recognition and tracking are actually powered by a Raspberry Pi using a camera in each of the computer's eyes. The Furby in the back of the computer, with the ribbon cables coming out of its eye sockets, is just there for decoration and to add an extra creepy factor. You know, as if this project really needed multiple layers of creepiness.
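LOOK MUM NO COMPUTER hasn't shared the robot's code, but Pi-based face recognition builds very often lean on the open-source face_recognition library, so a rough approximation of the idea looks something like this (the known-face image path is a placeholder, and this is emphatically not his actual implementation):

```python
# Rough sketch of Raspberry Pi face recognition; not the builder's code.
# Uses the common face_recognition library (dlib under the hood).
import cv2
import face_recognition

# One known face to greet by sight; "known_face.jpg" is a placeholder.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("known_face.jpg"))[0]

cap = cv2.VideoCapture(0)   # one of the robot's two eye cameras
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    locations = face_recognition.face_locations(rgb)
    for encoding in face_recognition.face_encodings(rgb, locations):
        if face_recognition.compare_faces([known], encoding)[0]:
            print("Recognized you. No snubbing this robot.")
        else:
            print("Stranger detected.")
    cv2.imshow("eye cam", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
cap.release()
```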

I really feel like this thing belongs in a reboot of Pee-wee’s Playhouse. Or a nightmare. Is there a difference? Depends on who you ask. But if you ask me, the answer will be dictated by an anthropomorphic Magic 8-Ball.

Cornell Researchers Created an Earphone That Can Track Facial Expressions

Researchers from Cornell University have created an earphone system that can track a wearer's facial expressions even when they're wearing a mask. C-Face can monitor cheek contours and convert the wearer's expression into an emoji.
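Cornell hasn't open-sourced C-Face's model, but the gist of the output stage is simple enough to fake: features from the cheek cameras go into a classifier, and the predicted expression becomes an emoji. The single-feature threshold rule below is purely illustrative, not the real deep-learning pipeline.

```python
# Toy sketch of C-Face's output stage: cheek-contour features in, emoji out.
# The real system uses a learned model on ear-mounted camera data; this
# threshold rule on an assumed mouth-curvature feature is a stand-in.
EMOJI = {"happy": "😀", "sad": "😢", "neutral": "😐"}

def classify(mouth_curve: float) -> str:
    """Fake classifier keyed on a hypothetical mouth-curvature feature."""
    if mouth_curve > 0.2:
        return "happy"
    if mouth_curve < -0.2:
        return "sad"
    return "neutral"

print(EMOJI[classify(0.5)])   # -> 😀
```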

Changing Faces in Videos in Real Time: Literally Putting Words in Someone Else’s Mouth

Last October, we checked out a tracking and animation system that creates a realistic computer animated face based on one person’s face and another person’s facial movements. Most of the researchers continued to work on that system, taking it to its logical conclusion. It can now plaster the resulting animation into a video with very convincing results.

[Image: real-time face capture and reenactment of RGB videos, by Thies et al.]

Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt and Matthias Nießner’s system creates photometric maps of the source and target faces, and uses those maps to quickly and accurately create a hybrid computer-animated face. What takes this hack over the top is that it accounts for the target video’s lighting, such that the CGI face seamlessly blends with the rest of the target video when it’s pasted over the target actor’s real face. The target videos in the demo below are all YouTube streams.

That is cray.
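For the code-inclined, the paper is the place to go for the real math, but the compositing step at the end is easy to caricature: paste the synthesized face patch into the target frame so the seam disappears. Face2Face actually re-renders the face under the target's estimated lighting; the Poisson blending below is a much cruder stand-in, with placeholder file names.

```python
# Crude illustration of the final compositing step: blending a synthesized
# face patch into a target frame so the paste boundary vanishes. The real
# system re-renders the face under the target's estimated illumination;
# Poisson blending here only gestures at the idea.
import cv2
import numpy as np

target_frame  = cv2.imread("target_frame.png")    # placeholder file names
rendered_face = cv2.imread("rendered_face.png")   # synthesized face patch

h, w = rendered_face.shape[:2]
mask = np.full((h, w), 255, np.uint8)             # blend the whole patch
center = (target_frame.shape[1] // 2, target_frame.shape[0] // 2)

# Poisson blending matches image gradients at the seam, hiding the edges.
composite = cv2.seamlessClone(rendered_face, target_frame, mask,
                              center, cv2.NORMAL_CLONE)
cv2.imwrite("composite.png", composite)
```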

[via Matthias Nießner via Gizmodo]

Disney FaceDirector Combines Two Takes Into One: Scene, Take 1, Version 1.1

Last October, we checked out a fascinating animation technique that transfers facial movements from one person to another in real time. Disney Research's FaceDirector, on the other hand, blends two different takes of an actor's face into one customizable performance.

[Image: Disney Research FaceDirector]

FaceDirector synchronizes two source performances based on facial cues – the eyebrows, eyes, nose and lips – and the actor's dialogue. The resulting hybrid face can then be customized in terms of which of the two source performances is visible at any given time. For example, in the image above, the synthesized performance shows the actress switching multiple times between an angry and a happy expression, even though she only recorded one happy take and one angry take. The idea is for filmmakers to save on resources by using post-production to achieve their desired performance with fewer reshoots. As you'll see in the video below, FaceDirector can also be used to overdub both the audio and the video of erroneous dialogue.
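Disney's actual pipeline does non-rigid alignment and audio analysis, but the core "director's dial" idea can be sketched as a per-frame blend of two time-aligned takes. Everything below (the file names, the oscillating weight curve, the assumption that the takes are already aligned) is invented for illustration.

```python
# Minimal sketch of the FaceDirector idea: two takes of the same line,
# blended frame by frame with a director-chosen weight curve. The real
# system aligns takes via facial cues and dialogue and warps geometry;
# a plain crossfade of already-aligned frames is assumed here.
import cv2
import numpy as np

happy = cv2.VideoCapture("take_happy.mp4")   # placeholder file names
angry = cv2.VideoCapture("take_angry.mp4")

def weight(t, period=48):
    """0 = all happy, 1 = all angry; oscillates like the demo above."""
    return 0.5 + 0.5 * np.sin(2 * np.pi * t / period)

frames, t = [], 0
while True:
    ok1, f1 = happy.read()
    ok2, f2 = angry.read()
    if not (ok1 and ok2):
        break
    a = float(weight(t))
    frames.append(cv2.addWeighted(f1, 1 - a, f2, a, 0))
    t += 1
```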

The researchers acknowledge that FaceDirector is far from perfect. For instance, it has trouble blending performances where the facial cues are drastically different, e.g. one take has the actor's lips closed while the other has them wide open. It's also hampered by items that cover the actor's face, such as eyeglasses or even hair. You can download their paper from Disney Research's website.

[via Reddit]

Real-Time Facial Expression Transfer: Virtual Face/Off

In the near future, you may be able to make yourself speak in any language, or have video proof of your friend saying he loves to eat poop. It's all thanks to a new tracking and animation system that can transfer the facial movements of one person onto a photorealistic CGI rendering of another person's face, all in real time. In other words, it can make you, or rather an animation of your face, express or say anything.

[Image: real-time expression transfer for facial reenactment]

The jaw-dropping technique was developed by Justus Thies, Michael Zollhöfer, Matthias Nießner, Levi Valgaerts, Marc Stamminger and Christian Theobalt. The group developed custom software that creates parametric models of the source face and the target face with the help of a depth sensor such as the Kinect. Their program also takes into account the real time lighting conditions of the target face to make the resulting animation more realistic.

[Image: real-time expression transfer for facial reenactment, figure 2]

Before it can work its magic, the system must first analyze the source and target faces so that it can calibrate itself. When that’s done, anything that the source face does will be mimicked in real time by a computer animation that looks just like the target face. Note that the resulting virtual face will still mimic the target’s head movement.
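The authors' models are far richer than this, but the core trick is easy to show with a toy blendshape model: keep the target's identity coefficients and swap in the source's expression coefficients. The array sizes and random bases below are invented purely so the sketch runs.

```python
# Toy blendshape version of the transfer step: the target's identity plus
# the source's expression. The published system fits much richer parametric
# models from RGB-D input; these arrays are invented for illustration.
import numpy as np

n_vertices, n_id, n_expr = 5000, 80, 76
mean_face  = np.zeros(n_vertices * 3)
id_basis   = 0.01 * np.random.randn(n_vertices * 3, n_id)
expr_basis = 0.01 * np.random.randn(n_vertices * 3, n_expr)

target_identity   = np.random.randn(n_id)    # fitted during calibration
source_expression = np.random.randn(n_expr)  # tracked live from the source

# Reenacted geometry: target's face shape, source's facial expression.
reenacted = (mean_face
             + id_basis @ target_identity
             + expr_basis @ source_expression)
print(reenacted.shape)   # (15000,) -> x, y, z per vertex
```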

Aside from this “facial reenactment”, the system can also be used to make it so that the virtual face is wearing makeup or different clothing, or is under different lighting conditions.

It’s an insanely useful invention, but obviously it can also be used for nefarious purposes. Now even your face can be hacked. You can download the group’s full paper from Stanford University’s website.

[via Digg]

Face-Tracking Glassware Can Tell If Your Collocutor Is Angry, Happy or Sad

[Image: Fraunhofer IIS SHORE facial recognition Google Glass app]

Google Glass apps have come a long way since Mountain View's wearable was first shown to the world. The one developed by the Fraunhofer Institute for Integrated Circuits can tell the mood of the person you're talking to, and can provide an estimate of his or her age.

The SHORE app created by Fraunhofer IIS is not exactly the first of its kind, despite the claim that it's the "world's first emotion detection app on Google Glass," as Emotient's Glassware, unveiled a few months ago, served a very similar purpose. However, it would be wrong to claim that Fraunhofer IIS stole the idea, as the apps were most probably developed in parallel by their respective creators. On top of that, SHORE has a bit of extra functionality, as it does more than just recognize whether your interlocutor is angry, sad, happy, or surprised.

SHORE is able to tell men from women, and can provide an estimate of their age. The results are not exactly precise, neither when it comes to the mood of the person you're speaking to (ever heard of fake smiles?), nor when guessing their age, which is given as a range rather than an exact number.
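SHORE itself is proprietary Fraunhofer technology, so there's no public API to show; the sketch below only illustrates the shape of the output described above, with an emotion label, a gender guess, and an age range rather than an exact number. Every name in it is hypothetical.

```python
# Hypothetical stand-in for a SHORE-style analyzer; the real detector and
# its API are not public. Note the age comes back as a range, not a point.
from dataclasses import dataclass

@dataclass
class FaceAnalysis:
    emotion: str    # "angry" | "sad" | "happy" | "surprised"
    gender: str     # "male" | "female"
    age_low: int
    age_high: int

def analyze_face(image_bytes: bytes) -> FaceAnalysis:
    """Fake analyzer returning canned demo output."""
    return FaceAnalysis("happy", "female", 25, 32)

result = analyze_face(b"")
print(f"{result.emotion}, est. age {result.age_low}-{result.age_high}")
```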

The team of researchers that developed this app pointed out that “This opens up an entire spectrum of new smart eyewear applications, including communication aids for people with disorders such as autism, many of whom have difficulty interpreting emotions through facial expressions. By taking advantage of the additional capability to determine someone’s gender or estimate their age, the software could be used in other applications such as interactive games or market research analyses.”

“The foundation of the versatile solution lies in our extensive experience with detection and analysis technologies and a large database for machine learning,” claim the researchers. “The technology is ‘trained’ by accessing a database of more than 10,000 annotated faces. In combination with the structure-based features and learning algorithms, we can train so-called models that boast extremely high recognition rates.”

Needless to say, most people probably wouldn't need such an app, as their social skills help them easily identify their interlocutor's mood. Still, people with Asperger's syndrome or autism, who can have a hard time reading social situations, would benefit a lot from using the SHORE app.


FaceRig Turns You into a Digital Avatar in Real Time: Self-e

Here’s a program that could be one of the big hits of 2014. Currently in development by Holotech Studios, FaceRig lets anyone with a webcam project their head movements and facial expressions onto a virtual character, all in real time. It’s Dance Central for your face.

[Image: FaceRig]

According to Holotech Studios, FaceRig is based on “real time image based tracking technology” made by Swedish company Visage Technologies. Aside from tracking and mapping your head and face, voice alteration will also be included in FaceRig. So you can become a voice actor, a motion capture actor and an animator all at once.
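Neither Visage's tracker output nor FaceRig's rig format is public, but the retargeting step in this kind of software boils down to mapping tracked facial parameters onto an avatar's morph targets, often with some cartoonish exaggeration. All the parameter names and gains below are made up.

```python
# Sketch of FaceRig-style retargeting: tracked facial parameters from the
# webcam drive an avatar's morph targets. All names and gains are invented;
# Visage's tracker and FaceRig's rigs aren't public.
tracked = {"jaw_open": 0.7, "smile": 0.4, "brow_raise": 0.1}  # per frame

RETARGET = {
    "jaw_open":   ("avatar_mouth_open", 1.2),
    "smile":      ("avatar_grin",       1.5),  # cartoon avatars overact
    "brow_raise": ("avatar_brow_up",    1.0),
}

avatar_pose = {
    morph: min(1.0, tracked[param] * gain)
    for param, (morph, gain) in RETARGET.items()
}
print(avatar_pose)   # handed to the renderer every frame
```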

So what can you do with FaceRig? For starters, you can stream a show online using your avatar as your visage. You can be the next Hatsune Miku! Or rather, Half-sune Miku. You can make a simple animated film without spending a single second or cent on 3D modeling software. Or you can just make funny faces all day.

Holotech Studios plans to release several versions of FaceRig for different devices and use cases, such as a full featured desktop program for professional use and a mobile app for funny face use. For now a pledge of at least $5 (USD) on Indiegogo will be enough to score you both a beta and a full license to the basic version of FaceRig.

[via Incredible Things]