Adobe trained AI to detect facial manipulation in Photoshop

A team of Adobe and UC Berkeley researchers trained AI to detect facial manipulation in images edited with Adobe Photoshop. The researchers hope the tool will help restore trust in digital media at a time when deepfakes and fake faces are more common...

Yearbook Photo Predicts Happiness, Divorce and Death?

Look back at your high school or college yearbook photo, and what does it tell you? A lot more than whether you were a geek or a homecoming princess, it turns out. From your expression, psychologists...

Samsung patent ties emotional states to virtual faces through voice, shows when we're cracking up

Voice recognition usually applies to communication only in the most utilitarian sense, whether it's to translate on the spot or to keep those hands on the wheel while sending a text message. Samsung has just been granted a US patent that would convey how we're truly feeling through visuals instead of leaving it to interpretation of audio or text. An avatar could change its eyes, mouth and other facial traits to reflect the emotional state of a speaker depending on the pronunciation: sound exasperated or brimming with joy and the consonants or vowels could lead to a furrowed brow or a smile. The technique could be weighted against direct lip syncing to keep the facial cues active in mid-speech. While the technique won't be quite as expressive as direct facial mapping, if Samsung puts it to use it could be a boon for more realistic facial behavior in video games and computer-animated movies, as well as signal whether there was any emotional subtext in that speech-to-text conversion -- try not to give away any sarcasm.

Samsung patent ties emotional states to virtual faces through voice, shows when we're cracking up originally appeared on Engadget on Tue, 06 Nov 2012 11:41:00 EDT.

Source: USPTO

Baby robot Affetto gets a torso, still gives us the creeps (video)

It's taken a year to get the sinister tics and motions of Osaka University's Affetto baby head out of our nightmares -- and now it's grown a torso. Walking that still-precarious line between robot and human, the animated robot baby now has a pair of arms to call its own. The prototype upper body has a babyish looseness to it -- it accidentally hits itself in the face during the demo video -- with around 20 pneumatic actuators providing the movement. The face remains curiously static, although we'd assume the body prototype simply hasn't been paired with facial motions yet, which just about puts it on the right side of adorable. However, the demonstration does include some sinister faceless dance moves. It's right after the break -- you've been warned.

Baby robot Affetto gets a torso, still gives us the creeps (video) originally appeared on Engadget on Thu, 26 Jul 2012 06:47:00 EDT.

Via: Plastic Pals | Source: Project Affetto (YouTube)