Tag Archives: SignLanguage
Uber offers basic sign language tips so you can talk to deaf drivers
The Morning After: Tuesday, March 14 2017
Giphy made 2,000 GIFs to help you learn sign language
ICYMI: Animals can communicate better than we’d realized
Microsoft Research turns Kinect into canny sign language reader (video)
Though early Kinect patents showed its potential for sign language translation, Microsoft quashed any notion early on that this would become a proper feature. However, that hasn't stopped Redmond from continuing development of the idea. Microsoft Research Asia recently showed off software that allows the Kinect to read almost every American Sign Language gesture via hand tracking, even at conversational speeds. In addition to converting signs to text or speech, the software can also let a hearing person input text and "sign" it using an on-screen avatar. All of this is still confined to a lab so far, but the researchers hope that one day it'll open up new lines of communication between the hearing and deaf -- a patent development we could actually get behind. See its alacrity in the video after the break.
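Microsoft hasn't published its pipeline, but the core idea of matching a tracked handshape to the nearest known sign can be sketched in a few lines. Everything below is illustrative: the letter templates and five-value "fingertip extension" features are invented for the example, not taken from Microsoft Research's system.

```python
import math

# Toy templates: each ASL letter mapped to a simplified feature vector
# (normalized extension of thumb through pinky). Values are made up.
TEMPLATES = {
    "A": (0.9, 0.1, 0.1, 0.1, 0.1),   # fist with the thumb alongside
    "B": (0.1, 0.9, 0.9, 0.9, 0.9),   # flat hand, thumb tucked
    "L": (0.9, 0.9, 0.1, 0.1, 0.1),   # thumb and index extended
}

def classify(pose):
    """Return the letter whose template is nearest (Euclidean) to the pose."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda letter: dist(TEMPLATES[letter], pose))

# A noisy reading of an "L" handshape still snaps to the closest template.
print(classify((0.88, 0.85, 0.15, 0.05, 0.12)))  # -> L
```

A real recognizer would also have to segment gestures in time and handle signs that involve motion, not just static handshapes, which is where the "conversational speeds" claim gets hard.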
Filed under: Cameras, Science, Alt, Microsoft
Via: Gizmodo
Source: Microsoft Research
Google Hangouts receive sign language interpreter support, keyboard shortcuts
Video chat can be an empowering tool for hard-of-hearing internet citizens for whom sign language is easier than voice. Most chat software doesn't easily bring an interpreter into the equation, however, which spurred Google into adding a Sign Language Interpreter app for Google+ Hangouts. The web component lets chatters invite an interpreter who stays in the background and voices their signing. Google is also helping reduce dependence on the mouse for those who can't (or just won't) use one during chat: there are now keyboard shortcuts to start or stop chats, disable the camera and handle other basics that would normally demand a click. Both the interpreter app and the shortcuts are available today.
Via: The Verge
Source: Anna Cavender (Google+)
Sigma R&D shows Kinect sign language and Jedi savvy to win gesture challenge (video)
Sigma R&D has won first prize in a gesture challenge by showing just how much more talent -- like sign language translation and lightsaber fun -- can be unlocked in a Kinect. Normally the Microsoft device can only scope body and full mitt movements, but the research company was able to track individual fingers with a Kinect or similar sensor, plus its custom software, allowing a user's hand to become a more finely tuned controller. To prove it, the company handed a virtual lightsaber to a subject, tracking his swordsmanship perfectly and using his thumb extension to turn it on and off. The system even detected when a passing gesture was made, seamlessly making a virtual transfer of the weapon. The same tech was also used to read sign language, displaying the intended letters on the screen for a quick translation. The SDK is due in the fall, when we can't wait to finally get our hands on a Jedi weapon that isn't dangerous or plasticky. To see it for yourself, check the videos after the break.
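The thumb-extension toggle described above is essentially edge detection on a per-finger tracking signal. Sigma R&D's SDK isn't public, so the sketch below invents its own data: a stream of thumb-extension readings between 0 and 1, with the saber flipping state each time the thumb crosses a threshold on the way up.

```python
def saber_states(thumb_extensions, threshold=0.7):
    """Toggle the saber on each rising edge of the thumb-extension signal.

    thumb_extensions: sequence of readings in [0, 1] (made-up units).
    Returns the on/off state after each reading.
    """
    on = False
    prev_extended = False
    states = []
    for ext in thumb_extensions:
        extended = ext >= threshold
        if extended and not prev_extended:  # rising edge = one toggle
            on = not on
        prev_extended = extended
        states.append(on)
    return states

# Thumb flicks up (turns saber on), relaxes, flicks up again (off).
print(saber_states([0.2, 0.8, 0.9, 0.3, 0.85]))  # -> [False, True, True, True, False]
```

Debouncing on the rising edge (rather than reacting to the raw level) is what keeps a held-out thumb from rapidly toggling the saber every frame.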
Filed under: Peripherals, Software
Sigma R&D shows Kinect sign language and Jedi savvy to win gesture challenge (video) originally appeared on Engadget on Wed, 25 Jul 2012 10:57:00 EDT. Please see our terms for use of feeds.
Sign language translator turns gestures into spoken letters, makes for a better world (video)
One of the greatest challenges of sign language has been translating it for everyday folk who wouldn't know where to begin a conversation with the deaf. Cornell University engineering students Ranjay Krishna, Seonwoo Lee and Si Ping Wang -- along with some help from Jonathan Lang -- used their final project time this past semester to close this gap in one of the more practical solutions we've seen to date. Their prototype glove uses accelerometers, contact sensors and flex sensors to translate complex finger gestures from the American Sign Language alphabet into spoken letters: after converting hand positions to digital signals, the test unit both speaks the resulting letters aloud and sends them to a computer, where they can be used for anything from a game (shown in the video below) to, presumably, constructing whole sentences. Along with being accurate, the Cornell work is designed with an eye toward the real world: the glove and its transmitter are both wireless and powered by 9-volt batteries. We hope the project leads to a real product and an extra bridge between the deaf and the rest of us, but in the meantime, we'll be happy that at least one form of powered glove is being put to the noblest use possible.
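The glove's core step -- turning a set of sensor readings into a letter -- can be sketched as simple threshold rules. The Cornell students' actual sensor layout and thresholds aren't published, so the mapping below is invented purely for illustration, using five flex-sensor readings where 0 means a straight finger and 1 means fully bent.

```python
def flex_to_letter(flex):
    """Map five flex readings (thumb..pinky, 0=straight, 1=bent) to a letter.

    The handshape rules here are hypothetical stand-ins for the glove's
    real calibration, which isn't public.
    """
    thumb, index, middle, ring, pinky = flex
    fingers = (index, middle, ring, pinky)
    if all(f > 0.8 for f in fingers) and thumb < 0.3:
        return "A"   # fist with the thumb held straight alongside
    if all(f < 0.2 for f in fingers) and thumb > 0.8:
        return "B"   # fingers straight, thumb bent across the palm
    return "?"       # handshape not recognized

print(flex_to_letter((0.1, 0.9, 0.95, 0.9, 0.85)))  # -> A
```

In the real glove, the accelerometers and contact sensors would disambiguate letters that share similar flex profiles (and letters like J and Z, which involve motion), which is why flex sensors alone aren't enough.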
Sign language translator turns gestures into spoken letters, makes for a better world (video) originally appeared on Engadget on Tue, 15 May 2012 07:45:00 EDT. Please see our terms for use of feeds.