Akimbo Kinect hack offers precise control with minimal effort (video)

We've seen Microsoft's Kinect used in countless ways, but 3Gear Systems aims to go one better with the beta release of its SDK, which turns the subtleties of hand movement into actions. In addition to using two Kinect cameras for accuracy, the software compares observed hand poses against a pre-rendered database, so gesture commands execute with little lag. It offers complete control of a virtual 3D environment from the comfort of your natural desk position, so you won't have to worry about flail fatigue after long stints. A free public beta is available now through November 30th, at which point larger companies will need a license, while individuals and small enterprises will continue to get complimentary access. We know what you're thinking -- it's just another Kinect hack -- but we suggest you reserve judgment until you've seen the demo below, which shows how the API could be used for CAD, medical and, of course, gaming applications.
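3Gear hasn't published its matching code, but the database-lookup idea described above can be sketched as a nearest-neighbor search over pre-rendered poses; the function names and toy feature vectors here are hypothetical:

```python
import numpy as np

def build_pose_database(poses):
    """Stack pre-rendered hand poses (flattened depth features) into one array."""
    return np.stack([p.ravel() for p in poses])

def match_pose(db, query):
    """Return the index of the database pose closest to the observed hand (L2 distance)."""
    dists = np.linalg.norm(db - query.ravel(), axis=1)
    return int(np.argmin(dists))

# Toy example: three 2x2 "poses"; the query is closest to pose 1
db = build_pose_database([np.zeros((2, 2)), np.ones((2, 2)), np.full((2, 2), 5.0)])
print(match_pose(db, np.array([[0.9, 1.1], [1.0, 0.8]])))  # → 1
```

A real system would compare depth-camera features against many thousands of rendered poses, typically through an accelerated index rather than a brute-force scan, which is how the low latency would be achieved.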

Akimbo Kinect hack offers precise control with minimal effort (video) originally appeared on Engadget on Thu, 04 Oct 2012 00:36:00 EDT. Please see our terms for use of feeds.

Via: GamesBeat | Source: 3Gear Systems

NTT DoCoMo Grip UI detects how you hold your device, makes big phones friendly for tiny hands (video)

Maintaining your balance on a packed train while wrangling one of today's big-screened smartphones is often a tough challenge. At least NTT DoCoMo thinks so, offering up a new interface to avoid such issues -- and throw in some extra gesture shortcuts. Grip UI is a combined hardware-software prototype that the Japanese carrier is showing at this year's CEATEC showcase in Japan. The prototype phone carries a trio of grip sensors, located along its two edges and across its back, each of which can detect up to five levels of pressure from your hand as well as how you're holding the device.

This data is then channeled into the user interface, which lets the user customize what the device does under certain conditions. We saw demonstrations of grip "shortcuts" that send you back to the homescreen, while holding certain portions of the sides would launch pre-assigned apps -- pinching at the top of the device launched the internet browser. Once inside the browser, Grip UI also lets the user switch to other programs without returning to the aforementioned homescreen, using a combination of gripping and swiping across the display. We get a handle on the prototype UI inside DoCoMo's imaginary train right after the break.
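DoCoMo hasn't detailed its software, but mapping the three pressure sensors (each reporting levels 0-5, per the description above) to shortcuts might look something like this sketch; the gesture names, thresholds and actions are all invented for illustration:

```python
# Hypothetical mapping of grip-sensor readings to shortcuts, assuming
# three sensors (left edge, right edge, back) reporting pressure 0-5.
ACTIONS = {
    ("squeeze_both_edges",): "go_home",
    ("pinch_top",): "launch_browser",
}

def classify_grip(left, right, back, pinch_top=False):
    """Map raw pressure levels (0-5) to a named grip gesture, if any."""
    if pinch_top:
        return ("pinch_top",)
    if left >= 3 and right >= 3:  # firm squeeze on both edges
        return ("squeeze_both_edges",)
    return None

def handle_grip(left, right, back, pinch_top=False):
    """Look up the action bound to the detected gesture."""
    gesture = classify_grip(left, right, back, pinch_top)
    return ACTIONS.get(gesture, "no_action") if gesture else "no_action"

print(handle_grip(4, 4, 2))                  # → go_home
print(handle_grip(1, 1, 0, pinch_top=True))  # → launch_browser
```

The customization DoCoMo demonstrated would amount to letting the user edit such a gesture-to-action table at runtime.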

NTT DoCoMo Grip UI detects how you hold your device, makes big phones friendly for tiny hands (video) originally appeared on Engadget on Mon, 01 Oct 2012 20:55:00 EDT. Please see our terms for use of feeds.

BlackBerry 10 L-series tutorial videos surface online, give a literal peek at the future (video)

Those of us who've used a BlackBerry PlayBook will be familiar with the inevitable first-boot tutorials showing how to navigate the swipe-driven interface before we're let loose. Thanks to a series of demonstration videos leaked by BlackBerryItalia, it's apparent that we won't escape that educational process on BlackBerry 10 devices, either. The four clips cover the basics of the gesture experience on full-touch L-series phones, including the signature BlackBerry Peek for checking notifications and the unified inbox. Anyone looking for a direct clue as to what production BlackBerry 10 hardware will entail might be frustrated, mind you -- the rendered phone appears to be a placeholder rather than the L-series or a Dev Alpha B, and the device name is censored in an attempt to protect the source. That said, the clips provide a very straightforward explanation of the new interface concept and give us one more indication that RIM is closer to launch.

BlackBerry 10 L-series tutorial videos surface online, give a literal peek at the future (video) originally appeared on Engadget on Sat, 29 Sep 2012 08:38:00 EDT. Please see our terms for use of feeds.

Via: CrackBerry | Source: BlackBerryItalia (translated)

Chrome experiment explores new types of navigation, degrees of embarrassment

What you're about to see, should you choose to click the source link below, is far from perfect. On the other hand, it's clearly had a lot of effort and expertise put into it -- not only by HTML5-savvy coders, but also by a troupe of performers from the Cirque du Soleil. It's called Movi.Kanti.Revo, which is a fancy way of saying Move.Sing.Dream, and it involves navigating through an ethereal and slightly laggy landscape using only swaying gestures, your singing voice (mournful sobbing sounds also worked for us) and a bunch of APIs that conveniently fail to work on Firefox, Safari or Internet Explorer. It's well-suited to those with a mic and webcam, preferably sitting in an open-plan and bully-ridden workplace, and if you don't like it there's always Bastion.

Chrome experiment explores new types of navigation, degrees of embarrassment originally appeared on Engadget on Thu, 20 Sep 2012 10:22:00 EDT. Please see our terms for use of feeds.

Via: Google | Source: Movi.Kanti.Revo

EnableTalk Gloves Translate Sign Language to Spoken Language: Sound of Silence

A few months ago we saw a concept for a camera-based device meant to recognize sign language and translate it into spoken words. A Ukrainian team has something better: a working prototype of a smart glove with the exact same capability.

The quadSquad team won the 2012 Imagine Cup – Microsoft’s technology competition for students – for their invention, which they call EnableTalk. The glove has 15 flex sensors, an accelerometer, a gyroscope and a compass, all managed by an onboard microcontroller. The glove sends input via Bluetooth to a custom app for Windows smartphones, which then interprets the data and outputs spoken language.
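quadSquad's recognizer isn't public, but the pipeline described above -- 15 flex-sensor readings matched on the phone against stored letter templates -- can be roughly sketched as follows; the templates and readings are invented, and a real system would also fold in the accelerometer, gyroscope and compass data:

```python
# Hypothetical letter templates: 15 normalized flex readings per letter
# (0.0 = finger straight, 1.0 = fully bent). Values are made up.
TEMPLATES = {
    "h": [0.9, 0.1, 0.1, 0.9, 0.9] + [0.0] * 10,
    "i": [0.9, 0.9, 0.9, 0.9, 0.1] + [0.0] * 10,
}

def recognize_letter(flex):
    """Return the letter whose stored template is closest to the 15 flex readings."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TEMPLATES, key=lambda k: dist(TEMPLATES[k], flex))

# A noisy reading that should still resolve to "h"
reading = [0.85, 0.15, 0.05, 0.95, 0.9] + [0.0] * 10
print(recognize_letter(reading))  # → h
```

Spelling "hello" as in the demo would just be this classification run once per posed letter, with the phone's text-to-speech voicing the result.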

The brief demo below shows the tester spelling “hello” letter by letter, which the app translates after only a brief delay:

Head to EnableTalk’s official website for more information on the product. I tip my hat to quadSquad; I hope the team succeeds in releasing a commercial version of their device.

[via CNET via Reddit]


Nexi robot helps Northeastern University track effects of shifty body language (video)

MIT's Nexi robot has been teaching us about social interaction for years, and has even done a stint with the US Navy. Its latest role, however, involved studying those moments when society falls apart. Northeastern University researchers made Nexi the key ingredient of an experiment where subjects were asked to play a Prisoner's Dilemma-style game immediately after a conversation, whether it was with a human or a machine. Nexi showed that humans are better judges of trustworthiness after they see the telltale body language of dishonesty -- crossed arms, leaning back and other cues -- even when those expressions come from a collection of metal and plastic. The study suggests not just that humans are tuned to watch for subtle hints of sketchy behavior, but that future humanoid robots could foster trust by using the right gestures. We'll look forward to the friendlier machine assistants that result... and keep in mind the room for deception when the robots invariably plot to take over the world.

Nexi robot helps Northeastern University track effects of shifty body language (video) originally appeared on Engadget on Wed, 12 Sep 2012 08:32:00 EDT. Please see our terms for use of feeds.

Source: Northeastern University

Woven’s wearable platform for gaming, cool points and a whole lot more (video)

TshirtOS showed us one take on wearable gadgetry earlier this month, and now it's Woven's turn. This particular e-garment packs quite the selection of hardware, as you can see above -- a trio of LilyPad Arduino boards (and some custom ones), a Bluetooth module, 12 x 12 RGB LED "screen", speakers, bend sensors, a heart rate monitor, shake motors and a power pack. You'll need to accessorize, of course, with a smartphone for hardware harmony and to run companion apps. So what's it for, you ask? Well, the creators are touting it primarily as a "pervasive" gaming platform, and even seem to have a working first title in the form of SPOOKY (think gesture-based ghost-fighting). Other uses (which appear a little more conceptual) see Woven as a workout companion, TV remote, Wii controller, social network alerter or simply a fashion accessory. Check out the videos below to see it in action and imagine all the fun you could have in the five minutes before you're ushered into that padded room.

Woven's wearable platform for gaming, cool points and a whole lot more (video) originally appeared on Engadget on Fri, 31 Aug 2012 05:36:00 EDT. Please see our terms for use of feeds.

Via: IGN | Source: Wearable Games

Google grabs glove-based input patent, could spell out gesture control

Google might have already patented some nifty eye-tracking controls, but that doesn't mean it isn't considering other sensory input. A recently granted patent hints at a potential glove-based controller, with references to a pair of detectors that record "images" of an environment, and then determine gestures based on the calculated movement between them. The illustrations go on to show a hand drawing out the letter J, indicating it could be used for text input, while another suggests recognition of pinch-to-zoom style gestures. There's no mention of its fancy glasses in the patent, but we're thinking a glove to control the Nexus 7 might be a bit overkill.

Google grabs glove-based input patent, could spell out gesture control originally appeared on Engadget on Tue, 21 Aug 2012 06:52:00 EDT. Please see our terms for use of feeds.

Source: USPTO

Qualcomm demos touch-free gesture control for tablets powered by Snapdragon (video)

Tablets are for touching -- that much is understood. But Qualcomm's making it so your fingers will be mostly optional, thanks to the Kinect-like powers of its Snapdragon CPU. To highlight this, the company's uploaded a couple of videos to its YouTube channel that showcase two practical use cases for the gesture tech: gaming and cooking. Using the device's front-facing camera, users will one day soon be able to control onscreen avatars, page forward and back through recipes, set up profiles and even wake their slates, all with simple hand or head movements. Alright, so tactile-free navigation of this sort isn't exactly new, but it does open up the tablet category to a whole new world of innovation. Head past the break to peek at the demos in action.

Qualcomm demos touch-free gesture control for tablets powered by Snapdragon (video) originally appeared on Engadget on Mon, 06 Aug 2012 16:22:00 EDT. Please see our terms for use of feeds.

Via: Notebook Italia (translated) | Source: MobiFlip (translated)

Kinect Toolbox update turns hand gestures into mouse input, physical contact into distant memory

Using Microsoft's Kinect to replace a mouse is often considered the Holy Grail of developers; there have been hacks and other tricks to get it working well before Kinect for Windows was even an option. A lead Technical Evangelist for Microsoft in France, David Catuhe, has just provided a less makeshift approach. The 1.2 update to his Kinect Toolbox side project introduces hooks to control the mouse outright, including 'magnetic' control to draw the mouse from its original position. To help keep the newly fashioned input (among other gestures) under control, Catuhe has also taken advantage of the SDK 1.5 release to check that the would-be hand-waver is sitting and staring at the Kinect before accepting any input. The open-source Windows software is available to grab for experimentation today, so if you think hands-free belongs as much on the PC desktop as in a car, you now have a ready-made way to make the dream a reality... at least, until you have to type.
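Catuhe's actual implementation lives in the CodePlex source, but the "magnetic" behavior described above -- a cursor that follows the hand yet gets pulled toward nearby targets -- can be sketched like this; the capture radius, pull strength and coordinates are invented for illustration:

```python
import math

def magnetic_cursor(hand, targets, radius=50.0, strength=0.6):
    """Blend the raw hand-driven cursor position toward the nearest
    target (e.g. a button center) when it falls inside the capture radius."""
    hx, hy = hand
    nearest = min(targets, key=lambda t: math.hypot(t[0] - hx, t[1] - hy))
    d = math.hypot(nearest[0] - hx, nearest[1] - hy)
    if d > radius:
        return hand  # outside the magnetic field: cursor follows the hand exactly
    pull = strength * (1.0 - d / radius)  # stronger pull as the hand gets closer
    return (hx + pull * (nearest[0] - hx), hy + pull * (nearest[1] - hy))

print(magnetic_cursor((100, 100), [(110, 100)]))  # nudged toward (110, 100)
print(magnetic_cursor((0, 0), [(300, 300)]))      # → (0, 0), no target in range
```

The seated-and-facing check Catuhe added via SDK 1.5 would simply gate whether this function is called at all, so stray passers-by don't yank the pointer around.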

Kinect Toolbox update turns hand gestures into mouse input, physical contact into distant memory originally appeared on Engadget on Wed, 01 Aug 2012 03:09:00 EDT. Please see our terms for use of feeds.

Via: Eternal Coding | Source: CodePlex