Djay Pro AI for iPad now has touchless gesture controls

When it comes to motion tracking and music, you can follow the breadcrumbs from Max Mathews to Imogen Heap to Beat Saber, through a growing crowd of researchers and Kickstarter projects, and now potentially to you. If you're a DJ using djay Pro AI on an iPad Pro running iOS...

Google’s Self-Driving Cars Can Understand Hand Gestures

Following road safety concerns over Google’s self-driving cars, the search giant has now announced that the vehicles can understand hand gestures.

As a pedestrian, there's one known danger you try to avoid any time you step outside: cars. To a bipedal creature made of flesh and bone, a fast-moving vehicle has an unfortunate propensity for smushing our mortal bodies into paste. That reality of crossing the road is exactly why traffic lights exist. Accidents can still happen, though, and much of the time we road-crossers have to count on drivers noticing us and being courteous enough to let us pass, or go out of our way to request safe passage with a nod of the head or an arm extended, palm out, towards the driver. With Google's self-driving cars, concerns have been rising that these established road-crossing conventions won't be observed once the cars on our streets become robotic, but thankfully Google have announced that that won't quite be the case.

In a new update video, Google have revealed that their self-driving cars use cameras mounted on the car's body to identify what's going on in the streets they're navigating. According to the search giant, the front camera is now able to recognise a myriad of obstacles, including road signs, construction work and, of course, busy crossings. Cyclists even get their own category, as explained in the video, and because of this Google's cars can understand the hand signals cyclists use in real life – such as an extended arm to show they want to change lane – and not just in theory.
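
Google haven't published how any of this works under the hood, but the behaviour the video describes amounts to a classify-then-interpret pipeline: put each detection into a category, then read category-specific cues like an extended arm. As a rough, purely illustrative sketch (every name and rule below is our assumption, not Google's code):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Category(Enum):
    PEDESTRIAN = auto()
    CYCLIST = auto()
    ROAD_SIGN = auto()
    CONSTRUCTION = auto()
    UNKNOWN = auto()  # the "can't categorise" case discussed below


@dataclass
class Detection:
    category: Category
    arm_extended_left: bool = False   # pose cue picked out of the camera feed
    arm_extended_right: bool = False


def cyclist_intent(d: Detection) -> Optional[str]:
    """Map a cyclist's conventional arm signal to an expected manoeuvre."""
    if d.category is not Category.CYCLIST:
        return None
    if d.arm_extended_left:
        return "cyclist merging left"
    if d.arm_extended_right:
        return "cyclist merging right"
    return None


rider = Detection(Category.CYCLIST, arm_extended_left=True)
print(cyclist_intent(rider))  # -> cyclist merging left
```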

The video makes Google's self-driving cars seem promising, but it only shows static vehicles and static construction-work objects. It's unclear what would happen if, for instance, someone stood right in the middle of the road: would they be classed as a pedestrian, or would the car simply give up, unable to categorise them? That's something Google's team will have to figure out as they test the various scenarios the car might get into. This seems to be a small update for now, so we'll keep you posted once we know more.

Source: Google

CES 2014 in Las Vegas Highlights: Gesture Recognition Technology from eyeSight Mobile Technologies Ltd


The much-awaited Consumer Electronics Show (CES), also known as International CES, is just a few days away. We have been eagerly waiting for this event since last year and can't wait any longer...

Intel announces Creative Senz3D Peripheral Camera at Computex 2013

Intel's just announced the Creative Senz3D Peripheral Camera at the company's Computex keynote in Taipei. The camera lets users manipulate objects on the screen using gestures and is able to completely eliminate the background. It appears to be an evolution of the Creative Interactive Gesture Camera we recently played with at IDF in Beijing. This new 3D depth camera is expected to become available next quarter and Intel plans to incorporate the technology into devices during the second half of 2014. "It's like adding two eyes to my system," said Tom Kilroy, VP of marketing. The company's been talking about "perceptual computing" for some time and this certainly brings the idea one step closer to fruition.
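
Intel hasn't detailed the algorithm, but background elimination with a depth camera usually comes down to thresholding: keep pixels nearer than some cutoff distance, discard everything else. A minimal sketch of that idea, with a fabricated depth map standing in for the sensor and a ~1 m cutoff as an assumption:

```python
import numpy as np


def remove_background(color, depth_mm, cutoff_mm=1000):
    """Keep only pixels closer than the cutoff; zero out everything else."""
    mask = depth_mm < cutoff_mm            # True where the user sits
    return color * mask[..., np.newaxis]   # broadcast the mask over RGB


# Fake 4x4 frame: a user at ~600 mm in front of a wall at ~2000 mm.
depth = np.full((4, 4), 2000)
depth[1:3, 1:3] = 600
color = np.full((4, 4, 3), 255, dtype=np.uint8)
print(remove_background(color, depth)[..., 0])  # only the centre survives
```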

SoftKinetic’s motion sensor tracks your hands and fingers, fits in them too (video)

Coming out of its shell as a possible Kinect foe, SoftKinetic has launched a new range sensor at Computex right on the heels of its last model. Upping the accuracy while shrinking the size, the DepthSense 325 now sees your fingers and hand gestures in crisp HD from as close as 10cm (4 inches), an improvement on the 15cm (6 inches) of its DS311 predecessor. Two microphones are also tucked in, making the device suitable for video conferencing, gaming and whatever else OEMs and developers might have in mind. We haven't tried it yet, but judging from the video, it seems to track finger and hand movements quite competently. Hit the break to see for yourself.
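
That 10cm minimum range matters because close-range finger tracking typically starts by isolating the nearest blob in the depth map. The sketch below is our own illustration, not SoftKinetic's algorithm: it clips a frame to an assumed working range and picks the closest points as fingertip candidates.

```python
import numpy as np


def fingertip_candidates(depth_mm, min_range=100, max_range=1000, band_mm=15):
    """Return (row, col) pixels within band_mm of the nearest valid depth."""
    valid = (depth_mm >= min_range) & (depth_mm <= max_range)
    if not valid.any():
        return np.empty((0, 2), dtype=int)
    nearest = depth_mm[valid].min()
    return np.argwhere(valid & (depth_mm <= nearest + band_mm))


depth = np.full((6, 6), 1500)   # background beyond the working range
depth[2, 3] = 120               # a fingertip 12 cm from the lens
print(fingertip_candidates(depth))  # -> [[2 3]]
```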

Full PR text:

SoftKinetic Announces World's Smallest HD Gesture Recognition Camera and Releases Far and Close Interaction Middleware

Professional Kit Available For Developers To Start Building a New Generation of Gesture-Based Experiences

TAIPEI & BRUSSELS - June 5, 2012 - SoftKinetic, the pioneering provider of 3D vision and gesture recognition technology, today announced a device that will revolutionize the way people interact with their PCs. The DepthSense 325 (DS325), a pocket-sized camera that sees both in 3D (depth) and high-definition 2D (color), delivered as a professional kit, will enable developers to incorporate high-quality finger and hand tracking for PC video games, introduce new video conferencing experiences and many other immersive PC applications. The DS325 can operate from as close as 10cm and includes a high-resolution depth sensor with a wide field of view, combined with HD video and dual microphones.

In addition, the company announced the general availability of iisu[TM] 3.5, its acclaimed gesture-recognition middleware, compatible with most 3D sensors available on the market. Beyond its robust full-body tracking features, iisu 3.5 now offers the capacity to accurately track users' individual fingers at 60 frames per second, opening up a new world of close-range applications.
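
From an application's point of view, 60 fps tracking is a fixed-budget polling loop: read the latest finger positions, react, and stay inside a ~16.7 ms frame. The sketch below is hypothetical; the class and method names are invented stand-ins, not the real iisu API, and only the 60 fps figure comes from the announcement.

```python
import time

FRAME_PERIOD = 1.0 / 60  # the 60 fps figure from the press release


class FakeFingerTracker:
    """Stand-in for the middleware so the sketch runs without hardware."""

    def read_fingers(self):
        return [(0.10, 0.05, 0.30)]  # one fingertip, metres in camera space


def run(tracker, frames=3):
    for _ in range(frames):
        start = time.monotonic()
        for x, y, z in tracker.read_fingers():
            print(f"fingertip at ({x:.2f}, {y:.2f}, {z:.2f}) m")
        # sleep off the rest of the ~16.7 ms frame budget
        time.sleep(max(0.0, FRAME_PERIOD - (time.monotonic() - start)))


run(FakeFingerTracker())
```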

"SoftKinetic is proud to release these revolutionary products to developers and OEMs," said Michel Tombroff, CEO of SoftKinetic. "The availability of iisu 3.5 and the DS325 clearly marks a milestone for the 3D vision and gesture recognition markets. These technologies will enable new generations of video games, edutainment applications, video conference, virtual shopping, media browsing, social media connectivity and more."

SoftKinetic will demonstrate its new PC and SmartTV experiences at its booth at Computex, June 5-9, 2012, in the NanGang Expo Hall, Upper Level, booth N1214. For business appointments, send a meeting request to events@softkinetic.com.

The DS325 Professional Kit is available for pre-order now at SoftKinetic's online store (http://www.softkinetic.com/Store.aspx) and is expected to begin shipping in the coming weeks.

iisu 3.5 Software Development Kit is available free for non-commercial use at SoftKinetic's online store (http://www.softkinetic.com/Store.aspx) and at iisu.com.

About SoftKinetic S.A.
SoftKinetic's vision is to transform the way people interact with the digital world. SoftKinetic is the leading provider of gesture-based platforms for the consumer electronics and professional markets. The company offers a complete family of 3D imaging and gesture recognition solutions, including patented 3D CMOS time-of-flight sensors and cameras (DepthSense[TM] family of products, formerly known as Optrima product family), multi-platform and multi-camera 3D gesture recognition middleware and tools (iisu[TM] family of products) as well as games and applications from SoftKinetic Studios.

With over 8 years of R&D on both hardware and software, SoftKinetic solutions have already been successfully used in the field of interactive digital entertainment, consumer electronics, health care and other professional markets (such as digital signage and medical systems). SoftKinetic, iisu, DepthSense and The Interface Is You are trade names or registered trademarks of SoftKinetic. For more information on SoftKinetic please visit www.softkinetic.com. For videos of SoftKinetic-related products visit SoftKinetic's YouTube channel: www.youtube.com/SoftKinetic.

Hillcrest Labs takes its TV motion control system to China, becomes TCL’s new best friend

It's only been a few days since Hillcrest Labs open-sourced its Kylo web browser for TVs, and now the company's back with yet another announcement. Well, this time it's more about TCL, which has just declared itself the top TV brand by market share in China. Much like the Roku 2 and LG TVs with Magic Motion remote, Hillcrest's Freespace engine has been outed as the enabling technology behind TCL's recently announced V7500, a 3D smart TV series featuring a heavily customized Android 4.0.3 and a 7.9mm-thick bezel. This means users can interact with and play games on this slim TV via motion and cursor control on the remote (there's also voice control here, but it doesn't look like Hillcrest has anything to do with it). There are no dates or prices just yet, but TCL had better be quick, as Lenovo's got something very similar ready to ship soon.
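
Hillcrest's actual Freespace processing (tremor rejection, orientation compensation) is proprietary, but the core air-mouse idea behind motion-and-cursor remotes is simple: integrate the remote's gyro rates into on-screen cursor motion. A minimal sketch with assumed gains and screen size:

```python
def step_cursor(x, y, yaw_rate, pitch_rate, dt, gain=800.0,
                width=1920, height=1080):
    """Advance the on-screen cursor by one gyro sample.

    yaw_rate / pitch_rate are rad/s from the remote; gain is pixels per
    radian of wrist rotation (a tuning assumption).
    """
    x = min(max(x + yaw_rate * dt * gain, 0), width - 1)
    y = min(max(y - pitch_rate * dt * gain, 0), height - 1)
    return x, y


# Simulate a slow rightward wrist turn sampled at 100 Hz for half a second.
x, y = 960.0, 540.0
for _ in range(50):
    x, y = step_cursor(x, y, yaw_rate=0.5, pitch_rate=0.0, dt=0.01)
print(round(x), round(y))  # the cursor has drifted right of centre
```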

Huawei throws R&D dollars at gesture control, cloud storage, being more ‘disruptive’

Undeterred by the fact that even humans struggle to interpret certain gestures, Huawei says it's allocating a chunk of its growing R&D budget to new motion-sensing technology for smartphones and tablets. The company's North American research chief, John Roese, told Computerworld that he wants to allow "three-dimensional interaction" with devices, using stereo front-facing cameras and a powerful GPU to make sense of the dual video feed. Separately, the Chinese telecoms company is also putting development cash into a cloud computing project that promises to "change the economics of storage by an order of magnitude." Roese provided scant details on this particular ambition, but did mention that Huawei has teamed up with CERN to conduct research and has somehow accumulated over 15 petabytes of experimental physics data in the process. Whatever it's up to, Huawei had better get a move on -- others are snapping up gesture recognition and cloud patents faster than you can say fa te ne una bicicletta (roughly, "go make yourself a bicycle out of it") with your hands.
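
The "dual video feed" approach Roese describes is classic stereo vision: a point seen by both front cameras shifts between the two images (the disparity) in inverse proportion to its distance, so depth falls out as Z = f * B / d. A minimal sketch with assumed camera parameters, not Huawei figures:

```python
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Depth in metres from stereo disparity: Z = f * B / d.

    focal_px (focal length in pixels) and baseline_m (spacing between the
    two front cameras) are assumptions for a phone-sized rig.
    """
    if disparity_px <= 0:
        raise ValueError("point not matched in both views")
    return focal_px * baseline_m / disparity_px


# A hand shifted 40 px between the two views sits about a metre away.
print(f"{depth_from_disparity(40):.2f} m")  # -> 1.05 m
```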
