Eyes-on: MIT Media Lab’s Smarter Objects can map a user interface onto… anything (video)


While patrolling the halls of the CHI 2013 Human Factors in Computing conference in Paris, we spied a research project from MIT's Media Lab called "Smarter Objects" that turns Minority Report tech on its head. The researchers figured out a way to map software functionality onto tangible objects like a radio, light switch or door lock through an iPad interface and a simple processor / WiFi transceiver in the object. Researcher Valentin Huen explains that "graphical user interfaces are perfect for modifying systems," but operating them on a day-to-day basis is much easier using tangible objects.

To that end, the team developed an iPad app that uses motion tracking technology to "map" a user interface onto different parts of an object. The example we saw was a simple radio with a pair of dials and a speaker; when the iPad's camera was pointed at it, a circular interface along with a menu system popped up that cannily tracked the radio. From there, Huen mapped various songs onto different positions of the knob, allowing him to control his playlist by turning it -- a simple, manual interface for selecting music. He was even able to activate a second speaker by drawing a line to it, then "cutting" the line to shut it off. We're not sure when, or if, this kind of tech will ever make it into your house, but the demo we saw (see the pair of videos after the break) seemed impressively ready to go.
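The knob mapping Huen demonstrated boils down to dividing the dial's rotation into sectors and binding a song to each. A minimal sketch of that idea (our own illustration, not the Lab's code; the playlist and sector scheme are made up):

```python
def map_angle_to_song(angle, playlist):
    """Return the playlist entry for a knob angle in degrees."""
    if not playlist:
        raise ValueError("playlist is empty")
    sector = 360.0 / len(playlist)       # degrees of rotation per song
    index = int(angle % 360 // sector)   # which sector the knob points at
    return playlist[index]

playlist = ["Song A", "Song B", "Song C", "Song D"]
print(map_angle_to_song(45, playlist))    # -> Song A
print(map_angle_to_song(200, playlist))   # -> Song C
```

The iPad app would feed `angle` from its motion-tracked estimate of the physical knob; everything downstream is ordinary software, which is precisely the appeal of mapping interfaces onto objects.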



Formlabs FORM 1 high-resolution 3D printer spotted in the wild, we go eyes on (video)


Last time we checked in with the 3D printing upstarts over at Formlabs, their Kickstarter was doing splendidly, having more than doubled its initial funding target. Well, less than a month later, and with the money still rolling in, the current total stands (at time of writing) at a somewhat impressive $2,182,031 -- over 20 times its initial goal. When we heard that the team behind it, along with some all-important working printers, rolled into town, how could we resist taking the opportunity to catch up? The venue? London's 3D print show, where, amongst all the printed bracelets and figurines, the FORM 1 stood out like a sore thumb -- a wonderfully orange, and geometrically formed, one at that. We elbowed our way through the permanent four-deep crowd at the booth to take a closer look, and as the show is running for another two days, you can too if you're in town. Or you could just click past the break for more.



Formlabs FORM 1 high-resolution 3D printer spotted in the wild, we go eyes on (video) originally appeared on Engadget on Fri, 19 Oct 2012 15:00:00 EDT. Please see our terms for use of feeds.


FORM 1 delivers high-end 3D printing for an affordable price, meets Kickstarter goal in 1 day


A $2,300 3D printer isn't really anything special anymore. We've seen them as cheap as $350, in fact. But all those affordable units are of the extrusion variety -- meaning they lay out molten plastic in layers. The FORM 1 opts for a method called stereolithography, which blasts liquid plastic with a laser, causing the resin to cure. This is one of the most accurate methods of additive manufacturing, but also one of the most expensive thanks to the need for high-end optics, with units typically costing tens of thousands of dollars. A group of recent grads from the MIT Media Lab have managed to replicate the process for a fraction of the cost and founded a company called Formlabs to deliver their innovations to the public. Like many other startups, the group turned to Kickstarter to get off the ground and easily passed its $100,000 goal within the first day. As of this writing, over $250,000 had been pledged and the first 25 printers have already been claimed.

The FORM 1 is capable of creating objects with layers as thin as 25 microns -- that's 75 percent thinner than even the new Replicator 2. The company didn't scrimp on design and polish to meet its affordability goals, either. The base is stylish brushed metal, with the small build platform protected by an orange plastic shell. There's even a companion software tool for simple model creation. You can still get one at the Kickstarter page, though the price of entry is now $2,500. Or you can simply get a sneak peek in the gallery and video below.
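As a back-of-the-envelope illustration of what 25-micron layers mean in practice (our arithmetic; the part height is made up, only the layer heights come from the article):

```python
def layer_count(height_mm, layer_microns):
    """Number of cure passes needed to print a part of the given height."""
    return int(round(height_mm * 1000 / layer_microns))

# A hypothetical 50 mm tall part:
print(layer_count(50, 25))    # FORM 1 at 25-micron layers -> 2000 passes
print(layer_count(50, 100))   # 100-micron layers -> 500 passes
print(1 - 25 / 100)           # 25 microns is 75 percent thinner -> 0.75
```

Finer layers mean smoother surfaces, at the cost of proportionally more passes per print.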



FORM 1 delivers high-end 3D printing for an affordable price, meets Kickstarter goal in 1 day originally appeared on Engadget on Wed, 26 Sep 2012 18:46:00 EDT.


MIT Media Lab’s Tensor Displays stack LCDs for low-cost glasses-free 3D (hands-on video)


Glasses-free 3D may be the next logical step in TV's evolution, but we have yet to see a convincing device make it to market that doesn't come along with a five-figure price tag. The sets that do come within range of tickling our home theater budgets won't blow you away, and it's not unreasonable to expect that trend to continue through the next few product cycles. A dramatic adjustment in our approach to glasses-free 3D may be just what the industry needs, so you'll want to pay close attention to the MIT Media Lab's latest brew. Tensor Displays combine layered low-cost panels with some clever software that assigns and alternates the image at a rapid pace, creating depth that actually looks fairly realistic. Gordon Wetzstein, one of the project's creators, explained that the solution essentially "[takes] the complexity away from the optics and [puts] it in the computation," and since software solutions are far more easily scaled than their hardware equivalents, the Tensor Display concept could result in less expensive, yet superior, 3D products.

We caught up with the project at SIGGRAPH, where the first demonstration included four fixed images, employing a similar concept to the LCD version but with backlit inkjet prints instead of motion-capable panels. Each displaying a slightly different static image, the transparencies were stacked to give the appearance of depth without the typical cost. The version that shows the most potential, however, consists of three stacked LCD panels, each displaying a slightly different pattern that flashes back and forth four times per frame of video, creating a three-dimensional effect that appears smooth and natural. The result was certainly more tolerable than the glasses-free 3D we're used to seeing, though it's surely a long way from being a viable replacement for active-glasses sets -- Wetzstein said that the solution could make its way to consumers within the next five years. Currently, the technology works best in a dark room, where it's able to present a consistent image. Unfortunately, this meant the light levels around the booth were a bit dimmer than what our camera required, resulting in the underexposed, yet very informative hands-on video you'll see after the break.
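The "complexity in the computation" Wetzstein describes comes from treating the stacked panels as factors of the target imagery: light passing through the layers multiplies their transmittances, and the eye averages the rapidly alternating frames. A toy sketch of that time-multiplexed idea (a deliberate simplification on our part -- the actual system optimizes patterns across many viewing angles at once, which we omit):

```python
def perceived_image(frames):
    """Average of per-frame outer products: the eye-integrated result of
    two stacked transmissive panels showing patterns a and b each frame."""
    rows, cols = len(frames[0][0]), len(frames[0][1])
    out = [[0.0] * cols for _ in range(rows)]
    for a, b in frames:                     # one (a, b) panel pair per frame
        for i in range(rows):
            for j in range(cols):
                out[i][j] += a[i] * b[j] / len(frames)
    return out

# Two frames whose average yields a pattern no single frame could show
# (each frame alone is rank one; their average is rank two):
frames = [([1.0, 0.0], [1.0, 0.0]),
          ([0.0, 1.0], [0.0, 1.0])]
print(perceived_image(frames))   # [[0.5, 0.0], [0.0, 0.5]]
```

Because the heavy lifting happens in choosing those per-frame patterns, the optics can stay cheap -- exactly the trade the project is after.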



MIT Media Lab's Tensor Displays stack LCDs for low-cost glasses-free 3D (hands-on video) originally appeared on Engadget on Thu, 09 Aug 2012 14:16:00 EDT.


MIT projection system extends video to peripheral vision, samples footage in real-time


Researchers at the MIT Media Lab have developed an ambient lighting system for video that would make Philips' Ambilight tech jealous. Dubbed Infinity-by-Nine, the rig analyzes frames of footage in real-time -- with consumer-grade hardware no less -- and projects rough representations of the video's edges onto a room's walls or ceiling. Synchronized with camera motion, the effect aims to extend the picture into a viewer's peripheral vision. MIT guinea pigs have reported a greater feeling of involvement with video content when Infinity-by-Nine was in action, and some even claimed to feel the heat from on-screen explosions. A five-screen multimedia powerhouse it isn't, but the team suggests that the technology could be used for gaming, security systems, user interface design and other applications. Head past the jump to catch the setup in action.
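The real-time edge sampling could work along these lines (our reconstruction of the idea, not the Lab's pipeline; in practice the projection is also warped to follow camera motion):

```python
def edge_colors(frame, band=2):
    """Mean RGB color of the left and right pixel bands of a frame,
    suitable for driving projectors aimed at the adjacent walls."""
    def mean(pixels):
        n = len(pixels)
        return tuple(round(sum(p[c] for p in pixels) / n) for c in range(3))
    left = [row[i] for row in frame for i in range(band)]
    right = [row[-1 - i] for row in frame for i in range(band)]
    return mean(left), mean(right)

# A tiny 2x4 "frame": red along the left edge, blue along the right.
frame = [[(255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 255)],
         [(255, 0, 0), (255, 0, 0), (0, 0, 255), (0, 0, 255)]]
print(edge_colors(frame))   # ((255, 0, 0), (0, 0, 255))
```

Run once per frame, sampling like this is cheap enough for the consumer-grade hardware the researchers mention.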


MIT projection system extends video to peripheral vision, samples footage in real-time originally appeared on Engadget on Mon, 25 Jun 2012 04:55:00 EDT.


MIT researchers teach computers to recognize your smile, frustration


Wipe that insincere, two-faced grin off your face -- your computer knows you're full of it. Or at least it will once it gets a load of MIT's research on classifying frustration, delight and facial expressions. By teaching a computer how to differentiate between involuntary smiles of frustration and genuine grins of joy, researchers hope to be able to deconstruct the expression into low-level features. What's the use of a disassembled smile? In addition to helping computers suss out your mood, the team hopes the data can be used to help people with autism learn to more accurately decipher expressions. Find out how MIT is making your computer a better people person than you after the break.
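One low-level feature such a classifier can lean on is timing: in this line of research, genuine smiles tended to build gradually while frustrated smiles appeared abruptly. A toy classifier on that single feature (our illustration only -- the threshold, frame rate and feature choice are made up, and the real system draws on many features):

```python
def classify_smile(intensity_series, fps=30, threshold_s=1.0):
    """Label a smile by how quickly its intensity reaches its peak."""
    peak = max(intensity_series)
    onset_seconds = intensity_series.index(peak) / fps
    return "delight" if onset_seconds > threshold_s else "frustration"

slow_smile = [i / 60 for i in range(61)]        # builds over 2 seconds
fast_smile = [0.0, 0.5, 1.0] + [1.0] * 58       # peaks almost instantly
print(classify_smile(slow_smile))   # -> delight
print(classify_smile(fast_smile))   # -> frustration
```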

[Thanks, Kaustubh]


MIT researchers teach computers to recognize your smile, frustration originally appeared on Engadget on Mon, 28 May 2012 11:06:00 EDT.


ZeroN slips surly bonds, re-runs your 3D gestures in mid-air


Playback of 3D motion capture with a computer is nothing new, but how about with a solid levitating object? MIT's Media Lab has developed ZeroN, a large magnet and 3D actuator, which can fly an "interaction element" (aka ball bearing) and control its position in space. You can also bump it to and fro yourself, with everything scanned and recorded, and then have real-life, gravity-defying playback showing planetary motion or virtual cameras, for example. It might be impractical right now as a Minority Report-type object-based input device, but check the video after the break to see its awesome potential for 3D visualization.
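Holding a steel ball steady under an electromagnet is a classic feedback-control problem: measure the ball's position, compare it to the target, and adjust coil current accordingly. A minimal proportional-derivative step, as a generic illustration (not ZeroN's actual controller; the gains are arbitrary):

```python
def pd_control(target, measured, prev_error, dt, kp=2.0, kd=0.02):
    """One PD step: returns (coil current correction, current error)."""
    error = target - measured                 # how far off the ball is
    derivative = (error - prev_error) / dt    # how fast the error changes
    return kp * error + kd * derivative, error

# Ball 5 mm below target and closing in (error shrank from 6 mm to 5 mm
# over a 10 ms control step):
correction, error = pd_control(target=100.0, measured=95.0,
                               prev_error=6.0, dt=0.01)
print(round(correction, 2))   # 8.0 -- push up, damped by the approach speed
```

Magnetic levitation is open-loop unstable, so a loop like this has to run fast -- which is why the rig tracks the ball's position continuously.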


ZeroN slips surly bonds, re-runs your 3D gestures in mid-air originally appeared on Engadget on Mon, 14 May 2012 16:07:00 EDT.


EyeRing finger-mounted connected cam captures signs and dollar bills, identifies them with OCR (hands-on)


Ready to swap that diamond for a finger-mounted camera with a built-in trigger and Bluetooth connectivity? If it could help identify otherwise indistinguishable objects, you might just consider it. The MIT Media Lab's EyeRing project was designed with an assistive focus in mind, helping visually disabled persons read signs or identify currency, for example, while also serving to assist children during the tedious process of learning to read. Instead of hunting for a grownup to translate text into speech, a young student could direct EyeRing at words on a page, hit the shutter release, and receive a verbal response from a Bluetooth-connected device, such as a smartphone or tablet. EyeRing could be useful for other individuals as well, serving as an ever-ready imaging device that enables you to capture pictures or documents with ease, transmitting them automatically to a smartphone, then on to a media sharing site or a server.

We peeked at EyeRing during our visit to the MIT Media Lab this week, and while the device is buggy at best in its current state, we can definitely see how it could fit into the lives of people unable to read posted signs, text on a page or the monetary value of a currency note. We had an opportunity to see several iterations of the device, which has come quite a long way in recent months, as you'll notice in the gallery below. The demo, which like many at the Lab includes a Samsung Epic 4G, transmits images from the ring to the smartphone, where text is highlighted and read aloud using a custom app. When we snapped the word "ring," it took a dozen or so attempts before the rig read it aloud correctly, but considering that we've seen much more accurate OCR implementations, it's reasonable to expect a more advanced version of the software to make its way out once the hardware is a bit more polished -- at this stage, EyeRing is more about the device itself, which had some issues of its own maintaining a link to the phone. You can get a feel for how the whole package works in the video after the break, which required quite a few takes before we were able to capture an accurate reading.
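The flow the demo showed -- press the shutter, OCR on the phone, speak the result -- can be sketched as a pipeline with a confidence-based retry, which would also account for the repeated attempts we needed (all stage names here are hypothetical stubs; the Lab's app is not public):

```python
def read_aloud(capture, recognize, speak, min_confidence=0.8, attempts=12):
    """Retry capture + OCR until the result is confident, then speak it."""
    for _ in range(attempts):
        image = capture()                      # frame from the ring camera
        text, confidence = recognize(image)    # OCR with a confidence score
        if confidence >= min_confidence:
            speak(text)                        # hand off to text-to-speech
            return text
    return None                                # give up after too many tries

spoken = []
result = read_aloud(capture=lambda: "frame",
                    recognize=lambda img: ("ring", 0.93),  # stub OCR result
                    speak=spoken.append)
print(result, spoken)   # ring ['ring']
```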


EyeRing finger-mounted connected cam captures signs and dollar bills, identifies them with OCR (hands-on) originally appeared on Engadget on Wed, 25 Apr 2012 13:53:00 EDT.


Perifoveal Display tracks head positioning, highlights changing data on secondary LCDs (hands-on)


If there's a large display as part of your workstation, you know how difficult it can be to keep track of all of your windows simultaneously, without missing a single update. Now imagine surrounding yourself with three, or four, or five jumbo LCDs, each littered with dozens of windows tracking real-time data -- be it RSS feeds, an inbox or chat. Financial analysts, security guards and transit dispatchers are but a few of the professionals tasked with monitoring such arrays, constantly scanning each monitor to keep abreast of updates. One project from the MIT Media Lab offers a solution, pairing Microsoft Kinect cameras with detection software, then highlighting changes with a new graphical user interface.

Perifoveal Display presents data at normal brightness on the monitor that you're facing directly. Then, as you move your head to a different LCD, that panel becomes brighter, while changes on any of the displays you're not facing directly (but that still remain within your peripheral vision) -- a rising stock price, or motion on a security camera -- are highlighted with a white square, which slowly fades once you turn to face the new information. During our hands-on demo, everything worked as described, albeit without the instant response times you may expect from such a platform. As with most Media Lab projects, there's no release date in sight, but you can gawk at the prototype in our video just after the break.
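The interface logic described above reduces to a small per-monitor state update (a sketch of the behavior as we observed it; the brightness levels and monitor names are invented):

```python
def update_displays(focused, changed, monitors):
    """Per-monitor (brightness, highlight) given the head's target."""
    state = {}
    for m in monitors:
        brightness = 1.0 if m == focused else 0.6     # dim unfocused panels
        highlight = m in changed and m != focused     # flag peripheral changes
        state[m] = (brightness, highlight)
    return state

monitors = ["left", "center", "right"]
print(update_displays("center", {"right"}, monitors))
# {'left': (0.6, False), 'center': (1.0, False), 'right': (0.6, True)}
```

In the prototype, the Kinect supplies `focused` from head pose, and the white highlight fades over time rather than toggling off instantly.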


Perifoveal Display tracks head positioning, highlights changing data on secondary LCDs (hands-on) originally appeared on Engadget on Wed, 25 Apr 2012 13:28:00 EDT.
