Colloidal Display uses soap bubbles, ultrasonic waves to form a projection screen (hands-on video)

If you've ever been to an amusement park, you may have noticed ride designers using some non-traditional platforms as projection screens -- the most common example being a steady stream of artificial fog. Projecting onto a transparent surface is another matter entirely, however, which makes this latest technique all the more surprising. Colloidal Display, developed by Yoichi Ochiai, Alexis Oyama and Keisuke Toyoshima, uses bubbles as an incredibly thin projection "screen," regulating the film's properties, such as reflectance, with ultrasonic waves from a nearby speaker. The bubble liquid is made from a mixture of sugar, glycerin, soap, surfactant, water and milk, which the designers say is not easily popped. Still, during their SIGGRAPH demo, a motor dunked the wands in the solution and replaced each bubble every few seconds.
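
How do you make a transparent film act like a screen on demand? The team's answer is to drive the membrane with ultrasound, shifting it between a still, see-through state and a vibrating, light-scattering one. Here's a minimal Python sketch of that control idea; the transducer interface, 40 kHz drive frequency and amplitude values are our own placeholders, not details from the team's implementation.

```python
import time

class UltrasonicTransducer:
    """Hypothetical driver for the speaker that excites the soap film;
    the real project's hardware interface isn't public."""

    def set_output(self, frequency_hz: float, amplitude: float) -> None:
        # A real driver would talk to an amplifier here.
        print(f"driving film at {frequency_hz:.0f} Hz, amplitude {amplitude:.2f}")

# Assumed behavior: a still film stays near-transparent, while ultrasonic
# excitation roughens the surface, raising its diffuse reflectance so the
# projected image catches on it.
TRANSPARENT = 0.0   # transducer off
DIFFUSE = 0.8       # illustrative drive level, not a measured value

def show_frame(transducer: UltrasonicTransducer, visible: bool,
               hold_s: float = 1 / 30) -> None:
    """Switch the membrane between 'screen' and 'window' for one frame."""
    level = DIFFUSE if visible else TRANSPARENT
    transducer.set_output(frequency_hz=40_000, amplitude=level)
    time.sleep(hold_s)

film = UltrasonicTransducer()
for i in range(10):
    show_frame(film, visible=(i % 2 == 0))  # blink the screen on and off
```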

A standard projector directed at the bubble creates an image that appears to be floating in the air. And because the bubbles are transparent, they can be stacked to simulate a 3D image. The same display can also present completely different images that fade in and out of view depending on your angle relative to the bubble. There is a tremendous amount of distortion, however, because the screen is a liquid film in constant motion. Between the need to constantly refresh the bubbles and the unstable nature of the screen itself, the project remains strictly a proof of concept that couldn't be deployed without significant modification. Ultimately, the designers hope to create a film that offers similar transparent properties but with a more solid, permanent composition. For now, you can sneak a peek at the first iteration in our hands-on video after the break.

Stuffed Toys Alive! replaces mechanical limbs with strings for a much softer feel (hands-on)

It worked just fine for Pinocchio, so why not animatronic stuffed bears? A group of researchers from the Tokyo University of Technology is on hand at SIGGRAPH's Emerging Technologies section this week to demonstrate "Stuffed Toys Alive!," a new type of interactive toy that replaces the rigid plastic skeleton used today with a seemingly simple string-and-pulley solution. Several strings are installed at different points within each of the cuddly gadget's limbs, then attached to a motor that pulls the strings to move the fuzzy guy's arms while also registering feedback, letting it respond to touch as well. There's not much more to it than that -- the project is ingenious but also quite simple, and it's certain to be a hit among youngsters. The obligatory creepy hands-on video is waiting just past the break.
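
Since the same string both moves the limb and senses a grab, the control loop is refreshingly simple in principle. The sketch below illustrates one plausible shape for it in Python; the motor interface, tension readout and thresholds are all invented for illustration, as the team hasn't published its controller.

```python
class StringMotor:
    """Hypothetical winch on one limb's string; the interface is invented."""

    def __init__(self) -> None:
        self.position = 0.0
        self.external_pull = 0.0     # stands in for a child tugging the limb

    def wind(self, amount: float) -> None:
        self.position += amount      # reel the string in (or pay it out)

    def tension(self) -> float:
        # A real build would infer this from motor load; the demo sets it.
        return self.external_pull

def update_limb(motor: StringMotor, touch_threshold: float = 0.5) -> None:
    # One actuator, two jobs: the string that raises the arm also reports
    # touch, because a grab shows up as extra tension.
    if motor.tension() > touch_threshold:
        motor.wind(-0.1)             # yield softly when grabbed
    else:
        motor.wind(0.05)             # otherwise continue the scripted wave

arm = StringMotor()
update_limb(arm)                     # no touch: the arm keeps waving
arm.external_pull = 0.8
update_limb(arm)                     # grabbed: the arm relaxes instead
```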

Chilly Chair uses static electricity to raise your arm hair, force an 'emotional reaction' (hands-on video)

Hiding in the back of the SIGGRAPH Emerging Technologies demo area -- exactly where such a project might belong -- is a dark wood chair that looks anything but innocent. Created by a team at the University of Electro-Communications in Tokyo, Chilly Chair, as it's called, may be a reference to the chilling sensation the device is tasked with invoking. After signing a liability waiver, attendees are welcome to pop a squat before resting their arms atop a cool, flat metal platform hidden beneath a curved sheath that looks like something you might expect to see in Dr. Frankenstein's lab, not a crowded corridor of the Los Angeles Convention Center. Once powered up, the ominous-looking contraption serves to "enrich" the experience as you consume different forms of media, be it watching a movie or listening to some tunes. It works by using a power source to pump 10 kV of juice to an electrode, which then polarizes a dielectric plate, causing it to attract your body hair.
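
For a rough sense of scale, here's our own back-of-the-envelope take on that setup; the 10 kV figure comes from the demo, while the plate-to-arm gap is purely a guess.

```python
# Back-of-the-envelope field estimate for the Chilly Chair electrode.
# 10 kV is quoted in the article; the 1 cm plate-to-arm gap is our assumption.
voltage_v = 10_000
gap_m = 0.01

field_v_per_m = voltage_v / gap_m
print(f"field strength: ~{field_v_per_m:.0e} V/m")
# ~1e+06 V/m across the gap: the plate polarizes each hair, and the induced
# charge is pulled toward the electrode, standing the hair on end.
```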

After signing our lives away with the requisite waiver, we sat down and strapped in for the ride. Despite several minutes of build-up, the entire experience concluded in what seemed like only a few seconds. A projection screen in front of the chair lit up to present a warning just as we felt the hairs shoot directly towards the sheath above. By the time we rose, there was no visual evidence of the previous state, though we have no doubt that the Chilly Chair succeeded in raising hair (note: the experience didn't come close to justifying the exaggerated reaction in our footage). It's difficult to see how this could be implemented in future home theater setups, especially considering all the extra hardware currently required, but it could potentially add another layer of immersion to those novelty 4D attractions we can't seem to avoid during visits to the amusement park. You can witness our Chilly Chair experience in the hands-on video after the break.

MIT Media Lab’s Tensor Displays stack LCDs for low-cost glasses-free 3D (hands-on video)

Glasses-free 3D may be the next logical step in TV's evolution, but we have yet to see a convincing device make it to market without a five-figure price tag. The sets that do come within range of tickling our home theater budgets won't blow you away, and it's not unreasonable to expect that trend to continue through the next few product cycles. A dramatic adjustment in our approach to glasses-free 3D may be just what the industry needs, so you'll want to pay close attention to the MIT Media Lab's latest brew. Tensor Displays combine layered low-cost LCD panels with clever software that computes and rapidly alternates the image each layer shows, creating depth that actually looks fairly realistic. Gordon Wetzstein, one of the project's creators, explained that the solution essentially "[takes] the complexity away from the optics and [puts] it in the computation," and since software solutions scale far more easily than their hardware equivalents, the Tensor Display concept could result in less expensive, yet superior, 3D products.
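
The underlying math is worth a sketch. Each ray through a two-layer stack picks up the product of one pixel on each panel, and flashing several patterns per video frame lets the eye average multiple such products, so computing the layer patterns becomes a nonnegative factorization problem. The toy numpy example below captures that idea for two layers and a simplified 2D "light field"; it mirrors the published approach in spirit only, and the sizes, update rule and lack of a [0, 1] transmittance clamp are all simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target "light field": L[u, x] = intensity of the ray crossing
# front-layer pixel u and rear-layer pixel x. The real displays work
# with 4D light fields and three or more layers.
L = rng.random((32, 32))

T = 4  # subframes flashed per video frame, averaged by the eye
A = rng.random((32, T))   # front-panel patterns, one column per subframe
B = rng.random((T, 32))   # rear-panel patterns, one row per subframe

# Multiplicative NMF updates: keep the patterns nonnegative while driving
# the time-averaged product A @ B toward the target rays.
for _ in range(200):
    A *= (L @ B.T) / (A @ B @ B.T + 1e-9)
    B *= (A.T @ L) / (A.T @ A @ B + 1e-9)

print("reconstruction error:", np.linalg.norm(L - A @ B) / np.linalg.norm(L))
```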

We caught up with the project at SIGGRAPH, where the first demonstration included four fixed images, employing a concept similar to the LCD version but with backlit inkjet prints instead of motion-capable panels. Each displaying a slightly different static image, the transparencies were stacked to give the appearance of depth without the typical cost. The version that shows the most potential, however, consists of three stacked LCD panels, each displaying a slightly different pattern that alternates four times per frame of video, creating a three-dimensional effect that appears smooth and natural. The result was certainly more tolerable than the glasses-free 3D we're used to seeing, though it's surely a long way from being a viable replacement for active-glasses sets -- Wetzstein said the solution could make its way to consumers within the next five years. Currently, the technology works best in a dark room, where it's able to present a consistent image. Unfortunately, this meant the light levels around the booth were a bit dimmer than what our camera required, resulting in the underexposed, yet very informative hands-on video you'll see after the break.

Gocen optical music recognition can read a printed score, play notes in real-time (hands-on video)

It's not often that we stumble upon classical music on the floor at SIGGRAPH, so the tune of Bach's Cantata 147 was reason enough to stop by Gocen's small table in the annual graphics trade show's Emerging Technologies hall. At first glance, the four Japanese men at the booth could have been doing anything on their MacBook Pros -- there wasn't a musical instrument in sight -- but upon closer inspection, they each appeared to be holding identical loupe-like devices, connected to each laptop via USB. Below each self-lit handheld reader were small stacks of sheet music, and it soon became clear that each of the men was very slowly moving their devices from side to side, playing a seemingly perfect rendition of "Jesu, Joy of Man's Desiring."

The project, called Gocen, is described by its creators as a "handwritten notation interface for musical performance and learning music." Developed at Tokyo Metropolitan University, the device reads a printed (or even handwritten) music score in real-time using optical music recognition (OMR); the output is sent through each computer to an audio mixer, and then to a set of speakers. The interface is entirely text- and music-based -- musicians, if you can call them that, scan an instrument name on the page before sliding over to the notes, which can be played back at different pitches by moving the reader vertically along the line. It certainly won't replace an orchestra anytime soon -- it takes an incredible amount of care to play in a group without falling out of sync -- but Gocen is designed more as a learning tool than a practical device for coordinated performances. Hearing exactly how each note is meant to sound makes it easier for students to master musical basics during the early stages of their education, providing instant feedback for those who depend on self-teaching. You can take a closer look in our hands-on video after the break, in a real-time performance demo with the Japan-based team.
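
The playback mapping is easy to picture: horizontal motion steps through the score, while vertical offset over a note selects its pitch. Here's a toy Python version of the pitch half of that mapping; the staff geometry, base note and C-major assumption are ours, not Gocen's calibration.

```python
# Gocen selects pitch from the reader's vertical position over a note.
# This toy maps half-line steps above middle C onto a C-major scale; the
# geometry and base pitch are our assumptions, not Gocen's calibration.
SCALE = [0, 2, 4, 5, 7, 9, 11]   # C-major semitone pattern within an octave

def step_to_frequency(step: int, base_midi: int = 60) -> float:
    """Convert a staff step (0 = middle C) to a frequency in hertz."""
    octave, degree = divmod(step, 7)
    midi = base_midi + 12 * octave + SCALE[degree]
    return 440.0 * 2 ** ((midi - 69) / 12)   # standard MIDI-to-Hz formula

for step in range(8):   # slide the reader one octave up the staff
    print(f"step {step}: {step_to_frequency(step):7.2f} Hz")
```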

Shader Printer uses heat-sensitive ‘paint’ that can be erased with low temperatures (hands-on video)

Lovin' the bold look of those new Nikes? If you're up to date on the athletic shoe scene, you may have noticed that sneaker designs can give way long before your soles do. A new decaling technique could let you "erase" labels and other artwork overnight without a trace, however, letting you change up your wardrobe without shelling out more cash. A prototype device, called Shader Printer, uses a laser to heat a surface coated with a bi-stable color-changing material to 50 degrees Celsius (122 degrees Fahrenheit). When the laser reaches the "ink," it creates a visible design that can then be removed by leaving the object in a -10 degree Celsius (14 degree Fahrenheit) freezer overnight. The laser and freezer simply apply ordinary heat and cold, so you could theoretically add and remove designs using any source.
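
The bi-stability is what makes the trick work: between those two thresholds, the coating simply holds whatever state it was last pushed into, so a design survives ordinary handling. Here's a toy Python model of that hysteresis, using the write and erase temperatures quoted above:

```python
class BistablePixel:
    """Toy model of one spot of coating: it keeps its current color state
    until pushed past a write or erase threshold (temperatures from the
    article; everything else here is illustrative)."""
    WRITE_C = 50     # laser or hair-dryer heat develops the color
    ERASE_C = -10    # freezer or cold spray clears it

    def __init__(self) -> None:
        self.colored = False

    def expose(self, temp_c: float) -> None:
        if temp_c >= self.WRITE_C:
            self.colored = True       # hot enough: the design appears
        elif temp_c <= self.ERASE_C:
            self.colored = False      # cold enough: the design is erased
        # anything in between changes nothing -- that's the bistability

pixel = BistablePixel()
pixel.expose(55)       # writing pass
assert pixel.colored
pixel.expose(20)       # room temperature: the design persists
assert pixel.colored
pixel.expose(-12)      # overnight in the freezer
assert not pixel.colored
```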

For the purposes of a SIGGRAPH demo, the team, which includes members from the Japan Science and Technology Agency and MIT, used a hair dryer to apply heat to a coated plastic doll in only a few seconds -- that source doesn't offer the precision of a laser, but it works much more quickly. Then, they sprayed the surface with -50-degree Celsius (-58-degree Fahrenheit) compressed air, which wiped away the rather sloppy pattern in a flash. There were much more attractive prints on hand as well, including an iPhone cover and a sneaker with the SIGGRAPH logo, along with a similar plastic doll with clearly defined eyes. We also had a chance to peek at the custom laser rig, which currently takes about 10 minutes to apply a small design, but could be much quicker in the future with a higher-powered laser on board. The hair dryer / canned air combo offers a much more efficient way of demoing the tech, however, as you'll see in our hands-on video after the break.

Disney Research’s Botanicus Interacticus adds capacitive touch to ordinary plants, we go hands-on

Sure, you spend plenty of time talking to your plants, but have you ever made them sing? In partnership with Berlin-based Studio NAND, Walt Disney's research arm, Disney Research, has found a way to take human-plant interaction to an almost freakish level. The project, called Botanicus Interacticus, centers around a custom-built capacitive sensor module, which pipes a very low current through an otherwise ordinary plant, then senses when and where you touch. Assuming your body is grounded, the device uses more than 200 frequencies to determine exactly where you've grabbed hold of a stem. Then, depending on how it's programmed, the sensor can trigger any combination of feedback, ranging from a notification that your child is attempting to climb that massive oak in the yard again, to an interactive melody that varies based on where your hand falls along the plant.
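
The multi-frequency detail is what separates this from an ordinary touch sensor. A single frequency can only report that contact occurred, but sweeping a couple hundred frequencies yields a response curve whose shape varies with where and how the plant is held, which a classifier can then match against enrolled examples. The Python sketch below illustrates that sweep-and-match idea with entirely synthetic response curves; none of the numbers are Disney's.

```python
import numpy as np

rng = np.random.default_rng(1)
N_FREQS = 200   # the article says the sensor uses more than 200 frequencies

def measure_sweep(grip: str) -> np.ndarray:
    """Stand-in for the sensor: one response value per swept frequency.
    The real module measures the signal path through plant and grounded
    user; these smooth synthetic curves merely mimic grip-dependent shape."""
    center = {"no_touch": 0.2, "stem_low": 0.5, "stem_high": 0.8}[grip]
    f = np.linspace(0.0, 1.0, N_FREQS)
    return np.exp(-((f - center) ** 2) / 0.02) + rng.normal(0, 0.01, N_FREQS)

# Enroll one labeled sweep per gesture, then classify new sweeps by
# nearest enrolled profile -- the essence of swept-frequency sensing.
templates = {g: measure_sweep(g) for g in ("no_touch", "stem_low", "stem_high")}

def classify(sweep: np.ndarray) -> str:
    return min(templates, key=lambda g: np.linalg.norm(sweep - templates[g]))

print(classify(measure_sweep("stem_high")))   # -> stem_high
```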

Because this is Disney Research, the company would most likely use the new tech in an interactive theme park attraction, though there's currently no plan to do much more than demo Botanicus Interacticus for SIGGRAPH attendees. This week's demonstration is giving the creators an opportunity to gather feedback as they try out their project on the general public. There are four different stations on hand, ranging from a stick of bamboo that offers the full gamut of sensitivity, including the exact location of touch, to an orchid that can sense an electric field disruption even as you approach for contact. While interactive plants may not have a role in everyday life, Botanicus Interacticus is certainly a clever implementation of capacitive touch. You can see it in action just past the break.

AMD launches its next-gen FirePro graphics card lineup, we go hands-on at SIGGRAPH (video)

Just as you've cozied up with "Tahiti" and "Cape Verde," AMD has returned to grow its "Southern Islands" family of graphics cards with four fresh FirePros, offering up to four teraflops of graphics computing power. That spec belongs to the company's new W9000, which is capable of four TFLOPS at single precision and one TFLOPS at double precision, with a price tag just shy of $4,000. That behemoth of a card offers 6GB of GDDR5 RAM and requires 274 watts of power. More humble members of the family include the W8000, which shares the higher-end W9000's form factor but eases back on the specs, consuming 189 watts of power and carrying a $1,599 price tag.
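
For perspective on the flagship's numbers, here's a quick pass of arithmetic over the figures quoted above (reading "just shy of $4,000" as $3,999):

```python
# Rough value math for the FirePro W9000, using only figures from the article.
price_usd = 3_999        # "just shy of $4,000"
tflops_single = 4.0      # peak single-precision throughput
tflops_double = 1.0      # peak double-precision throughput
power_w = 274            # board power

print(f"${price_usd / tflops_single:,.0f} per single-precision TFLOPS")  # ~$1,000
print(f"${price_usd / tflops_double:,.0f} per double-precision TFLOPS")  # ~$3,999
print(f"{tflops_single * 1000 / power_w:.1f} GFLOPS per watt")           # ~14.6
```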

We had a chance to take a closer look at both cards at SIGGRAPH, and while they packed a significant amount of heft, you'll likely never take a second look once they're buried away in your tower rig. Fans of smaller housings (and price tags) may take notice of the W7000 and W5000, which are both considerably more compact and require less power to boot, with pricing set at $899 and $599, respectively. Those cards were also on hand for our demo, and can be seen along with the top two configs in our gallery below. You can also sneak a closer peek in the hands-on video after the break, and glance at the full specs over at our news post from earlier today.

ARM’s Mali-T604 makes official debut, we get a first look at the next-gen GPU (hands-on video)

Think those are some pretty slick graphics in your Galaxy S III? Samsung's latest smartphone packs mighty graphics prowess of its own thanks to the Mali-400 MP GPU, but spend a few minutes with the Mali-T604, ARM's next-generation GPU, and the improvements become quite clear. After seeing the Mali-T604 in action, as we did at SIGGRAPH today, its capabilities leave us hopeful for the future, and perhaps feeling a bit self-conscious about the silicon currently in our pockets. The reference device on hand was operating in sync with a variety of unnamed hardware, protected from view in a relatively large sealed box. We weren't able to squeeze many details out of ARM reps, who remained mum about the demo components, including clock speed, manufacturer and even fabrication size. What we do know is that we were looking at a quad-core Mali-T604 paired with a dual-core ARM Cortex-A15 processor, with a fabrication size in the range of "28 to 40 nanometers" (confirming the exact size would reveal the manufacturer). Clock speed is also TBD, and the early silicon on demo at the show wasn't operating anywhere close to its top end.

In order to experience the T604, we took a look at three demos: Timbuktu 2, which demonstrates elements like self-shadowing and depth of field with OpenGL ES 3.0; Hauntheim, which gives us an early look at physics simulation and HDR lighting with OpenCL; and Enlighten, which renders silky-smooth real-time illumination. You can see all of the demos in action after the break, and you can expect T604-equipped devices to make their debut beginning later this year -- ARM says it's working with eight manufacturers to get the licensed tech to market as early as Q3.
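
OpenCL support is the notable newcomer there, opening the GPU to general-purpose compute jobs like Hauntheim's physics alongside traditional rendering. For a flavor of the programming model, here's a minimal vector-add written against the pyopencl bindings; it's our own generic example, not ARM's demo code, and it runs on any available OpenCL device.

```python
import numpy as np
import pyopencl as cl

# Minimal OpenCL host program: add two vectors on whatever device the
# runtime exposes. On a T604-class part, this same model drives physics,
# image processing and other compute alongside OpenGL ES rendering.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```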
