Watson is helping heal America’s broken criminal-sentencing system

The American criminal-justice system's approach to sentencing is among the fairest and most equitable in the world ... assuming you're wealthy, white and male. Everybody else is generally SOL. During the past three decades, America's prison population has...

Tell Me Dave Robot Learns Through Conversations with Humans

Tell Me Dave Robot

Most robots come with predefined ways of interacting with humans, so seeing one that can learn as people converse with it is really something else.

Instead of programming robots to react in a certain way, what if we could explain to them what to do in simple steps, adding new sets of instructions verbally? That could definitely cut down on the lines of code roboticists have to write. As part of the “Tell Me Dave” project, researchers at Cornell University designed a robot that learns new instructions while talking with (or rather being talked to by) humans.

Tell Me Dave is based on Willow Garage’s PR2 robot, which could tell what people were doing by analyzing their movement patterns. PR2 could also identify objects and situations, a fact that made its tasks a whole lot easier. Tell Me Dave employs a 3D camera to associate objects with the activities they’re used for.

In particular, Tell Me Dave could become one of the first robotic chefs in the world. What I mean by that is that it can make a lot of associations in the kitchen, starting with pans, faucets and stoves. For example, Tell Me Dave knows that pans have a concave surface that holds water poured from a faucet, and that this water can be heated by placing the pan on the stove. Assuming the robot already knows these things when a conversation starts, it will know exactly what to do when asked to boil some noodles.

The important aspect here is that the robot memorizes previous associations so that it can expand on them in the future. Ashutosh Saxena, assistant professor of computer science at Cornell University, achieved this by teaching robots to understand instructions given in naturally spoken language. The robots developed by Saxena and his fellow researchers are able to adapt to their surroundings. Saxena even pointed out that “With crowd-sourcing at such a scale, robots will learn at a much faster rate.”
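
A minimal sketch, assuming nothing about the real Tell Me Dave code, of how a spoken request might be grounded into primitive actions by reusing task expansions the robot has already learned (for instance, that a pan holds water from the faucet and that the stove heats whatever sits on it). Every object, task and action name below is invented for illustration.

```python
# Illustrative only -- not the Tell Me Dave implementation.
# The robot is assumed to have previously learned how "boil water"
# expands into primitive actions it can execute directly.
TASK_LIBRARY = {
    "boil water": [
        ("grasp", "pan"),
        ("move_to", "faucet"),
        ("fill", "pan"),
        ("place_on", "stove"),
        ("turn_on", "stove"),
        ("wait_until", "boiling"),
    ],
}

def ground_instruction(instruction: str) -> list:
    """Map a natural-language instruction onto a primitive action plan,
    reusing stored expansions and extending them for new requests."""
    text = instruction.lower()
    if "noodle" in text or "pasta" in text:
        # "Boil some noodles" reuses the stored "boil water" plan and
        # appends the noodle-specific steps.
        return TASK_LIBRARY["boil water"] + [
            ("add", "noodles"),
            ("wait_minutes", 8),
            ("turn_off", "stove"),
        ]
    if "boil" in text and "water" in text:
        return list(TASK_LIBRARY["boil water"])
    raise ValueError(f"No learned expansion for: {instruction!r}")

if __name__ == "__main__":
    for step in ground_instruction("Please boil some noodles"):
        print(step)
```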

Anyone who happens to be in Berkeley, California between July 12 and 16 should definitely attend the Robotics: Science and Systems conference, where Saxena and graduate students Dipendra K. Misra and Jaeyong Sung will showcase the Tell Me Dave robot.


Cornell scientists 3D print ears with help from rat tails and cow ears

Science! A team of bioengineers and physicians over at Cornell University recently detailed their work to 3D print lifelike ears that may be used to treat birth defects like microtia and assist those who have lost or damaged an ear due to an accident or cancer. The product, which the school says is "practically identical to the human ear," was created using 3D printing and gels made from living cells -- collagen was gathered from rat tails and cartilage cells were taken from cows' ears. The whole process is quite quick, according to associate professor Lawrence Bonassar, who co-authored the report on the matter:

"It takes half a day to design the mold, a day or so to print it, 30 minutes to inject the gel, and we can remove the ear 15 minutes later. We trim the ear and then let it culture for several days in nourishing cell culture media before it is implanted."

The team is looking to implant the first ear in around three years, if all goes well.

Source: Cornell Chronicle

Researchers turn to 19th century math for wireless data center breakthrough

Researchers from Microsoft and Cornell University want to remove the tangles of cables from data centers. It's no small feat. With thousands of machines that need every bit of bandwidth available, WiFi certainly isn't an option. To solve the issue, scientists are turning to two sources: the cutting edge of 60GHz networking and the 19th century mathematical theories of Arthur Cayley. Cayley's 1889 paper, On the Theory of Groups, was used to guide their method for connecting servers in the most efficient and fault-tolerant way possible. The findings will be presented in a paper later this month, but it won't be clear how effectively this research can be applied to an actual data center until someone funds a prototype.

The proposed Cayley data centers would rely on cylindrical server racks that have transceivers both inside and outside the tubes of machines, allowing them to pass data both within and between racks with (hopefully) minimal interference. Since the new design would do away with traditional network switches and cables, researchers believe it may eventually cost less than current designs and draw less power, all while still streaming data at 10 gigabits per second -- far faster than WiGig, which also uses the 60GHz spectrum. To read the paper in its entirety, check out the source.
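
As a rough illustration of why Cayley's group theory is attractive here, the sketch below builds a small Cayley graph over a cyclic group and measures its diameter with breadth-first search: every node ends up with the same degree, and a handful of generators already keeps the worst-case hop count low. The rack count and generator set are arbitrary choices made for this example, not the ones used in the Microsoft/Cornell paper.

```python
# Illustrative only -- not the topology from the paper.
# Cayley graph of the cyclic group Z_n with a chosen generator set.
from collections import deque

def cayley_graph(n: int, generators: set) -> dict:
    """Vertices are group elements 0..n-1; x is adjacent to x +/- g (mod n)
    for every generator g, so every node has the same degree."""
    gens = set(generators) | {(-g) % n for g in generators}  # make edges undirected
    return {v: sorted((v + g) % n for g in gens) for v in range(n)}

def diameter(graph: dict) -> int:
    """Longest shortest path -- a proxy for the worst-case hop count."""
    worst = 0
    for src in graph:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        worst = max(worst, max(dist.values()))
    return worst

if __name__ == "__main__":
    g = cayley_graph(64, {1, 9, 20})   # 64 "racks", 3 generators -> degree 6
    print("degree:", len(g[0]), "diameter:", diameter(g))
```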

Via: Wired | Source: On the Feasibility of Completely Wireless Datacenters (PDF)

Fabricated: Scientists develop method to synthesize the sound of clothing for animations (video)

Developments in CGI and animatronics might be getting alarmingly realistic, but the audio that goes with them often still relies on manual recordings. A pair of associate professors and a graduate student from Cornell University, however, have developed a method for synthesizing the sound of moving fabrics -- such as rustling clothes -- for use in animations, and thus, potentially film. The process, presented at SIGGRAPH but only reported to the public today, involves modeling two components of the natural sound of fabric: cloth rubbing against cloth, and crumpling. After creating a model for the energy and pattern of these two aspects, an approximation of the sound can be created, which acts as a kind of "road map" for the final audio.

The end result is created by breaking the map down into much smaller fragments, which are then matched against a database of similar sections of real field-recorded audio. The researchers even included binaural recordings to give a first-person perspective for headphone wearers. The process is still overseen by a human sound engineer, who selects the appropriate type of fabric and oversees how the sounds are matched, meaning it's not quite ready for prime time. That's understandable, really, as this is still a proof of concept, with real-time operation and other improvements penciled in for future iterations. What does a virtual sheet being pulled over an imaginary sofa sound like? Head past the break to hear it in action, along with a presentation of the process.
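
The matching step can be pictured with a toy example: break a simulated energy "road map" into frames and, for each frame, pick the recorded fragment whose energy is closest. The sketch below does exactly that with made-up numpy arrays; the real system models friction and crumpling separately and draws on actual field recordings, so treat this only as the shape of the idea.

```python
# Illustrative only -- not the Cornell pipeline.
import numpy as np

def frame_energy(signal: np.ndarray, frame_len: int) -> np.ndarray:
    """RMS energy per frame."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def synthesize(target_envelope: np.ndarray, database: list, frame_len: int) -> np.ndarray:
    """For each frame of the target energy envelope, pick the recorded
    fragment whose energy is closest, scale it to match, and concatenate."""
    db_energies = np.array([frame_energy(clip, frame_len).mean() for clip in database])
    out = []
    for e in target_envelope:
        idx = int(np.argmin(np.abs(db_energies - e)))
        scale = e / (db_energies[idx] + 1e-9)
        out.append(database[idx][:frame_len] * scale)
    return np.concatenate(out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_len = 512
    # Fake "field recordings": short noise bursts at different loudness levels.
    database = [rng.normal(0, a, frame_len) for a in (0.05, 0.2, 0.6)]
    # Fake energy road map from a cloth simulation: a swell, then a decay.
    envelope = np.concatenate([np.linspace(0.05, 0.6, 20), np.linspace(0.6, 0.05, 20)])
    audio = synthesize(envelope, database, frame_len)
    print(audio.shape)  # 40 frames * 512 samples, ready to write to a WAV file
```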

Via: PhysOrg | Source: Cornell Chronicle

Cornell students build spider-like robotic chalkboard eraser out of Lego, magnets, fun (video)

While you were trying to pass Poetry 101, Cornell seniors Le Zhang and Michael Lathrop were creating an apple-polishing Lego robot that automatically erases your prof's chalkboard. A final class project, the toady mech uses an Atmel brain, accelerometers for direction control, microswitches to sense the edge of the board, magnets to stay attached and hot glue to keep the Lego from flying apart. As the video below the break shows, it first aligns itself vertically, then moves to the top of the board, commencing the chalk sweeping and turning 180 degrees each time its bumpers sense the edge. The duo are thinking of getting a patent, and a commercialized version would allow your teacher to drone on without the normal slate-clearing pause. So, if designing a clever bot and saving their prof from manual labor doesn't get the students an 'A', we don't know what will.
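
For the curious, the sweeping behavior described above can be approximated as a simple simulated control loop: drive across the board, and whenever a bumper fires, reverse direction and drop down one eraser-width until the bottom is reached. The students' firmware actually runs on an Atmel microcontroller; the dimensions and step sizes below are invented for this sketch.

```python
# Illustrative only -- a simulated stand-in for the students' firmware.
BOARD_W, BOARD_H = 300, 120   # board size in cm (made up)
ERASER_W = 10                 # cm, vertical step after each pass

def bumper_hit(x: float, heading: int) -> bool:
    """Microswitch stand-in: true when the leading edge reaches a side."""
    return (heading > 0 and x >= BOARD_W) or (heading < 0 and x <= 0)

def sweep_board() -> int:
    x, y = 0.0, 0.0           # start aligned at the top-left corner
    heading = +1              # +1 = sweeping rightward, -1 = leftward
    passes = 0
    while y <= BOARD_H - ERASER_W:
        x += heading * 1.0    # drive one cm per control tick
        if bumper_hit(x, heading):
            heading = -heading        # 180-degree turn at the edge
            y += ERASER_W             # drop down one eraser width
            passes += 1
    return passes

if __name__ == "__main__":
    print("passes to clear the board:", sweep_board())
```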

Source: Yin Yang Robotics

Google simulates the human brain with 1000 machines, 16000 cores and a love of cats

Don't tell Google, but its latest X lab project is something performed by the great internet public every day. For free. Mountain View's secret lab stitched together 1,000 computers totaling 16,000 cores to form a neural network with over 1 billion connections, and sent it to YouTube looking for cats. Unlike the popular human time-sink, this was all in the name of science: specifically, simulating the human brain. The neural machine was presented with 10 million images taken from random videos, and went about teaching itself what our feline friends look like. Unlike similar experiments, where some manual guidance and supervision is involved, Google's pseudo-brain was given no such assistance.

It wasn't just about cats, of course -- the broader aim was to see whether computers can learn face detection without labeled images. After studying the large set of image data, the cluster showed that it indeed could, in addition to developing concepts for human body parts and -- of course -- cats. Overall, the network achieved 15.8 percent accuracy in recognizing 20,000 object categories, which the researchers claim is a 70 percent jump over previous studies. Full details of the hows and whys will be presented at a forthcoming conference in Edinburgh.
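
The underlying idea, learning features from unlabeled data by minimizing reconstruction error, can be shown at toy scale. The sketch below trains a tiny single-layer autoencoder in numpy on random "patches"; Google's system was vastly larger and far more sophisticated, but the principle of never touching a label is the same.

```python
# Illustrative only -- nothing like Google's 16,000-core setup.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))           # 500 unlabeled 8x8 "image patches"
X -= X.mean(axis=0)                       # center the data

n_hidden, lr = 16, 0.01
W1 = rng.normal(scale=0.1, size=(64, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, 64))   # decoder weights

for epoch in range(200):
    H = np.tanh(X @ W1)                   # hidden features
    X_hat = H @ W2                        # reconstruction
    err = X_hat - X
    # Backpropagate the reconstruction error through both layers.
    grad_W2 = H.T @ err / len(X)
    grad_H = err @ W2.T * (1 - H ** 2)    # tanh derivative
    grad_W1 = X.T @ grad_H / len(X)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  reconstruction MSE {np.mean(err ** 2):.4f}")

# Each column of W1 is a learned feature detector; in Google's experiment,
# one such unit ended up responding most strongly to cat faces.
```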

Via: SMH.com.au | Source: Cornell University, New York Times, Official Google Blog