Scientists make an artificial heart out of foam

Artificial hearts only kinda-sorta behave like the real thing. They pump blood, sure, but they're typically solid blocks of machinery that are out of place in a squishy human body. Cornell University thinks it can do better, though: its scientist...

Composite lighting technique lets amateurs produce well-lit photos in minutes (video)

Hobbyist photographers don't often have the luxury of elaborate lighting rigs. However, Adobe and Cornell University have developed a new software technique that could bring pro-grade illumination to a wider audience. Known as computational lighting design, the solution simplifies a familiar trick that combines shots taken with a camera's external flash placed in different positions. The software uses multiple sample photos to create composite images that emphasize color, edge lighting and fill lighting; editors just have to balance those three light values to get the desired effect. While the code is still unpolished, it's good enough that even beginners can produce well-lit masterpieces in less than 15 minutes. Adobe believes that the technique could reach future versions of apps like Lightroom or Photoshop, so don't be surprised if still-life photography catches on in the near future.
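
The Adobe/Cornell code itself isn't public here, but the underlying trick -- blending a stack of aligned photos, each lit by the flash from a different position, with user-tweakable weights -- can be sketched in a few lines. This is a minimal illustration, not the actual algorithm; the three weight groups standing in for color, edge and fill lighting are an assumption.

```python
# Minimal sketch of the flash-composite idea described above: blend a
# stack of photos, each lit by a flash from a different position, using
# user-chosen weights. The three "light value" controls (color, edge,
# fill) are modeled here as simple per-image weights -- an assumption,
# not Adobe/Cornell's actual algorithm.
import numpy as np

def composite(images, weights):
    """Weighted average of aligned flash exposures.

    images  -- list of HxWx3 float arrays in [0, 1], one per flash position
    weights -- list of non-negative floats, one per image
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()  # normalize so overall exposure stays roughly constant
    stack = np.stack(images, axis=0)                  # N x H x W x 3
    return np.tensordot(w, stack, axes=1).clip(0, 1)  # H x W x 3

# Usage: three exposures standing in for fill, edge and color lighting.
fill, edge, color = (np.random.rand(4, 4, 3) for _ in range(3))
result = composite([fill, edge, color], weights=[0.5, 0.3, 0.2])
```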

Via: Phys.org

Source: Cornell University

Eyes-on with Cornell University's laser tag dunebots (video)

Cornell University may be the host of the Cornell Cup competition, but that doesn't mean it can't bring its own robots to join in the fun. This year, students brought along a few bots, dubbed dunebots, outfitted with all-terrain wheels and laser tag turrets. The rugged rig features a pair of cameras, a dustproof and water-resistant chassis, air intakes capped with filters, and custom components for suspension and steering. Not only does the team plan on releasing the project's code and documentation, but the hardware was designed with modularity in mind, so others can build their own modified versions.

Taking the robot into battle requires two pilots armed with Xbox 360 controllers: one directing where it travels, and another aiming the turret and firing. Driving the buggy over the web is also possible, though it takes a few seconds to react, and the group baked in voice controls to boot. If you're not watching the car duke it out in person, you can tune in remotely and watch a live video stream from one of its onboard cams. Its top speed hasn't been firmly nailed down, but the team says the bot was running at approximately 35 percent of its full potential, since anything faster was deemed too quick for conference attendees. Hit the jump to catch our chat with the effort's computer science lead, Mike Dezube, and to see a dunebot in action.
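
As a rough illustration of that two-pilot split, here's a hypothetical control loop reading two gamepads and routing one to driving and the other to the turret. The axis and button numbers, and the send_command() link to the robot, are assumptions -- the team's actual firmware and protocol aren't published here.

```python
# Hypothetical sketch of the two-pilot control split described above:
# controller 0 drives, controller 1 aims and fires. The axis/button
# mapping and the send_command() transport are assumptions.
import pygame

def send_command(channel, value):
    print(channel, value)  # stand-in for the radio/serial link to the bot

pygame.init()
pygame.joystick.init()
driver, gunner = (pygame.joystick.Joystick(i) for i in range(2))
driver.init()
gunner.init()

while True:
    pygame.event.pump()
    # Pilot 1: left stick = throttle/steering.
    send_command("throttle", -driver.get_axis(1))  # stick-up reads negative
    send_command("steering", driver.get_axis(0))
    # Pilot 2: stick = turret pan/tilt, button 0 = fire the laser tagger.
    send_command("turret_pan", gunner.get_axis(0))
    send_command("turret_tilt", gunner.get_axis(1))
    if gunner.get_button(0):
        send_command("fire", 1)
```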

Cornell scientists 3D print ears with help from rat tails and cow ears

Science! A team of bioengineers and physicians over at Cornell University recently detailed their work to 3D print lifelike ears that may be used to treat birth defects like microtia and assist those who have lost or damaged an ear to an accident or cancer. The result, which is "practically identical to the human ear," according to the school, was created using 3D printing and gels made from living cells -- collagen was gathered from rat tails, and cartilage cells were taken from cows' ears. The whole process is quite quick, according to associate professor Lawrence Bonassar, who co-authored the report:

"It takes half a day to design the mold, a day or so to print it, 30 minutes to inject the gel, and we can remove the ear 15 minutes later. We trim the ear and then let it culture for several days in nourishing cell culture media before it is implanted."

The team is looking to implant the first ear in around three years, if all goes well.

Source: Cornell Chronicle

Researchers turn to 19th century math for wireless data center breakthrough

Researchers from Microsoft and Cornell University want to remove the tangles of cables from data centers. It's no small feat: with thousands of machines that need every bit of available bandwidth, WiFi certainly isn't an option. To solve the issue, the scientists are turning to two sources: the cutting edge of 60GHz networking and the 19th-century mathematics of Arthur Cayley. Cayley's 1889 paper, On the Theory of Groups, guided their method for connecting servers in the most efficient and fault-tolerant way possible. The proposed Cayley data centers would rely on cylindrical server racks with transceivers both inside and outside the tubes of machines, letting them pass data both within and between racks with (hopefully) minimal interference. Since the design does away with traditional network switches and cables, the researchers believe such data centers could eventually cost less than current designs and draw less power -- all while still streaming data at 10 gigabits per second, far faster than WiGig, which also uses 60GHz spectrum. The findings will be presented in a paper later this month, but it won't be clear how well the research applies to an actual data center until someone funds a prototype. To read the paper in its entirety, check out the source.
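
To make the Cayley idea concrete, here's a toy sketch that wires n racks as a Cayley graph of the cyclic group Z_n: every rack gets the same local link pattern, and the redundant offsets give multiple disjoint routes between any pair. The generator set below is illustrative; the paper's actual construction differs in its details.

```python
# Toy illustration: connect n server racks as a Cayley graph of Z_n
# with generator set S, so every rack sees the same local wiring
# pattern and failures can be routed around. Not the paper's exact
# topology -- just the graph-theoretic idea behind it.
def cayley_ring(n, generators):
    """Edge set of the Cayley graph of Z_n with the given generators."""
    edges = set()
    for node in range(n):
        for g in generators:
            edges.add(frozenset({node, (node + g) % n}))
    return edges

# 20 racks, each wirelessly linked to neighbors at offsets 1, 2 and 5.
edges = cayley_ring(20, generators=[1, 2, 5])
degree = sum(1 for e in edges if 0 in e)
print(f"{len(edges)} links, each rack has degree {degree}")
```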

Via: Wired

Source: On the Feasibility of Completely Wireless Datacenters (PDF)

Fabricated: Scientists develop method to synthesize the sound of clothing for animations (video)

Developments in CGI and animatronics might be getting alarmingly realistic, but the audio that accompanies them often still relies on manual recordings. A pair of associate professors and a graduate student from Cornell University, however, have developed a method for synthesizing the sound of moving fabrics -- such as rustling clothes -- for use in animations and, potentially, film. The process, presented at SIGGRAPH but only reported to the public today, involves modeling two components of the natural sound of fabric: cloth moving against cloth, and crumpling. After creating a model for the energy and pattern of these two aspects, an approximation of the sound can be generated, which acts as a kind of "road map" for the final audio.

The end result is created by breaking the map down into much smaller fragments, which are then matched against a database of similar sections of real field-recorded audio. The team even included binaural recordings to give headphone wearers a first-person perspective. The process still requires a human sound engineer, who selects the appropriate type of fabric and supervises how the sounds are matched, meaning it's not quite ready for prime time. Understandable, really, as this is still a proof of concept, with real-time operation and other improvements penciled in for future iterations. What does a virtual sheet being pulled over an imaginary sofa sound like? Head past the break to hear it in action, along with a presentation of the process.
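
As a rough sketch of that matching step (with frame sizes and the distance metric as assumptions, not the Cornell team's published parameters), the fragment lookup might look something like this:

```python
# Rough sketch of the matching step described above: chop the
# synthesized "road map" energy envelope into short frames and, for
# each one, pick the real field-recorded fragment with the closest
# energy. Frame size and metric are illustrative assumptions.
import numpy as np

def rms_envelope(audio, frame=512):
    """Per-frame RMS energy of a mono signal."""
    n = len(audio) // frame
    frames = audio[: n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def match_fragments(target_env, database, frame=512):
    """Assemble output audio by nearest-neighbor lookup on energy.

    target_env -- per-frame energies from the synthesized road map
    database   -- list of short mono fragments of field-recorded cloth
    """
    db_energy = [rms_envelope(clip, frame).mean() for clip in database]
    output = []
    for t in target_env:
        best = min(range(len(database)), key=lambda i: abs(db_energy[i] - t))
        output.append(database[best])
    return np.concatenate(output)
```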

Via: PhysOrg

Source: Cornell Chronicle

Cornell students build spider-like robotic chalkboard eraser out of Lego, magnets, fun (video)

While you were trying to pass Poetry 101, Cornell seniors Le Zhang and Michael Lathrop were creating an apple-polishing Lego robot that automatically erases your prof's chalkboard. A final class project, the toady mech uses an Atmel brain, accelerometers for direction control, microswitches to sense the edge of the board, magnets to stay attached and hot glue to keep the Lego from flying apart. As the video below the break shows, it first aligns itself vertically, then moves to the top of the board, commencing the chalk sweeping and turning 180 degrees each time its bumpers sense the edge. The duo are thinking of getting a patent, and a commercialized version would allow your teacher to drone on without the normal slate-clearing pause. So, if designing a clever bot and saving their prof from manual labor doesn't get the students an 'A', we don't know what will.
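
The sweep the video shows boils down to a small state machine; here's a hypothetical sketch with placeholder sensor and motor calls, since the students' Atmel firmware isn't published here.

```python
# Toy state machine for the sweep pattern described above: run along
# the board, and when a bumper trips at the edge, turn 180 degrees and
# drop down one eraser-width. Sensor/motor calls are placeholders.
def sweep_board(bumper_hit, drive, turn_180, drop_row, rows):
    """Boustrophedon sweep: `rows` passes across the chalkboard."""
    for row in range(rows):
        while not bumper_hit():   # microswitch bumpers sense the edge
            drive()               # keep erasing along the current pass
        turn_180()                # about-face, checked by accelerometer
        if row < rows - 1:
            drop_row()            # shift down one eraser-width

# Simulated usage: pretend each pass trips the bumper after 10 steps.
position = 0

def fake_bumper():
    global position
    position += 1
    return position >= 10

def fake_turn():
    global position
    position = 0

sweep_board(fake_bumper, drive=lambda: None, turn_180=fake_turn,
            drop_row=lambda: print("next row"), rows=5)
```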

Source: Yin Yang Robotics

Google simulates the human brain with 1000 machines, 16000 cores and a love of cats

Don't tell Google, but its latest X lab project is something performed by the great internet public every day. For free. Mountain View's secret lab stitched together 1,000 computers totaling 16,000 cores to form a neural network with over 1 billion connections, and sent it to YouTube looking for cats. Unlike the popular human time-sink, this was all in the name of science: specifically, simulating the human brain. The neural machine was presented with 10 million images taken from random videos, and went about teaching itself what our feline friends look like. Unlike similar experiments, where some manual guidance and supervision is involved, Google's pseudo-brain was given no such assistance.

It wasn't just about cats, of course -- the broader aim was to see whether computers can learn to detect faces without labeled images. After studying the large set of image data, the cluster showed that it indeed could, and it also developed concepts for human body parts and -- of course -- cats. Overall, the network managed 15.8 percent accuracy in recognizing 20,000 object categories, which the researchers claim is a 70 percent jump over previous studies. Full details of the hows and whys will be presented at a forthcoming conference in Edinburgh.
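
The real system was a sparse deep autoencoder spread across those 16,000 cores; as a minimal sketch of the same label-free objective -- reconstruct the input, so features emerge without anyone tagging a single image "cat" -- here's a single-layer autoencoder in plain numpy. Sizes and the learning rate are illustrative only.

```python
# Minimal single-layer autoencoder: learn features by reconstructing
# the input, with no labels anywhere. A toy stand-in for Google's
# far larger sparse deep autoencoder; dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((256, 64))            # 256 fake image patches, 64 pixels each
W = rng.normal(0, 0.1, (64, 16))     # 16 learned features
lr = 0.1

for step in range(500):
    H = np.tanh(X @ W)               # encode
    X_hat = H @ W.T                  # decode with tied weights
    err = X_hat - X                  # reconstruction error, no labels used
    dH = err @ W * (1 - H ** 2)      # backprop through the tanh encoder
    grad = X.T @ dH + err.T @ H      # tied-weight gradient (both paths)
    W -= lr * grad / len(X)

print("final reconstruction MSE:", float((err ** 2).mean()))
```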

Via: SMH.com.au

Source: Cornell University, New York Times, Official Google Blog

Intel designs neuromorphic chip concept, our android clones are one step closer

Most neurochip projects have been designed around melding the brain and technology in the most literal sense. Intel's Circuit Research Laboratory, however, is betting that we might get along just fine with neuromorphic (brain-like) computers. By using spin valves that only have to respond to the spin of an electron, as well as memristors that provide very efficient permanent storage, the researchers believe they have a design that operates on the same spikes of energy our noggins use, rather than a non-stop stream. Along with drawing power at levels closer to those of our brains, the technique allows for the very subtle, massively parallel computations our minds manage every day but which are still difficult to reproduce with traditional PCs. There's still a long path to take before we're reproducing Prometheus' David (if we want to), but we've at least started walking in the right direction.
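
Intel's circuit-level details live in the paper, but the "spikes of energy" idea is commonly modeled in software with leaky integrate-and-fire neurons: charge toward the inputs, leak over time, and emit a discrete spike only when a threshold is crossed. Here's a minimal sketch with illustrative constants, not Intel's.

```python
# Leaky integrate-and-fire neuron: the standard software model of
# spike-based computation -- energy is spent per event, not as a
# continuous stream. Constants are illustrative only.
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Yield 1 on a spike, 0 otherwise, for each input current step."""
    potential = 0.0
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            potential = 0.0                     # reset after firing
            yield 1
        else:
            yield 0

spikes = list(lif_neuron([0.2, 0.2, 0.5, 0.9, 0.1, 0.0, 0.8]))
print(spikes)  # sparse output: mostly silent, with one spike
```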

Via: MIT Technology Review

Source: Intel proposal (Cornell University)