Carnegie Mellon computer learns common sense through pictures, shows what it’s thinking

Never Ending Image Learner

Humans have a knack for making visual associations, but computers don't have it so easy; we often have to tell them what they see. Carnegie Mellon's recently launched Never Ending Image Learner (NEIL) supercomputer bucks that trend by forming those connections itself. Building on the university's earlier NELL research, the 200-core cluster scours the internet for images and defines objects based on the common attributes that it finds. It knows that buildings are frequently tall, for example, and that ducks look like geese. While NEIL is occasionally prone to making mistakes, it's also transparent -- a public page lets you see what it's learning, and you can suggest queries if you think there's a gap in the system's logic. The project could eventually lead to computers and robots with a much better understanding of the world around them, even if they never quite gain human-like perception.
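For the curious, the core idea -- mining relationships from attributes that keep showing up together across many images -- can be sketched in a few lines. This is a toy illustration of association mining, not NEIL's actual pipeline; the function name, data and threshold are our own invention:

```python
from collections import Counter
from itertools import combinations

def mine_relationships(labeled_images, min_support=0.5):
    """Count how often attribute pairs co-occur across images, and keep
    pairs seen together in at least `min_support` of the images that
    contain either attribute (a crude association score)."""
    attr_counts = Counter()
    pair_counts = Counter()
    for attrs in labeled_images:  # attrs: set of labels detected in one image
        attr_counts.update(attrs)
        pair_counts.update(combinations(sorted(attrs), 2))
    relationships = []
    for (a, b), together in pair_counts.items():
        either = attr_counts[a] + attr_counts[b] - together
        if together / either >= min_support:
            relationships.append((a, b, round(together / either, 2)))
    return relationships

images = [
    {"building", "tall"}, {"building", "tall"}, {"building", "glass"},
    {"duck", "water"}, {"duck", "goose-like"}, {"duck", "goose-like"},
]
print(mine_relationships(images))
# weak pairings like building/glass fall below the support threshold
```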


Via: TG Daily

Source: NEIL, Carnegie Mellon University

Study: Facebook users sharing more personal info despite increased privacy concerns


Carnegie Mellon University conducted a study following more than 5,000 Facebook users over six years, from 2005 to 2011, and found that changes in the social network's privacy policies caused users to share more -- not less -- personal data. Lest you think this means that users suddenly trusted the site more, Carnegie Mellon says that Facebookers became more and more protective of their personal details as the social network grew in membership -- and that the uptick in shared information is a result of increasingly granular privacy settings. If you recall, Facebook introduced new in-depth privacy controls in 2010, and the study found that the release of these new settings corresponded to users sharing more personal data, both within their network of friends and with strangers and third-party applications.

It's been quite some time since the new privacy policy was introduced, but the university says the sample group didn't reduce the amount of info shared with non-friends on the network, even as of 2011. The takeaway? Well, it's safe to say that more privacy controls don't equal more vigilance in protecting personal data, and it's certainly not a stretch to call Facebook's settings confusing. The researchers' comparison of the struggle for privacy to the eternal plight of Sisyphus? That might be a touch more dramatic.


Via: Huffington Post

Source: Journal of Privacy and Confidentiality

Robot Hall of Fame inducts Big Dog, PackBot, Nao and WALL-E (video)


It's the sort of ceremony that's so magical it can only occur on even-numbered years. Inventors, educators, entertainers, college students and media folk gathered at the Carnegie Science Center in Pittsburgh, PA tonight for the 2012 inductions to the Robot Hall of Fame, a Carnegie Mellon-sponsored event created to celebrate the best of our mechanical betters.

This year, the field included four categories, judged by both a jury of 107 writers, designers, entrepreneurs and academics and the public at large, each faction constituting half the voting total. The show kicked off, however, with the induction of 2010 winners: the Spirit and Opportunity Mars rovers, the da Vinci Surgical System, iRobot's Roomba, the Terminator and Huey, Dewey and Louie, a trio of robots from 1971's Silent Running.

The first 'bot to secure its spot in the class of 2012 was the programmable humanoid Nao, from Aldebaran Robotics, which beat out the iRobot Create and Vex Robotics Design System in the Educational category. The PackBot military robot from iRobot took the Industrial and Service category, beating out the Kiva Mobile Robotic Fulfillment System and Woods Hole Oceanographic Institution's Jason. Boston Dynamics' Big Dog ran over some stiff competition in the form of Willow Garage's PR2 and NASA's Robonaut to win the Research title. And WALL-E triumphed over doppelganger Johnny Five and the Jetsons' Rosie in the Entertainment category. Relive the festivities in four minutes after the break.

Continue reading Robot Hall of Fame inducts Big Dog, PackBot, Nao and WALL-E (video)


Robot Hall of Fame inducts Big Dog, PackBot, Nao and WALL-E (video) originally appeared on Engadget on Tue, 23 Oct 2012 23:41:00 EDT. Please see our terms for use of feeds.


Acoustic barcodes store data in sound, go on just about anything (video)


Technologies like NFC, RFID and QR codes are quickly becoming a normal part of everyday life, and now a group from Carnegie Mellon University has a fresh take on close-quarters data it calls acoustic barcodes. It involves physically etching a barcode-like pattern onto almost any surface, so it produces sound when something's dragged across it -- a fingernail, for example. A computer is then fed that sound through a microphone, recognizes the waveform and executes a command based on it. By altering the space between the grooves, it's possible to create endless unique identifiers that are associated with different actions.
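To get a feel for the trick, here's a toy decoder: given the timestamps of detected clicks (one per groove), the spacing between grooves reads out as bits. This is our own simplification -- it assumes a constant swipe speed and a made-up function name, where the real system has to recognize waveforms from fingers that speed up and slow down:

```python
def decode_acoustic_barcode(click_times_ms):
    """Turn click timings into bits: a gap wider than the midpoint
    between the narrowest and widest gap reads as 1, narrower as 0."""
    gaps = [b - a for a, b in zip(click_times_ms, click_times_ms[1:])]
    if not gaps:
        return ""
    threshold = (min(gaps) + max(gaps)) / 2
    return "".join("1" if g > threshold else "0" for g in gaps)

# Clicks at 0, 10, 30, 40 and 60 ms give gaps of 10, 20, 10, 20 ms,
# which decode as the identifier "0101" at a constant swipe speed.
print(decode_acoustic_barcode([0, 10, 30, 40, 60]))
```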

It's easy to see how smartphones could take advantage of this -- not that we recommend dragging your new iPhone over ridged surfaces -- but unlike the technologies mentioned earlier, not all potential applications envisage a personal reading device. Dot barcodes around an area, install the sound processing hardware on site, and you've got yourself an interactive space primed for breaking freshly manicured nails. We're pretty impressed by the simplicity of the concept, and the team does a good job of presenting scenarios for implementing it, which you can see in the video below. And, if you'd like to learn a little more about the idea or delve into the full academic paper, the source links await you.

[Thanks, Julia]

Continue reading Acoustic barcodes store data in sound, go on just about anything (video)


Acoustic barcodes store data in sound, go on just about anything (video) originally appeared on Engadget on Sat, 13 Oct 2012 00:57:00 EDT.

Via: Hack a Day

Source: Chris Harrison (1), (2) (PDF)

Polaris rover will travel to the Moon in search of polar resources, try to survive the long lunar night


The Polaris rover may look a little punk rock, but that mohawk is no fashion statement. It's for catching solar rays, which shine almost horizontally at the Moon's north pole, a location Polaris is due to explore before 2016. Built by Astrobotic Technology, it'll be ferried aboard the SpaceX Falcon 9 rocket to our celestial companion, where it'll drill into the surface in search of ice. The company, spun out of Carnegie Mellon University, hopes to identify resources at a depth of up to four feet that could be used to support manned Moon expeditions in the future. The plan is to complete the mission during a 10-day window of sunlight, digging at up to 100 sites over a three-mile stretch. However, if it can live through the harsh two-week-long nights, then it may continue to operate "indefinitely." NASA is backing the project, providing ice-prospecting gear and money, although Astrobotic hopes to get more cash for its work -- over $20 million from Google's Lunar X Prize. Right now, Polaris is a flight prototype and there are still improvements to be made, mainly on the software side, before it tackles the rough terrain. Check out the short video of its public unveiling below, although we don't think the soundtrack quite matches the hairdo.

Continue reading Polaris rover will travel to the Moon in search of polar resources, try to survive the long lunar night


Polaris rover will travel to the Moon in search of polar resources, try to survive the long lunar night originally appeared on Engadget on Tue, 09 Oct 2012 15:45:00 EDT.

Via: Gizmag

Robot Hall of Fame voting begins for class of 2012, Johnny 5 learns where BigDogs sit

It's that time again: time for Carnegie Mellon to roll out the red carpet and welcome the crème de la crème of the robotics world into its halls. Since 2003 the school has been selecting the best of the best and inducting them into the Robot Hall of Fame. Past honorees have included everything from LEGO Mindstorms to the Terminator. This year's list of nominees is no less impressive, with celebrity bots Johnny 5 and WALL-E pitted against each other in the entertainment category, while NASA's Robonaut takes on the PR2 and BigDog under the banner of research bots. There will also be two other inductees awarded a spot in the hall: one in the consumer and education category and one in the industrial and service field. Best of all, for the first time ever, Carnegie Mellon is letting the public vote on the inductees. And, while PETMAN was snubbed yet again, he's not letting that get him down -- the Boston Dynamics biped just keeps on struttin'. Hit up the source link to cast your vote before the September 30th deadline and check back on October 23rd to see who's granted a podium speech.

Continue reading Robot Hall of Fame voting begins for class of 2012, Johnny 5 learns where BigDogs sit


Robot Hall of Fame voting begins for class of 2012, Johnny 5 learns where BigDogs sit originally appeared on Engadget on Tue, 21 Aug 2012 11:11:00 EDT.

Source: Robot Hall of Fame

Carnegie Mellon smart headlight prototype blacks out raindrops for clearer view of the road


Researchers from Carnegie Mellon have developed a prototype smart headlight that blots out individual drops of rain or snow -- improving vision by up to 90 percent. Built from an off-the-shelf ViewSonic DLP projector, a quad-core Intel Core i7 PC and a GigE Point Grey Flea3 camera, the Rube Goldberg-esque rig starts by imaging raindrops as they arrive at the top of its view. That signal goes to a processing unit, which uses a predictive theory developed by the team to estimate each drop's path toward the road. Finally, the projector -- co-located with the camera thanks to a beamsplitter, much like modern digital 3D rigs -- transmits a beam with dark voids matching the predicted paths. The result? Light never hits the falling particles, creating the illusion of a nearly precipitation-free view of the road -- at least in the lab. So far, the whole process takes about 13 milliseconds, but the researchers say that in an actual car, with many more drops to track, it would need to be roughly ten times quicker. At that speed, 90 percent of the light located 13 feet in front of the headlights would pass through; even at just triple the current speed, drivers would get a 70 percent better view. To see if this tech might have a snowflake's chance of making it out of the lab, go past the break for all the videos.
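The prediction step boils down to dead reckoning: given the system's latency and a drop's fall speed, work out which pixel the projector should leave dark by the time the light gets there. A toy version -- our own function, numbers and straight-vertical-fall assumption, not the team's predictive theory -- looks like this:

```python
def pixels_to_blank(drops, system_latency_s, fall_speed_mps, pixels_per_meter):
    """Given drops detected at the top of the camera frame as (row, col)
    pixel coordinates, predict how far each falls during the system's
    reaction time and return the pixels the projector should leave dark."""
    fall_px = fall_speed_mps * system_latency_s * pixels_per_meter
    return [(row + round(fall_px), col) for row, col in drops]

# A drop falling ~9 m/s, a 13 ms camera-to-projector latency and a
# 1000 px-per-meter mapping: blank the pixel 117 rows below detection.
print(pixels_to_blank([(0, 320)], 0.013, 9.0, 1000))
```

Cutting the latency tenfold shrinks that 117-pixel lead to about 12, which is why faster processing lets so much more of the beam through.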

Continue reading Carnegie Mellon smart headlight prototype blacks out raindrops for clearer view of the road

Carnegie Mellon smart headlight prototype blacks out raindrops for clearer view of the road originally appeared on Engadget on Wed, 04 Jul 2012 13:17:00 EDT.

Via: PhysOrg

Source: Carnegie Mellon

Carnegie Mellon researchers develop robot that takes inventory, helps you find aisle four


Fed up with wandering through supermarket aisles in an effort to cross that last item off your shopping list? Researchers at Carnegie Mellon University's Intel Science and Technology Center in Embedded Computing have developed a robot that could ease your pain and help store owners keep items in stock. Dubbed AndyVision, the bot is equipped with a Kinect sensor, image processing and machine learning algorithms, 2D and 3D images of products and a floor plan of the shop in question. As the mechanized worker roams around, it determines if items are low or out of stock and if they've been incorrectly shelved. Employees then receive the data on iPads and a public display updates an interactive map with product information for shoppers to peruse. The automaton is currently meandering through CMU's campus store, but it's expected to wheel out to a few local retailers for testing sometime next year. Head past the break to catch a video of the automated inventory clerk at work.
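The bookkeeping behind that is conceptually simple: compare what the robot sees in each shelf slot against the store's floor plan, and flag anything empty or in the wrong place. A toy sketch -- slot names, products and the function itself are our own illustration, not CMU's code:

```python
def shelf_report(planogram, detected):
    """Compare detected shelf contents against the store plan: return
    slots that are empty and items sitting in the wrong slot. Both
    arguments map slot ids to product names (None = nothing seen)."""
    out_of_stock = [slot for slot, item in planogram.items()
                    if not detected.get(slot)]
    misplaced = [(slot, seen) for slot, seen in detected.items()
                 if seen and planogram.get(slot) not in (None, seen)]
    return out_of_stock, misplaced

planogram = {"A1": "cereal", "A2": "coffee", "A3": "tea"}
detected  = {"A1": "cereal", "A2": None, "A3": "coffee"}
print(shelf_report(planogram, detected))  # (['A2'], [('A3', 'coffee')])
```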

Continue reading Carnegie Mellon researchers develop robot that takes inventory, helps you find aisle four

Carnegie Mellon researchers develop robot that takes inventory, helps you find aisle four originally appeared on Engadget on Sat, 30 Jun 2012 19:53:00 EDT.

Source: MIT Technology Review

New shear touch technology lets you skip a double-tap, push your device around (video)


Almost every touchscreen on the market today can only register your finger input as coordinates; that's fine for most uses, but it leads to a lot of double-taps and occasionally convoluted gestures. A pair of researchers at Carnegie Mellon University, Chris Harrison and Scott Hudson, have suggested that shear touch might be a smarter solution. Instead of gliding over fixed glass, your finger could handle secondary tasks by pushing in a specific direction, or simply pushing harder, on a sliding display. Among the many examples of what shear touch could do, the research duo has raised the possibility of skipping through music by pushing left and right, or scrolling more slowly through your favorite website with a forceful dragging motion. The academic paper is still a long way from producing a shipping device, although the study's partial funding by a Microsoft doctoral fellowship hints at one direction the technology might go. You can take a peek at the future in a video after the jump -- just don't expect a tablet-based Van Gogh this soon.
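Once hardware reports how far the display has been pushed under the finger, dispatching on that shear vector is straightforward. A toy dispatcher -- the thresholds, units and gesture names are our own guesses, not the researchers' design:

```python
import math

def shear_gesture(dx, dy, dead_zone=2.0):
    """Map the display's sideways displacement under the finger (in mm)
    to a coarse gesture. Pushes shorter than `dead_zone` are ignored so
    ordinary taps don't accidentally trigger shear actions."""
    if math.hypot(dx, dy) < dead_zone:
        return "tap"
    if abs(dx) >= abs(dy):  # mostly horizontal push: skip through music
        return "next_track" if dx > 0 else "previous_track"
    return "scroll_down" if dy > 0 else "scroll_up"  # mostly vertical

print(shear_gesture(5.0, 1.0))   # firm push right -> next_track
print(shear_gesture(0.5, 0.5))   # too small to count -> tap
```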

[Thanks, Chris]

Continue reading New shear touch technology lets you skip a double-tap, push your device around (video)

New shear touch technology lets you skip a double-tap, push your device around (video) originally appeared on Engadget on Fri, 11 May 2012 01:13:00 EDT.

Source: Chris Harrison