Nanofiber film could lead to electronic skin

If you want electronic skin or other transparent wearable devices, you need to send a current through that skin. However, it's hard to make something that's both conductive and transparent -- and that's where a team of American and Korean researchers...

Study reveals AI systems are as smart as a 4-year-old, lack common sense

It'll take a long time before we see a J.A.R.V.I.S. in real life -- University of Illinois at Chicago researchers put MIT's ConceptNet 4 AI through the verbal portions of a children's IQ test, and rated its apparent relative intelligence as that of a 4-year-old. Despite an excellent vocabulary and ability to recognize similarities, the lack of basic life experience leaves one of the best AI systems unable to answer even easy "why" questions. Those sound simple, but not even the famed Watson supercomputer is capable of human-like comprehension, and research lead Robert Sloan believes we're far from developing one that is. We hope scientists get cracking and conjure up an AI worthy of our sci-fi dreams... so long as it doesn't pull a Skynet on humanity.
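
The gap described above is between questions a knowledge base can answer from stored facts (vocabulary, similarities) and "why" questions that need causal understanding. Here's a minimal, hypothetical sketch of that gap -- it is not ConceptNet's actual data or API, and every assertion, function and relation name below is invented for illustration:

```python
# Toy sketch (not ConceptNet's real data or API): a tiny assertion store that
# handles a "similarities"-style question by counting shared facts, but comes
# up empty on a "why" question because it holds no causal knowledge.

# (concept, relation, concept) triples, in the spirit of common-sense assertions
ASSERTIONS = [
    ("apple", "IsA", "fruit"),
    ("banana", "IsA", "fruit"),
    ("apple", "UsedFor", "eating"),
    ("banana", "UsedFor", "eating"),
    ("house", "UsedFor", "shelter"),
]

def properties(concept):
    """Everything the store asserts about a concept, as (relation, value) pairs."""
    return {(rel, obj) for subj, rel, obj in ASSERTIONS if subj == concept}

def how_are_they_alike(a, b):
    """'Similarities' item: report shared assertions, if any."""
    shared = properties(a) & properties(b)
    if shared:
        return [f"both {rel} {obj}" for rel, obj in shared]
    return ["no shared assertions found"]

def why(concept, outcome):
    """'Why' item: needs a causal chain, which this toy store simply lacks."""
    causes = [(s, o) for s, rel, o in ASSERTIONS
              if rel == "Causes" and s == concept and o == outcome]
    return causes or "no causal knowledge available"

if __name__ == "__main__":
    print(how_are_they_alike("apple", "banana"))  # shared IsA / UsedFor facts
    print(why("house", "shelter"))                # falls flat without 'Causes' edges
```

The point of the toy is the asymmetry: a handful of stored assertions is enough to say how an apple and a banana are alike, but with no causal edges in the store, a "why" question has nothing to chain together.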

[Image credit: Kenny Louie]

Via: ExtremeTech

Source: University of Illinois at Chicago

Google sponsors research that outs faux product review groups, calculates 'spamicity' and more

Ever consulted a crowdsourced review for a product or service before committing your hard-earned funds to the cause? Have you wondered how legit the opinions you read really are? Well, it seems that help is on the way to uncover paid opinion spamming and KIRF reviews. Researchers at the University of Illinois at Chicago have released detailed calculations in the report Spotting Fake Reviewer Groups in Consumer Reviews -- an effort aided by a Google Faculty Research Award. Exactly how does this work, you ask? The GSRank (Group Spam Rank) algorithm scores suspected spammers by examining the behavior of individual reviewers and of the group as a whole.

Factors such as content similarity, reviewing products early (when spam is most effective), the ratio of the group's size to the total number of reviewers and the number of products the group has been in cahoots on are a few of the data points that go into the analysis. The report states, "Experimental results showed that GSRank significantly outperformed the state-of-the-art supervised classification, regression, and learning to rank algorithms." Here's hoping this research gets wrapped into a nice software application, but for now, review mods may want to brush up on their advanced math skills. If you're curious about the full explanation, hit the source link for the full-text PDF.
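
Out of curiosity about how indicators like those might fit together, here's a minimal, hypothetical sketch in Python. It is not the paper's GSRank algorithm -- the real method defines the indicators formally and ranks groups, members and products against each other -- and the field names, weights and scaling below are invented for illustration:

```python
# Toy illustration only: a simplified group "spamicity" score built from the
# kinds of indicators named above (content similarity, early reviews, group
# size ratio, products reviewed together). Weights and names are made up.

from dataclasses import dataclass
from difflib import SequenceMatcher
from itertools import combinations
from statistics import mean

@dataclass
class Review:
    reviewer: str
    product: str
    text: str
    days_after_launch: int  # how soon after launch the review appeared

def content_similarity(reviews):
    """Average pairwise text similarity among the group's reviews (0..1)."""
    pairs = list(combinations(reviews, 2))
    if not pairs:
        return 0.0
    return mean(SequenceMatcher(None, a.text, b.text).ratio() for a, b in pairs)

def early_review_ratio(reviews, window_days=30):
    """Fraction of the group's reviews posted soon after product launch."""
    return mean(1.0 if r.days_after_launch <= window_days else 0.0 for r in reviews)

def group_size_ratio(group, all_reviewers_of_product):
    """Group members as a share of everyone who reviewed the product."""
    return len(group) / max(len(all_reviewers_of_product), 1)

def products_in_common(reviews):
    """How many products the group has worked on together (scaled to 0..1)."""
    return min(len({r.product for r in reviews}) / 5.0, 1.0)

def toy_group_spamicity(group, reviews, all_reviewers_of_product):
    """Weighted blend of the indicators; the weights are arbitrary for this demo."""
    signals = {
        "content_similarity": content_similarity(reviews),
        "early_reviews": early_review_ratio(reviews),
        "size_ratio": group_size_ratio(group, all_reviewers_of_product),
        "products_together": products_in_common(reviews),
    }
    weights = {"content_similarity": 0.35, "early_reviews": 0.25,
               "size_ratio": 0.2, "products_together": 0.2}
    return sum(weights[k] * v for k, v in signals.items()), signals

if __name__ == "__main__":
    group = {"alice", "bob", "carol"}
    reviews = [
        Review("alice", "widget-x", "Best widget ever, five stars!", 3),
        Review("bob", "widget-x", "Best widget ever, 5 stars!!", 5),
        Review("carol", "widget-x", "Best widget, five stars", 2),
    ]
    score, signals = toy_group_spamicity(group, reviews,
                                         {"alice", "bob", "carol", "dave"})
    print(f"toy spamicity ~ {score:.2f}", signals)
```

Running the demo on three near-identical early reviews of the same product yields a fairly high toy score, which is roughly the intuition behind the group-level signals the paper describes.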

Via: The Verge

Source: University of Illinois at Chicago (PDF)