Mars-like soil makes super strong bricks when compressed

Elon Musk's vision of Mars colonization has us living under geodesic domes made of carbon fiber and glass. But, according to a study recently published in the journal Scientific Reports, those domes may end up being made of brick, pressed from the Ma...

Researchers discover bacteria can communicate electrically, like neurons

Bacteria may be unicellular but that doesn't mean they're complete loners. They often congregate in (relatively) large colonies, not unlike human cities. In fact, a team of researchers from the University of California, San Diego has recently discov...

Software Analyzes Face Videos to Determine Pain Levels

Pain Assessment Computer Vision Algorithm

Researchers at the University of California, San Diego School of Medicine have developed a computer vision algorithm that assesses patients' pain levels from their facial expressions after surgical procedures.

“On a scale from 1 to 10, how badly does it hurt?” is the question clinicians ask both children and adults whenever they need to assess pain. However, we all perceive and report pain differently, and a patient's answer does not always reflect the real level of discomfort. A more reliable reading would help medical personnel decide how to alleviate the pain. Until we get our own Baymax healthcare companion, a pain-assessing computer vision algorithm might be our best option.

To determine whether the software can correctly identify pain levels, the researchers tested it on children aged 5 to 18 who had undergone a laparoscopic appendectomy. From video of each child's facial expressions, the software produced pain estimates that were then compared with the children's own ratings and with the assessments of their parents and nurses. The algorithm performed about as well as the nurses, and matched parents when estimating pain severity, so it may not take long before it is used on a larger scale.

Here is a fragment of the abstract published in the journal Pediatrics:

“METHODS: A CVML-based model for assessment of pediatric postoperative pain was developed from videos of 50 neurotypical youth 5 to 18 years old in both endogenous/ongoing and exogenous/transient pain conditions after laparoscopic appendectomy. Model accuracy was assessed for self-reported pain ratings in children and time since surgery, and compared with by-proxy parent and nurse estimates of observed pain in youth.

RESULTS: Model detection of pain versus no-pain demonstrated good-to-excellent accuracy (Area under the receiver operating characteristic curve 0.84–0.94) in both ongoing and transient pain conditions. Model detection of pain severity demonstrated moderate-to-strong correlations (r = 0.65–0.86 within; r = 0.47–0.61 across subjects) for both pain conditions. The model performed equivalently to nurses but not as well as parents in detecting pain versus no-pain conditions, but performed equivalently to parents in estimating pain severity. Nurses were more likely than the model to underestimate youth self-reported pain ratings. Demographic factors did not affect model performance.”
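
For readers who want to see how numbers like these are computed, here is a minimal sketch of scoring such a model against children's self-reports: an ROC AUC for the pain-versus-no-pain decision and a Pearson correlation for pain severity. The data below are made-up placeholders, not the study's, and the helper functions come from scikit-learn and SciPy rather than from the paper.

```python
# Hypothetical example: scoring a pain-detection model with the metrics the abstract reports.
# All values are invented for illustration; only the choice of metrics mirrors the paper.
from sklearn.metrics import roc_auc_score
from scipy.stats import pearsonr

# Per-video ground truth: 1 = child self-reported pain, 0 = no pain.
self_reported_pain = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]

# Model's pain probability for each video (placeholder values).
model_scores = [0.91, 0.74, 0.20, 0.88, 0.35, 0.10, 0.66, 0.42, 0.80, 0.15]

# Pain versus no-pain, scored as area under the ROC curve
# (the paper reports AUCs of 0.84-0.94).
auc = roc_auc_score(self_reported_pain, model_scores)

# Severity: correlate model estimates with 0-10 self-report ratings
# (the paper reports r = 0.65-0.86 within subjects).
self_reported_severity = [7, 5, 1, 8, 2, 0, 4, 3, 6, 1]
model_severity = [6.2, 4.8, 1.5, 7.9, 2.7, 0.8, 3.9, 3.1, 5.5, 1.2]
r, p_value = pearsonr(self_reported_severity, model_severity)

print(f"pain vs. no-pain AUC: {auc:.2f}")
print(f"severity correlation: r = {r:.2f} (p = {p_value:.3f})")
```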

Hospitals are not the only environment where this could prove useful, so it may not be long before such software makes its way into our homes as well. One open question is whether a patient could fake a facial expression and trick the software.


UCSD engineers develop mini wide-angle lens that’s ten times smaller than a regular one

What you see here, dear readers, is the image of a fiber-coupled monocentric lens camera that was recently developed by engineers from the University of California, San Diego. The researchers involved in the project say this particular miniature wide-angle lens is one-tenth of the size of more traditional options, such as the Canon EF 8-15mm f/4L pictured above. Don't let the sheer magnitude (or lack thereof) of this glass fool you, however: UCSD gurus note that the newly developed optics can easily mimic the performance of regular-sized lenses when capturing high-resolution photos. "It can image anything between half a meter and 500 meters away (a 100x range of focus) and boasts the equivalent of 20/10 human vision (0.2-milliradian resolution)," according to engineers. As for us, well, we can't wait to see this technology become widely adopted -- don't you agree?
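
As a rough back-of-the-envelope check on what that 0.2-milliradian figure means in practice (our own arithmetic, not the researchers'): for small angles, the smallest resolvable feature is approximately the angular resolution multiplied by the distance.

```python
# Back-of-envelope: what 0.2 milliradian of angular resolution buys you at various distances.
# For small angles, resolvable feature size ≈ angular resolution (rad) × distance.
angular_resolution_rad = 0.2e-3  # 0.2 milliradian, as quoted by the engineers

for distance_m in (0.5, 10.0, 500.0):  # near focus limit, mid-range, far focus limit
    feature_m = angular_resolution_rad * distance_m
    print(f"at {distance_m:>6.1f} m, smallest resolvable feature ≈ {feature_m * 100:.2f} cm")
```

At the near end of the quoted focus range that works out to about a tenth of a millimeter, and at 500 meters to roughly 10 centimeters.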

Source: UCSD Jacobs

Researchers create algorithms that help lithium-ion batteries charge two times faster

Researchers at the University of California San Diego have devised new algorithms that can cut lithium-ion battery charge times in half, help cells run more efficiently and potentially cut production costs by 25 percent. Rather than tracking battery behavior and health with the traditional technique of monitoring current and voltage, the team's mathematical models estimate where lithium ions are within cells for more precise data. With the added insight, the team can more accurately gauge battery longevity and control charging efficiency. The group was awarded $460,000 from the Department of Energy's ARPA-E research arm to further develop the algorithm and accompanying tech with automotive firm Bosch and battery manufacturer Cobasys, which both received the remainder of a $9.6 million grant. Wondering if the solution will ever find its way out of the lab? According to co-lead researcher Scott Moura, it'll see practical use: "This technology is going into products that people will actually use."
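
The article doesn't detail the team's actual models, but the general idea of tracking an internal battery state instead of relying on current and voltage alone can be illustrated with a toy sketch: a one-variable estimate of normalized lithium concentration (essentially state of charge), advanced by coulomb counting and corrected by a Kalman-style gain whenever a voltage reading arrives. Every number here, from the linear open-circuit-voltage curve to the noise levels, is invented for illustration and is not the UCSD algorithm.

```python
import random

# Toy "true" battery the observer does not get to see directly.
TRUE_CAPACITY_AS = 2.3 * 3600   # 2.3 Ah cell, in ampere-seconds
MODEL_CAPACITY_AS = 2.5 * 3600  # the observer's model is deliberately a bit wrong
R_INTERNAL = 0.05               # ohms, invented
DT = 1.0                        # seconds per step

def ocv(c):
    """Invented linear open-circuit-voltage curve: 3.0 V empty, 4.2 V full."""
    return 3.0 + 1.2 * c

# Observer state: estimated normalized lithium concentration and its variance.
c_est, p_est = 0.5, 0.1         # start from a poor guess; the true cell starts at 0.2
c_true = 0.2
process_var, meas_var = 1e-7, 1e-3

current = 2.3                   # constant 1C charging current, in amps

for step in range(1800):        # 30 minutes of charging
    # True cell evolves and produces a noisy terminal voltage.
    c_true = min(1.0, c_true + current * DT / TRUE_CAPACITY_AS)
    v_meas = ocv(c_true) + R_INTERNAL * current + random.gauss(0, 0.01)

    # Predict: coulomb counting with the (imperfect) model capacity.
    c_pred = c_est + current * DT / MODEL_CAPACITY_AS
    p_pred = p_est + process_var

    # Correct: compare the predicted terminal voltage with the measurement.
    h = 1.2                     # d(OCV)/dc for the linear toy curve
    innovation = v_meas - (ocv(c_pred) + R_INTERNAL * current)
    gain = p_pred * h / (h * p_pred * h + meas_var)
    c_est = c_pred + gain * innovation
    p_est = (1 - gain * h) * p_pred

print(f"true state of charge: {c_true:.3f}")
print(f"observer's estimate:  {c_est:.3f}")
```

The point of the toy is that the observer recovers the true internal state even though its capacity model is deliberately wrong, which is the kind of insight a voltage-only reading cannot provide.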

Originally appeared on Engadget on October 4, 2012. Source: UCSD Jacobs School of Engineering, via EurekAlert!

Google releases Course Builder, takes online learning down an open-source road

Google is well-known for projects with unexpected origins. It's almost natural, then, that the code Google used to build a web course has led to a full-fledged tool for online education. The open-source Course Builder project lets anyone make their own learning resources, complete with scheduled activities and lessons, if they've got some skill with HTML and JavaScript. There's also an avenue for live teaching or office hours: the obligatory Google+ tie-in lets educators announce Hangouts on Air sessions. Code is available immediately, although you won't need to be grading virtual papers to see the benefit. A handful of schools that include Stanford, UC San Diego and Indiana University are at least exploring the use of Course Builder in their own initiatives, which could lead to more elegant internet learning -- if also fewer excuses for slacking.

Originally appeared on Engadget on September 11, 2012. Source: Course Builder, via Google Research Blog and TechCrunch.