The World’s Brightest Fluorescent Material Gives Us SMILES

If you really want to stand out at a rave, you dress up in fluorescent colors and then stand under the black lights. But if you want to take your illumination to 11, there’s a new material that could up your visibility even more.

A team of chemists from Indiana University, the University of Copenhagen, and the University of Southern Mississippi has developed a unique material they claim is the brightest fluorescent substance on the planet.

Known as SMILES (small-molecule ionic isolation lattices), the material is made by creating a crystalline powder, then spinning it into a thin film or incorporating it into a synthetic polymer. While there are lots of highly fluorescent dyes out there, what makes this accomplishment special is that such dyes are notoriously hard to keep fluorescent in solid form, because their molecules pack too closely together, diminishing their brightness. By engineering a special donut-shaped molecule that keeps the dye molecules spaced farther apart, the scientists arrived at a much brighter material. According to their research, SMILES is 30 times brighter than cadmium selenide quantum dots, a fluorescent material used in medical imaging.

If commercialized, the technology could help to improve medical lasers, solar cells, 3D displays, and more. I’m not sure if rave accessories are on the list though. For more information on the SMILES research project, you can find their full paper over on Chem.

[via TechEBlog]

These Spray-on Touchscreens Work on 3D Surfaces

From our smartphones to our laptops to our cars to our kitchen appliances, touchscreens have turned up just about everywhere. But touchscreens are generally limited to flat, squared-off surfaces. Now, a team of engineers at the UK’s University of Bristol is demonstrating a technology that could enable touch-based interfaces on all kinds of surfaces.

The technique, known as “ProtoSpray,” allows for the creation of illuminated surfaces and touch sensors on three-dimensional shapes. The method combines multi-material 3D printing with a spray-on coating to add lighting and touch-sensitive interfaces to all kinds of shapes. By embedding electrodes into the 3D-printed object, then spraying on an electroluminescent material, ProtoSprayed objects can both light up and sense touch inputs. The electrodes are designed to both power the electroluminescence and act as capacitive input sensors.

There’s more information about the process available in the team’s paper, which can be found here. If you’re interested in experimenting with ProtoSpray objects for yourself, they’ve also posted an Instructables project, which focuses primarily on the devices’ electroluminescent properties, rather than touch sensitivity.


[via New Atlas]

This Device Can Synthesize Any Flavor on Your Tongue: Taste the Rainbow

Back in the 1990s, a team of engineers created a device called the iSmell. This unusual gadget used scent cartridges to simulate a wide variety of aromas, which could be triggered through computer code. The iSmell ultimately failed due to lack of market interest, but I always thought the idea that you could create anything from the smell of hot chocolate to pepperoni pizza just by mixing chemicals was pretty fascinating. Now a scientist in Japan has developed a similar technology, though this one simulates flavors rather than scents.

Homei Miyashita of Meiji University’s Miyashita Laboratory created this novel device, which he calls a “taste display.” It uses a set of five electrolytic flavor gels that are electrically stimulated to produce taste sensations on the user’s tongue. Electrophoresis is used to subtractively adjust the amounts of sweet (glycine), salty (sodium chloride), bitter (magnesium chloride), sour (citric acid), and umami (sodium glutamate) flavors that are released. In theory, these five base tastes could replicate just about any flavor you’d like, and the system has already been used to simulate flavors ranging from sushi to gummy candies.
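One toy way to picture the subtractive mixing idea described above (the channel names come from the article, but the model, function, and numbers here are my own illustration, not anything from the paper): represent a target taste as intensities on the five base channels, with each gel starting at full release and being suppressed down toward its target.

```python
# Hypothetical sketch of the five-channel taste model described above.
# The channel names come from the article; everything else is made up.
BASE_CHANNELS = ("sweet", "salty", "bitter", "sour", "umami")

def channel_suppression(target: dict) -> dict:
    """Since the display adjusts flavors subtractively, imagine each gel
    starting at full release (1.0), with electrophoresis pulling it down
    toward the target intensity. Returns the suppression per channel."""
    return {ch: round(1.0 - target.get(ch, 0.0), 2) for ch in BASE_CHANNELS}

# A made-up "gummy candy" profile: mostly sweet and sour.
gummy = {"sweet": 0.9, "sour": 0.6, "umami": 0.1}
print(channel_suppression(gummy))
# {'sweet': 0.1, 'salty': 1.0, 'bitter': 1.0, 'sour': 0.4, 'umami': 0.9}
```

The point of the sketch is just that five independent knobs, adjusted subtractively, span a large space of taste combinations.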

Now, taste alone isn’t enough to truly experience flavor, as our sense of smell is also a big part of that mechanism. Perhaps they could dust off the old plans for the iSmell and combine them, then sell this as some kind of gadget for dieting.

You can read the full research paper about the taste display in the Association for Computing Machinery’s digital library.

[via Syfy]

This Machine Will Probably Never Finish a Full Rotation

When it comes to telling time with an analog clock, gear reduction is a critical piece of the puzzle. Basically, a train of gears works in concert, each rotating more slowly than the one before it, so a single motor can drive the second, minute, and hour hands on a dial.

But rather than just reducing the speed of a gear a couple of times, engineer Daniel de Bruin decided to make what he says is the “biggest reduction gear in the universe.” Well, it may not be the largest in dimension, but it’s definitely the most complicated, with 100 gears, each gradually reducing the speed from the gear before it.

Each successive gear turns at exactly 1/10th of the speed of its predecessor. The result is a setup that would take literally eons before it would rotate its final gear.

According to the guys at Gizmodo, you’d have to turn the first gear

10,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000,000,000,
000,000,000,000,000,000,000,000,000,000

times to move the last gear just one position. Man, that’s a whole lot of zeros, and I definitely can’t count that high.
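The arithmetic checks out: with 100 stages each dividing the speed by 10, the total reduction compounds to 10^100, which is exactly a googol. A quick sanity check in Python (the machine itself is gloriously analog, of course):

```python
# Each of the 100 gear stages turns at 1/10 the speed of the one before it,
# so the overall reduction ratio compounds multiplicatively.
stages = 100
ratio_per_stage = 10

total_reduction = ratio_per_stage ** stages  # 10^100, a googol

# Turns of the first gear needed for one full turn of the last gear:
print(total_reduction)                 # 1 followed by 100 zeros
print(len(str(total_reduction)) - 1)   # count of zeros: 100
```

That matches the 100 zeros printed above, digit for digit.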

The machine’s creator explains the rationale behind his build: “Today at 14:52 I will be exactly 1 billion seconds old. To celebrate I build this machine that visualizes the number googol. That’s a 1 with a hundred zeros. A number that’s bigger than the atoms in the known universe. This machine has a gear reduction of 1 to 10 a hundred times. In order to get the last gear to turn once you’ll need to spin the first one a googol amount around. Or better said you’ll need more energy than the entire universe has to do that.”

If you’ve got a full hour to kill, you can watch the contraption get through the first few layers of gears…

[via Gizmodo]

Termitat is an Ant Farm for Termites

If you live in a house constructed with a wood frame or wooden siding, this is the last thing you ever want to buy. But if your dwelling is made from brick, concrete, stone, metal, or glass, read on… Do you want to see what termites can do to a piece of wood? With the Termitat, you can grow your very own termite community, and observe the destructive little buggers as they chew through a slice of wood. They’re like Sea Monkeys, but with wood instead of water!

The see-through container comes pre-loaded with a hunk of wood and an already-installed Pacific dampwood termite community. Simply give them a weekly dribble of water, and the wood-eaters will gradually bore their way through it for all to enjoy. The makers of the Termitat claim the acrylic desktop habitat is “escape-proof,” so theoretically nearby wood structures are safe from a termite invasion, but I’d rather not take the risk.

Still, the Termitat is a cool desktop novelty for those interested in insect behavior, or science in general. If you’re ready to raise your very own termite community, prices range from $139.95 to $159.95. Once your colony runs out of wood (in about 2 to 3 years), you can send it back to Termitat, and they’ll rejuvenate the environment for another $75.

[via r/shutupandtakemymoney]

Computer Physics Simulation Can Accurately Mimic Bread Being Pulled Apart

Computer graphics have come a very long way in the past couple of decades, offering up images that are increasingly difficult to distinguish from reality. Especially notable are the improvements in physics engines, which allow objects to move and behave more like they do in real life. One of the holy grails of CGI simulation is destroying objects so they break apart realistically, and now we have the most realistic method yet… for tearing apart a piece of digital bread.

Károly Zsolnai-Fehér of Two Minute Papers turned us on to this amazing computer physics tech which is designed to simulate the fractures that occur in an object as it’s torn apart.

In the paper CD-MPM: Continuum Damage Material Point Methods for Dynamic Fracture Animation (PDF), Joshuah Wolper and a team of scientists from the University of Pennsylvania describe a particle-based animation system they’ve developed that can accurately emulate the way objects fall apart. The technology can simulate everything from the way a piece of bread gradually tears when you pull it, to the way a block of Jell-O breaks into little bits when you drop it, to how a cookie crumbles when you break it apart.
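The full CD-MPM method is far beyond a blog post, but the continuum-damage idea at its core can be sketched in a toy one-dimensional form (this is entirely my own simplification, not the authors’ code): each bond between material points accumulates damage whenever its strain exceeds a threshold, its effective stiffness scales by (1 − damage), and a fully damaged bond counts as fractured.

```python
# Toy 1D continuum-damage sketch (my simplification, not the CD-MPM code):
# a chain of bonds where strain beyond a threshold accumulates damage,
# and a bond that reaches damage 1.0 is considered fractured.
def step_damage(strains, damage, threshold=0.1, rate=0.5):
    """Accumulate damage for each bond whose strain exceeds the threshold.
    Effective stiffness would scale by (1 - damage), so damaged bonds
    soften progressively instead of snapping all at once."""
    new_damage = []
    for eps, d in zip(strains, damage):
        if eps > threshold:
            d = min(1.0, d + rate * (eps - threshold))
        new_damage.append(d)
    return new_damage

damage = [0.0, 0.0, 0.0]
# Pull the middle bond harder each step, like tearing bread at one spot.
for strains in ([0.05, 0.2, 0.05], [0.05, 0.8, 0.05], [0.05, 2.0, 0.05]):
    damage = step_damage(strains, damage)

print(damage)  # middle bond reaches 1.0 (fractured); outer bonds undamaged
```

That gradual softening before the break is what makes simulated bread tear instead of shattering like glass.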

The system also offers a variety of parameters which allow for fine-tuning the behavior of materials, while still retaining a realistic look. The video below explains more about this impressive graphical achievement, and shows off a few examples:

For now, computers aren’t fast enough to handle all of these computations in real time, and rendering a single frame can take anywhere from 17 seconds to 10 minutes, but the technique is sure to be optimized in the future. Maybe someday we could have a VR game where you eat virtual food at your virtual keyboard and leave virtual crumbs between the keys. Or maybe even virtual Cheetos dust, all without leaving a real-world mess. Of course, virtual food isn’t nearly as tasty or filling as the real deal.

To learn more about this fascinating technology, you can download the paper here. The source code has also been released on GitHub in case you know what to do with it to make it work on your computer.

This Suction Cup Robot Can Climb Just About Any Wall

When it comes to wall-climbing robots, most rely on vacuum suction to make their way up the side of a building. The catch is that you need a very smooth surface, like glass or marble, in order to get a good grip. Now, scientists have developed a robot that can climb even the most heavily textured walls.

Image: Xin Li and Kaige Shi

Researchers Xin Li and Kaige Shi developed a system that uses something called a “zero-pressure difference” method to solve the issue of leakage around suction cups. The trick is that the cups are sealed to the surface by a high-speed rotating ring of water. According to a release from the American Institute of Physics, the “centrifugal force of the rotating water eliminates the pressure difference at the boundary of the vacuum zone to prevent vacuum leakage.”

In other words, each time the robot takes a step, the water creates a constantly flowing seal that its suction cups can stick to. While the wall-climbing robot is one of the more useful applications for the technology, it could also be used in other situations where suction cups are difficult to apply. At this point, the biggest challenge is the amount of water required to make the system work, and reducing that is the next phase of the research.
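To get a rough feel for the physics, water in solid-body rotation builds up a radial pressure rise of Δp = ½ρω²(r_out² − r_in²), which is what lets the spinning ring hold back the vacuum at the rim. Here’s a back-of-the-envelope estimate with made-up dimensions (the spin rate and radii are my illustrative guesses, not values from the paper):

```python
import math

# Back-of-the-envelope pressure rise across a rotating ring of water in
# solid-body rotation: dp = 0.5 * rho * omega^2 * (r_out^2 - r_in^2).
# All numbers below are illustrative guesses, not from the paper.
rho = 1000.0                      # water density, kg/m^3
rpm = 6000.0                      # assumed spin rate of the water ring
omega = rpm * 2 * math.pi / 60    # angular velocity, rad/s
r_inner = 0.01                    # assumed inner radius of the ring, m
r_outer = 0.02                    # assumed outer radius of the ring, m

dp = 0.5 * rho * omega**2 * (r_outer**2 - r_inner**2)
print(f"pressure rise across the ring: {dp:.0f} Pa")  # ~59 kPa here
```

With these guessed numbers the ring sustains a pressure difference on the order of half an atmosphere, which shows why a thin spinning film of water can plausibly stand in for a rubber lip seal.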

Physics, robotics, and engineering geeks can read more about how the system works in the accompanying paper, which was published in the journal IEEE/ASME Transactions on Mechatronics.

[via Neatorama]

Automatic Visual Censorship Tech is Black Mirror IRL

Did you ever see the Black Mirror episode called “Arkangel?” Basically, it tells the story of an overly cautious mother who has a chip implanted in her daughter’s brain so she can track her every movement. But she also upgrades it with a couple of features, like the ability to see everything her daughter sees, and to block out images of anything that might be deemed “shocking.” Needless to say, things don’t turn out too well for anyone. Regardless, there is technology in the works today that could actually be used to automatically censor images in real time.

In this clip from TEDx Talks, computer interaction scientist Lonni Besançon introduces us to a technology that could do just that. The system works a bit differently from the version seen in Black Mirror, with the goal of preserving more information about the image being obscured. Rather than just pixelating out the “offensive” imagery, the technology applies image-processing filters to make it less shocking. The use case explained here is one in which a surgical image could be made less repulsive, while still preserving enough detail to understand what is going on.

The core of this particular technology is more about reducing the shocking nature of specific images or video footage, rather than making decisions about what is considered offensive or shocking. That said, Besançon’s team has made a prototype Chrome extension which can automatically identify violence, nudity, or medical imagery, and apply visual filters.

While there are legitimate uses for this kind of AI-powered censorship tech, like protecting social media moderators or police detectives from having to view disturbing imagery, it could also be used to impose unwanted censorship if used improperly or forced into consumer technology.

Seismologist Suggests Using Crowdsourced Cat Data to Detect Earthquakes

Scientists will tell you that no human or animal can accurately predict an impending earthquake before it starts. However, certain animals are far more sensitive to seismic activities than humans. With that in mind, one seismologist has tossed out a wildly impractical but amusing idea for an early earthquake alert system – using cats.

Geophysics PhD student Celeste Labedz posted her idea in a multi-part Twitter thread last week, and it’s well worth a read. She hypothesizes that since cats are more sensitive than humans to the weaker P-waves that arrive as an earthquake starts, we could harness unusual cat behavior to create an early warning system. The same idea could theoretically work with dogs, but they tend to be more active than cats, which could result in more false positives.

Labedz proposes the name PURRS (Pet-based Urban Rapid Response to Shaking) for her system. The idea is that millions of cats would be equipped with Fitbit-style Bluetooth sensors that detect when kitties are acting abnormally. A centralized system would take that sensor data and look for common patterns among multiple cats in the area, and should a certain threshold be reached, it could issue an alert that an earthquake is imminent. Such a network would provide many more data points and far greater density than current early warning systems, and way more cats.
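The aggregation step could look something like this toy sketch (every name, threshold, and data format here is hypothetical; no such system exists): collect per-cat “abnormal behavior” flags within a short time window, and only raise an alert when enough distinct cats agree, so one jumpy cat chasing a laser pointer doesn’t trigger a city-wide warning.

```python
# Hypothetical sketch of the PURRS aggregation idea: each reading says
# whether one cat is behaving abnormally; an alert fires only when enough
# distinct cats in the area agree within the same short time window.
def purrs_alert(readings, window_s=10.0, min_cats=20, min_fraction=0.5):
    """readings: list of (cat_id, timestamp_s, is_abnormal) tuples.
    Returns True if, within the most recent window, enough distinct cats
    report abnormal behavior both in absolute count and as a fraction."""
    if not readings:
        return False
    latest = max(t for _, t, _ in readings)
    recent = [r for r in readings if latest - r[1] <= window_s]
    cats = {cat for cat, _, _ in recent}
    abnormal = {cat for cat, _, flag in recent if flag}
    return (len(abnormal) >= min_cats and
            len(abnormal) / len(cats) >= min_fraction)

# 25 of 30 cats startled within the same 10-second window -> alert.
readings = [(i, 100.0 + i * 0.1, i < 25) for i in range(30)]
print(purrs_alert(readings))  # True
```

Requiring both a minimum count and a minimum fraction is the crude false-positive filter: it takes a neighborhood of simultaneously spooked cats, not one dramatic individual, to sound the alarm.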

Celeste fully acknowledges that her idea would be incredibly difficult and costly to implement, but that’s why it’s merely a concept. I still love the idea of crowdsourcing pet-based data for something, since they do seem to be more sensitive to certain stimuli than humans.

Thanks to my rocket scientist friend Susan for tipping me off to this wonderfully entertaining thread.

Human-Sized Penguins Once Roamed the Planet

Penguins are some of the most adorable creatures on the entire planet. But I’m not sure they would be quite as cute if they were the same height as humans and could peck at our carotid arteries. Well, it turns out that such giant penguins may have once been commonplace.

New Zealand’s Canterbury Museum is sharing news that amateur palaeontologist Leigh Love discovered a unique fossil: a leg bone. Working in concert with a team of scientists, the museum has concluded that the bone is from Crossvallia waiparensis, a penguin species estimated to have stood about 5 feet, 3 inches tall and weighed in at around 176 pounds.

Those dimensions make it about 16″ taller than the emperor penguin, which tops out around 4 feet tall. That’s still pretty big as penguins go, but you can’t look them right in the eye and see if they’re lying like you could with Crossvallia waiparensis.

These monster-sized penguins are thought to have lived sometime during the Paleocene Epoch, between 66 and 56 million years ago.

[via The Guardian]