3D Visualization Compares The Size, Speed, And Range Of Different Missiles

Animated by video studio RED SIDE, this is a 3D visualization comparing the size, speed, and range of various missiles used by multiple nations. Even the slowest missile is fast (it’s traveling at over 2,000 MPH!), but the quickest makes it look like it’s standing still.

The video starts with a “drag race” comparing the missiles from slowest (the Mach 2.9 Novator Kalibr, ~2,225 MPH) to fastest (the claimed Mach 27 of the Avangard, aka Objekt 4202, ~19,884 MPH). It then provides an animation of how each missile is typically launched, its different stages, and what a flyby at full speed looks like. The third part details each missile’s range; the last part is a size comparison, with all the missiles standing next to one another. I learned a lot by watching it. Mostly, I wouldn’t want to get hit with any of these, even without an explosive payload.
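If you’re curious how those Mach figures translate to miles per hour, here’s a quick back-of-the-envelope Python sketch. The conversion factors are my own assumption: Mach number is a multiple of the local speed of sound, which varies with temperature and altitude, and the video’s two figures imply slightly different values for it.

```python
# Mach number is a multiple of the local speed of sound: roughly 767 MPH
# at sea level, and lower in cooler air at altitude. The video's Avangard
# figure only works out if you use the lower value.
def mach_to_mph(mach: float, speed_of_sound_mph: float = 767.0) -> float:
    return mach * speed_of_sound_mph

print(round(mach_to_mph(2.9)))        # ~2,224 MPH: the Novator Kalibr
print(round(mach_to_mph(27, 736.4)))  # ~19,883 MPH: the Avangard
```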

Which missile was your favorite? I found them all rather terrifying. Technologically impressive, sure, but scary to think about. And probably infinitely scarier to try to ride like a mechanical bull.

[via TheAwesomer]

The Proto “M”: A Compact Holographic Display and Media Device

If Back To The Future II taught us anything, it’s that the future will be filled with holograms. Of course, Back To The Future II was supposed to take place in 2015, and we haven’t realized even a small fraction of the technology it promised. Curse you, Robert Zemeckis! But enough about my resentment; this is about the Proto Hologram “M,” a compact holographic display and media device made for home use.

With its integrated AI-enabled smart camera, the $2,000 “M” can provide two-way holographic communication with another unit, taking video calls to the next level. That level being holographic calls, just so we’re clear. That is, if my parents ever bother to pick up the hologram when I call, which they probably won’t. I swear I’m not just calling for money again!

Proto imagines the “M” being used in a variety of applications, including virtually trying on clothes, personal training workout routines, and displaying your expensive NFTs so guests know you’re a hip investor. And while all this sounds well and good, I can’t help but be a little skeptical about a technology company that only uploaded its demo video in 480p. Makes me wonder.

[via DudeIWantThat]

AI Creates Realistic Portraits of Cartoon Characters

Cartoon characters: you can’t help but wonder what they might look like in real life. And now, thanks to a project by Brazilian artist Hidreley Leli Diao, we don’t have to, and can use that brainpower for more important things, like trying to decide what to order for dinner. I’m leaning towards Mexican or Mediterranean, but will probably debate in my head until both restaurants are closed and I have to settle for a bowl of cereal.

Hidreley used Photoshop and three different artificial intelligence photo editing apps (FaceApp, Gradient, and Remini): the programs scoured the internet for photos of real people with features matching the cartoon source material, and then Hidreley combined those features into the lifelike portraits you see here. The marvel of modern technology!

Does anybody else find the finished results a little unsettling? Like maybe this was a can of worms that shouldn’t have been opened? Because I’ve opened cans of worms I wish I hadn’t before. Mostly in the car on the way to a fishing trip. Those wiggly little suckers are quick on the car floor.

[via PetaPixel]

Dasung Paperlike 253 3K HDMI E-ink Monitor

Chinese company Dasung has been working to make larger and more responsive E-ink displays for seven years. They made waves online in 2015 with their 13.3″ E-ink reader, and now they’re back with a product that is almost twice that size. The Paperlike 253 is a 25.3″ 3200 x 1800 16:9 monitor that can connect to devices via HDMI, DisplayPort, or USB-C, making it just as easy to use as other monitors.
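For the spec-minded, the published resolution and screen size work out to a pixel density of about 145 PPI. Here’s the quick math in Python:

```python
import math

# Pixel density from Dasung's published specs:
# 3200 x 1800 pixels across a 25.3-inch diagonal.
diagonal_px = math.hypot(3200, 1800)    # ~3671.5 px corner to corner
print(f"{diagonal_px / 25.3:.0f} PPI")  # ~145 PPI
```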

Although I doubt that anyone will buy the Paperlike 253 for anything other than viewing text and other static elements, the monitor does have a high enough refresh rate to play video at a decent clip. Dasung hasn’t revealed the exact refresh rate of the monitor, but judging from its demos, it looks responsive enough for daily use.

Here’s a longer video about Dasung’s journey and the tech behind the Paperlike 253. The demo starts at around 2:52, with video playback at 4:08.

The Paperlike 253 retails for $2,300 (USD). That’s a ton of money, but I’d argue that preserving your eyes is worth way more than that. Dasung recently completed an Indiegogo crowdfunding campaign for the Paperlike 253 and claims that it will deliver the first batch of orders in August 2021. Pre-orders for the device are closed as of this writing, but you can enter your email on Dasung’s online store to be notified when it’s available again.

DitherPaint 1-Bit Paint App Takes You Back to the Days of MacPaint

I remember how excited I was to use MacPaint back in 1984, when I got my hands on the first Apple Macintosh computer. I had seen it demonstrated at a convention, and the idea that I could create my own artwork on my computer was pretty awe-inspiring to me as a 16-year-old kid. Over the years, I’d abandon MacPaint for more sophisticated apps like Adobe Photoshop and Illustrator. Still, there’s something kind of special about working within the limitations of black-and-white pixel art. So if you long for the simplicity of MacPaint and 1-bit painting, check out DitherPaint.

This browser-based drawing app was created by BeyondLoom, and it lets you create black-and-white images using various primitive brushes and dithered patterns. For those unfamiliar with the term, dithering is a technique that uses patterns of pixels to create the illusion of in-between shades; in the case of 1-bit art, patterns of black and white pixels stand in for shades of grey. DitherPaint lets you apply these patterns to your brushes too. It’s also got a nifty tool that lets you create animated patterns by listing the sequence of patterns you want to use. You can also load in existing color or greyscale images, and it will automatically dither them, giving them that awesome 1980s Macintosh look. So what are you waiting for? Head on over to DitherPaint now and see what kind of creations you can come up with.
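If you’re curious what dithering looks like under the hood, here’s a minimal Python sketch of classic ordered (Bayer) dithering. To be clear, this is just the general technique, not DitherPaint’s actual code, and photo.jpg is a placeholder filename:

```python
from PIL import Image
import numpy as np

# The classic 4x4 Bayer threshold matrix, normalized to [0, 1). Tiling it
# across an image and comparing each pixel against its local threshold
# turns smooth greys into patterns of pure black and white.
BAYER_4x4 = np.array([[ 0,  8,  2, 10],
                      [12,  4, 14,  6],
                      [ 3, 11,  1,  9],
                      [15,  7, 13,  5]]) / 16.0

def dither_1bit(path: str) -> Image.Image:
    gray = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    h, w = gray.shape
    # Tile the threshold matrix to cover the image, then compare.
    thresholds = np.tile(BAYER_4x4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    bits = (gray > thresholds).astype(np.uint8) * 255
    return Image.fromarray(bits, mode="L")

dither_1bit("photo.jpg").save("photo_dithered.png")
```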

[via AdaFruit]

Google Project Starline Conferencing Tool Renders You in 3D in Real-Time

The past year has put video conferencing tools in the limelight, and in my opinion, they are sorely lacking. But technology marches onward. Just check out this mind-blowing prototype that Google claims is already in use in a few of its offices. It’s called Project Starline, a holographic communication booth that creates 3D models of both parties and displays them in real time.

Google says that one of the best things about Project Starline is that it just works. Judging from their demo video, I agree. You just sit down and start talking. The person – or people! – on the other end see your realistic avatar, and you see theirs. It also uses spatial audio, so it feels like you’re both in the same space, separated only by a window. It even seems to keep up with constant motion, such as that of the baby in the demo.

As of this writing, Google has not provided specifics on the technology or its release. The company did say that it believes this is the future of remote communication and that it is planning enterprise trials later this year. I wonder how long it will be before the technology can fit into a webcam.

Apple Pencil Patent Hints at Real World Color Sampling

Since it first landed on the scene back in 2015, the Apple Pencil and its successor, the Apple Pencil 2, have brought a tremendous amount of creative freedom and expression to the iPad. Now, it appears that a future Apple Pencil might add a great new feature: the ability to sample colors from the real world.

According to a recently revealed U.S. patent application, Apple is working on technology that could add a color sensor to the popular iPad stylus. The patent was written to cover a variety of different configurations, including sensors placed in the Apple Pencil’s tip, in its rear, or connected to the tip using a light guide. The device would use LEDs or OLEDs to illuminate surfaces and reflect their colors back into a collection of photodetectors. A single white light source or separate red, green, blue, and infrared light sources are mentioned as possibilities, along with whatever number of photodetectors is needed to ensure accuracy.
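To make the idea concrete, here’s a hypothetical Python sketch of how readings like these could become an RGB value. None of this comes from the patent itself; flash_led and read_photodiode are stand-ins for whatever hardware interface a real stylus would expose, and the white-surface calibration step is my own assumption:

```python
def sample_color(flash_led, read_photodiode, white_cal):
    """Flash red, green, and blue light sources in turn, and normalize
    each reflected reading against a calibration taken on a white
    surface, yielding an 8-bit RGB color."""
    rgb = []
    for channel in ("red", "green", "blue"):
        flash_led(channel)             # illuminate the surface
        reflected = read_photodiode()  # measure what bounces back
        rgb.append(min(reflected / white_cal[channel], 1.0))
    return tuple(round(c * 255) for c in rgb)

# Fake hardware for demonstration: a surface that reflects mostly red.
readings = iter([0.8, 0.4, 0.2])
color = sample_color(lambda ch: None, lambda: next(readings),
                     {"red": 1.0, "green": 1.0, "blue": 1.0})
print(color)  # (204, 102, 51), a warm brick-orange
```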

The data gathered by the color sensors could then be used to create a color palette or immediately set the paint color within iPad art applications. While this wouldn’t be the first color sensing device on the market (take the Nix Pro and Color Muse, for example), the idea of integrating one into the Apple Pencil could definitely add a whole new dimension to drawing and painting on the iPad.

There’s no indication that the color sampling feature is headed to the Apple Pencil anytime soon, but the patent application is a good start towards Apple eventually including the feature in a production version.

[via MacRumors]

Computer Physics Simulation Can Accurately Mimic Bread Being Pulled Apart

Computer graphics have come a very long way in the past couple of decades, offering up images that are becoming more and more difficult to distinguish from reality. Especially notable are the improvements in physics engines, which allow objects to move and behave more like they do in real life. One of the holy grails of CGI simulation is being able to destroy objects so they break apart realistically, and now we have the most realistic method yet… to tear apart a piece of digital bread.

Károly Zsolnai-Fehér of Two Minute Papers turned us on to this amazing computer physics tech which is designed to simulate the fractures that occur in an object as it’s torn apart.

In the paper CD-MPM: Continuum Damage Material Point Methods for Dynamic Fracture Animation (PDF), Joshuah Wolper and a team of scientists from the University of Pennsylvania describe a particle-based animation system they’ve developed which can accurately emulate the way that objects fall apart. The technology can be used to simulate everything from the way a piece of bread gradually tears when you pull it, to the way that a block of Jell-O breaks into little bits when you drop it, to how a cookie crumbles when you break it apart.
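The “continuum damage” part of the name refers to a standard idea in fracture mechanics: each bit of material carries a damage value between 0 and 1 that grows under excessive strain and weakens it until it fails. Here’s a toy 1D Python illustration of that feedback loop; it’s emphatically not the paper’s MPM solver, and every parameter in it is made up for demonstration:

```python
import numpy as np

# A chain of springs in series is slowly pulled apart. Each spring has a
# damage value d in [0, 1]; its stiffness scales by (1 - d), so damaged
# spots stretch more, accumulate damage faster, and eventually tear.
n, dt = 20, 0.01
damage = np.zeros(n)
damage[7] = 0.05                     # a small initial flaw

for step in range(400):
    stiffness = np.maximum(1.0 - damage, 1e-9)
    total_pull = 0.002 * step                     # prescribed elongation
    force = total_pull / np.sum(1.0 / stiffness)  # same force in every spring
    strain = force / stiffness                    # weak springs stretch more
    over = np.maximum(strain - 0.01, 0.0)         # strain past the yield point
    damage = np.minimum(damage + 50 * over * dt, 1.0)
    if damage.max() >= 1.0:
        print(f"chain tears at element {damage.argmax()} on step {step}")
        break
```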

The system also offers a variety of parameters which allow for fine-tuning the behavior of materials, while still retaining a realistic look. The video below explains more about this impressive graphical achievement, and shows off a few examples:

For now, computers aren’t fast enough to handle all of these computations in real time, and the rendering of a single frame can take anywhere from 17 seconds to 10 minutes, but it’s sure to be optimized in the future. Maybe someday we could have a VR game where you eat virtual food at your virtual keyboard and leave virtual crumbs between the keys. Or maybe even virtual Cheetos dust, all without leaving a real-world mess. Of course, virtual food isn’t nearly as tasty or filling as the real deal.

To learn more about this fascinating technology, you can download the paper here. The source code has also been released on GitHub in case you know what to do with it to make it work on your computer.

Automatic Visual Censorship Tech is Black Mirror IRL

Did you ever see the Black Mirror episode called “Arkangel?” Basically, it tells the story of an overly cautious mother who has a chip implanted in her daughter’s brain so she can track her every movement. But she also upgrades it with a couple of features, like the ability to see everything her daughter sees, and to block out images of anything that might be deemed “shocking.” Needless to say, things don’t turn out too well for anyone. Regardless, there is technology in the works today that could actually be used to automatically censor images in real time.

In this clip from TEDx Talks, human-computer interaction scientist Lonni Besançon introduces us to a technology that could do just that. The system works a bit differently from the version seen in Black Mirror, with the goal of preserving more information about the image that’s being obscured. Rather than just pixelating out the “offensive” imagery, the processing technology applies filters to make the image less shocking. The use case explained here is one in which a surgical image could be made less repulsive, while still preserving enough detail to understand what was going on.
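As a rough illustration of that kind of filtering, here’s a tiny Python sketch using the Pillow imaging library. This is my own guess at the general approach, not the team’s actual code, and graphic.png is a placeholder filename:

```python
from PIL import Image, ImageFilter, ImageOps

def soften(path: str) -> Image.Image:
    """Reduce the visual shock of an image while keeping its structure
    readable: drop the vivid color, then take the edge off with a blur."""
    img = Image.open(path).convert("RGB")
    gray = ImageOps.grayscale(img)                           # desaturate
    return gray.filter(ImageFilter.GaussianBlur(radius=2))   # soften detail

soften("graphic.png").save("graphic_softened.png")
```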

The core of this particular technology is more about reducing the shocking nature of specific images or video footage, rather than making decisions about what is considered offensive or shocking. That said, Besançon’s team has made a prototype Chrome extension which can automatically identify violence, nudity, or medical imagery, and apply visual filters.

While there are legitimate uses for this kind of AI-powered censorship tech, like protecting social media moderators or police detectives from having to view disturbing imagery, it could also be used to impose unwanted censorship if used improperly or forced into consumer technology.

Arlo Video Doorbell Helps Nab Package Thieves

Until I have a sentry of robots or Mark Rober’s Glitter Bomb 2.0 guarding my mailbox, my big fear this time of year is leaving the house and missing a package delivery. I’m not paranoid, but I am an Amazon junkie, and my neighbors know from personal experience that front porch thefts are on the rise. Luckily, my order for an Arlo Wired Video Doorbell wasn’t stolen upon delivery.

It’s a hard-wired device that hooks up to your existing doorbell wiring to spy on whoever steps up to your door. When movement triggers the motion detector, the Arlo immediately sends your smartphone a live HD video alert with a 180-degree viewing angle. It has night vision too, so you can see what’s going on even without a porch light. Pre-recorded messages let you reply quickly, or you can respond with a live convo to fool lurkers into thinking you’re in the house, rather than across the country. It even protects itself with a siren that blares if the tiny camera is tampered with. It’s smarter than the average ambivalent teen, too, with artificial intelligence that gives specific reports for people, packages, vehicles, and animals.
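For the curious, motion detection like this is often done with simple frame differencing. Here’s a generic Python sketch using OpenCV; this is textbook computer vision, not Arlo’s actual firmware, and the thresholds are made-up numbers:

```python
import cv2

cap = cv2.VideoCapture(0)            # 0 = the default camera
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)   # what changed since the last frame?
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 5000:  # enough changed pixels = motion
        print("motion detected -- time to send that phone alert")
    prev = gray
```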

Next time a thief steals my Simpsons fluffy slippers order, or my SpongeBob popcorn popper, I’ll be prepared to call 911 and let the mocking begin. To make sure I can hand over evidence to the police, Arlo’s cloud storage service keeps video clips accessible and transferable for 30 days.