Instead of clicking photos, this AI Camera doodles them

draw_this_camera_1

The results may look childish, but what the ‘Draw This’ camera manages to do is quite smart: it mimics what we humans do when we sketch scenes from real life.

Tapping into the repository of doodles Google collected through its Quick, Draw! project back in 2016, the Draw This camera combines a tiny camera lens module, a Raspberry Pi computer, and a thermal printer. It captures an image, detects the objects within it, and doodles/prints them out for you, turning the concept of the Polaroid into something quirkier, more fun, and sometimes hilariously hit-and-miss!
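
For the tinkerers wondering how the pieces fit together, here’s a minimal sketch of such a pipeline in Python. It is not Macnish’s actual code (that lives on his GitHub); the `detect_objects`, `fetch_doodle`, and `print_doodle` helpers are hypothetical stand-ins for the object-recognition model, the Quick, Draw! lookup, and the thermal-printer driver.

```python
# A minimal sketch of a Draw This-style pipeline, NOT Dan Macnish's
# actual implementation. The three helpers are hypothetical stand-ins.

from picamera import PiCamera  # the standard camera library on a Raspberry Pi

def detect_objects(image_path):
    """Stand-in: run an object-recognition network on the photo
    and return the class labels it finds."""
    return ["hotdog"]  # placeholder result

def fetch_doodle(label):
    """Stand-in: pull a human-drawn sketch for this label from the
    Quick, Draw! dataset."""
    return f"<doodle of a {label}>"  # placeholder bitmap

def print_doodle(doodle):
    """Stand-in: rasterize the doodle and send it to the thermal printer."""
    print("printing:", doodle)

camera = PiCamera()
camera.capture("/tmp/photo.jpg")  # the photo you never get to see
for label in detect_objects("/tmp/photo.jpg"):
    print_doodle(fetch_doodle(label))  # out pops the cartoon
```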

“One of the fun things about this re-imagined camera is that you never get to see the original image. You point, and shoot – and out pops a cartoon; the camera’s best interpretation of what it saw. The result is always a surprise. A food selfie of a healthy salad might turn into an enormous hotdog, or a photo with friends might be photobombed by a goat,” says creator Dan Macnish, who has made the code for the Draw This camera public on GitHub so tinkerers and makers can create their own versions of the camera. Would make a great addition to a game of Pictionary, wouldn’t it?

Designer: Dan Macnish

draw_this_camera_2

draw_this_camera_3

draw_this_camera_4

This Camera Turns Images Into Doodles

Cameras are for capturing exactly what we see with the naked eye, and we won’t be satisfied until the technology captures scenes perfectly. That’s why we still have new and improved cameras hitting the market. But what if we went in the opposite direction? What if a camera turned our images into doodles? Well, this one does just that.

Dan Macnish has created a Raspberry Pi-powered shooter called “Draw This.” It’s an instant camera that captures images and turns them into rudimentary doodles. It’s fun and unique in the world of still photography. But how does it accomplish this?

The camera uses a neural network to recognize objects in the frame and matches them against the millions of human-drawn sketches in Google’s Quick, Draw! dataset. It then picks the doodle closest to whatever you’re shooting and prints it out on a built-in thermal printer. The results can be a bit hit-and-miss, but that’s okay, because this camera is all about fun.
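
If you want a feel for the raw material the camera draws from, Google hosts the Quick, Draw! dataset publicly. Below is a hedged sketch that downloads one “cat” doodle from the dataset’s simplified-drawings bucket and renders its strokes with PIL; the URL and the one-drawing-per-line ndjson layout follow the dataset’s documentation, but treat the details as assumptions worth checking.

```python
# Fetch one human-drawn doodle from the public Quick, Draw! dataset and
# render its strokes. Bucket URL and the simplified ndjson layout follow
# the dataset's documentation; verify the details before relying on them.

import json
import urllib.request

from PIL import Image, ImageDraw

URL = ("https://storage.googleapis.com/quickdraw_dataset/"
       "full/simplified/cat.ndjson")

with urllib.request.urlopen(URL) as resp:
    first = json.loads(resp.readline())        # one drawing per line

img = Image.new("L", (256, 256), 255)          # simplified coords span 0-255
pen = ImageDraw.Draw(img)
for xs, ys in first["drawing"]:                # each stroke: [x-list, y-list]
    pen.line(list(zip(xs, ys)), fill=0, width=2)
img.save("cat_doodle.png")
```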

You never get to see the original photo – only the doodle. Dan has provided the code on GitHub if you want to fool around with it yourself. You just need a Raspberry Pi 3, a Pi Camera V2, and a thermal printer.

[via Engadget via Mike Shouts]

Smartphone with a dual camera? How about 7 more?

light_9_camera_1

You may remember Light, the company that launched a smartphone-sized point-and-shoot with as many as 16 camera modules covering its entire back. The idea was to get the quality you’d expect from a DSLR not by making sensors bigger, but by increasing their number. Sixteen cameras, as you may have already thought to yourself, do seem like a little too much, which is why Light’s new smartphone (of which we have only a single image) comes with nine cameras: eight arranged in a circle around the primary camera.

Light’s new phone, unlike its ancestor the L16, promises to be as thin as the iPhone itself, and to deliver images that would absolutely blow its competition out of the water. Together, the lenses capture 64-megapixel shots and work remarkably well in low-light as well as high-speed conditions. Nine cameras would also let the phone build significantly more detailed depth maps, enabling your photos to generate much more accurate artificial bokehs/background blurs. The cons of a phone this powerful? Significant battery drain as it works to piece nine images together into one incredibly hi-res photograph… and, obviously, the price. Remember, the L16 retails for $1,950! Stay tuned for its official release at the end of this year!
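
Light hasn’t published how its fusion engine works, but the basic appeal of stacking frames is easy to demonstrate. The toy example below (our illustration, not Light’s algorithm) averages several noisy captures of the same scene: with nine aligned frames, noise drops to roughly a third.

```python
# Toy illustration (our sketch, not Light's algorithm): averaging N
# aligned noisy frames of one scene cuts noise by roughly sqrt(N).

import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, (480, 640))              # the "true" scene

def fuse(n_frames, noise_sigma=0.1):
    frames = scene + rng.normal(0, noise_sigma, (n_frames, *scene.shape))
    return frames.mean(axis=0)                     # simplest possible fusion

for n in (1, 9):
    err = np.abs(fuse(n) - scene).mean()
    print(f"{n} frame(s): mean error {err:.4f}")   # 9 frames ≈ 1/3 the error
```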

Designer: Light

Samsung’s ISOCELL Plus camera sensor upgrades low light performance

While Samsung may be playing catch-up in some fields, it continues to charge ahead with its smartphone camera tech. Today it's unveiled its new ISOCELL Plus technology, which means sharper and more accurate photos even in challenging light environments…

Amazon Echo Look review: Good selfie taker, so-so stylist

Walking into my closet can sometimes feel like visiting Narnia. There are beautiful, whimsical friends in there, alongside a dizzying array of outdated pieces and random monstrosities from the early aughts. I'm someone who would benefit from Amazon's Echo Look…

How the selfie camera is driving people to get nose-jobs

There’s a strong chance you’ve come across the gif above somewhere on the internet. Or the image below. This is the handiwork of something called focal length, which determines, very basically, how narrow or wide your lens’s field of view is.

selfie_nose_1

Camera lenses come in various types, but the two focal lengths we’re really going to address are the wide-angle and the telephoto. Wide-angle lenses have shorter focal lengths and capture a wider swath of the scene, with everything in focus. Telephoto lenses, with their longer focal lengths, take in a much narrower area and distinctly blur the background while keeping the subject in focus. In short, a wide-angle lens is perfect for landscapes, letting you cram more into a single picture, whereas a telephoto lens is ideal for portraits, making it great for subjects that are far away, or for faces, because of the flattering way it frames them. So what’s all this got to do with nose-jobs, you ask? Well, let’s dig deeper.
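
First, some rough numbers behind “wide” versus “narrow”: a lens’s horizontal field of view falls straight out of its focal length f and the sensor width w, via FOV = 2·arctan(w / 2f). The little sketch below assumes a full-frame 36 mm-wide sensor purely for illustration; phone sensors are far smaller, but the trend is identical.

```python
# Field of view from focal length: FOV = 2 * atan(sensor_width / (2 * f)).
# The 36 mm sensor width (full-frame) is assumed for illustration only.

import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for name, f in (("wide-angle", 24), ("normal", 50), ("telephoto", 135)):
    print(f"{name:>10} {f:>3} mm lens: {horizontal_fov_deg(f):5.1f} deg across")
```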

You’re with a bunch of friends at a get-together, clicking pictures. You use your primary camera to capture everything from the food you eat, to a candid photo of your friends, to the sunset. All these subjects are at varying distances, and your phone camera works to make each photo look good. Your food photos get a spectacular portrait-style blur, while the sunset photo is crisp and clear. The fact that your phone has two lenses on the back only makes it easier to multi-task. The front-facing selfie camera, however, is a different story. Since it is almost always used for selfies, the distance between the camera and the subject (your face) is a more or less fixed parameter: the length of your arm. The natural choice for capturing portrait pictures would be to put a telephoto lens on the front, above the screen, so you get remarkable portrait shots… but the problem there is that (a) telephoto lenses require subjects to be quite far away from the camera, and (b) human arms aren’t long enough.

selfie_nose_2

This makes the wide-angle lens the only feasible option for the front-facing camera. It keeps things in focus and takes in a wider field of view, which means your selfies pack in more of the scene, making for wonderful group shots… but there’s a caveat. Fitting more imagery into a fixed viewing area can only mean one thing: distortion.

Think of how a fisheye lens works versus a telescope. Both show images within a circular area, but a fisheye lens fits a lot more into that circle, distorting elements as a result. That’s an exaggerated but pretty accurate version of what happens when you click selfies, too. The wide-angle lens on your phone (coupled with how short human arms are) results in faces getting distorted, and noses looking a few sizes bigger than they really are. This distortion is enough to change how people perceive themselves: studies showed that as many as 55% of people seeking nose-jobs in 2017 were concerned because their noses looked bulbous in selfies… and that’s pretty concerning.
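
The nose-enlarging effect is mostly simple perspective arithmetic: a feature’s projected size scales with 1/distance, and your nose sits a few centimeters closer to the lens than the rest of your face. The back-of-the-envelope sketch below (the 8 cm face-depth figure is an assumption for illustration) shows how quickly the exaggeration fades as the camera backs away.

```python
# Back-of-the-envelope perspective math, for illustration only.
# Projected size scales as 1/distance, so a nose sitting closer to
# the lens than the ears is magnified by (d + offset) / d.

NOSE_TO_EAR_CM = 8  # assumed depth of the face, nose tip to ear plane

for d_cm in (30, 60, 150):  # selfie arm, selfie stick, a friend's shot
    exaggeration = (d_cm + NOSE_TO_EAR_CM) / d_cm
    print(f"camera at {d_cm:>3} cm: nose rendered at {exaggeration:.0%} "
          "of its true size relative to the ears")
```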

A video below by the people at Vox talks about the phenomenon and the how, demonstrating that the further the camera is from you, the less the distortion… which means your best bet right now is to either outstretch your arm or pick up a selfie stick. A difference of as little as a few feet can be pretty remarkable. What’s more, researchers are even working on AI-based technology that can edit/correct your pictures to make faces look less distorted… which is pretty neat, considering how much we’ve already done to make our photos look great: red-eye removal, skin-softening algorithms, artificial portrait modes where a machine literally analyzes and differentiates between foreground and background and blurs the latter out. Could distortion-correction be the next feature to drive smartphone sales up and nose-jobs down? Well, only time will tell. Until then, you’ll just have to make do with that selfie stick!