OpenAI co-founder and Chief Scientist Ilya Sutskever is leaving the company

Ilya Sutskever has announced on X, formerly known as Twitter, that he's leaving OpenAI almost a decade after he co-founded the company. He said he's confident that OpenAI "will build [artificial general intelligence] that is both safe and beneficial" under the leadership of CEO Sam Altman, President Greg Brockman and CTO Mira Murati. In his own post about Sutskever's departure, Altman called him "one of the greatest minds of our generation" and credited him for his work at the company. Jakub Pachocki, OpenAI's former Director of Research who headed the development of GPT-4 and OpenAI Five, has taken over Sutskever's role as Chief Scientist.

While Sutskever and Altman praised each other in their farewell messages, the two were embroiled in the company's biggest scandal last year. In November, OpenAI's board of directors suddenly fired Altman and company President Greg Brockman. "[T]he board no longer has confidence in [Altman's] ability to continue leading OpenAI," the ChatGPT maker announced at the time. Sutskever, then a board member, was involved in their dismissal and was the one who called Altman and Brockman into the separate meetings where they were informed that they were being fired. According to reports that came out at the time, Altman and Sutskever had been butting heads over how quickly OpenAI was developing and commercializing its generative AI technology.

Both Altman and Brockman were reinstated just five days after they were fired, and the original board was disbanded and replaced with a new one. Shortly before that happened, Sutskever posted on X that he "deeply regre[tted his] participation in the board's actions" and that he would do everything he could "to reunite the company." He then stepped down from his role as a board member, and while he remained Chief Scientist, The New York Times says he never really returned to work.

Sutskever shared that he's moving on to a new project that's "very personally meaningful" to him, though he has yet to share details about it. As for OpenAI, it recently unveiled GPT-4o, which it claims can recognize emotion and can process and generate output in text, audio and images.

This article originally appeared on Engadget at https://www.engadget.com/openai-co-founder-and-chief-scientist-ilya-sutskever-is-leaving-the-company-054650964.html?src=rss

13 Inch M4 iPad Pro Reviewed (Video)

M4 iPad Pro

The 2024 M4 iPad Pro 13, recently reviewed by The Tech Chap, brings several significant hardware upgrades, ensuring it stands out in the tablet market. While it maintains the same 10-hour battery life as its predecessor, the advancements in performance, design, and accessories make it a notable update. Design and Build: You will be pleased […]

The post 13 Inch M4 iPad Pro Reviewed (Video) appeared first on Geeky Gadgets.

Want Perfect Sourdough Bread every single time? This Kitchen Tool gives you foolproof results

The beauty of baking your own bread lies in its simplicity, and the fact that the yeast and bacteria do most of the work for you. You don’t need fancy equipment or ingredients, just a big container to proof your bread and an oven to bake a perfectly rustic loaf of sourdough that you can then top with avocado or ricotta and honey. However, what most bakers don’t tell you is that your loaf of bread is actually a living thing. The yeast, whether natural or the instant kind you buy at the market, is a living organism that transforms your ball of dough into a fluffy, airy, mildly tart slice of bread that tastes so good with anything you put on top.

This yeast needs just three things to perform this transformation – flour, water (hydration), and the right temperature. Most home bakers ignore that last metric, and if you’ve made a loaf of bread that just lacked that oomph or the right texture, chances are you followed your recipe correctly but missed out on ensuring the yeast could grow at the right temperature. The DoughBed, however, takes care of that part of the breadmaking journey for you. Designed to be a perfectly optimal proofing tray for your dough, the DoughBed is a wide glass tray with a cork lid and a heating bed that creates exactly the right temperature for your loaf. Leave your dough to proof in the DoughBed and you’ll be consistently rewarded with perfect proofing every time, whether it’s for sourdough loaves, pizzas, baguettes, focaccias, brioche buns, or any other kind of leavened bread you desire to bake!

Designers: Sourhouse (Erik Fabian & Jennifer Yoko Olson)

Click Here to Buy Now: $175 $225 ($50 off). Hurry, deal ends in 72 hours! Raised over $220,000.

Designed by home-baker Erik Fabian and industrial designer Jennifer Yoko Olson, the DoughBed is basically an incubation chamber for bread microbes that creates the perfect thermal conditions for yeast (and sourdough bacteria) to feed and grow. “The problem is not your dough, it’s your kitchen. Kitchen temperatures are often too cold for bread dough, and more importantly, always changing,” say the duo behind the DoughBed. The yeast in your dough thrives at temperatures of 75-82°F (24-28°C), but that might just be a tad too warm for humans, who set their thermostats or air conditioners to slightly lower temperatures. This mild temperature difference (of a mere 4-5°F) can be the difference between a perfect loaf and a loaf that just doesn’t have the right open crumb. The solution? A mini habitat for your doughball, allowing it to do its job flawlessly, every single time.

Mat + Bowl + Lid = The Perfect Proofing Solution

DoughBed combines a warming mat, a glass dough bowl, and a cork lid to create an efficient warming solution. The dough bowl has a wide bottom so your dough is gently and evenly warmed to 75-82°F (24-28°C) by the mat below.

Each DoughBed is made of three parts – an oval-shaped glass bowl, a cork lid, and a warming base that gives your bread microbes the ideal temperature to do their job. The wide oval tray is big enough to hold three loaves’ worth of dough at one time, and is perfectly shaped and sized for mixing your dough in, resting and proofing, and pouring your dough out for shaping before baking. As a bonus, it’s even designed to be oven safe, although baking a loaf that big would be overkill!

The warming mat is made with cork to prevent heat loss into your cold counter.

Once your dough’s ready for proofing, simply cover the glass tray with the DoughBed’s cork lid and place the tray and lid on the DoughBed’s electric base, which heats up to just the right temperature for your bread microbes to thrive. The bowl’s wide base helps the bread dough heat evenly and quickly, and the DoughBed’s single temperature target works with remarkable consistency all throughout the year, giving you perfect loaves even in autumn or winter months.

Oval = the Best Shape for Dough Handling

The DoughBed is perfect for home bakers looking to upgrade their bread game. The oblong oval shape of the glass bowl is ideal for mixing, kneading, and folding with both hands, and the cork lid and base don’t just give the DoughBed its rustic aesthetic – they’re key to helping your dough maintain its temperature efficiently, with the cork base preventing heat loss into your kitchen counter.

The DoughBed’s base operates on just 10W of power (that’s 75% less than an oven light), relying on a USB cable that can be wound up and tucked into the underside of the base when not in use. The cork lid comes with a removable, food-safe polypropylene liner, and both the glass bowl and polypropylene liner are dishwasher safe. We’d recommend keeping the cork out of the dishwasher to ensure it lasts longer. The DoughBed starts at a discounted $175 for backers – that’s a lot cheaper than the Le Creuset you’d otherwise mix and proof your sourdough loaf in.

Click Here to Buy Now: $175 $225 ($50 off). Hurry, deal ends in 72 hours! Raised over $220,000.

The post Want Perfect Sourdough Bread every single time? This Kitchen Tool gives you foolproof results first appeared on Yanko Design.

Google Project Astra hands-on: Full of potential, but it’s going to be a while

At I/O 2024, Google’s teaser for Project Astra gave us a glimpse at where AI assistants are going in the future. It’s a multi-modal feature that combines the smarts of Gemini with the kind of image recognition abilities you get in Google Lens, as well as powerful natural language responses. However, while the promo video was slick, after getting to try it out in person, it's clear there’s a long way to go before something like Astra lands on your phone. So here are our takeaways from our first experience with Google’s next-gen AI.

Sam’s take:

Currently, most people interact with digital assistants using their voice, so right away Astra’s multi-modality (i.e. using sight and sound in addition to text/speech to communicate with an AI) is relatively novel. In theory, it allows computer-based entities to work and behave more like a real assistant or agent – which was one of Google’s big buzzwords for the show – instead of something more robotic that simply responds to spoken commands.

The first Project Astra demo we tried used a large touchscreen connected to a downward-facing camera.
Photo by Sam Rutherford/Engadget

In our demo, we had the option of asking Astra to tell a story based on some objects we placed in front of the camera, after which it told us a lovely tale about a dinosaur and its trusty baguette trying to escape an ominous red light. It was fun and the tale was cute, and the AI worked about as well as you would expect. But at the same time, it was far from the seemingly all-knowing assistant we saw in Google's teaser. And aside from maybe entertaining a child with an original bedtime story, it didn’t feel like Astra was doing as much with the info as you might want.

Then my colleague Karissa drew a bucolic scene on a touchscreen, at which point Astra correctly identified the flower and sun she painted. But the most engaging demo was when we circled back for a second go with Astra running on a Pixel 8 Pro. This allowed us to point its cameras at a collection of objects while it tracked and remembered each one’s location. It was even smart enough to recognize my clothing and where I had stashed my sunglasses even though these objects were not originally part of the demo.

In some ways, our experience highlighted the potential highs and lows of AI. Just the ability for a digital assistant to tell you where you might have left your keys or how many apples were in your fruit bowl before you left for the grocery store could help you save some real time. But after talking to some of the researchers behind Astra, there are still a lot of hurdles to overcome.

An AI-generated story about a dinosaur and a baguette created by Google's Project Astra
Photo by Sam Rutherford/Engadget

Unlike a lot of Google’s recent AI features, Astra (which is described by Google as a “research preview”) still needs help from the cloud instead of being able to run on-device. And while it does support some level of object permanence, those “memories” only last for a single session, which currently only spans a few minutes. And even if Astra could remember things for longer, there are things like storage and latency to consider, because for every object Astra recalls, you risk slowing down the AI, resulting in a more stilted experience. So while it’s clear Astra has a lot of potential, my excitement was weighed down with the knowledge that it will be some time before we can get more fully featured functionality.

Karissa’s take:

Of all the generative AI advancements, multimodal AI has been the one I’m most intrigued by. As powerful as the latest models are, I have a hard time getting excited for iterative updates to text-based chatbots. But the idea of AI that can recognize and respond to queries about your surroundings in real-time feels like something out of a sci-fi movie. It also gives a much clearer sense of how the latest wave of AI advancements will find their way into new devices like smart glasses.

Google offered a hint of that with Project Astra, which may one day have a glasses component, but for now is mostly experimental (the glasses shown in the demo video during the I/O keynote were apparently a “research prototype”). In person, though, Project Astra didn’t exactly feel like something out of a sci-fi flick.

During a demo at Google I/O, Project Astra was able to remember the position of objects seen by a phone's camera.
Photo by Sam Rutherford/Engadget

It was able to accurately recognize objects that had been placed around the room and respond to nuanced questions about them, like “Which of these toys should a 2-year-old play with?” It could recognize what was in my doodle and make up stories about different toys we showed it.

But most of Astra’s capabilities seemed on par with what Meta has already made available with its smart glasses. Meta’s multimodal AI can also recognize your surroundings and do a bit of creative writing on your behalf. And while Meta also bills the features as experimental, they are at least broadly available.

The Astra feature that may set Google’s approach apart is the fact that it has a built-in “memory.” After scanning a bunch of objects, it could still “remember” where specific items were placed. For now, it seems Astra’s memory is limited to a relatively short window of time, but members of the research team told us that it could theoretically be expanded. That would obviously open up even more possibilities for the tech, making Astra seem more like an actual assistant. I don’t need to know where I left my glasses 30 seconds ago, but if it could remember where I left them last night, that would actually feel like sci-fi come to life.

But, like so much of generative AI, the most exciting possibilities are the ones that haven’t quite happened yet. Astra might get there eventually, but right now it feels like Google still has a lot of work to do to get there.

Catch up on all the news from Google I/O 2024 right here!

This article originally appeared on Engadget at https://www.engadget.com/google-project-astra-hands-on-full-of-potential-but-its-going-to-be-a-while-235607743.html?src=rss

This ‘Super Tiny’ Tiny Home Is All Set To Provide You With The Ultimate Off-Grid Lifestyle

Portugal’s Madeiguincho designed its latest tiny home offering – the Pego. In a world where tiny homes reign supreme, it is tough to create one that truly stands out, but by fostering woodworking expertise and building timber dwellings, Madeiguincho managed to offer us something new and refreshing. The Pego is a compact tiny home that is designed to accompany you on your adventures into the wild with the help of a solar panel setup. The tiny home is pretty small, but you shouldn’t judge the dwelling by its size, because it does pack a punch with its functionality and utility.

Designer: Madeiguincho

The Pego tiny home features a length of 16 feet, which is quite small for a European tiny home. Based on a double-axle trailer and finished in wood, the home perfectly presents the firm’s wonderful craftsmanship. The home is finished in wood both inside and outside, and the craftsmanship is reflected in the shutters on the windows, and in the doors that close up the house. The home is topped with solar panels, which keep it powered up, irrespective of where it is placed. The tiny home also has a standard RV-style hookup.

You can enter the home via double glass doors. The floor space is mostly taken up by a combined living room/kitchen area. This includes a massive L-shaped sofa with integrated storage, a sink, cabinetry, an electric cooktop, and storage space. The space also includes a mini wood-burning stove, much like the ones you use while camping in a tent.

The ground floor of the Pego tiny home includes a bathroom that is equipped with a shower, sink, and toilet. The bathroom features a secondary door that offers access to the outdoors, which is quite an unusual feature. The tiny home has only one bedroom, which can be accessed using a fixed wooden ladder. The bedroom has a typical tiny home loft-style setup, paired with a low ceiling. There is sufficient space for a double bed.

The post This ‘Super Tiny’ Tiny Home Is All Set To Provide You With The Ultimate Off-Grid Lifestyle first appeared on Yanko Design.

An Interactive Lamp Series That Brings The Cosmic Moments Into Interiors

Space, with its vastness and complexity, has always captivated the human imagination. Our solar system, a celestial ballet of planets and stars, has inspired various aspects of human life and design, from ancient sundials to modern-day innovations. The COSMOOVAL lamp series is a testament to this inspiration, drawing on the phenomenal interconnectivity of our solar system to create a collection of lamps that not only illuminate spaces but also tell a cosmic story.

Designer: LFD Official – Seohyun Nam, Nam Woo Kim, Doyoon Kim

The designers of Cosmooval drew inspiration from the celestial bodies in our solar system, considering the way they influence our planet and the intricate dance of light and shadow they create. The lamp series incorporates key elements such as expandability, limitation, transparency, and immateriality to bring the essence of space into our living environments.

The design process began with the creation of a mood board, reflecting the tension and spatial dynamics of the universe. Simple basic figures, inspired by solar and lunar eclipses, shooting stars, and planetary movements, were arranged to evoke the mood of the cosmos. A clay mockup emphasized stability through the use of circles and triangles, laying the foundation for the lamp series’ structural elements.

Several idea sketches were explored, with the initial focus on a triangular structure within three circles. As the design evolved, proportions, details, and interactions were refined in subsequent sketches. The final design selected a form that considered materials, structure, and user interaction, resulting in three distinct types of lamps within the Cosmooval series.

Each lamp in the series offers a unique interaction with light, adding to the overall cosmic experience. The ceiling lamp, representing expandability, spreads light by adjusting the angle of an oval disk. The table lamp, embodying limitation, controls light brightness through the movement of a red sphere, mimicking the motion of a shooting star. The floor lamp, combining transparency and immateriality, simulates orbiting planets and solar eclipses, changing light intensity as the red sphere is manipulated.

Cosmooval, derived from the fusion of “Cosmo” (space) and “Oval” (ellipse), is more than just a lighting solution; it is an artistic representation of the cosmos. The series serves as a visual metaphor for planets, satellites, and shooting stars, moving in harmony with their orbits.

The ceiling lamp symbolizes the expansiveness of space, spreading light with three ovals arranged in a stable manner. By pulling the red sphere attached to a string, users can open and close the ovals, controlling the brightness and essential light in their space.

In the table lamp, a triangular structure controls the concentrated light source. Moving the red sphere along a diagonal line mimics the motion of a shooting star, allowing users to experience the fleeting brightness associated with celestial phenomena.

The floor lamp embodies transparency and immateriality, recreating the orbits of planets and solar eclipses. Pushing the red sphere sideways changes the shape and intensity of light, providing a dynamic representation of the passage of time and celestial revolutions.

The Cosmooval lamp series transcends conventional lighting, offering users an immersive experience that connects them to the wonders of our solar system. Through innovative design and thoughtful interaction, these lamps bring the cosmos into our living spaces, reminding us of the beauty and complexity of the universe that surrounds us.

The post An Interactive Lamp Series That Brings The Cosmic Moments Into Interiors first appeared on Yanko Design.

Engadget Podcast: The good, the bad and the AI of Google I/O 2024

We just wrapped up coverage on Google's I/O 2024 keynote, and we're just so tired of hearing about AI. In this bonus episode, Cherlynn and Devindra dive into the biggest I/O news: Google's intriguing Project Astra AI assistant; new models for creating video and images; and some improvements to Gemini AI. While some of the announcements seem potentially useful, it's still tough to tell if the move towards AI will actually help consumers, or if Google is just fighting to stay ahead of OpenAI.


Listen below or subscribe on your podcast app of choice. If you've got suggestions or topics you'd like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcast, Engadget News!

Hosts: Cherlynn Low and Devindra Hardawar
Music: Dale North

This article originally appeared on Engadget at https://www.engadget.com/engadget-podcast-the-good-the-bad-and-the-ai-of-google-io-2024-221741082.html?src=rss