Black hole behavior suggests Doctor Who’s ‘bigger on the inside’ TARDIS trick is theoretically possible

Do black holes, like dying old soldiers, simply fade away? Do they pop like hyperdimensional balloons? Maybe they do, or maybe they pass through a cosmic Rubicon, effectively reversing their natures and becoming inverse anomalies that cannot be entered through their event horizons but which continuously expel energy and matter back into the universe. 

In his latest book, White Holes, physicist and philosopher Carlo Rovelli focuses his attention and considerable expertise on the mysterious space phenomena, diving past the event horizon to explore their theoretical inner workings and posit what might be at the bottom of those infinitesimally tiny, infinitely fascinating gravitational points. In this week's Hitting the Books excerpt, Rovelli discusses a scientific schism splitting the astrophysics community over where all of the information — which, from our current understanding of the rules of our universe, cannot be destroyed — goes once it is trapped within an inescapable black hole.   

[Cover image: White Holes by Carlo Rovelli. Credit: Riverhead Books]

Excerpted from White Holes by Carlo Rovelli. Published by Riverhead Books. Copyright © 2023 by Carlo Rovelli. All rights reserved.


In 1974, Stephen Hawking made an unexpected theoretical discovery: black holes must emit heat. This, too, is a quantum tunnel effect, but a simpler one than the bounce of a Planck star: photons trapped inside the horizon escape thanks to the pass that quantum physics provides to everything. They “tunnel” beneath the horizon. 

So black holes emit heat, like a stove, and Hawking computed their temperature. Radiated heat carries away energy. As it loses energy, the black hole gradually loses mass (mass is energy), becoming ever lighter and smaller. Its horizon shrinks. In the jargon we say that the black hole “evaporates.” 
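For the curious, the temperature Hawking derived is a standard result (stated here for reference; it is not part of the excerpt). For a black hole of mass $M$:

\[
T_H = \frac{\hbar c^3}{8 \pi G M k_B}
\]

The temperature is inversely proportional to the mass, so as the hole radiates away energy and shrinks, it grows hotter and evaporates ever faster.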

Heat emission is the most characteristic of the irreversible processes: the processes that occur in one time direction and cannot be reversed. A stove emits heat and warms a cold room. Have you ever seen the walls of a cold room emit heat and heat up a warm stove? When heat is produced, the process is irreversible. In fact, whenever the process is irreversible, heat is produced (or something analogous). Heat is the mark of irreversibility. Heat distinguishes past from future. 

There is therefore at least one clearly irreversible aspect to the life of a black hole: the gradual shrinking of its horizon.

But, careful: the shrinking of the horizon does not mean that the interior of the black hole becomes smaller. The interior largely remains what it is, and the interior volume keeps growing. It is only the horizon that shrinks. This is a subtle point that confuses many. Hawking radiation is a phenomenon that regards mainly the horizon, not the deep interior of the hole. Therefore, a very old black hole turns out to have a peculiar geometry: an enormous interior (that continues to grow) and a minuscule (because it has evaporated) horizon that encloses it. An old black hole is like a glass bottle in the hands of a skillful Murano glassblower who succeeds in making the volume of the bottle increase as its neck becomes narrower. 
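This picture has been made quantitative: in a 2015 calculation, Christodoulou and Rovelli showed that the largest interior volume bounded by the horizon grows linearly with time, roughly as

\[
V \sim 3\sqrt{3}\,\pi\, m^{2}\, v
\]

in units where $G = c = 1$, with $m$ the mass of the hole and $v$ the time elapsed since collapse. The interior keeps accumulating volume throughout the hole's life, which is why an old hole can pair a tiny horizon with a vast inside.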

At the moment of the leap from black to white, a black hole can therefore have an extremely small horizon and a vast interior. A tiny shell containing vast spaces, as in a fable.

In fables, we come across small huts that, when entered, turn out to contain hundreds of vast rooms. This seems impossible, the stuff of fairy tales. But it is not so. A vast space enclosed in a small sphere is concretely possible. 

If this seems bizarre to us, it is only because we became habituated to the idea that the geometry of space is simple: it is the one we studied at school, the geometry of Euclid. But it is not so in the real world. The geometry of space is distorted by gravity. The distortion permits a gigantic volume to be enclosed within a tiny sphere. The gravity of a Planck star generates such a huge distortion. 

An ant that has always lived on a large, flat plaza will be amazed when it discovers that through a small hole it has access to a large underground garage. Same for us with a black hole. What the amazement teaches is that we should not have blind confidence in habitual ideas: the world is stranger and more varied than we imagine. 

The existence of large volumes within small horizons has also generated confusion in the world of science. The scientific community has split and is quarreling about the topic. In the rest of this section, I tell you about this dispute. It is more technical than the rest — skip it if you like — but it is a picture of a lively, ongoing scientific debate. 

The disagreement concerns how much information you can cram into an entity with a large volume but a small surface. One part of the scientific community is convinced that a black hole with a small horizon can contain only a small amount of information. Another disagrees. 

What does it mean to “contain information”? 

More or less this: Are there more things in a box containing five large and heavy balls, or in a box that contains twenty small marbles? The answer depends on what you mean by “more things.” The five balls are bigger and weigh more, so the first box contains more matter, more substance, more energy, more stuff. In this sense there are “more things” in the box of balls. 

But the number of marbles is greater than the number of balls. In this sense, there are “more things,” more details, in the box of marbles. If we wanted to send signals, by giving a single color to each marble or each ball, we could send more signals, more colors, more information, with the marbles, because there are more of them. More precisely: it takes more information to describe the marbles than it does to describe the balls, because there are more of them. In technical terms, the box of balls contains more energy, whereas the box of marbles contains more information.
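To put rough numbers on this (an illustration of my own, assuming each object can be given one of eight colors): each ball or marble then encodes

\[
\log_2 8 = 3 \text{ bits},
\]

so twenty marbles carry $20 \times 3 = 60$ bits of information while five balls carry only $5 \times 3 = 15$, even though the balls hold far more energy.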

An old black hole, considerably evaporated, has little energy, because the energy has been carried away via the Hawking radiation. Can it still contain much information, after much of its energy is gone? Here is the brawl.

Some of my colleagues convinced themselves that it is not possible to cram a lot of information beneath a small surface. That is, they became convinced that when most energy has gone and the horizon has become minuscule, only little information can remain inside. 

Another part of the scientific community (to which I belong) is convinced of the contrary. The information in a black hole—even a greatly evaporated one—can still be large. Each side is convinced that the other has gone astray. 

Disagreements of this kind are common in the history of science; one may say that they are the salt of the discipline. They can last long. Scientists split, quarrel, scream, wrangle, scuffle, jump at each other’s throats. Then, gradually, clarity emerges. Some end up being right, others end up being wrong. 

At the end of the nineteenth century, for instance, the world of physics was divided into two fierce factions. One of these followed Mach in thinking that atoms were just convenient mathematical fictions; the other followed Boltzmann in believing that atoms exist for real. The arguments were ferocious. Ernst Mach was a towering figure, but it was Boltzmann who turned out to be right. Today, we even see atoms through a microscope. 

I think that my colleagues who are convinced that a small horizon can contain only a small amount of information have made a serious mistake, even if at first sight their arguments seem convincing. Let’s look at these.

The first argument is that it is possible to compute how many elementary components (how many molecules, for example) form an object, starting from the relation between its energy and its temperature. We know the energy of a black hole (it is its mass) and its temperature (computed by Hawking), so we can do the math. The result indicates that the smaller the horizon, the fewer its elementary components. 
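The textbook formula behind this first argument (standard physics, though not spelled out in the excerpt) is the Bekenstein–Hawking entropy, which ties a black hole's information capacity to the area $A$ of its horizon rather than to its volume:

\[
S_{BH} = \frac{k_B c^3 A}{4 G \hbar}
\]

As evaporation shrinks the horizon's area, this count shrinks with it; that is the intuition the "dogma" rests on.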

The second argument is that there are explicit calculations that allow us to count these elementary components directly, using both of the most studied theories of quantum gravity—string theory and loop theory. The two archrival theories completed this computation within months of each other in 1996. For both, the number of elementary components becomes small when the horizon is small.

These seem like strong arguments. On the basis of these arguments, many physicists have accepted a “dogma” (they call it so themselves): the number of elementary components contained in a small surface is necessarily small. Within a small horizon there can only be little information. If the evidence for this “dogma” is so strong, where does the error lie? 

It lies in the fact that both arguments refer only to the components of the black hole that can be detected from the outside, as long as the black hole remains what it is. And these are only the components residing on the horizon. Both arguments, in other words, ignore that there can be components in the large interior volume. These arguments are formulated from the perspective of someone who remains far from the black hole, does not see the inside, and assumes that the black hole will remain as it is forever. If the black hole stays this way forever—remember—those who are far from it will see only what is outside or what is right on the horizon. It is as if for them the interior does not exist.

But the interior does exist! And not only for those (like us) who dare to enter, but also for those who simply have the patience to wait for the black horizon to become white, allowing what was trapped inside to come out. In other words, to imagine that the calculations of the number of components of a black hole given by string theory or loop theory are complete is to have failed to take on board Finkelstein’s 1958 article. The description of a black hole from the outside is incomplete. 

The loop quantum gravity calculation is revealing: the number of components is precisely computed by counting the number of quanta of space on the horizon. But the string theory calculation, on close inspection, does the same: it assumes that the black hole is stationary, and is based on what is seen from afar. It neglects, by hypothesis, what is inside and what will be seen from afar after the hole has finished evaporating — when it is no longer stationary. 

I think that certain of my colleagues err out of impatience (they want everything resolved before the end of evaporation, where quantum gravity becomes inevitable) and because they forget to take into account what is beyond that which can be immediately seen — two mistakes we all frequently make in life. 

Adherents to the dogma find themselves with a problem. They call it “the black hole information paradox.” They are convinced that inside an evaporated black hole there is no longer any information. Now, everything that falls into a black hole carries information. So a large amount of information can enter the hole. Information cannot vanish. Where does it go? 

To solve the paradox, the devotees of the dogma imagine that information escapes the hole in mysterious and baroque ways, perhaps in the folds of the Hawking radiation, like Ulysses and his companions escaping from the cave of the cyclops by hiding beneath sheep. Or they speculate that the interior of a black hole is connected to the outside by hypothetical invisible canals . . . Basically, they are clutching at straws—looking, like all dogmatists in difficulty, for abstruse ways of saving the dogma. 

But the information that enters the horizon does not escape by some arcane, magical means. It simply comes out after the horizon has been transformed from a black horizon into a white horizon.

In his final years, Stephen Hawking used to remark that there is no need to be afraid of the black holes of life: sooner or later, there will be a way out of them. There is — via the child white hole.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-white-holes-carlo-rovelli-riverhead-153058062.html?src=rss

What is going on with OpenAI and Sam Altman?

It’s been an eventful weekend at OpenAI’s headquarters in San Francisco. In a surprise move Friday, the company’s board of directors fired co-founder and CEO Sam Altman, setting off an institutional crisis that has seen senior staff resign in protest, with nearly 700 rank-and-file employees threatening to do the same. Now the board is facing calls for its own resignation, even after Microsoft had already swooped in to hire Altman’s cohort away for its own AI projects. Here’s everything you need to know about the situation to hold your own at Thanksgiving on Thursday.

How it started

Thursday, November 16

This saga began forever ago by internet standards, or last Thursday in the common parlance. Per a tweet from former company president Greg Brockman, that was when OpenAI’s head researcher and board member, Ilya Sutskever, contacted Altman to set up a meeting the following day at noon. In that same tweet chain (posted Friday night), Brockman said the company had informed the incoming interim CEO, OpenAI CTO Mira Murati, of the upcoming firings that night as well:

- Last night, Sam got a text from Ilya asking to talk at noon Friday. Sam joined a Google Meet and the whole board, except Greg, was there. Ilya told Sam he was being fired and that the news was going out very soon.

- At 12:19PM, Greg got a text from Ilya asking for a quick call. At 12:23PM, Ilya sent a Google Meet link. Greg was told that he was being removed from the board (but was vital to the company and would retain his role) and that Sam had been fired. Around the same time, OpenAI published a blog post.

- As far as we know, the management team was made aware of this shortly after, other than Mira who found out the night prior.

Friday, November 17

Everything kicked off at that Friday noon meeting. Brockman was informed that he would be demoted — removed from the board but remaining president of the company, reporting to Murati once she was installed. Barely ten minutes later, Brockman alleges, Altman was informed of his termination just as the public announcement was published. Sutskever subsequently sent a company-wide email stating that “Change can be scary,” per The Information.

Later that afternoon, the OpenAI board along with new CEO Murati addressed a “shocked” workforce in an all-hands meeting. During that meeting, Sutskever reportedly told employees the moves will ultimately “make us feel closer."

At this point, Microsoft, which had just dropped a cool $10 billion into OpenAI’s coffers in January as part of a massive, multi-year investment deal with the company, weighed in on the day’s events. CEO Satya Nadella released the following statement:

As you saw at Microsoft Ignite this week, we’re continuing to rapidly innovate for this era of AI, with over 100 announcements across the full tech stack from AI systems, models and tools in Azure, to Copilot. Most importantly, we’re committed to delivering all of this to our customers while building for the future. We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda and an exciting product roadmap; and remain committed to our partnership, and to Mira and the team. Together, we will continue to deliver the meaningful benefits of this technology to the world.

By Friday evening, things really began to spiral. Brockman announced via Twitter that he quit in protest. Director of research Jakub Pachocki and head of preparedness Aleksander Madry announced that they too were resigning in solidarity.

How it’s going

Saturday/Sunday, November 18/19

On Saturday, November 18, the backtracking begins. Altman’s Friday termination notice states that, “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”

The following morning, OpenAI COO Brad Lightcap wrote in internal communications obtained by Axios that the decision “took [the management team] by surprise” and that management had been in conversation “with the board to try to better understand the reasons and process behind their decision.”

“We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices,” Lightcap wrote. “This was a breakdown in communication between Sam and the board … We still share your concerns about how the process has been handled, are working to resolve the situation, and will provide updates as we’re able.”

A report from The Information midmorning Saturday revealed that OpenAI’s prospective share sale, led by Thrive Capital and valuing the company at $86 billion, is in jeopardy following Altman’s firing. Per three unnamed sources within the company, even if the sale does go through, it will likely be at a lower valuation. The price of OpenAI shares has tripled since the start of the year, and quadrupled since 2021, so current and former employees, many of whom were offered stock as hiring incentives, were in line for a big payout. That payout might not be coming anymore.

On Saturday afternoon, Altman announced on Twitter that he would be forming a new AI startup with Brockman’s assistance, potentially doing something with AI chips to counter NVIDIA’s dominance in the sector. At this point OpenAI’s many investors, rightly concerned that their money was about to go up in generative smoke, began pressuring the board of directors to reinstate Altman and Brockman.

Microsoft’s Satya Nadella reportedly led that charge. Bloomberg’s sources say Nadella was “furious” over the decision to oust Altman — especially having been given just “a few minutes” of notice before the public announcement was made — even going so far as to recruit Altman and his cohort for Microsoft’s own AI efforts.

Microsoft also has leverage in the form of its investment, much of which is in the form of cloud compute credits (which the GPT platform needs to operate) rather than hard currency. Denying those credits to OpenAI would effectively hobble the startup’s operations.

Interim CEO Mira Murati’s 48-hour tenure at the head of OpenAI came to an end on Sunday when the board named Twitch co-founder Emmett Shear as the new interim CEO. According to Bloomberg reporter Ashlee Vance, Murati had planned to hire Altman and Brockman back in a move designed to force the board of directors into action. Instead, the board “went into total silence” and “found their own CEO Emmett Shear.” Altman spent Sunday at OpenAI HQ, posting an image of himself holding up a green “Guest” badge.

“First and last time i ever wear one of these,” he wrote.

Monday, November 20

On Monday morning, an open letter from more than 500 OpenAI employees circulated online. The group threatened to quit and join the new Microsoft subsidiary unless the board itself resigns and brings back Altman and Brockman (and presumably the other two as well). The number of signatories has since grown to nearly 700.

Despite Sutskever’s early morning mea culpa, that seemed unlikely. The board had missed its deadline to respond to the open letter, Microsoft claimed to have hired Altman and Brockman, and Shear had been named interim CEO.

Shear stepped down as CEO of Twitch in March after leading the company for more than 16 years, and has been working as a partner at Y Combinator for the past seven months. Amazon acquired the live video streaming app in 2014 for just under $1 billion.

“I took this job because I believe that OpenAI is one of the most important companies currently in existence. When the board shared the situation and asked me to take the role, I did not make the decision lightly,” Shear told OpenAI employees Monday.

“Ultimately I felt that I had a duty to help if I could,” he added.

Shear was quick to point out that Altman’s termination was “handled very badly, which has seriously damaged our trust.” As such he announced the company will hire an independent investigator to report on the run-up to Friday’s SNAFU.

“The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that,” Shear continued. “I’m not crazy enough to take this job without board support for commercializing our awesome models.”

Following his departure to Microsoft on Monday, Altman posted, “the OpenAI leadership team, particularly mira brad and jason but really all of them, have been doing an incredible job through this that will be in the history books.”

“Incredibly proud of them,” he wrote.

There was one more twist in store on Monday, however. Reports suggested that Altman's move to Microsoft wasn't a sure thing — and that he was still angling for a return to OpenAI.

Tuesday, November 21

Tuesday was another eventful day in this soap opera-esque saga. Altman was said to be discussing his potential return to OpenAI with the board, just four days after those same people booted him out of the company. Bloomberg reported that, until Monday, the board "largely refused to engage" with Altman, so the fresh talks were notable. The negotiations were said to involve board member Adam D’Angelo (who is CEO of Quora) along with OpenAI investors who had been pushing for Altman's return.

Things largely remained quiet on the OpenAI front for several hours. However, on Tuesday afternoon, Brockman posted about ChatGPT's voice conversation feature becoming available to all users. That raised a few eyebrows, given that he seemed not to be involved with the company at the time.

The biggest shock of all emerged late on Tuesday night (early Wednesday on the East Coast) when OpenAI said it had reached an agreement in principle for Altman to return as CEO. The company noted that all parties were "collaborating to figure out the details." Brockman also said late Tuesday that he was returning and "getting back to coding tonight."

The board has a new look as well, with only D’Angelo remaining. Google Maps co-creator and former Salesforce co-CEO Bret Taylor succeeded Brockman as chair. Former US Treasury Secretary Larry Summers is the other member of the three-person board, which will reportedly vet a new set of up to nine permanent directors who will have the task of resetting OpenAI's governance. One of those board seats is said to be earmarked for Altman, while Microsoft is set to take one.

"I love OpenAI, and everything I’ve done over the past few days has been in service of keeping this team and its mission together," Altman said after the news of his return broke. "With the new board and with Satya's support, I'm looking forward to returning to OpenAI and building on our strong partnership with [Microsoft]." Altman added that when he decided to join Microsoft on Sunday evening, he felt at the time that "was the best path for me and the team."

"We are encouraged by the changes to the OpenAI board," Nadella wrote on X. "We believe this is a first essential step on a path to more stable, well-informed, and effective governance." Other OpenAI investors, such as Thrive Capital, were pleased about Altman's return, as was Shear.

"I am deeply pleased by this result, after ~72 very intense hours of work," Shear wrote. "Coming into OpenAI, I wasn’t sure what the right path would be. This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution." In a nod to his time at Twitch and that platform's speedrunning community, Shear joked that he'd zipped through his time as OpenAI CEO in 55 hours and 32 minutes.

Many OpenAI workers went to the company's office to celebrate Altman and Brockman's return. At one point during the party, a smoke machine was said to have triggered a fire alarm.

Altman and Brockman may not have too much time to enjoy their stunning comeback before it's back to serious business, though. It also emerged on Tuesday that yet another lawsuit has been filed alleging that OpenAI used rights holders' intellectual property without permission to train its generative AI models. In this case, a group of non-fiction authors say OpenAI did not compensate them for feeding their books and academic journals into its systems.

This article originally appeared on Engadget at https://www.engadget.com/what-is-going-on-with-openai-and-sam-altman-215725312.html?src=rss

Stadium card stunts and the art of programming a crowd

With college bowl season just around the corner, football fans across the nation will be dazzled, not just by the on-field action, but also by the intricate "card stunts" performed by members of the stadium's audience. The highly coordinated crowd work is capable of producing detailed images that resemble the pixelated images on computer screens — and which are coded in much the same manner.  

Michael Littman's new book, Code to Joy: Why Everyone Should Learn a Little Programming, is filled with similar examples of how the machines around us operate and how we need not distrust an automaton-filled future so long as we learn to speak their language (at least until they finish learning ours). From sequencing commands to storing variables, Code to Joy provides an accessible and entertaining guide to the very basics of programming for fledgling coders of all ages.  

[Cover image: Code to Joy by Michael L Littman. Credit: MIT Press]

Excerpted from Code to Joy: Why Everyone Should Learn a Little Programming by Michael L Littman. Published by MIT Press. Copyright © 2023 by Michael L Littman. All rights reserved.


“GIMME A BLUE!”

Card stunts, in which a stadium audience holds up colored signs to make a giant, temporary billboard, are like flash mobs where the participants don’t need any special skills and don’t even have to practice ahead of time. All they have to do is show up and follow instructions in the form of a short command sequence. The instructions guide a stadium audience to hold aloft the right poster-sized colored cards at the right time as announced by a stunt leader. A typical set of card-stunt instructions begins with instructions for following the instructions: 

  • listen to instructions carefully 

  • hold top of card at eye level (not over your head) 

  • hold indicated color toward field (not facing you) 

  • pass cards to aisle on completion of stunts (do not rip up the cards)

These instructions may sound obvious, but not stating them surely leads to disaster. Even so, you know there’s gotta be a smart alec who asks afterward, “Sorry, what was that first one again?” It’s definitely what I’d do. 

Then comes the main event, which, for one specific person in the crowd, could be the command sequence: 

  1. Blue 

  2. Blue 

  3. Blue 

Breathtaking, no? Well, maybe you have to see the bigger picture. The whole idea of card stunts leverages the fact that the members of a stadium crowd sit in seats arranged in a grid. By holding up colored rectangular sign boards, they transform themselves into something like a big computer display screen. Each participant acts as a single picture element — person pixels! Shifts in which cards are being held up change the image or maybe even cause it to morph like a larger-than-life animated gif. 

Card stunts began as a crowd-participation activity at college sports in the 1920s. They became much less popular in the 1970s when it was generally agreed that everyone should do their own thing, man. In the 1950s, though, there was a real hunger to create ever more elaborate displays. Cheer squads would design the stunts by hand, then prepare individual instructions for each of a thousand seats. You’ve got to really love your team to dedicate that kind of energy. A few schools in the 1960s thought that those newfangled computer things might be helpful for taking some of the drudgery out of instruction preparation and they designed programs to turn sequences of hand-drawn images into individualized instructions for each of the participants. With the help of computers, people could produce much richer individualized sequences for each person pixel that said when to lift a card, what color to lift, and when to put it down or change to another card. So, whereas the questionnaire example from the previous section was about people making command sequences for the computer to follow, this example is about the computer making command sequences for people to follow. And computer support for automating the process of creating command sequences makes it possible to create more elaborate stunts. That resulted in a participant’s sequence of commands looking like:

  • up on 001 white 

  • 003 blue 

  • 005 white 

  • 006 red 

  • 008 white 

  • 013 blue 

  • 015 white 

  • 021 down 

  • up on 022 white 

  • 035 down 

  • up on 036 white 

  • 043 blue 

  • 044 down 

  • up on 045 white 

  • 057 metallic red 

  • 070 down

Okay, it’s still not as fun to read the instructions as to see the final product—in this actual example, it’s part of an animated Stanford “S.” To execute these commands in synchronized fashion, an announcer in the stadium calls out the step number (“Forty-one!”) and each participant can tell from his or her instructions what to do (“I’m still holding up the white card I lifted on 36, but I’m getting ready to swap it for a blue card when the count hits 43”). 
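To make the mechanics concrete, here is a minimal Python sketch of how one participant (or a simulator standing in for one) could follow such a sequence. The instruction format and function names are invented for this illustration; they are not from an actual card-stunt system:

```python
# A participant's command sequence: (count, action), where the action is a
# card color to raise or "down". The format is invented for this sketch.
instructions = [
    (1, "white"), (3, "blue"), (5, "white"), (6, "red"),
    (8, "white"), (13, "blue"), (15, "white"), (21, "down"),
]

def card_at(count, instructions):
    """Return what this seat should be showing when the announcer calls `count`."""
    showing = "down"  # nothing raised before the first instruction
    for when, action in instructions:  # instructions are sorted by count
        if when <= count:
            showing = action  # the latest instruction at or before `count` wins
        else:
            break
    return showing

print(card_at(7, instructions))   # -> "red" (raised on count 6, held until 8)
print(card_at(30, instructions))  # -> "down" (cards went down on count 21)
```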

As I said, it’s not that complicated for people to be part of a card stunt, but it’s a pretty cool example of creating and following command sequences where the computer tells us what to do instead of the other way around. And, as easy as it might be, sometimes things still go wrong. At the 2016 Democratic National Convention, Hillary Clinton’s supporters planned an arena-wide card stunt. Although it was intended to be a patriotic display of unity, some attendees didn’t want to participate. The result was an unreadable mess that, depressingly, was supposed to spell out “Stronger Together.” 

These days, computers make it a simple matter to turn a photograph into instructions about which colors to hold up where. Essentially, any digitized image is already a set of instructions for what mixture of red, blue, and green to display at each picture position. One interesting challenge in translating an image into card-stunt instructions is that typical images consist of millions of colored dots (megapixels), whereas a card stunt section of a stadium has maybe a thousand seats. Instead of asking each person to hold up a thousand tiny cards, it makes more sense to compute an average of the colors in that part of the image. Then, from the collection of available colors (say, the classic sixty-four Crayola options), the computer just picks the closest one to the average. 

If you think about it, it’s not obvious how a computer can average colors. You could mix green and yellow and decide that the result looks like the spring green crayon, but how do you teach a machine to do that? Let’s look at this question a little more deeply. It’ll help you get a sense of how computers can help us instruct them better. Plus, it will be our entry into the exciting world of machine learning. 

There are actually many, many ways to average colors. A simple one is to take advantage of the fact that each dot of color in an image file is stored as the amount of red, green, and blue color in it. Each component color is represented as a whole number between 0 and 255, where 255 was chosen because it’s the largest value you can make with eight binary digits, or bits. Using quantities of red-blue-green works well because the color receptors in the human eye translate real-world colors into this same representation. That is, even though purple corresponds to a specific wavelength of light, our eyes see it as a particular blend of green, blue, and red. Show someone that same blend, and they’ll see purple. So, to summarize a big group of pixels, just average the amount of blue in those pixels, the amount of red in those pixels, and the amount of green in those pixels. That basically works. Now, it turns out, for a combination of physical, perceptual, and engineering reasons, you get better results by squaring the values before averaging, and square rooting the values after averaging. But that’s not important right now. The important thing is that there is a mechanical way to average a bunch of colored dots to get a single dot whose color summarizes the group. 
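In code, that averaging recipe might look like the following Python sketch, assuming pixels arrive as (red, green, blue) tuples of 0-255 values (the function name and structure are mine, for illustration):

```python
from math import sqrt

def average_color(pixels):
    """Summarize a block of (r, g, b) pixels the way the text describes:
    square each component, average the squares, then take the square root."""
    n = len(pixels)
    return tuple(
        sqrt(sum(p[channel] ** 2 for p in pixels) / n)
        for channel in range(3)  # 0 = red, 1 = green, 2 = blue
    )

# Averaging a two-pixel block of pure red and pure green:
print(average_color([(255, 0, 0), (0, 255, 0)]))
# -> (~180.3, ~180.3, 0.0), brighter than the naive average of (127.5, 127.5, 0)
```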

Once that average color is produced, the computer needs a way of finding the closest color to the cards we have available. Is that more of a burnt sienna or a red-orange? A typical (if imperfect) way to approximate how similar two colors are using their red-blue-green values is what’s known as the Euclidean distance formula. Here’s what that looks like as a command sequence:

  • take the difference between the amount of red in the two colors; square it 

  • take the difference between the amount of blue in the two colors; square it 

  • take the difference between the amount of green in the two colors; square it 

  • add the three squares together 

  • take the square root

So to figure out what card should be held up to best capture the average of the colors in the corresponding part of the image, just figure out which of the available colors (blue, yellow green, apricot, timberwolf, mahogany, periwinkle, etc.) has the smallest distance to that average color at that location. That’s the color of the card that should be given to the pixel person sitting in that spot in the grid. 
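Expressed in Python, the distance formula and the closest-card lookup might look like this (the palette values are approximate Crayola colors, included only to make the example runnable):

```python
from math import sqrt

def color_distance(c1, c2):
    """Euclidean distance between two (r, g, b) colors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

# Approximate Crayola values; a real stunt would list its actual card stock.
palette = {
    "blue": (31, 117, 254),
    "yellow green": (197, 227, 132),
    "apricot": (253, 217, 181),
    "timberwolf": (219, 215, 210),
    "mahogany": (205, 74, 74),
    "periwinkle": (197, 208, 230),
}

def closest_card(average, palette):
    """Pick the available card color with the smallest distance to `average`."""
    return min(palette, key=lambda name: color_distance(palette[name], average))

print(closest_card((200, 220, 140), palette))  # -> "yellow green"
```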

The similarity between this distance calculation and the color averaging operation is, I’m pretty sure, just a coincidence. Sometimes a square root is just a square root. 

Stepping back, we can use these operations — color averaging and finding the closest color to the average — to get a computer to help us construct the command sequence for a card stunt. The computer takes as input a target image, a seating chart, and a set of available color cards, and then creates a map of which card should be held up in each seat to best reproduce the image. In this example, the computer mostly handles bookkeeping and doesn’t have much to do in terms of decision-making beyond the selection of the closest color. But the upshot here is that the computer is taking over some of the effort of writing command sequences. We’ve gone from having to select every command for every person pixel at every moment in the card stunt to selecting images and having the computer generate the necessary commands. 

This shift in perspective opens up the possibility of turning over more control of the command-sequence generation process to the machine. In terms of our 2 × 2 grid from chapter 1, we can move from telling (providing explicit instructions) to explaining (providing explicit incentives). For example, there is a variation of this color selection problem that is a lot harder and gives the computer more interesting work to do. Imagine that we could print up cards of any color we needed but our print shop insists that we order the cards in bulk. They can only provide us with eight different card colors, but we can choose any colors we want to make up that eight. (Eight is the number of different values we can make with 3 bits — bits come up a lot in computing.) So we could choose blue, green, blue-green, blue-violet, cerulean, indigo, cadet blue, and sky blue, and render a beautiful ocean wave in eight shades of blue. Great! 

But then there would be no red or yellow to make other pictures. Limiting the color palette to eight may sound like a bizarre constraint, but it turns out that early computer monitors worked exactly like that. They could display any of millions of colors, but only eight distinct ones on the screen at any one time. 

With this constraint in mind, rendering an image in colored cards becomes a lot trickier. Not only do you have to decide which color from our set of color options to make each card, just as before, but you have to pick which eight colors will constitute that set of color options. If we’re making a face, a variety of skin tones will be much more useful than distinctions among shades of green or blue. How do we go from a list of the colors we wish we could use because they are in the target image to the much shorter list of colors that will make up our set of color options? 

Machine learning, and specifically an approach known as clustering or unsupervised learning, can solve this color-choice problem for us. I will tell you how. But first let’s delve into a related problem that comes from turning a face into a jigsaw puzzle. As in the card-stunt example, we’re going to have the computer design a sequence of commands for rendering a picture. But there’s a twist—the puzzle pieces available for constructing the picture are fixed in advance. Similar to the dance-step example, it will use the same set of commands and consider which sequence produces the desired image.
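The book saves the "how" for later, but the clustering approach it names is typically something like k-means: start with eight candidate colors, assign every pixel in the target image to its nearest candidate, move each candidate to the average of its assigned pixels, and repeat. Here is a rough Python sketch of that idea (a generic illustration, not the book's own implementation):

```python
import random
from math import sqrt

def distance(c1, c2):
    return sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def mean(cluster):
    n = len(cluster)
    return tuple(sum(p[i] for p in cluster) / n for i in range(3))

def choose_palette(pixels, k=8, rounds=10):
    """Pick k card colors for an image with a bare-bones k-means loop."""
    palette = random.sample(pixels, k)  # k random pixels as starting guesses
    for _ in range(rounds):
        clusters = [[] for _ in range(k)]
        for p in pixels:  # assign each pixel to its nearest palette color
            nearest = min(range(k), key=lambda i: distance(palette[i], p))
            clusters[nearest].append(p)
        # move each palette color to the center of the pixels it attracted
        palette = [mean(c) if c else palette[i] for i, c in enumerate(clusters)]
    return palette
```

Run on a photo of a face, the eight surviving colors gravitate toward the skin tones that dominate the image, which is exactly the behavior the color-choice problem calls for.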

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-code-to-joy-michael-l-littman-mit-press-153036241.html?src=rss

OpenAI reportedly considering reinstating just-ousted CEO Sam Altman

Following his surprise firing on Friday, former OpenAI CEO Sam Altman might not be as out of a job as we initially thought, according to a report from The Verge on Saturday. Sources close to Altman reportedly say that the board itself, in a stunning reversal, has "agreed in principle" to resign while reinstating him to his former position. However, the board has since reportedly missed a 5pm PT deadline regarding the decision.

Shortly after Altman's firing on Friday afternoon, several senior staffers, including former Chairman and President Greg Brockman, Director of Research Jakub Pachocki, Head of Preparedness Aleksander Madry and Senior Researcher Szymon Sidor, tendered their resignations in protest. Additional OpenAI staffers were supposedly set to quit in solidarity as well. They're reportedly willing to follow Altman, a la Jerry Maguire, to a new AI startup venture, should he decide to launch one. 

An internal memo circulated after Altman's dismissal argued that his termination was not related to "malfeasance or anything related to our financial, business, safety or security/privacy practices,” per Axios' reporting.

Microsoft is a major investor in the OpenAI venture, having injected another $10 billion into the project's coffers this past January as part of a long-term partnership between the two. In all, it has invested around $13 billion in OpenAI. In a statement, Microsoft said it maintains the "utmost confidence" in OpenAI interim CEO Mira Murati and "remains confident" in the partnership overall. 

Despite those assurances, rank-and-file employees were given little notice prior to the official announcement of Altman's ouster (Altman himself received even less — reportedly, just 5 to 10 minutes). Altman had, in the days leading up to his termination, remained an active supporter and recruiter for the firm, appearing at the Asia-Pacific Economic Cooperation forum less than a day prior to his firing. 

According to The New York Times, neither Altman nor Brockman is guaranteed a return to power, largely on account of the company's non-profit origins, which preclude investors from directing company-wide decisions. Those choices instead rest with the board itself. Altman and Brockman were both members of the OpenAI board. However, with their departures, only lead researcher Ilya Sutskever; Quora CEO Adam D’Angelo; director of strategy at Georgetown’s Center for Security and Emerging Technology Helen Toner; and computer scientist Tasha McCauley remain members — at least, through the weekend.

“We are still working towards a resolution and we remain optimistic,” Chief Strategy Officer Jason Kwon wrote to company staff in a Saturday memo, per The Information. “By resolution, we mean bringing back Sam, Greg, Jakub [Pachocki], Szymon [Sidor], Aleksander [Madry] and other colleagues (sorry if I missed you!) and remaining the place where people who want to work on AGI research, safety, products and policy can do their best work.”

This article originally appeared on Engadget at https://www.engadget.com/openai-potentially-considering-reinstating-its-freshly-ousted-ceo-sam-altman-051223213.html?src=rss

OpenAI fires CEO Sam Altman as ‘board no longer has confidence’ in his leadership

In a surprise shakeup of its C-suite Friday, OpenAI's board of directors announced that CEO Sam Altman has been fired and will be leaving both the company and the board, effective immediately. Chief Technology Officer Mira Murati has been named interim CEO.

Altman's ousting reportedly follows an internal "deliberative review process" which found he had not been "consistently candid in his communications with the board, hindering its ability to exercise its responsibilities," the company announced. As such, "the board no longer has confidence in his ability to continue leading OpenAI."

OpenAI, which owns popular AI chatbot ChatGPT, thanked Altman for his "many contributions to the founding and growth of OpenAI," but believes that "as the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO." The board added it has "the utmost confidence in her ability to lead OpenAI during this transition period.”

OpenAI's board comprises the company's Chief Scientist Ilya Sutskever as well as Chairman and President Greg Brockman. Independent advisors, who hold no equity in the company, are also board members: Quora CEO Adam D’Angelo, tech entrepreneur Tasha McCauley and privacy advocate Helen Toner of the Georgetown Center for Security and Emerging Technology. Altman was also considered an independent advisor on the board, despite being CEO of the company prior to his departure.

Altman's personal profile has grown alongside the meteoric rise of generative AI technologies over the past year, making him something of the unofficial face for both OpenAI and the burgeoning industry as a whole. Previously the president of Y Combinator, Altman has appeared before Congressional panels and committees, attended Senate AI Insight forums and made numerous rounds at industry conferences.

The suddenness of Friday's announcement is certainly surprising given how steadily, and heavily, Altman has been promoting his company and its products in the days leading up to his termination.

Just last week, Altman took the stage at OpenAI's 2023 DevDay to announce a faster and more responsive GPT-4 Turbo platform as well as smaller, application-specific models simply dubbed GPTs. On Thursday Altman attended the Asia-Pacific Economic Cooperation CEO Summit in San Francisco. "Something has qualitatively changed,” he said during the event. “Now I can talk to this thing. It’s like the ‘Star Trek’ computer I was always promised… I think a lot of the world has collectively gone through a lurch this year to catch up.” 

Altman and Murati aren't the only ones caught in this shuffle. Brockman was also notified that he would have to step down from his role as board President. However, "based on today's news, I quit," he wrote to OpenAI employees in a company-wide email Friday.

Microsoft, which signed a "multibillion-dollar" partnership extension with OpenAI in January, saw its stock dip in market trading Friday afternoon. Despite the hit, Microsoft will maintain its existing partnership with OpenAI, a company spokesperson told Engadget via email. “We have a long-term partnership with OpenAI and Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers,” the spokesperson said. However, according to a report by The Information, few people within the Microsoft organization were warned of Altman's sacking prior to the public news release, including teams tasked with developing products based on OpenAI tech. 

The software giant's stance isn't surprising given the reported details of its $10 billion investment this past January, which bumped OpenAI's valuation to $23 billion. Microsoft will reportedly receive a lion's share of OpenAI's profits, some 75 percent, until that investment has been repaid, whereupon that figure will reportedly drop to 49 percent.

"We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda and an exciting product roadmap; and remain committed to our partnership, and to Mira and the team," Microsoft CEO Satya Nadella said in a prepared statement Friday. "Together, we will continue to deliver the meaningful benefits of this technology to the world."

Altman co-founded OpenAI with Elon Musk in 2015 as a nonprofit and has served as CEO of its for-profit arm since 2019. The release of the company's ultra-popular ChatGPT conversational AI last November is credited with kickstarting the generative AI boom.

The system, originally built atop the GPT-3.5 platform, initially enabled users to converse with a digital agent — one more capable than the previous generation of Siri, Alexa and Assistant — using natural language. Those capabilities quickly expanded to include myriad languages and modalities, as well as the ability to output programming code and control remote processes and devices through API access.

This article originally appeared on Engadget at https://www.engadget.com/openai-ceo-sam-altman-ousted-as-board-no-longer-has-confidence-in-his-leadership-204924006.html?src=rss

What happened to Washington’s wildlife after the largest dam removal in US history

The man-made flood that miraculously saved our heroes at the end of O Brother, Where Art Thou? was an actual occurrence in the 19th and 20th centuries — and a fairly common one at that — as river valleys across the American West were dammed up and drowned out at the altar of economic progress and electrification. Such was the case with Washington State's Elwha River in the 1910s. Its dam provided the economic impetus to develop the Olympic Peninsula but also blocked off nearly 40 miles of river from the open ocean, preventing native salmon species from making their annual spawning trek. However, after decades of legal wrangling by the Lower Elwha Klallam Tribe, the biggest dams on the river today are the kind made by beavers. 

In this week's Hitting the Books selection, Eat, Poop, Die: How Animals Make Our World, University of Vermont conservation biologist Joe Roman recounts how quickly nature can recover when a 108-foot tall migration barrier is removed from the local ecosystem. This excerpt discusses the naturalists and biologists who strive to understand how nutrients flow through the Pacific Northwest's food web, and the myriad ways it's impacted by migratory salmon. The book as a whole takes a fascinating look at how the most basic of biological functions (yup, poopin!) of even just a few species can potentially impact life in every corner of the planet.   

[Cover image: Eat, Poop, Die by Joe Roman. Credit: Hachette Books]

Excerpted from Eat, Poop, Die: How Animals Make Our World by Joe Roman. Published by Hachette Book Group. Copyright © 2023 by Joe Roman. All rights reserved.


When construction began in 1910, the Elwha Dam was designed to attract economic development to the Olympic Peninsula in Washington, supplying the growing community of Port Angeles with electric power. It was one of the first high-head dams in the region, with water moving more than a hundred yards from the reservoir to the river below. Before the dam was built, the river hosted ten anadromous fish runs. All five species of Pacific salmon — pink, chum, sockeye, Chinook, and coho — were found in the river, along with bull trout and steelhead. In a good year, hundreds of thousands of salmon ascended the Elwha to spawn. But the contractors never finished the promised fish ladders. As a result, the Elwha cut off most of the watershed from the ocean and 90 percent of migratory salmon habitat.

Thousands of dams block the rivers of the world, decimating fish populations and clogging nutrient arteries from sea to mountain spring. Some have fish ladders. Others ship fish across concrete walls. Many act as permanent barriers to migration for thousands of species.

By the 1980s, there was growing concern about the effect of the Elwha on native salmon. Populations had declined by 95 per cent, devastating local wildlife and Indigenous communities. River salmon are essential to the culture and economy of the Lower Elwha Klallam Tribe. In 1986, the tribe filed a motion through the Federal Energy Regulatory Commission to stop the relicensing of the Elwha Dam and the Glines Canyon Dam, an upstream impoundment that was even taller than the Elwha. By blocking salmon migration, the dams violated the 1855 Treaty of Point No Point, in which the Klallam ceded a vast amount of the Olympic Peninsula on the stipulation that they and all their descendants would have “the right of taking fish at usual and accustomed grounds.” The tribe partnered with environmental groups, including the Sierra Club and the Seattle Audubon Society, to pressure local and federal officials to remove the dams. In 1992, Congress passed the Elwha River Ecosystem and Fisheries Restoration Act, which authorized the dismantling of the Elwha and Glines Canyon Dams.

The demolition of the Elwha Dam was the largest dam-removal project in history; it cost $350 million and took about three years. Beginning in September 2011, coffer dams shunted water to one side as the Elwha Dam was decommissioned and destroyed. The Glines Canyon was more challenging. According to Pess, a “glorified jackhammer on a floating barge” was required to dismantle the two-hundred-foot impoundment. The barge didn’t work when the water got low, so new equipment was helicoptered in. By 2014, most of the dam had come down, but rockfall still blocked fish passage. It took another year of moving rocks and concrete before the fish had full access to the river.

The response of the fish was quick, satisfying, and sometimes surprising. Elwha River bull trout, landlocked for more than a century, started swimming back to the ocean. The Chinook salmon in the watershed increased from an average of about two thousand to four thousand. Many of the Chinook were descendants of hatchery fish, Pess told me over dinner at Nerka. “If ninety percent of your population prior to dam removal is from a hatchery, you can’t just assume that a totally natural population will show up right away.” Steelhead trout, which had been down to a few hundred, now numbered more than two thousand.

Within a few years, a larger mix of wild and local hatchery fish had moved back to the Elwha watershed. And the surrounding wildlife responded too. The American dipper, a river bird, fed on salmon eggs and insects infused with the new marine-derived nutrients. Their survival rates went up, and the females who had access to fish became healthier than those without. They started having multiple broods and didn’t have to travel so far for their food, a return, perhaps, to how life was before the dam. A study in nearby British Columbia showed that songbird abundance and diversity increased with the number of salmon. They weren’t eating the fish — in fact, they weren’t even present during salmon migration. But they were benefiting from the increase in insects and other invertebrates.

Just as exciting, the removal of the dams rekindled migratory patterns that had gone dormant. Pacific lamprey started traveling up the river to breed. Bull trout that had spent generations in the reservoir above the dam began migrating out to sea. Rainbow trout swam up and down the river for the first time in decades. Over the years, the river started to look almost natural as the sediments that had built up behind the dams washed downstream.

The success on the Elwha could be the start of something big, encouraging the removal of other aging dams. There are plans to remove the Enloe Dam, a fifty-four-foot concrete wall in northern Washington, which would open up two hundred miles of river habitat for steelhead and Chinook salmon. Critically endangered killer whales, downstream off the coast of the Pacific Northwest, would benefit from this boost in salmon, and as there are only seventy individuals remaining, they need every fish they can get.

The spring Chinook salmon run on the Klamath River in Northern California is down 98 percent since eight dams were constructed in the twentieth century. Coho salmon have also been in steep decline. In the next few years, four dams are scheduled to come down with the goal of restoring salmon migration. Farther north, the Snake River dams could be breached to save the endangered salmon of Washington State. If that happens, historic numbers of salmon could come back — along with the many species that depended on the energy and nutrients they carry upstream.

Other dams are going up in the West — dams of sticks and stones and mud. Beaver dams help salmon by creating new slow-water habitats, critical for juvenile salmon. In Washington, beaver ponds cool the streams, making them more productive for salmon. In Alaska, the ponds are warmer, and the salmon use them to help metabolize what they eat. Unlike the enormous concrete impoundments, designed for stability, beaver dams are dynamic, heterogeneous landscapes that salmon can easily travel through. Beavers eat, they build dams, they poop, they move on. We humans might want things to be stable, but Earth and its creatures are dynamic.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-eat-poop-die-joe-roman-hatchette-books-153032502.html?src=rss

Humane’s Ai Pin costs $699 and ships in early 2024, which is about all we know for certain

Wearable startup Humane AI has been dripping details about its upcoming device, the AI Pin, for months now. We first saw it at a TED Talk in May and, more recently, got a glimpse of its promised capabilities at Paris Fashion Week, ahead of Thursday's official unveiling. However, many questions about how the wearable AI will actually do what it promises remain unanswered.

Here's what we do know: Humane is a much-hyped startup founded by former Apple employees. Its first product is the Humane AI Pin, a pocket-worn wearable AI assistant that can reportedly perform the tasks that many modern cellphones and digital assistants do, but in a radically different form factor. It has no screen, instead reportedly operating primarily through voice commands and occasionally through a virtual screen projected onto the user's hand. It costs $699, plus another $24 a month, because Humane insisted on launching its own MVNO (mobile virtual network operator) on top of T-Mobile's network. That $24/month "Humane Subscription" includes a dedicated cell phone number for the Pin with unlimited talk, text and data, rather than allowing the device to tether to your existing phone. 

[Image: Humane AI Pin. Credit: Humane AI]

The device itself will be available in three colors — Eclipse, Equinox, and Lunar — when orders begin shipping in early 2024. The magnetic clip that affixes the device to your clothing doubles as the battery storage and includes a pair of backup batteries for users to keep with them. The AI Pin also sports an ultra-wide RGB camera, depth and motion sensors, all of which allow "the device to see the world as you see it," per the company's release.

The AI Pin will reportedly run on a Snapdragon processor with a dedicated Qualcomm AI Engine supporting its custom Cosmos OS. Its "entirely new AI software framework, the Ai Bus," reportedly removes the need to actually download content to the device itself. Instead, it "quickly understands what you need, connecting you to the right AI experience or service instantly." Collaborations with both Microsoft and OpenAI will reportedly give the AI Pin, "access to some of the world’s most powerful AI models and platforms." 

There is still much we don't know about the AI Pin, however, like how long each battery module lasts and how sensitive the anti-tamper system that locks down a "compromised" device will be. Live demonstrations of the technology have been rare to date and hands-on opportunities nearly nonexistent. Humane is hosting a debut event Thursday afternoon where, presumably, functional iterations of the AI Pin will be on display.

This article originally appeared on Engadget at https://www.engadget.com/humanes-ai-pin-costs-699-and-ships-in-early-2024-which-is-about-all-we-know-for-certain-181048809.html?src=rss

Google’s AI-powered search feature goes global with a 120-country expansion

Google's Search Generative Experience (SGE), which currently provides generative AI summaries at the top of the search results page for select users, is about to become much more widely available. Just six months after its debut at I/O 2023, the company announced Wednesday that SGE is expanding to Search Labs users in 120 countries and territories, gaining support for four additional languages and receiving a handful of helpful new features.

Unlike its frenetic rollout of the Bard chatbot in March, Google has taken a more measured approach to distributing its AI search assistant. The company began with English-language searches in the US in May, expanded to users in India and Japan in August and opened up to teen users in September. As of Wednesday, users from Brazil to Bhutan can give the feature a try. SGE now supports Spanish, Portuguese, Korean and Indonesian in addition to the existing English, Hindi and Japanese, so you'll be able to search and converse with the assistant in natural language in any of them. These features arrive on Chrome desktop Wednesday, with the Search Labs for Android app versions rolling out over the coming week.

Among SGE's new features is an improved follow-up function that lets users ask additional questions of the assistant directly on the search results page. Like a mini-Bard window tucked into the generated summary, it enables users to drill down on a subject without leaving the results page or even needing to type their queries out. Google will reportedly restrict ads to specific, clearly denoted areas of the page to avoid confusion between them and the generated content. Users can expect follow-ups to start showing up in the coming weeks. They're available only to English-language users in the US to start but will likely expand as Google continues to iterate on the technology.

SGE will also start helping to clarify ambiguous translation terms. For example, if you're trying to translate "Is there a tie?" into Spanish, the output changes depending on whether you mean a draw between two competitors ("un empate") or the tie you wear around your neck ("una corbata"). The new feature automatically recognizes such words and highlights them; clicking one pops up a window asking you to pick between the versions. This should be especially helpful in languages with grammatical gender, where, say, cars are masculine but bicycles are feminine, and you need to specify which sense you intend. Spanish is one such language, and the capability is coming first to US users for English-to-Spanish translations.
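As a rough illustration of how such a disambiguation flow might work (purely a hypothetical sketch; Google hasn't published SGE's internals, and the term table below is invented for the example):

```python
# Hypothetical sketch: mapping an ambiguous English word to candidate Spanish senses.
AMBIGUOUS_TERMS = {
    "tie": {
        "a draw between competitors": "un empate",
        "neckwear": "una corbata",
    },
}

def disambiguate(word: str, chosen_sense: str) -> str:
    """Return the translation for the sense the user picked in the pop-up."""
    senses = AMBIGUOUS_TERMS.get(word.lower())
    if senses is None:
        raise KeyError(f"{word!r} is not flagged as ambiguous")
    return senses[chosen_sense]

# The UI would highlight "tie," present both senses and apply the user's pick:
print(disambiguate("tie", "a draw between competitors"))  # un empate
```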

Finally, Google plans to expand the interactive definitions normally found in generated summaries for educational topics like science, history or economics to coding- and health-related searches as well. This update should arrive within the next month, again first for English-language users in the US before spreading to more territories in the coming months.

This article originally appeared on Engadget at https://www.engadget.com/googles-ai-powered-search-feature-goes-global-with-a-120-country-expansion-180028037.html?src=rss

NVIDIA’s Eos supercomputer just broke its own AI training benchmark record

Depending on the hardware you're using, training a large language model of any significant size can take weeks, months, even years to complete. That's no way to do business — nobody has the electricity and time to be waiting that long. On Wednesday, NVIDIA unveiled the newest iteration of its Eos supercomputer, one powered by more than 10,000 H100 Tensor Core GPUs and capable of training a 175 billion-parameter GPT-3 model on 1 billion tokens in under four minutes. That's three times faster than the previous benchmark on the MLPerf AI industry standard, which NVIDIA set just six months ago.

Eos represents an enormous amount of compute. It leverages 10,752 GPUs strung together using NVIDIA's Infiniband networking (moving a petabyte of data a second) and 860 terabytes of high-bandwidth memory (36 PB/sec of aggregate bandwidth and 1.1 PB/sec of interconnect) to deliver 40 exaflops of AI processing power. The entire cloud architecture comprises 1,344 nodes — individual servers that companies can rent access to for around $37,000 a month to expand their AI capabilities without building out their own infrastructure.
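For a sense of how those figures fit together, here's a quick sanity check using only the numbers above (a back-of-the-envelope sketch, not an official spec breakdown):

```python
# Eos cluster arithmetic from the figures NVIDIA quoted.
TOTAL_GPUS = 10_752
TOTAL_NODES = 1_344
AI_EXAFLOPS = 40   # low-precision AI compute, not FP64

gpus_per_node = TOTAL_GPUS / TOTAL_NODES
print(f"GPUs per node: {gpus_per_node:.0f}")  # 8, consistent with eight-GPU DGX-class servers

flops_per_gpu = AI_EXAFLOPS * 1e18 / TOTAL_GPUS
print(f"AI compute per GPU: ~{flops_per_gpu / 1e15:.1f} petaflops")  # ~3.7 PFLOPS
```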

In all, NVIDIA set six records in nine benchmark tests: the 3.9-minute notch for GPT-3, a 2.5-minute mark to train a Stable Diffusion model using 1,024 Hopper GPUs, a minute even to train DLRM, 55.2 seconds for RetinaNet, 46 seconds for 3D U-Net and just 7.2 seconds to train BERT-Large.

NVIDIA was quick to note that the 175 billion-parameter version of GPT-3 used in the benchmarking was not put through its full-sized training workload (nor was the Stable Diffusion model). The full GPT-3 training corpus runs to around 3.7 trillion tokens, which is just flat-out too big and unwieldy for use as a benchmarking test. For example, a full run would take 18 months on the older A100 system with 512 GPUs, though Eos would need just eight days.

So instead, NVIDIA and MLCommons, which administers the MLPerf standard, leverage a more compact version that uses 1 billion tokens (the smallest denominator unit of data that generative AI systems understand). This test uses a GPT-3 version with the same number of potential switches to flip as the full-size model (those 175 billion parameters), just a much more manageable data set (a billion tokens vs 3.7 trillion).
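In other words, the benchmark keeps the model at full size but shrinks the workload. A quick calculation using the figures above shows by how much:

```python
# How much smaller the MLPerf GPT-3 benchmark workload is than a full training run.
FULL_DATASET_TOKENS = 3.7e12   # full GPT-3 training corpus, per the article
BENCHMARK_TOKENS = 1e9         # MLPerf benchmark slice

print(f"Dataset reduction: {FULL_DATASET_TOKENS / BENCHMARK_TOKENS:,.0f}x")  # 3,700x

# Full run: ~18 months on a 512-GPU A100 system vs. ~8 days on Eos.
a100_days = 18 * 30   # approximating a month as 30 days
eos_days = 8
print(f"Eos speedup on the full run: ~{a100_days / eos_days:.0f}x")  # ~68x
```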

Granted, much of the impressive performance improvement came from the fact that this recent round of tests employed 10,752 H100 GPUs, compared with the 3,584 Hopper GPUs the company used in June's benchmarking trials. However, NVIDIA explains that despite tripling the number of GPUs, it managed to maintain a 2.8x scaling in performance — a 93 percent efficiency rate — through the generous use of software optimization.
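That efficiency figure falls straight out of the two numbers NVIDIA gave; here's the minimal check:

```python
# Scaling efficiency: measured speedup relative to the increase in GPU count.
OLD_GPUS, NEW_GPUS = 3_584, 10_752
MEASURED_SPEEDUP = 2.8   # per NVIDIA

gpu_ratio = NEW_GPUS / OLD_GPUS            # 3.0x more GPUs
efficiency = MEASURED_SPEEDUP / gpu_ratio  # fraction of ideal linear scaling
print(f"Scaling efficiency: {efficiency:.0%}")  # 93%
```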

"Scaling is a wonderful thing," Salvator said."But with scaling, you're talking about more infrastructure, which can also mean things like more cost. An efficiently scaled increase means users are "making the best use of your of your infrastructure so that you can basically just get your work done as fast [as possible] and get the most value out of the investment that your organization has made."

The chipmaker was not alone in its development efforts. Microsoft's Azure team submitted a similar 10,752 H100 GPU system for this round of benchmarking, and achieved results within two percent of NVIDIA's.

"[The Azure team have] been able to achieve a performance that's on par with the Eos supercomputer," Dave Salvator Director of Accelerated Computing Products at NVIDIA, told reporters during a Tuesday prebrief. What's more "they are using Infiniband, but this is a commercially available instance. This isn't some pristine laboratory system that will never have actual customers seeing the benefit of it. This is the actual instance that Azure makes available to its customers."

NVIDIA plans to apply these expanded compute abilities to a variety of tasks, including the company's ongoing work in foundational model development, AI-assisted GPU design, neural rendering, multimodal generative AI and autonomous driving systems.

"Any good benchmark looking to maintain its market relevance has to continually update the workloads it's going to throw at the hardware to best reflect the market it's looking to serve," Salvator said, noting that MLCommons has recently added an additional benchmark for testing model performance on Stable Diffusion tasks. "This is another exciting area of generative AI where we're seeing all sorts of things being created" — from programming code to discovering protein chains.

These benchmarks are important because, as Salvator points out, the current state of generative AI marketing can be a bit of a "Wild West." The lack of stringent oversight and regulation means "we sometimes see with certain AI performance claims where you're not quite sure about all the parameters that went into generating those particular claims." MLPerf provides the professional assurance that the benchmark numbers companies generate using its tests "were reviewed, vetted, in some cases even challenged or questioned by other members of the consortium," Salvator said. "It's that sort of peer-reviewing process that really brings credibility to these results."

NVIDIA has been steadily focusing on its AI capabilities and applications in recent months. "We are at the iPhone moment for AI," CEO Jensen Huang said during his GTC keynote in March. At that time, the company announced its DGX Cloud system, which portions out slivers of the supercomputer's processing power in instances of eight H100 or A100 GPUs, each with 80GB of VRAM (640GB of memory in total). The company expanded its supercomputing portfolio with the release of the DGX GH200 at Computex in May.

This article originally appeared on Engadget at https://www.engadget.com/nvidias-eos-supercomputer-just-broke-its-own-ai-training-benchmark-record-170042546.html?src=rss