NASA’s John Mather keeps redefining our understanding of the cosmos

Space isn't hard only on account of the rocket science. The task of taking a NASA mission from development and funding through construction and launch — all before we even use the thing for science — can span decades. Entire careers have been spent putting a single satellite into space. Nobel-winning NASA physicist John Mather, mind you, has already helped send up two.

In their new book, Inside the Star Factory: The Creation of the James Webb Space Telescope, NASA's Largest and Most Powerful Space Observatory, author Christopher Wanjek and photographer Chris Gunn take readers on a behind-the-scenes tour of the James Webb Space Telescope's journey from inception to orbit, weaving examinations of the radical imaging technology that enables us to peer deeper into the early universe than ever before with profiles of the researchers, advisors, managers, engineers and technicians who made it possible through three decades of effort. In this week's Hitting the Books excerpt, a look at JWST project scientist John Mather and his own improbable journey from rural New Jersey to NASA.

[Image: the book's cover (MIT Press)]

Excerpted from “Inside the Star Factory: The Creation of the James Webb Space Telescope, NASA's Largest and Most Powerful Space Observatory” Copyright © 2023 by Chris Gunn and Christopher Wanjek. Used with permission of the publisher, MIT Press.


John Mather, Project Scientist 

— The steady hand in control 

John Mather is a patient man. His 2006 Nobel Prize in Physics was thirty years in the making. That award, for unswerving evidence of the Big Bang, was based on a bus-sized machine called COBE — yet another NASA mission that almost didn’t happen. Design drama? Been there. Navigate unforeseen delays? Done that. For NASA to choose Mather as JWST Project Scientist was pure prescience. 

Like Webb, COBE — the Cosmic Background Explorer — was to be a time machine to reveal a snapshot of the early universe. The target era was just 370,000 years after the Big Bang, when the universe was still a fog of elementary particles with no discernable structure. This is called the epoch of recombination, when the hot universe cooled to a point to allow protons to bind with electrons to form the very first atoms, mostly hydrogen with a sprinkling of helium and lithium. As the atoms formed, the fog lifted, and the universe became clear. Light broke through. That ancient light, from the Big Bang itself, is with us today as remnant microwave radiation called the cosmic microwave background. 

Tall but never imposing, demanding but never mean, Mather is a study in contrasts. His childhood was spent just a mile from the Appalachian Trail in rural Sussex County, New Jersey, where his friends were consumed by earthly matters such as farm chores. Yet Mather, whose father was a specialist in animal husbandry and statistics, was more intrigued by science and math. At age six he grasped the concept of infinity when he filled up a page in his notebook with a very large number and realized he could go on forever. He loaded himself up with books from a mobile library that visited the farms every couple of weeks. His dad worked for Rutgers University Agriculture Experiment Station and had a laboratory on the farm with radioisotope equipment for studying metabolism and liquid nitrogen tanks with frozen bull semen. His dad also was one of the earliest users of computers in the area, circa 1960, maintaining milk production records of 10,000 cows on punched IBM cards. His mother, an elementary school teacher, was quite learned, as well, and fostered young John’s interest in science.

A chance for some warm, year-round weather ultimately brought Mather in 1968 to University of California, Berkeley, for graduate studies in physics. He would fall in with a crowd intrigued by the newly detected cosmic microwave background, discovered by accident in 1965 by radio astronomers Arno Penzias and Robert Wilson. His thesis advisor devised a balloon experiment to measure the spectrum, or color, of this radiation to see if it really came from the Big Bang. (It does.) The next obvious thing was to make a map of this light to see, as theory suggested, whether the temperature varied ever so slightly across the sky. And years later, that’s just what he and his COBE team found: anisotropy, an unequal distribution of energy. These micro-degree temperature fluctuations imply matter density fluctuations, sufficient to stop the expansion, at least locally. Through the influence of gravity, matter would pool into cosmic lakes to form stars and galaxies hundreds of millions of years later. In essence, Mather and his team captured a sonogram of the infant universe. 

Yet the COBE mission, like Webb, was plagued with setbacks. Mather and the team proposed the mission concept (for a second time) in 1976. NASA accepted the proposal but, that year, declared that this satellite and most others from then on would be delivered to orbit by the Space Shuttle, which itself was still in development. History would reveal the foolishness of such a plan. Mather understood immediately. This wedded the design of COBE to the cargo bay of the unbuilt Shuttle. Engineers would need to meet precise mass and volume requirements of a vessel not yet flown. More troublesome, COBE required a polar orbit, difficult for the Space Shuttle to deliver. The COBE team was next saddled with budget cuts and compromises in COBE’s design as a result of cost overruns of another pioneering space science mission, the Infrared Astronomical Satellite, or IRAS. Still, the tedious work continued of designing instruments sensitive enough to detect variations of temperatures just a few degrees above absolute zero, about −270°C. From 1980 onward, Mather was consumed by the creation of COBE all day every day. The team needed to cut corners and make risky decisions to stay within budget. News came that COBE was to be launched on the Space Shuttle mission STS-82-B in 1988 from Vandenberg Air Force Base. All systems go.

Then the Space Shuttle Challenger exploded in 1986, killing all seven of its crew. NASA grounded Shuttle flights indefinitely. COBE, now locked to Shuttle specifications, couldn’t launch on just any other rocket system. COBE was too large for a Delta rocket at this point; ironically, Mather had the Delta in mind in his first sketch in 1974. The team looked to Europe for a launch vehicle, but this was hardly an option for NASA. Instead, the project managers led a redesign to shave off hundreds of pounds, to slim down to a 5,000-pound launch mass, with fuel, which would just make it within the limits of a Delta by a few pounds. Oh, and McDonnell Douglas had to build a Delta rocket from spare parts, having been forced to discontinue the series in favor of the Space Shuttle. 

The team worked around the clock over the next two years. The final design challenge was ... wait for it ... a sunshield that now needed to be folded into the rocket and spring-released once in orbit, a novel approach. COBE got the greenlight to launch from Vandenberg Air Force Base in California, the originally desired site because it would provide easier access to a polar orbit compared to launching a Shuttle from Florida. Launch was set for November 1989. COBE was delivered several months before. 

Then, on October 17, the California ground shook hard. A 6.9-magnitude earthquake struck Santa Cruz County, causing widespread damage to structures. Vandenberg, some 200 miles south, felt the jolt. As pure luck would have it, COBE was securely fastened only because two of the engineers minding it secured it that day before going off to get married. The instrument suffered no damage and launched successfully on November 18. More drama came with the high winds on launch day. Myriad worries followed in the first weeks of operation: the cryostat cooled too quickly; sunlight reflecting off of Antarctic ice played havoc with the power system; trapped electrons and protons in the Van Allen belts disrupted the functioning of the electronics; and so on. 

All the delays, all the drama, faded into a distant memory for Mather as the results of the COBE experiment came in. Data would take four years to compile. But the results were mind-blowing. The first result came weeks after launch, when Mather showed the spectrum to the American Astronomical Society and received a standing ovation. The Big Bang was safe as a theory. Two years later, at an April 1992 meeting of the American Physical Society, the team showed their first map. Data matched theory perfectly. This was the afterglow of the Big Bang revealing the seeds that would grow into stars and galaxies. Physicist Stephen Hawking called it “the most important discovery of the century, if not of all time.” 

Mather spoke humbly of the discovery at his Nobel acceptance speech in 2006, fully crediting his remarkable team and his colleague George Smoot, who shared the prize with him that year. But he didn’t downplay the achievement. He noted that he was thrilled with the now broader “recognition that our work was as important as people in the professional astronomy world have known for so long.” 

Mather maintains that realism today. While concerned about delays, threats of cancellation, cost overruns, and not-too-subtle animosity in the broader science community over the “telescope that ate astronomy,” he didn’t let this consume him or his team. “There’s no point in trying to manage other people’s feelings,” he said. “Quite a lot of the community opinion is, ‘well, if it were my nickel, I’d spend it differently.’ But it isn’t their nickel; and the reason why we have the nickel in the first place is because NASA takes on incredibly great challenges. Congress approved of us taking on great challenges. And great challenges aren’t free. My feeling is that the only reason why we have an astronomy program at NASA for anyone to enjoy — or complain about — is that we do astonishingly difficult projects. We are pushing to the edge of what is possible.” 

Webb isn’t just a little better than the Hubble Space Telescope, Mather added; it’s a hundred times more powerful. Yet his biggest worry through mission design was not the advanced astronomy instruments but rather the massive sunshield, which needed to unfold. All instruments and all the deployment mechanisms had redundancy engineered into them; there are two or more ways to make them work if the primary method fails. But that’s not the only issue with a sunshield. It would either work or not work. 

Now Mather can focus completely on the science to be had. He expects surprises; he’d be surprised if there were no surprises. “Just about everything in astronomy comes as a surprise,” he said. “When you have new equipment, you will get a surprise.” His hunch is that Webb might reveal something weird about the early universe, perhaps an abundance of short-lived objects never before seen that say something about dark energy, the mysterious force that seems to be accelerating the expansion of the universe, or the equally mysterious dark matter. He also can’t wait until Webb turns its cameras to Alpha Centauri, the closest star system to Earth. What if there’s a planet there suitable for life? Webb should have the sensitivity to detect molecules in its atmosphere, if present. 

“That would be cool,” Mather said. Hints of life from the closest star system? Yes, cool, indeed.

This article originally appeared on Engadget at https://www.engadget.com/inside-the-star-factory-chris-gunn-christopher-wanjek-mit-press-143046496.html?src=rss

Tesla begins Cybertruck deliveries on November 30

After slogging through years of delays and redesigns, the Tesla Cybertruck can finally be seen on public roads this holiday season, the company announced. Deliveries of the long-awaited luxury electric pickup will begin for select customers on November 30, before the vehicle enters full production next year at Tesla's Texas Gigafactory.

At the same time, the vehicle's electrical architecture is reportedly being redesigned to accommodate an 800-volt standard, up from the 400V used across Tesla's existing lineup. A lot of luxury, performance and heavy-duty EV models — from the Audi e-Tron to the GMC Hummer EV — utilize 800V architecture; it's what enables EVs with large battery capacities to charge at a higher rate (thereby reducing charging time) without reducing the vehicle's wiring harness to slag.
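To see why the jump to 800 volts matters, consider the basic electrical math: for a given charging power, doubling the pack voltage halves the current, and the heat dissipated in the cabling scales with the square of that current. The figures in the sketch below (a 250 kW charge rate and 10 milliohms of cable resistance) are illustrative assumptions, not Tesla specifications:

```python
# Back-of-the-envelope comparison of 400 V vs 800 V fast charging.
# The power and resistance figures are illustrative assumptions.

CHARGE_POWER_W = 250_000       # assumed DC fast-charge power
CABLE_RESISTANCE_OHM = 0.010   # assumed total cable/connector resistance

for pack_voltage in (400, 800):
    current = CHARGE_POWER_W / pack_voltage          # I = P / V
    cable_loss = current ** 2 * CABLE_RESISTANCE_OHM # P_loss = I^2 * R
    print(f"{pack_voltage} V pack: {current:.0f} A, "
          f"~{cable_loss / 1000:.1f} kW lost as heat in the cable")

# Doubling the voltage halves the current and cuts resistive heating
# to a quarter, which is why 800 V packs can charge faster without
# dramatically thicker wiring.
```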

"A lot of people excited about cyber truck," Musk told reporters on Wednesday's investor call. "I am too, i driven the car, it's an amazing product. I want to emphasize that there will be enormous challenges in reaching volume production with the Cyber Truck and then in making it cash-flow positive."

"This is normal for when you've got a product with a lot of new technology, or any brand new vehicle program — but especially one that is as different and advanced as the Cyber Truck," he continued."It is going to require work to reach planned production and be cash-flow positive, at a price that people can afford."

For its existing model lines, Tesla's production and deliveries are both down this quarter — off about seven percent, or roughly 30,000 units, compared to Q2 — but still significantly higher year over year, up roughly 100,000 units over 2022. The EV automaker has slashed the prices of its vehicles repeatedly this year, first in March, then again in September (taking a full 20 percent off the MSRP at the time) and once more in early October.

The Model X, for example, began 2023 retailing for $120,990 — it currently lists for $79,990. The Models S (now $74,990), Y ($52,490, down 24 percent from January) and 3 ($38,990, down 17 percent) have all seen similar price drops. In all, Tesla reports its cost of goods sold per vehicle decreased to ~$37,500 in Q3.
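As a quick sanity check on the scale of those cuts, here is the Model X arithmetic using the two list prices quoted above:

```python
# Quick check of the Model X price cut cited above
# (start-of-2023 list price vs. the current one).
start_price = 120_990
current_price = 79_990

drop = start_price - current_price
print(f"Cut: ${drop:,} (~{drop / start_price:.0%} off the January price)")
# -> Cut: $41,000 (~34% off the January price)
```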

Musk had previously explained his willingness to drop prices and endure reduced margins if it translates to increased sales volume. “I think it does make sense to sacrifice margins in favor of making more vehicles,” he said in July. 

“A sequential decline in volumes was caused by planned downtimes for factory upgrades, as discussed on the most recent earnings call. Our 2023 volume target of around 1.8 million vehicles remains unchanged,” Tesla wrote in an October press statement. The company delivered some 435,059 vehicles globally in Q3. 

The company continues to increase its investments in AI development as well, having "more than doubled" the amount of processing power it dedicates to training its vehicular and Optimus robot AI systems, compared to Q2. The Optimus itself is reportedly receiving hardware upgrades and is being trained via AI, rather than "hard-coded" software. 

Additionally, the company announced that all US and Canadian Hertz rentals will have access to the Tesla App, allowing them to use their phones as key fobs. Customers who already have a Tesla profile set up can apply those settings to their Hertz rental as well.

This article originally appeared on Engadget at https://www.engadget.com/tesla-begins-cybertruck-deliveries-on-november-30-210430697.html?src=rss

Baidu’s CEO says its ERNIE AI ‘is not inferior in any aspect to GPT-4’

ERNIE, Baidu’s answer to ChatGPT, has “achieved a full upgrade,” company CEO Robin Li told the assembled crowd at the Baidu World 2023 showcase on Tuesday, “with drastically improved performance in understanding, generation, reasoning, and memory.”

During his keynote address, Li demonstrated improvements to those four core capabilities on stage by having the AI create a multimodal car commercial in a few minutes based on a short text prompt, solve complex geometry problems and progressively iterate the plot of a short story on the spot. The fourth-gen generative AI system “is not inferior in any aspect to GPT-4,” he continued.

ERNIE 4.0 will offer an “improved” search experience resembling that of Google’s SGE, aggregating and summarizing information pulled from the wider web and distilling it into a generated response. The system will be multimodal, providing answers as text, images or animated graphs through an “interactive chat interface for more complex searches, enabling users to iteratively refine their queries until reaching the optimal answer, all in one search interface,” per the company’s press materials. What’s more, the AI will be able to recommend “highly customized” content streams based on previous interactions with the user.

Similar to ChatGPT Enterprise, ERNIE’s new Generative Business Intelligence will offer a more finely tuned and secure model trained on each client’s individual data silo. ERNIE 4.0 will also be capable of “conducting academic research, summarizing key information, creating documents, and generating slideshow presentations,” and will enable users to search and retrieve files using text and voice prompts.

Baidu is following the example set by the rest of the industry and has announced plans to put its generative AI in every app and service it can manage. The company has already integrated some of the AI’s functions into Baidu Maps, including navigation, ride hailing and hotel bookings. It is also offering “low-threshold access and productivity tools” to help individuals and enterprises develop API plugins for the Baidu Qianfan Foundation Model Platform.

Baidu had already been developing its ERNIE large language model for a number of years prior to the debut of ChatGPT in 2022, though its knowledge base is focused primarily on the Chinese market. Baidu released ERNIE Bot, its answer to ChatGPT, this March with some 550 billion facts packed into its knowledge graph, though it wasn’t until this August that it rolled out to the general public.

Baidu’s partner startups also showed off new product series that will integrate the AI’s functionality during the event, including a domestic robot, an All-in-One learning machine and a smart home speaker.

This article originally appeared on Engadget at https://www.engadget.com/baidus-ceo-says-its-ernie-ai-is-not-inferior-in-any-aspect-to-gpt-4-162333722.html?src=rss

Honda to test its Autonomous Work Vehicle at Toronto’s Pearson Airport

While many of the flashy, marquee mobility and transportation demos that go on at CES tend to be of the more... aspirational variety, Honda's electric cargo hauler, the Autonomous Work Vehicle (AWV), could soon find use on airport grounds as the robotic EV trundles towards commercial operations. 

Honda first debuted the AWV as part of its CES 2018 companion mobility demonstration, then partnered with engineering firm Black & Veatch to further develop the platform. The second-generation AWV was capable of being remotely piloted or following a preset path while autonomously avoiding obstacles. It could carry nearly 900 pounds of stuff onboard and tow another 1,600 pounds behind it, both on-road and off-road. Those second-gen prototypes spent countless hours ferrying building materials back and forth across a 1,000-acre solar panel construction worksite, both individually and in teams, as part of the development process.

This past March, Honda unveiled the third-generation AWV with a higher carrying capacity, higher top speed, bigger battery and better obstacle avoidance. On Tuesday, Honda revealed that it is partnering with the Greater Toronto Airports Authority to test its latest AWV at the city's Pearson Airport. 

The robotic vehicles will begin their residencies by driving the perimeters of airfields, using mounted cameras and an onboard AI, checking fences and reporting any holes or intrusions. The company is also considering testing the AWV as a FOD (foreign object debris) tool to keep runways clear, as an aircraft component hauler, people mover or baggage cart tug. 
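On the software side, that patrol role boils down to a loop of capture, inspect and report. Here's a purely hypothetical sketch of what such a loop could look like; the function names, waypoints and detection logic are placeholders, not Honda's actual AWV software:

```python
# Hypothetical sketch of a perimeter-inspection loop for an autonomous
# work vehicle. All functions and values here are placeholders; this is
# not Honda's AWV software.
import time

WAYPOINTS = [(0, 0), (100, 0), (100, 50), (0, 50)]  # assumed fence corners (metres)

def capture_frame(position):
    """Placeholder for grabbing an image from the mounted cameras."""
    return {"position": position, "pixels": None}

def detect_fence_anomaly(frame):
    """Placeholder for the onboard model that flags holes or intrusions."""
    return None  # return a description string when something is found

def report_to_operations(position, anomaly):
    """Placeholder for relaying a finding back to airport operations."""
    print(f"ALERT at {position}: {anomaly}")

def patrol_once():
    for waypoint in WAYPOINTS:
        # drive_to(waypoint) would be handled by the vehicle's autonomy stack
        frame = capture_frame(waypoint)
        anomaly = detect_fence_anomaly(frame)
        if anomaly:
            report_to_operations(waypoint, anomaly)
        time.sleep(0.1)  # stand-in for travel time between waypoints

if __name__ == "__main__":
    patrol_once()
```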

The AWV is just a small part of Honda's overall electrification efforts. The automaker is rapidly shifting its focus from internal combustion to e-motors, with plans to release a fully electric mid-size SUV and nearly a dozen EV motorcycle models by 2025, and to develop an EV sedan with Sony. Most importantly, however, the Motocompacto is making a comeback.

This article originally appeared on Engadget at https://www.engadget.com/honda-to-test-its-autonomous-work-vehicle-at-torontos-pearson-airport-153025911.html?src=rss

Hitting the Books: Voice-controlled AI copilots could lead to safer flights

Siri and Alexa were only the beginning. As voice recognition and speech synthesis technologies continue to mature, the days of typing on keyboards to interact with the digital world around us could be coming to an end — and sooner than many of us anticipated. Where today's virtual assistants exist on our mobile devices and desktops to provide scripted answers to specific questions, the LLM-powered generative AI copilots of tomorrow will be there, and everywhere else too. This is the "voice-first" future Tobias Dengel envisions in his new book, The Sound of the Future: The Coming Age of Voice Technology.

Using a wide-ranging set of examples and applications in everything from marketing, sales and customer service to manufacturing and logistics, Dengel walks the reader through how voice technologies can revolutionize the ways in which we interact with the digital world. In the excerpt below, Dengel discusses how voice technology might expand its role in the aviation industry, even after the disastrous outcome of its early use in the Boeing 737 MAX.

[Image: the book's cover, black text on a white background with a multicolored stylized waveform (PublicAffairs)]

Excerpted from THE SOUND OF THE FUTURE: The Coming Age of Voice Technology by Tobias Dengel with Karl Weber. Copyright © 2023. Available from PublicAffairs, an imprint of Hachette Book Group, Inc.


REDUCING THE BIGGEST RISKS: MAKING FLYING SAFER

Some workplaces involve greater risks than others. Today’s technology-driven society sometimes multiplies the risks we face by giving ordinary people control over once-incredible amounts of power, in forms that range from tractor trailers to jet airplanes. People carrying out professional occupations that involve significant risks on a daily basis will also benefit from the safety edge that voice provides — as will the society that depends on these well-trained, highly skilled yet imperfect human beings.

When the Boeing 737 MAX airliner was rolled out in 2015, it featured a number of innovations, including distinctive split-tip winglets and airframe modifications that affected the jet’s aerodynamic characteristics. A critical launch goal for Boeing was to enable commercial pilots to fly the new plane without needing new certifications, since retraining pilots is very expensive for airlines. To achieve that goal, the airliner’s software included an array of ambitious new features, including many intended to increase safety by taking over control from the crew in certain situations. These included something called the Maneuvering Characteristics Augmentation System (MCAS), which was supposed to compensate for an excessive nose-up angle by adjusting the horizontal stabilizer to keep the aircraft from stalling — a complicated technical “hack” implemented by Boeing to avoid the larger cost involved in rewriting the program from the ground up.

The 737 MAX was a top seller right out of the gate. But what Boeing and its airline customers hadn’t realized was that the software was being asked to do things the pilots didn’t fully understand. As a result, pilots found themselves unable to interface in a timely fashion with the complex system in front of them. The ultimate result was two tragic crashes with 346 fatalities, forcing the grounding of the 737 MAX fleet and a fraud settlement that cost Boeing some $2.5 billion. Additional losses from cancelled aircraft orders, lowered stock value, and other damages have been estimated at up to $60 billion. 

These needless losses — financial and human — were caused, in large part, by small yet fatal failures of cockpit communication between people and machines. The pilots could tell that something serious was wrong, but the existing controls made it difficult for them to figure out what that was and to work with the system to correct the problem. As a result, in the words of investigative reporter Peter Robison, “the pilots were trying to retake control of the plane, so that the plane was pitching up and down over several minutes.” Based on his re-creation of what happened, Robison concludes, “it would have been terrifying for the people on the planes.”

When voice becomes a major interface in airliner cockpits, a new tool for preventing such disasters will be available. In traditional aviation, pilots receive commands like “Cleared Direct Casanova VOR” or “Intercept the ILS 3” via radio from dispatchers at air traffic control. After the pilots get this information, they must use their eyes and hands to locate and press a series of buttons to transmit the same commands to the aircraft. In a voice-driven world, that time-wasting, error-prone step will be eliminated. In the first stage of voice adoption, pilots will simply be able to say a few words without moving their eyes from the controls around them, and the plane will respond. According to Geoff Shapiro, a human factors engineer at the former Rockwell Collins Advanced Technology Center, this shift trims the time spent when entering simple navigational commands from half a minute to eight seconds — a huge improvement in circumstances when a few moments can be critical. In the second stage, once veteran pilots have recognized and accepted the power of voice, the plane will automatically follow the spoken instructions from air traffic control, merely asking the pilot to confirm them.
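As a purely illustrative sketch of that "second stage" (a spoken ATC instruction is transcribed, parsed and then executed only after the pilot confirms it), the flow might look something like the following. Every function name here is a placeholder, not any avionics vendor's real interface:

```python
# Hypothetical confirm-before-execute flow for a voice-enabled cockpit,
# loosely following the "second stage" described in the excerpt.
# All names are placeholders; this is not a real avionics API.

def transcribe(audio):
    """Placeholder automatic speech recognition step."""
    return "cleared direct casanova vor"

def parse_instruction(text):
    """Placeholder natural-language-understanding step."""
    return {"action": "direct_to", "fix": "CASANOVA VOR"}

def pilot_confirms(instruction):
    """Placeholder for a spoken 'confirm' from the pilot."""
    answer = input(f"Confirm {instruction['action']} {instruction['fix']}? (yes/no) ")
    return answer.strip().lower() == "yes"

def handle_atc_transmission(audio):
    instruction = parse_instruction(transcribe(audio))
    if pilot_confirms(instruction):
        # the flight management system would act on the instruction here
        print(f"Executing: {instruction}")
    else:
        print("Instruction rejected; no change to flight plan.")

if __name__ == "__main__":
    handle_atc_transmission(audio=None)
```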

A voice-interface solution integrating the latest capabilities of voice-driven artificial intelligence can improve airline safety in several ways. It gives the system self-awareness and the ability to proactively communicate its state and status to pilots, thereby alerting them to problems even at moments when they might otherwise be distracted or inattentive. Using increasingly powerful voice-technology tools like automatic speech recognition and natural language understanding, it also allows the airplane’s control systems to process and act on conversational speech, making the implementation of pilot commands faster and more accurate than ever. It facilitates real-time communications linking the cockpit, air traffic control, the airline carrier, and maintenance engineers to remove inconsistencies in communication due to human indecision or misjudgment. In the near future, it may even be able to use emerging voice-tech tools such as voice biometrics and real-time sentiment analysis to determine stress levels being experienced by pilots — information that could be used to transmit emergency alerts to air traffic controllers and others on the ground.

Voice technology won’t eliminate all the traditional activities pilots are trained to perform. But in critical moments when the speed of response to messages from a control tower may spell the difference between survival and disaster, the use of a voice interface will prevent crashes and save lives. This is not a fantasy about the remote future. Today’s planes have all the electronics needed to make it possible. 

One field of aviation in which safety risks are especially intense is military flying. It’s also an arena in which voice-enabled aviation is being avidly pursued. Alabama-based Dynetics has received $12.3 million from DARPA, the Pentagon’s storied defense-technology division, to develop the use of AI in “high-intensity air conflicts.” The third phase of the current three-phase research/implementation program involves a “realistic, manned-flight environment involving complex human-machine collaboration” — including voice communication.

The US Air Force is not alone in pursuing this technological advantage. The next generation of the MiG-35, the highly advanced Russian fighter jet, will apparently feature a voice assistant to offer advice in high-pressure situations. Test pilot Dmitry Selivanov says, “We call her Rita, the voice communicant. Her voice remains pleasant and calm even if fire hits the engine. She does not talk all the time, she just makes recommendations if the plane approaches some restrictions. Hints are also provided during combat usage.”

Voice-controlled flying is also in development for civilian aircraft. Companies like Honeywell and Rockwell are designing voice interfaces for aviation, with an initial focus on reducing pilot workload around tedious tasks involving basic, repetitive commands like “Give me the weather at LAX and any critical weather en route.” More extensive and sophisticated use cases for voice tech in aviation are steadily emerging. Vipul Gupta is general manager of Honeywell Aerospace Avionics. He and his team are deeply focused on perfecting the technology of the voice cockpit, especially its response speed, which is a crucial safety feature. Their engineers have reduced the voice system’s average response time to 250 milliseconds, which means, in effect, that the system can react more quickly than a human pilot can.

Over time, voice-controlled aircraft systems will become commonplace in most forms of aviation. But in the short term, the most important use cases will be in general aviation, where single-pilot operators are notoriously overloaded, especially when operating in bad weather or congested areas. Having a “voice copilot” will ease those burdens and make the flying experience safer for pilot and passengers alike.

Voice-controlled aircraft are also likely to dominate the emerging field of urban air mobility, which involves the use of small aircraft for purposes ranging from cargo deliveries to sightseeing tours within city and suburban airspaces. New types of aircraft, such as electric vertical takeoff and landing aircraft (eVTOLs) are likely to dominate this domain, with the marketplace for eVTOLs expected to explode from nothing in 2022 to $1.75 billion in 2028. As this new domain of flight expands, experienced pilots will be in short supply, so the industry is now designing simplified cockpit systems, controlled by voice, that trained “operators” will be able to manage.

Vipul Gupta is bullish about the future of the voice-powered cockpit. “Eventually,” he says, “we’ll have a voice assistant where you will just sit in [the aircraft] and the passenger will say, ‘Hey, fly me there, take me there. And then the system does it.’”

As a licensed pilot with significant personal experience in the cockpit, I suspect he will be right — eventually. As with most innovations, I believe it will take longer than the early adopters and enthusiasts believe. This is especially likely in a critical field like aviation, in which human trust issues and regulatory hurdles can take years to overcome. But the fact is that the challenges of voice-powered flight are actually simpler in many ways than those faced by other technologies, such as autonomous automobiles. For example, a plane cruising at 20,000 feet doesn’t have to deal with red lights, kids dashing into the street, or other drivers tailgating.

For this reason, I concur with the experts who say that we will have safe, effective voice-controlled planes sooner than autonomous cars. And once the technology is fully developed, the safety advantages of a system that can respond to spoken commands almost instantly in an emergency will be too powerful for the aviation industry to forgo.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-the-sound-of-the-future-tobias-dengel-publicaffairs-143020776.html?src=rss

Starlink’s satellite cell service is set to launch in 2024, but only for SMS

The launch of Starlink's much-anticipated satellite cellular service, Direct-to-Cell, will reportedly begin rolling out for SMS in 2024, according to a newly published promotional site by the company. Eventually the system will "enable ubiquitous access to texting, calling, and browsing wherever you may be on land, lakes, or coastal waters," and connect to IoT devices through the LTE standard.

Starlink has partnered with T-Mobile on the project, which was originally announced last August at the "Coverage Above and Beyond" event. The collaboration sees T-Mobile setting aside a bit of its 5G spectrum for use by Starlink's second-generation satellites; Starlink in turn will allow T-Mobile phones to access the satellite network, giving the cell service provider "near complete coverage" of the United States.

During the event last August, SpaceX CEO Elon Musk tweeted that "Starlink V2" would launch this year on select mobile phones, as well as in Tesla vehicles. “The important thing about this is that it means there are no dead zones anywhere in the world for your cell phone,” Musk said in a press statement at the time. “We’re incredibly excited to do this with T-Mobile.” That estimate was revised during a March panel discussion at the Satellite Conference and Exhibition 2023, when SpaceX VP of Starlink enterprise sales Jonathan Hofeller estimated testing — not commercial operation — would begin in 2023.

The existing constellation of 4,265 satellites is not compatible with the new cell service, so Starlink will have to launch a whole new series of microsats with the necessary eNodeB modem installed over the next few years. As more of those satellites are launched, the added voice and data features will become available.

As a messaging-only satellite service, Direct-to-Cell will immediately find competition from Apple, with its Emergency SOS via Satellite feature on the iPhone 14, as well as Qualcomm's rival Snapdragon Satellite, which delivers texts to Android phones from orbit using the Iridium constellation. Competition is expected to be fierce in this emerging market, Lynk Global CEO Charles Miller noted during the March event, arguing that satellite cell service could potentially be the "biggest category in satellite."

This article originally appeared on Engadget at https://www.engadget.com/starlinks-satellite-cell-service-is-set-to-launch-in-2024-but-only-for-sms-215036124.html?src=rss

You can now generate AI images directly in the Google Search bar

Back in the olden days of last December, we had to go to specialized websites to have our natural language prompts transformed into generated AI art, but no longer! Google announced Thursday that users who have opted in to its Search Generative Experience (SGE) will be able to create AI images directly from the standard Search bar.

SGE is Google’s vision for our web searching future. Rather than picking websites from a returned list, the system will synthesize a (reasonably) coherent response to the user’s natural language prompt using the same data that the list’s links led to. Thursday’s updates are a natural expansion of that experience, simply returning generated images (using the company’s Imagen text-to-picture AI) instead of generated text. Users type in a description of what they’re looking for (a Capybara cooking breakfast, in Google’s example) and, within moments, the engine will create four alternatives to pick from and refine further. Users will also be able to export their generated images to Drive or download them.

[Image: a user in front of a computer, asking it to generate a picture of a capybara making breakfast (Google)]

What’s more, users will be able to generate images directly in Google Images. So, if you’re looking for (again, Google’s example) “minimalist halloween table settings” or “spooky dog house ideas,” you’ll be able to type that into the search bar and have Google generate an image based on it. What’s really cool is that you can then turn Google Lens on that generated image to search for actual, real-world products that most closely resemble what the computer hallucinated for you. 

There are, of course, a number of limitations built into the new features. You’ll have to be signed up for Google Labs and have opted in to the SGE program to use any of this. Additionally, the new image generation functions will be available only within the US, in English-language applications and for users over the age of 18. That last requirement is just a bit odd given that Google did just go out of its way to make the program accessible to teens.

The company is also expanding its efforts to rein in the misuse of generative AI technology. Users will be blocked from creating photorealistic images of human faces. You want a photorealistic capybara cooking bacon, that’s no problem. You want a photorealistic Colonel Sanders cooking bacon, you’re going to run into issues and not just in terms of advertising canon. You’ll also be prevented from generating images of “notable” people, so I guess Colonel Sanders is out either way. 

Finally, Google is implementing the SynthID system developed by DeepMind and announced last month. SynthID is a visually undetectable watermark that denotes a generated image as such, as well as provides information on who, or what, created it and when. The new features will be available through the Labs tab (click the flask icon) in the Google app on iOS and Android, and on Chrome desktop, rolling out to select users today and expanding to more in the coming weeks.

This article originally appeared on Engadget at https://www.engadget.com/you-can-now-generate-ai-images-directly-in-the-google-search-bar-160020809.html?src=rss

California’s ‘right to repair’ bill is now California’s ‘right to repair’ law

California became just the third state in the nation to pass a "right to repair" consumer protection law on Tuesday, following Minnesota and New York, when Governor Gavin Newsom signed SB 244. The California Right to Repair bill had originally been introduced in 2019. It passed, nearly unanimously, through the state legislature in September. 

“This is a victory for consumers and the planet, and it just makes sense,” Jenn Engstrom, state director of CALPIRG, told iFixit (which was also one of SB244's co-sponsors). “Right now, we mine the planet’s precious minerals, use them to make amazing phones and other electronics, ship these products across the world, and then toss them away after just a few years’ use ... We should make stuff that lasts and be able to fix our stuff when it breaks, and now thanks to years of advocacy, Californians will finally be able to, with the Right to Repair.”

Turns out Google isn't offering seven years of replacement parts and software updates to the Pixel 8 out of the goodness of its un-beating corporate heart. The new law directly stipulates that all electronics and appliances costing $50 or more, and sold within the state after July 1, 2021 (yup, two years ago), will be covered under the legislation once it goes into effect next year, on July 1, 2024. 

For gear and gadgets that cost between $50 and $99, device makers will have to stock replacement parts and tools, and maintain documentation for three years. Anything over $100 in value gets covered for the full seven-year term. Companies that fail to do so will be fined $1,000 per day on the first violation, $2,000 a day for the second and $5,000 per day per violation thereafter.
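Condensed into code, the coverage tiers and escalating penalty schedule described above look roughly like this (a simplified sketch of the rules as summarized here, not the statute's actual text):

```python
# Sketch of SB 244's support terms and penalty schedule as summarized
# above; a simplified illustration, not legal language.

def required_support_years(price_usd):
    """Years of parts, tools and documentation the law requires."""
    if price_usd < 50:
        return 0   # devices under $50 aren't covered
    if price_usd < 100:
        return 3   # $50-$99: three years
    return 7       # $100 and up: the full seven-year term

def daily_fine(violation_number):
    """Per-day fine for the nth violation."""
    if violation_number == 1:
        return 1_000
    if violation_number == 2:
        return 2_000
    return 5_000

print(required_support_years(79))    # -> 3
print(required_support_years(599))   # -> 7
print(daily_fine(3))                 # -> 5000
```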

There are, of course, carve outs and exceptions to the rules. No, your PS5 is not covered. Not even that new skinny one. None of the game consoles are, neither are alarm systems or heavy industrial equipment that "vitally affects the general economy of the state, the public interest, and the public welfare." 

“I’m thrilled that the Governor has signed the Right to Repair Act into law," State Senator Susan Talamantes Eggman, one of the bill's co-sponsors, said. "As I’ve said all along, I’m so grateful to the advocates fueling this movement with us for the past six years, and the manufacturers that have come along to support Californians’ Right to Repair. This is a common sense bill that will help small repair shops, give choice to consumers, and protect the environment.”

The bill even received support from Apple, of all companies. The tech giant famous for its "walled garden" product ecosystem had railed against the idea when it was previously proposed in Nebraska, claiming the state would become "a mecca for hackers." However, the company changed its tune when SB 244 was being debated, writing a letter of support reportedly stating, "We support 'SB 244' because it includes requirements that protect individual users' safety and security as well as product manufacturers' intellectual property."

This article originally appeared on Engadget at https://www.engadget.com/californias-right-to-repair-bill-is-now-californias-right-to-repair-law-232526782.html?src=rss

Adobe’s next-gen Firefly 2 offers vector graphics, more control and photorealistic renders

Just seven months after its beta debut, Adobe's Firefly generative AI is set to receive a trio of new models as well as more than 100 new features and capabilities, company executives announced at the Adobe Max 2023 event on Tuesday. The Firefly Image 2 model promises higher-fidelity generated images and more granular controls for users, while the Vector model will allow graphic designers to rapidly generate vector images, a first for the industry. The Design model, for generating print and online advertising layouts, offers another first: text-to-template generation.

Adobe is no stranger to using machine learning in its products. The company released its earliest commercial AI, Sensei, in 2016. Firefly is built atop the Sensei system and offers image and video editors a whole slew of AI tools and features, from "text to color enhancement" saturation and hue adjustments to font and design element generation, and even creating and incorporating background music into video scenes on the fly. The generative AI suite is available across Adobe's product ecosystem, including Premiere Pro, After Effects, Illustrator, Photoshop and Express, as well as on all subscription levels of the Creative Cloud platform (yes, even the free one).

[Image: Firefly Image 2 output compared side by side against the original model (Adobe)]

Firefly Image 2 is the updated version of the existing text-to-image system. Like its predecessor, this one is trained exclusively on licensed and public domain content to ensure that its output images are safe for commercial use. It also accommodates text prompts in any of 100 languages. 

[Image: Firefly Image 1 vs. Image 2 renderings of a brightly colored blue-red bird (Adobe)]

Adobe's AI already works across modalities, from still images, video and audio to design elements and font effects. As of Tuesday, it also generates vector art thanks to the new Firefly Vector model. Currently available in beta, this new model will also offer Generative Match, which will recreate a given artistic style in its output images. This will enable users to stay within the bounds of a brand's guidelines and quickly spin up new designs using existing images and their aesthetics, as well as generate seamless, tileable fill patterns and vector gradients.

The final model, Design, is geared heavily toward advertising and marketing professionals for use in generating print and online copy templates using Adobe Express. Users will be able to generate images in Firefly, then port them to Express for use in a layout generated from the user's natural language prompt. Those templates can be generated in any of the popular aspect ratios and are fully editable through conventional digital methods.

[Image: "rainbow aura" fashion show render (Adobe)]

The Firefly web application will also receive three new features. Generative Match, as above, maintains consistent design aesthetics across images and assets. Photo Settings will generate more photorealistic images (think: visible, defined pores) and enable users to tweak images using photography metrics like depth of field, blur and field of view; the system's depictions of plant foliage will reportedly also improve under this setting. Prompt Guidance will even rewrite whatever hackneyed prose you came up with into something it can actually work from, reducing the need for the wholesale regeneration of prompted images.

This article originally appeared on Engadget at https://www.engadget.com/adobes-next-gen-firefly-2-offers-vector-graphics-more-control-and-photorealistic-renders-160030349.html?src=rss

ElevenLabs is building a universal AI dubbing machine

After Disney releases a new film in English, the company will go back and localize it in as many as 46 global languages to make the movie accessible to as wide an audience as possible. This is a massive undertaking, one for which Disney has an entire division — Disney Character Voices International Inc. — to handle the task. And it's not like you're getting Chris Pratt back in the recording booth to dub his GotG III lines in Icelandic and Swahili — each version sounds a little different given the local voice actors. But with a new "AI dubbing" system from ElevenLabs, we could soon get a close recreation of Pratt's voice, regardless of the language spoken on-screen.

ElevenLabs is an AI startup that offers a voice cloning service, allowing subscribers to generate nearly identical vocalizations with AI based on a few minutes' worth of uploaded audio samples. Not wholly surprising: as soon as the feature was released in beta, it was immediately exploited to impersonate celebrities, sometimes even without their prior knowledge and consent.

The new AI dubbing feature does essentially the same thing — in more than 20 different languages including Hindi, Portuguese, Spanish, Japanese, Ukrainian, Polish and Arabic — but legitimately, and with permission. This tool is designed for use by media companies, educators and internet influencers who don't have Disney Money™ to fund their global adaptation efforts.

ElevenLabs asserts that the system will be able to not only translate "spoken content to another language in minutes" but also generate new spoken dialog in the target language using the actor's own voice — or, at least, an AI-generated recreation of it. The system is even reportedly capable of maintaining the "emotion and intonation" of the existing dialog and transferring that over to the generated translation.
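Conceptually, that kind of dubbing pipeline chains three familiar steps: transcription, machine translation and voice-cloned speech synthesis. The sketch below is a generic illustration of that chain; the function names are placeholders and this is not ElevenLabs' actual API:

```python
# Generic sketch of an AI dubbing pipeline: transcribe the original
# audio, translate the transcript, then synthesize it in a cloned voice.
# All functions are placeholders; this is not ElevenLabs' API.

def transcribe(audio_path: str) -> str:
    """Placeholder speech-to-text step (a real system would also capture timing and emotion)."""
    return "placeholder transcript of the original dialog"

def translate(text: str, target_language: str) -> str:
    """Placeholder machine-translation step."""
    return text  # pretend the text has been translated

def synthesize_in_cloned_voice(text: str, voice_profile: str, language: str) -> bytes:
    """Placeholder text-to-speech step using a cloned voice profile."""
    return b""  # audio bytes in a real system

def dub(audio_path: str, voice_profile: str, target_language: str) -> bytes:
    transcript = transcribe(audio_path)
    translated = translate(transcript, target_language)
    return synthesize_in_cloned_voice(translated, voice_profile, target_language)

if __name__ == "__main__":
    dubbed_audio = dub("scene_12.wav", voice_profile="lead_actor", target_language="is")
    print(f"Generated {len(dubbed_audio)} bytes of dubbed audio")
```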

 "It will help audiences enjoy any content they want, regardless of the language they speak," ElevenLabs CEO Mati Staniszewski said in a press statement. "And it will mean content creators can easily and authentically access a far bigger audience across the world."

This article originally appeared on Engadget at https://www.engadget.com/elevenlabs-is-building-a-universal-ai-dubbing-machine-130053504.html?src=rss