OpenAI’s policy no longer explicitly bans the use of its technology for ‘military and warfare’

Until just a few days ago, OpenAI's usage policies page explicitly stated that the company prohibited the use of its technology for "military and warfare" purposes. That line has since been deleted. As first noticed by The Intercept, the company updated the page on January 10 "to be clearer and provide more service-specific guidance," as the changelog states. It still prohibits the use of its large language models (LLMs) for anything that can cause harm, and it warns people against using its services to "develop or use weapons." However, the company has removed the language pertaining to "military and warfare."

While we've yet to see its real-life implications, this change in wording comes just as military agencies around the world are showing an interest in using AI. "Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” Sarah Myers West, a managing director of the AI Now Institute, told the publication. 

The explicit mention of "military and warfare" in the list of prohibited uses indicated that OpenAI couldn't work with government agencies like the Department of Defense, which typically offers lucrative deals to contractors. At the moment, the company doesn't have a product that could directly kill or cause physical harm to anybody. But as The Intercept said, its technology could be used for tasks like writing code and processing procurement orders for things that could be used to kill people. 

When asked about the change in its policy wording, OpenAI spokesperson Niko Felix told the publication that the company "aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs." Felix explained that "a principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts," adding that OpenAI "specifically cited weapons and injury to others as clear examples." However, the spokesperson reportedly declined to clarify whether prohibiting the use of its technology to "harm" others included all types of military use outside of weapons development. 

In a statement to Engadget, an OpenAI spokesperson admitted that the company is already working with the US Department of Defense. "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property," the spokesperson said. "There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions."

Update, January 14, 2024, 10:22AM ET: This story has been updated to include a statement from OpenAI.

Google changes its Play Store policy to allow more real-money games

There may be a lot more real-money gaming (RMG) apps available in the Google Play Store before the year ends — at least in certain locations. Google first started allowing apps that deal with real money in its store back in 2021, but only if they fell under a game type regulated by the government. Now, the company has announced that it's tweaking its rules to allow more "game types and operators not covered by an existing licensing framework." That will open the Play Store to games that aren't popular or widely played enough for local governments to have created laws around them.

Google says it has conducted several pilot programs in different parts of the world since 2021 to determine how to support more real-money game operators and game types. In India, for instance, its pilot tests included apps offering Rummy card games and Daily Fantasy Sports. The company will start enforcing its new policy in India, Mexico and Brazil on June 30. After that, operators that took part in its pilot programs can release their current applications — and other types of real-money gaming apps — like any other developer, so long as they comply with local laws.

The company said it plans to expand the availability of RMG apps to other regions in the future, but clarified that its age requirements for accessing those games will remain in place. Developers will also still be required to geo-fence their products so that they're only available where they're legal. Google also revealed that it's "evolving its service fee model" for real-money gaming apps to "help sustain the Android and Play ecosystems." As 9to5Google notes, RMG apps currently can't use Google Play billing, but that's likely set to change if the company intends to take a cut of developers' earnings.

Meta reportedly laid off 60 technical program managers at Instagram

When Mark Zuckerberg announced last year that Meta was laying off 10,000 workers, he described 2023 as a "year of efficiency" defined by removing layers of middle management to create a "leaner org." It turns out the company still isn't done restructuring. According to Business Insider, Meta recently told at least 60 of its employees at Instagram that it's eliminating their positions altogether. The affected employees are technical program managers, the people who act as a bridge between Meta's technical workers, including its engineers, and its higher-level product managers.

Based on posts on Blind, an app for tech employees, and on LinkedIn seen by the publication, the workers losing their jobs are being given the chance to interview for product manager positions. By March, those who choose to leave or aren't given a new role will no longer have a job with Meta. The company slashed 11,000 jobs in the fall of 2022, in addition to the 10,000 workers it laid off last year, in an effort to cut costs. It also issued a hiring freeze and closed thousands of open roles it had originally been hiring for.

"A leaner org will execute its highest priorities faster. People will be more productive, and their work will be more fun and fulfilling," Zuckerberg said last year. It's unclear if Meta has already lifted its hiring freeze, but it's expected to do so only after it's done with restructuring. 

Valve’s new guidelines will allow for more AI content in games

Valve has introduced new rules that will allow it to publish more games with AI content on its Steam platform. To start with, it's updating its content survey form so that developers can describe how they use artificial intelligence in their games. If they used AI tools to generate art, code, sound or any other kind of content for their title, developers must ensure that it doesn't include anything illegal or anything that infringes on someone else's copyright. Valve says it will evaluate each game and check whether the developer has submitted truthful information.

For live-generated AI content, developers have to tell the company what kind of guardrails they've put up to prevent their games from creating anything considered illegal. And since Valve will not be able to review all content games create in real time, it's launching a new system on Steam that will allow players to easily send in a report. If a player sees anything they believe should've been caught by appropriate guardrails, they can use Steam's new in-game overlay to notify the company.

Valve said it will also be transparent with gamers about what kind of AI content a developer's title contains by including the disclosure on its Steam store page. The company explained that the new rules are the result of an improved "understanding of the landscape and risks" in the AI space. Last year, Valve admitted that it was still "working through" how to account for AI content in its review process after developers complained that the company was rejecting their submissions. It needed "some time to learn about the fast-moving and legally murky space of AI technology," Valve clarified in its new post. The company said it still can't release games with live-generated adult sexual content right now, but it will revisit its rules as it learns more about the technology and as the legal issues surrounding it evolve.

Hyundai shows off its updated S-A2 air taxi at CES 2024

Hyundai has debuted its new air taxi concept, the S-A2, at CES 2024 in Las Vegas. The electric vertical takeoff and landing (eVTOL) vehicle is a follow-up to the S-A1 model the company introduced at the same event back in 2020. Hyundai still envisions the S-A2 as an everyday transportation solution for urban areas, one that could get passengers from point A to point B a lot more quickly than if they'd traveled by car or bus and had to contend with traffic.

The S-A2 has a cruising speed of 120mph upon reaching an altitude of 1,500 feet and was designed to fly short trips of between 25 and 40 miles; at that cruising speed, a 30-mile trip would take about 15 minutes in the air. It has eight rotors and an electric propulsion architecture that the company says can operate "as quietly as a dishwasher," unlike loud traditional helicopters. Inside, the vehicle has seats for a pilot and four passengers, along with lighting that provides visual cues, such as where to enter and exit. For safety, it has redundant components in areas like its powertrain and flight controls, which can take over if the primary ones malfunction.

Hyundai's air mobility company Supernal is hoping to achieve commercial aviation safety levels and enter the market with an eVTOL vehicle by 2028. We might see future versions of the concept at CES events before then — or after, if the company has to adjust its timeline. If and when Supernal does make it to market, it intends to use Hyundai's mass production capabilities to manufacture its eVTOLs and keep the business cost-effective.

(Image: A silver and blue aircraft. Credit: Hyundai)

The ASUS AirVision M1 is a wearable display for multi-taskers

ASUS has introduced quite a lengthy list of products at CES 2024 in Las Vegas, including high-tech eyewear called the AirVision M1. It's not really a competitor to the upcoming Apple Vision Pro or the mixed reality headsets other companies debuted at the event, though. The AirVision M1 is a wearable display that can generate multiple virtual screens, supposedly so that users can juggle several tasks at once. It's equipped with an FHD (1,920 x 1,080) Micro OLED display with a 57-degree vertical perspective field of view.

The system offers three degrees of freedom, and users can pin several screens wherever they want in the aspect ratio they prefer, whether that's 16:9, 21:9 or 32:9. They can do so through the glasses' touchpad on the left temple, which can also be used to adjust brightness and activate 3D mode. The device also comes with built-in noise-canceling microphones and speakers.

While it may sound like the AirVision M1 could be a good companion for people who need to bring their work with them when they travel, it's not a standalone wearable: it has to be connected to a PC or a phone via USB-C to work. ASUS has yet to reveal how much it costs and when it'll be available, but its specs and capabilities suggest it'll cost a fraction of the price of Apple's Vision Pro.

Google apps are coming to select Ford, Nissan and Lincoln vehicles in 2024

Google has teamed up with more automakers to offer vehicles that come pre-installed with Google apps, the company revealed today at CES 2024 in Las Vegas. Nissan, Ford and Lincoln are rolling out select models with built-in Google Maps, Assistant and Play Store — among other applications — this year, while Porsche is expected to follow suit in 2025. They're the latest additions to the growing list of auto brands embracing tighter Google integration, which includes Honda, Volvo, Polestar, Chevrolet, GMC, Cadillac and Renault.

The company has also announced new features for cars with built-in Google apps. One of those features, rolling out today, is the ability to send trips users have planned in their Android or iOS Google Maps app to their cars. That way, they'll no longer need to plug multi-stop trips into their car's Google Maps again after meticulously plotting them on their phones. In addition, Chrome is making its way to select Polestar and Volvo cars today as part of a beta release, allowing users to browse websites and even access their bookmarks while parked. The browser will be available for more cars later this year.

Google is also adding PBS KIDS and Crunchyroll to its list of in-car apps to give users and their kids access to more entertainment content. And to give drivers a quick way to keep an eye on changing weather conditions, Google's built-in apps for cars now include The Weather Channel's. It will provide hourly forecasts, as well as alerts and a "Trip View" radar on the dashboard, so drivers no longer have to check their phones. Finally, Google announced that it's expanding the availability of its digital car key to select Volvo cars soon, allowing owners to unlock, lock and even start their cars with their Android phones.

OpenAI admits it’s impossible to train generative AI without copyrighted materials

OpenAI and its biggest backer, Microsoft, are facing several lawsuits accusing them of using other people's copyrighted works without permission to train the former's large language models (LLMs). And based on what OpenAI told the House of Lords Communications and Digital Select Committee, we might see more lawsuits against the companies in the future. It would be "impossible to train today's leading AI models without using copyrighted materials," OpenAI wrote in its written evidence (PDF) submission for the committee's inquiry into LLMs, as first reported by The Guardian.

The company explained that this is because copyright today "covers virtually every sort of human expression — including blog posts, photographs, forum posts, scraps of software code, and government documents." It added that "[l]imiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today's citizens." OpenAI also insisted that it complies with copyright laws when it trains its models. In a blog post published in response to The New York Times' lawsuit, it said the use of publicly available internet materials to train AI falls under the fair use doctrine.

It admitted, however, that there is "still work to be done to support and empower creators." The company pointed to the ways it lets publishers block its GPTBot web crawler from accessing their websites. It also said that it's developing additional mechanisms that allow rightsholders to opt out of training, and that it's engaging with them to find mutually beneficial agreements.
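For context, the blocking mechanism OpenAI documents relies on the standard robots.txt protocol. Per the company's published guidance, a publisher that wants to keep GPTBot away from its entire site can add rules along these lines to its robots.txt file (the blanket Disallow shown here is just one option):

    User-agent: GPTBot
    Disallow: /

Publishers can also scope the Disallow rule to specific paths if they only want to shield part of a site from crawling.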

In some of the lawsuits filed against OpenAI and Microsoft, the plaintiffs accuse the companies of refusing to pay authors for their work while building a billion-dollar industry and enjoying enormous financial gain from copyrighted materials. A more recent case, filed by a pair of non-fiction authors, argues that the companies could've explored alternative financing options, such as profit sharing, but "decided to steal" instead.

OpenAI didn't address those particular lawsuits, but it did provide a direct answer to The New York Times' complaint, which accuses it of using the paper's published news articles without permission. The publication isn't telling the full story, it said. OpenAI was already negotiating with The Times over a "high-value partnership" that would have given it access to the publication's reporting. The two parties were apparently still in touch until December 19, and OpenAI says it only found out about the lawsuit on December 27 by reading about it in The Times.

In its complaint, the newspaper cited instances of ChatGPT providing users with "near-verbatim excerpts" from paywalled articles. OpenAI accused the publication of intentionally manipulating prompts, such as including lengthy excerpts of articles in its interactions with the chatbot, to get it to regurgitate content. It also accused The Times of cherry-picking examples from many attempts. OpenAI said the lawsuit has no merit, but that it remains hopeful for a "constructive partnership" with the publication.

Duolingo lays off contractors as it starts relying more on AI

Duolingo has cut 10 percent of its contractors and is using AI tools to handle some of the tasks they used to do, Bloomberg reports. "We just no longer need as many people to do the type of work some of these contractors were doing," a spokesperson told the news organization, without saying exactly what they did for the company. "Part of that could be attributed to AI."

As Bloomberg notes, Chief Executive Officer Luis von Ahn told shareholders in November that the company is using AI to create new content, such as scripts, "dramatically faster." Duolingo also relies on AI to generate the voices users hear in-app. The company previously released customer-facing AI features, as well. Last year, it introduced a premium tier called Duolingo Max that gives subscribers access to a chatbot that can explain why their responses were correct or incorrect. Another Max feature called Roleplay lets subscribers practice their language skills in made-up scenarios, like ordering food in a Parisian cafe. 

The rise of modern generative AI over the past couple of years has brought society's fears of losing jobs to technology to the surface. In this case, no full-timers were affected by the job reductions, and the spokesperson said the cuts aren't a sign that the company is outright replacing its workers with artificial intelligence. Many of the company's full-time employees and contractors apparently use AI tools to accomplish certain tasks in their work.
