OpenAI hit with another privacy complaint over ChatGPT’s love of making stuff up

OpenAI has been hit with a privacy complaint in Austria by an advocacy group called NOYB, which stands for None Of Your Business. The complaint alleges that the company’s ChatGPT bot repeatedly provided incorrect information about a real individual (who for privacy reasons is not named in the complaint), as reported by Reuters. This may breach EU privacy rules.

The chatbot allegedly spat out incorrect birthdate information for the individual, instead of just saying it didn’t know the answer to the query. Like politicians, AI chatbots like to confidently make stuff up and hope we don’t notice. This phenomenon is called a hallucination. However, it’s one thing when these bots make up ingredients for a recipe and another thing entirely when they invent stuff about real people.

The complaint also indicates that OpenAI refused to help delete the false information, responding that it was technically impossible to make that kind of change. The company did offer to filter or block the data on certain prompts. OpenAI’s privacy policy says that users who notice the AI chatbot has generated “factually inaccurate information” about them can submit a “correction request,” but the company says it “may not be able to correct the inaccuracy in every instance,” as reported by TechCrunch.

This is bigger than just one complaint, as the chatbot’s tendency toward making stuff up could run afoul of the region’s General Data Protection Regulation (GDPR), which governs how personal data can be used and processed. EU residents have rights regarding personal information, including a right to have false data corrected. Failure to comply with these regulations can accrue serious financial penalties, up to four percent of global annual turnover in some cases. Regulators can also order changes to how information is processed.

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals,” Maartje de Graaf, NOYB data protection lawyer, said in a statement. “If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

The complaint also raised concerns about transparency on OpenAI’s part, suggesting that the company doesn’t disclose where the data it generates about individuals comes from or whether that data is stored indefinitely. This is of particular importance when it comes to data about private individuals.

Again, this is a complaint by an advocacy group, and EU regulators have yet to comment one way or the other. However, OpenAI has acknowledged in the past that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers.” NOYB has asked the Austrian Data Protection Authority to investigate the issue.

The company is facing a similar complaint in Poland, where the local data protection authority began investigating ChatGPT after a researcher was unable to get OpenAI’s help correcting false personal information. That complaint accuses OpenAI of several breaches of the EU’s GDPR with regard to transparency, data access rights and privacy.

There’s also Italy. The Italian data protection authority investigated ChatGPT and OpenAI and concluded that it believes the company has violated the GDPR in various ways, including through ChatGPT’s tendency to make up fake stuff about people. The chatbot was actually banned in Italy before OpenAI made certain changes to the software, like new warnings for users and the option to opt out of having chats used to train the algorithms. Though the ban has been lifted, the Italian investigation into ChatGPT continues.

OpenAI hasn’t responded to this latest complaint, but did respond to the regulatory salvo issued by Italy’s DPA. “We want our AI to learn about the world, not about private individuals,” the company wrote. “We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people.”

Google asks court to reject the DOJ’s lawsuit that accuses it of monopolizing ad tech

Google filed a motion on Friday in a Virginia federal court asking for the Department of Justice’s antitrust lawsuit against it to be thrown out. The DOJ sued Google in January 2023, accusing the company of monopolizing digital advertising technologies through “anticompetitive and exclusionary conduct.” Per Bloomberg, Google is now seeking summary judgment to avoid the case going to trial in September as planned.

Attorney General Merrick B. Garland said at the time the lawsuit was first announced that Google “has used anticompetitive, exclusionary, and unlawful conduct to eliminate or severely diminish any threat to its dominance over digital advertising technologies.” The lawsuit alleges that Google controls digital advertising tools to such an extent that it “pockets on average more than 30 percent of the advertising dollars that flow through its digital advertising technology products,” according to a press release from the agency last year.

Google now argues that the DOJ hasn’t shown that the company controls at least 70 percent of the market, the threshold some previous cases have used to qualify as a monopoly, and that the agency “made up markets specifically for this case” by excluding major competitors like social media platforms, according to Bloomberg. The company also claims the DOJ’s case goes “beyond the boundaries of antitrust law,” Reuters reports.

Grindr sued for allegedly sharing users’ HIV status and other info with ad companies

Grindr has been sued for allegedly sharing personal information with advertising companies without users' consent. A lawsuit filed in London claims that the data included HIV statuses and test dates, ethnicity and sexual orientation, Bloomberg reports.

According to the class action-style suit, the alleged data sharing involved adtech companies Localytics and Apptimize. Grindr is said to have supplied the companies with user info before April 2018 and then between May 2018 and April 2020. Engadget has asked Grindr for comment.

In April 2018, Grindr admitted it had shared HIV data with Apptimize and Localytics following an investigation by BuzzFeed News and Norwegian non-profit SINTEF. It said it would stop the practice.

This isn't the only time Grindr has been accused of sharing users' personal information. A 2022 report from The Wall Street Journal indicated that precise location data on Grindr users was up for sale for at least three years. In addition, Norway's data protection agency fined Grindr $6 million in 2021 for violating the European Union's General Data Protection Regulation. The agency said Grindr had unlawfully shared "personal data with third parties for marketing purposes."

Google fired 28 workers who protested Israeli government cloud contract

Google has fired 28 employees involved in protests against the company's "Project Nimbus" cloud contract with the Israeli government, according to an internal memo seen by The Verge. That follows the arrest and suspension of nine employees on April 16 and a previous firing related to the same project last month. 

Some of the fired workers were forcibly removed after occupying the office of Google Cloud CEO Thomas Kurian. Google head of global security Chris Rackow said that the company "will not tolerate" such incidents and warned that it could take further action.

"If you’re one of the few who are tempted to think we’re going to overlook conduct that violates our policies, think again," he told employees in a letter. "The company takes this extremely seriously, and we will continue to apply our longstanding policies to take action against disruptive behavior — up to and including termination."

The letter continued: "Behavior like this has no place in our workplace and we will not tolerate it. It clearly violates multiple policies that all employees must adhere to — including our Code of Conduct and Policy on Harassment, Discrimination, Retaliation, Standards of Conduct, and Workplace Concerns."

However, the "No Tech for Apartheid" group organizing the protests called the dismissals "a flagrant act of retaliation." It called Google's suggestion that the protests largely involve people who don't work at the company "insulting," adding that the push to drop Project Nimbus is supported by "thousands" of their colleagues.

"In the three years that we have been organizing against Project Nimbus, we have yet to hear from a single executive about our concerns,” it wrote in a Medium post. "Google workers have the right to peacefully protest about terms and conditions of our labor. These firings were clearly retaliatory.”

Netflix true crime documentary may have used AI-generated images of a real person

Netflix has been accused of using AI-manipulated imagery in the true crime documentary What Jennifer Did, Futurism has reported. Several photos show typical signs of AI trickery, including mangled hands and strange artifacts. If accurate, the report raises serious questions about the use of such images in documentaries, particularly since the person depicted is currently in prison awaiting retrial.

In one egregious image, the left hand of the documentary's subject Jennifer Pan is particularly mangled, while another image shows a strange gap in her cheek. Netflix has yet to acknowledge the report, but the images show clear signs of manipulation and were never labeled as AI-generated.

The AI may have generated the imagery based on real photos of Pan, as PetaPixel suggested. However, the resulting output could be seen as prejudicial rather than as a neutral presentation of the facts of the case.

A Canadian court of appeal ordered Pan's retrial because the trial judge didn't present the jury with enough options, the CBC reported. 

One critic, journalist Karen K. Ho, said that the Netflix documentary is an example of the “true crime industrial complex” catering to an “all-consuming and endless” appetite for violent content. Netflix’s potential use of AI-manipulated imagery as a storytelling tool may reinforce that argument.

Regulators in the US, Europe and elsewhere have enacted laws on the use of AI, but so far there appear to be no specific laws governing the use of AI-generated images or video in documentaries or other content.

Apple claims Epic is trying to ‘micromanage’ its business operations in a new court filing

Last month, Epic Games filed a motion asking a California judge to hold Apple in contempt for what it claims are violations of a 2021 injunction relating to the company’s App Store practices. Now, Apple is asking the judge to reject Epic’s request, alleging in a new filing spotted by Reuters that the motion is an attempt to “micromanage Apple’s business operations in a way that would increase Epic’s profitability.”

The original injunction by US District Judge Yvonne Gonzalez Rogers required Apple to let developers provide an option for external payment methods, which would allow them to avoid fees of up to 30 percent on App Store and in-app purchases. Apple introduced new App Store guidelines for developers in January that do allow linking to external websites for purchasing alternatives, but the new rules also require they get Apple’s approval to do so and impose a commission of 12-27 percent for these transactions. Per Reuters, Epic argued that this makes alternative payment options “commercially unusable.”

Epic also said at the time that Apple’s “so-called compliance is a sham,” and accused the company of violating the injunction with its recent moves. Apple maintains that it has acted in compliance with the injunction, stating in the new filing, “The purpose of the Injunction is to make information regarding alternative purchase options more readily available, not to dictate the commercial terms on which Apple provides access to its platform, tools and technologies, and userbase.”
