OpenAI whistleblowers call for SEC probe into NDAs that kept employees from speaking out on safety risks

OpenAI’s NDAs are once again under scrutiny after whistleblowers penned a letter to the SEC alleging that employees were made to sign “illegally restrictive” agreements preventing them from speaking out on the potential harms of the company’s technology. The letter, which was obtained and published online by The Washington Post, accuses OpenAI of violating SEC rules meant to protect employees’ rights to report their concerns to federal authorities and prevent retaliation. It follows an official complaint that was filed with the SEC in June.

In the letter, the whistleblowers ask the SEC to “take swift and aggressive steps” to enforce the rules they say OpenAI has violated. The alleged violations include making employees sign agreements “that failed to exempt disclosures of securities violations to the SEC” and requiring employees to obtain consent from the company before disclosing confidential information to the authorities. The letter also says OpenAI’s agreements required employees to “waive compensation that was intended by Congress to incentivize reporting and provide financial relief to whistleblowers.”

In a statement to the Post, OpenAI spokesperson Hannah Wong said, “Our whistleblower policy protects employees’ rights to make protected disclosures,” and added that the company has made “important changes” to its off-boarding papers to do away with nondisparagement terms. OpenAI previously said it was fixing these agreements after it was accused this spring of threatening to claw back exiting employees’ vested equity if they didn’t sign NDAs on their way out.

According to The Washington Post, the SEC has responded to the complaint, but no details have yet been released regarding any action it is or isn’t going to take. But the whistleblowers say enforcement is of utmost importance “even if OpenAI is making reforms in light of the public disclosures of their illegal contracts.” The letter says it is necessary “not as an attack on OpenAI or to hinder the advancement of AI technology, but to send the message to others in the AI space, and to the tech industry at large, that violations on the right of employees or investors to report wrongdoing will not be tolerated.”

This article originally appeared on Engadget at https://www.engadget.com/openai-whistleblowers-call-for-sec-probe-into-ndas-that-kept-employees-from-speaking-out-on-safety-risks-171604829.html?src=rss

Apple will allow developers access to its NFC technology, avoiding an EU fine

After four years of back and forth, the European Union and Apple have finally come to an agreement on the latter's tap-and-go technology. The European Commission announced that Apple made "legally binding" commitments to give developers access to its Near-Field Communication (NFC) technology, which powers tap-and-go payments, as well as to iOS features like Face ID authentication and double-click to launch. The agreement saves Apple from facing an antitrust fine of up to 10 percent of its worldwide annual turnover — about $40 billion. 

Apple has also agreed to stipulations such as allowing users to make third-party wallets their default app. "It opens up competition in this crucial sector, by preventing Apple from excluding other mobile wallets from the iPhone's ecosystem," Margrethe Vestager, the EU's executive vice president in charge of competition policy, stated in the release. "From now on, competitors will be able to effectively compete with Apple Pay for mobile payments with the iPhone in shops. So consumers will have a wider range of safe and innovative mobile wallets to choose from." The commitments are binding for ten years, with an independent monitor ensuring Apple follows them across the European Economic Area (EEA). 

The European Commission opened its investigation into Apple in 2020, alleging that Apple was restricting rival mobile wallet developers from accessing necessary technology. Two years later, the regulatory body issued a preliminary view that Apple "abused its dominant position." 

Then, in early 2024, Apple finally offered to open up its NFC technology and report to an independent reviewer. The European Commission shared the terms publicly, encouraging Apple's rivals and other interested parties to give their opinion. The final agreement between the European Commission and Apple results from those consultations.  

The tech giant could still be on the hook for tens of billions of dollars in a different case after the European Commission issued its preliminary view that Apple violated the Digital Markets Act (DMA). The new law went into effect in March, and the European Commission soon opened an investigation into whether Apple prevented developers from telling users that they could pay less for services elsewhere. Apple currently takes a 30 percent commission on any purchases made through the App Store. The European Commission has until March 2025 to make a final ruling in the case. 

This article originally appeared on Engadget at https://www.engadget.com/apple-will-allow-developers-access-to-its-nfc-technology-avoiding-an-eu-fine-123026127.html?src=rss

Elon Musk escapes paying $500 million to former Twitter employees

The social media platform formerly known as Twitter has been at the center of multiple legal battles since the very beginning of Elon Musk's takeover. One such suit relates to the more than 6,000 employees laid off by Musk following his acquisition of the company – and his alleged failure to pay them their full severance. Yesterday, Musk notched a win over his former employees.

The case in question is a class-action lawsuit filed by former Twitter employee Courtney McMillian. The complaint argued that under the federal Employee Retirement Income Security Act (ERISA), the Twitter Severance Plan owed laid off workers three months of pay. They received less than that, and sought $500 million in unpaid severance. However, on Tuesday, US District Judge Trina Thompson in the Northern District of California granted Musk's motion to dismiss the class-action complaint.

Judge Thompson found that the Twitter severance plan did not qualify under ERISA because employees received notice of a separate payout scheme prior to the layoffs. In dismissing the case, she ruled that the severance program adopted after Musk's takeover, rather than the 2019 plan the plaintiffs were expecting, was the one that applied to these former employees.

This ruling is a setback for the thousands of dismissed Twitter staffers, but they may still have chances to win larger payments. Thompson's order noted that the plaintiffs could amend their complaint to pursue non-ERISA claims. If they do, Thompson said "this Court will consider issuing an Order finding this case related to one of the cases currently pending" against X Corp/Twitter. There are still lawsuits underway on behalf of some past top brass at Twitter, one of which is seeking $128 million in unpaid severance and another attempting to recoup about $1 million in unpaid legal fees.

This article originally appeared on Engadget at https://www.engadget.com/elon-musk-escapes-paying-500-million-to-former-twitter-employees-203813996.html?src=rss

Microsoft and Apple give up their OpenAI board seats

Microsoft has withdrawn from OpenAI's board of directors a couple of weeks after the European Commission revealed that it's taking another look at the terms of their partnership, according to the Financial Times. The company has reportedly sent OpenAI a letter, announcing that it was giving up its seat "effective immediately." Microsoft took on an observer, non-voting role within OpenAI's board following an internal upheaval that led to the firing (and eventual reinstatement) of the latter's CEO, Sam Altman. 

According to previous reports, Apple was also supposed to get an observer seat on the board following its announcement that it will integrate ChatGPT into its devices. The Times says that will no longer be the case. Instead, OpenAI will take on a new approach and hold regular meetings with key partners, including the two Big Tech companies. In the letter, Microsoft reportedly told OpenAI that it's confident in the direction the company is taking, so its seat on the board is no longer necessary. 

The company also wrote that its seat "provided insights into the board's activities without compromising its independence," but the European Commission wants to take a closer look at their relationship before deciding if it agrees. "We’re grateful to Microsoft for voicing confidence in the board and the direction of the company, and we look forward to continuing our successful partnership," an OpenAI spokesperson told The Times.

Microsoft initially invested $1 billion in OpenAI in 2019 and has since poured in more money, bringing its total investment to $13 billion. The European Commission started investigating their partnership last year to determine whether it broke the bloc's merger rules, but it ultimately concluded that Microsoft didn't gain control of OpenAI. It didn't drop the probe altogether, however — Margrethe Vestager, the commission's executive vice-president for competition policy, revealed in June that European authorities asked Microsoft for additional information regarding their agreement "to understand whether certain exclusivity clauses could have a negative effect on competitors."

The commission is looking into the Microsoft-OpenAI agreement as part of a bigger antitrust investigation. It also sent information requests to other big players in the industry that are also working on artificial intelligence technologies, including Meta, Google and TikTok. The commission intends to ensure fairness in consumer choices and to examine acqui-hires to "make sure these practices don’t slip through [its] merger control rules if they basically lead to a concentration."

This article originally appeared on Engadget at https://www.engadget.com/microsoft-and-apple-give-up-their-openai-board-seats-120022867.html?src=rss

Texas court blocks the FTC’s ban on noncompete agreements

The Federal Trade Commission's (FTC) ban on noncompete agreements was supposed to take effect on September 4, but a Texas court has postponed its implementation by siding with the plaintiffs in a lawsuit that seeks to block the rule. Back in April, the FTC banned noncompetes, which have been widely used in the tech industry for years, in a bid to drive innovation and protect workers' rights and wages. A lot of companies are unsurprisingly unhappy with the agency's rule — as NPR notes, Dallas tax services firm Ryan LLC sued the FTC hours after its announcement. The US Chamber of Commerce and other groups of American businesses eventually joined the lawsuit. 

"Noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism," FTC Chair Lina M. Khan said when the rule was announced. They prevent employees from moving to another company or from building businesses of their own in the same industry, so workers may be stuck in a job with lower pay or in an environment they don't like. But in a statement sent to Bloomberg, the Chamber of Commerce’s chief counsel Daryl Joseffer called the ban an attempt by the government to micromanage business decisions.

"The FTC’s blanket ban on noncompetes is an unlawful power grab that defies the agency’s constitutional and statutory authority and sets a dangerous precedent where the government knows better than the markets," Joseffer said. The FTC disagrees and told NPR that its "authority is supported by both statute and precedent."

US District Judge Ada Brown, an appointee of former President Donald Trump, wrote in her decision that "the text, structure, and history of the FTC Act reveal that the FTC lacks substantive rulemaking authority with respect to unfair methods of competition." Brown also said that the plaintiffs are "likely to succeed" in getting the rule struck down and that it's in the public's best interest to grant the plaintiffs' motion for preliminary injunction. The judge added that the court will make a decision "on the ultimate merits of this action on or before August 30."

This article originally appeared on Engadget at https://www.engadget.com/texas-court-blocks-the-ftcs-ban-on-noncompete-agreements-150020601.html?src=rss

Supreme Court remands social media moderation cases over First Amendment issues

Two state laws that could upend the way social media companies handle content moderation are still in limbo after a Supreme Court ruling sent the challenges back to lower courts, vacating previous rulings. In a 9-0 decision in Moody v. NetChoice and NetChoice v. Paxton, the Supreme Court said that earlier rulings in lower courts had not properly evaluated the laws’ impact on the First Amendment.

The cases stem from two state laws, from Texas and Florida, which tried to impose restrictions on social media companies’ ability to moderate content. The Texas law, passed in 2021, allows users to sue large social media companies over alleged “censorship” of their political views. The Supreme Court suspended the law in 2022 following a legal challenge. Meanwhile, the Florida measure, also passed in 2021, attempted to impose fines on social media companies for banning politicians. That law has also been on hold pending legal challenges.

Both laws were challenged by NetChoice, an industry group that represents Meta, Google, X and other large tech companies. NetChoice argued that the laws were unconstitutional and would essentially prevent large platforms from performing any kind of content moderation. The Biden Administration also opposed both laws. In a statement, NetChoice called the decision “a victory for First Amendment rights online.”

In a decision authored by Justice Elena Kagan, the court said that lower court rulings in both cases “concentrated” on the issue of “whether a state law can regulate the content-moderation practices used in Facebook’s News Feed (or near equivalents).” But, she writes, “they did not address the full range of activities the laws cover, and measure the constitutional against the unconstitutional applications.”

Essentially, the usually divided court agreed that the First Amendment implications of the laws could have broad impacts on parts of these sites unaffected by algorithmic sorting or content moderation (like direct messages, for instance) as well as on speech in general. Analysis of those externalities, Kagan wrote, simply never occurred in the lower court proceedings. The decision to remand means that analysis should take place, and the case may come back before SCOTUS in the future.

“In sum, there is much work to do below on both these cases … But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved,” Kagan wrote. 

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-remands-social-media-moderation-cases-over-first-amendment-issues-154001257.html?src=rss

Detroit police can no longer use facial recognition results as the sole basis for arrests

The Detroit Police Department has to adopt new rules curbing its reliance on facial recognition technology after the city reached a settlement this week with Robert Williams, a Black man who was wrongfully arrested in 2020 due to a false face match. It’s not an all-out ban on the technology, though, and the court’s jurisdiction to enforce the agreement extends only four years. Under the new restrictions, which the ACLU is calling the strongest such policies for law enforcement in the country, police cannot make arrests based solely on facial recognition results or conduct a lineup based only on facial recognition leads.

Williams was arrested after facial recognition technology flagged his expired driver’s license photo as a possible match for the identity of an alleged shoplifter, which police then used to construct a photo lineup. He was arrested at his home, in front of his family, which he says “completely upended my life.” Detroit PD is known to have made at least two other wrongful arrests based on the results of facial recognition technology (FRT), and in both cases, the victims were Black, the ACLU noted in its announcement of the settlement. Studies have shown that facial recognition is more likely to misidentify people of color.

The new rules stipulate that “[a]n FRT lead, combined with a lineup identification, may never be a sufficient basis for seeking an arrest warrant,” according to a summary of the agreement. There must also be “further independent and reliable evidence linking a suspect to a crime.” Police in Detroit will have to undergo training on the technology that addresses the racial bias in its accuracy rates, and all cases going back to 2017 in which facial recognition was used to obtain an arrest warrant will be audited.

In an op-ed for TIME published today, Williams wrote that the agreement means, essentially, that “DPD can no longer substitute facial recognition for basic investigative police work.”

This article originally appeared on Engadget at https://www.engadget.com/detroit-police-can-no-longer-use-facial-recognition-results-as-the-sole-basis-for-arrests-204454537.html?src=rss

EU competition chief jabs at Apple from both sides over AI delay

It's safe to say Apple and the European Commission aren't exactly bosom buddies. The two sides have been at loggerheads over Apple's compliance — or alleged lack thereof — with the European Union's Digital Markets Act (DMA), a law designed to rein in the power of major tech companies.

Apple said last week it would delay the rollout of certain features in the European Union, including Apple Intelligence AI tools, over concerns "that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security." As it turns out, the EU is not exactly happy about that decision.

The call to push back the rollout of Apple Intelligence in the EU is a "stunning, open declaration that they know 100 percent that this is another way of disabling competition where they have a stronghold already,” EU competition commissioner Margrethe Vestager said at a Forum Europa event, according to Euractiv. Vestager added that the “short version of the DMA” means companies have to be open for competition to keep operating in the region.

Not to leap to the defense of Apple here, but these comments are sure to raise an eyebrow or two, especially after Vestager also said she "was personally quite relieved that I would not get an AI-updated service on my iPhone." Apple does intend to bring Apple Intelligence to Europe more broadly, but it's taking a cautious approach in that region due to "regulatory uncertainties" and to ensure it won't have to compromise on user safety.

As it stands, the European Commission is carrying out multiple investigations into the company over possible violations of the DMA. This week, it accused Apple of violating the law's anti-steering provisions by blocking app developers from freely informing users about alternate payment options outside of the company's ecosystem. If it's found to have violated the law, Apple could be on the hook for a fine of up to 10 percent of its global annual revenue. Based on its 2023 sales, that could be a penalty of up to $38 billion. The percentage of the fine can double for repeated violations.

Earlier this year, before the DMA came into force, the European Commission fined Apple €1.8 billion ($1.95 billion) over a violation of previous anti-steering rules. According to the Commission, Apple prevented rival music streaming apps from telling users that they could pay less for subscriptions if they sign up outside of iOS apps. Apple has challenged the fine.

This article originally appeared on Engadget at https://www.engadget.com/eu-competition-chief-jabs-at-apple-from-both-sides-over-ai-delay-140022585.html?src=rss

The nation’s oldest nonprofit newsroom is suing OpenAI and Microsoft

The Center for Investigative Reporting (CIR), the nation’s oldest nonprofit newsroom and the publisher of Mother Jones and Reveal, sued OpenAI and Microsoft in federal court on Thursday for allegedly using its content to train AI models without consent or compensation. This is the latest in a long line of lawsuits filed by publishers and creators accusing generative AI companies of violating copyright.

“OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material,” said Monika Bauerlein, CEO of the Center for Investigative Reporting, in a statement. “This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it.” Bauerlein said that OpenAI and Microsoft treat the work of nonprofit and independent publishers “as free raw material for their products," and added that such moves by generative AI companies hurt the public’s access to truthful information in a “disappearing news landscape.”

OpenAI and Microsoft did not respond to a request for comment by Engadget.

The CIR’s lawsuit, which was filed in Manhattan’s federal court, accuses OpenAI and Microsoft, which owns nearly half of the company, of violating the Copyright Act and the Digital Millennium Copyright Act multiple times.

News organizations find themselves at an inflection point with generative AI. While the CIR is joining publishers like The New York Times, New York Daily News, The Intercept, AlterNet and Chicago Tribune in suing OpenAI, other publishers have chosen to strike licensing deals with the company. These deals allow OpenAI to train its models on archives and ongoing content published by these publishers and cite information from them in responses offered by ChatGPT.

On the same day the CIR sued OpenAI, for instance, TIME magazine announced a deal with the company that would grant it access to 101 years of archives. Last month, OpenAI signed a $250 million multi-year deal with News Corp, the owner of The Wall Street Journal, to train its models on more than a dozen brands owned by the publisher. The Financial Times, Axel Springer (the owner of Politico and Business Insider), The Associated Press and Dotdash Meredith have also signed deals with OpenAI.

This article originally appeared on Engadget at https://www.engadget.com/the-nations-oldest-nonprofit-newsroom-is-suing-openai-and-microsoft-174748454.html?src=rss

Supreme Court ruling may allow officials to coordinate with social platforms again

The US Supreme Court has ruled on a controversial attempt by two states, Missouri and Louisiana, to prevent Biden Administration officials and other government agencies from engaging with workers at social media companies about misinformation, election interference and other policies. Rather than set new guidelines on acceptable communication between these parties, the Court held that the plaintiffs lacked standing to bring the issue at all. 

In Murthy, the states (as well as five individual social media users) alleged that, in the midst of the COVID pandemic and the 2020 election, officials at the CDC, FBI and other government agencies "pressured" Meta, Twitter and Google "to censor their speech in violation of the First Amendment."

The Court wrote, in an opinion authored by Justice Barrett, that "the plaintiffs must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant. Here, at the preliminary injunction stage, they must show that they are likely to succeed in carrying that burden." She went on to describe this as "a tall order." 

Though a Louisiana District Court order blocking contact between social media companies and Biden Administration officials has been on hold, the case has still had a significant impact on relationships between these parties. Last year, Meta revealed that its security researchers were no longer receiving their usual briefings from the FBI or CISA (Cybersecurity and Infrastructure Security Agency) regarding foreign election interference. FBI officials had also warned that there were instances in which they discovered election interference attempts but didn’t warn social media companies due to additional layers of legal scrutiny implemented following the lawsuit. With today's ruling, it seems possible such contact might now be allowed to continue. 

In part, it seems the Court was reluctant to rule on the case because of the potential for far-reaching First Amendment implications. Among the arguments made by the Plaintiffs was an assertion of a "right to listen" theory, that social media users have a Constitutional right to engage with content. "This theory is startlingly broad," Barrett wrote, "as it would grant all social-media users the right to sue over someone else’s censorship." The opinion was joined by Chief Justice Roberts and Justices Sotomayor, Kagan, Kavanaugh and Jackson. Justice Alito dissented, and was joined by Justices Thomas and Gorsuch. 

The case was one of a handful involving free speech and social media to come before the Supreme Court this term. The court is also set to rule on two linked cases involving state laws from Texas and Florida that could upend the way social media companies handle content moderation.

This article originally appeared on Engadget at https://www.engadget.com/supreme-court-ruling-may-allow-officials-to-coordinate-with-social-platforms-again-144045052.html?src=rss