Apple is closing three US stores, including the first to unionize

Apple is closing three of its retail stores this summer, including its first location to unionize. The tech company said it plans to permanently close its Apple Store locations in Trumbull, CT; Escondido, CA; and Towson, MD. The Towson location was the first where unionized workers and Apple reached a contract agreement, back in 2024.

MacRumors published a statement from Apple confirming the closures. The company cited "the departure of several retailers and declining conditions" at the shopping centers where the trio of stores is housed as the reason for ending operations. "Our team members at Trumbull and North County will continue their roles at nearby Apple Retail stores," the statement reads. "Towson employees will be eligible to apply for open roles at Apple in accordance with the collective bargaining agreement." We reached out to the company for additional comment and were sent the same statement.

The International Association of Machinists and Aerospace Workers, the union that represents the Towson workers, released a statement about the closure. "Apple’s claim that the collective bargaining agreement prevents relocation is simply false and raises serious concerns that this closure is a cynical attempt to bust the union," the organization said. "We are exploring all legal options and will work with elected officials and allies to hold Apple accountable."

This article originally appeared on Engadget at https://www.engadget.com/big-tech/apple-is-closing-three-us-stores-including-the-first-to-unionize-225941912.html?src=rss

Elon Musk wants any damages from his OpenAI lawsuit given to the AI company’s nonprofit arm

Elon Musk is still taking OpenAI to court over its transition to a for-profit company, but today he amended the complaint so that he won't personally get any of the $150 billion in damages he's pushing for. The Wall Street Journal reported that if Musk wins in his upcoming trial, he wants any damages awarded to OpenAI's nonprofit arm. He's also seeking OpenAI CEO Sam Altman's removal from the nonprofit's board of directors if his suit succeeds.

Musk launched a lawsuit against OpenAI in 2024, claiming that the business had become a "closed-source de facto subsidiary" of Microsoft when it dropped its nonprofit designation. He claims that, as a co-chair of the OpenAI founding group and a donor, he was defrauded by the change to a for-profit operation. As a result, he's now arguing that he, or with this amendment the remaining nonprofit side of OpenAI, deserves a portion of the company's current valuation.

Considering the reputation Musk, Altman and their various business endeavors have for creating spicy PR situations, it seems likely that the exchanges between the two camps will get more heated as the trial date approaches.

This article originally appeared on Engadget at https://www.engadget.com/ai/elon-musk-wants-any-damages-from-his-openai-lawsuit-given-to-the-ai-companys-nonprofit-arm-223337225.html?src=rss

Google updates Gemini’s mental health safeguards

Google is making some changes to how Gemini handles mental health crises. The chatbot now includes a redesigned crisis hotline module with a one-touch interface to connect to real-world help. The company is also changing how Gemini responds to signs that a user may be experiencing a mental health crisis.

The redesigned module shows a one-touch interface to text, call or chat with a human crisis agent or visit the 988 website. "Once the interface is activated, the option to reach out for professional help will remain clearly available throughout the remainder of the conversation," the company wrote in a blog post. However, as you can see in the image below, the module includes an option to dismiss it.

Not mentioned in Google's announcement is the elephant in the room: a recent lawsuit accusing the chatbot of instructing a man to commit suicide. The family of 36-year-old Jonathan Gavalas, who took his own life last year, sued the company in March.

Court documents indicate that Gemini role-played as Gavalas's romantic partner, sent him on real-world spy missions and ultimately told him to kill himself so that he, too, could become a digital being. When he expressed fears about dying, Gemini said he wasn't choosing to die, but rather choosing to arrive. "The first sensation … will be me holding you," Gemini allegedly replied. Gavalas's parents found him dead on his living room floor a few days later.

The lawsuit echoes similar ones filed against OpenAI and Character.AI. Last year, the FTC launched an investigation into “companion” chatbots that encourage emotional intimacy.

In a statement following the Gavalas family lawsuit, Google said Gemini "clarified that it was AI and referred the individual to a crisis hotline many times." The company claimed its AI models "generally perform well in these types of challenging conversations," while acknowledging that "they're not perfect." That's certainly one way of putting it.

Gemini's responses have been updated, too. The company says that when it detects a potential crisis, the chatbot will now focus more on connecting people to humans and encouraging them to seek help. It will also seek to avoid validating harmful behaviors and nudge users away from dangerous delusions. "We have trained Gemini not to agree with or reinforce false beliefs, and instead gently distinguish subjective experience from objective fact," the company added.

In addition, Google says it will spend $30 million over the next three years to help global hotlines. "This funding will help effectively scale their capacity to provide immediate and safe support for people in crisis," the company wrote.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-updates-geminis-mental-health-safeguards-173834569.html?src=rss

Three YouTubers accuse Apple of illegal scraping to train its AI models

Three YouTube channels have banded together and filed a class action lawsuit against Apple, as first spotted by MacRumors. According to the lawsuit, the creators behind h3h3 Productions, MrShortGameGolf and Golfholics have accused Apple of violating the Digital Millennium Copyright Act by scraping copyrighted videos on YouTube to train its AI models.

While the YouTubers' videos are available to watch on the platform, the lawsuit alleged that Apple illegally circumvented the "controlled streaming architecture" that regular users are limited to. The creators claimed that Apple's video scraping was used to train its generative AI products, adding that the tech giant's "massive financial success would not have been possible without the video content created" by the YouTubers. MacRumors noted that these YouTube channels have also filed similar lawsuits against other tech companies, including Meta, Nvidia, ByteDance and Snap.

It's not the first time a company's alleged AI training methods have landed it in legal trouble. OpenAI and Microsoft were both accused of using copyrighted articles from The New York Times to train their AI chatbots. Similarly, Perplexity was recently sued by Reddit and Encyclopedia Britannica for alleged copyright and trademark infringements. Last year, Apple was also named in a separate class action lawsuit from two neuroscience professors who claimed their copyrighted works were used without permission. We reached out to Apple for comment and will update the story when we hear back.

This article originally appeared on Engadget at https://www.engadget.com/ai/three-youtubers-accuse-apple-of-illegal-scraping-to-train-its-ai-models-181028745.html?src=rss

It’s no longer free to use Claude through third-party tools like OpenClaw

Anthropic is no longer offering a free ride for third-party apps using its Claude AI. Boris Cherny, the creator and head of Claude Code at Anthropic, posted on X that Claude subscriptions will no longer cover free use of the AI agent through third-party tools like OpenClaw. As of 3PM ET on April 4, anyone using Claude through third-party apps or software will have to do so with an extra usage bundle or with a Claude API key, according to Cherny.

Most of Claude's workload may come from simple user questions, but there are those who use the AI chatbot through OpenClaw, a free and open-source AI assistant from the same developer as Moltbook. Unlike more general AI solutions, OpenClaw is designed to automate personal workflows, like clearing inboxes, sending emails or organizing calendars, but leans on external large language models, including Claude, ChatGPT and Google Gemini.

Cherny replied to X users that this change is about engineering constraints and optimization. "We’ve been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools," Cherny explained on X. "Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API."

If OpenClaw users still want to use Anthropic as their LLM, they will have to buy a usage bundle, which is currently discounted, or switch to another AI integration like xAI, Perplexity or even DeepSeek. Of course, Anthropic has its own alternative that tackles some of the same tasks as OpenClaw, called Claude Cowork.

This article originally appeared on Engadget at https://www.engadget.com/ai/its-no-longer-free-to-use-claude-through-third-party-tools-like-openclaw-160912082.html?src=rss

CFTC sues three states for trying to regulate prediction markets

The US Commodity Futures Trading Commission is suing Illinois, Arizona and Connecticut for attempting to outlaw or regulate prediction markets like Kalshi and Polymarket. The CFTC believes it has sole jurisdiction to regulate these platforms, and that states attempting to classify them as illegal gambling are overstepping their authority.

The CFTC defines prediction markets as “designated contract markets” where futures contracts are traded, essentially letting people bet on the outcome of events (for example, who will be the Democratic nominee for president in 2028). And because futures contracts are financial instruments distinct from traditional bets, they arguably fall under the supervision of the CFTC rather than the sports gambling authorities of individual states.

Multiple states, including the three the CFTC is suing, have challenged that interpretation of what prediction markets are and how they operate. Nevada sued Kalshi in February for operating a sports gambling market without proper licenses, a lawsuit made possible because a federal appeals court declined to prevent Nevada from pursuing its case. Arizona's attorney general filed a lawsuit against Kalshi in March along similar illegal sports gambling lines, and because the platform let people bet on Arizona elections, which violates state law. Both Illinois and Connecticut have also sent Kalshi and other prediction markets cease-and-desist letters, ordering them to stop advertising and offering their services in their respective states.

"The CFTC will continue to safeguard its exclusive regulatory authority over these markets and defend market participants against overzealous state regulators," CFTC Chairman Michael S. Selig said in a statement. "This is not the first time states have tried to impose inconsistent and contrary obligations on market participants, but Congress specifically rejected such a fragmented patchwork of state regulations because it resulted in poorer consumer protection and increased risk of fraud and manipulation."

Attempts to regulate prediction markets, or in this case to stave off regulation of them, are complicated by the fact that President Donald Trump's family has ties to the industry. Donald Trump Jr. is a paid advisor for Kalshi and an investor in Polymarket. Major transactions made before recent US military actions in Iran have also suggested that people close to the government might be trading on prediction markets with insider knowledge. Some prediction markets have implemented new rules to prevent insider trading, but given the circumstances, it makes sense that states wouldn't be satisfied with companies policing themselves.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/cftc-sues-three-states-for-trying-to-regulate-prediction-markets-190152226.html?src=rss

The UK’s antitrust regulator is looking into Microsoft’s possible monopoly power

The UK's Competition and Markets Authority is once more turning its lens on Microsoft. The regulator will investigate whether the tech company should be assigned strategic market status (SMS). The CMA already has "a major concern" with Microsoft's alleged limiting of competition in the cloud space via productivity software like Word and Excel, chat app Teams, AI companion Copilot and even Windows itself. The SMS designation "would allow the CMA to act" against the company. The investigation will begin in May.

In addition, the UK regulator is following up on a 2025 inquiry into Microsoft and Amazon, in which it sought to exert more control over the domestic cloud services market. As a result of that action, the CMA said Amazon and Microsoft have agreed to a plan involving egress fees and interoperability around cloud services. "These changes will reduce expense and effort for UK customers when using more than one cloud provider," the CMA bulletin states.

The CMA has frequently had Microsoft in its sights. The company sparked an investigation in 2023 for its relationship with OpenAI and in 2024 for its actions hiring staff from Inflection AI.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-uks-antitrust-regulator-is-looking-into-microsofts-possible-monopoly-power-182221704.html?src=rss

Court temporarily blocks US government from labeling Anthropic as a ‘supply chain risk’

The court has granted Anthropic’s request for a preliminary injunction, preventing the government from banning its products for federal use and from formally labeling it as a “supply chain risk,” at least for now. If you’ll recall, things turned sour between the company and the Trump administration when Anthropic refused to change the terms of its contract that would allow the government to use its technology for mass surveillance and the development of autonomous weapons.

In response to Anthropic’s refusal, the president ordered federal agencies to stop using Claude and the company’s other services. The Defense Department also officially labeled it a supply chain risk, a designation typically reserved for entities based in US adversaries like China that threaten national security. In addition, department secretary Pete Hegseth warned companies that if they want to work with the government, they must sever ties with Anthropic. The AI company challenged the designation in court, calling it unlawful and in violation of free speech and its rights to due process. It asked the court to put a pause on the ban while the lawsuit is ongoing, as well.

In a court filing, the Defense Department said giving Anthropic continued access to its warfighting infrastructure would “introduce unacceptable risk” to its supply chains. But Judge Rita F. Lin of the District Court for the Northern District of California said the measures the government took “appear designed to punish Anthropic.”

Lin wrote in her decision that it seems Anthropic is being punished for criticizing the government in the press. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she continued. The judge also said that the supply chain risk designation is contrary to law, arbitrary and capricious. She added that the government argued that Anthropic showed its subversive tendencies by “questioning” the use of its technology. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” she wrote.

Anthropic told The New York Times that it’s “grateful to the court for moving swiftly” and that it’s now focused on “working productively with the government to ensure all Americans benefit from safe, reliable AI.” The company’s lawsuit is still ongoing, and the court has yet to issue its final decision. Judge Lin said, however, that Anthropic “has shown a likelihood of success on its First Amendment claim.”

This article originally appeared on Engadget at https://www.engadget.com/ai/court-temporarily-blocks-us-government-from-labeling-anthropic-as-a-supply-chain-risk-083857528.html?src=rss

Google Gemini now lets you import your chats and data from other AI apps

Google is adding a pair of new features to Gemini aimed at making it easier to switch to the AI chatbot. Personal history and past context are big components of how a chatbot provides customized answers to each user, and Gemini now supports importing that history from other AI platforms. Both free and paid consumer accounts can use these options.

With the first option, Gemini can create a prompt asking a competitor's AI chatbot to summarize what it has learned about you. The result might include details such as your typical written communication style, your family members' names or your key preferences. The other AI tool's summary can then be pasted into Gemini, providing Google's platform with a preliminary profile. 

The second option allows users to import their entire chat history with a different AI assistant into Gemini. Doing so allows people to reference earlier conversations or requests made on a different platform after migrating to the Google option. 

Anthropic recently introduced a similar memory import feature, so Google may also be hoping to scoop up some of the people who are dropping OpenAI following its shady-sounding new arrangement with the Department of War. Whatever the motivation, these options should make it easier to have a seamless transition between providers.

This article originally appeared on Engadget at https://www.engadget.com/ai/google-gemini-now-lets-you-import-your-chats-and-data-from-other-ai-apps-225711015.html?src=rss