EU finds Microsoft violated antitrust laws by bundling Teams

It has been nearly a year since the European Commission opened its investigation into Microsoft, and there's finally a preliminary finding. The European Union's executive body announced its "view" that the tech giant violated antitrust laws by tying Microsoft Teams to its Office 365 and Microsoft 365 business suites. Last October, Microsoft unbundled Teams for users in the European Union and Switzerland, but the European Commission's Statement of Objections calls that move "insufficient."

The European Commission used its statement to detail its concern "that Microsoft may have granted Teams a distribution advantage by not giving customers the choice whether or not to acquire access to Teams when they subscribe to their SaaS productivity applications. This advantage may have been further exacerbated by interoperability limitations between Teams' competitors and Microsoft's offerings. The conduct may have prevented Teams' rivals from competing, and in turn innovating, to the detriment of customers in the European Economic Area."

Microsoft faces a fine of up to 10 percent of its annual worldwide turnover if the EU confirms its preliminary findings, so it's no surprise the company is being cordial. "Having unbundled Teams and taken initial interoperability steps, we appreciate the additional clarity provided today and will work to find solutions to address the Commission's remaining concerns," said Brad Smith, Vice Chair and President of Microsoft, in a statement shared with Engadget.

This ordeal began in 2020 when Slack (now owned by Salesforce) filed an antitrust complaint against Microsoft, claiming it broke the EU's competition rules by bundling Teams with its productivity suites. In April 2023, Microsoft declared its intention to offer Teams on its own (albeit without a clear plan), but the European Commission still formally opened an investigation just three months later. Following October's unbundling, Microsoft announced this past April that Teams would be available separately from Microsoft 365 and Office 365 to customers worldwide; existing users could also switch plans.

The European Commission's Statement of Objections also mentions Alfaview, the maker of a rival video-conferencing app, which filed a grievance similar to Slack's in July 2023; the Commission notes it has open proceedings based on that complaint as well.

Amazon reportedly thinks people will pay up to $10 per month for next-gen Alexa

We've known for a while that Amazon is planning to soup up Alexa with generative AI features. While the company says it has been integrating the technology into various aspects of the voice assistant, it's also working on a more advanced version of Alexa that it plans to charge users to access. Amazon has reportedly dubbed the higher tier "Remarkable Alexa" (let's hope it doesn't stick with that name for the public rollout).

According to Reuters, Amazon is still determining pricing and a release date for Remarkable Alexa, but it has mooted a fee of between roughly $5 and $10 per month for consumers to use it. Amazon is also said to have been urging its workers to have Remarkable Alexa ready by August, perhaps so it's able to discuss the details at its usual fall Alexa and devices event.

This will mark the first major revamp of Alexa since Amazon debuted the voice assistant alongside Echo speakers a decade ago. The company is now in a position where it's trying to catch up with the likes of ChatGPT and Google Gemini. Amazon CEO Andy Jassy, who pledged that the company was working on a “more intelligent and capable Alexa” in an April letter to shareholders, has reportedly taken a personal interest in the overhaul. Jassy noted last August that every Amazon division had generative AI projects in the pipeline.

"We have already integrated generative AI into different components of Alexa, and are working hard on implementation at scale — in the over half a billion ambient, Alexa-enabled devices already in homes around the world — to enable even more proactive, personal, and trusted assistance for our customers," said an Amazon spokeswoman told Reuters. However, the company has yet to deploy the more natural-sounding and conversational version of Alexa it showed off last September.

Remarkable Alexa is said to be capable of handling complex prompts, such as composing and sending an email and ordering dinner, all from a single command. Deeper personalization is another focus, and Amazon reportedly expects that consumers will use it for shopping advice, as with its Rufus assistant.

Upgraded home automation capability is said to be a priority too. According to the report, Remarkable Alexa may be able to gain a deeper understanding of user preferences, so it might learn to turn on the TV to a specific show. It may also learn to turn on the coffee machine when your alarm clock goes off (though it's already very easy to set this up through existing smart home systems).

Alexa has long been an unprofitable endeavor for Amazon — late last year, it laid off several hundred people who were working on the voice assistant. It's not a huge surprise that the company would try to generate more revenue from Remarkable Alexa (which, it's claimed, won't be offered as a Prime benefit). Users might need to buy new devices with more powerful tech inside so that Remarkable Alexa can run on them properly.

In any case, $10 (or even $5) per month for an upgraded voice assistant seems like a hard sell, especially when the current free version of Alexa can already handle a wide array of tasks. 

Humane is said to be seeking a $1 billion buyout after only 10,000 orders of its terrible AI Pin

It emerged recently that Humane was trying to sell itself for as much as $1 billion after its confuddling, expensive and ultimately pretty useless AI Pin flopped. A New York Times report that dropped on Thursday shed a little more light on the company's sales figures and, like the wearable AI assistant itself, the details are not good.

By early April, around the time that many devastating reviews of the AI Pin were published, Humane is said to have received around 10,000 orders for the device. That's a far cry from the 100,000 it was hoping to ship this year, and about 9,000 more than I thought it might get. It's hard to think it picked up many more orders beyond those initial 10,000 after critics slaughtered the AI Pin.

At a price of $700 (plus a mandatory $24 per month for 4G service), those 10,000 orders put Humane's initial revenue at a maximum of about $7.24 million ($7 million in hardware plus the first month of subscription fees), not accounting for canceled orders. And yet Humane wants a buyer for north of $1 billion after taking a swing and missing so hard it practically knocked out the umpire.

HP is reportedly one of the companies that Humane was in talks with over a potential sale, with discussions starting only a week or so after the reviews came out. Any buyer that does take the opportunity to snap up Humane's business and tech might be picking up something of a poisoned chalice. Not least because the company this week urged its customers to stop using the AI Pin's charging case over a possible “fire safety risk.”

Ex-Meta engineer sues company, claiming he was fired over handling of Palestine content

Ferras Hamad, a former engineer on Meta's machine learning team, has sued the company, accusing it of firing him over his handling of Palestine-related Instagram posts. According to Reuters, he accuses the company of discrimination, wrongful termination and a pattern of bias against Palestinians. Hamad said he noticed procedural irregularities in how the company handled restrictions on content from Palestinian Instagram personalities, which prevented their posts from appearing in feeds and searches. One particular case, involving a short video showing a destroyed building in Gaza, seemingly led to his dismissal in February.

Hamad discovered that the video, which was taken by Palestinian photojournalist Motaz Azaiza, was misclassified as pornographic. He said he received conflicting guidance on whether he was authorized to help resolve the issue but was eventually told in writing that helping troubleshoot it was part of his tasks. A month later, though, Hamad was reportedly notified that he was the subject of an investigation. He filed an internal discrimination complaint in response, but he was fired days later and was told that it was because he violated a policy that prohibits employees from working on issues involving accounts of people they personally know. Hamad, who is Palestinian-American, has denied that he personally knew Azaiza. 

In addition to detailing the events that led to his firing, the lawsuit accuses the company of deleting internal communications in which employees discussed the deaths of relatives in Gaza. Employees who used the Palestinian flag emoji were investigated as well, it claims, whereas those who had posted the Israeli or Ukrainian flags in similar contexts weren't subjected to the same scrutiny.

Meta has been accused of suppressing posts that support Palestine even before the October 7 Hamas attacks against Israel. Late last year, Senator Elizabeth Warren wrote Mark Zuckerberg a letter raising concerns about how numerous Instagram users were accusing the company of "shadowbanning" them for posting about the conditions in Gaza. Meta's Oversight Board ruled last year that the company's tools mistakenly removed a video posted on Instagram showing the aftermath of a strike on the Al-Shifa Hospital in Gaza during Israel’s ground offensive. More recently, the board opened an investigation to review cases involving Facebook posts that used the phrase "from the river to the sea." We've asked Meta for a statement on Hamad's lawsuit, and we'll update this post when we hear back.

AI workers demand stronger whistleblower protections in open letter

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic has signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential concerns of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter, which was published on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risking the loss of their vested equity in the company. After the report, OpenAI CEO Sam Altman said that he had been "genuinely embarrassed" by the provision and claimed it had been removed from recent exit documentation, though it was unclear if it remained in force for some employees. After this story was published, an OpenAI spokesperson told Engadget that the company had removed a non-disparagement clause from its standard departure paperwork and released all former employees from their non-disparagement agreements.

The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo. Kokotajlo said that he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter, which was endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio and Stuart Russell, expresses grave concerns over the lack of effective government oversight of AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, the exacerbation of inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.

“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas,” wrote Kokotajlo on X. “Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

In a statement shared with Engadget, an OpenAI spokesperson said: “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world.” They added: “This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company.”

Google and Anthropic did not respond to requests for comment from Engadget.

The signatories are calling on AI companies to commit to four key principles:

  • Refraining from retaliating against employees who voice safety concerns

  • Supporting an anonymous system for whistleblowers to alert the public and regulators about risks

  • Allowing a culture of open criticism

  • And avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out

The letter comes amid growing scrutiny of OpenAI's practices, including the disbandment of its "superalignment" safety team and the departure of key figures like co-founder Ilya Sutskever and Jan Leike, who criticized the company's prioritization of "shiny products" over safety.

Update, June 05 2024, 11:51AM ET: This story has been updated to include statements from OpenAI.

Malicious code has allegedly compromised TikTok accounts belonging to CNN and Paris Hilton

There’s a new exploit making its way through TikTok, and it has already compromised the official accounts of Paris Hilton, CNN and others, as reported by Forbes. It’s spread via direct message and doesn’t require a download, click or any other response beyond opening the chat. It’s currently unclear how many accounts have been affected.

Even weirder? The hacked accounts aren’t really doing anything. A source within TikTok told Forbes that these impacted accounts “do not appear to be posting content”. TikTok issued a statement to The Verge, saying that it is "aware of a potential exploit targeting a number of brand and celebrity accounts." The social media giant is "working directly with affected account owners to restore access." 

Semafor recently reported that CNN’s TikTok had been hacked, which forced the network to disable the account. It’s unclear if this is the very same hack that has gone on to infect other big-time accounts. The news organization said that it was “working with TikTok on the backend on additional security measures.” 

CNN staffers told Semafor that the news entity had “grown lax” regarding digital safety practices, with one employee noting that dozens of colleagues had access to the official TikTok account. However, another network source suggested that the breach wasn’t the result of someone gaining access from CNN’s end. That’s about all we know for now. We’ll update this post when more news comes in.

Of course, this isn’t the first big TikTok hack. Back in 2023, the company acknowledged that around 700,000 accounts in Turkey had been compromised due to insecure SMS channels involved in its two-factor authentication. Researchers at Microsoft discovered a vulnerability in 2022 that allowed hackers to take over accounts with just a single click. Later that same year, a security breach allegedly impacted more than a billion users.

Twitch removes every member of its Safety Advisory Council

Twitch signed up cyberbullying experts, web researchers and community members back in 2020 to form its Safety Advisory Council. The review board was meant to help the company draft new policies, develop products that improve safety and protect the interests of marginalized groups. Now, CNBC reports that the streaming website has terminated all the members of the council. Twitch reportedly called the nine members into a meeting on May 6 to let them know that their existing contracts would end on May 31 and that they would not be getting paid for the second half of 2024.

The Safety Advisory Council's members include Dr. Sameer Hinduja, co-director of the Cyberbullying Research Center, and Dr. T.L. Taylor, the co-founder and director of AnyKey, an organization that advocates for inclusion and diversity in video games and esports. There's also Emma Llansó, the director of the Free Expression Project for the Center for Democracy and Technology.

In an email sent to the members, Twitch reportedly told them that going forward, "the Safety Advisory Council will primarily be made up of individuals who serve as Twitch Ambassadors." The Amazon subsidiary didn't mention any names, but it describes its Ambassadors as people who "positively contribute to the Twitch community — from being role models for their community, to establishing new content genres, to having inspirational stories that empower those around them."

In a statement sent to The Verge, Twitch trust and safety communications manager Elizabeth Busby said that the new council members will "offer [the website] fresh, diverse perspectives" after working with the same core members for years. "We’re excited to work with our global Twitch Ambassadors, all of whom are active on Twitch, know our safety work first hand, and have a range of experiences to pull from," Busby added.

It's unclear if the Ambassadors taking the current council members' place will get paid or if they're expected to lend their help to the company for free. If it's the latter, then this development could be a cost-cutting measure: The outgoing members were paid between $10,000 and $20,000 a year, CNBC says. Back in January, Twitch also laid off 35 percent of its workforce to "cut costs" and to "build a more sustainable business." In the same month, it reduced how much streamers make from every Twitch Prime subscription they generate, as well.

OpenAI says it stopped multiple covert influence operations that abused its AI models

OpenAI said that it stopped five covert influence operations that used its AI models for deceptive activities across the internet. These operations, which OpenAI shut down between 2023 and 2024, originated from Russia, China, Iran and Israel, and attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI said in a report about the operations, adding that it worked with people across the tech industry, civil society and governments to cut off these bad actors.

OpenAI’s report comes amid concerns about the impact of generative AI on the many elections slated around the world this year, including in the US. In its findings, OpenAI revealed how networks of people engaged in influence operations have used generative AI to produce text and images at much higher volumes than before, and to fake engagement by generating AI-written comments on social media posts.

“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told members of the media in a press briefing, according to Bloomberg. “With this report, we really want to start filling in some of the blanks.”

OpenAI said that the Russian operation called “Doppelganger” used the company’s models to generate headlines, convert news articles into Facebook posts and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI’s models to debug code for a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US and the Baltic States. The Chinese network “Spamouflage,” known for its influence efforts across Facebook and Instagram, used OpenAI’s models to research social media activity and generate text-based content in multiple languages across various platforms. The Iranian “International Union of Virtual Media” also used AI to generate content in multiple languages.

OpenAI’s disclosure is similar to the ones that other tech companies make from time to time. On Wednesday, for instance, Meta released its latest report on coordinated inauthentic behavior, detailing how an Israeli marketing firm had used fake Facebook accounts to run an influence campaign on its platform that targeted people in the US and Canada.
