The Apple Watch ban is here: Why Apple is no longer selling the Watch Series 9 and Watch Ultra 2

You can't buy the Apple Watch Series 9 or the Watch Ultra 2 from Apple's online store anymore, and as of December 24, they're no longer available from the company's retail outlets either. Here's why.

Why is there an Apple Watch ban?

Apple has pulled the watch models from its website after the United States International Trade Commission (ITC) ordered the company to stop selling them in the US.

The ITC issued the Apple Watch ban after siding with Masimo, a medical technology company that sued Apple in 2021 for allegedly infringing on five patents related to light-based blood oxygen monitoring. In October, the commission upheld a judge's ruling from earlier this year that the Apple Watch did violate Masimo's patents. Both of the affected models come with the feature, which Apple first offered on the Watch Series 6, though older models with the capability are not included in the sales ban.

Apple is appealing the decision and asked the commission to pause the ban while that appeal plays out, but the ITC denied the request. That left the order on track to take effect unless the president stepped in and vetoed it, a review the White House delegates to the US Trade Representative, which can disapprove an ITC decision for policy reasons.

Masimo originally sued Apple in 2020 for allegedly stealing trade secrets. It alleged that Apple hired several Masimo employees and used their knowledge of Masimo's products to develop the Apple Watch's blood oxygen monitoring capabilities. That case is still ongoing.

What is Apple doing about it?

Apple previously told Engadget that it would pull the watch models from its website on December 21 and from its retail outlets on December 24 as a preemptive measure. The import ban took effect on December 26, after the Presidential Review Period ended on December 25 without a veto.

"Apple’s teams work tirelessly to create products and services that empower users with industry-leading health, wellness, and safety features," the company said earlier this month. "Apple strongly disagrees with the order and is pursuing a range of legal and technical options to ensure that Apple Watch is available to customers." The company added that it will "continue to take all measures to return Apple Watch Series 9 and Apple Watch Ultra 2 to customers in the US as soon as possible.”

In 2022, Apple itself filed two patent infringement lawsuits against Masimo, accusing the company of releasing a smartwatch that copies Apple Watch features. With the Presidential Review Period now over without a veto, however, Apple may have to wait for the results of its appeal.

Apple could also come to an agreement with Masimo, which would most likely mean money changing hands. Masimo's CEO has said he is open to a financial settlement, but told Bloomberg that Apple has not tried to negotiate one. Bloomberg also reports that Apple is working on a software update that it believes will resolve the ITC dispute.

How can I buy an Apple Watch now?

You can still get the brand's older watches, or the Apple Watch SE, which doesn't have a blood oxygen monitor. If you're looking to buy either of the affected models this holiday season, they will still be available from third-party retailers.

With the Apple Watch import ban now in effect, retailers will only be able to sell through their existing stock. So your best bet for buying these models would be a reputable retailer like Amazon, Best Buy, Target or Walmart. If they're out of stock, you'll just have to wait for this mess to get sorted out — or take it as an excuse to vacation in Mexico or Canada.


Apple’s Vision Pro mixed-reality headset could be available by February 2024

When Apple introduced the Vision Pro mixed-reality headset, it had no clear release schedule and said only that the device would be available sometime early next year. According to a new report by Bloomberg's Mark Gurman, "early next year" means as soon as February. Apple has reportedly ramped up production of the headset in China over the past several weeks with the intention of having devices ready for consumers by the end of January, with plans to put the Vision Pro on sale the following month.

In addition to ramping up production, Apple has reportedly sent developers an email urging them to test their apps for the headset with the latest tools and to send their software to the company for feedback. Gurman says that's another sign of the device's impending release. In his report, Gurman also detailed the steps Apple is taking to launch a completely new product category. The last time the company introduced a brand-new product was in 2015, when it started selling the Apple Watch, but the Vision Pro is a different beast that requires meticulous planning for its release.

Since the headset has multiple possible configurations and can be customized to meet each customer's needs, Apple is apparently sending at least two staffers from each retail store to its headquarters for training in January. There, they'll be taught how to attach the device's headband and light seals, as well as how to fit prescription lenses. The Vision Pro will set customers back $3,499 when it goes on sale, but Gurman previously reported that Apple is working on a more affordable (and less powerful) version that will cost between $1,500 and $2,500.


Vizio’s latest smart TV update enables faster startups and app switching

Vizio has released a software update for its Home platform across all current models in its lineup that makes the interface respond faster than before. The company says TV sets that receive the update will power up twice as fast, while apps will now load instantly. The speed boost also enables quicker switching between apps, so moving from one streaming service to another doesn't interrupt the viewing experience.

In addition, the latest version of the OS is meant to surface recommendations and return search results, in both voice and text formats, much faster than its predecessor. The hope is that users will benefit from the upgrade by spending less time looking for content and more time actually watching it. The updated Home platform also comes with a new left-side navigation menu, as well as hierarchy sorting, to make it easier to browse for new shows and movies.

When Vizio rolled out its redesigned Home interface in June, it already had features designed to make it easier and faster for viewers to find new things to watch, including new navigation tools, recommendations and a reworked onscreen keyboard. According to a Vizio representative, the new update will roll out in the coming weeks and will cover "all 2021 and newer model year VIZIO Smart TVs and select TVs from the 2020 model year."


Rite Aid is banned from using AI facial surveillance technology for the next five years

Rite Aid will not be able to use any kind of facial recognition security system for the next five years as part of its settlement with the Federal Trade Commission, which accused it of "reckless use of facial surveillance systems." The FTC said in its complaint that the drugstore chain deployed artificial intelligence-powered facial recognition technology from 2012 to 2020 to identify customers who may have previously shoplifted or engaged in problematic behavior. The company had apparently created a database of "tens of thousands" of customer images, along with their names, dates of birth and alleged crimes. The photos were of poor quality, captured by its security cameras and employees' phones or even pulled from news stories. As a result, the system generated thousands of false-positive alerts.

Samuel Levine, Director of the FTC's Bureau of Consumer Protection, said the technology's use left Rite Aid's customers "facing humiliation and other harms." According to the complaint, employees would follow flagged customers around the store, publicly accuse them of wrongdoing in front of friends and family, and sometimes get the police involved. Further, the system was more likely to generate false positives in predominantly Black and Asian communities. A Reuters investigation in 2020 revealed that the company used facial surveillance in "largely lower-income, non-white neighborhoods." The FTC noted in its complaint that the technology and "Rite Aid's failures were likely to cause substantial injury to consumers, especially to Black, Asian, Latino and women customers."

In addition to prohibiting the use of facial surveillance technologies, the order requires Rite Aid to delete the photos it collected, notify consumers when their information is registered in a database for security purposes, and provide conspicuous notices if it does use facial recognition or other types of biometric surveillance technologies. It also has to implement a proper data security program to protect the information it collects and have that program assessed by a third party. The proposed order will take effect after being approved by the bankruptcy court, since the company is currently going through bankruptcy proceedings.

Rite Aid, however, said that it “fundamentally disagree[s]” with the agency’s allegations and that it stopped using the surveillance technology years ago.

“We are pleased to reach an agreement with the FTC and put this matter behind us,” the drugstore chain said in a statement. “We respect the FTC’s inquiry and are aligned with the agency’s mission to protect consumer privacy. However, we fundamentally disagree with the facial recognition allegations in the agency’s complaint. The allegations relate to a facial recognition technology pilot program the Company deployed in a limited number of stores. Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began.

Rite Aid’s mission has always been and will continue to be to safely and conveniently serve the communities in which we operate. The safety of our associates and customers is paramount. As part of the agreement with the FTC, we will continue to enhance and formalize the practices and policies of our comprehensive information security program.”


Meta’s automated tools removed Israel-Hamas war content that didn’t break its rules

Meta's Oversight Board has published its decision for its first-ever expedited review, which took only 12 days instead of weeks and focused on content surrounding the Israel-Hamas war. The Board overturned the company's original decisions to remove two pieces of content, one from each side of the conflict. Since it supported Meta's subsequent move to restore the posts on Facebook and Instagram, no further action is expected from the company. However, the Board's review cast a spotlight on how Meta's reliance on automated tools could prevent people from sharing important information. In this particular case, the Board noted that "it increased the likelihood of removing valuable posts informing the world about human suffering on both sides of the conflict in the Middle East."

For its first expedited review, the Oversight Board chose to investigate two particular appeals that represent what users in the affected region have been submitting since the October 7 attacks. One is a video posted on Facebook of a woman begging her captors not to kill her as she was taken hostage during the initial terrorist attacks on Israel. The other, posted on Instagram, shows the aftermath of a strike on the Al-Shifa Hospital in Gaza during Israel's ground offensive, including images of dead and injured Palestinians, children among them.

The Board's review found that the two videos were mistakenly removed after Meta adjusted its automated tools to police content more aggressively following the October 7 attacks. For instance, the Al-Shifa Hospital video takedown and the rejection of a user's appeal to get it reinstated were both made without human intervention. Both videos were later restored with warning screens stating that such content is allowed for the purpose of news reporting and raising awareness. The Board commented that Meta "should have moved more quickly to adapt its policy given the fast-moving circumstances, and the high costs to freedom and access to information for removing this kind of content…" It also raised concerns that the company's rapidly changing approach to moderation could give it an appearance of arbitrariness and call its policies into question.

That said, the Board found that Meta demoted the content it reinstated with warning screens, excluding the posts from being recommended to other Facebook and Instagram users even after the company determined that they were intended to raise awareness. Notably, a number of users reported being shadowbanned in October after posting content about the conditions in Gaza.

The Board also called attention to how Meta only allowed hostage-taking content from the October 7 attacks to be posted by users on its cross-check lists between October 20 and November 16. These lists are typically made up of high-profile users exempted from the company's automated moderation system. The Board said Meta's decision highlights its concerns about the program, specifically its "unequal treatment of users [and] lack of transparent criteria for inclusion." It said the company needs "to ensure greater representation of users whose content is likely to be important from a human-rights perspective on Meta's cross-check lists."

“We welcome the Oversight Board’s decision today on this case. Both expression and safety are important to us and the people who use our services. The board overturned Meta’s original decision to take this content down but approved of the subsequent decision to restore the content with a warning screen. Meta previously reinstated this content so no further action will be taken on it,” the company told Engadget in a statement. “As explained in our Help Center, some categories of content are not eligible for recommendations and the board disagrees with Meta barring the content in this case from recommendation surfaces. There will be no further updates to this case, as the board did not make any recommendations as part of their decision.”


Microsoft Office apps arrive on Meta Quest VR headsets

Meta Quest users will now be able to write reports, edit spreadsheets and create presentations — if they even want to do any of those tasks on a VR headset, that is. Android Central reports that support for the basic Microsoft Office suite has arrived on the original Oculus Quest, the Meta Quest 2, the Meta Quest Pro, and the latest model, the Meta Quest 3. Users can now download Microsoft Word, Excel and PowerPoint from the Meta Quest store for free.

The company first revealed that it would launch Microsoft 365 app experiences for its headsets during its Connect 2022 event, where it also promised users access to Outlook, Teams and a Windows experience as part of its partnership with Microsoft. To use the basic Office suite apps on their device, users will need to sign in with a Microsoft account. The app files are pretty small because they run in the cloud, so they're quick to download and can run side by side for the multitaskers out there.

According to The Verge, though, the apps aren't exactly optimized for virtual reality, so users may have to contend with tiny icons and other elements that don't work as well in the environment. In addition, it's not easy typing on the Quest's onscreen keyboard, so users may have to link Bluetooth accessories if they need to get some serious work done. 


Activision Blizzard will pay $54 million to settle California’s gender discrimination lawsuit

California's Civil Rights Department (CRD) has announced that it has reached a settlement agreement with Activision Blizzard for a case it filed in 2021, accusing the company of systemic gender discrimination and fostering a culture that encouraged rampant misogyny and sexual harassment. The agency, which sued the developer when it was still called the California Department of Fair Employment and Housing, said Activision Blizzard will have to pay $54 million to settle its allegations. Out of the total, $45.75 million will go towards a fund meant to compensate female employees and contract workers who worked for the company in California from October 12, 2015 until December 31, 2020. 

In addition, the developer is expected to retain an independent consultant to evaluate its promotion policies and training materials, as well as to make recommendations based on their findings. If you'll recall, the agency's lawsuit alleged that female employees were overlooked for promotions and were paid less than their male colleagues. According to MarketWatch, though, the settlement will also see the agency withdraw its claims that there was widespread sexual harassment at the company. The department will reportedly have to file an amended complaint that focuses only on gender-based pay disparities and discrimination.

California's original lawsuit detailed how Activision Blizzard condoned a "frat boy" culture that encouraged certain unsavory behaviors. Male employees allegedly did "cube crawls," wherein they routinely groped and sexually harassed their female colleagues at their desks. A spokesperson for the company told MarketWatch that it is "gratified that the CRD has agreed to file an amended complaint that entirely withdraws its 2021 claims alleging widespread and systemic workplace harassment at Activision Blizzard." They added: "We appreciate the importance of the issues addressed in this agreement and we are dedicated to fully implementing all the new obligations we have assumed as part of it. We are committed to ensuring fair compensation and promotion policies and practices for all our employees, and we will continue our efforts regarding inclusion of qualified candidates from underrepresented communities in outreach, recruitment, and retention."

Meanwhile, the department told the website that its announcement, which contains no reference to its earlier sexual harassment allegations, "largely speaks for itself with respect to the historic nature of this more than $50 million settlement agreement, which will bring direct relief and compensation to women who were harmed by the company’s discriminatory practices."

As The Wall Street Journal noted when it reported the settlement, this lawsuit set the stage for Microsoft to acquire the developer. After reports came out that Activision Blizzard CEO Bobby Kotick kept sexual harassment allegations within the company from reaching its board of directors, the developer's shares fell, giving Microsoft the opening to offer a deal. The $68.7 billion acquisition was finalized in October after almost two years of contending with regulators trying to block the purchase. 


Naughty Dog cancels development on The Last of Us Online

Alas, The Last of Us Online will never see the light of day. Naughty Dog has announced that it has "made the incredibly difficult decision to stop" its development. The studio explains that the online team had a clear vision for the project and had already refined its gameplay. However, as the company ramped the game up to full production, it became clear that it would be biting off more than it could chew. Releasing an online game would mean dedicating all of the studio's resources to supporting post-launch content in the future, effectively turning Naughty Dog into a studio that exclusively offers live-service games, with no capacity to release more single-player narrative games like the original The Last of Us titles.

The studio first gave us a peek at concept art from the project in 2022, but it has offered very little in the way of updates since. After the PlayStation Showcase in May, it acknowledged that fans of the franchise were looking forward to hearing more about the game, but said it needed more time to work on it and couldn't share details just yet. Bloomberg reported shortly after, though, that the studio had already reassigned developers working on the project to other teams and was reconsidering its viability. Clearly, Naughty Dog has decided its path, and it doesn't lead to the release of an online title. The developer says it has "more than one ambitious, brand new single player game" in the works and will share what's next when it's ready.


Discord could ban users if they continue to deadname trans people

Discord has officially updated its hateful conduct policy to add behaviors that don't reflect its "goal to promote acceptance and inclusivity." These newly added bannable behaviors include "deadnaming or misgendering a transgender person." According to TechCrunch, Discord started internally implementing its expanded policy in 2022, but the chat app has only just made it public in an effort to provide more transparency.

"As part of our ongoing efforts to ensure Discord remains a safe and fun place for people to hang out with friends, we continually evaluate potential harms and update our policies," a spokesperson told the publication. "We often work with organization and subject matter experts to ensure our policies accurately encompass a holistic view of how these issues manifest across the internet and society."

In addition to misgendering and deadnaming trans people, Discord also counts the following as hateful conduct: expressing contempt or disgust toward members of protected groups, perpetuating negative stereotypes about them, repeatedly using slurs to degrade them, threatening or promoting violence against them, and calling for their segregation and exclusion. The LGBT advocacy organization GLAAD, which has called on social networks to update their policies to recognize deadnaming and targeted or deliberate misgendering as hate speech, has praised Discord's move.

GLAAD also points out that among the biggest social networks today, TikTok is the only one that explicitly prohibits intentional misgendering and deadnaming. Notably, X implemented a rule against the behavior in 2018, when it was still called Twitter, but quietly removed that section from its hateful conduct policy under Elon Musk's leadership.

Discord won't be banning users who violate its hateful conduct policy after just one infraction, though. Under its warning system, users who break the rules will receive a direct message detailing their offense, with the platform weighing each violation differently based on the "severity of harm." Users can see their account standing on their settings page. Accounts with one or more violations will be marked "at risk," while those with "severe or repeated" violations could be permanently suspended.


Twitch clears up its confusing sexual content guidelines

Twitch has finally streamlined its confusing guidelines surrounding sexual content after a creator was able to appear seemingly topless in a stream posted on the website. The incident compelled viewers to question what kind of content could actually appear — and what could get you banned — on livestreams. Following feedback from users, Twitch has merged the two separate sexual content policy sections on its guidelines page and clarified that some materials that were previously prohibited are now allowed on the platform, as long as they're properly labeled. 

They include content that "deliberately highlight[s] breasts, buttocks and pelvic region" when fully clothed, which Twitch admits has caused female-presenting streamers to be "disproportionately penalized." The platform now also allows streams to show drawn, animated or sculpted female-presenting breasts, genitalia or buttocks that are fully exposed, though fictionalized sexual acts and masturbation are still prohibited. Videos that show writing on female-presenting breasts and buttocks are now allowed as well, along with videos that contain striptease dances. Meanwhile, dance moves that include "twerking" and "grinding" can now be shown in videos without being labeled at all.

In addition to clarifying its sexual content policy, Twitch has also altered its homepage algorithm so that it will no longer recommend content labeled with Drugs, Intoxication, or Excessive Tobacco Use; Violent and Graphic Depictions; Gambling; and/or Sexual Themes. The website explains that while viewers must deliberately click on videos on the homepage to watch them, parts of streams with those themes were still visible as thumbnails, even to people who might be uncomfortable seeing them. Now, viewers must explicitly seek out videos containing those themes.
