X is fully online after going down for most of the morning

X seems to be working again after struggling with an outage that took the service offline and made it slow to load for much of the morning. According to X’s developer platform page, there is an ongoing incident related to streaming endpoints that’s caused increased errors. The incident started at 7:39AM PT, according to the page.

That roughly coincides with a spike in reports at Down Detector. The issues seemed to be somewhat intermittent. At some points, X’s website loaded partially and only showed older posts. At other times, the app and website failed to load at all.

As of 9:30AM PT, X’s Explore and trending pages were loading, but the “following” tab wasn’t showing posts and instead suggested users “find some people and topic to follow” (as shown in the screenshot below).

Posts aren't loading.
X

As of 11:15AM PT, X’s developer site was still indicating ongoing issues, so there may still be some lingering problems even though the website seems to be functioning normally again. Reports on Down Detector have also dropped off considerably.

X didn’t immediately respond to a request for comment on the outage. As TechCrunch notes, this is the second time this week that X has experienced significant issues. The service also went down for many users around the world on Tuesday.

Bluesky changed its profile photo earlier in the week.
X

But while the latest issues were widespread, some posts were still managing to go through. Rival Bluesky, which earlier in the week changed its profile picture on X to its butterfly logo in a bikini, took the opportunity to throw some shade.

At 1PM PT, X updated its status page to indicate the issue had been resolved after nearly six hours. It didn’t elaborate on the underlying cause.

Update, January 16, 2026, 2:09PM PT: Updated with the latest information from X’s status page.

This article originally appeared on Engadget at https://www.engadget.com/social-media/x-is-fully-online-after-going-down-for-most-of-the-morning-171843711.html?src=rss

X says Grok will no longer edit images of real people into bikinis

X says it is changing its policies around Grok’s image-editing abilities following a multi-week outcry over the chatbot repeatedly being accused of generating sexualized images of children and nonconsensual nudity. In an update shared from the @Safety account on X, the company said it has “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.”

The new safeguards, according to X, will apply to all users regardless of whether they pay for Grok. xAI is also moving all of Grok’s image-generating features behind its subscriber paywall so that non-paying users will no longer be able to create images. And it will geoblock "the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X" in regions where it's illegal.

The company's statement comes hours after the state of California opened an investigation into xAI and Grok over its handling of AI-generated nudity and child exploitation material. A statement from California Attorney General Rob Bonta cited one analysis that found "more than half of the 20,000 images generated by xAI between Christmas and New Years depicted people in minimal clothing," including some that appeared to be children.

In its update, X said that it has "zero tolerance" for child exploitation and that it removes "high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity" from its platform. Earlier in the day, Elon Musk said he was "not aware of any naked underage images generated by Grok." He later added that when its NSFW setting is enabled, "Grok is supposed [sic] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV." He added that "this will vary in other regions" based on local laws.  

Malaysia and Indonesia both recently moved to block Grok, citing safety concerns and its handling of sexually explicit AI-generated material. In the UK, where regulator Ofcom is also investigating xAI and Grok, officials have said they would back a similar block of the chatbot.

Have a tip for Karissa? You can reach her by email, on X, Bluesky, Threads, or send a message to @karissabe.51 to chat confidentially on Signal.

This article originally appeared on Engadget at https://www.engadget.com/ai/x-says-grok-will-no-longer-edit-images-of-real-people-into-bikinis-231430257.html?src=rss

California is investigating Grok over AI-generated CSAM and nonconsensual deepfakes

California authorities have launched an investigation into xAI following weeks of reports that the chatbot was generating sexualized images of children. "xAI appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the internet, including via the social media platform X," California Attorney General Rob Bonta's office said in a statement.

The statement cited a report that "more than half of the 20,000 images generated by xAI between Christmas and New Years depicted people in minimal clothing," including some that appeared to be children. "We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material," Bonta said. "Today, my office formally announces an investigation into xAI to determine whether and how xAI violated the law."

The investigation was announced as California Governor Gavin Newsom also called on Bonta to investigate xAI. "xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile," Newsom wrote.

California authorities aren't the first to investigate the company following widespread reports of AI-generated child sexual abuse material (CSAM) and non-consensual intimate images of women. UK regulator Ofcom has opened an official inquiry, and European Union officials have said they are looking into the issue as well. Malaysia and Indonesia have moved to block Grok.

Last week, xAI began imposing rate limits on Grok's image generation abilities, but has so far declined to pull the plug entirely. When asked to comment on the California investigation, xAI responded with an automated email that said "Legacy Media Lies." 

Earlier on Wednesday, Elon Musk said he was "not aware of any naked underage images generated by Grok." Notably, that statement does not directly refute Bonta's allegation that Grok is being used "to alter images of children to depict them in minimal clothing and sexual situations." Musk said that "the operating principle for Grok is to obey the laws" and that the company works to address cases of "adversarial hacking of Grok prompts."

This article originally appeared on Engadget at https://www.engadget.com/ai/california-is-investigating-grok-over-ai-generated-csam-and-nonconsensual-deepfakes-202029635.html?src=rss

Instagram wants you to personalize your Reels algorithm for 2026

Last month, Instagram began rolling out a new set of controls that allowed users to personalize the topics recommended to them by the Reels algorithm. Now, Meta is making that feature available to all English-language users of the app globally, along with the ability to highlight their top topics for the coming year.

The feature begins with a selection of topics Meta's AI thinks you're interested in based on your recent activity, and has controls to remove them or add new categories. There's also a separate field for identifying what you want to see less of, and a new "build your 2026 algorithm" option that allows you to highlight three topics in particular.

A screenshot of Instagram reels showing a ski jumper mid-air with a label that says "snowboarding."
Meta's algorithm tagged a skiing clip as "snowboarding."
Screenshot via Instagram

I don't yet have the 2026-specific control, but I was able to tweak some of my preferred topics and was surprised at how quickly the algorithm seemed to adjust. I added "snowboarding" as a topic and then later, when I clicked over to Reels, the first clip I saw was tagged "snowboarding." Unfortunately, the video wasn't actually about snowboarding — it featured a clip of a freestyle skiing event — so Meta's systems might still need a little work at classifying the actual content. But given how sensitive the Reels algorithm can be, it's nice to have a way of opting out of interests even if you briefly went down a rabbit hole.

The feature won't, however, let you ask to see fewer ads. I tried to add "ads" to my "what you want to see less of" list and received an error. "No results found. Try another topic or interest." I was able to successfully add "sponsored content" and "AI" to my "see less" list, though I'm pretty sure the latter will affect videos about AI rather than those made with the help of it.

This article originally appeared on Engadget at https://www.engadget.com/social-media/instagram-wants-you-to-personalize-your-reels-algorithm-for-2026-215252736.html?src=rss

Meta has closed three VR studios as part of its metaverse cuts

Several of Meta's VR studios have been affected by the company's metaverse-focused layoffs. The company has shuttered three of its studios: Armature, Sanzaru and Twisted Pixel. VR fitness app Supernatural will also no longer be updated with fresh content.

Employees at Twisted Pixel, which released Marvel's Deadpool VR in November, and Sanzaru, known for Asgard's Wrath, posted on social media about the closures. Bloomberg reported that Armature, which brought Resident Evil 4 to Quest back in 2021, has also closed and that the popular VR fitness app Supernatural will no longer get updates.

“Due to recent organizational changes to our Studio, Supernatural will no longer receive new content or feature updates starting today,” the company wrote in an update on Facebook. The app “will remain active” for existing users.

A spokesperson for Meta confirmed the closures. "We said last month that we were shifting some of our investment from Metaverse toward Wearables," the spokesperson said in a statement to Engadget. "This is part of that effort, and we plan to reinvest the savings to support the growth of wearables this year."

The cuts raise questions about Meta's commitment to supporting a VR ecosystem it has invested heavily in. The company hasn't announced any new VR headsets since the Quest 3S in 2024, and last month it "paused" planned Horizon OS headsets from Asus and Lenovo. Now, it's pulling back on in-house game development too.

Meta is claiming, internally at least, that it remains committed to supporting the industry. “These changes do not mean we are moving away from video games,” Oculus Studios director Tamara Sciamanna wrote in a memo reported by Bloomberg. "With this change we are shifting our investment to focus on our third-party developers and partners to ensure long-term sustainability.”

Have a tip for Karissa? You can reach her by email, on X, Bluesky, Threads, or send a message to @karissabe.51 to chat confidentially on Signal.

Update, January 13, 2026, 2:13PM PT: This post was updated with additional information about Supernatural.

This article originally appeared on Engadget at https://www.engadget.com/ar-vr/meta-has-closed-three-vr-studios-as-part-of-its-metaverse-cuts-202720670.html?src=rss

Mark Zuckerberg announces new ‘Meta Compute’ initiative for its data center and AI projects

On the heels of Mark Zuckerberg announcing that Meta's former board member, Dina Powell McCormick, would be formally joining the company as president and vice chairman, the CEO has shared new details about her purview at the company. The executive will play a key role overseeing Meta's sprawling infrastructure investments as part of a newly announced initiative called Meta Compute.

"Meta is planning to build tens of gigawatts this decade, and hundreds of gigawatts or more over time," Zuckerberg said in an update. "How we engineer, invest, and partner to build this infrastructure will become a strategic advantage."

Zuckerberg said that Meta's head of global engineering Santosh Janardhan will lead the "top-level initiative" and that recent hire and former Safe Superintelligence CEO Daniel Gross will "lead a new group responsible for long-term capacity strategy, supplier partnerships, industry analysis, planning, and business modeling." McCormick is expected to "work on partnering with governments and sovereigns to build, deploy, invest in, and finance Meta's infrastructure."

Meta has been investing heavily in infrastructure to fuel its AI "superintelligence" ambitions. The company also recently announced three agreements to buy massive amounts of nuclear power for its data centers. Zuckerberg has previously said he expects Meta to spend $600 billion on AI infrastructure and jobs by 2028.

This article originally appeared on Engadget at https://www.engadget.com/ai/mark-zuckerberg-announces-new-meta-compute-initiative-for-its-data-center-and-ai-projects-192100086.html?src=rss

Handwriting is my new favorite way to text with the Meta Ray-Ban Display glasses

When Meta first announced its display-enabled smart glasses last year, it teased a handwriting feature that allows users to send messages by tracing letters with their hands. Now, the company is starting to roll it out, with people enrolled in its early access program getting it first.

I got a chance to try the feature at CES and it made me want to start wearing my Meta Ray-Ban Display glasses more often. When I reviewed the glasses last year, I wrote about how one of my favorite things about the neural band is that it reduced my reliance on voice commands. I've always felt a bit self-conscious speaking to my glasses in public.

Up to now, replying to messages on the display glasses has still generally required voice dictation or generic preset replies. But handwriting means that you can finally send custom messages and replies somewhat discreetly. 

Sitting at a table wearing the Meta Ray-Ban Display glasses and neural band, I was able to quickly write a message just by drawing the letters on the table in front of me. It wasn't perfect — it misread a capital "I" as an "H" — but it was surprisingly intuitive. I was able to quickly trace out a short sentence and even correct a typo (a swipe from left to right adds a space, while a swipe from right to left deletes the last character).

Alongside handwriting, Meta also announced a new teleprompter feature. Copy and paste a bunch of text — it supports up to 16,000 characters (roughly a half-hour's worth of speech) — and you can beam your text into the glasses' display. 

If you've ever used a teleprompter, Meta's version works a bit differently in that the text doesn't automatically scroll while you speak. Instead, the text is displayed on individual cards you manually swipe through. The company told me it originally tested a scrolling version, but that in early tests, people said they preferred to be in control of when the words appeared in front of them. 

Teleprompter is starting to roll out now, though Meta says it could take some time before everyone is able to access it.

The updates are among the first major additions Meta has made to its display glasses since launching them late last year and a sign that, like its other smart glasses, the company plans to keep them fresh with new features. Elsewhere at CES, the company announced some interesting new plans for the device's neural band and that it was delaying a planned international rollout of the device.

This article originally appeared on Engadget at https://www.engadget.com/wearables/handwriting-is-my-new-favorite-way-to-text-with-the-meta-ray-ban-display-glasses-213744708.html?src=rss

This haptic wristband pairs with Meta smart glasses to decode facial expressions

It's only been a few months since Meta announced that it would open its smart glasses platform to third-party developers. But one startup at CES is already showing off how the glasses can help power an intriguing set of accessibility features.

Hapware has created Aleye, a haptic wristband that, when paired with Ray-Ban Meta smart glasses, can help people understand the facial expressions and other nonverbal cues of the people they are talking to. The company says the device could help people who are blind, low vision or neurodivergent unlock a type of communication that otherwise wouldn't be available.

Aleye is a somewhat chunky wristband that can vibrate in specific patterns on your wrist to correspond to the facial expressions and gestures of the person you're talking to. It uses the Ray-Ban Meta glasses' computer vision abilities to stream video of your conversation to the accompanying app, which uses an algorithm to detect facial expressions and gestures.

The bumps on the underside of the Aleye vibrate to form unique patterns.
Karissa Bell for Engadget

Users can customize which expressions and gestures they want to detect in the app, which also provides a way for people to learn to distinguish between the different patterns. Hapware CEO Jack Walters said that in early testing, people have been able to learn a handful of patterns within a few minutes. The company has also tried to make them intuitive. "Jaw drop might feel like a jaw drop, a wave feels more like a side to side haptics," he explains.

The app is also able to use Meta AI to give vocal cues about people's expressions, though Hapware's CTO Dr. Bryan Duarte told me it can get a bit distracting to talk to people while the assistant is babbling in your ear. Duarte, who has been blind since a motorcycle accident at the age of 18, told me he prefers Aleye to Meta AI's other accessibility features like Live AI. "It will only tell me there's a person in front of me," he explains. "It won't tell me if you're smiling. You have to prompt it every time, it won't just tell you stuff."

Hapware has started taking pre-orders for the Aleye, which starts at $359 for the wristband or $637 for the wristband plus a year's subscription to the app (a subscription is required and otherwise costs $29 a month). A pair of Ray-Ban Meta glasses is also not included, though Meta has been building a number of its own accessibility features for the device.

This article originally appeared on Engadget at https://www.engadget.com/wearables/this-haptic-wristband-pairs-with-meta-smart-glasses-to-decode-facial-expressions-214305431.html?src=rss

LG’s CLOiD robot can fold laundry and serve food… very slowly

When LG announced that it would demo a laundry-folding, chore-doing robot at CES 2026, I was immediately intrigued. For years, I've wandered the Las Vegas Convention Center halls and wondered when someone might create a robot that can tackle the mundane but useful tasks I despise, like folding laundry. With CLOiD (pronounced like "Floyd"), LG has proven that this is theoretically possible, but not likely to happen any time soon.

I went to the company's CES booth to watch its demonstration of CLOiD's abilities, which also include serving food, fetching objects and fitness coaching. During a very carefully choreographed 15-minute presentation, I watched CLOiD grab a carton of milk out of the fridge, put a croissant in an oven, sort and fold some laundry and grab a set of keys off a couch and hand them to the human presenter.

Throughout the demonstration, LG showed off how its own appliances can play along with the robot. When it rolled over to the fridge, the door automatically opened, as did the oven. When the LG-branded robot vacuum needed to move around a hamper, CLOiD helpfully cleared the path. But the robot also moved very slowly, which you can see in the highlight video below. 

The appliance maker is selling the setup as a part of its vision for a "zero labor home" where its appliances and, I guess, robotics technology can come together to take care of all your chores and household upkeep. Maybe I'm jaded from a decade of watching CES vaporware, but I left the slick demo thinking the concept is unlikely to amount to much anytime soon.

On one hand, it is exciting to see robots competently performing tasks that would actually be useful to most people. But this technology is still far from accessible. Even LG isn't making any firm commitments about CLOiD's future as anything more than a CES demo. The company has instead said that CLOiD is a signal of its interest in creating "home robots with practical functions" and "robotized appliances," like fridges with doors that can open automatically. 

That may be a more reasonable target for the company (and yet another way for LG to sell us more appliance upgrades). But it's still pretty far from anything approaching the fantasy of a "zero labor home."

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/lgs-cloid-robot-can-fold-laundry-and-serve-food-very-slowly-181902306.html?src=rss

These robotic sneakers gave me a surprising boost at CES

I'll admit that I've always kind of taken walking for granted. Other than a knee injury more than a decade ago, my ability to walk long distances has largely been limited only by my own choices. That's not the case for everyone, though. And robotics company Dephy has created a pair of robotic sneakers, called the Sidekick, that are meant to help people who want to walk more than their bodies might otherwise be capable of.

The system consists of two parts: an ankle-worn exoskeleton and a special pair of sneakers that attach to it. The exoskeleton hooks onto the back of the shoe and is secured with a strap around your calf. The battery-powered device is equipped with sensors that can detect and adapt to the wearer's gait in order to deliver an extra "boost" with each step.

The whole setup is pricey, at $4,500, but Dephy is betting that people who have "personal range anxiety" might be willing to pay for the extra confidence the Sidekick can provide. "This is a device that's kind of like [having] an extra calf muscle," Dephy CEO Luke Mooney told me. 

The Sidekick.
Karissa Bell for Engadget

I was able to take the Sidekick for a spin around the CES show floor and it was a truly surprising sensation. The best way I can describe walking with the Sidekick powered on is that with every step forward there's a noticeable upward push from under your heel. It wasn't enough to throw me off balance, but it did feel a bit strange.

The Sidekick has adjustable power levels based on how much help you might need. At the highest level, it definitely felt unnecessarily pushy. The lower levels were still noticeable but felt less disruptive. I just felt… bouncy. Later, when Mooney turned off the power entirely, I noticed that my feet felt weirdly heavy in a way they hadn't just a few minutes before. 

Mooney was quick to tell me that I'm not Dephy's target demographic. "A lot of times people who are fit, or like athletes, actually struggle to adopt to the technology because their body's so in tune with how they move," he said. "Whereas folks who are not as physically active and fit, their body's ready to accept help."

The company's technology will be used in products more focused on athletic performance, however. Dephy has partnered with Nike on its upcoming robotic sneaker currently known as Project Amplify. Mooney declined to share details on the collaboration, but the shoemaker has claimed that some early testers have been able to improve their mile times by two minutes. 

I tried the Sidekick early in the day. Several hours later, though, when I was walking between the Las Vegas Convention Center halls for the third or fourth time, I started thinking about those robotic sneakers again. I was getting close to 10,000 steps and hadn't sat down for hours. My feet were sore. I remembered that strange, bouncy boost and thought it sounded kind of nice.

This article originally appeared on Engadget at https://www.engadget.com/wearables/these-robotic-sneakers-gave-me-a-surprising-boost-at-ces-174500005.html?src=rss