Google celebrates Pac-Man’s 45th anniversary with a Halloween Doodle

Google Doodle is featuring something special for gamers today: a tribute to Pac-Man's 45th anniversary, just in time for Halloween. Today and tomorrow, you'll be able to play four haunted-house mazes specially designed for the event by Pac-Man's parent company, Bandai Namco Entertainment. As in other games in the franchise, you control Pac-Man and have to eat all of the dots in a maze without being caught by a ghost. 

Yes, the Ghost Gang, with Blinky, Pinky, Inky and Clyde, is back for this event. Each maze's design reflects the ghosts' personalities, so you can plan your moves around how you think each one will try to get you. Blinky the red ghost, for instance, tends to actively chase Pac-Man, while the others try to corner him. If you grab a Power Pellet, you'll activate a time-limited state during which you can chase and eat the ghosts, as well. 

To play the game, simply go to the Google homepage on desktop or fire up the Google app on Android or iOS. On a PC, you control Pac-Man with your keyboard's arrow keys, while on mobile, you swipe or press and drag your finger to change directions. 

This article originally appeared on Engadget at https://www.engadget.com/gaming/google-celebrates-pac-mans-45th-anniversary-with-a-halloween-doodle-133034299.html?src=rss

OpenAI’s character cameos will let you put pets and original personas in Sora videos

OpenAI has rolled out the ability to create character cameos of your pets, doodles, original personas or even objects in the Sora app, which you can then drop into your videos. To start, go to your profile page in the Sora app, tap the "Create cameo" button and upload a video of the character (or pet) you want the model to generate. The company says just a few seconds of footage are enough, and you can even use old Sora-generated videos as reference. 

You can then give your character a display name and describe how you want the model to animate it. In the example OpenAI uploaded, for instance, the description for a wicked green witch character reads: "She glides with a mysterious, whimsical grace, speaks in rhymes when casting spells, and her pointed hat always tilts as if listening to secrets on the wind." You can choose permissions for each character you create. Under the "Who can use this" permissions section, you can choose between several options: Only me, People I approve, Mutuals, Everyone and Everyone (excluding specific sets of users). Whenever you want to generate a Sora video with a cameo in it, you can just tag a specific character.

Sora 2 launched with a cameo feature that lets you create an avatar of yourself, but this is a new application of the capability. Cameo, the app that lets users buy personalized videos from celebrities, recently sued OpenAI for trademark infringement over its use of the "cameo" name, arguing that OpenAI's use of the word is likely to cause consumer confusion and dilute its brand. OpenAI disagreed "that anyone can claim exclusive ownership over the word 'cameo.'"

In addition to character cameos, OpenAI has introduced "stitching," which lets you join several clips together into one continuous video. There's now also a leaderboard that shows the most cameoed and most remixed videos.

This article originally appeared on Engadget at https://www.engadget.com/ai/openais-character-cameos-will-let-you-put-pets-and-original-personas-in-sora-videos-123043189.html?src=rss

Xbox console revenue fell 30 percent year-over-year this summer

It hasn't been a good year for Xbox so far. Microsoft has released its earnings report for the quarter ending September 30, which revealed that revenue from Xbox hardware fell 30 percent year-over-year. Note that the decline doesn't reflect any dip in sales caused by the consoles' $20-to-$70 price hikes, since those took effect on October 3. Similarly, Microsoft's increase to the price of its Game Pass Ultimate subscription, from $20 to $30, only took effect in October. 

Meanwhile, revenue from Xbox content and services remained relatively unchanged from the same period last year. Microsoft says it saw growth from Xbox subscriptions and third-party content, but it was "partially offset" by the decline in first-party gaming content. 

The Xbox division was one of the teams hit hardest when Microsoft began cutting its global workforce earlier this year, with the company cancelling games in development for the console. Microsoft scrapped its modern reimagining of Perfect Dark, a first-person shooter from 2000, and even closed the Xbox studio working on it. Amid the same layoffs, the company also cancelled Everwild, a project that had long been in development at Xbox studio Rare. 

Overall, Microsoft's $77.7 billion in revenue was 17 percent higher than the same period last year, and its operating income was up 22 percent. Microsoft CEO Satya Nadella posted a few highlights from the company's earnings call on X, mostly focusing on its AI efforts. He said the company will increase its AI capacity by 80 percent this year and will double its data center footprint over the next two years. 

This article originally appeared on Engadget at https://www.engadget.com/gaming/xbox/xbox-console-revenue-fell-30-percent-year-over-year-this-summer-012245146.html?src=rss

Bipartisan GUARD Act proposes age restrictions on AI chatbots

US lawmakers from both sides of the aisle have introduced a bill called the "GUARD Act," which is meant to protect minor users from AI chatbots. "In their race to the bottom, AI companies are pushing treacherous chatbots at kids and looking away when their products cause sexual abuse, or coerce them into self-harm or suicide," said the bill's co-sponsor, Senator Richard Blumenthal (D-Conn.). "Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties."

Under the GUARD Act, AI companies would be required to prevent minors from accessing their chatbots. That means conducting age verification for both existing and new users with the help of a third-party system, along with periodic re-verification of accounts that have already been verified. To protect users' privacy, companies would only be allowed to retain data "for no longer than is reasonably necessary to verify a user's age" and may not share or sell user information. 

AI companies would also be required to make their chatbots explicitly tell users that they're not human beings at the beginning of each conversation and every 30 minutes after that. They'd have to ensure their chatbots don't claim to be a human being or a licensed professional, such as a therapist or a doctor, when asked. Finally, the bill would create new criminal offenses to charge companies that make their AI chatbots available to minors. 

In August, the parents of a teen who died by suicide filed a wrongful death lawsuit against OpenAI, accusing it of prioritizing "engagement over safety." They said ChatGPT helped their son plan his death after months of conversations in which he told the chatbot about his four previous suicide attempts. ChatGPT allegedly told him it could provide information about suicide for "writing or world-building." A mother from Florida sued the startup Character.AI in 2024 for allegedly causing her 14-year-old son's suicide. And just this September, the family of a 13-year-old girl filed another wrongful death lawsuit against Character.AI, arguing that the company didn't point their daughter to any resources or notify authorities when she talked about her suicidal ideation. 

It's also worth noting that the bill's co-sponsor, Senator Josh Hawley (R-Mo.), previously said that the Senate Judiciary Subcommittee on Crime and Counterterrorism, which he leads, will investigate reports that Meta's AI chatbots could have "sensual" conversations with children. He made the announcement after Reuters reported on an internal Meta document stating that Meta's AI was allowed to tell a shirtless eight-year-old: "Every inch of you is a masterpiece — a treasure I cherish deeply."

This article originally appeared on Engadget at https://www.engadget.com/ai/bipartisan-guard-act-proposes-age-restrictions-on-ai-chatbots-130020355.html?src=rss

YouTube will ‘strengthen’ enforcement around violent and gambling games in November

Starting on November 17, YouTube will strengthen enforcement of its guidelines around online gambling and graphic video game content. One of the biggest changes it's implementing is age-restricting gaming videos featuring realistic human characters in scenes depicting torture or mass violence against non-combatants. 

The streaming site says it will take the duration and prominence of such a scene into account when reviewing a video. For compilation videos, it will consider the cumulative duration of the scenes classified as graphic under its policies. Any video placed behind an age check will be inaccessible to viewers under 18 and to anybody who's not signed in to their Google account. YouTube didn't specify the duration that would get a video restricted, but a spokesperson told The Verge that "certain content may be age-restricted if it’s non-fleeting or zoomed in." Creators can avoid the restriction, however, by blurring any violent scenes. 
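YouTube hasn't published exact numbers, but the cumulative-duration rule it describes for compilations can be sketched in a few lines. This is a purely illustrative model, not YouTube's implementation; the 10-second threshold is an invented placeholder, since the company hasn't disclosed one.

```python
GRAPHIC_SECONDS_THRESHOLD = 10  # hypothetical value; YouTube hasn't published a real one

def should_age_restrict(scenes):
    """Decide whether a video gets an age check under the described rule.

    scenes: list of (duration_seconds, is_graphic) tuples for one video.
    For compilations, the durations of all graphic scenes are summed,
    mirroring YouTube's stated cumulative-duration approach.
    """
    graphic_total = sum(duration for duration, is_graphic in scenes if is_graphic)
    return graphic_total >= GRAPHIC_SECONDS_THRESHOLD

# A compilation whose graphic scenes total 12 seconds would be restricted,
# while a single fleeting 2-second scene would not.
print(should_age_restrict([(5, True), (4, True), (3, True), (30, False)]))
print(should_age_restrict([(2, True), (60, False)]))
```

Prominence (e.g., whether a scene is zoomed in) would be a second input in practice, but YouTube hasn't described how it weighs that factor.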

In addition, YouTube is implementing stricter online gambling rules. It already prohibits videos directing people to online gambling sites or apps not certified by Google. Starting on November 17, it will also prohibit online gambling videos that involve items with monetary value, including digital goods like NFTs and game skins. The website is also age-restricting content with online casino-style games, even if they don't involve items with real monetary value. 

YouTube will review old videos and remove them or put them behind age checks if they're found to violate the new rules, but it won't issue strikes for videos uploaded before November 17. Creators can also edit their videos before that date with the site's trim and blur editing tools. 

This article originally appeared on Engadget at https://www.engadget.com/entertainment/youtube/youtube-will-strengthen-enforcement-around-violent-and-gambling-games-in-november-123051469.html?src=rss

FTC expands rules to hold tech support scammers accountable

The Federal Trade Commission (FTC) can now go after scammers posing as tech support providers even if it's the consumer who called them up. It has just approved amendments to its Telemarketing Sales Rule that expand its coverage to include "inbound" calls to companies pitching "technical support services through advertisements or direct mail solicitations." Samuel Levine, Director of the FTC's Bureau of Consumer Protection, explained that the new rule will allow the agency to hold these scammy businesses accountable and get money back for victims. 

"The Commission will not sit idle as older consumers continue to report tech support scams as a leading driver of fraud losses," Levine said, noting that the rule's expansion would mostly help protect consumers 60 years and older. According to the agency, older adults reported losing $175 million to tech support scams in 2023 and were five times more likely than younger consumers to fall for them. 

Tech support scams typically trick potential victims into calling by sending emails or triggering pop-up alerts claiming that their computer has been infected with malware. Scammers then ask their targets to pay for the supposed services by wiring money, loading gift or prepaid cards or sending cryptocurrency, because those methods are hard to trace and reverse. They've long been a problem in the US — the agency shut down two massive Florida-based telemarketing operations that had scammed victims out of a combined $120 million back in 2014 — but the issue has been growing worse over time. The $175 million victims reported losing in 2023 was 10 percent higher than the reported losses to tech support scams in 2022. 

As the FTC notes, the Telemarketing Sales Rule has been updated several times since the year 2000 before this latest amendment. The first amendment in 2003 led to the creation of the Do Not Call Registry for telemarketers, while subsequent changes were made to cover pre-recorded telemarketing calls and debt collection services.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/ftc-expands-rules-to-hold-tech-support-scammers-accountable-143051612.html?src=rss

Black Friday Apple deals include the 10th-gen iPad for a record-low price

Apple's Black Friday deals have started popping up, and this is your chance to grab a new iPad at a discount if you've been thinking of getting one. The 10th-gen iPad is currently on sale for $279 at Amazon, $50 less than its usual price. A few color options have an additional coupon that brings the final price down to $250. Apple released the tablet back in 2022, but it's still our pick for the best budget iPad of 2024.

The 10th-gen iPad is only slightly thicker and heavier than the iPad Air, and it looks similar, too: the tablet drops the Home button its predecessor had and sports a bigger screen with smaller bezels.

The device is powered by Apple's A14 Bionic chip, which debuted in the iPhone 12 and is powerful enough that we could edit RAW photos in Lightroom when we tested the tablet. In our battery test, the model played movies continuously for 11 hours and 45 minutes on a single charge. 

Unlike previous models with Lightning ports, this one comes with a USB-C port for charging. Apple moved its front-facing camera to its landscape edge, as well. The company gave it a larger display, measuring 10.9 inches, so it doesn't feel as cramped as previous models even with a lot of apps. While the iPad Air does have a better display overall with its lamination and anti-reflective coating, the 10th-gen iPad's isn't bad at all seeing as it costs significantly less, especially with this discount. 

Check out all of the latest Black Friday and Cyber Monday deals here.

This article originally appeared on Engadget at https://www.engadget.com/deals/black-friday-apple-deals-include-the-10th-gen-ipad-for-a-record-low-price-130005592.html?src=rss

Snap calls New Mexico’s child safety complaint a ‘sensationalist lawsuit’

In a filing asking the court to dismiss the state's lawsuit, Snap has accused New Mexico's attorney general of intentionally looking for adult users seeking sexually explicit content in order to make its app seem unsafe. In the document shared by The Verge, the company questioned the veracity of the state's allegations. The attorney general's office said that while it was using a decoy account posing as a 14-year-old girl, it was added by a user named Enzo (Nud15Ans). From that connection, the app allegedly suggested over 91 users, including adults looking for sexual content. Snap said in its motion to dismiss, however, that those "allegations are patently false."

It was the decoy account that searched for and added Enzo, the company wrote. The attorney general's operatives were also the ones who looked for and added accounts with questionable usernames, such as "nudenude_22" and "xxx_tradehot." In addition, Snap accused the office of "repeatedly [mischaracterizing]" its internal documents. The office apparently cited one such document when it claimed in its lawsuit that the company "consciously decided not to store child sex abuse images" and when it suggested that Snap doesn't report or provide those images to law enforcement. Snap denied that this was the case and clarified that it isn't allowed to store child sexual abuse material (CSAM) on its servers. It also said that it turns such material over to the National Center for Missing and Exploited Children.

The New Mexico Department of Justice's director of communications was not impressed with the company's arguments. In a statement sent to The Verge, Lauren Rodriguez accused Snap of focusing on the minor details of the investigation in an "attempt to distract from the serious issues raised in the State’s case." Rodriguez also said that "Snap continues to put profits over protecting children" instead of "addressing... critical issues with real change to their algorithms and design features."

New Mexico came to the conclusion that Snapchat's features "foster the sharing of child sexual abuse material (CSAM) and facilitate child sexual exploitation" after a months-long investigation. It reported that it found a "vast network of dark web sites dedicated to sharing stolen, non-consensual sexual images from Snap" and that Snapchat was "by far" the biggest source of images and videos on the dark web sites that it had seen. The attorney general's office called Snapchat "a breeding ground for predators to collect sexually explicit images of children and to find, groom and extort them." Snap employees encounter 10,000 sextortion cases each month, the office's lawsuit said, but the company allegedly doesn't warn users so as not to "strike fear" among them. The complaint accused Snap's upper management of ignoring former trust and safety employees who'd pushed for additional safety mechanisms, as well.

This article originally appeared on Engadget at https://www.engadget.com/apps/snap-calls-new-mexicos-child-safety-complaint-a-sensationalist-lawsuit-140034898.html?src=rss

Neuralink gets approval to start human trials in Canada

The first Neuralink clinical trials outside the US will take place in Canada. Neuralink has secured Health Canada's approval to launch human trials in the country, with Toronto Western Hospital serving as the "first and exclusive surgical site" for the procedure. The company first opened its Canadian patient registry in March this year, but now it's actively looking for potential participants. "Recruitment is now open," it announced on X. 

Under the CAN-PRIME study, Neuralink will embed its implant in the brain of the participant so that it can interpret their neural activity. The implant will allow them to control a computer or a smartphone with their brain without the need for wires or any kind of physical movement. Neuralink says the study aims to "evaluate the safety of [its] implant and surgical robot and assess the initial functionality of [its Brain Computer Interface] for enabling people with quadriplegia to control external devices with their thoughts." What it learns from the trials could help the company find safer ways to place the implant inside the brain, as well as to enhance the technology's capabilities. 

Neuralink's first human patient (pictured above) received his implant earlier this year. He experienced some issues when the implant's threads retracted from his brain, though he seems to be doing well these days. On X, he said that he will soon challenge himself to use Neuralink for 72 hours to demonstrate what the technology can do. For its second patient, Neuralink employed mitigation measures to prevent thread retraction, and that patient was already using computer-aided design (CAD) software mere weeks after his surgery in July. At the moment, Neuralink is specifically looking for patients who "have limited or no ability to use both hands due to cervical spinal cord injury or amyotrophic lateral sclerosis (ALS)" for its trials in Canada. 

This article originally appeared on Engadget at https://www.engadget.com/science/neuralink-gets-approval-to-start-human-trials-in-canada-143021769.html?src=rss

Itch.io marketplace now requires asset creators to disclose their use of generative AI

Creators who sell assets on itch.io will now have to be a lot more upfront about using generative AI. The marketplace for independent digital creators has introduced a new rule that requires users to label their projects if they were produced using generative AI tools, such as ChatGPT and Midjourney. Users will see an AI generation disclosure box when they upload their projects. If they confirm that their project contains AI-generated output, they'll be required to indicate what kinds of content were made with generative AI, whether they're graphics, sounds, text and dialogue or code. 

If they have a public asset page, they'll see a dialog box when they access their dashboard, making it easy to bulk tag their projects. They'll be able to select multiple projects from their list and then indicate whether they contain AI-generated content or not. All assets with AI output will get the "AI Generated" tag, while those without will be tagged as "No AI." Each content type will have its own sub-tag, as well. 

itch.io requires all assets that used AI in any way, even if the creator hand-edited the output, to be tagged as AI Generated. And if it finds any untagged work that used an artificial intelligence tool, it will make that asset ineligible for indexing so that potential buyers can no longer find it. However, it's unclear what measures the marketplace is taking to police its website. While itch.io's new policy may not be enough for those who'd rather ban AI content altogether, the tags will allow buyers who don't want AI assets in their work to filter them out. 
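The tagging rules described above amount to a simple decision scheme, which can be modeled in a few lines. This is an illustrative sketch only: the "AI Generated" and "No AI" labels come from itch.io's announcement, but the sub-tag naming scheme and function names here are invented for the example.

```python
def asset_tags(uses_ai, ai_content_types=()):
    """Return the marketplace tags for an asset under the described policy.

    ai_content_types: which kinds of content were AI-generated,
    e.g. {"graphics", "sounds", "text", "code"}. Sub-tag names are
    hypothetical; only "AI Generated" and "No AI" come from itch.io.
    """
    if not uses_ai:
        return ["No AI"]
    return ["AI Generated"] + sorted(f"AI-Generated-{kind}" for kind in ai_content_types)

def is_indexed(uses_ai, tagged):
    """Untagged AI work becomes ineligible for indexing; everything else stays findable."""
    return tagged or not uses_ai

print(asset_tags(True, {"graphics", "code"}))
print(asset_tags(False))
print(is_indexed(True, False))  # untagged AI asset: delisted
```

Note that the hand-editing caveat means `uses_ai` is true whenever any generative tool touched the asset, regardless of how much the creator reworked the output afterward.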

This article originally appeared on Engadget at https://www.engadget.com/ai/itchio-marketplace-now-requires-asset-creators-to-disclose-their-use-of-generative-ai-130031999.html?src=rss