More electronic devices reportedly exploded in Lebanon a day after coordinated pager attack

An attack in Lebanon reportedly killed eight people and injured over 2,700. Hundreds of pagers belonging to Hezbollah members detonated simultaneously on Tuesday, leading the Iran-backed militant organization to blame Israel. The New York Times reported that Israel was behind the attack and carried it out by hiding explosive material inside the pagers. A second wave of explosions, this time targeting handheld radios used by Hezbollah members, was reported on Wednesday by The Washington Post.

A day after Israeli leaders warned that they could escalate the country’s military campaign against Hezbollah, pagers belonging to the Lebanese group’s members exploded at once. Witnesses reported seeing smoke emanating from the victims’ pockets, followed by sounds reminiscent of fireworks or gunshots.

Lebanon’s health minister said 200 of the injured were in critical condition, adding that many victims had facial injuries, especially to the eyes; hand and stomach injuries were also common. Among those wounded was Mojtaba Amani, Iran’s ambassador to Lebanon, according to Iranian state media.

A second wave of attacks across different areas of Lebanon on Wednesday reportedly killed one person and injured over 100 others. The latest blasts reportedly targeted “wireless devices.” One of the explosions, triggered by a handheld radio, occurred at a funeral for four victims of Tuesday’s attacks. “Anyone who has a device, take out the battery now!” Hezbollah security members yelled at the mourners, according to The Washington Post. “Turn off your phones, switch it to airplane mode!”

Israel hasn’t commented on the attacks. But NYT reports that officials (including American ones) briefed on the operation said Israel was behind them. They claim as little as one to two ounces of explosive material was planted next to each pager’s battery, along with a switch allowing for remote detonation. At 3PM in Lebanon on Tuesday, the pagers received a message (appearing to be from Hezbollah leadership) that triggered the coordinated explosions, according to officials. The devices allegedly beeped for several seconds before detonating.

The Washington Post reports that the logo of Taiwanese pager maker Gold Apollo was seen on the sabotaged pagers. However, Gold Apollo claimed the devices were “entirely handled” by a Hungarian company, BAC Consulting Kft, which was authorized to use Gold Apollo’s branding in some regions. “That product isn’t ours,” Gold Apollo’s founder and president, Hsu Ching-Kuang, told The New York Times. “They just stick on our company brand.”

Officials speaking with NYT claimed the devices were tampered with before reaching Lebanon. Most were Gold Apollo’s AR924 model, which the company had displayed on its website before removing the image on Wednesday.

The attacks sparked widespread fear of using mobile devices. NYT reports some in Lebanon were scared to use their phones after Tuesday’s attacks, with one resident crying out, “Please hang up, hang up!” to a caller.

The Times reports that Hezbollah, long suspicious of cellphone use near the Israeli border due to the devices’ geolocation capabilities, recently switched from mobile phones to pagers. In February, Hezbollah chief Hassan Nasrallah reportedly warned the group that their phones were dangerous and could be used by Israel as spy tools. He advised the group that they should “break or bury them.”

Experts reportedly don’t yet know precisely how the pagers were distributed to Hezbollah’s members. They say that Iran, given its history of supplying Hezbollah with arms, tech and other military aid, would have been pivotal to their adoption and delivery.

Update, September 18, 2024, 11:48AM ET: This story has been updated to add new details about Tuesday’s attacks and the second wave of reported blasts on Wednesday.

This article originally appeared on Engadget at https://www.engadget.com/mobile/pagers-explode-simultaneously-in-hundreds-of-hezbollah-members-pockets-190304565.html?src=rss

California passes landmark law requiring actors’ permission for AI likenesses

California has given the go-ahead to a landmark AI bill to protect performers' digital likenesses. On Tuesday, Governor Gavin Newsom signed Assembly Bill 2602, which will go into effect on January 1, 2025. The bill requires studios and other employers to get consent before using “digital replicas” of performers. Newsom also signed AB 1836, which grants similar rights to deceased performers, requiring their estate’s permission before using their AI likenesses.

AB 2602, introduced in April, covers film, TV, video games, commercials, audiobooks and non-union performing jobs. Deadline notes its terms are similar to those in the contract that ended the 2023 actors’ strike against Hollywood studios. SAG-AFTRA, the film and TV actors’ union that held out for last year’s deal, strongly supported the bill. The Motion Picture Association first opposed the legislation but later switched to a neutral stance after revisions.

The bill bars employers from using an AI recreation of an actor’s voice or likeness if it replaces work the performer could have done in person. It also prohibits digital replicas when the actor’s contract doesn’t explicitly state how the deepfake will be used, and voids any such deals signed when the performer didn’t have legal or union representation.

The bill defines a digital replica as a “computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual that is embodied in a sound recording, image, audiovisual work, or transmission in which the actual individual either did not actually perform or appear, or the actual individual did perform or appear, but the fundamental character of the performance or appearance has been materially altered.”

Meanwhile, AB 1836 expands California’s postmortem right of publicity. Hollywood must now get permission from a decedent's estate before using their digital replicas. Deadline notes that exceptions were included for “satire, comment, criticism and parody, and for certain documentary, biographical or historical projects.”

“The bill, which protects not only SAG-AFTRA performers but all performers, is a huge step forward,” SAG-AFTRA chief negotiator Duncan Crabtree-Ireland told The LA Times in late August. “Voice and likeness rights, in an age of digital replication, must have strong guardrails around licensing to protect from abuse, this bill provides those guardrails.”

AB 2602 passed the California State Senate on August 27 with a 37-1 tally. (The lone “no” vote came from State Senator Brian Dahle, a Republican.) The bill then returned to the Assembly (which passed an earlier version in May) to formalize revisions made during Senate negotiations.

On Tuesday, SAG-AFTRA President Fran Drescher celebrated the passage, which the union fought for. “It is a momentous day for SAG-AFTRA members and everyone else, because the A.I. protections we fought so hard for last year are now expanded upon by California law thanks to the Legislature and Gov. Gavin Newsom,” Drescher said. 

This article originally appeared on Engadget at https://www.engadget.com/ai/california-passes-landmark-regulation-to-require-permission-from-actors-for-ai-deepfakes-174234452.html?src=rss

Former MoviePass CEO reportedly pleads guilty to securities fraud

Mitch Lowe, one of two MoviePass leaders indicted by the Justice Department in 2022, has pleaded guilty to securities fraud charges. The former CEO admitted to conspiring to deceive the public and investors about the service’s sustainability. Variety reports that the details of Lowe’s plea agreement haven’t been made public.

Prosecutors claim Lowe knew from the start that the company’s $9.95 “unlimited” plan was a short-term gimmick to attract subscribers and inflate stock. He’s also accused of making false statements in press releases, interviews and SEC filings about MoviePass’ long-term viability.

Those alleged falsehoods included claims about the company’s ability to become profitable on subscription fees alone and about tech that could generate revenue from customer data. Lowe also said MoviePass was profiting from multiple revenue streams despite having no income beyond subscriptions.

Prosecutors also accused Lowe and Ted Farnsworth, former CEO of MoviePass’ parent company Helios and Matheson, of preventing subscribers from getting what was promised from the “unlimited” subscription. The company settled with the FTC in 2021 over allegations that it intentionally invalidated subscriber passwords to freeze their accounts, blocking their ability to get the movie tickets the service promised. MoviePass and its parent company declared bankruptcy in 2020.

Although no sentencing date has been set, Lowe is free on bond and has a status conference court date scheduled in Miami for March 2025. The 72-year-old former executive faces a maximum of five years in federal prison.

“Mitch is a good man who is looking to move forward with his life,” Lowe’s attorneys, Margot Moss and David Oscar Markus, said in a statement to Variety. “He has accepted responsibility for his actions in this case and will continue to try to make things right.”

Meanwhile, Farnsworth is still in custody. He was initially freed on a $1 million bond that was revoked in August 2023 after the feds accused him of misusing nearly $300,000 in company funds. Farnsworth’s former boyfriend, whom he met on an escort site, was paid $147,000 and received a Cadillac worth $144,000; after the pair split up, the feds say Farnsworth falsely accused the ex of stealing the vehicle.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/former-moviepass-ceo-reportedly-pleads-guilty-to-securities-fraud-201131284.html?src=rss

watchOS 11 is out now, with new Sleep Apnea feature

Over three months after Apple introduced it at WWDC 2024, watchOS 11 is officially here. The 2024 Apple Watch update, which adds the new Vitals app, widget improvements and sleep apnea detection, is now available to install on your smartwatch.

Apple’s sleep apnea detection feature, which the company highlighted in its Apple Watch Series 10 reveal, will also work with a couple of year-old models. If you own the Apple Watch Series 9 or Apple Watch Ultra 2, you can try the feature before the new model makes it into customers’ hands later this week. Sleep apnea detection will send you an alert if the watch’s sensors detect overnight breathing disturbances. The health feature, similar to one Samsung included with the Galaxy Watch 7 earlier this year, received FDA approval last week.

watchOS 11 also introduces a new Vitals app, further beefing up Apple’s health tracking on its wearable. For those who wear their Apple Watch to bed for sleep tracking (and a handy alarm in the morning), Vitals collects your overnight data in one place. The app establishes baselines for your health metrics and lets you know if any fall outside your typical range, which can be handy for spotting irregularities like an oncoming illness or tracking the effects of alcohol use.

Similarly, the new Training Load feature measures the intensity of your workouts over time. After establishing an intensity baseline over 28 days, it shows how hard you’re pushing yourself in your workouts — comparing it with your standard averages. At launch, it supports 17 workout types, including walks, runs, cycling, rowing, swings and more. You’ll find your Training Load in the Activity app on your Apple Watch and the Fitness app on your iPhone.
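Apple hasn’t published how Training Load is calculated, but comparing recent strain against a longer baseline is a standard sports-science approach. Here’s a minimal Python sketch of that acute-versus-chronic comparison; the scores and the seven-day window are assumptions for illustration, not Apple’s formula:

```python
from statistics import mean

def training_load_ratio(daily_effort: list[float]) -> float:
    """daily_effort: one strain score per day, most recent last (28+ days)."""
    chronic = mean(daily_effort[-28:])  # the 28-day baseline
    acute = mean(daily_effort[-7:])     # the past week of workouts
    return acute / chronic              # > 1.0 means training above baseline

scores = [50] * 21 + [70] * 7           # a week of harder workouts
print(f"{training_load_ratio(scores):.2f}")  # 1.27: pushing above baseline
```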

[Image: Grid showing various features for watchOS 11. Credit: Apple]

Apple added a long-requested feature this year: the ability to pause and customize Activity ring goals. It hardly makes sense to keep pushing yourself (at your watch’s prodding) if you’re sick or need rest. The wearable now lets you take a break for a day, week, month or more without losing your award streaks. In addition, you can set different Activity ring goals for each day of the week and customize the data you care about most in the iOS 18 Fitness app.

The Apple Watch’s Smart Stack (the pile of widgets you see when you scroll down from your watch face) now surfaces widgets automatically based on context (rain alerts, for example). In addition, Live Activities, which arrived on the iPhone two years ago, comes to the Apple Watch in the new update. You’ll find Live Activities for things like sports scores you track or an arriving Uber in the watchOS 11 Smart Stack.

Check In is a new feature that lets you notify a friend when you reach your destination. You can begin a Check In from the watchOS Messages app by tapping the plus button next to the text field, choosing Check In and entering where you’re going and when you expect to arrive. Similarly, when exercising, you can start a Check In from the Workout app: Swipe right from the workout screen and choose Check In from the controls. You can then pick a contact to share your workout status with.

Other features include new pregnancy tracking in the Cycles app and a Double Tap API that lets third-party developers incorporate hands-free controls.

To download watchOS 11, you’ll first need to install iOS 18 on your paired iPhone. After that, open the Watch app on your phone, then head to General > Software Update. It should then prompt you to update to the 2024 software.

This article originally appeared on Engadget at https://www.engadget.com/wearables/watchos-11-is-out-now-with-new-sleep-apnea-feature-182103629.html?src=rss

NASA confirms it’s developing the Moon’s new time zone

NASA confirmed on Friday that it’s developing a new time standard for the Moon. The White House published a policy memo in April directing NASA to create the new standard by 2026. Over five months later (government time, y’all), the space agency’s confirmation states it will work with “U.S. government stakeholders, partners, and international standards organizations” to establish Coordinated Lunar Time (LTC).

To understand why the Moon needs its own time standard, look no further than Einstein. His theories of relativity say that time changes relative to speed and gravity, so time moves slightly faster on our celestial neighbor thanks to its weaker gravity. An Earth clock on the Moon would gain about 56 microseconds a day — enough to throw off calculations that could put future missions requiring precision in danger.

“For something traveling at the speed of light, 56 microseconds is enough time to travel the distance of approximately 168 football fields,” said Cheryl Gramling, NASA timing and standards leader, in a press release. “If someone is orbiting the Moon, an observer on Earth who isn’t compensating for the effects of relativity over a day would think that the orbiting astronaut is approximately 168 football fields away from where the astronaut really is.”
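NASA’s arithmetic is easy to check. A minimal Python sketch, assuming a “football field” here means roughly 100 meters (the round number the agency’s figure implies):

```python
# How far light travels in the Moon clock's daily drift
C = 299_792_458       # speed of light, m/s
drift = 56e-6         # ~56 microseconds of drift per Earth day
error_m = C * drift   # ~16,788 meters of position error per day
print(error_m / 100)  # ~168 fields of ~100 m each, matching NASA's figure
```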

[Image: Buzz Aldrin in his astronaut suit on the Moon’s surface. Credit: NASA]

April’s White House memo directed NASA to work with the Departments of Commerce, Defense, State and Transportation to plot the course for LTC’s introduction by the end of 2026. Global stakeholders, particularly Artemis Accords signees, will play a role. Established in 2020, the accords now count 43 signatory countries committed to norms expected to be honored in space. Notably, China and Russia have refused to join.

NASA’s Space Communication and Navigation (SCaN) program will lead the initiative. One of LTC’s goals is to be scalable to other celestial bodies in the future, including Mars. The time standard will be determined by a weighted average of atomic clocks on the Moon, although their locations are still up for debate. Such a weighted average is similar to how scientists calculate Earth’s Coordinated Universal Time (UTC).
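To illustrate the weighted-average idea, here’s a minimal Python sketch. The readings and weights below are invented for the example; a real scheme, like UTC’s, would weight each atomic clock by its measured stability:

```python
# Combine several atomic clock readings into one timescale,
# weighting more stable clocks more heavily (all values invented).
readings = [1.000000012, 0.999999998, 1.000000005]  # seconds since an epoch tick
weights = [0.5, 0.3, 0.2]                           # stability-based weights
ltc = sum(r * w for r, w in zip(readings, weights)) / sum(weights)
print(f"{ltc:.9f}")                                  # the blended "paper clock" time
```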

NASA plans to send crewed missions back to the Moon through its Artemis program. Artemis 2, scheduled for September 2025, plans to send four people on a pass around the Moon. A year later, Artemis 3 aims to land astronauts near the Moon’s South Pole.

This article originally appeared on Engadget at https://www.engadget.com/science/space/nasa-confirms-its-developing-the-moons-new-time-zone-165345568.html?src=rss

The LCD Steam Deck is up to 25 percent off right now

Valve has big savings on the entry-level (LCD) Steam Deck. You can take 25 percent off the 512GB model or 15 percent off the 64GB one, dropping their prices to $336.75 and $296.65, respectively. The sale runs through September 26 while supplies last for the two phased-out models.

The 512GB (NVMe SSD) LCD Steam Deck initially cost $449, so the sale shaves over $112 off its MSRP. For some perspective, the OLED version with the same storage costs $549. This deal on the (lower-grade but still high-quality) LCD variant is a terrific chance to get started with handheld PC gaming on the cheap.

The LCD Steam Deck has a seven-inch display (1,280 x 800) with a 60Hz refresh rate and 400 nits of brightness. The 512GB model adds anti-glare etched glass that the 64GB lacks, so — if both fall within your budget — the former is a no-brainer upgrade at only $40 extra.

Both variants have 40Wh batteries with a theoretical eight hours of uptime, but our tests found they averaged around 4.5 hours with regular use. (If needed, you can squeeze more out by lowering brightness and refresh rates.) Each model includes a standard carrying case.

In Engadget’s 2023 re-review of the LCD Steam Deck, Jessica Conditt concluded the entry-level model offers “a fantastic return on investment,” even compared to its premium OLED sibling. Most mainstream games that launch today are classified as either Verified or Playable on Steam Deck, and most gamepad-friendly games will fare well with the handheld.

The bottom line: Although the OLED model is worth the upgrade if it fits your budget, this LCD model — especially when cut by up to 25 percent — is still a fantastic entry-level handheld gaming device that offers only a slightly compromised experience compared to the (much more expensive) high-end one.


This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/the-lcd-steam-deck-is-up-to-25-percent-off-right-now-194740400.html?src=rss

The FDA greenlights Apple’s Hearing Aid feature for AirPods Pro

The Food and Drug Administration has approved Apple’s over-the-counter Hearing Aid feature. Designed for people with mild to moderate hearing loss, it transforms the second-gen AirPods Pro into OTC hearing aids. This follows the FDA’s 2022 decision to allow adults with less-than-severe impairment to use corrective consumer hearing devices without a professional test, prescription or fitting.

The FDA says people who self-fit the wireless earbuds using Apple’s software-based Hearing Test feature saw benefits similar to those who received a professional fitting. “Results also showed comparable performance for tests measuring levels of amplification in the ear canal, as well as a measure of speech understanding in noise,” the FDA wrote in its announcement. The agency adds that it didn’t observe any “adverse events” from using the device as an OTC hearing aid.

Apple’s Hearing Aid feature, coming in iOS 18, starts with a hearing test on your paired iPhone or iPad. The test begins by ensuring your earbuds have a good seal. After that, it activates active noise cancellation (ANC) and asks you to tap the screen when you hear tones in each ear.

Once you finish, your results will live in the iOS Health app, where you can see how your results change (or not) over time. You can download your results and give them to an audiologist anytime. (If the test determines you have severe hearing loss, it will recommend you seek a professional assessment since the AirPods feature is only approved for those with mild to moderate impairment.)
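Apple hasn’t published the cutoffs it uses, but here’s a minimal Python sketch of the kind of severity routing described above, using common audiology bands; the thresholds are assumptions for illustration, not Apple’s:

```python
from statistics import mean

def grade_hearing(thresholds_db: dict[int, float]) -> str:
    """Grade a pure-tone average over the standard 500/1000/2000/4000 Hz bands."""
    pta = mean(thresholds_db[f] for f in (500, 1000, 2000, 4000))
    if pta <= 25:
        return "no significant loss"
    if pta <= 40:
        return "mild (eligible for the Hearing Aid feature)"
    if pta <= 55:
        return "moderate (eligible for the Hearing Aid feature)"
    return "severe: seek a professional assessment"

print(grade_hearing({500: 35, 1000: 40, 2000: 50, 4000: 55}))  # moderate
```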

Engadget’s Billy Steele got an early preview of the feature after Apple’s big iPhone 16 event earlier this week. “It seems to be as quick and easy as Apple describes,” our audio expert wrote. Although the demo was a simulation, it covered each step of the process, adding up to only about five minutes.

Apple developed the feature using 150,000 real-world audiograms and millions of simulations. The company’s FDA application was reviewed under the agency’s De Novo premarket pathway, which offers a route to market for novel devices that don’t pose serious risk.

Apple’s Hearing Aid and Hearing Test features will arrive no earlier than the public launch of iOS 18 on September 16. The second-gen AirPods Pro are required to use the feature.

This article originally appeared on Engadget at https://www.engadget.com/audio/headphones/the-fda-greenlights-apples-hearing-aid-feature-for-airpods-pro-164912484.html?src=rss

Disney+ Basic is only $6 for three months in this limited-time deal

You can save big on a Disney+ subscription if you can live with some ads. New and returning customers can get a Disney+ Basic (with ads) subscription for $2 monthly for three months. That tier is currently $8 per month and includes all Disney+ content.

A Disney+ Basic (with ads) subscription unlocks all of the platform’s series and movies. That includes Disney-branded originals, Pixar, Star Wars, Marvel, The Simpsons and National Geographic (among others). It even has Taylor Swift: The Eras Tour (Taylor’s Version) for the Swiftie in your life.

As for the fine print, the subscription will auto-renew at the full price after three months unless you cancel first. And that tier is about to get more expensive, increasing to $10 monthly starting on October 17. So, set a reminder to cancel if you only want to plow through Andor, the WandaVision spinoff Agatha All Along or The Acolyte’s first and only season before your three cheap months run out.

The deal is only available to those 18 or older and expires on September 27. If $2 streaming tickles your fancy, head to the Disney+ website to sign up or reactivate your subscription.


This article originally appeared on Engadget at https://www.engadget.com/deals/disney-basic-is-only-6-for-three-months-in-this-limited-time-deal-070055707.html?src=rss

Nevada will use Google AI to process a backlog of unemployment cases

Nevada has a new helper in its quest to plow through a backlog of unemployment claims: Google AI. Gizmodo reports that the initiative will task one of the company’s cloud-based AI models with analyzing appeals hearing transcripts and suggesting whether cases should be approved. Welcome to the future, where a robot weighs in on whether you get the government money you requested.

The Nevada Independent wrote in June that the AI model, trained on the state’s unemployment law and policies, will analyze transcripts of virtual appeals hearings. It will then spit out a ruling, which a state employee will review for mistakes and decide whether to honor.

It replaces the current Nevada Department of Employment, Training and Rehabilitation (DETR) process, in which a real-life human takes an average of three hours per case. Carl Stanfield, DETR’s IT administrator, told the Nevada Independent that Google’s AI (which runs on the company’s Vertex cloud platform) can produce a ruling within five minutes. “The time saving is pretty phenomenal,” Stanfield said.
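Neither the state nor Google has published the exact setup, but the workflow described maps onto a fairly standard Vertex AI call. A minimal sketch, assuming a Gemini model through Google’s Python SDK, with the project name, file and prompt below invented for illustration:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholder project and region; Nevada's actual configuration isn't public.
vertexai.init(project="my-unemployment-project", location="us-central1")
model = GenerativeModel("gemini-1.5-pro")

# Hypothetical transcript file from a virtual appeals hearing.
transcript = open("appeal_hearing_transcript.txt").read()

response = model.generate_content(
    "Summarize this unemployment appeal hearing and recommend whether the "
    "claim should be approved or denied, citing the relevant testimony:\n\n"
    + transcript
)
print(response.text)  # a state employee still reviews this recommendation
```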

It’s easy to understand why Nevada would be eager to lean on the emerging tech. As recently as June, the state reportedly had a backlog of over 10,000 unprocessed appeals, about 1,500 of which were left over from the pandemic. And if the tech’s reviews are accurate — or the human reviewers catch its mistakes — it could be an enormous timesaver.

However, there could be psychological pressure for the employees reviewing the cases to rubber-stamp the AI’s conclusions. “If a robot’s just handed you a recommendation and you just have to check a box and there’s pressure to clear out a backlog, that’s a little bit concerning,” Michele Evermore, a former deputy director for unemployment modernization policy at the Department of Labor, told Gizmodo.

Stanfield told Gizmodo that a governance committee will meet weekly while the state fine-tunes the model, and quarterly after it goes live, to monitor for hallucinations and bias. The stakes could be high for claimants, as the AI-powered system could affect their ability to appeal bogus decisions. “In cases that involve questions of fact, the district court cannot substitute its own judgment for the judgment of the appeal referee,” Elizabeth Carmona, a senior attorney with Nevada Legal Services, told Gizmodo. In other words, if the human reviewing the decision misses the AI’s mistakes, a court may not have the authority to overturn it.

One Nevada politician put it a bit more bluntly. “Are we out of our ever-loving minds?” Nevada State Senator Skip Daly (D-Reno) said to the Nevada Independent this summer. “I’m just dubious of the whole concept of overreliance on algorithms and computers. I hope that we are cautious about it, and think before we just say, ‘We got to be faster or better than the next guy.’”

This article originally appeared on Engadget at https://www.engadget.com/ai/nevada-will-use-google-ai-to-process-a-backlog-of-unemployment-cases-202718427.html?src=rss

Adobe previews AI video tools that arrive later this year

On Wednesday, Adobe unveiled Firefly AI video generation tools that will arrive in beta later this year. Like many things related to AI, the examples are equal parts mesmerizing and terrifying as the company slowly integrates tools built to automate much of the creative work its prized user base is paid for today. Echoing AI salesmanship found elsewhere in the tech industry, Adobe frames it all as supplementary tech that “helps take the tedium out of post-production.”

Adobe describes its new Firefly-powered text-to-video, Generative Extend (which will be available in Premiere Pro) and image-to-video AI tools as helping editors with tasks like “navigating gaps in footage, removing unwanted objects from a scene, smoothing jump cut transitions, and searching for the perfect b-roll.” The company says the tools will give video editors “more time to explore new creative ideas, the part of the job they love.” (To take Adobe at face value, you’d have to believe employers won’t simply increase their output demands from editors once the industry has fully adopted these AI tools. Or pay less. Or employ fewer people. But I digress.)

Firefly Text-to-Video lets you — you guessed it — create AI-generated videos from text prompts. But it also includes tools to control camera angle, motion and zoom. It can take a shot with gaps in its timeline and fill in the blanks. It can even use a still reference image and turn it into a convincing AI video. Adobe says its video models excel with “videos of the natural world,” helping to create establishing shots or b-roll on the fly without much of a budget.

For an example of how convincing the tech appears to be, check out Adobe’s examples in its promo video.

Although these are samples curated by a company trying to sell you on its products, their quality is undeniable. Detailed text prompts for an establishing shot of a fiery volcano, a dog chilling in a field of wildflowers or (demonstrating it can handle the fantastical as well) miniature wool monsters having a dance party produce just that. If these results are emblematic of the tools’ typical output (hardly a guarantee), then TV, film and commercial production will soon have some powerful shortcuts at its disposal — for better or worse.

Meanwhile, Adobe’s example of image-to-video begins with an uploaded galaxy image. A text prompt prods it to transform it into a video that zooms out from the star system to reveal the inside of a human eye. The company’s demo of Generative Extend shows a pair of people walking across a forest stream; an AI-generated segment fills in a gap in the footage. (It was convincing enough that I couldn’t tell which part of the output was AI-generated.)

[Image: Still from an Adobe video showing a text prompt creating a moody shot of a man on a rainy street. Credit: Adobe]

Reuters reports that the tool will only generate five-second clips, at least at first. To Adobe’s credit, it says its Firefly Video Model is designed to be commercially safe and only trains on content the company has permission to use. “We only train them on the Adobe Stock database of content that contains 400 million images, illustrations, and videos that are curated to not contain intellectual property, trademarks or recognizable characters,” Adobe’s VP of Generative AI, Alexandru Costin, told Reuters. The company also stressed that it never trains on users’ work. However, whether or not it puts its users out of work is another matter altogether.

Adobe says its new video models will be available in beta later this year. You can sign up for a waitlist to try them.

This article originally appeared on Engadget at https://www.engadget.com/ai/adobe-previews-ai-video-tools-that-arrive-later-this-year-172021715.html?src=rss