Anthropic will use AWS AI chips after $4 billion Amazon investment

Amazon is doubling its investment in Anthropic. The e-commerce giant will provide Anthropic with an additional $4 billion in funding on top of the $4 billion it committed last year. Although Amazon remains a minority investor, Anthropic has agreed to make Amazon Web Services (AWS) its “primary cloud and training partner.”

Before today’s announcement, The Information had reported that Amazon wanted to make any additional funding contingent on a commitment from Anthropic to use the company’s in-house AI chips instead of silicon from NVIDIA. It appears Amazon got its way, with both companies noting in separate press releases that Anthropic will use AWS Trainium and Inferentia chips to train future foundation models.

Additionally, Anthropic says it will collaborate with Amazon’s Annapurna Labs to develop future Trainium accelerators. “Through deep technical collaboration, we’re writing low-level kernels that allow us to directly interface with the Trainium silicon, and contributing to the AWS Neuron software stack to strengthen Trainium,” the company said. “Our engineers work closely with Annapurna’s chip design team to extract maximum computational efficiency from the hardware, which we plan to leverage to train our most advanced foundation models.”

According to another recent report, Anthropic expects to burn through more than $2.7 billion before the end of the year. Before today, the company had raised $9.7 billion. Either way, it has bought itself some much-needed runway as it looks to compete against OpenAI and other companies in the AI space.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-will-use-aws-ai-chips-after-4-billion-amazon-investment-222053145.html?src=rss

Black Friday deals include reMarkable 2 bundles for $89 off

If you’ve been eyeing the reMarkable 2 for a while, now is a great time to buy one. While the E Ink tablet itself isn’t on sale, reMarkable has discounted the two bundles it offers alongside it. Through December 2, you can save $89 on the Type Folio and Book Folio bundles. Both include reMarkable’s Marker Plus stylus, which has an eraser feature not found on the regular Marker; it’s also black instead of gray and four grams heavier. As for the two folios, the Type Folio is the one to buy if you need a keyboard.

The reMarkable 2 is easily the best E Ink tablet you can buy right now. It’s the top pick in our E Ink tablet guide, and for good reason. It boasts a tremendous reading and writing experience, with a responsive, low-latency display that offers the closest pen-and-paper experience among the tablets Engadget tested. 

The reMarkable 2 makes accessing your favorite books and files easy, too. It includes support for both PDFs and ePUBs, and you can link your Google Drive, Microsoft OneDrive and Dropbox to make transferring those files a cinch. Each new reMarkable 2 tablet also comes with a complimentary one-year subscription to Remarkable Connect, which is great for transferring any notes you write to your other devices.

One of the few downsides of the reMarkable 2 is its high price. Although reMarkable hasn’t directly discounted the tablet, a folio cover and the Marker Plus stylus are accessories most people will probably want anyway, so this Black Friday promotion still makes the device more accessible.

Check out all of the latest Black Friday and Cyber Monday deals here.

This article originally appeared on Engadget at https://www.engadget.com/mobile/tablets/black-friday-deals-include-remarkable-2-bundles-for-89-off-210003470.html?src=rss

Teach mode, Rabbit’s tool for automating R1 tasks, is now available to all users

When the Rabbit R1 arrived earlier this year, it was an unfinished product. Engadget’s own Devindra Hardawar called it “a toy that fails at almost everything.” Most of the features Rabbit promised, including its signature “large action model” (LAM), were either missing at launch or didn’t work as promised. Now, after more than 20 software updates since the spring, Rabbit is releasing its most substantial update yet. Starting today, every R1 user has beta access to teach mode, a feature that allows you to train Rabbit’s AI model to automate tasks for you on any website you can visit from your computer.

Rabbit CEO and founder Jesse Lyu gave me a demo of teach mode ahead of today’s announcement. The tool is accessible through the company’s Rabbithole hub, and features a relatively simple interface for programming automations. Once logged into your Rabbit account, you navigate to a website and input your credentials if they’re required to access the service you want to teach the R1 to use for you. Lyu was quick to note Rabbit won’t store any username or password you input; instead, the company saves the cookie from your teach mode session for the R1 to use later. In June, Rabbit had to move quickly to patch a security issue that could have led to a serious data breach.

Once you’ve named your automation and written a description for it, all you need to do is carry out the task you want to automate as you usually would. Rabbit’s software will translate each click and interaction into instructions the R1 can later carry out on its own. When Lyu demoed teach mode for me, he taught his R1 to tweet for him.

Once the software has had a chance to analyze a lesson, you can replay the automation before trying it out on your R1 to ensure it works properly. While it’s technically true that you don’t need any coding knowledge to use teach mode, approaching it from a programming perspective is likely to produce better results. That’s because you can annotate each step the software records as you demonstrate an automation. Those annotations are also useful for troubleshooting, as the video embedded above shows.

After you’ve tested your automation, it’s just a matter of asking your R1 to complete a query using teach mode. The resulting process isn’t exactly the polished experience I imagine most people have come to expect from their mobile devices. The R1 announces each step of a task, and it can take a few moments for the device to work its way through a query. According to Rabbit, part of that is by design. Early testers found it helpful for the R1 to state its progress.

I’ll be honest, it’s hard to escape the conclusion that some of the R1 automations Lyu showed me, while creative, don’t offer a more efficient way to do certain tasks than the apps people are already familiar with, a point he conceded when I said as much during our call.

“There are a lot of tasks that are not a single destination,” Lyu said. To that point, where he believes teach mode will be transformational is in interactions involving multiple platforms. Lyu gave an example of an R1 user who taught his device to order groceries. With some work, that person could use the R1’s camera to take photos of the shopping lists his wife produced, which the device would then use to order the family’s weekly groceries from their preferred stores. 

Another area where the R1 could provide a better experience than a dedicated app is in situations with competing standards, like the current state of smart home automation. Say you’re trying to get some HomeKit and Google Home devices to work together. You won’t need to wait for the Connectivity Standards Alliance to sort things out. With teach mode, the R1 will navigate that mess for you.

“You need to think about velocity,” Lyu tells me before laying out Rabbit’s end game with teach mode. For now, R1 users can freely add community lessons they find on Rabbithole to their devices. Lyu envisions a future where users will be able to sell their automations, with Rabbit taking a cut. Moreover, while teach mode is currently limited to navigating websites, Lyu suggests it will eventually learn to use more complex apps like Excel. At that point, Lyu contends Rabbit will be in a position to deliver an artificial general intelligence, one that will understand every piece of software ever made for humans.

Of course, questions remain. One major one is whether people will pay for community lessons if they could just as easily replicate an automation on their own. Here, Rabbit expects things to play out like they’ve done on existing app stores, with most people choosing to download apps they like instead of making their own. “For the future agent store, we anticipate a similar situation where any user could teach their own lesson if they want to, but most people will probably find lessons or agents created by other users that meet their needs very well,” the company told me in an email.

I also asked Rabbit if the company is preparing for the possibility that some platforms might block people from using teach mode to automate tasks on their R1. In the company’s view, bot detection systems like CAPTCHA will need to evolve to differentiate between “good agents” like those created by Rabbit users and malicious bots.

“When a user uses LAM to perform tasks on third-party platforms, they are logging into their own accounts with their own credentials, and paying those companies directly for those subscriptions or services,” the company added. “We are just providing a new platform for those transactions to happen, similar to you can play music on your phone and on your laptop... We do not see a conflict of interests here.”

I’m not so sure if things will play out as smoothly as Rabbit hopes, but what is clear is that the company is closer to the future Lyu promised at the start of the year — even if that future still feels years away and may be decided by another company. For now, Rabbit hopes R1 users embrace teach mode enthusiastically, as that will allow the software to improve more quickly. 

This article originally appeared on Engadget at https://www.engadget.com/ai/teach-mode-rabbits-tool-for-automating-r1-tasks-is-now-available-to-all-users-170036677.html?src=rss

Sonos Black Friday deals: Save up to $200 on speakers and soundbars

Sonos Black Friday sales have begun, kicking off one of the few times of the year you can actually save a decent amount on the company’s speakers, soundbars and other gear. This time around, you can get up to $200 off, with one of the highlights being $50 off the Sonos Era 100 — it’s down to a record low of $199.

If you’re in the market for a soundbar, consider the Sonos Arc. Now that the Arc Ultra is available, Sonos has discounted its previous flagship soundbar by $200 to $699. For something more affordable, the Beam 2 is currently $369, down from $499. Lastly, there’s the Era 300. Right now, you can buy the Dolby Atmos-compatible speaker for $359, down from its usual $449. All of these deals are being matched by Amazon, too.

More than a few of Sonos’ speakers, including the Era 100, have earned a lasting spot on Engadget’s list of the best smart speakers. If you care about music but still want a speaker with modern features, a Sonos system is the way to go. Not only do the company’s speakers sound great, but you also get access to things like AirPlay 2 that make it incredibly easy to play exactly the song you want to listen to in the moment.

You may have seen that Sonos bungled the release of the latest version of its companion app. That’s true, but the company has done a lot of work in recent months to fix its software. As a daily user, I can safely say the Sonos app is in much better shape now than it was in the spring. Other than their premium price, there’s not much Sonos products don’t do as well as or better than the competition. With the discounts the company is offering for Black Friday, its speakers come even more highly recommended.

Check out all of the latest Black Friday and Cyber Monday deals here.

This article originally appeared on Engadget at https://www.engadget.com/deals/sonos-black-friday-deals-save-up-to-200-on-speakers-and-soundbars-130038835.html?src=rss

The Echo Show 8 drops to a record low of $80 in this Amazon Black Friday deal

Black Friday has arrived, which means Amazon’s smart displays are back on sale and significantly discounted. To start, the Echo Show 8 is $70 off its regular $150 price. That’s the cheapest Amazon has sold the Show 8 for since the company’s Prime Day sales event in July when the device hit a record low price. Amazon has also discounted the more affordable Echo Show 5. Right now, it’s on sale for $50, down from $90.

Both the Echo Show 8 and Show 5 have been on Engadget’s best smart displays list for years. Of the two, the former is the better pick for most people. The 8-inch screen is just large enough to make the display easy to interact with, but not so big that it hogs space on your bedside table. The fact that the Show 8 adapts the size of its user interface to how far away you are is icing on the cake.

The Show 8 is also a great choice if you want a smart display for video calling. Not only does its 13-megapixel camera offer great image quality, but Amazon has also included a feature that automatically frames your face and follows your movements. As you can imagine, that’s useful if you want to move around while chatting with your friends and loved ones. When you’re not using the Show 8, there’s a physical camera cover to protect your privacy. I should also mention that the Show 8 is one of the better-sounding smart displays Engadget has tested, thanks to the inclusion of spatial audio and a room calibration feature.

As for the Echo Show 5, it’s a great option if space is limited on your desk or nightstand. It’s currently one of the smallest smart displays on the market. The inclusion of an ambient light sensor and a tap-to-snooze feature makes it a great smart alarm clock. It can also work as a sunrise clock if you don’t want to be jarred out of bed.

Either way, both the Show 8 and Show 5 are great smart displays, especially when you can get them on sale like they are now.

Check out all of the latest Black Friday and Cyber Monday deals here.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-echo-show-8-drops-to-a-record-low-of-80-in-this-amazon-black-friday-deal-150003009.html?src=rss

Sony’s A1 II features a dedicated AI processor and refined ergonomics

When the A1 arrived in 2021, it put the camera world on notice. In more than a few categories, Sony’s full-frame mirrorless camera outperformed rivals like the Canon R5, and it came with a lofty $6,500 price to match. After nearly four years, however, the A1 finds itself in an awkward position. Despite being Sony’s flagship, it is not the most complete camera in the company’s lineup, with the more recently released A7R V and A9 III each offering features not found on their flagship sibling. That’s changing today with the introduction of the A1 II, which retains the performance capabilities of its predecessor while borrowing quality-of-life improvements from the A7R V and A9 III.

To start, the A1 II features the same fully stacked 50.1-megapixel CMOS sensor found inside the A1. As before, Sony says photographers can expect 15 stops of dynamic range for stills. The company has once again paired that sensor with its Bionz XR image processing engine but added a dedicated AI processor to handle subject recognition and autofocus. As a result, the A1 II can still shoot at up to 30 frames per second using its electronic shutter, and the autofocus system once again offers 759 points, good enough for 92 percent coverage of the sensor.

The A1 II features a new four-axis tilting LCD screen.
Sony

However, Sony is promising substantial improvements in autofocus accuracy thanks to that dedicated AI processing unit. Specifically, the camera is 50 percent better at locking eye focus on birds and 30 percent better at eye autofocus on other animals and humans. Additionally, you won’t need to toggle between different subject-detection modes; the camera will handle that automatically. Sony’s pre-capture feature also offers a one-second buffer that can record up to 30 frames before you fully depress the shutter button.

That said, the most notable addition is the inclusion of Sony’s most powerful in-body image stabilization (IBIS) to date, with the A1 II offering an impressive 8.5 stops of stabilization. For context, that’s three additional stops of stabilization over the original A1.

When it comes to video, the A1 II is no slouch. It can capture 8K footage at up to 30 fps using the full readout of its sensor. It can also record 4K video at 120 fps and FHD footage at 240 fps for slow motion, with support for 10-bit 4:2:2 recording. If Super 35 is your thing, you have the option of 5.8K oversampling. In addition to Sony’s color profiles, the A1 II can store up to 16 user-generated LUTs, and the camera offers the company’s breathing compensation and auto stabilization features. Of the latter, Sony says you can get “gimbal-like” footage with only a slight crop.

Sony's new 28-70mm G Master lens features a constant f/2 aperture.
Sony

On the usability front, the A1 II borrows the deeper grip and control layout of the A9 III. Also carried over from the A9 III are the camera’s 3.2-inch four-axis LCD screen and 9.44-million dot OLED viewfinder with a 240Hz refresh rate. Moreover, the new camera includes Sony’s latest menu layout design. Oh, and the company plans to include two separate eyecups in the box. Nice. When it comes to connectivity, there’s a full-sized HDMI connection, USB-C and an upgraded Ethernet port that supports transfer speeds up to 2.5Gbps. For storage, the camera comes with two CFexpress Type A card slots that can also read and save to UHS-II SD cards.

Alongside the A1 II, Sony also announced a new 28-70mm G Master Lens with a constant f/2 aperture (pictured above). While not the lightest lens in Sony’s stable, it still weighs under a kilogram. Both the A1 II and the 28-70mm F2 G Master will arrive in December. They will cost $6,500 and $2,900, respectively.

This article originally appeared on Engadget at https://www.engadget.com/cameras/sonys-a1-ii-features-a-dedicated-ai-processor-and-refined-ergonomics-164840579.html?src=rss

I wish Blizzard loved Warcraft as much as I do

Blizzard's first real-time strategy games had a profound impact on me as a young immigrant to Canada in 1994 and ’95. Warcraft: Orcs & Humans and Warcraft II: Tides of Darkness helped me learn how to read and write in English, and formed the basis for some of my oldest friendships in a brand-new country. Suffice to say, I have a lot of love for these old RTS games — maybe more than Blizzard itself.

So you can imagine my excitement at remaster rumors for Warcraft II and its expansion, Beyond the Dark Portal. When Blizzard aired its Warcraft Direct last week, not only were those rumors confirmed, but it announced that the original Warcraft would receive the same treatment, and both would be sold alongside Warcraft III: Reforged (itself a remaster) as part of a new battle chest. Of course, I immediately booted up Battle.net and bought the bundle.

I was just as quickly disappointed. Where to start? The most obvious place is the new hand-drawn graphics. Some fans have accused Blizzard of using AI to upscale the art in Warcraft and Warcraft II. I don’t think that’s what happened here, but what is clear is that the new assets don’t live up to the company’s usual quality. 

The unit sprites are completely missing the charm of their original counterparts. They also don’t look properly proportioned, and many of them have new stilted animations. Additionally, the extensive use of black outlining makes everything look a bit too stark. At best, the remasters resemble poorly made mobile games.

Both games feature a toggle to switch between their original and remastered graphics seamlessly, but here again, Blizzard missed the mark. There’s a great YouTube video explaining the issue, but the short of it is that the company didn't accurately reproduce the “tall pixels” the original graphics were designed around, so every asset appears stretched horizontally.

Like every game from that era, Warcraft was designed to be played on a 4:3 CRT monitor. However, the original art assets were made to scale within a 320 x 200 frame, which has a 16:10 aspect ratio. As a result, UI elements and units look taller in the 1994 release than in the remaster. GOG correctly accounted for this when it rereleased Warcraft and Warcraft II in 2019, and there’s no reason Blizzard couldn’t do the same in 2024. Without these nods to the game’s original visuals, Warcraft: Remastered just doesn’t look right.
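The arithmetic behind the “tall pixels” problem is easy to verify yourself. Here’s a quick sketch (my own illustration, not code from Blizzard or GOG): a 320 x 200 frame shown on a 4:3 display implies pixels that are taller than they are wide, and compensating for that on modern square-pixel screens means rendering the art 20 percent taller.

```python
# Back-of-the-envelope check of the pixel aspect ratio for 320x200
# art displayed on a 4:3 CRT (illustrative only).
from fractions import Fraction

storage_aspect = Fraction(320, 200)  # frame as stored: 1.6 (16:10)
display_aspect = Fraction(4, 3)      # shape of the CRT screen

# Pixel aspect ratio: each pixel's width relative to its height.
par = display_aspect / storage_aspect
print(par)  # 5/6 -> pixels are 5 units wide for every 6 tall

# To show the art with square pixels, stretch the height instead:
corrected_height = int(200 / par)
print(corrected_height)  # 240 -> render 320x200 art at 320x240
```

That 320 x 240 target is exactly 4:3, which is why the GOG rereleases look correct while assets scaled 1:1 from the 320 x 200 frame appear squashed.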

What gameplay enhancements the remasters include are minimal, and while they’re all appreciated, Blizzard could and should have done more. In Warcraft, for instance, it’s now possible to select up to 12 units simultaneously, up from four, and bind buildings to hotkeys for more efficient macro play. Oh, and you can finally issue attack move commands, something you couldn’t do in the original release.

However, most features you’d expect from a modern RTS are notably missing. For example, neither game allows you to queue commands or tab between different types of units in a control group. If this sounds familiar, it’s because Blizzard took the same approach with StarCraft: Remastered. StarCraft: Brood War still had a sizable professional scene when Blizzard released its remaster, and touching that game’s balance or mechanics would have caused an outcry. By contrast, Warcraft II is essentially moribund, and would have greatly benefited from modernization. At the very least, Blizzard could have done a balance pass and added a ladder mode to give the game a chance to attract a new multiplayer fanbase.

Coming back from the dead is achievable for an old RTS. Age of Empires II managed to pull this trick off with flying colors: Since the release of its Definitive Edition in 2019, Microsoft’s genre-defining RTS has never been in a better place. A constant stream of support, including a substantial new expansion released just last week, has managed to grow the AoE2 community. At any time, there are as many as 30,000 people playing the Definitive Edition on Steam. If you ask me, that’s pretty great for a game originally released in 1999, and it shows what’s possible when a company cares for and nurtures a beloved franchise. The fact that Microsoft now owns Blizzard makes its treatment of Warcraft feel particularly unfair.

Most disappointing is the lack of bonus content. Contrast this with Half-Life 2’s free anniversary update, which Valve released just days after the Warcraft remasters. It includes three and a half hours of new commentary from Gabe Newell and the dev team. Valve also uploaded a two-hour documentary and announced a second edition of Raising the Bar, a behind-the-scenes look at Half-Life 2’s turbulent development. If Newell could take time away from his yachts to talk about Valve's most important game, surely Chris Metzen could have done the same for Warcraft. The people who were vital to Warcraft and Warcraft II’s development aren’t getting any younger — Blizzard should preserve their stories.

If there’s one thing I’m hopeful for, it’s that Blizzard will eventually do the right thing. As I mentioned, the bundle I bought also came with Warcraft III: Reforged. Last week it received a free patch that does a lot to fix the disastrous issues with that remaster, albeit four years late. With more work, I can see the Warcraft and Warcraft II remasters becoming essential. But as things stand, the studio has done the bare minimum to honor its own legacy.

This article originally appeared on Engadget at https://www.engadget.com/gaming/pc/i-wish-blizzard-loved-warcraft-as-much-as-i-do-141524674.html?src=rss

Google now offers a standalone Gemini app on iPhone

Google now offers a dedicated Gemini AI app on iPhone. First spotted by MacRumors, the free software is available to download in Australia, India, the US and the UK following a soft launch in the Philippines earlier this week.

Before today, iPhone users could access Gemini through the Google app, though with some notable limitations. For instance, only the dedicated app includes Google’s Gemini Live feature, which allows users to interact with the AI agent from their iPhone’s Dynamic Island and Lock Screen. As a result, you don’t need to have the app open on your phone’s screen to use Gemini. The software is free to download, though a Gemini Advanced subscription is necessary to use every available feature. Gemini Advanced is included in Google’s One AI Premium plan, which starts at $19 per month.

The app is compatible with iPhones running iOS 16 and later, meaning people with older devices such as the iPhone 8 and iPhone X can use the AI agent. I’ll note here that the oldest iPhone that can run Apple Intelligence is the iPhone 15 Pro. Of course, that’s not exactly a fair comparison; Apple designed its suite of AI features to rely primarily on on-device processing, and when a query requires more computational horsepower, it goes through the company’s Private Cloud Compute framework.

Either way, it’s not surprising to see Google bring a dedicated Gemini app to iPhone. Ahead of WWDC 2024, Apple had reportedly been in talks with the company to integrate the AI agent directly into its devices.

This article originally appeared on Engadget at https://www.engadget.com/mobile/smartphones/google-now-offers-a-standalone-gemini-app-on-iphone-160025513.html?src=rss

Meta will have to defend itself from antitrust claims after all

The Federal Trade Commission will get a chance to argue its case for Meta’s breakup in court. On Wednesday, US District Judge James Boasberg allowed the FTC’s lawsuit against the social media giant to move forward (PDF link). The FTC first sued Meta in 2020 in an attempt to force the company, then known as Facebook, to divest itself of Instagram and WhatsApp. Alongside dozens of attorneys general, the agency alleged Meta acquired the platforms in 2012 and 2014 to stifle growing competition in the social media market.

This past April, Meta asked Judge Boasberg to dismiss the case. In addition to noting that the FTC had previously approved both acquisitions, Meta argued that the agency had failed to show that the company held monopoly power in the social networking services market, and that, in buying Instagram and WhatsApp, it had harmed consumers. Additionally, the company claimed that it had invested billions of dollars in both platforms and made them better as a result, to the benefit of social media users everywhere.

While he did not entirely dismiss the lawsuit, Boasberg did force the FTC to narrow its case, dismissing an allegation that Facebook had provided preferential access to developers who agreed not to compete with it.

“We are confident that the evidence at trial will show that the acquisitions of Instagram and WhatsApp have been good for competition and consumers. More than 10 years after the FTC reviewed and cleared these deals, and despite the overwhelming evidence that our services compete with YouTube, TikTok, X, Apple’s iMessage, and many others, the Commission is wrongly continuing to assert that no deal is ever truly final, and businesses can be punished for innovating,” a Meta spokesperson told Engadget. “We will review the opinion when it’s filed.”

Judge Boasberg will meet with the two sides on November 25 to schedule the trial. The FTC lawsuit, it should be noted, was filed under the previous Trump administration, though whether it moves forward and in what form will depend on who President-elect Trump appoints to lead the agency.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/meta-will-have-to-defend-itself-from-antitrust-claims-after-all-155730259.html?src=rss

Nintendo Palworld lawsuit seeks $65,700 in damages

Nintendo and the Pokémon Company are seeking approximately $65,700 in compensation from their lawsuit against Palworld developer Pocketpair. In a press release the studio issued on Friday, it said Nintendo and the Pokémon Company want ¥5 million each (plus late fees), for a total of ¥10 million or $65,700 in damages.

At first glance, that's a paltry amount of money to demand for copying one of the most successful gaming properties ever, particularly when you consider that Tropic Haze, the creator of the now-defunct Yuzu Switch emulator, agreed to pay $2.4 million to settle its recent case with Nintendo. While Nintendo and the Pokémon Company may well have wanted to sue for more, their legal approach may have limited their options somewhat.

As you might recall, when the two sued Pocketpair in September, they didn’t accuse it of copyright infringement. Instead, they went for patent infringement. On Friday, Pocketpair listed the three patents Nintendo and the Pokémon Company are accusing the studio of infringing. Per Bloomberg, they relate to gameplay elements found in most Pokémon games. For example, one covers the franchise’s signature battling mechanics, while another relates to how players can ride monsters.

Pokémon games have featured those mechanics since the start, but here’s the thing: all three patents were filed and granted to Nintendo and the Pokémon Company after Pocketpair released Palworld to early access on January 19, 2024. The earliest patent, for instance, was granted to Nintendo and the Pokémon Company on May 22, 2024, or nearly four months after Palworld first hit Steam and Xbox Game Pass.

According to Pocketpair, the two companies seek “compensation for a portion of the damages incurred between the date of registration of the patents and the date of filing of this lawsuit.” Put another way, the suit targets only a small window of time.

I’m not a lawyer, so I won’t comment on Nintendo’s strategy of attempting to enforce patents that were issued after Palworld was already on the market. However, I think it’s worth mentioning that Pocketpair CEO Takuro Mizobe had said before the game's release that Palworld had “cleared legal reviews,” suggesting the studio had looked at Nintendo's patent portfolio for possible points of conflict. In any case, the Tokyo District Court is scheduled to hear opening remarks from each side next week.

This article originally appeared on Engadget at https://www.engadget.com/gaming/nintendo/nintendo-palworld-lawsuit-seeks-65700-in-damages-163051523.html?src=rss