NASA’s newest telescope can detect gravitational waves from colliding black holes

NASA showed off a telescope prototype for a new gravitational wave detection mission in space. The telescope is part of the Laser Interferometer Space Antenna (LISA) mission led by the European Space Agency (ESA) in partnership with NASA.

The goal of the LISA mission is to position three spacecraft in a triangular formation with sides measuring nearly 1.6 million miles each, following the Earth’s orbit around the Sun. Each spacecraft will carry two telescopes to track its siblings using infrared laser beams, which can measure changes in distance down to a trillionth of a meter.

Gravitational waves are ripples in spacetime created by cataclysmic events such as collisions between black holes. They were first theorized by Albert Einstein in 1916 and detected almost a century later, in 2015, by the Laser Interferometer Gravitational-wave Observatory (LIGO) Scientific Collaboration, a partnership of the National Science Foundation, Caltech and MIT. LISA will register a gravitational wave as a minuscule shift in the distances between the three spacecraft.
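The scale of that measurement is striking, and a quick back-of-the-envelope calculation shows why. Here is a rough sketch, using only the figures quoted above, of the fractional length change (the "strain") those numbers imply; the mission's real sensitivity varies with frequency and isn't captured by a single number.

```python
# Back-of-the-envelope strain calculation using only the article's figures.
# (Illustrative only: LISA's actual sensitivity is frequency-dependent.)

MILES_TO_METERS = 1609.344

arm_length_m = 1.6e6 * MILES_TO_METERS  # ~1.6 million miles per triangle side
displacement_m = 1e-12                  # "a trillionth of a meter"

# A gravitational wave's strain h is the fractional change in arm length.
strain = displacement_m / arm_length_m
print(f"Arm length: {arm_length_m:.2e} m")
print(f"Detectable strain: h = dL/L = {strain:.1e}")
```

In other words, measuring a trillionth of a meter over an arm millions of miles long means resolving a fractional change on the order of one part in 10^21, which is why laser interferometry is required.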

The LISA mission is scheduled to launch in the mid-2030s. The detection of gravitational waves offers “enormous potential” to better our understanding of the universe, including phenomena like black hole mergers and the Big Bang that are difficult to study through other means, according to the official mission website.

This article originally appeared on Engadget at https://www.engadget.com/science/space/nasas-newest-telescope-can-detect-gravitational-waves-from-colliding-black-holes-194527272.html?src=rss

OpenAI and Microsoft are funding $10 million in grants for AI-powered journalism

OpenAI and Microsoft are funding projects to bring more AI tools into the newsroom. The duo will give grants of up to $10 million to Chicago Public Media, the Minnesota Star Tribune, Newsday (in Long Island, NY), The Philadelphia Inquirer and The Seattle Times. Each of the publications will hire a two-year AI fellow to develop projects for implementing the technology and improving business sustainability. Three more outlets are expected to receive fellowship grants in a second round.

OpenAI and Microsoft are each contributing $2.5 million in direct funding as well as $2.5 million in software and enterprise credits. The Lenfest Institute of Journalism is collaborating with OpenAI and Microsoft on the project, and announced the news today.

To date, the ties between journalism and AI have mostly ranged from suspicious to litigious. OpenAI and Microsoft have been sued by the Center for Investigative Reporting, The New York Times, The Intercept, Raw Story and AlterNet. Some publications accused ChatGPT of plagiarizing their articles, while other suits centered on scraping web content for AI model training without permission or compensation. Other media outlets have opted to negotiate; Condé Nast was one of the latest to ink a deal with OpenAI for rights to its content.

In a separate development, OpenAI has hired Aaron Chatterji as its first chief economist. Chatterji is a professor at Duke University’s Fuqua School of Business, and he also served on President Barack Obama’s Council of Economic Advisers as well as in President Joe Biden's Commerce Department.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-and-microsoft-are-funding-10-million-in-grants-for-ai-powered-journalism-193042213.html?src=rss

A federal ban on fake online reviews is now in effect

Be warned, online merchants who see no issue in publishing phony reviews from made-up customers: that practice is no longer allowed. A federal ban on fake online reviews has taken effect.

The Federal Trade Commission issued a final rule on the purchase and sale of online reviews back in August and it came into force 60 days after it was published in the Federal Register. The agency's commissioners voted unanimously in favor of the regulation.

The rule bans businesses from creating, buying or selling reviews and testimonials attributed to people who don't exist, including those that are AI generated. False celebrity endorsements aren't allowed and companies can't pay or otherwise incentivize genuine customers to leave positive or negative reviews.

Reviews and testimonials written by people who have undisclosed close ties to a company are also a no-no, and there are restrictions on soliciting reviews from employees’ close relatives too.

The rule includes limitations on the suppression of negative reviews from customers. It also prohibits people from knowingly selling or buying fake followers and views to inflate the influence or importance of social media accounts for commercial purposes.

Fines for violating these measures could prove extremely costly. The maximum civil penalty for each infraction is currently $51,744.

“Fake reviews not only waste people’s time and money, but also pollute the marketplace and divert business away from honest competitors,” FTC Chair Lina Khan said when the rule was finalized. “By strengthening the FTC’s toolkit to fight deceptive advertising, the final rule will protect Americans from getting cheated, put businesses that unlawfully game the system on notice, and promote markets that are fair, honest and competitive.”

The rule is a positive move for consumers, with the idea that reviews should be more trustworthy in the future. In a separate victory for consumer rights, the FTC recently issued a final rule to make it as easy for people to cancel a subscription as it is to sign up for one.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/a-federal-ban-on-fake-online-reviews-is-now-in-effect-191746690.html?src=rss

Google Messages adds enhanced scam detection tools

Google just announced a spate of safety features coming to Messages. There’s enhanced scam detection centered around texts that could lead to fraud. The company says the update provides “improved analysis of scammy texts.” For now, this tool will prioritize scams involving package deliveries and job offers.

When Google Messages suspects a scam, it’ll move the message to the spam folder or issue a warning. The app uses on-device machine learning models to detect these scams, meaning that conversations will remain private. This enhancement is rolling out now to beta users who have spam protection enabled.

Google’s also set to broadly roll out intelligent warnings, a feature that’s been in the pilot stage for a while. This tool warns users when they get a link from an unknown sender and automatically “blocks messages with links from suspicious senders.” The updated safety tools also include new sensitive content warnings that automatically blur images that may contain nudity. This is an opt-in feature that also keeps everything on the device. It’ll show up in the next few months.

Finally, there’s a forthcoming tool that’ll let people turn off messages from unknown international senders, thus cutting the scam spigot off at the source. This will automatically hide messages from international senders who aren’t already in the contacts list. This feature is entering a pilot program in Singapore later this year before expanding to more countries.

In addition to the above tools, Google says it’s currently working on a contact verifying feature for Android. This should help put the kibosh on scammers trying to impersonate one of your contacts. The company has stated that this feature will be available sometime next year.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/google-messages-adds-enhanced-scam-detection-tools-190009890.html?src=rss

Stable Diffusion 3.5 follows your prompts more closely and generates more diverse people

Stable Diffusion, an open-source alternative to AI image generators like Midjourney and DALL-E, has been updated to version 3.5. The new model tries to right some of the wrongs (which may be an understatement) of the widely panned Stable Diffusion 3 Medium. Stability AI says the 3.5 model adheres to prompts better than other image generators and competes with much larger models in output quality. In addition, it’s tuned for a greater diversity of styles, skin tones and features without needing to be prompted to do so explicitly.

The new model comes in three flavors. Stable Diffusion 3.5 Large is the most powerful of the trio, with the highest quality of the bunch, while leading the industry in prompt adherence. Stability AI says the model is suitable for professional uses at 1 MP resolution.

Meanwhile, Stable Diffusion 3.5 Large Turbo is a “distilled” version of the larger model, focusing more on efficiency than maximum quality. Stability AI says the Turbo variant still produces “high-quality images with exceptional prompt adherence” in four steps.

Finally, Stable Diffusion 3.5 Medium (2.5 billion parameters) is designed to run on consumer hardware, balancing quality with simplicity. With its greater ease of customization, the model can generate images between 0.25 and 2 megapixels in resolution. However, unlike the first two models, which are available now, Stable Diffusion 3.5 Medium doesn’t arrive until October 29.

The new trio follows the botched Stable Diffusion 3 Medium in June. The company admitted that the release “didn’t fully meet our standards or our communities’ expectations,” as it produced some laughably grotesque body horror in response to prompts that asked for no such thing. Stability AI’s repeated mentions of exceptional prompt adherence in today’s announcement are likely no coincidence.

Although Stability AI only briefly mentioned it in its announcement blog post, the 3.5 series has new filters to better reflect human diversity. The company describes the new models’ human outputs as “representative of the world, not just one type of person, with different skin tones and features, without the need for extensive prompting.”

Let’s hope it’s sophisticated enough to account for subtleties and historical sensitivities, unlike Google’s debacle from earlier this year. Unprompted to do so, Gemini produced collections of egregiously inaccurate historical “photos,” like ethnically diverse Nazis and US Founding Fathers. The backlash was so intense that Google didn’t reincorporate human generations until six months later.

This article originally appeared on Engadget at https://www.engadget.com/ai/stable-diffusion-35-follows-your-prompts-more-closely-and-generates-more-diverse-people-184022965.html?src=rss

Anthropic is letting Claude AI control your PC

Anthropic's latest development gives its Claude AI assistant the ability to control a PC, reportedly just like a person would. The feature, dubbed 'computer use,' entered public beta today. With computer use, Claude can be directed to execute tasks such as "looking at a screen, moving a cursor, clicking buttons, and typing text," according to the company's announcement.

In theory, this could make the AI even more useful in automating repetitive computer tasks. However, a second blog post focused on computer use acknowledged that this application of Anthropic's AI models is still early in development and, to paraphrase, buggy as heck. The company said that in internal testing, Claude stopped in the middle of an assigned coding task and began opening images of Yellowstone National Park. While that is uncannily human behavior (who doesn't want to take a break to stare at natural beauty during the work day?), it's also a reminder that even the best AI models can have errors.

In addition to unveiling computer use, Anthropic also released an upgraded version of its Claude 3.5 Sonnet model alongside a brand new model called Claude 3.5 Haiku that will be released later in October. In August, Anthropic joined OpenAI in agreeing to share its work with the US AI Safety Institute.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-is-letting-claude-ai-control-your-pc-181500127.html?src=rss

Redact-A-Chat is an old-style chatroom that censors words after one use

If you're a word and game lover like me, then prepare to join me in excitement — and eventual frustration — as there's a new daily word puzzle of sorts. New York-based art collective MSCHF has introduced an AOL-style chatroom called Redact-A-Chat that censors a word each time someone uses it. Josh Wardle, creator of Wordle, worked at MSCHF for a few years.

So, how does it work? There's a main chatroom where you can write anything, but once a word gets repeated, it's covered with a blue blurry line and unavailable for the rest of the day. I got to try it out early, and it seems duplicated words within a single message also lead to the second mention being blurred out. All words become fair game again at midnight. Announcements about newly censored words and the nightly reset come from three one-eyed safety pins reminiscent of Clippy, Microsoft Word's old paperclip assistant.
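The mechanic is simple enough to sketch in a few lines of code. The following Python is a guess at the core rule as described above (the first use of a word passes through, every repeat is blurred until a daily reset), not MSCHF's actual implementation; the word-matching pattern and the blur placeholder are my own assumptions.

```python
# A minimal sketch of Redact-A-Chat's core rule: the first use of a word goes
# through; every later use (until the midnight reset) is blurred out.
import re

class RedactingChat:
    def __init__(self, redaction="█████"):
        self.used = set()          # words already seen today (case-insensitive)
        self.redaction = redaction

    def post(self, message: str) -> str:
        def censor(match):
            word = match.group(0).lower()
            if word in self.used:
                return self.redaction   # already used once: blur it
            self.used.add(word)
            return match.group(0)       # first use passes through
        return re.sub(r"[A-Za-z']+", censor, message)

    def reset(self):
        """Midnight: all words become fair game again."""
        self.used.clear()

chat = RedactingChat()
print(chat.post("hello hello world"))   # → hello █████ world
print(chat.post("hello world again"))   # → █████ █████ again
```

Note that censoring happens per word, not per message, which is what makes the "working through the dictionary" griefing strategy MSCHF jokes about possible.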

In a statement, MSCHF said Redact-A-Chat "forces creative communication. You must constantly keep ahead of the censor in order to continue your conversation. On the other hand, you can be that a**hole who starts working their way through the dictionary to deprive everyone else of language."

If you're unsure about participating in the main room, you can start a chat just for your friends. You just click the create a chat room button, give it a name and it will appear. You can then invite other people to the group with a unique code. 

This article originally appeared on Engadget at https://www.engadget.com/ai/redact-a-chat-is-an-old-style-chatroom-that-censors-words-after-one-use-180014370.html?src=rss

More than 10,500 artists sign open letter protesting unlicensed AI training

Some of the biggest names in Hollywood, literature and music have issued a warning to the artificial intelligence industry. The Washington Post reports that more than 10,500 artists have signed an open protest letter objecting to AI developers’ “unlicensed use” of artists’ work to train their models.

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted,” the one-sentence letter reads.

The letter has support from some huge names across the film, television, music and publishing industries. Some of the more famous signatures include actors Julianne Moore, Rosario Dawson, Kevin Bacon and F. Murray Abraham, as well as former Saturday Night Live star Kate McKinnon, author James Patterson and Radiohead frontman Thom Yorke.

The unauthorized use of their work to train AI models has been an area of major concern among creatives. The SAG-AFTRA union and Writers Guild of America recently held industry-wide strikes demanding better protections for their work and livelihood against the use of AI in studio projects.

There are also several lawsuits currently in the courts accusing AI developers of using copyrighted content without permission or proper compensation. On Monday, The Wall Street Journal and The New York Post sued Perplexity AI for violating their copyright protections. Music labels like Universal, Warner and Sony sued the makers of the Suno and Udio AI music generators back in June for violating their copyrights on a “massive scale.”

This article originally appeared on Engadget at https://www.engadget.com/ai/more-than-10500-artists-sign-open-letter-protesting-unlicensed-ai-training-174544491.html?src=rss

Metal Slug Tactics finally arrives on November 5

The return of Metal Slug is almost upon us. It's been three years since spin-off Metal Slug Tactics was unveiled. After some delays, the game finally has a firm release date of November 5.

Rather than the classic run-and-gun gameplay of the original games, Metal Slug Tactics takes a more methodical approach to the action. As the name suggests, it's a tactical RPG. It does retain the pixel art look of the old games, though. Metal Slug Tactics is billed as both an homage to its predecessors and a new spin on the series, with some roguelite elements designed to boost replayability.

In the latest trailer, developer Leikir Studio and publisher Dotemu provide a fresh look at the game. It reveals three additional playable characters who appeared in earlier games from original publisher SNK: Clark Still, Ralf Jones and Leona Heidern.

The last new mainline game, Metal Slug 7, debuted in 2008. Since Metal Slug Tactics was announced, a couple of other spin-offs have arrived in the form of mobile titles Metal Slug: Commander and Metal Slug: Awakening, which later came to PC. However, this one is bound for PC, Nintendo Switch, PlayStation 5, PlayStation 4, Xbox Series X/S and Xbox One.

This article originally appeared on Engadget at https://www.engadget.com/gaming/metal-slug-tactics-finally-arrives-on-november-5-171012984.html?src=rss

Medical record tracking comes to Samsung Health

In an update spotted by 9to5Google, Samsung Health now lets users view their medical records. Samsung is working with b.well Connected Health, a platform designed to give people access to their health data, to make these changes happen. Users can access previous medical records, including vaccinations, prescriptions and specific medical tests.

Samsung Health doesn’t just surface information from the past: it can also provide recommendations for next steps and actions, as well as prompt users to seek medical attention.

Since the end of last year, Samsung Health has had a medication tracking feature. Now, Samsung says the feature is coming to South Korea and India, and it is collaborating with healthcare providers in those countries as needed.

One final notable update involves food intake monitoring. Samsung Health now has a barcode scanner to more easily record food products. The company is partnering with fatsecret, a provider of verified food and nutrition data. As a result, you can scan a barcode to get nutritional information instantly. This feature is coming first to the US and some EU countries, including France, Germany, Italy, the Netherlands and Poland. There are plans to expand it to other regions in the future.

Samsung isn’t only working to improve people’s health with the Samsung Health app. The FDA greenlit a sleep apnea detection feature for the Galaxy Watch this year.

This article originally appeared on Engadget at https://www.engadget.com/apps/medical-record-tracking-comes-to-samsung-health-170011090.html?src=rss