Qualcomm and Google team up to help carmakers create AI voice systems

Car manufacturers will be able to develop new AI voice assistants for their cars thanks to a new partnership between Qualcomm and Google. Qualcomm announced earlier today that it’s working with Google on a new AI development system for carmakers.

The new system is based on Android Automotive OS (AAOS), Google’s infotainment platform for cars. Qualcomm is pairing its Snapdragon Digital Chassis with Google Cloud and AAOS to create new AI-powered digital cockpits for cars. Qualcomm also unveiled two new chips: the Snapdragon Cockpit Elite for dashboards and the Snapdragon Ride Elite for self-driving features.

The new interface will allow car drivers and passengers to interact with custom voice assistants, immersive maps and real-time driving updates. Carmakers can use the new system to create their own unique and marketable AI voice assistants that don’t require a connection to a smartphone.

Other carmakers have taken steps to integrate AI systems in their vehicles. Volkswagen announced at CES 2024 that it would integrate ChatGPT into its cars’ voice assistants across a range of newer models. After a slow start, AAOS now underpins vehicles from several manufacturers, including Chevrolet, Honda, Volvo and Rivian.

This article originally appeared on Engadget at https://www.engadget.com/ai/qualcomm-and-google-team-up-to-help-carmakers-create-ai-voice-systems-211510693.html?src=rss

Ecobee smart home users can now unlock Yale and August smart locks from its app

Ecobee is integrating smart locks into its app. The company doesn’t make smart locks of its own, but you can now control Wi-Fi-enabled ones from Yale and August using the Ecobee app. The feature could save you from switching apps to let in someone who rings your smart doorbell. However, it’s locked behind a subscription, so user convenience isn’t the only motive here.

The integration adds an “unlock” button to the Ecobee app’s live view. So, you can let visitors in from the same screen where you confirm it’s someone you want coming inside. (Handy!) The Ecobee app also allows you to lock your doors automatically when you arm your security system. (Also handy!)

Less handy: You’ll need to pay up to enjoy these perks because the feature is locked (ahem) behind Ecobee’s Smart Security system. The premium service costs $5 monthly or $50 annually. And as The Verge notes, it won’t let you unlock your August or Yale devices from Ecobee’s smart thermostats.

This could be a convenient perk if you’re already paying for Ecobee’s subscription service. If not, you’ll have to ask yourself if it’s worth a premium to avoid the oh-so-grueling task of pulling up your phone’s app switcher to jump to another smart-home app.

This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/ecobee-smart-home-users-can-now-unlock-yale-and-august-smart-locks-from-its-app-201700926.html?src=rss

NASA’s newest telescope can detect gravitational waves from colliding black holes

NASA showed off a telescope prototype for a new gravitational wave detection mission in space. The telescope is part of the Laser Interferometer Space Antenna (LISA) mission, led by the European Space Agency (ESA) in partnership with NASA.

The goal of the LISA mission is to position three spacecraft in a triangular orbit measuring nearly 1.6 million miles on each side. The three spacecraft will follow the Earth’s orbit around the Sun. Each spacecraft will carry two telescopes to track their siblings using infrared laser beams. Those beams can measure distances down to a trillionth of a meter.
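As a back-of-the-envelope check (my own arithmetic, not from the article or the mission site), the strain sensitivity implied by those two numbers works out like this:

```python
# Illustrative calculation: what fractional change in arm length LISA
# could resolve, given ~1.6-million-mile (~2.5 billion meter) arms and
# the ability to measure shifts down to a trillionth of a meter.
arm_length_m = 2.5e9          # ~1.6 million miles per side
measurable_shift_m = 1e-12    # a trillionth of a meter

strain = measurable_shift_m / arm_length_m
print(f"Implied strain sensitivity: {strain:.0e}")  # on the order of 4e-22
```

That dimensionless ratio is the "strain" gravitational-wave detectors chase, and a figure around 10⁻²² is consistent with the extraordinarily faint signals these missions are built to catch.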

Gravitational waves are created by cataclysmic events such as the collision of two black holes. They were first theorized by Albert Einstein in 1916 and detected almost a century later by the Laser Interferometer Gravitational-wave Observatory (LIGO) Scientific Collaboration, a joint effort of the National Science Foundation, Caltech and MIT. LISA will detect a gravitational wave when the laser measurements show the three spacecraft shifting out of their characteristic triangular pattern.

The LISA mission is scheduled to launch in the mid-2030s. The detection of gravitational waves could provide “enormous potential” to better our understanding of the universe, including events like black holes and the Big Bang that are difficult to study through other means, according to the official mission website.

This article originally appeared on Engadget at https://www.engadget.com/science/space/nasas-newest-telescope-can-detect-gravitational-waves-from-colliding-black-holes-194527272.html?src=rss

OpenAI and Microsoft are funding $10 million in grants for AI-powered journalism

OpenAI and Microsoft are funding projects to bring more AI tools into the newsroom. The duo will give grants of up to $10 million to Chicago Public Media, the Minnesota Star Tribune, Newsday (in Long Island, NY), The Philadelphia Inquirer and The Seattle Times. Each of the publications will hire a two-year AI fellow to develop projects for implementing the technology and improving business sustainability. Three more outlets are expected to receive fellowship grants in a second round.

OpenAI and Microsoft are each contributing $2.5 million in direct funding as well as $2.5 million in software and enterprise credits. The Lenfest Institute of Journalism is collaborating with OpenAI and Microsoft on the project, and announced the news today.

To date, the ties between journalism and AI have mostly ranged from suspicious to litigious. OpenAI and Microsoft have been sued by the Center for Investigative Reporting, The New York Times, The Intercept, Raw Story and AlterNet. Some publications accused ChatGPT of plagiarizing their articles, and other suits centered on scraping web content for AI model training without permission or compensation. Other media outlets have opted to negotiate; Condé Nast was one of the latest to ink a deal with OpenAI for rights to its content.

In a separate development, OpenAI has hired Aaron Chatterji as its first chief economist. Chatterji is a professor at Duke University’s Fuqua School of Business, and he also served on President Barack Obama’s Council of Economic Advisers as well as in President Joe Biden's Commerce Department.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-and-microsoft-are-funding-10-million-in-grants-for-ai-powered-journalism-193042213.html?src=rss

A federal ban on fake online reviews is now in effect

Be warned, online merchants who see no issue in publishing phony reviews from made-up customers: that practice is no longer allowed. A federal ban on fake online reviews has taken effect.

The Federal Trade Commission issued a final rule on the purchase and sale of online reviews back in August and it came into force 60 days after it was published in the Federal Register. The agency's commissioners voted unanimously in favor of the regulation.

The rule bans businesses from creating, buying or selling reviews and testimonials attributed to people who don't exist, including those that are AI generated. False celebrity endorsements aren't allowed and companies can't pay or otherwise incentivize genuine customers to leave positive or negative reviews.

Reviews and testimonials written by people with close ties to a company are a no-no unless that relationship is disclosed. There are restrictions on soliciting reviews from close relatives of employees, too.

The rule includes limitations on the suppression of negative reviews from customers. It also prohibits people from knowingly selling or buying fake followers and views to inflate the influence or importance of social media accounts for commercial purposes.

Fines for violating these measures could prove extremely costly. The maximum civil penalty for each infraction is currently $51,744.

“Fake reviews not only waste people’s time and money, but also pollute the marketplace and divert business away from honest competitors,” FTC Chair Lina Khan said when the rule was finalized. “By strengthening the FTC’s toolkit to fight deceptive advertising, the final rule will protect Americans from getting cheated, put businesses that unlawfully game the system on notice, and promote markets that are fair, honest and competitive.”

The rule is a positive move for consumers, with the idea that reviews should be more trustworthy in the future. In a separate victory for consumer rights, the FTC recently issued a final rule to make it as easy for people to cancel a subscription as it is to sign up for one.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/a-federal-ban-on-fake-online-reviews-is-now-in-effect-191746690.html?src=rss

Google Messages adds enhanced scam detection tools

Google just announced a spate of safety features coming to Messages. There’s enhanced scam detection centered around texts that could lead to fraud. The company says the update provides “improved analysis of scammy texts.” For now, this tool will prioritize scams involving package deliveries and job offers.

When Google Messages suspects a scam, it’ll move the message to the spam folder or issue a warning. The app uses on-device machine learning models to detect these scams, meaning that conversations will remain private. This enhancement is rolling out now to beta users who have spam protection enabled.

Google’s also set to broadly roll out intelligent warnings, a feature that’s been in the pilot stage for a while. This tool warns users when they get a link from an unknown sender and automatically “blocks messages with links from suspicious senders.” The updated safety tools also include new sensitive content warnings that automatically blur images that may contain nudity. This is an opt-in feature that also keeps everything on the device. It’ll show up in the next few months.

Finally, there’s a forthcoming tool that’ll let people turn off messages from unknown international senders, thus cutting the scam spigot off at the source. This will automatically hide messages from international senders who aren’t already in the contacts list. This feature is entering a pilot program in Singapore later this year before expanding to more countries.

In addition to the above tools, Google says it’s currently working on a contact verifying feature for Android. This should help put the kibosh on scammers trying to impersonate one of your contacts. The company has stated that this feature will be available sometime next year.

This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/google-messages-adds-enhanced-scam-detection-tools-190009890.html?src=rss

Stable Diffusion 3.5 follows your prompts more closely and generates more diverse people

Stable Diffusion, an open-source alternative to AI image generators like Midjourney and DALL-E, has been updated to version 3.5. The new model tries to right some of the wrongs (which may be an understatement) of the widely panned Stable Diffusion 3 Medium. Stability AI says the 3.5 model adheres to prompts better than other image generators and competes with much larger models in output quality. In addition, it’s tuned for a greater diversity of styles, skin tones and features without needing to be prompted to do so explicitly.

The new model comes in three flavors. Stable Diffusion 3.5 Large is the most powerful of the trio, with the highest quality of the bunch, while leading the industry in prompt adherence. Stability AI says the model is suitable for professional uses at 1 MP resolution.

Meanwhile, Stable Diffusion 3.5 Large Turbo is a “distilled” version of the larger model, focusing more on efficiency than maximum quality. Stability AI says the Turbo variant still produces “high-quality images with exceptional prompt adherence” in four steps.

Finally, Stable Diffusion 3.5 Medium (2.5 billion parameters) is designed to run on consumer hardware, balancing quality with simplicity. With its greater ease of customization, the model can generate images at resolutions between 0.25 and 2 megapixels. However, unlike the first two models, which are available now, Stable Diffusion 3.5 Medium doesn’t arrive until October 29.

The new trio follows the botched release of Stable Diffusion 3 Medium in June. The company admitted that the release “didn’t fully meet our standards or our communities’ expectations,” as it produced some laughably grotesque body horror in response to prompts that asked for no such thing. Stability AI’s repeated mentions of exceptional prompt adherence in today’s announcement are likely no coincidence.

Although Stability AI only briefly mentioned it in its announcement blog post, the 3.5 series has new filters to better reflect human diversity. The company describes the new models’ human outputs as “representative of the world, not just one type of person, with different skin tones and features, without the need for extensive prompting.”

Let’s hope it’s sophisticated enough to account for subtleties and historical sensitivities, unlike Google’s debacle from earlier this year. Unprompted to do so, Gemini produced collections of egregiously inaccurate historical “photos,” like ethnically diverse Nazis and US Founding Fathers. The backlash was so intense that Google didn’t reincorporate human generations until six months later.

This article originally appeared on Engadget at https://www.engadget.com/ai/stable-diffusion-35-follows-your-prompts-more-closely-and-generates-more-diverse-people-184022965.html?src=rss

Anthropic is letting Claude AI control your PC

Anthropic's latest development gives its Claude AI assistant the ability to control a PC, reportedly just like a person would. The feature, dubbed “computer use,” entered public beta today. With computer use, Claude can be directed to execute tasks such as "looking at a screen, moving a cursor, clicking buttons, and typing text," according to the company's announcement.

In theory, this could make the AI even more useful in automating repetitive computer tasks. However, a second blog post focused on computer use acknowledged that this application of Anthropic's AI models is still early in development and, to paraphrase, buggy as heck. The company said that in internal testing, Claude stopped in the middle of an assigned coding task and began opening images of Yellowstone National Park. While that is uncannily human behavior (who doesn't want to take a break to stare at natural beauty during the work day?), it's also a reminder that even the best AI models can have errors.

In addition to unveiling computer use, Anthropic also released an upgraded version of its Claude 3.5 Sonnet model alongside a brand new model called Claude 3.5 Haiku that will be released later in October. In August, Anthropic joined OpenAI in agreeing to share its work with the US AI Safety Institute.

This article originally appeared on Engadget at https://www.engadget.com/ai/anthropic-is-letting-claude-ai-control-your-pc-181500127.html?src=rss

Redact-A-Chat is an old-style chatroom that censors words after one use

If you're a word and game lover like me, then prepare to join me in excitement (and eventual frustration): there's a new daily word puzzle of sorts. New York-based art collective MSCHF has introduced an AOL-style chatroom called Redact-A-Chat that censors a word each time someone uses it. Josh Wardle, creator of Wordle, worked at MSCHF for a few years. 

So, how does it work? There's a main chatroom where you can write anything, but once a word gets repeated, it's covered with a blurry blue line and unavailable for the rest of the day. I got to try it out early, and it seems repeating a word within a single message also gets the second mention blurred out. All words become fair game again at midnight. Announcements about newly censored words and the daily reset come from three one-eyed safety pins reminiscent of Clippy, the old Microsoft Word paperclip. 
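The mechanic described above can be sketched in a few lines of Python. This is purely illustrative (not MSCHF's actual code), and the `redact` function name and the block-character stand-in for the blur are my own inventions:

```python
# Sketch of the censoring mechanic: each word is usable once per day,
# and any repeat (even within the same message) gets blurred.
import re

used_words = set()  # the real chatroom clears this at midnight


def redact(message: str) -> str:
    out = []
    for token in message.split():
        word = re.sub(r"\W", "", token).lower()
        if word and word in used_words:
            out.append("█" * len(token))  # stand-in for the blue blur
        else:
            if word:
                used_words.add(word)
            out.append(token)
    return " ".join(out)


print(redact("hello hello world"))  # second "hello" comes out blurred
print(redact("world again"))        # "world" was already used, so it's blurred now
```

Under this scheme, staying conversational requires constantly reaching for synonyms, which is exactly the "creative communication" MSCHF says the chatroom forces.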

In a statement, MSCHF said Redact-A-Chat "forces creative communication. You must constantly keep ahead of the censor in order to continue your conversation. On the other hand, you can be that a**hole who starts working their way through the dictionary to deprive everyone else of language."

If you're unsure about participating in the main room, you can start a chat just for your friends. You just click the create a chat room button, give it a name and it will appear. You can then invite other people to the group with a unique code. 

This article originally appeared on Engadget at https://www.engadget.com/ai/redact-a-chat-is-an-old-style-chatroom-that-censors-words-after-one-use-180014370.html?src=rss

More than 10,500 artists sign open letter protesting unlicensed AI training

Some of the biggest names in Hollywood, literature and music have issued a warning to the artificial intelligence industry. The Washington Post reports that more than 10,500 artists have signed an open protest letter objecting to AI developers’ “unlicensed use” of artists’ work to train their models.

“The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted,” the one-sentence letter reads.

The letter has support from some huge names across the film, television, music and publishing industries. Some of the more famous signatories include actors Julianne Moore, Rosario Dawson, Kevin Bacon and F. Murray Abraham, as well as former Saturday Night Live star Kate McKinnon, author James Patterson and Radiohead frontman Thom Yorke.

The unauthorized use of their work to train AI models has been an area of major concern among creatives. The SAG-AFTRA union and Writers Guild of America recently held industry-wide strikes demanding better protections for their work and livelihood against the use of AI in studio projects.

There are also several lawsuits currently in the courts accusing AI developers of using copyrighted content without permission or proper compensation. On Monday, The Wall Street Journal and The New York Post sued Perplexity AI for violating their copyright protections. Music labels including Universal, Warner and Sony sued the makers of the Suno and Udio AI music generators back in June for infringing their copyrights on a “massive scale.”

This article originally appeared on Engadget at https://www.engadget.com/ai/more-than-10500-artists-sign-open-letter-protesting-unlicensed-ai-training-174544491.html?src=rss