Twitter says startups can ‘experiment’ with its data for $5,000 a month

Twitter’s API roller coaster under Elon Musk continues. The company announced a new “Pro” tier for developers today. At $5,000 per month, it falls between the $100-a-month Basic and custom-priced Enterprise plans.

The new Twitter API Pro plan offers monthly access to one million retrieved tweets and 300,000 posted tweets at the app level. It also includes rate-limited access to endpoints for real-time filtered streams (live access to tweets based on specified parameters) and a complete archive search of historical tweets. Finally, it adds three app IDs and Login with Twitter access.
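In practice, the filtered-stream access works in two steps: a developer first registers rules describing which tweets to match, then holds open a streaming connection that delivers matching tweets live. A rough sketch of the rule-registration step (endpoint paths follow Twitter's public v2 API documentation; the example rule value and token handling are illustrative assumptions):

```python
import json
import urllib.request

RULES_URL = "https://api.twitter.com/2/tweets/search/stream/rules"


def build_rule_payload(rules):
    """Build the JSON body for registering filter rules.

    `rules` is a list of (value, tag) pairs, e.g.
    ("from:engadget -is:retweet", "engadget-originals").
    """
    return {"add": [{"value": value, "tag": tag} for value, tag in rules]}


def add_rules(bearer_token, rules):
    """Register filter rules (a network call; requires a token from a
    tier with filtered-stream access, such as the new Pro plan)."""
    req = urllib.request.Request(
        RULES_URL,
        data=json.dumps(build_rule_payload(rules)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Tweets retrieved this way count against the plan's one-million-per-month read cap, which is what makes the gap to the next tier down so consequential.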

However, the $5,000-a-month pricing for companies wanting to “experiment, build, and scale [their] business” leaves an enormous gap between it and the $100-a-month Basic plan, the next tier down. The latter offers only a tiny fraction of the Pro plan’s access, leaving small businesses to choose between a tier that may not justify its $100 monthly fee and a $5,000 plan that stretches beyond many startups’ budgets.

Some users also argued that the new tier’s limits are too tight for the price. “That’s cool, but you already killed most Twitter apps by now,” Birdy developer Maxime Dupré responded to Twitter’s announcement. “And 5K is still too much for most of us. A 1K plan could make sense... but then again it’s too late.” The pricing also likely does little for researchers, whom the platform has been trying to charge tens of thousands of dollars for access.

Twitter’s recent API changes have created quite a bumpy ride for developers who still want access to the company’s data. First, the company effectively killed most third-party clients in January before quietly updating its terms to reflect the change. Then, it announced in February that it was ending free API access, only to delay the move after widespread blowback while promising that a new read-only version of the free tier would remain available for “testing” purposes. (The old version of the free API was cut off entirely in April, although Twitter reenabled it for emergency services in May.) The platform rolled out the new API’s initial three tiers (free, basic and enterprise) in March before adding today’s $5,000 pro tier. However, as the company has already alienated many of the developers that once relied on its platform, it remains to be seen how effective it will be at luring new customers — especially smaller operations — into the expensive new plan.

This article originally appeared on Engadget at

Google begins opening access to generative AI in search

Google’s take on AI-powered search begins rolling out today. The company announced this morning that it’s opening access to Google Search Generative Experience (SGE) and other Search Labs in the US. If you haven’t already, you’ll need to sign up for the waitlist and sit tight until you get an email announcing it’s your turn.

Revealed at Google I/O 2023 earlier this month, Google SGE is the company’s infusion of conversational AI into the classic search experience. If you’ve played with Bing AI, expect a familiar — yet different — product. Cherlynn Low noted in Engadget’s SGE preview that Google’s AI-powered search uses the same input bar you’re used to rather than a separate chatbot field like Bing’s. The generative AI results appear in a shaded section below the search bar (and sponsored results) but above the standard web results. A button at the top right of the AI results lets you expand the snapshot, adding cards that show the sourced articles. Finally, you can ask follow-up questions by tapping a button below the results.

Google describes the snapshot as “key information to consider, with links to dig deeper.” Think of it like a slice of Bard injected (somewhat) seamlessly into the Google search you already know.

In addition, Google is opening access to other Search Labs, including Code Tips and Add to Sheets (both are US-only for now). Code Tips “harnesses the power of large language models to provide pointers for writing code faster and smarter.” It lets aspiring developers ask how-to questions about programming languages (C, C++, Go, Java, JavaScript, Kotlin, Python and TypeScript), tools (Docker, Git, shells) and algorithms. Meanwhile, as its name suggests, Add to Sheets lets you insert search results directly into Google’s spreadsheet app. Tapping a Sheets icon to the left of a search result will pop up a list of your recent documents; choose one to which you want to attach the result.

If you aren’t yet on the Search Labs waitlist, you can tap the Labs icon (a beaker symbol) on a new tab in Chrome for desktop or in the Google search app on Android or iOS. However, the company hasn’t announced how quickly or broadly it will open access, so you may need to be patient.

‘The Talos Principle 2’ brings mind-bending puzzles to a new generation

Sony revealed The Talos Principle 2 at its PlayStation Games Showcase today. The sequel to the 2014 first-person puzzler promises a greatly expanded scope with “more mind-bending puzzles to solve, more surreal environments to explore, more secrets to uncover, a deeper story to lose yourself in, and bigger questions to boggle your brain.” Developer Croteam describes the sequel as simultaneously familiar and fresh.

The game takes place “in an era where humanity as we know it has long gone extinct, but our culture lives on in a world inhabited by robots made in our image.” As you investigate a mysterious structure, the game will challenge you not only with puzzles, but also “questions about the nature of the cosmos, faith versus reason, and the fear of repeating humankind’s mistakes.” It appears the team hasn’t pulled back one bit from the philosophical depths plumbed in its predecessor.

As for gameplay, Croteam says the sequel will add gravity manipulation and mind transference as new mechanics. The developer also teases that it has other new gameplay elements up its sleeve, including optional Gold puzzles that will “melt your brain.” Additionally, it says you’ll be “directly involved in its complex, character-driven story,” while teasing multiple endings defined by player choice. “The Talos Principle 2’s narrative is itself a puzzle to be solved, and any player who dedicates themselves to digging up its deepest, darkest secrets will have their inquisitiveness rewarded,” the developer wrote.

“The Talos Principle 2 is a giant leap forward for the series — particularly the world design,” Croteam wrote. “These are the biggest, strangest, and most beautiful environments Croteam has ever made, taking full advantage of the latest in graphics tech. You’ll visit several all-new locations including a city on the brink of a paradigm shift and the varied landscapes of an island that holds the keys to the future.”

The game will launch on PlayStation 5 “later this year.”

Google and the European Commission will collaborate on AI ground rules

The world’s governments have taken note of generative AI’s potential for massive disruption and are acting accordingly. European Commission (EC) industry chief Thierry Breton said Wednesday that the commission would work with Alphabet on a voluntary pact to establish artificial intelligence ground rules, according to Reuters. Breton met with Google CEO Sundar Pichai in Brussels to discuss the arrangement, which will include input from companies based in Europe and other regions. The EU has a history of enacting strict technology rules, and the alliance gives Google a chance to provide input while steering clear of trouble down the road.

The compact aims to set up guidelines ahead of official legislation like the EU’s proposed AI Act, which will take much longer to develop and enact. “Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline,” Breton said in a statement. He encouraged EU nations and lawmakers to settle on specifics by the end of the year.

In a similar move, EU tech chief Margrethe Vestager said Tuesday that the bloc would work with the United States on establishing minimum standards for AI. She hopes EU governments and lawmakers will “agree to a common text” for regulation by the end of 2023. “That would still leave one if not two years then to come into effect, which means that we need something to bridge that period of time,” she said. Topics of concern for the EU include copyright, disinformation, transparency and governance.

OpenAI’s ChatGPT, the service most associated with AI fears, exploded in popularity after its November launch, on its way to becoming the fastest-growing application ever (despite not having an official mobile app until this month). Unfortunately, its viral popularity is paired with legitimate fears about its capacity to upend society. In addition, image generators can produce AI-generated “photos” that are increasingly difficult to discern from reality, and speech cloners can mimic the voices of famous artists and public figures. Soon, video generators will evolve, making deepfakes even more of a concern.

Despite its undeniable potential for creativity and productivity, generative AI can threaten the livelihoods of countless content creators while posing new security and privacy risks and proliferating misinformation / disinformation. Left unregulated, corporations tend to maximize profits no matter the human cost, and generative AI is a tool that, paired with bad actors, could wreak immeasurable global havoc. “There is a shared sense of urgency. In order to make the most of this technology, guard rails are needed,” Vestager said. “Can we discuss what we can expect companies to do as a minimum before legislation kicks in?”

Universal Music Group partners with Endel for AI-generated wellness soundscapes

Universal Music Group (UMG) is partnering with Endel, an “AI sound wellness company” specializing in personalized algorithmic soundscapes, the companies announced today. The partnership aims to let UMG artists create machine-learning-generated sounds for activities like sleep, relaxation and focus. Endel previously partnered with synth-pop artist Grimes on a lullaby app.

The record label “will use Endel’s proprietary AI technology to enable UMG artists to create science-backed soundscapes,” the companies said. The soundscapes can contain new music and updated versions of back-catalog tracks. The companies emphasize that the project “will always respect creators’ rights and put artists at the center of the creative process,” adding that musicians and their teams have the final say on the results. UMG and Endel say they’ll announce “the first wave of soundscapes” from the partnership in the coming months.

Endel uses artist stems to make soundscapes “driven by scientific insights into how music affects our mind-state.” The companies describe the collaboration as a way to “provide artists and rights holders new opportunities to generate additional revenue for their catalogs” while letting performers dip their toes into new areas and “support wellness for the listener.” But it’s hard not to see the irony of UMG quickly stomping out AI-generated music that threatens its business model — like when fake Drake and The Weeknd tracks went viral — while putting out rapturous press releases when it sees a potential profit. (Although, to be fair, cloning artists’ voices without their permission would never fly for long, regardless of UMG’s response.)

“At UMG, we believe in the incredible potential of ethical AI as a tool to support and enhance the creativity of our artists, labels and songwriters, something that Endel has harnessed with impressive ingenuity and scientific innovation,” said Michael Nash, EVP and Chief Digital Officer at UMG. “We are excited to work together and utilize their patented AI technology to create new music soundscapes — anchored in our artist-centric philosophy — that are designed to enhance audience wellness, powered by AI that respects artists’ rights in its development.”

Amazon again accused of breaking labor laws at unionized warehouse

Amazon has been accused again of illegal anti-union behavior. The National Labor Relations Board (NLRB) filed a complaint Monday, saying the company changed its policies to squash union support at its only unionized warehouse in Staten Island, as reported by Bloomberg. The complaint says Amazon changed policies to prohibit onsite union meetings while bypassing labor negotiations for providing paid leave for COVID-19 cases, among other violations. The accusations paint a picture of a corporation essentially dismissing the union, which voted to organize in 2022, as illegitimate — an image that lines up with its CEO’s public comments.

The NLRB accuses Amazon of changing a policy to prevent unionized workers from accessing the Staten Island warehouse during their time off. In addition, the agency says the company terminated two employees because of their association with the Amazon Labor Union (ALU) and changed its paid-leave policy for COVID-19 cases unilaterally — without negotiating with the workers’ organization.

The complaint also alleges that Amazon CEO Andy Jassy broke federal labor laws by saying unionized employees would be less empowered and have difficulty enjoying direct relationships with supervisors in an interview at The New York Times DealBook Summit in December. “That has a real chance to end up in federal courts,” Jassy added about the workers’ establishment of “bureaucratic” unions. Amazon has argued that the union’s establishment should be overturned because of “misconduct.”

The NLRB complaint describes Jassy’s comments as “interfering with, restraining and coercing employees,” saying his quotes about losing access to managers were an illegal threat. The NLRB filed a previous complaint in October following similar anti-union comments from Jassy. “All these Succession-style billionaires should be held accountable for unlawful actions, and that’s what we’re doing,” said ALU attorney Seth Goldstein. “[The complaint] is going to send a strong message to the union-busters and to CEOs like Jassy who think that they can say whatever they want to and they won’t be held accountable.”

In cases like this, NLRB prosecutors’ complaints are sent to agency judges, whose rulings can be appealed to labor board members in Washington and, if it stretches beyond that, to federal court. But, unfortunately, although the National Labor Relations Act (NLRA) allows the independent agency to make employers reinstate wrongly terminated workers and change policies, it can’t issue fines to them (or individual executives like Jassy). So don’t be shocked if this saga makes its way through the courts as Amazon flexes its muscle to try to avoid meaningful consequences and prevent the lone unionized warehouse from sparking a broader movement within the corporation.

Adobe adds generative AI editing to Photoshop

As generative AI has taken the tech world by storm, it was only a matter of time before Photoshop got in on the action. Adobe announced today that a new Generative Fill feature is coming to its ubiquitous photo-editing software later this year. The company promises “a magical new way to work” as the Firefly-powered feature lets you add, remove and extend visual content based on natural-language text prompts. “Generative Fill combines the speed and ease of generative AI with the power and precision of Photoshop, empowering customers to bring their visions to life at the speed of their imaginations,” said Ashley Still, Adobe’s senior VP of Digital Media.

Adobe’s Generative Fill is equivalent to DALL-E 2’s inpainting (generating AI content within a section of an image) and outpainting (AI-generated content extending beyond the image’s borders). So, for example, if you want to inpaint the sky to look surreal in a photo you took, select that area and type something like “surreal sky with strange colors” into the prompt field. Or, if you took a picture that you wish had a wider aspect ratio, you can select the area outside of it and prompt it to extend the scene.
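The mechanics behind this kind of selection-based editing are straightforward to picture: the editor turns your selection into a binary mask, and the model regenerates only the masked pixels from the text prompt, leaving the rest of the image untouched. A toy sketch of the masking step (purely illustrative; Adobe hasn't published Generative Fill's internals, and real editors use freeform selections rather than simple boxes):

```python
def build_inpaint_mask(width, height, box):
    """Build a simple binary mask for inpainting-style editing.

    `box` is (left, top, right, bottom). Pixels inside the box are
    marked 1 (regenerate from the text prompt); pixels outside are
    marked 0 (keep the original image content).
    """
    left, top, right, bottom = box
    return [
        [1 if left <= x < right and top <= y < bottom else 0
         for x in range(width)]
        for y in range(height)
    ]
```

Outpainting is the same idea with the mask inverted in spirit: the canvas is enlarged, and everything beyond the original image's borders is marked for generation.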

Adobe says the feature matches the original scene’s perspective, lighting and style, allowing you to alter images radically with minimal legwork. The company provided a marketing video showing three AI-generated results to choose from for each text prompt.

Split-screen image (before and after) from Adobe’s Generative Fill feature: on the left, an original photo of a deer in a forest, with the deer outlined as a selection and a text prompt below.

To try to help separate its AI work from the pack on an ethical level, the company says its current-generation model only learns from Adobe Stock images and “other public domain content without copyright restrictions.” In addition, as part of Adobe’s Content Credentials initiative, AI images made in Photoshop will be encoded with an invisible digital signature indicating whether it’s human-made or the product of AI. As generative AI makes it increasingly difficult to separate the organic from the algorithmic — and as artists worry about their work being plagiarized by career-killing machines — Adobe’s more transparent approach is refreshing.

Generative Fill will be available in the Photoshop desktop beta starting today. Adobe adds that the feature will be “generally available in the second half of 2023.” Finally, Generative Fill is also available today on the web as a module in the (invite-only) Firefly beta.

BMW reveals three new EVs for its summer 2023 lineup

BMW announced new EVs today as part of its summer 2023 lineup. The new models include the i4 xDrive40 (an all-wheel-drive variant of the i4), the single-motor i7 eDrive50 and the hybrid 750e xDrive. In addition, the automaker revealed an updated infotainment operating system for some models.

The 2024 i4 xDrive40 is an all-wheel-drive, 396-horsepower variant of the popular Gran Coupe. The all-electric vehicle has dual motors that provide an estimated 307-mile range using the standard 18-inch tires (it drops to about 282 miles with optional 19-inch wheels). In addition, the EV can accelerate from zero to 60 in 4.9 seconds. The i4 xDrive40 will start at $61,600 with an added $995 destination fee. BMW expects US-based deliveries to begin in the third quarter of 2023.

Meanwhile, the rear-wheel-drive i7 eDrive50 is powered by a single GEN5 motor, supplying 449 horsepower. BMW will announce range and performance details “closer to market launch” this fall, but we know the model will start at $105,700 (plus destination fee). Finally, the 750e xDrive combines a 308-horsepower six-cylinder internal combustion engine with a 194-horsepower electric motor for a combined 483 horsepower and 516 lb-ft of torque. In addition, the plug-in hybrid’s purely electric range is rated at 35 miles. The 750e xDrive will start at $107,000 plus the same $995 destination fee. It also launches in the US this fall.

Screenshot of BMW's updated Operating System 8.5 infotainment home screen. On the left, it includes phone controls with navigation at the right and a taskbar (with shortcuts) at the bottom.

The automaker is updating its infotainment operating system “in certain models.” BMW Operating System 8.5 gives the home screen “clearly arranged functions” designed to work better on the company’s curved display. Ridding itself of sub-menus, it uses a “zero-layer principle” that keeps all relevant controls and information on a single level, using widgets arranged vertically on the driver’s side. In addition, it includes icons for quick access to the climate control menu, app library, navigation and Apple CarPlay / Android Auto.

TikTok is suing Montana over statewide ban

TikTok filed a lawsuit on Monday in the U.S. District Court of Montana to challenge the state’s ban of the social platform, as reported by The Wall Street Journal. The case was brought against the state Attorney General Austin Knudsen.

Montana’s governor signed the bill into law last week, just one month after it passed through the state legislature. It was met with immediate pushback — a group of creators quickly sued the state, calling the law unconstitutional. Now TikTok is suing the state directly with similar claims, stating in the lawsuit that Montana’s law violates the First Amendment. “Montana's ban abridges freedom of speech in violation of the First Amendment, violates the U.S. Constitution in multiple other respects, and is preempted by federal law,” the lawsuit reads.

The law prohibits the ByteDance-owned platform from operating in the state, as well as preventing Apple’s and Google’s app stores from listing the TikTok app for download. Although it isn’t clear how Montana plans to enforce the ban, the law states that violations will incur fines of $10,000 per day. However, individual TikTok users won’t be charged. You can read the full TikTok vs. Montana suit here (via NPR).

Meta’s open-source speech AI recognizes over 4,000 spoken languages

Meta has created an AI language model that (in a refreshing change of pace) isn’t a ChatGPT clone. The company’s Massively Multilingual Speech (MMS) project can recognize over 4,000 spoken languages and produce speech (text-to-speech) in over 1,100. Like most of its other publicly announced AI projects, Meta is open-sourcing MMS today to help preserve language diversity and encourage researchers to build on its foundation. “Today, we are publicly sharing our models and code so that others in the research community can build upon our work,” the company wrote. “Through this work, we hope to make a small contribution to preserve the incredible language diversity of the world.”

Speech recognition and text-to-speech models typically require training on thousands of hours of audio with accompanying transcription labels. (Labels are crucial to machine learning, allowing the algorithms to correctly categorize and “understand” the data.) But for languages that aren’t widely used in industrialized nations — many of which are in danger of disappearing in the coming decades — “this data simply does not exist,” as Meta puts it.

Meta used an unconventional approach to collecting audio data: tapping into audio recordings of translated religious texts. “We turned to religious texts, such as the Bible, that have been translated in many different languages and whose translations have been widely studied for text-based language translation research,” the company said. “These translations have publicly available audio recordings of people reading these texts in different languages.” Incorporating the unlabeled recordings of the Bible and similar texts, Meta’s researchers increased the model’s available languages to over 4,000.

If you’re like me, that approach may raise your eyebrows at first glance, as it sounds like a recipe for an AI model heavily biased toward Christian worldviews. But Meta says that isn’t the case. “While the content of the audio recordings is religious, our analysis shows that this does not bias the model to produce more religious language,” Meta wrote. “We believe this is because we use a connectionist temporal classification (CTC) approach, which is far more constrained compared with large language models (LLMs) or sequence-to-sequence models for speech recognition.” Furthermore, despite most of the religious recordings being read by male speakers, that didn’t introduce a male bias either: the model performs equally well with female and male voices.
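CTC's constraint is easy to see in miniature: the model emits one token per audio frame (including a special "blank" token), and decoding simply collapses repeated tokens and drops the blanks, leaving no room for the model to freewheel extra words the way a generative LLM can. A minimal sketch of greedy CTC decoding (using blank ID 0, a common convention rather than anything Meta has specified):

```python
def ctc_greedy_decode(frame_tokens, blank_id=0):
    """Collapse a per-frame CTC token sequence into an output sequence.

    Repeated tokens are merged (they represent one sound held across
    frames), and blank tokens are dropped. A blank between two copies
    of the same token keeps them as genuinely repeated outputs.
    """
    output = []
    prev = None
    for token in frame_tokens:
        if token != prev and token != blank_id:
            output.append(token)
        prev = token
    return output
```

Because every output token must be anchored to input frames this way, the decoder can only transcribe what it hears, which is why training on scripture recordings doesn't push the model toward producing religious language.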

After training an alignment model to make the data more usable, Meta used wav2vec 2.0, the company’s “self-supervised speech representation learning” model, which can train on unlabeled data. Combining unconventional data sources and a self-supervised speech model led to impressive outcomes. “Our results show that the Massively Multilingual Speech models perform well compared with existing models and cover 10 times as many languages.” Specifically, Meta compared MMS to OpenAI’s Whisper, and it exceeded expectations. “We found that models trained on the Massively Multilingual Speech data achieve half the word error rate, but Massively Multilingual Speech covers 11 times more languages.”
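Word error rate, the metric behind that comparison, is the word-level edit distance (substitutions, insertions and deletions) between a reference transcript and the model's output, divided by the length of the reference. A small self-contained implementation:

```python
def word_error_rate(reference, hypothesis):
    """Compute word error rate via dynamic-programming edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,                    # deletion
                d[i][j - 1] + 1,                    # insertion
                d[i - 1][j - 1] + substitution_cost # match / substitution
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

So "half the word error rate" means MMS makes roughly half as many per-word transcription mistakes as the Whisper models it was compared against, while covering far more languages.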

Meta cautions that its new models aren’t perfect. “For example, there is some risk that the speech-to-text model may mistranscribe select words or phrases,” the company wrote. “Depending on the output, this could result in offensive and/or inaccurate language. We continue to believe that collaboration across the AI community is critical to the responsible development of AI technologies.”

Now that Meta has released MMS for open-source research, it hopes it can reverse the trend of technology whittling the world’s languages down to the 100 or fewer most often supported by Big Tech. It sees a world where assistive technology, TTS and even VR / AR tech allow everyone to speak and learn in their native tongues. It said, “We envision a world where technology has the opposite effect, encouraging people to keep their languages alive since they can access information and use technology by speaking in their preferred language.”
