Spotify has unveiled an upcoming interactive feature called SongDNA designed to show you the samples, collaborators and covers included in a given track, the company announced. As part of that update, Spotify also revealed that it has acquired WhoSampled, the company behind the SongDNA technology.
"Through our recent discussions with Spotify, it became clear that we share a strong belief in the power of musical context — and a vision for helping listeners go deeper into the songs they love," the WhoSampled team wrote in a blog post.
Terms of the deal weren't disclosed, but Spotify is acquiring both the WhoSampled team and its database. WhoSampled's standalone platform and brand will continue to operate following the deal, with improvements like faster moderation times, the elimination of display ads, and free downloads and subscriptions for its mobile apps.
Spotify Premium users will see the SongDNA feature in the "Now Playing" view. It's described as a way to see connections between songs, "showing collaborators, samples and covers all in one place," Spotify wrote.
In Doja Cat's "Kiss Me More (feat. SZA)," for example, SongDNA shows Carter Lang and two other composers, along with Doja Cat and SZA as the main artists. It reveals that the track samples Olivia Newton-John's "Physical" and that "Kiss Me More" has been covered multiple times, most prominently in a Japanese version by the artist Rainych.
Spotify is also working on a feature called "About the song," showing swipeable cards in the "Now Playing" view. Those will reveal information like the inspiration for a song, how the music was created and the cultural impact — all with links to the sources.
London-based WhoSampled tracks over 1.2 million songs and 622,000 samples in its database, along with covers, remixes and artists. Its mobile app offers a Shazam-style music recognition service that can tell you the song you're listening to and any samples it might contain. The two companies have partnered previously on a deal that allows WhoSampled users to access their Spotify playlists and tracks.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/streaming/spotifys-songdna-feature-will-show-you-which-songs-are-sampled-on-a-track-130050490.html?src=rss
Warner Music Group (WMG) settled a lawsuit with an AI company in exchange for a piece of the action. The label announced on Wednesday that it had resolved a 2024 lawsuit against AI music creation platform Udio. As part of the deal, Udio gets to license Warner's catalog for an upcoming music creation service. This follows a similar settlement between Universal Music Group and Udio, announced last month.
Udio's service will allow subscribers to create, listen to and discover AI-generated music trained on licensed work. You’ll be able to generate new songs, remixes and covers using favorite artists' voices or compositions. The boundaries between human creation and an algorithm's approximation of it are about to grow murkier: not in terms of artistic quality, but in terms of what proliferates online.
WMG is framing the deal as a win for artists, who will — if they choose to opt in — gain a new revenue stream. Ahead of the service’s launch, Udio will roll out "expanded protections and other measures designed to safeguard the rights of artists and songwriters."
So, the settlement does at least appear to reassert some control over artists’ work. What the normalization of robot-made music will do for society's collective tastes is another question.
A neon sign on a wall, reading, "You are what you listen to."
Mohammad Metri / Unsplash
The settlement echoes a warning Spotify sounded to musicians and labels last month. "If the music industry doesn't lead in this moment, AI-powered innovation will happen elsewhere, without rights, consent or compensation," the company wrote. Spotify plans to launch "artist-first AI music products" in the future, a vague promise to be sure. However, given Udio's plans, it wouldn't be surprising to see the streaming service cooking up a similar licensed AI music-creation product.
"We're unwaveringly committed to the protection of the rights of our artists and songwriters, and Udio has taken meaningful steps to ensure that the music on its service will be authorized and licensed," Warner Music CEO Robert Kyncl wrote in a press release. "This collaboration aligns with our broader efforts to responsibly unlock AI's potential - fueling new creative and commercial possibilities while continuing to deliver innovative experiences for fans."
This article originally appeared on Engadget at https://www.engadget.com/ai/warner-signs-ai-music-licensing-deal-with-udio-213433325.html?src=rss
Most synthesizers look and feel like appliances. They’re plastic boxes mass-produced in factories, efficient and functional but utterly lacking in personality or warmth. Pianos and guitars get to be handcrafted instruments with wood grain and visible joints, while synths are treated like glorified toasters with circuit boards inside. That disconnect between electronic music and tactile craft has always felt like a missed opportunity, especially when you consider how satisfying it is to play a real wooden keyboard.
One maker decided to fix this by building a fully functional synthesizer from scratch, using materials that sound completely impractical. The result is a compact, 34-key synth with a fiberglass-reinforced cardboard body, a steam-bent walnut frame, and individual keys handmade from oak and walnut. It looks like something between a vintage record player and a mid-century hi-fi component, with a turquoise fiberglass shell and warm wooden accents that feel more like furniture than electronics.
The body starts as folded cardboard panels cut from a template, then gets layered with fiberglass cloth and epoxy until it transforms into a rigid, glossy shell. The process borrows from old automotive techniques where fiberglass shaped custom car bodies in the 1950s, giving the synth a retro-futuristic sheen. Around the perimeter sits a continuous steam-bent walnut strip with oval cutouts that mimic speaker grilles on vintage radios, adding visual warmth and a furniture-like presence.
The keys are where the craft really shows. Black keys are made from laminated walnut offcuts, while white keys are cut from oak for contrast and durability. Each key is individually shaped, drilled for a shared steel rod pivot, beveled to prevent jamming, then coated with fiberglass and sanded up to 3000 grit for a smooth finish. The result looks and feels closer to a piano than a typical plastic keyboard.
Underneath sits a custom flexible printed circuit with interdigitated copper pads and rubber dome switches. When you press a key, the dome collapses and bridges the pads, closing a circuit that a Teensy microcontroller scans continuously. The Teensy sends MIDI messages to a Raspberry Pi running Zynthian, an open-source synth platform packed with engines and presets, all displayed on a small touchscreen.
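The maker hasn't published the firmware, but the scan-detect-send loop described above follows a familiar pattern. As a rough sketch (in Python for readability; the actual build would run C++ on the Teensy), each scan compares the current key states against the previous snapshot and emits MIDI note-on/note-off messages for whatever changed — the key-to-note mapping and velocity here are assumptions:

```python
# Simplified model of the keyboard scan loop: poll all 34 keys,
# detect state changes since the last pass, and build the 3-byte
# MIDI channel-voice messages a microcontroller would send onward.

NOTE_ON = 0x90    # MIDI status byte: note on, channel 1
NOTE_OFF = 0x80   # MIDI status byte: note off, channel 1
BASE_NOTE = 48    # lowest key mapped to C3 (an assumption)

def scan_keys(previous, current):
    """Compare two snapshots of key states; return MIDI messages
    (status, note, velocity) for keys pressed or released."""
    messages = []
    for key, (was_down, is_down) in enumerate(zip(previous, current)):
        if is_down and not was_down:
            messages.append((NOTE_ON, BASE_NOTE + key, 100))   # fixed velocity
        elif was_down and not is_down:
            messages.append((NOTE_OFF, BASE_NOTE + key, 0))
    return messages

# Example: between scans, key 0 was pressed and key 2 was released
prev = [False, False, True] + [False] * 31
curr = [True, False, False] + [False] * 31
print(scan_keys(prev, curr))   # → [(144, 48, 100), (128, 50, 0)]
```

Rubber dome switches don't sense how hard a key is struck, which is why a fixed velocity is a reasonable guess here; the Zynthian side simply receives standard MIDI and doesn't care how the messages were produced.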
Of course, using cardboard and steam-bent walnut creates challenges the designer readily admits. Cardboard turned out to be impractical, requiring multiple fiberglass layers and tedious filling. Walnut is notoriously stubborn to bend, needing kerf cuts and boiling water to soften the fibers. The designer suggests foam board or 3D printing as easier alternatives and notes that more precise tools would have made the keys cleaner.
What makes this synth significant is how it challenges the assumption that electronic instruments have to be cold and industrial. By using wood, fiberglass, and visible handwork, it reintroduces warmth and personality into something usually purely functional. It’s less a finished product and more proof that synthesizers can be beautiful, tactile objects worth admiring even when silent.
It was pretty game-changing back in 2017 when Nintendo dropped the Switch, ushering in a wave of two-player gaming on the same console. Two Joy-Cons, one console, mano-a-mano gaming. You didn’t need an extra controller – Nintendo built one right into the Switch. Designer Eunjun Jang wants to bring that same modular multiplayer culture to deejaying, because it’s an activity that is conducive to socializing.
Nobody plays music alone; the act of deejaying is inherently social. Look at the Boiler Room sets, where the deejay is surrounded by sometimes a hundred or more people, absorbing the energy emanating from the console and the speakers. The ‘Twin’ DJ Console just turns that emotionally social activity into a physically social one. Two player decks, one mixer in the middle, quite like a Nintendo Switch but for music. The units snap together to create a single two-player console, but split them apart and they’re like a mano-a-mano setup for two deejays trying to collab in real-time.
Designer: Eunjun Jang
The Twin has this clean-yet-fun design, sort of like if Teenage Engineering met Braun. The console strays away from extra fluff, giving each player just a tiny screen that lets them monitor effects and whatnot. The music itself plays from smartphones which pair with each of the player units. Run the Twin app and place each phone above the player and you sort of see how the entire setup looks like a Pioneer XDJ or something. The controls are simplified, and the entire device is nearly 60-70% smaller than your average DJ console. This makes the Twin perfect for using on the go, in your bedroom, or at a café.
The design is truly fascinating, although it begs for some color and vibrancy. You’ve got the mixer front and center, with EQ knobs, a cue button for each deck, channel faders, and a crossfader that lets you swap between left and right decks, so you’re shifting between songs. On the players themselves, you’ve got a tempo key to let you manually sync songs, a cue key that lets you trigger a particular part of a song, and a play-pause key, which together form the most crucial set of controls. There are four extra keys in the top corner, along with a shift key, and while most DJ consoles have a disc that you spin to rewind/forward or scratch music, the Twin ditches that for an elegant jog-wheel on the side. It’s cute, and it gets the job done, although seasoned deejays may have their own hot-takes.
The modularity is what sets the Twin apart. You can pull the individual parts apart and sit across from each other, mixing music from your phones. Why build a Spotify playlist when you can literally play a deejay set in your jammies? It feels much more involved, allowing friends to bond and jam together in a way that Spotify or Apple Music just won’t let you.
Pogo pins allow you to snap the elements together or pull them apart, quite like the Nintendo Switch. Ultimately, that’s exactly the vibe Eunjun was going for. Games are nice, but music is just *chef’s kiss*. Each player gets their own dedicated deck, but you might end up fighting for the mixer if you’re not careful! You want to vibe together like Disclosure, not call it quits like Daft Punk!
That said, the Twin still feels like just a toy right now. It lacks the extra features that most professional DJs would really need: proper effects, looping, the ability to add separate vocal channels, or even shift pitch. Then again, most amateur-level DJ kits stick to the basics, letting people master simpler techniques before moving on to larger tasks. Although, that’s where the Twin’s modularity does come in handy. Imagine if Eunjun just designed a set of pro-grade players that you could snap to your mixer, turning your entry-level DJ set into something capable of sustaining a block party!
At this point, the streaming music landscape feels pretty well settled. Giants like Spotify, Amazon, Apple and YouTube duke it out at the top, while plenty of other players like Qobuz, Tidal and Deezer try their best to stand out from the pack. Somewhat surprisingly, though, a new player emerged in September. Coda Music used the recent backlash around Spotify co-founder Daniel Ek as a way to differentiate itself from the number one streamer, calling out Ek’s controversial funding of defense technology firm Helsing earlier in the year. (Spotify’s refusal to stop airing ICE recruitment ads certainly hasn’t helped the platform, either.)
Today, the fledgling service is announcing a new feature that feels designed to answer another of the recent Spotify controversies: AI slop music flooding the platform. In response, Coda Music is launching AI identification tools with the purpose of finding and labeling songs that weren’t composed by actual humans.
There are a few prongs to Coda’s approach. For starters, any artist added to Coda will be reviewed for AI origins, and their profile will be labeled “AI Artist” so that listeners know what they’re getting into. Coda is also letting users flag profiles of artists if they suspect the music is AI-generated; the company will then review them and label them if necessary.
Finally, there’s a toggle in settings that just lets you turn off AI artists entirely. Obviously, how useful this setting is will depend on how good Coda gets at labeling AI-created music as such, but I can definitely see the appeal in just flipping that to “off” and avoiding as much slop as possible.
Besides its stance on AI and the assurance that the company does not “invest in war,” there are a few other differentiators about Coda Music. The company says that it is currently paying the “highest per-stream rate” in the industry — while at the same time, it acknowledges that no one is paying enough to artists. “The real problem isn’t how much is paid per stream, it’s that streaming alone doesn’t pay enough,” the company’s website says. “And minor improvements to a fundamentally flawed per stream model will not help.”
To that end, the company also lets users pick an “independent or qualifying artist” who gets $1 of their monthly subscription fee. Sure, it’s only a dollar, but it’s the kind of thing that sweetens the pot at least a little bit for musicians.
And Coda has good reason to want to make itself visible to users and artists alike. The last major differentiator for Coda is the company’s ambition to turn its app into a social, music-sharing feed where you get recommendations from humans rather than algorithms. To that end, users can share anything from the app in their feed, along with external links and photos (go ahead and post your blurry images from that NIN concert!).
The app’s home page prominently features fan-made playlists and recommended users to follow in addition to the usual suggestions based on what you’re listening to already. And there’s a social tab where you can see posts from people you follow; share songs, artists or albums; and see posts from artists you follow. That last part is key, as Coda wants artists interacting and sharing as well as just end users.
It reminds me a little bit of the Fan Groups feature that Amazon Music just announced — and as with that feature, the problem facing Coda is getting people to start contributing to a new network rather than just posting things on whatever app they’re already using. Fortunately, music nerds love a community, so it’ll be interesting to see if this takes off at all.
As for the new features for reporting and filtering out AI music, Coda says they’re available as of today in its iOS and Android apps. The company doesn’t have a web interface yet, but says it is coming soon. If ducking AI-generated tunes is something that catches your attention, Coda currently costs $11 a month, or $17 per month for a family plan with up to four listeners.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/new-streaming-app-coda-music-is-rolling-out-tools-for-labeling-and-blocking-ai-generated-tunes-140000530.html?src=rss
Amazon is in the middle of rolling out Alexa+, the long-awaited, AI-infused update for its voice assistant. At the same time, the company has also been giving a fair bit of attention to Amazon Music, adding things like Alexa+ integration and AI-powered playlists. And as of today, Amazon is rolling out a new community-focused feature called Fan Groups. As the name suggests, Fan Groups are a way for users to connect around different musical interests — and what makes this more fun to me is that these aren’t limited to Amazon-curated groups.
Once Fan Groups fully rolls out, anyone will be able to create a public group in Amazon Music based around a genre, region, time period or anything else you want to focus the group on. Right now, Fan Groups are only available in Canada during a beta period, but they’ll come to other countries (including the US) early next year. Amazon has had testers building out some Fan Groups in the meantime so that new users don’t walk into a ghost town.
When you first open the Groups tab, which will be part of Amazon Music’s bottom navigation, you’ll see a top rail with Groups you’ve joined and a scrolling list of ones you can check out. Some of the examples Amazon showed off include “K-pop Now,” “Red Dirt Americana” and “Indie Insiders,” all of which feel pretty self-explanatory. Each group includes a “featured” playlist at the top and then a scroll of posts by people who’ve joined the group.
Members can share any song, album or playlist on Amazon Music along with a comment; you can then have a discussion on the post. It’ll be familiar to anyone who has used a Facebook Group over the years. Somewhat interestingly, Amazon is also letting you share external links. Beyond the “posts” view, there’s also a music-only tab that just shows everything that has been shared to the group. One of the more intriguing features in Fan Groups is the ability to just hit “play” and listen to everything that’s been shared over time — it’s something that should be good for exploration as well as just seeing if the group’s tastes are aligned with your own.
In the quick demo I saw of Fan Groups, it felt like the rare new social tool that could be useful. Music is obviously an extremely social art, one that so many love sharing with other fans. Discovery is also a huge part of being a music fan, and I appreciate the fact that Amazon is building a way to get recommendations from other human beings and not just algorithms and AI. The only issue is that getting traction for a social network built inside of a specific service isn’t the easiest thing to do — you could just as easily share music on Facebook or any number of other apps. But the potential for finding new music and sharing what you’re into with fellow obsessives makes this feature worth a look once it fully launches.
This article originally appeared on Engadget at https://www.engadget.com/amazon-musics-fan-groups-are-a-refreshingly-old-school-way-to-share-and-find-tunes-150000084.html?src=rss
Spotify delivers a lot of personalized, data-driven music recommendations, and now the streaming service is adding new weekly snapshots of your listening activities. Listening stats will highlight the artists and songs that a user has heard the most over the previous four weeks and create a playlist inspired by those selections. And according to the blog post: "Each week, it also includes a special highlight that captures what makes your listening unique, whether it’s a milestone, a new discovery, or a fan moment." That's a pretty vague introduction, and how engaging the highlights are in practice will likely depend on how much they actually surprise and delight listeners.
It sounds like a midway point between the company's year-end Wrapped data package and its daily mix playlists. More ways to view listening data are always fun, and several competing services like Apple Music, Amazon Music and YouTube Music have already upped how often they share reports with their users. This seems to be Spotify’s move to catch up on that trend.
The listening stats will live under your profile and can be shared internally on Spotify or as an external link. The new features will be available for both free and paying listeners across 60 international markets.
This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/spotify-introduces-weekly-listening-stats-140000577.html?src=rss
This story about Paul McCartney begins with one of his old bandmates. "I'm not really Beatle George," the ever-philosophical George Harrison once said. "For me, Beatle George was a suit or a shirt that I once wore. And the only problem is, for the rest of my life, people are going to look at that shirt and mistake it for me."
On one hand, that’s, well, George being George. But his quote does speak to our need to mythologize the Beatles. It’s hard not to! The music is so exquisite, influential and timeless that we look for grand stories to tell about it. We want a stronger connection to it, so we pore over biographies, interviews and documentaries. We seek meaning and purpose in their story.
Still, it must be surreal to be one of the four protagonists of that story. At some point, the narrative takes on a life of its own that may not reflect your experience. McCartney alluded to that in the 2013 song "Early Days." "Now everybody seems to have their own opinion on who did this and who did that," he sang. "But as for me, I don't see how they can remember when they weren't where it was at."
So, I’ll try not to mythologize the Beatles too much as I describe my experience photographing Sir Paul McCartney last month. I will, of course, fail spectacularly at that mission.
The crowd ranged from seniors to teens in Sgt. Pepper costumes.
Will Shanklin for Engadget
Months before I watched him play for nearly three hours in front of 15,000 fans (at age 83!) at Albuquerque’s Isleta Amphitheater, I sent a press request to his team. A few days before the concert, I learned that my photography pass had been approved. Once it sank in, I screamed and giggled, not unlike the teenagers in Ed Sullivan's audience. (Don't judge those gals until you've been near a Beatle!)
But there wasn’t much time to soak up the excitement. Without any real cameras on hand — my iPhone 17 Pro certainly wasn’t going to cut it — and only a few days to prepare, some quick decisions were in order. After enough internal debate to make my head spin off its axis, I settled on an oddball combination. For the body, I went with the Canon EOS R50, an ultra-compact mirrorless with a 24-megapixel APS-C sensor.
Was it the best one available? Not at all. But instead of renting a $3,000 camera, I decided to buy something in my budget that I'll enjoy using for years. I'd already eyed it after handling a display model and reading Steve Dent's review. Plus, it created a fun challenge: How can a sub-$800 consumer-facing camera stand up to the unique demands of concert photography?
The lens, on the other hand, is no place to mess around. So I rented the Canon EF 70-200mm f/2.8 L IS USM, a gargantuan, professional-grade telephoto. (It's the precursor to this $2,399 one.) This choice was simple: It was by far the most concert-appropriate lens available to rent. It maintains sharpness and contrast across its long zoom range, its autofocus is fast and its f/2.8 aperture is crucial for unpredictable stage lighting.
Put the tiny camera and ginormous lens together (with this $38 adapter), and you get the odd couple you see below. To say this sucker was front-weighted would be an understatement.
"She's so heavy..."
Will Shanklin for Engadget
Camera in hand (and Beatles hoodie equipped), I took my position in the tight press pen. The photography area was about 150 yards from the stage and didn’t allow for lateral movement, so ideas for creative compositions were set aside. My only option was to push that glass out to 200mm (or close to it) and fire away. Save those composition ideas for when it's time to crop.
When photographing someone like Sir Paul, you ideally want an image that captures not only the man and the musician, but also that larger-than-life myth. It should be something grand that you’d want to hang on your wall. No pressure!
Sir Paul's first number was the John Lennon-penned classic "Help!" Until this year's leg of the Got Back tour, McCartney hadn't played the song in full since 1990. We can only speculate about his reasons for pulling it out of his bag now. But I feel like the song's desperate pleas gain new poignancy in 2025. I can't count the times I've wanted to cry out to someone — anyone! — to "Please, please help me" after reading the news.
We were huddled close enough together that I was glad I wore these $16 kneepads under my jeans. When the crowd in front of us settled down a bit, I kneeled to give my photographer cohorts more elbow room. My right knee bounced pleasantly onto the cozy leg pillow.
Will Shanklin for Engadget
With one song already down, the R50's burst mode was getting a workout. The stock Canon battery was still going strong, but I had these two third-party spares stashed in this camera bag to swap out if necessary. (I didn't end up needing them, despite snapping over 600 photos.)
McCartney transitioned into his second number, "Coming Up," the first track from 1980's McCartney II. That LP was ahead of its time, embracing synths, drum machines and other studio tricks before they became commonplace. Contemporary critics didn’t care much for it, but it later became a cult classic. That combination illustrates something about his solo career: always experimenting, sometimes misunderstood, but ultimately vindicated.
Two songs were over in a flash. Macca addressed the crowd, and picture time was over. Off to leave my camera with security, and claim the faraway lawn seat I bought long before I knew I'd have press access.
The rest of McCartney's set included a perfect balance of Beatles, Wings and solo numbers. (There was even an old Quarrymen song, "In Spite of All the Danger.") As you can see in the photos, he started on his trademark Höfner bass. But he moved on to piano, acoustic and electric guitars and ukulele. The latter was for his beautiful rendition of Harrison's "Something."
That number wasn’t the only moment that moved me. The most notable was when he teamed with Lennon on "I've Got a Feeling." Present-day McCartney singing with 1969 Lennon, who appeared on the giant screen above (via the restored rooftop concert footage in Get Back), was profound. "I love that one because I get to sing with John again," he said.
Will Shanklin for Engadget
Sir Paul strikes me as someone who’s always looking forward. But the Got Back tour is a chance to look back. It lets us, the romanticizing fans, join him on the long and winding road from the Quarrymen to today. The entire production made me feel like a passenger on his journey.
I could go on. But you don't need me to elevate Paul McCartney's musical legacy any more than you need me to explain Michael Jordan's basketball skills or Meryl Streep's acting chops. Listen to the music — and catch his tour if you can — and you'll feel it.
As for the photos, my favorite is the one at the top of this article. (I also included a color version in the gallery below.) It’s the only one that (to me) captures the man, musician and myth as he plays his Höfner bass. Out of more than 600 rapidly-fired photos, one that feels just right is good enough for me.
But even if they all sucked, who cares! Decades from now, I'll tell everyone at the old folks' home that, when I was young (and my heart was an open book), I snapped some pictures of Sir Paul McCartney. The story may grow more inflated by then, and maybe I’ll invent new details. But perhaps I can be forgiven for a bit of mythologizing.
This article originally appeared on Engadget at https://www.engadget.com/cameras/the-gear-i-used-to-photograph-paul-mccartney-133033591.html?src=rss
Teenage Engineering has never been content to stay within conventional product categories, consistently pushing boundaries between instruments, toys, and art objects. Their approach to music hardware combines Swedish design sensibilities with genuine technical innovation, creating devices that feel both familiar and revolutionary. The company’s latest announcement signals another bold expansion into uncharted territory, moving beyond synthesizers and samplers into the world of vocal performance.
Today’s unveiling of the “Riddim N’ Ting” bundle showcases this adventurous spirit, pairing the recently released EP-40 Riddim sampler with the brand-new EP-2350 Ting microphone. The Ting represents Teenage Engineering’s first foray into microphone design, but it is far from a traditional vocal mic. Instead, it is a compact effects processor, sample trigger, and vocal manipulator rolled into one handheld device, complete with motion sensors and live-adjustable parameters that let performers tilt and move the mic to control everything from echo intensity to robotic voice modulation in real time.
So the Ting itself is this ridiculously lightweight object, weighing a scant 90 grams, that feels less like a piece of serious audio equipment and more like a prop from a retro sci-fi film. That’s the point. It houses four primary effects: a standard echo, an echo blended with a spring reverb, a high-pitched “pixie” effect, and a classic “robot” voice. A physical lever and an internal motion sensor allow you to manipulate the effect parameters by physically moving the mic, turning a vocal performance into a kinetic activity. Four buttons on the side are dedicated to triggering samples, which come preloaded with sound system staples like air horns and lasers but are fully replaceable. It’s a dedicated hype-mic, a performance tool designed for immediate, tactile fun rather than pristine vocal capture.
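Teenage Engineering hasn't published how the Ting maps motion to its effects, but the general idea is straightforward: the sensor reading becomes a continuous controller for an effect parameter. A hypothetical sketch, with all ranges and the linear mapping assumed for illustration:

```python
# Hypothetical illustration of motion-controlled effect parameters:
# map a tilt angle from a motion sensor onto an echo feedback amount.
# The ranges and linear curve are assumptions, not the Ting's firmware.

def tilt_to_feedback(tilt_degrees, min_fb=0.1, max_fb=0.9):
    """Map tilt (-90..90 degrees, clamped) linearly to echo feedback."""
    clamped = max(-90.0, min(90.0, tilt_degrees))
    normalized = (clamped + 90.0) / 180.0   # 0.0 .. 1.0
    return min_fb + normalized * (max_fb - min_fb)

print(tilt_to_feedback(0))    # mic held level → mid feedback
print(tilt_to_feedback(90))   # tilted fully up → near max feedback
```

In practice the reading would be smoothed and rescaled per effect, but the principle — a physical gesture continuously driving a parameter that would otherwise live on a knob — is what makes the performance kinetic.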
Its lo-fi audio character is a feature, not a bug, leaning into the saturated, gritty vocal sounds that define dub and dancehall sound system culture. While you could draw parallels to devices like Roland’s VT-4 for vocal processing or Korg’s Kaoss Pad for real-time effects, the Ting’s genius is its form factor. It integrates these functions directly into the microphone itself, removing a layer of abstraction and making the performance more immediate. It connects to any system via a 3.5mm line out, but it’s clearly designed to be the perfect companion for its partner device. This is where the workflow becomes a self-contained creative loop.
That partner, the EP-40 Riddim, is the anchor for all the Ting’s chaotic energy. While it follows the established format of the EP-series, its focus is sharp. It’s a sampler and groovebox loaded with over 400 instruments and sounds curated by legendary reggae producers like King Jammy and Mad Professor. The specs are solid: 12 stereo or 16 mono voices, a 128MB system memory, and a subtractive synth engine for crafting classic bass and lead tones. It includes seven main effects and twelve punch-in effects, all tailored for dub-style mixing. Connectivity is standard for Teenage Engineering, with stereo and sync I/O, MIDI, and USB-C. It’s a capable sampler on its own, but its true purpose is realized when paired with the Ting.
Together, they form a portable, battery-powered sound system in a box. The workflow is obvious and effective: you build a beat on the Riddim, then plug the Ting directly into its input to lay down vocals, trigger hype samples, and perform live dub-outs with the effects. For their launch, Teenage Engineering is bundling them together and offering the Ting for free, a clever move that ensures this new, weirder device gets into users’ hands immediately. It’s a compelling package that champions spontaneity and play. It proves that the most engaging technology isn’t always about higher fidelity or more features, but about creating a more direct and enjoyable path from an idea to its execution.
Amazon has launched its new and improved AI assistant in the Amazon Music app. From today, anyone signed up to Alexa+ Early Access with the latest version of the app downloaded to their iOS or Android device can start using Amazon’s reimagined virtual assistant for music discovery and organizing their libraries.
To access the chatbot, you tap the “A” button in the lower right corner of the screen when Amazon Music is open. You can then test its knowledge by asking it a range of questions, from something as basic as finding a recently released song by a particular artist, to more complex searches based on a single lyric or the name of the TV show the song you’re trying to find is featured in.
Alexa+ is designed for more conversational interactions, so you can use natural language prompts and then ask follow-up questions as you would if you were talking to a friend, to narrow down its search results. Amazon says you can search for specific eras, moods and instruments, as well as telling Alexa what you don’t want it to serve up.
Alexa+ can also be used for playlist creation, allowing you to request something as specific as a high-energy running playlist with songs from a particular decade that starts with a song from a certain artist. You can also be more vague, asking for something that fits your current mood or the time of day.
Alexa+ in Amazon Music is being marketed not only as an AI tastemaker and personal DJ, but also a music expert, so you can ask it things like the inspiration for a song’s lyrics, where an album charted and questions about upcoming live performances.
Alexa+ has been gradually rolling out in Amazon’s various smart devices since the beginning of the year, with mixed results. You’ll be using it in everything from new Ring devices to the latest Kindles and Vega, Amazon’s new smart TV operating system. It’s also built into the new Echo Studio speaker, and Engadget’s Billy Steele was impressed by the AI assistant’s more human-like conversation skills, even if it’s still prone to basic errors right now, such as getting the day of the week wrong in a response.
Alexa+ is currently available in Early Access for all tiers of Amazon Music. Eventually it’ll be free to all Prime members, and available to non-Prime members for $20 per month (more than an Amazon Prime subscription on its own).
This article originally appeared on Engadget at https://www.engadget.com/entertainment/music/alexa-comes-to-the-amazon-music-app-143234227.html?src=rss