Taryn Southern’s new album is produced entirely by AI

Music has been made on computers for decades, but the technology has traditionally been more of a utilitarian tool than a creative collaborator in the music-making process. In recent years, however, artificial intelligence (AI) has evolved to a level where it can help artists compose music for 50-piece orchestras and even craft Billboard hits.

Singer-songwriter and YouTuber Taryn Southern has decided to push the limits of AI composition, putting the sound of her new album into the “hands” of four AI programs: Amper Music, IBM’s Watson Beat, Google’s Magenta, and AIVA. Aptly titled I Am AI, the album will be the first of its kind to be fully composed and produced with AI when it is released in May.

While each AI program is unique, they generally create music by following parameters set by the artist (genre, tempo, style). Artists feed the programs music to analyze, and the machines learn its structure in order to create original music in minutes. AIVA, which specializes in classical music, got so good at composing that it became the first non-human to be recognized as a composer.

Ahead of the February 20 release of Life Support, the latest song from I Am AI, Southern spoke with Digital Trends about the album-making process, how time-consuming making music with AI can be, and its exciting potential to break down traditional barriers in the music industry.

Digital Trends: Using AI to make music is a phenomenon of just the last few years, and so far it has mostly stayed at the experimental level, a way to test the technology’s capabilities. What inspired you to make an entire album with AI?

Taryn Southern: Last January, I was reading an article in The New York Times, actually, about the future of artificial intelligence and how it was being used in creative ways. At that point, out of curiosity, I was reading a lot about AI more for its data applications for enterprise companies. Once I learned it was being used for musical applications, I was really intrigued. So, I started reaching out to companies in the article asking if I could get access to their platforms. Within a few months I was experimenting with a few platforms [and] it became evident that I could create what I felt was similar to what I was able to do on my own, before artificial intelligence.

Most musicians need producers to help guide them, but with Watson Beat and Amper you can click a few preset moods and tempos and create a fully composed production. What was the process like for you?

“You can literally make music with the touch of a button.”

I think the cool thing about these technologies is you can literally make music with the touch of a button. Something like my album has been a bit more involved, though. Or maybe, a lot more involved. [Laughs]. I could make songs how I want to hear them and have the right structure in place. With Amper, you can make it as easy or as difficult as you want. I would iterate anywhere between 30 and 70 times on a song within Amper. Once I’m happy with the song, I download the stems [individual music elements of a multi-track recording], then I rearrange the various sections of the instrumentation that I really like, and cut what I don’t like. I do that to create the structure of the song, like Life Support.

What do you mean by “upwards of 70 different iterations?”

I started with one mood, then I started converting it to several others. Changing the key. Changing the tempo. I think I downloaded 30 stems, arranged the song, and then created a new template beat that was of the same key and genre, but as a hip hop beat. I think the original beat I went with was a cinematic, uplifting genre. Then once it had a really strong song structure that I really liked, I took the same parameters, popped them into a hip hop beat to get some of the drums, and some of the percussive elements. Basically, [it was] two variations of the song, within different genre parameters with the same rhythmic structure.

You started with one preset/mood, it spit out a beat, then you took the best parts of that beat and mixed it with something else?

Yeah. For the [Life Support] beat, I probably iterated 15-20 times, to get something where I liked the rhythm and the melodic structure. From there, once I had a sound song structure, I went into a different preset and set the genre parameters the same, so I could take sounds to add to the song. That adds to that layered feeling that you get from a song like Life Support, which has about 35 stems.

That must have been time consuming. Is that how it was for the entire album?

Every song on the album has a different process depending on the technology used, [and] depending on how quickly I could get something I really loved. There is another song I did on Amper that I only iterated on three times. A lot of those iterations are around the instrumentation, [and] playing with different instruments.

With something like Watson, I’m basically taking the code, running it through terminal, then taking all of the stems, pushing them through a DAW [Digital Audio Workstation] and changing the instruments myself to whatever I see fit. There’s a lot more involvement in working with a platform like that. On the plus side, [Watson] gives musicians who potentially have more background … in writing music potential opportunity to have more creative license … where Amper might be easier for beginner musicians and early music creators who want a bit more of a full production experience.
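For a sense of what that instrument-swapping step can look like, here is a minimal Python sketch using the open-source mido package. It assumes a stem arrives as a MIDI file and remaps every program-change message to a new General MIDI instrument; the file names are hypothetical, and this stands in for Watson Beat’s actual tooling, which isn’t detailed here.

```python
import mido  # third-party MIDI library: pip install mido

# Hypothetical stem exported from an AI composition tool
mid = mido.MidiFile("watson_stem.mid")

# Remap every instrument assignment to General MIDI program 48
# (String Ensemble 1), leaving notes and timing untouched
for track in mid.tracks:
    for msg in track:
        if msg.type == "program_change":
            msg.program = 48

mid.save("watson_stem_strings.mid")
```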

How did you get your hands on these programs, and how were they each different?

Magenta is open source, so that was a matter of going on GitHub, reading the documentation, and downloading it. Fortunately, I had some friends at Magenta who have been very helpful answering questions and helping me out. They also have a number of different tools outside of Magenta, like NSynth, that are really cool AI-based tools that can help you customize a sound, or song, or tones even more than you had access to through other programs.
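Magenta’s tools are built on an open-source Python library called note_seq, which represents music as NoteSequence objects. As a minimal sketch of what working with it looks like (the pitches and file name below are purely illustrative), this snippet constructs a short primer melody and writes it out as MIDI, the kind of sequence Magenta’s melody models can take as a starting point.

```python
import note_seq  # open-source music-data library underlying Magenta

# Build a four-note primer melody: C4, D4, E4, G4, a half second each
primer = note_seq.NoteSequence()
for i, pitch in enumerate([60, 62, 64, 67]):
    primer.notes.add(pitch=pitch, velocity=80,
                     start_time=0.5 * i, end_time=0.5 * (i + 1))
primer.tempos.add(qpm=120)
primer.total_time = 2.0

# Save as MIDI; a melody model can then extend this primer
note_seq.sequence_proto_to_midi_file(primer, "primer.mid")
```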

“I’m working on a song right now that’s basically an ode to revolution and I call it my Blockchain song.”

With Watson Beat I just reached out to everyone I could at Watson telling them how I’d love to get my hands on this. They emailed me back last fall, and … [via Google Hangouts] they set it all up on my computer and walked me through the whole program. They’ve been really helpful and I’ve been in direct contact with them quite a bit. I’m really impressed with the code they’ve built behind this, and the program is really intuitive. What I like about Watson is being able to inject the code with any kind of data or inspiration point of music that I’d like.

For instance, I’m working on a song right now that’s basically an ode to revolution and I call it my “Blockchain song.” It’s a song that’s inspired by the blockchain revolution, but I really wanted it to encompass this idea of revolution. So, I’ve been feeding Watson various songs, as far back as the 1700s, that represent revolution, trying to see what it can … glean from those songs to make a new anthemic, revolution song.

I would hope the Beatles’ Revolution made it in there at some point.

[Laughs] I started with 1700s, 1800s revolution songs, because there’s no copyright issue with those. Currently, the rules around teaching AI based on copyrighted works are still a grey area. So I’m trying to play within the bounds of what’s legally acceptable at this point. I thought it would also be interesting to have these really old songs as inspiration points. It’s probably 15 songs from the 1700s and 1800s that are old-school anthemic songs, and it was really fun to have the AI algorithm learn from those songs and then force that function through the anthemic pop structure that Watson had already designated to see what kind of things it’d come up with.

You mentioned AI being taught copyrighted music and spitting out new compositions. Did someone tell you teaching AI copyrighted material was a legal gray area, or did you figure that out yourself?

I figured it out myself. I think, as is the case with all of these new technologies, you’re writing the rules as you go. Ten years ago, I don’t think there were many people talking about artificial intelligence and copyright infringement. These are conversations that are happening in every single industry, not just music. I actually just did a panel at the Copyright Society this week that was digging into these predicaments. They asked, “What kind of attributions are given if artificial intelligence is learning off copyrighted works?” A lot of these things have to be figured out, but there aren’t any hard and fast rules on this.

Hypothetically, if someone ran a copyrighted song through AI, would the original song be discernible to the copyright holder?

I have run popular songs through, just to see what would happen. Usually what comes out of it is something that is not even close to resembling the original piece. It depends on the AI. Sometimes it’s just running pattern recognition on the chord structure. Other times it’s running statistical analysis, saying “if there’s an F-chord here, then you’re 70 percent likely to get a G-chord after the F-chord or an E-minor chord after the F-chord.” … If we’re looking at this from a purely theoretical point of view, I think that holding an AI accountable for stealing from copyrighted works would be very similar to holding a human accountable who’s grown up listening to The Beatles their entire life and now writes pop music. [Laughs]. …
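That chord-transition idea maps neatly onto a simple Markov chain. As a toy illustration (the progressions below are made up, not taken from any of the programs Southern used), this Python sketch counts which chord follows which across a few progressions and then samples a new one from those statistics.

```python
import random

# Toy chord progressions to "learn" from (illustrative only)
progressions = [
    ["C", "F", "G", "C"],
    ["C", "Am", "F", "G"],
    ["F", "G", "Em", "Am"],
]

# Count transitions: which chords follow each chord, and how often
transitions = {}
for current, following in [pair for prog in progressions
                           for pair in zip(prog, prog[1:])]:
    transitions.setdefault(current, []).append(following)

# Sample a new eight-chord progression from those statistics
chord = "C"
generated = [chord]
for _ in range(7):
    chord = random.choice(transitions.get(chord, ["C"]))
    generated.append(chord)
print(" -> ".join(generated))
```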

If we’re looking at a really sophisticated AI program that is built … similar to the way our own neural networks integrate and mimic information, then you could think of an AI as a really sophisticated human. [Laughs]. Even artists joke that some of the best music producers out there, like Max Martin, are just really advanced AI … Many of his songs have repeatable patterns that can be studied and mimicked.

So when the album’s done, will the programs be credited as producers?

I look at each of these AI programs as creative collaborators, so they’re produced in collaboration with Amper, AIVA, Magenta, and Watson. There are 11 songs in total, although I might be adding two songs.

Have you used multiple programs on one song? Which program have you used the most?

One program per song. I’ve probably used Watson and Amper the most. If I end up with 12-13 songs, those would be additional songs from Amper.

What was your experience with AIVA?

AIVA was trained specifically on classical music. It primarily writes symphonies. I have two songs with AIVA that I love that are really unique. Because it was trained on classical music, it’s like, “How do we take these classical interpretations and turn them into pop songs?” So, they have a very different kind of feel to them, but they’re symphonic in the way that my Amper songs have symphonic and synth sounds.

One of the most expensive aspects of making an album is paying for studio time and producers. If you can do it in your room with a program, this has the potential to reduce the role of the human producers, doesn’t it?

I 100 percent agree. I think that the most exciting aspect of all of this is it will democratize access. I know that’s a really scary thing to the industry, and for understandable reasons. No one wants to lose their job and no one wants to feel like they might be beaten at their own game by a computer. But at the same time, the music industry for so long has been kind of an old-boys club. … It has many gatekeepers.

If you want to produce a really well-done album, it’s expensive. You have to find great producers, [and they] are not cheap. Sometimes the artists don’t make any money. As a YouTuber who grew up in the digital content revolution, I love when new tools come along that allow me to be scrappy and create without worrying about how I’m going to pay my bills. That might be the entry point for someone to say, “Wow, I love music. I’m going to do more of this.” … I feel like these kinds of things are actually just helpful in widening the creative community and allowing more people to join the creative class.

After this album, will you continue to use AI to make music?

I’m sure I will. I can only imagine these are just the first few technologies to become available, and there will be many more, and they will evolve. I’m really excited to see how they evolve. But they really do make my life easier as the artist, because I can focus on so many of the other things that I love to focus on in the creation process.

Keith Nelson Jr.
Former Digital Trends Contributor