Will Artificial Intelligence Replace Human Musicians?
By Matt Melis, Consequence of Sound

Two months before she dropped the news that she’s expecting her first child with Elon Musk, Grimes (aka Claire Boucher, who now legally goes by c, the symbol for the speed of light) lit up music Twitter. The one-time neuroscience major let loose with a provocative forecast on theoretical physicist Sean Carroll’s Mindscape podcast: “I think live music is going to be obsolete soon.” Her prediction that we might be seeing the last generation of human artists inspired dozens of variations on the same joke about Grimes’ own imminent obsolescence.

In Grimes’ defense, she was referring to Artificial General Intelligence (AGI): an AI as capable as human intelligence in every way, expected to arrive anywhere between the next decade and never, and currently unmatched by any AI tool on the market or in the works. Still, even peering into the music landscape’s hazy future, “obsolete” is a strong word.

A few musicians publicly responded. Holly Herndon, who built her own AI songwriting tool for last May’s PROTO, released as part of her dissertation for Stanford’s Ph.D. program in Computer-Based Music Theory and Acoustics, weighed in, as did Zola Jesus in a series of since-deleted tweets and Grimes’ ex-boyfriend, Majical Cloudz’s Devon Welsh, in a thread of his own.

Grimes may be correct that AGI will be technically better at making art and music, much as Google DeepMind’s AlphaGo executed creative, never-before-seen strategies to beat the world’s best Go player. But that won’t stop people from making music about their lived experiences, a feat AGI won’t be capable of credibly undertaking on its own.


Setting aside AGI and focusing on the AI songwriting tools currently aimed at the music industry, these tools largely open up new opportunities for musicians rather than threatening to replace them. One notable exception, a point Cherie Hu made in an article for Songtrust last year, is production music: songs intended to create a vibe or provide an emotional backdrop, geared toward licensing for film, TV, and commercials. Composers in this particular medium are primed to struggle under the weight of new entrants targeting easily reproducible styles of mood-heavy music, like Jukedeck, a UK-based AI music startup recently acquired by TikTok parent ByteDance, which can “interpret video and automatically set music to it.”

Emerging music technologies of the recent past have failed to live up to their threat-level hype. Drum machines, samplers, sequencers, synthesizers — these have been field-levelers, not death knells, opening doors for more people to participate in writing, recording, and performing music rather than shutting creators out. Backing bands aren’t losing gigs to loop stations; instead, we get visionary creativity like tUnE-yArDs looping the hell out of “Bizness”. And bad singers haven’t displaced talented vocalists in the wake of Cher pioneering Auto-Tune as the vehicle for her transcendently pitch-corrected cyborgian break-up anthem, “Believe”.

AI tools are unlikely to be markedly different; at least, that’s how it looks from this side of the singularity. Rather than replacing biological creators or pushing anyone out, AI may simply expand the field for new entrants for the foreseeable future, creating opportunities for those for whom making music was once to some degree inaccessible (hits can now be written and produced on iPhones) and, in some cases, pushing artists and music into completely new creative territory.

David Bowie paved the way for AI-powered songwriting in the mid-1990s, teaming up with developer Ty Roberts on Verbasizer, a proto-AI lyric assistant for the Mac that fed Bowie’s own lyrics through a digital version of the cut-up technique, randomizing sentences into new combinations, which he used on his 1995 album, Outside. Today, we have Lyric AI, “a collaboration between Reimagine.AI and Google Brain building an artificial intelligence assistant designed to help musicians create original lyrics.”
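Bowie’s program itself is long gone, but the underlying cut-up idea is simple enough to sketch. Here’s a minimal, hypothetical Python illustration of sentence randomization in that spirit (not Bowie’s actual code; the source lines and fragment sizes are arbitrary):

```python
import random

# A toy take on the cut-up technique Verbasizer automated: chop source
# sentences into short fragments, shuffle them, and splice them back
# together into unexpected lines.
def cut_up(sources, num_lines=6, seed=None):
    rng = random.Random(seed)
    fragments = []
    for text in sources:
        words = text.split()
        i = 0
        while i < len(words):
            size = rng.randint(2, 4)  # fragments of two to four words
            fragments.append(" ".join(words[i:i + size]))
            i += size
    rng.shuffle(fragments)
    lines = []
    for _ in range(num_lines):
        if not fragments:
            break
        take = min(rng.randint(2, 3), len(fragments))
        lines.append(", ".join(fragments.pop() for _ in range(take)))
    return "\n".join(lines)

print(cut_up([
    "the stars look very different today",
    "strung out in heavens high hitting an all-time low",
]))
```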

Google’s DeepMind has an AI project called WaveNet that generates “speech which mimics any human voice”, so we’re approaching an AI that can actually handle vocal duties. Facebook’s AI Research (FAIR) team is building on top of WaveNet, attempting to develop a music translation system that can transform a hummed or whistled tune into a complete song.

Google isn’t only building tools for lyrics and vocals; the tech behemoth also has a variety of AI songwriting tools under the umbrella of its open-source research project Google Magenta, including NSynth, a neural synthesizer that uses a deep neural network to learn the acoustic characteristics of existing sounds and synthesize entirely new ones from those learned qualities. In fact, Grimes herself used NSynth on her forthcoming album, Miss_Anthropocene.
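Because Magenta is open source, you can try that sound-morphing trick yourself. Here’s a rough sketch following Magenta’s published NSynth “fastgen” interface for its pretrained WaveNet autoencoder; the audio file names and checkpoint path are placeholders:

```python
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

ckpt = 'wavenet-ckpt/model.ckpt-200000'  # pretrained NSynth checkpoint
sample_length = 16000 * 4                # four seconds at NSynth's 16 kHz

# Load two source sounds and encode each into NSynth's learned latent space.
flute = utils.load_audio('flute.wav', sample_length=sample_length, sr=16000)
bass = utils.load_audio('bass.wav', sample_length=sample_length, sr=16000)
enc_flute = fastgen.encode(flute, ckpt, sample_length)
enc_bass = fastgen.encode(bass, ckpt, sample_length)

# Average the latent encodings, then decode back to audio: the result is
# a new hybrid timbre, not a simple crossfade of the two recordings.
hybrid = (enc_flute + enc_bass) / 2.0
fastgen.synthesize(hybrid, save_paths=['flute_bass.wav'], checkpoint_path=ckpt)
```

Fair warning: decoding is slow, since WaveNet generates audio one sample at a time, which is why this is a studio tool rather than a live instrument.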

Another Google Magenta project is Piano Genie, an intelligent controller that maps eight buttons of input onto a full 88-key piano in real time, which The Flaming Lips tweaked to make Fruit Genie, a version whose physical interface consists of actual fruit. Yet another is MusicVAE, which blends musical loops and scores; YACHT’s August release, Chain Tripping, was composed using MusicVAE on every track.
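MusicVAE’s “blending” is latent-space interpolation, and Magenta exposes it through a small Python API. A rough sketch of blending two melodies, assuming two short monophonic two-bar MIDI files and a downloaded pretrained checkpoint (file paths here are placeholders):

```python
import note_seq
from magenta.models.music_vae import TrainedModel, configs

# Load a pretrained two-bar melody model.
model = TrainedModel(
    configs.CONFIG_MAP['cat-mel_2bar_big'],
    batch_size=4,
    checkpoint_dir_or_path='cat-mel_2bar_big.ckpt')

melody_a = note_seq.midi_file_to_note_sequence('melody_a.mid')
melody_b = note_seq.midi_file_to_note_sequence('melody_b.mid')

# Interpolate through the model's latent space: the intermediate steps
# are brand-new melodies that sit "between" the two inputs.
blends = model.interpolate(melody_a, melody_b, num_steps=5, length=32)
for i, sequence in enumerate(blends):
    note_seq.sequence_proto_to_midi_file(sequence, f'blend_{i}.mid')
```

The middle step is the most even mix of the two inputs; raising the model’s sampling temperature yields stranger, less faithful blends.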

French songwriter Benoît Carré wrote “Daddy’s Car”, billed as the first pop song written with the help of artificial intelligence, using a songwriting tool from Sony’s Flow Machines project. The tool emulates a specific style or genre, in this case mimicking The Beatles. Carré and Flow Machines went on to create the first AI-assisted album, Hello World (2018), under the moniker SKYGGE (Danish for “shadow”), actually landing a track on Spotify’s New Music Friday playlist.

Spotify, meanwhile, poached the head of Flow Machines, François Pachet, for its Creator Technology Research Lab in July of 2017, meaning Spotify can now use AI not only for music discovery and distribution but also to write its own songs (which might make record label execs unhappy but will probably be used to make the rest of us as happy as possible).

Another album written in collaboration with AI is I Am AI, the 2018 debut from former American Idol contestant Taryn Southern, made using a combination of Amper, which lets you create and rework stems, and IBM Watson Beat, which pairs reinforcement learning with neural networks to generate tracks. Alex da Kid also used Watson Beat on “Not Easy”, a Top 40 hit in 2016.

Both IBM Watson Beat and Google Magenta are open source on GitHub, making them easy for developers to access. And while Amper might seem aimed at musicians, judging from its use on Southern’s I Am AI, it’s actually pitched at “enterprise teams” creating “stock music” en masse, with sliding prices dependent on “team size and specific music needs.”

AI-powered songwriting assistant Amadeus Code is available as an iPhone app (free, $1.99 pay-as-you-go, or $9.99/mo), while ambitious Aussie start-up Popgun (one-time presumptive slayer of the Top 40 hit, currently offering an app for “teenagers using AI tools to make music for one another”) is focused on creating a space for “pop stars on training wheels.” Popgun’s Splash, which came out of beta in mid-December, is newly available as a free iPhone/Android app.

Also recently out of beta, where it was most popular with gamers creating diss tracks to send to their vanquished enemies over Discord, is Boomy, an extremely simple AI “songwriting” tool available via a web-based interface (free or $8.99/mo versions available).

Last month at its re:Invent conference in Las Vegas, Amazon Web Services (AWS) announced its entrant into the field, AWS DeepComposer, “the world’s first machine learning-enabled keyboard for developers.” Arca just announced a collaboration with AI music platform Bronze (which Jai Paul has also demoed), and AI composer tool JAM is a new entrant set for release in March of 2020.

Tons of music-related AI start-ups are popping up globally (Music Ally published a report last November detailing some of the most interesting), including WaveAI, creators of Alysia, a “lyric assistant” and “melody partner” complete with AI-generated vocals; HumTap, which converts a hummed tune into full instrumentation that you can add beats to and set to video; UK start-up Vochlea, which turns vocalizations (even beatboxing) into MIDI usable in various digital audio workstations (DAWs) to build out a full song; and Accusonus, the Greek creators of an AI “beat assistant.”

Who knows what will be announced next, but for now, most musicians’ jobs don’t seem to be going anywhere. If we’re “nearing the end of human-only art,” as Grimes later clarified on Twitter, then it’s a period ripe for hybrid creativity. It’s already happening: the tools exist right now, and enterprising musicians are using them in creative ways. That will only increase as more tools become available and more musicians experiment with them, ushering in a new phase of creative expression, one that incorporates an ever-evolving AI tool set enabling musicians to more fully express our humanity.
