As AI Finds Its Voice, Music Communities Must Strengthen Theirs

For the last two years, I’ve poured my angst, joy, wonder and grief into a musical project called Current Dissonance.

I read the news voraciously, and every few days, a story resonates with particular thunder. I sit with that story in mind, as inspiration and intention, and then record a piece of solo piano music, composed on the spot, in reaction. Most often, Current Dissonance subscribers receive the new track within minutes of its completion.

I love engaging with this project. It’s become a cathartic practice and wordless diary, connective tissue when so much around us seems to be fracturing, something full of guts and blood and soul that feels deeply personal and unapologetically human.

Given all that, I find it both thrilling and jarring that AI music creation has advanced to a point where well-crafted algorithms could largely take my place as the brain, heart and fingers behind this project.

At its core, the fusion of AI and music creation isn’t new, and its evolution from tweaky curiosity to full-on cultural juggernaut has been fascinating to watch. My first exposure came via Digital Audio Workstations (DAWs) — the complex software suites used to produce nearly all new music. Years ago, I experimented with an early AI feature that allowed virtual drummers to bang out rudimentary grooves tailored to my songs-in-progress; another utility let me stretch and distort audio samples in subtle or grotesque ways. Later, I wrote coverage of a startup that used machine learning to auto-generate soundtracks for video.

Some of those legacy AI utilities felt promising but imperfect, others inelegant to the point of unusability. But they all showed the potential of what was to come. And it’s not hard to see that what was coming has now arrived — with the force of a freight train.

Welcome To The New A(I)ge

Examples of AI’s growth spurt permeate the music world. For cringe-worthy fun, check out There I Ruined It, where AI Simon & Garfunkel sing lyrics from “Baby Got Back” and “My Humps” to the melody of “Sound of Silence.” Then visit Suno, where single-sentence prompts yield remarkably realistic songs — fully produced, with customized lyrics — in electronica, folk, metal and beyond. Open up Logic Pro and hear just how big and vivid its AI mastering utility can make a track sound in seconds. These developments are just the overture, and there’s no technical reason why a vast array of musical projects — including my own — couldn’t be AI-ified in the movements to come.

For example, I’ve created 154 short piano pieces for Current Dissonance as of this writing. Hours more of my piano work are publicly accessible. An AI model could be trained on those recordings to look for patterns in the notes I play, the chord voicings I choose, the ways I modulate volume and manipulate rhythms — all the subtle choices that make me sound like me, as opposed to anyone else sitting at a piano.
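To make the idea concrete, here’s a toy sketch — entirely hypothetical, not a description of any real Current Dissonance model — of the kind of simple stylistic fingerprint such a system might start with: a pitch-class histogram, which measures how often a player leans on each of the twelve notes of the chromatic scale.

```python
from collections import Counter

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class_histogram(midi_notes):
    """Count how often each of the 12 pitch classes appears in a performance.

    midi_notes: a list of MIDI note numbers (60 = middle C).
    Returns a dict mapping pitch-class name -> normalized frequency.
    """
    counts = Counter(note % 12 for note in midi_notes)
    total = sum(counts.values()) or 1  # avoid dividing by zero on empty input
    return {PITCH_CLASSES[pc]: counts.get(pc, 0) / total for pc in range(12)}

# A short C-minor-flavored phrase: C, Eb, G, C, Bb, C
phrase = [60, 63, 67, 72, 70, 60]
hist = pitch_class_histogram(phrase)
print(hist["C"])  # C accounts for half the notes in this phrase: 0.5
```

A real style model would go far beyond this — voicings, dynamics, rhythm, phrasing — but even a feature this crude begins to separate one pianist’s habits from another’s.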

The algorithm would also need to learn the relationship between each Current Dissonance movement and the news article it reinterprets, building a map of correlations between facets of the written story and recorded music. Do Locrian-mode motifs in 7/8 permeate my playing when I’m reflecting on South Asian politics — and are C#s twice as likely to appear when I reimagine headlines that are less than four words long? I have no idea, but a well-trained AI model would parse those potential patterns and more.

In the end, my hypothetical AI Current Dissonance would function like Suno does for popular music formats. To hear a Michael Gallant-style piano reaction to anything, type in your prompt and see what erupts.

While this may sound like a daydream, the key technical bedrock exists right now, or will exist soon. Following a similar development pathway, I doubt it’ll be long before we can also hear how Tchaikovsky might have reacted symphonically to war in Ukraine, or how McCoy Tyner could have soloed over “Vampire,” “Believer,” or any other tune written after his death. Elvis Presley reimagining Elvis Costello, Billie Holiday reinterpreting Billie Eilish, John Philip Sousa composing marches to honor nations that didn’t exist when he did — the possibilities are stunning.

But where does all of this innovation leave today’s music professionals?

Old Theme, New Variations

Recent conversations with fellow music-makers have yielded gallows humor, dark jokes about obsolescence at the hands of the robots — but also a sense of resilience, the feeling that we’ve heard this tune before.

Take, for example, the advancement of synthesizer technology, which has certainly constricted market demand for musicians who make their living playing in recording sessions. And the ubiquity of affordable, powerful DAWs like Pro Tools, Ableton Live and GarageBand has snuffed out a generation of commercial studios and their engineers’ careers. Those losses are real and devastating, but they’re only part of the story.

Inventing, programming and performing with synthesizers has become a thriving musical specialty of its own, creating new professional opportunities amidst the ashes of the old. The same can be said for the brilliant minds who make every new bit of music software even more amazing. And democratized music production due to GarageBand and its ilk has made possible the global ascent of DIY artists who could never have afforded to work in traditional studios.

As the duality of loss and regrowth takes hold in the AI era, everyone involved in music must amplify the latter, while keeping the former as muted as possible. There are key steps that communities and countries alike can take to ensure that AI music technology boosts existing creators and inspires new ones — that it enhances human creativity more than it cuts us down.

Shedding for the Future

The biggest error music-makers can commit is pretending that nothing will change. When it comes to AI, willful ignorance will lead to forced irrelevance. Let’s avoid that future.

Instead, I encourage all music-makers to learn as much about AI music technology as possible. These tools are not secret weapons, siloed away for the rich and privileged; with an internet connection and a few hours, any music-maker can gain at least a high-level look at what’s going on. It’s incumbent on all of us to learn the landscape, learn the tools and see how they can make our human music-making better.

Music-makers must also double down on human connections. For artists with followings large or small, this means rededicating ourselves to building meaningful relationships with audiences, strengthening the human connection that AI can only approximate: taking time to greet listeners at each performance, making space to bond with superfans. Just as in-person concerts will grow in meaning as fiction and reality become increasingly indistinguishable in the digital world, so will the importance of face-to-face conversations, handshakes and high-fives, hugs between artists and those who see beauty in their music.

For music-makers who spend their time in studio settings, reinforcing connections with clients and collaborators will also be key. While I currently rely on AI-fueled music tools in some contexts, I cherish every opportunity to team up with fellow humans, because I’m blessed to work with great people who elevate and inspire me. That’s another vital connection that AI cannot now — or hopefully ever — replace.

It Takes a Movement

Music-makers, those who support them in commerce and industry, and those who weave music into their lives as listeners — all of us must help build a movement that cherishes human creativity lifted through technology.

There’s already hard evidence that protecting artists’ digital integrity is an all-too-rare consensus issue within American politics; check out Tennessee’s bipartisan ELVIS Act for more. Music-makers in any community can push their local and national leaders to ride Tennessee’s momentum and reproduce its successes against AI abuse. As a voting member of the Recording Academy, I’m proud of the organization’s pro-human activism efforts when it comes to federal copyright law and other vital issues. Every music-related entity should make noise in favor of similar protections.

Granted, even the smartest laws will only go so far. AI music technology is so accessible that trolls and bad actors will likely be able to manipulate musicians’ voices, privately and anonymously, without suffering real consequences — a dynamic unlikely to change anytime soon. But the more our culture brands such exploitative recordings as tasteless and taboo, the better. We cultivate respect for human creators when we marginalize the consumption of non-consensual, AI-smelted musical plastic.

Consent is one key; control is another. While industry executives, music-makers of all shapes and flavors, influencers and lawmakers must collectively insist that musicians remain masters of their own voices, I recommend we go further by empowering artists themselves to take the lead.

It would be brilliant, and fair, for Madonna or Janelle Monáe, Juanes or Kendrick Lamar, to release interactive AI albums that they, the artists, control. Such properties could allow fans to create custom AI tracks from raw material exclusively recorded for that purpose. Under no circumstances should AI assets be leveraged for any use without the explicit permission — and compensation — of the humans responsible for the music on which those algorithms were trained.

…And I Feel Fine

In the face of AI’s explosion, we must remember to stay curious, hungry and optimistic. Investors, inventors and tech companies must look beyond novelty song creation as the technology’s highest musical goal; I can’t imagine how far AI will go when applied to creating new instruments, for example. Much of the music I make is improvisational, formed in my brain milliseconds before it’s realized by my fingers. How amazing would it be to jam with live band members — as well as an AI algorithm trained to create instant orchestrations, in real time as I play, using a never-before-heard chimera of Les Paul overdrive, volcanic glass vibraphone and a grizzly bear roaring?

AI presents massive challenges to human creators of any sort, but if we proceed with thoughtfulness and respect, new innovations will lift music-making communities everywhere. I for one will be thrilled to learn who the first Beethoven, Beyoncé and Robert Johnson of the AI era will be, and to hear the masterpieces they create.

Michael Gallant is a musician, composer, producer, and writer living in New York City.