In 2021, technology’s role in how art is generated remains up for debate and discovery. From the rise of NFTs to the proliferation of techno-artists who use generative adversarial networks to produce visual expressions, to smartphone apps that write new music, creatives and technologists are continually experimenting with how art is produced, consumed, and monetized.
BT, the Grammy-nominated composer of 2010’s These Hopeful Machines, has emerged as a world leader at the intersection of tech and music. Beyond producing and writing for the likes of David Bowie, Death Cab for Cutie, Madonna, and the Roots, and composing scores for The Fast and the Furious, Smallville, and many other shows and movies, he’s helped pioneer production techniques like stutter editing and granular synthesis. This past spring, BT released GENESIS.JSON, a piece of software that contains 24 hours of original music and visual art. It features 15,000 individually sequenced audio and video clips that he created from scratch, which span different rhythmic figures, field recordings of cicadas and crickets, a live orchestra, drum machines, and myriad other sounds that play continuously. And it lives on the blockchain. It is, to my knowledge, the first composition of its kind.
Could ideas like GENESIS.JSON be the future of original music, where composers use AI and the blockchain to create entirely new art forms? What makes an artist in the age of algorithms? I spoke with BT to learn more.
What are your central interests at the interface of artificial intelligence and music?
I am really fascinated with this idea of what an artist is. Speaking in my common tongue—music—it's a very small array of variables. We have 12 notes. There's a collection of rhythms that we typically use. There's a sort of vernacular of instruments, of tones, of timbres, but when you start to add them up, it becomes this really deep data set.
On its surface, it makes you ask, "What is special and unique about an artist?" And that's something that I've been curious about my whole adult life. Seeing the research that was happening in artificial intelligence, my immediate thought was that music is low-hanging fruit.
These days, we can take the sum total of an artist's output, their complete artistic works, and quantify the entire thing into a training set, a massive, multivariable training set. And we don't even name the variables. RNNs (recurrent neural networks) and CNNs (convolutional neural networks) name them automatically.
So you’re referring to a body of music that can be used to “train” an artificial intelligence algorithm that can then create original music that resembles the music it was trained on. If we reduce the genius of artists like Coltrane or Mozart, say, into a training set and can recreate their sound, how will musicians and music connoisseurs respond?
I think that the closer we get, it becomes this uncanny valley idea. Some would say that things like music are sacrosanct and have to do with very base-level things about our humanity. It's not hard to get into kind of a spiritual conversation about what music is as a language, and what it means, and how powerful it is, and how it transcends culture, race, and time. So the traditional musician might say, "That's not possible. There's so much nuance and feeling, and your life experience, and these kinds of things that go into the musical output."
And the sort of engineer part of me goes, well, look at what Google has made. It's a simple kind of MIDI-generation engine, where they've taken all of Bach's works and it's able to spit out [Bach-like] fugues. Bach is a great example because he wrote so many fugues, and he's the father of modern harmony. Musicologists listen to some of those Google Magenta fugues and can't distinguish them from Bach's original works. Again, this makes us question what constitutes an artist.
I'm both excited and have incredible trepidation about this space that we're expanding into. Maybe the question I want to be asking is less "We can, but should we?" and more "How do we do this responsibly, because it's happening?"
Right now, there are companies that are using something like Spotify or YouTube to train their models on the works of living artists, works that are copyrighted and protected. But companies are allowed to take someone's work and train models with it right now. Should we be doing that? Or should we be speaking to the artists themselves first? I believe that there need to be protective mechanisms put in place for visual artists, for programmers, for musicians.
On the flip side, are there potential benefits to art as a training set? Are there ways in which technology can be used in music to help transcend barriers, create opportunities, increase accessibility?
I split artificial intelligence into two categories, generative and assisting. There are technologies that will do things for us, or do things autonomously, and then there are things that will assist us in our daily lives. I'm crazy bullish on the idea of assistive and adaptive AI technologies augmenting what's possible for everyone, for visual artists, for poets, even for actors and voice actors, for musicians.
Imagine if we could turn somebody that I love, like Prince, into a training set, with the intellectual property rights secured and his estate fairly remunerated, right? All those i's are dotted and t's are crossed. From that training set the AI could generate new music, and then the artist is able to be everywhere at once. It would augment the artist's availability and would be a completely revolutionary income stream for artists. For example, we would have these adaptive works that are suited contextually to people's daily lives, to what they're doing: their favorite artists are singing to them, talking to them, painting for them in real time. I hope that that's where we land.
So, conceivably, I can have an algorithm producing music all day that sounds like an artist I’m a fan of, and that artist’s estate could be properly acknowledged and compensated.
This brings us to GENESIS.JSON. Can you explain how it works?
The coolest thing about it is it's this living piece of artwork that now lives on blockchain forever. It will live there until the end of the internet. And it's there for everyone to enjoy. And there’s tokenized provenance for a single owner. My mouth drops thinking how lucky I am to get to be alive right now, comparatively, to some of the artists I idolized from hundreds of years ago.
It's a 24-hour piece of music and visual art, an adaptive composition that changes throughout the entire day, from sunrise to sunset. One of the things that I love telling people, that people have been really excited about, is that it is the first ever non-fungible token (NFT) that puts itself to sleep every night, along a circadian cycle. It goes to sleep at 9:00 pm and it wakes up at 9:00 am. Of course, we're web scraping network time as a variable, so if you're in Australia and you're watching it, you'll see it go to sleep at 9:00 pm your time, whereas I'm on the East Coast right now, and I'll see it going to bed at 9:00 pm my time.
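The circadian behavior BT describes can be sketched in a few lines. This is an assumed reconstruction of the logic, not GENESIS.JSON's actual code: the piece checks the viewer's local clock and sleeps between 9:00 pm and 9:00 am, so viewers in Australia and on the East Coast each see it go to bed at their own 9:00 pm.

```python
from datetime import datetime

def is_asleep(now=None):
    """Return True during the piece's sleep window (9 pm to 9 am),
    judged by the viewer's local time."""
    hour = (now or datetime.now()).hour
    return hour >= 21 or hour < 9

# A viewer at 10 pm local time sees the piece sleeping; at noon it is awake.
```

The actual piece reads network time rather than the device clock, as BT notes, but the local-hour comparison is the same idea.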
One of the cool behind-the-curtain things about it is the audio and the video are noncontiguous. So it's a bit of a parlor trick to make it feel like this contiguous experience, but the visual element does not have the audio bound to it. The audio is all being live generated, and the video is sequence stitched together throughout the course of the day.
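A rough sketch of that "parlor trick," under the assumption (names and structure are hypothetical) that audio and video come from separate clip pools: each stream picks its next clip independently, so nothing binds a given video clip to a given sound, yet playing the picks back to back feels contiguous.

```python
import random

# Hypothetical clip pools standing in for the 15,000 sequenced clips.
audio_clips = [f"audio_{i:05d}" for i in range(10_000)]
video_clips = [f"video_{i:05d}" for i in range(5_000)]

def next_segment(rng):
    """Choose the next audio and video clips independently;
    the two streams are noncontiguous with each other."""
    return rng.choice(audio_clips), rng.choice(video_clips)

rng = random.Random(0)  # seeded for reproducibility
segment = next_segment(rng)
```

In the real piece the audio is generated live while the video is stitched across the day, but the key point this illustrates is the decoupling of the two streams.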
Where’d the idea come from?
I grew up in suburban Maryland, and we had a grandfather clock that was passed down in our family. It wasn't until I was grown that I realized what is so special about that object, and it kind of came full circle for me. It's this symbolic object that just constantly, by its presence, is demarcating the passage of time. And I wanted to make something that was this meeting point, that was a study of the passage of time.
I see it as a kind of legacy project. In general, I'm so bullish on all the blockchain works as a new way of connecting with fans and making things that are iterative and interactive. It's my love language, getting to think of music in this nonlinear way that becomes adaptive, and features are embedded into it, programmatic things, visual art. It's like I've waited for this moment my entire life. This is just the first of many infinity works in the blockchain space.
What would you tell artists thinking about this space?
If these art and technology spaces interest you, and they should, because this is a way to cut the umbilical cord to the middle man that has kept artists at a distance, then do your research. Modern technologies, and the blockchain in particular, are a way to eradicate the middle man in a way that empowers everyone. It empowers the audience as much as it empowers the artist. It's this beautiful circle of empowerment.
This interview has been edited and condensed for clarity.