Google AI researcher and piano player Pablo Castro knows that musicians can settle into a comfort zone. It’s a place a performer sometimes relies on in front of a paying audience, but it can constrict creativity, grow boring, and make a musician less likely to explore. To push boundaries, Castro is developing a deep generative AI model designed to help musicians create more uniquely human music through improvisation.

“Because we’ve been trained for so long, we can use our musical training to sort of navigate these uncomfortable areas in a creative way. And that sometimes lends itself to new types of musical expression,” Castro said in a phone conversation with VentureBeat.

Castro plays piano in PSC Trio, a jazz band that performs in Ottawa, Montreal, and other parts of Canada.

ML-Jam, a project meant to draw out musicians’ human instinct for improvisation, comes out of Google Brain’s Magenta, a project to drive music with machine learning. ML-Jam deliberately limits itself to premade models that work right out of the box, using Magenta’s DrumsRNN and MelodyRNN.

“Essentially, what I wanted to do is keep my rhythm, since that’s so reflective of the way I play, but replace my notes with the notes that the model is producing. So it’s this hybrid improvisation,” he said. “What I found in my experience is this is often rhythmically not something I would have come up with on my own, because it’s not a rhythm that would have come organically to me. But it often ends up being something kind of interesting to me.”

ML-Jam and its open source Python code were presented by Castro at the International Conference on Computational Creativity (ICCC) held last week in Charlotte, North Carolina.

The system begins with what Castro calls a deterministic drum groove: a person plays a bassline and adds other instruments, and that groove is then sent to DrumsRNN, which generates a new groove of its own. The musician, still in control of the rhythm that shapes how a musical phrase is approached, then improvises with a melody created by MelodyRNN.
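As a rough illustration of that first step, the sketch below primes Magenta’s pretrained DrumsRNN with a short drum groove and asks it to continue the pattern. This is not ML-Jam’s actual code; the bundle filename, tempo, pitches, and timing values are assumptions chosen for illustration.

```python
# A minimal sketch, not Castro's ML-Jam code: prime Magenta's pretrained
# DrumsRNN with a one-bar groove and ask it to continue the pattern.
# Bundle filename, tempo, pitches, and timings are illustrative assumptions.
import note_seq
from magenta.models.drums_rnn import drums_rnn_sequence_generator
from magenta.models.shared import sequence_generator_bundle
from note_seq.protobuf import generator_pb2, music_pb2

# Load the pretrained drum_kit bundle that Magenta distributes.
bundle = sequence_generator_bundle.read_bundle_file('drum_kit_rnn.mag')
drums_rnn = drums_rnn_sequence_generator.get_generator_map()['drum_kit'](
    checkpoint=None, bundle=bundle)
drums_rnn.initialize()

# The human-played groove, captured as a NoteSequence primer (kick + hi-hat + snare).
primer = music_pb2.NoteSequence()
primer.tempos.add(qpm=120)
for start, pitch in [(0.0, 36), (0.5, 42), (1.0, 38), (1.5, 42)]:
    primer.notes.add(pitch=pitch, start_time=start, end_time=start + 0.25,
                     velocity=100, is_drum=True)
primer.total_time = 2.0  # one bar at 120 qpm

# Ask the model for the next two bars, continuing from the primer.
options = generator_pb2.GeneratorOptions()
options.args['temperature'].float_value = 1.0
options.generate_sections.add(start_time=primer.total_time,
                              end_time=primer.total_time + 4.0)

generated = drums_rnn.generate(primer, options)
note_seq.sequence_proto_to_midi_file(generated, 'generated_groove.mid')
```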

ML-Jam uses Python multithreading to run inference in a separate thread, so new material can be generated while a performance continues and then played live. Because generation can take an unpredictable amount of time, musicians have to work live with a sound they haven’t yet heard.
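The sketch below shows the general shape of that threading pattern, not ML-Jam’s actual implementation: inference runs in a daemon thread, results land on a queue, and the performance loop swaps in a new groove only once one is ready. The function names and the toy groove are hypothetical stand-ins.

```python
# A minimal sketch of running generation off the performance loop so a slow
# model never stalls playback. generate_groove and the printed "playback"
# are hypothetical stand-ins, not ML-Jam's API.
import queue
import random
import threading
import time

generated_grooves = queue.Queue()

def generate_groove(primer):
    """Placeholder for the model call; real inference time is unpredictable."""
    time.sleep(random.uniform(0.1, 2.0))
    return [pitch + 1 for pitch in primer]

def inference_worker(primer):
    # Runs in a background thread and drops its result on the queue.
    generated_grooves.put(generate_groove(primer))

def play_bar(current_groove, primer):
    """Called once per bar: kick off generation, swap in a result if ready."""
    threading.Thread(target=inference_worker, args=(primer,), daemon=True).start()
    try:
        current_groove = generated_grooves.get_nowait()  # non-blocking check
    except queue.Empty:
        pass  # model not done yet; keep playing the previous groove
    print('playing', current_groove)  # stand-in for sending MIDI out
    return current_groove

groove = [36, 42, 38, 42]  # toy drum pattern (MIDI pitches)
for _ in range(4):
    groove = play_bar(groove, groove)
    time.sleep(0.5)  # one "bar" of playback
```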

Finding the groove

Castro played with ML-Jam alongside his jazz trio but said he found the chemistry between the two lacking. Instead, Castro plans to incorporate AI into his own music.

His next step is to use ML-Jam or derivative systems to fuel unique content for a live show.

“One thing I’ve started working on is essentially a solo show where it’s just me and … improvisations are built around this technology. So then it’s a lot more organic, and what’s interesting about that is that it forces me to approach composition in a very different way than what I would do normally,” Castro said.

“I have to think of whether it will work with the type of system I’m using. Like if it’s this like drumming thing, it’s using a loop, so I have to have something that kind of works well with the loop, isn’t going to be too repetitive, isn’t going to be boring, but that still fits well within this idea … And so whenever I’m done with it, whatever comes out of it will 100% be very different than anything I would have come up with had I not imposed those constraints on myself.”

Other standout AI models recently made for music include Magenta’s Piano Genie. A version of Piano Genie called Fruit Genie was played onstage by the Flaming Lips at Google I/O last month.

Castro’s tandem performances with AI may incorporate other recently introduced music models, such as Magenta’s Music Transformer, which generates piano melodies, and OpenAI’s MuseNet, to spark more improvised composition. In March, Google created a Music Transformer-powered tool that begins with keys chosen by a person, then generates music that sounds like the work of Johann Sebastian Bach.

“The whole point of it is to explore the space of human-machine collaboration, so the compositions would be written for this collaboration rather than trying to take a system I built externally and putting it into a song that I composed for human-only trio,” Castro said.

“What I want to do is essentially have each song explore a different type of machine learning model, and they don’t necessarily all have to be music-generating models. The idea is to see how you can integrate different machine learning technologies into composition or improvisation in a way that produces music that likely wouldn’t have come out that way had you not been trying to incorporate these machine learning technologies.”

Castro distinguishes his model from some others used by creatives in that it must receive human input in order to operate.

To Castro, what defines art is human purpose, shaped by a person’s history and humanity.

“For me the question of is it art or not really boils down to where is the purpose coming from, and I don’t see any model having any purpose right now,” he said. “It’s the humans that put that in there.”
