Artificial intelligence and machine learning are big topics these days. What does Create Music have to do with it all?
Mention artificial intelligence and music together in a crowded room and, depending on whether techies or musicians are present, you will get drastically different reactions. The modern techie might think AI is the answer to all music, and that, like so many other occupations, musicians' days are numbered; AI, after all, can learn any music genre and repeat it. Musicians, on the other hand, would disagree. They would argue that artists are the only ones truly safe from automation, since AI can never be creative.
There is a third group that argues the two can converge. It's mainly composed of marketers, like Spotify, who see AI as a tool to create playlists or sort genres, and of composers who see AI as a way to augment their creativity through automation or by developing and finding patterns.
Create Music seeks to have the most natural sound possible in our stock music. In doing so, we've rejected any influence of artificial intelligence. Though what our app does might seem like a sort of AI wizardry, it's really just a lot of hard work from our musicians, and some clever algorithm writing.
Pierre Langer, Create Music’s founder, says in our latest interview with him, “That’s the thing for some, the main solution, to have an artificial intelligence or a neural network, right? Music on the fly and replace composers. And I always thought this idea is not the right choice, because you want to have the creative input and emotional coloring of a composer.”
What is artificial intelligence?
It’s important to understand what AI means in this context. It’s not a Terminator army from the future patrolling the streets for hiding musicians, ready to gun them down. Though maybe that is the future, who knows.
What is meant by “artificial intelligence” is a concept known as “machine learning”. A computer program can be fed data in which it finds patterns, then it repeats those patterns based upon what it thinks will lead to the best possible output. “Best” being defined by the engineer.
Yufeng Guo, a developer for Google Cloud, puts it this way in an easy-to-understand video: "In machine learning, the goal… is to create an accurate model that answers our questions correctly most of the time." In Yufeng's example, they wanted a model that could answer whether a drink was beer or wine. They fed it a ton of data about both drinks – here, color and alcohol percentage – so that the computer could predict whether a drink with a given combination of the two is wine or beer.
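To make the idea concrete, here is a minimal sketch of that beer-vs-wine example as a nearest-neighbor "model". The data points, feature scaling, and labels are all invented for illustration; a real system would learn from far more data, but the principle – predict based on the closest previously seen examples – is the same.

```python
# Hypothetical training data: (color darkness 0-1, alcohol %), label.
TRAINING_DATA = [
    ((0.2, 4.5), "beer"),   # pale lager
    ((0.4, 5.0), "beer"),   # amber ale
    ((0.7, 12.5), "wine"),  # red wine
    ((0.3, 11.0), "wine"),  # white wine
]

def predict(color: float, alcohol: float) -> str:
    """Return the label of the closest training example (1-nearest-neighbor)."""
    def distance(example):
        (c, a), _label = example
        # Divide alcohol by 10 so both features weigh roughly equally.
        return (c - color) ** 2 + ((a - alcohol) / 10) ** 2
    _features, label = min(TRAINING_DATA, key=distance)
    return label

print(predict(0.25, 4.8))   # a pale, low-alcohol drink -> "beer"
print(predict(0.60, 13.0))  # a dark, high-alcohol drink -> "wine"
```

The "model" here is just the stored data plus a distance rule; what counts as the "best" answer (the engineer's definition, as the article notes) is baked into how the features are weighted.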
Another example is in Facebook’s tagging. You’re teaching the Facebook algorithm what people look like every time you tag someone. The goal of this learning is to develop a model that can, based on the previous data, predict who is who in a picture. The more data it has – the more tagged pictures – the more accurate it will be.
Can AI create?
Given this description of machine learning and artificial intelligence, AI has huge drawbacks in creative spheres. Don't get me wrong: artificial intelligence can make things, but so far it can't really create things.
Take, for example, Deep-speare, a computer program designed to write Shakespearean-style poetry. Its creators fed 2,700 sonnets into the AI in an effort to teach it to craft poetry. "Deep-speare independently learned three sets of rules that pertain to sonnet writing: rhythm, rhyme scheme, and the fundamentals of language," the creators of the project said in an article for IEEE Spectrum. But upon evaluation by an expert in literature, Adam Hammond, they found that Hammond "could easily tell which poems were generated by Deep-speare."
For example, one stanza by Deep-speare:
“Yet in a circle pallid as it flow,
by this bright sun, that with his light display,
roll’d from the sands, and half the buds of snow,
And calmly on him shall infold away.”
As the creators put it, it’s “nonsensical when you read it closely, but it certainly ‘scans well,’ as an English teacher would say – its rhythm, rhyme scheme, and the basic grammar of its individual lines all seem fine at first glance.”
But what about music?
Music has the same problem as poetry. When AI makes music, it certainly ‘scans well’, and can certainly fool a lot of non-musically inclined people. Music is a much broader and more open form of art than poetry though. That means the ability for an AI to create it is that much more diminished. How can you form meaningful patterns and produce something new when the terms are so wide open?
Probably the best-functioning examples today of an AI producing music are AIVA and Jukebox. The CEO of AIVA, Pierre Barreau, explains in a TED talk that his team taught their AI 30,000 pieces from history's "greatest composers". The TED talk features an orchestra playing a piece written by AIVA. Upon comparing it to the music available on their site, it's clear that a composer finessed the piece.
“Music is also a super subjective art,” Barreau says. “And we needed to teach AIVA how to compose the right music for the right person, because people have different preferences.”
Jukebox is probably even more fascinating, as it covers modern genres and singing with surprising alacrity. It uses a similar method, of listening to thousands of other examples and reproducing something that sounds analogous.
It’s all about patterns
When you first listen to AIVA or Jukebox, it sounds incredible. But as the creators of Deep-speare point out in their article, there’s a certain thing called the Eliza effect. This is the “willingness to look past obvious errors in order to marvel at the wonder of AI.” Once the listener is past the Eliza effect, AIVA is much less dazzling. But why?
If you are only studying patterns and writing based on those patterns, how are you going to create anything new? And though Barreau claims he's reviving Beethoven, he's not. He's only making something that can produce variations of what's been written before and predict what will sound acceptable next. When you look at the famous classical composers, they're famous precisely because they broke with their contemporaries and with history – because they broke patterns instead of following them.
Look at what Beethoven, Mozart, or Chopin were doing, and look at what was happening before them. Look at the modernists who developed the 12-tone technique, and ask yourself whether a computer could come up with something that sounds so bad by the rules set by its predecessors. Or at later jazz – something that sounds so different it's good. If you gave a computer the past twenty years of data, how would it jump to Billie Eilish, Skrillex, or Grimes based only on data from before those musicians?
“You can have a machine compose music,” Pierre says, “but that is like producing forms that it learned, and sometimes it is actually pretty impressive. And in some genres it’s stunning… It sounds like Bach or Beethoven or whoever, but it’s just forms. Music is so rich in all the different styles and influences and then there’s the sound and what instrument is used and why and all these decisions that you have to make that are emotional.”
Does Create Music use AI?
“We can use tech to make music more ‘intelligent’ and enhance its usability,” Pierre says. “But it’s not AI, it’s software.” All the stock music that Create Music features has been written by flesh-and-blood composers. There is no AI in the writing, nor in any of the features that seem like magic.
Of course, the most magical feature is that when a user changes the length of a track, the music seems to “rewrite” itself to the appropriate length. But this is due to our composers writing several variations of different lengths and in different parts, called stems. Our software then links the stems together to create the necessary timing. It’s a seriously amazing effect to watch and play with, but it’s not at all artificial intelligence.
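Create Music's actual software isn't public, but the stem-linking idea above can be sketched in a few lines. This is a hypothetical illustration: the stem names, durations, and the brute-force search are all invented, and a real arranger would also handle musical transitions between stems.

```python
from itertools import combinations_with_replacement

# Hypothetical pre-composed stems with known durations (seconds).
INTRO = 4.0
OUTRO = 6.0
MIDDLES = [8.0, 12.0, 16.0]  # alternative middle-section variations

def arrange(target: float) -> list[float]:
    """Chain intro + middle stems + outro so the total duration
    is as close as possible to the requested target length."""
    best = None
    # Try every combination of up to three middle stems.
    for n in range(4):
        for combo in combinations_with_replacement(MIDDLES, n):
            candidate = [INTRO, *combo, OUTRO]
            if best is None or abs(sum(candidate) - target) < abs(sum(best) - target):
                best = candidate
    return best

print(arrange(30.0))  # e.g. intro + two middle stems + outro totaling 30s
```

No learning happens here: the software only selects and orders material the composers already wrote, which is why, as the article argues, it's software rather than AI.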