By Nathan Reiff
4 min read
To create music, or any creative output at all, an artificial intelligence (AI) system must first learn the basic patterns and elements of music. The most common way it does this is through machine learning, a process in which the system takes in a huge amount of data (in this case, recorded music, sheet music, or something similar) and analyzes it. From this data, the system learns the common patterns and rules governing the music in the data set, and it can then attempt to create output of its own using those same rules. The first results may be crude and difficult to listen to, but as the system “learns” how to bring its creations closer to the examples in the data set, the musical output improves.
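The models that production systems use vary widely, but the core learn-from-examples idea can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the method of any particular product: it uses an invented pair of training melodies, counts which note tends to follow which, and then samples a new melody from those learned statistics.

```python
import random
from collections import defaultdict

# Toy illustration of "learning patterns from data": count which note tends
# to follow which in the training melodies, then sample a new melody from
# those learned transition statistics. The training data is invented.

training_melodies = [
    ["C", "D", "E", "C", "G", "E", "D", "C"],
    ["C", "E", "G", "E", "D", "C", "D", "E"],
]

# Record every observed transition; duplicates make common transitions
# proportionally more likely to be sampled later.
transitions = defaultdict(list)
for melody in training_melodies:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def generate(start: str, length: int) -> list[str]:
    """Sample a new melody by repeatedly choosing a likely next note."""
    melody = [start]
    for _ in range(length - 1):
        candidates = transitions.get(melody[-1])
        if not candidates:  # dead end: no observed continuation
            break
        melody.append(random.choice(candidates))
    return melody

print(generate("C", 8))  # e.g. ['C', 'D', 'E', 'G', 'E', 'D', 'C', 'D']
```

Deep neural networks replace these simple counts with millions of learned parameters, but the principle is the same: the output is shaped entirely by patterns found in the training data.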
AI systems of all kinds typically rely on human input to set their tasks or learning goals. Consider, for example, an art-generator AI tool that takes a bit of text from the human user and creates an image to match it. In the case of music, one way users can work with AI systems is to provide some of the parameters for the musical creation itself: chords, melodies, beats, or representative songs and works.
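As a rough sketch of how user-supplied parameters can steer generation, the snippet below constrains melody notes to the tones of chords the user has chosen. The chord dictionary and progression are invented for this example; real tools expose far richer controls.

```python
import random

# Hypothetical example of user parameters steering generation: the user
# supplies a chord progression, and melody notes are drawn only from each
# chord's tones.

CHORD_TONES = {
    "C": ["C", "E", "G"],
    "F": ["F", "A", "C"],
    "G": ["G", "B", "D"],
}

def melody_over(progression: list[str], notes_per_chord: int = 2) -> list[str]:
    """Pick melody notes constrained to the tones of each user-chosen chord."""
    melody = []
    for chord in progression:
        for _ in range(notes_per_chord):
            melody.append(random.choice(CHORD_TONES[chord]))
    return melody

print(melody_over(["C", "F", "G", "C"]))  # e.g. ['E', 'G', 'A', 'C', 'B', 'D', 'C', 'E']
```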
An example of an algorithm using this type of input based on musical building blocks is an effort by researchers at Rutgers University to complete Ludwig van Beethoven’s 10th Symphony (Beethoven finished nine symphonies, ending with his famous 9th, but left only sketches for a 10th when he died in 1827). The team of music scholars and AI experts created an algorithm that let them feed in elements of Beethoven’s other works so that the system could learn, as closely as possible, how Beethoven composed.
Notably, the Rutgers Beethoven project was not driven entirely by AI. After the system generated musical output based on its analysis of Beethoven’s other works and the incomplete sketches he left for the 10th Symphony, the team sent that output to a music scholar for review. This scholar ultimately selected and ordered the samples the AI provided, making the project a true human-AI collaboration.
The example above illustrates a key dilemma for any programming team building an AI designed to make music. On their own, AI systems cannot gauge or evaluate the quality of their musical output. Human taste in music is largely subjective; we all have preferences and dislikes. An AI has no comparable sense of taste: it can evaluate its own creations only by how closely they match the music it has already analyzed. And because some of the most famous works of music break the common rules of composition, an AI is unlikely ever to step outside those boundaries intentionally.
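To make that limitation concrete, here is a hypothetical scoring function built on the same invented transition counts as the earlier sketch. The model can rate a melody only by how probable its note-to-note steps were in the training data, so a conventional phrase scores well while a rule-breaking one scores zero, however interesting a human listener might find it.

```python
from collections import Counter

# Sketch of why a model's "evaluation" is really just resemblance to its
# training data. The counts below mirror the invented training melodies
# from the first snippet; nothing here reflects a real product.

transition_counts = {
    "C": Counter({"D": 2, "E": 1, "G": 1}),
    "D": Counter({"E": 2, "C": 2}),
    "E": Counter({"D": 2, "C": 1, "G": 1}),
    "G": Counter({"E": 2}),
}

def score(melody: list[str]) -> float:
    """Average probability of each step under the learned transitions."""
    probs = []
    for current, following in zip(melody, melody[1:]):
        counts = transition_counts.get(current, Counter())
        total = sum(counts.values())
        probs.append(counts[following] / total if total else 0.0)
    return sum(probs) / len(probs)

print(score(["C", "D", "E", "C"]))   # conventional phrase: scores well (~0.42)
print(score(["C", "B", "F#", "A"]))  # rule-breaking phrase: scores 0.0
```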
This is where the human component comes into play, as in the example above. We explore more of the ways that humans are increasingly involved in AI-generated music creation in the next chapter of this course.