Artificial intelligence (AI) is already transforming the world around us in countless ways, from automating tasks typically done by humans to creating new images and even helping scammers find new ways to cheat people out of their money. It was only a matter of time, then, before AI began to impact the world of music. AI’s influence in music can already be felt (or, more accurately, heard), and AI will likely continue to transform the way that music is created, talked about, and even treated legally.

Music professionals already know that AI is simply the latest in a long line of emerging technologies to upend the sonic landscape. In decades past, the advent of recording technology, radio, and increasingly powerful amplification and editing tools pushed music to grow and develop into its current state. Below, we explore in detail how AI-like systems have interacted with music and musicians in the past before providing an overview of the current landscape.

Algorithms and Computer-Generated Music

Believe it or not, the earliest computer-generated music dates back to the 1950s. The Ferranti Mark 1 computer at the University of Manchester gave early experimenters the chance to use a computational machine to play back melodies, and researchers soon began using sets of rules, called algorithms, to compose entirely new works. The earliest such work is generally considered to be the Illiac Suite (1957), composed by Lejaren Hiller and Leonard Isaacson with the help of the ILLIAC I computer at the University of Illinois.
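To give a flavor of what a "set of rules" means in practice, here is a minimal Python sketch of rule-based note generation: random pitches are proposed and kept only if they pass simple melodic rules. The rules and pitch set are invented for illustration and are not the procedures Hiller and Isaacson actually used.

```python
import random

# Minimal sketch of rule-based note generation: propose random pitches and
# accept only those that satisfy simple melodic rules. (Illustrative rules,
# not the actual procedures behind the Illiac Suite.)

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, C4 to C5

def acceptable(melody, candidate):
    if not melody:
        return True                    # any first note is fine
    leap = abs(candidate - melody[-1])
    if leap > 7:                       # rule: no leap larger than a fifth
        return False
    if candidate == melody[-1] and len(melody) > 1 and melody[-1] == melody[-2]:
        return False                   # rule: at most two repeated notes
    return True

melody = []
while len(melody) < 16:
    candidate = random.choice(C_MAJOR)
    if acceptable(melody, candidate):
        melody.append(candidate)

print(melody)
```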


Stochastic Processes

The next generation of computer-music pioneers included the Greek composer Iannis Xenakis, who utilized stochastic processes: sequences of random events whose overall behavior can nonetheless be analyzed statistically. Xenakis and others around this time used computers to draw the structure, pitches, and other parameters of a piece of music from probability distributions.
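A toy version of the stochastic idea might look like the following sketch, in which each note's pitch, duration, and dynamic are drawn from probability distributions rather than chosen by hand. The specific distributions here are assumptions for illustration, not Xenakis's actual formulas.

```python
import random

# Toy stochastic score: each note's parameters come from probability
# distributions rather than being chosen by hand. (Distributions are
# assumptions for illustration, not Xenakis's actual formulas.)

random.seed(42)  # fixed seed so the "random" output is reproducible

def stochastic_note():
    pitch = round(random.gauss(64, 8))            # pitches cluster around E4
    duration = round(random.expovariate(2.0), 2)  # many short notes, few long
    dynamic = random.choices(["pp", "mf", "ff"], weights=[5, 3, 1])[0]
    return pitch, duration, dynamic

score = [stochastic_note() for _ in range(10)]
for note in score:
    print(note)
```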

Development of Artificial Musical Intelligence

While early experiments in computer-based music tasked proto-AI systems with recognizing patterns, developers of more advanced systems in the 1980s and 1990s aimed to teach computers how different musical elements function, thereby giving them the capacity to engage in generative modeling.

AI systems during this period might take elements from pre-existing pieces of music, separate them into parts, analyze them, then recombine them to create new music. For this to happen, a large collection of music and related attributes would first need to be encoded into a database. The AI system could then extract certain elements based on identifiers. Then, using a system of rules, the AI would recombine these smaller segments into a full-scale musical output.
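A heavily simplified sketch of that pipeline appears below: a hypothetical "database" of tagged phrase fragments is queried by identifier and reassembled under a simple ordering rule. All data and rules here are invented for illustration.

```python
import random

# Sketch of the extract-and-recombine approach: fragments of existing music
# are stored with identifiers, pulled out by role, and reassembled under a
# simple ordering rule. (All data and rules invented for illustration.)

database = [
    {"id": "opening", "source": "piece_a", "notes": [60, 64, 67]},
    {"id": "opening", "source": "piece_b", "notes": [62, 65, 69]},
    {"id": "middle",  "source": "piece_a", "notes": [67, 65, 64, 62]},
    {"id": "middle",  "source": "piece_c", "notes": [69, 67, 65]},
    {"id": "cadence", "source": "piece_b", "notes": [62, 59, 60]},
    {"id": "cadence", "source": "piece_c", "notes": [65, 62, 60]},
]

def extract(role):
    """Pull every fragment whose identifier matches the requested role."""
    return [f for f in database if f["id"] == role]

# Rule system: a new piece is built as opening -> middle -> cadence.
plan = ["opening", "middle", "cadence"]
new_piece = []
for role in plan:
    fragment = random.choice(extract(role))
    new_piece.extend(fragment["notes"])

print(new_piece)
```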

Last 15 Years: Development of Unique Style and Sound

The Iamus project, launched in 2010, is an AI system designed to create new classical music in its own unique style. Iamus utilizes algorithms in much the same way as the earliest music-generating computers did, although at a much more advanced level. It randomly generates a piece of music, which is then subjected to a series of tests to determine how well it fits established rules of genre, music theory, and other systems. By iterating this process, Iamus is able to produce music that conforms ever more closely to those rules, in a variety of styles.
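That generate-and-test cycle can be illustrated with a toy loop like the one below, in which random candidate melodies are scored against simple scale and voice-leading rules and the best survivor is repeatedly mutated. Iamus's real evolutionary machinery is far more sophisticated; only the iterate-and-score idea is shown here.

```python
import random

# Toy generate-and-test loop: score random melodies against simple rules,
# keep the best, mutate, and repeat. (Iamus's real evolutionary system is
# far more elaborate; only the iterate-and-score cycle is shown.)

SCALE = {0, 2, 4, 5, 7, 9, 11}  # C-major pitch classes

def score(melody):
    in_scale = sum(1 for p in melody if p % 12 in SCALE)      # scale rule
    smooth = sum(1 for a, b in zip(melody, melody[1:])
                 if abs(a - b) <= 4)                          # small-leap rule
    return in_scale + smooth

def mutate(melody):
    copy = melody[:]
    i = random.randrange(len(copy))
    copy[i] += random.choice([-2, -1, 1, 2])  # nudge one note up or down
    return copy

best = [random.randint(55, 79) for _ in range(16)]  # random starting piece
for _ in range(2000):
    candidate = mutate(best)
    if score(candidate) >= score(best):
        best = candidate

print(best, "rule score:", score(best))
```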

NSynth is another recent example of an AI tool used in the process of music creation. Unlike the examples above, NSynth does not aim to create whole musical works. Rather, it uses neural networks to generate new individual sounds that can then be sampled or sequenced as part of a larger creative process. NSynth is an example of an AI system that aims to enhance the abilities of human musicians, in this case by expanding the range of sounds they have at their disposal.
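NSynth itself interpolates between instrument timbres in a learned neural latent space. As a rough stand-in for that idea, the sketch below simply blends the frequency spectra of two synthetic tones to produce an in-between sound; this is not NSynth's actual method, just a crude analogy using NumPy.

```python
import numpy as np

# Crude analogy for timbre interpolation: blend the magnitude spectra of
# two synthetic tones to get an in-between sound. (NSynth's real method is
# a neural autoencoder over a learned latent space; this is not it.)

sr = 16000                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of time samples

# Two source "instruments": a bright, harmonically rich tone and a pure sine.
tone_a = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 8))
tone_b = np.sin(2 * np.pi * 220 * t)

spec_a, spec_b = np.fft.rfft(tone_a), np.fft.rfft(tone_b)

alpha = 0.5  # 0.0 = all tone_a, 1.0 = all tone_b
blended = (1 - alpha) * np.abs(spec_a) + alpha * np.abs(spec_b)
hybrid = np.fft.irfft(blended * np.exp(1j * np.angle(spec_a)))

print("hybrid sound generated:", hybrid.shape[0], "samples")
```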


AI has also found its way into editing and production software, where it helps power apps and plugins that can do everything from converting one kind of sonic input into another to analyzing the pitch content of a sonority and systematically altering those pitches.
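As a toy example of "analyzing the pitch content of a sonority and systematically altering those pitches," the sketch below remaps each note of a chord to the nearest pitch of a target scale. Real plugins operate on audio signals; this hypothetical example uses MIDI note numbers for clarity.

```python
# Toy pitch-processing "plugin": analyze the pitch classes in a sonority,
# then remap each note to the nearest pitch of a target scale. (Real
# plugins work on audio; this hypothetical sketch uses MIDI numbers.)

C_MINOR = {0, 2, 3, 5, 7, 8, 10}  # pitch classes of the target scale

def snap_to_scale(pitch, scale=C_MINOR):
    """Move a MIDI pitch to the nearest pitch whose class is in the scale."""
    candidates = [pitch + o for o in range(-6, 7) if (pitch + o) % 12 in scale]
    return min(candidates, key=lambda p: abs(p - pitch))

chord = [61, 64, 68, 71]  # incoming sonority: C#4, E4, G#4, B4
analysis = sorted({p % 12 for p in chord})
altered = [snap_to_scale(p) for p in chord]

print("pitch classes found:", analysis)
print("altered sonority:", altered)
```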

Recent Trends in AI and Music

Some of the most recent AI tools used in music creation also make use of neural networks, digging through and analyzing large collections of musical examples to learn patterns and common features. Aiva is one of the most popular of these tools, capable of creating music that is essentially indistinguishable from some human compositions.
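At a miniature scale, the pattern-learning idea looks something like the sketch below: a small recurrent neural network (built with PyTorch) is trained to predict the next note in a tiny corpus, then sampled to generate new material. Tools like Aiva train on vastly larger collections; the corpus and model here are purely illustrative.

```python
import torch
import torch.nn as nn

# Toy pattern-learner: a small recurrent network is trained to predict the
# next note of a tiny corpus, then sampled to produce new material.
# (Everything here is illustrative; real tools train at far larger scale.)

corpus = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]
vocab = sorted(set(corpus))                   # the distinct pitches seen
idx = {p: i for i, p in enumerate(vocab)}
data = torch.tensor([idx[p] for p in corpus])

class NextNoteModel(nn.Module):
    def __init__(self, vocab_size, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)

model = NextNoteModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Train: predict each note from the notes that precede it.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x).reshape(-1, len(vocab)), y.reshape(-1))
    loss.backward()
    opt.step()

# Generate: sample new notes by feeding predictions back into the model.
with torch.no_grad():
    seq = [idx[60]]
    for _ in range(12):
        probs = torch.softmax(model(torch.tensor([seq]))[0, -1], dim=0)
        seq.append(torch.multinomial(probs, 1).item())

print("generated pitches:", [vocab[i] for i in seq])
```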

But AI tools are also increasingly seen not as a replacement for human musicians, but as an enhancement. AI systems can be used to create melodies, harmonies, chord progressions, synth sounds, and more, all of which a human musician can incorporate into their own works.

As AI becomes a more powerful tool in the world of music, it also raises questions and dilemmas. For instance: who owns (and who profits from) a piece of music created by AI? Will AI-based music replace that of human creators? And what role will AI play in plagiarism lawsuits, which are increasingly common in the music space?

Cheat Sheet

  • Musicians have used computers to help generate music since at least the 1950s.
  • Some of the earliest examples of computer-based music include works composed using rule-based algorithms, stochastic processes, and generative modeling.
  • Since 2000, computer-generated music has developed rapidly. Iamus, launched in 2010, creates works of music in its own unique style, while NSynth makes new sounds for songwriters and composers to use in their work.
  • Some of the latest AI systems used to make music are neural networks designed to analyze huge databases of musical examples for trends and commonalities. These systems then create new works based on the patterns they find.
  • AI tools are increasingly seen as a way to expand the work of human musicians, though there remains concern that AI could instead replace human creators.
