Generative artificial intelligence (AI) has rapidly progressed to a point at which it is capable of creating all types of media, including visual art and music. While there is certainly much more to come in this area as AI continues to improve and artists develop new ways to harness (or push back against) this technology, even now it is crucial to consider a host of ethical and legal issues that arise in the world of generative AI and the content it produces.
First, a quick note about how many of the most popular AI music generators function. These tools commonly develop and evolve through a process known as deep learning, in which an AI system is fed a database of information that it analyzes for rules, trends, and patterns. A generative AI trained in this way eventually reaches a point at which it can use the rules and patterns it has detected to create its own content, which may then be fed back through the system to measure how well (or not) it matches the information in the database. In the case of AI music generators, the training data is often musical examples, recordings, sounds, and similar materials.
A key point when examining the legal implications of music generated by deep-learning AI is that some of these systems will, whether by design or not, incorporate bits and pieces of the data they have analyzed into their own new creations. For example, some image-generating AI tools have taken components of existing images to create a new picture. When the training data for a music AI tool consists of copyrighted songs and sounds, it’s easy to imagine how this can quickly become a legal minefield, raising issues for artists, listeners, and regulators.
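The learn-then-regurgitate dynamic described above can be illustrated with a deliberately simplified sketch. This is not how production deep-learning music systems work; it is a toy Markov-chain model (all function names and the sample melodies are invented for illustration) that "learns" note-to-note transitions from training melodies and then generates new sequences. Because it samples directly from observed transitions, its output can reproduce fragments of its training data, which is the same property that raises copyright questions for real generators:

```python
import random

def train(melodies):
    """'Learn' patterns: count which note follows which across all training melodies."""
    transitions = {}
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Create 'new' content by sampling the learned transitions."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:  # dead end: no observed continuation for this note
            break
        melody.append(rng.choice(options))
    return melody

if __name__ == "__main__":
    # Hypothetical copyrighted training melodies
    training_data = [
        ["C", "E", "G", "E", "C"],
        ["C", "D", "E", "D", "C"],
    ]
    model = train(training_data)
    # Every adjacent pair in the output is copied from the training data
    print(generate(model, "C", 6))
```

Note that every consecutive pair of notes the toy model emits was lifted verbatim from its training set — a miniature version of the "bits and pieces" problem the article describes.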
Copyright Concerns
Google has created a powerful AI music tool called MusicLM, designed to take a text input and turn it into music. However, Google has yet to release MusicLM to the public, citing risks of misappropriation of creative material. Another representative case illustrating the new arena of AI-related copyright concerns involves a recently released pop song that appeared to highlight vocals by artists The Weeknd and Drake. In actuality, however, the song utilized AI reproductions of both artists’ vocals, and the artists themselves were not featured in the creation of the song at all. Should The Weeknd and Drake be paid for the AI-generated vocals? What about Spotify, Apple Music, YouTube, or other platforms that might host the audio content? What might “fair use” look like in a world in which AI tools have access to limitless content?
While some musicians have pushed back against what they see as a threat to their intellectual property and livelihoods, others are embracing a new approach to royalties, copyrights, and related matters. The pop artist Grimes has taken the latter perspective, recently encouraging generative AI users to utilize AI versions of her own voice and in fact offering to split royalties on any such music.
In a sense, the questions looming around who gets paid for AI-generated music are akin to those that arose at the advent of mp3-sharing services like Napster, or when streaming services like Spotify launched. In each of those cases, a prolonged period of exploration, stakeholder negotiation, and legal battles led to a new sense of normal. Where things will end up with AI-generated music and copyright remains to be seen.
Ownership
A second and related legal concern has to do with who might claim ownership of an AI-generated song. Besides artists whose voices or other sounds might appear in such a track, it is possible that the user of a generative AI program could claim ownership. Similarly, the developers behind the AI itself could assert a claim. Already, established music companies are attempting to draw lines in the sand: Universal Music Group told streaming platforms including Spotify and Apple in early 2023 to block AI systems that might scrape its music for deep learning or other applications. Still, some might argue that popular music is already well down a path toward a new definition of ownership: in 2022 alone, one out of every five hits on the Billboard Top 100 was based on samples.
Ethical Dilemmas
Aside from the legal ramifications of AI-based music, there are philosophical and ethical considerations as well. In a world in which an AI tool can be used to perfectly mimic another artist’s voice, should a user of that AI system be required to obtain permission from the artist? What recourse does an artist have if they do not want their voice to be used?
A concern among many music professionals is that, as AI advances, it will become increasingly difficult for listeners to distinguish human-made music from machine-generated sounds. In that scenario, should it be required that AI-based music be presented with some sort of label or identifier so that listeners can tell the difference?
There are deeper questions about appropriation as well. In August 2022, Capitol Records signed a virtual rapper, FN Meka, created in part using AI technology. However, the music company later dropped Meka following a backlash among listeners who claimed he exploited the Black and rap communities, made light of important societal issues, and promulgated stereotypes. The example above of an AI user creating a song using computer-generated vocals by Drake and The Weeknd has received similar backlash for what some see as a tendency to create Black art without the input of Black artists. These issues likely represent just a glimpse of what may be to come.
AI systems designed to create music are already powerful and will undoubtedly become even more so in the future. While they provide tremendous artistic potential, they also raise a host of dilemmas and questions, the solutions to which remain elusive.
Cheat Sheet
- Artificial intelligence (AI) is capable of creating all kinds of media, including music, opening the door to a new era of creative expression while also raising important legal and ethical questions.
- Music-generating AI systems that use deep learning methods may create audio tracks that include excerpts of the sounds they were trained on, opening up copyright concerns.
- Google has developed a generative AI for music, MusicLM, but it has not yet released this tool out of concern over copyright and related issues.
- A pop song generated using AI-based vocals mimicking Drake and The Weeknd drew scrutiny over issues of ownership, payment, and appropriation.
- 20% of all Billboard Top 100 songs in 2022 made use of samples, suggesting that notions of ownership in the music world may be increasingly fluid.
- Ethical questions surrounding AI music include whether artists’ voices or likenesses may be used without their permission, what recourse artists have if their likenesses are used in ways they don’t like, and more.
- A “virtual rapper,” FN Meka, was signed to Capitol Records in 2022 and subsequently dropped following a backlash accusing the creation of appropriating elements of Black and rap cultures.