My Hot Takes from the Copyright and A.I.: Music and Sound Recordings Listening Session

Today the Library of Congress held the last of its Spring listening sessions on the use of artificial intelligence to generate works in creative fields; this final session focused specifically on the impact of generative AI on music and sound recordings.

I’m so glad I sat in on these two panels! A lot was discussed, of course, but I figured I’d lay out my biggest takeaways below while they’re still fresh in my mind. Video recordings and transcriptions from today’s roundtables will be available to the public about three weeks from now (viewable on this page, as are the previous sessions focused on other creative industries), so you’ll be able to hear for yourself how the industry is poised to adapt to our new AI reality. In the meantime, you can see the line-up of panelists who contributed today here.

Three Main Points (in no particular order):

The three points that stuck out to me today were:

  • the need for more transparency on both the “input” side (AI tech companies’ use of copyrighted material to train their models) and the “output” side (when and how AI has been used in the creation of new music)
  • clearer guidelines on the use of AI that include the consent of the original artist(s) affected, as well as equitable copyright treatment of music whether or not AI was used in its creation
  • the risk of overreach or otherwise restricting creative freedom and the democratization of music-making

On Transparency and Licensing…

  • Metadata quality is already a problem in the industry today, and recording AI contributions in a song’s metadata is more important than ever to ensure that royalties and proper attribution go to the correct creators.
    • We need a way to reliably label when AI was used in the generation of a song (a rough sketch of what such a label could look like follows this list).
  • Regan Smith from Spotify pointed out that it is impossible for DSPs (digital streaming platforms) to police the use of AI generation in the songs they receive on their own. DSPs rely on accurate metadata being submitted, so the onus of accurate reporting falls on a song’s creators and publishers.
    • She also pointed out that the business model of streaming platforms is based on what is listened to, not what is (merely) uploaded. There has been a clear rise in AI-generated music uploaded to Spotify, and if the public likes it, that is what will be shared most widely on the platform.
  • Several panelists involved in representing and publishing artists stated that AI tech companies have been reluctant to negotiate any licensing agreements covering the copyrighted material on the “input” side of the equation; they seem to prefer litigating each use in court on a case-by-case basis. As of right now, there is no incentive for tech companies to “come to the table” and negotiate with musical artists over the use of their personal IP (intellectual property) and brand.
    • This model is clearly stacked against independent artists who do not have the backing or resources of giant labels and organizations like UMG (Universal Music Group).
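
To make the labeling idea above concrete, here is a minimal sketch of what an AI-disclosure field attached to a track’s delivery metadata could look like. It is purely illustrative: the field names, types, and structure are my own assumptions and do not correspond to any existing DDEX, DSP, or label specification.

```typescript
// Hypothetical shape for disclosing AI involvement alongside ordinary track
// metadata when a song is delivered to a streaming platform. All field names
// are invented for illustration; no real delivery spec is implied.
interface AiContribution {
  tool: string;                           // generative tool or model that was used
  role: "composition" | "vocals" | "mixing" | "mastering" | "other";
  usedCopyrightedTrainingData?: boolean;  // unknown if omitted
  licenseReference?: string;              // pointer to a licensing deal, if one exists
}

interface TrackMetadata {
  title: string;
  artists: string[];
  isrc?: string;                          // standard recording identifier, if assigned
  aiContributions: AiContribution[];      // an empty array means "declared fully human-made"
}

// Example of a declaration a publisher might submit with an upload:
const example: TrackMetadata = {
  title: "Example Song",
  artists: ["Example Artist"],
  aiContributions: [
    { tool: "vocal-synthesis-model", role: "vocals", licenseReference: "DEAL-0001" },
  ],
};
```

Even a simple structure like this would give DSPs something concrete to rely on when routing royalties and attribution, which is exactly the point about the onus of accurate reporting falling on creators and publishers.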

On Legal Guardrails…

  • Obtaining permission and arranging licensing deals (paying a creator for the use of their IP, whether on the “input” or “output” side of AI use) on a case-by-case basis shouldn’t be a problem. According to several panelists, including Kenneth Doroshow from the RIAA, we already have a robust licensing framework in place for obtaining rights to use samples, create remixes or derivative works, and so on; extending licensing requirements to those employing generative AI tools shouldn’t be an issue. It just needs to be done in a thoughtful manner (many tech companies prefer the “move fast and break things” model, however).
  • It was argued that, with the documented lack of clarity around metadata and rights across the music industry, there remains too much confusion about who has the right to license any given part of a piece of music. So big tech companies would rather have a compulsory license: blanket permission to “scrape” from any music out in the world today.
  • The time to talk about this is now. AI is becoming an ingrained part of standard tools everywhere, including music creation; we are already seeing its use in DAWs (digital audio workstations) and other plug-in oriented tools. There will likely come a point at which even a creator won’t know if or when generative AI was used in the creation of their own work.
  • Ryan Groves spoke of his experience using AI in a cutting-edge way for video game creators and players: “Infinite Album”, whose copyright-safe, infinitely generative AI music adapts instantly to the game as it’s played and can be adjusted by players (via genre, emotion, instruments, or added sound effects). The artists whose work Infinite Album synthesizes from want to be able to monetize that work, because the AI is only basing its output on what human artists have put in. But current copyright rules offer no way to do this: to copyright a sound recording, you need to submit a reference recording, and no such recording can exist for music that is constantly in flux depending on a player’s decisions.
  • Currently the “right of publicity” is a hodgepodge of laws that vary in scope and depend on location (in the USA, each State has its own publicity law, if it has one at all). Most of the panelists today agreed that we should be seeking a federal standard for this right, applicable to all States, especially given the sudden flood of “deepfake replicas” like the famous Drake fiasco.

On Creative Freedom and Opportunities…

  • Alexander Mitchell from Boomy made an interesting point: there is a newly emerging “creator class”, a class of music creators who use technology in place of traditional musical training. The question was raised: why should we restrict how a creator creates? Should an artist who uses AI be paid less in royalties? That is like saying an artist who uses Auto-Tune or any of the myriad other digital tools we already have should be worth less in the market than someone who uses traditional methods of creation.
  • This was also interesting to me: imagine a future of “interactive music”, where AI allows anyone (consumers, not just other artists) to participate and engage with their favorite artists on a whole other level by actively manipulating a song, with AI assistance, based on their most beloved music. It would give listeners a chance to engage in ways beyond “static listening”.
    • Maybe listeners love a certain artist but wish there was “more cowbell”; with an interactive system in place, AI could generate a hit song from any genre with more cowbell… or more acoustic instruments, or more dubstep beats, or less bass, etc. (a toy sketch of what such a listener request might look like follows this list).
  • The argument was raised that remixing and creating derivatives of old music with AI breathes new life into forgotten songs and brings renewed attention to forgotten artists. With algorithms like the one used on TikTok, producers and creators can better see which adjustments to a song (tempo, mood, instrumentation, genre, etc.) will be a better fit for “going viral”, and with AI those adjustments can be readily made to remake an old song into a modern viral hit.
  • It is important to remain nuanced when defining AI tools. For example, the AI singers that replicate the voices of famous artists are not generative: there is still a human singing the original song, with a digital transfer of the famous artist’s voice laid over the original one. Therefore, “we should not conflate consent with copyright”: as long as an artist has given consent to lending their voice to an AI model, why can’t the new song be copyrighted as a new song, just as any other original song may be?
    • If consent is given (and any appropriate licensing deals worked out), this provides an opportunity for artists to monetize their skills further. For example, this could become a model of generating passive income for retired artists who don’t wish to perform any longer.
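
To make the “interactive music” idea a little more concrete, here is a toy sketch of the kind of request a listener-facing app might send to a generative backend. Everything here is hypothetical: the parameter names, the shape of the request, and the per-element intensity control are my own illustrations, not any real product’s API.

```typescript
// Toy sketch of a listener's "interactive remix" request. Parameter names and
// the overall shape are invented for illustration; no real service is implied.
interface RemixRequest {
  trackId: string;                      // the licensed source recording being remixed
  genre?: string;                       // e.g. "dubstep" or "acoustic"
  intensity?: Record<string, number>;   // per-element levels, roughly 0.0–1.0
}

// "More cowbell": turn one element up and another down on a favorite track.
function moreCowbell(trackId: string): RemixRequest {
  return {
    trackId,
    intensity: { cowbell: 1.0, bass: 0.4 },
  };
}

console.log(JSON.stringify(moreCowbell("example-track-123"), null, 2));
```

Any real system along these lines would, of course, depend on the consent and licensing questions raised in the earlier sections being settled first.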

Concluding Thoughts

AI is here to stay, and we are definitely in the Wild West/“land-grab” era of this newest technology. I’m very happy to see that government and industry officials, as well as artists and creators, seem to be coming together to discuss these human issues in good faith. Only time will tell how AI will be adopted and incorporated into our creation and sharing of music.

2 comments

  1. Thank you so much for attending that Spring listening session on AI. That was incredibly informative and I will definitely be looking out for the recording in a few weeks. It is indeed the “Wild West” right now, and what I’m thinking about is not just how current artists are dealing with this and the consequences that we are sure to experience from the decisions we and the government make today, but also how to prepare our future music educators to handle music making with AI. AI is transforming what it means to be a music maker. The definition of music making grows ever broader. Young people can already learn to manipulate music with AI, and there is so much more to music than playing physical acoustic/electric instruments in the “traditional” sense. How will music educators bring AI into the classroom? Surely, some will reject it. Love it or not, AI is here to stay.

    Thanks so much for posting this, Sarah!
