Let’s talk about AI and copyright.
Blogs about AI are split into two camps:
AI is an excellent tool for productivity. AI can be used to brainstorm. AI can generate images and texts as a precursor to fully fleshing out a creative campaign. AI can be used to create those creative campaigns.
AI is a legal grey area. AI violates copyright. AI should be used sparingly. AI takes jobs away from artists.
Neither one is fully right.
The truth is somewhere in between.
How does AI generate imagery?
AI image generation works like this: you write a prompt.
The model interprets the meaning of every word based on patterns it learned from its training data.
It gives you an image.
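For the technically minded, here's a minimal sketch of that prompt-to-image flow in Python, using the open-source Hugging Face diffusers library with a Stable Diffusion checkpoint as an illustrative example. The model name and prompt are our own placeholders, not a recommendation of any particular tool.

```python
# A minimal sketch of the prompt-to-image flow, assuming the Hugging Face
# `diffusers` library and an illustrative Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image model (downloads the weights on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint name
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on a GPU; drop this (and float16) to run on CPU

# The prompt is the only creative input: the model maps each word onto
# concepts learned from its training data and generates an image to match.
prompt = "a watercolour poster of a seaside coffee shop at sunrise"
image = pipe(prompt).images[0]
image.save("generated.png")
```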
Here are a few samples that our Head of Brand created as part of a workshop.
The problem comes after.
What is copyright law?
Copyright law is the body of legislation that governs how creative work is protected. There’s no single international copyright law, so artists have to deal with copyright separately for domestic and international use.
The first copyright law was enacted in 1710. Since then, the scope of content that the various laws cover has ballooned, and this now includes what is happening with AI.
What does AI have to do with copyright?
Ostensibly, AI violates copyright law.
AI image generators are trained on datasets scraped from the internet, most of which contain protected work. The same principle that stops you from saving an artist’s original work and dropping it into a campaign should apply to AI generators, but the ones that currently exist don’t really make a distinction between fair-use artwork and copyrighted artwork.
This theoretically means that every AI-generated image trained on such a dataset violates copyright law, opening companies up to legal issues down the line.
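To make the training-data problem concrete, here is a purely hypothetical illustration (our own, not taken from any real pipeline) of what a single example in a web-scraped image-text dataset typically looks like:

```python
# Hypothetical illustration: one row of a web-scraped image-text dataset.
# The URL and caption below are invented placeholders.
scraped_example = {
    "url": "https://example.com/artwork.jpg",    # image fetched from the open web
    "caption": "digital painting of a forest",   # alt-text reused as the label
}

# Note what is absent: any licence or rights-holder field. A model trained
# on millions of such rows has no way to tell protected work apart from
# free-to-use work, which is the crux of the copyright problem.
```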
Aren’t there laws governing AI use?
The first EU-wide AI law was proposed in 2021. Amendments to the draft were made in 2023 to also cover the use of generative AI. Discussions are currently underway on what this regulation will look like, but the brief explanation is this: works created using generative AI will need to be labelled as such.
Additionally, restrictions on the use of AI are determined through a proposed classification system that grades AI risk on whether it affects human health, safety, or fundamental rights. Theoretically, most generative AI content will sit in the lowest tier, minimal risk; however, a separate category exists to assess generative AI risk on a case-by-case basis.
Broadly, generative AI use in Europe will have to abide by three main rules:
- Content will have to be disclosed as generated by AI.
- The model itself has to be designed to produce only legal content.
- Generative AI companies will have to publish summaries of all the copyrighted data used to train their generative AI model.
There has been significant pushback on this final point, leading to certain AI models, such as Google Bard, not launching in Europe at all.
United States copyright law is a little simpler: content generated with AI is not considered art created through human authorship, and therefore does not qualify for copyright protection. Data used to train AI models is protected by copyright, however, and this has opened up more, not fewer, legal issues.
Copyright cases involving generative AI
Here is a list of some of the current copyright cases:
- Sarah Silverman, Christopher Golden, and Richard Kadrey have sued OpenAI and Meta for using their copyrighted books as training material for ChatGPT.
- Mona Awad and Paul Tremblay filed a similar suit in California federal court.
- John Grisham, George R.R. Martin, and other members of the Authors Guild have also sued OpenAI.
- Universal Music Group is suing Anthropic AI for copyright violation in training its chatbot, Claude.
- Sarah Anderson, Kelly McKernan, and Karla Ortiz have sued Stability AI, Midjourney, and DeviantArt.
- Getty Images has also sued Stability AI over the use of its materials to train the Stable Diffusion model.
The outcome of these cases won’t be determined for a very long time; however, it’s clear that the artworks themselves are not at the heart of the lawsuits. Instead, copyright holders are suing the companies that created the AI models, and this is proving to be a difficult battle to win.
- In the case of Sarah Anderson, Kelly McKernan, and Karla Ortiz, two of the artists’ claims were dismissed, with only Sarah Anderson’s case proceeding, albeit with a limited focus.
- In the cases brought by Sarah Silverman, Christopher Golden, and Richard Kadrey, as well as Mona Awad, Paul Tremblay, John Grisham, and George R.R. Martin, a motion to dismiss was filed on the basis that the output is not at all similar to the copyrighted works. The authors filed a counter-claim against this motion.
AI companies are also fighting to get the Copyright Office to class their use of datasets as ‘fair use’, which could have further widespread consequences for copyright law.
What does this mean for businesses?
At this stage, you’re not breaking the law if you’re using AI-generated artworks – provided you are following the legal rules for AI use in your country, which at this stage do not amount to much.
It is very unlikely that legal proceedings against companies simply using AI tools will make it to court; instead, it’s the AI companies themselves that will have to fight this battle.
That said, there is another issue that small and medium businesses have to keep in mind: the public opinion of AI-generated imagery. Consumers range from apathetic to irritated about its proliferation, and companies using AI-generated imagery as part of their creative output are more likely to face not a legal reckoning but a public one, with their audiences deriding them for using AI in the first place.
What else should I keep in mind?
At the time of writing, there is nothing tangible to suggest that a company using AI-generated imagery is itself violating copyright. Class action lawsuits are being directed at the parent companies of those AI models.
That said, this is an issue that’s changing rapidly. We likely won’t see the full scope or scale of laws governing AI until the end of its initial development cycle, somewhere around 2024–2025. While governments and institutions around the world are working quickly to try to create laws that will limit harm and risk, the world at large was not prepared for this onslaught of content, and it’s taking longer than normal to parse what this means.
Use AI speculatively. Enjoy the process.
We’re keeping an eye out for what’s coming next.