The case marks the start of a flurry of litigation against the developers of generative AI: systems that can learn concepts from large bodies of existing knowledge and then use what they learn to help people create new works.

In London, Getty Images sued Stability AI for using images for training purposes without its permission, while Microsoft, Microsoft’s GitHub, OpenAI and MidJourney have been accused of infringement by allegedly using copyrighted open source code to train their machine learning (ML) systems.

But according to lawyers and academics, it is the artists’ case that has the power to reverberate across the legal sphere, and to truly divide opinion.

For Mark Nichols, senior associate at Potter Clarkson, the US case covers more territory than the Getty lawsuit, as it explicitly looks at the IP implications of content that has been created by an AI, even if the AI has used the work of others to do so. “It asks whether the output from an AI infringes as a derivative work, whereas the Getty case just concerns the content that was an input,” he explains.

As a result, its outcome could have significant repercussions internationally, he adds. “While the law differs from country to country, we can expect the technology used in generative AI to be explored thoroughly, and this will be informative and, potentially, persuasive in future disputes regardless of where that dispute is taking place.”

McKernan and her co-plaintiffs argue that the images derived from Stable Diffusion will ultimately compete in the marketplace with the originals, and that buyers will be able to access the artists’ works to generate new works without compensating them.

Stability AI, however, strongly rejects this stance, arguing that the trio have a fundamentally flawed vision of the purpose of generative AI. A spokesperson for the company told WIPR: “The allegations in this suit represent a misunderstanding of how generative AI technology works and the law surrounding copyright. We intend to defend ourselves and the vast potential generative AI has to expand the creative power of humanity.”

For some, Stability AI has a good case for countering the claims. The crux of the dispute will ultimately come down to how the US courts apply the doctrine of fair use: the right to use a copyrighted work under certain conditions without the copyright owner’s permission.

Ryan Abbott, partner at Brown, Neri, Smith & Khan, points out that the fair use rule has been successfully used as a basis to avoid liability in cases involving text and data mining. “Fair use is a potential defence to copyright infringement based on a court’s analysis of factors such as how much of a protected work is being used and the purpose of that use. It has been, and it may also be, a defence to use images as training data for a generative AI such as Stable Diffusion,” he reflects.

This view has found favour in academic circles: Rebecca Tushnet, professor at Harvard Law School, draws a comparison with the leeway granted to humans when coming up with ‘transformative works’. A new work based on an old one is deemed transformative if it uses the source work in completely new or unexpected ways, enabling the creator to avoid accusations of infringement.

Teaching a computer to “create” by training it on existing works is a very similar concept, Tushnet concludes. She explains: “In general, the human capacity to absorb influences and come up with new works in response is usually accepted as how creativity works and a good thing rather than a bad one, outside of cases in which there is infringing similarity between two outputs.”

Mark Lemley, professor of law at Stanford Law School, has gone a step further.