AI is taking the world by storm, with a hand in everything from online shopping to automation testing. It has helped businesses overcome challenges such as processing terabytes of data and has taken over many mundane tasks from human workers. Recently, AI has begun to enter the creative arts. Tools like ChatGPT can write articles, stories, and poems at an often convincingly human level. AI models that create new content are called ‘generative AI’ or ‘creative AI’. Art-focused generative AI models can produce impressive pictures, songs, and even videos.
But none of this comes without a price. AI opens up a lot of new legal challenges – especially when used to generate creative material. Legal issues surrounding AI and creativity can get very complicated very quickly. This blog looks at 5 of the biggest legal challenges associated with creative AI.
1. Ownership of intellectual property (IP)
To whom does AI-generated IP belong? The ‘Naruto Case’ shines an interesting light on potential IP issues with AI.
In a nutshell, photographer David Slater set up a camera in Sulawesi. While the camera was unattended, a monkey named Naruto investigated the camera and accidentally took several selfies of himself. Slater was thrilled with the cheeky simian selfies and published them in a book. However, PETA filed an IP complaint in the US against his publisher, claiming that Naruto should be considered the owner of the selfies and should be the sole beneficiary of their sale.
So far, so quirky, but here’s where it gets interesting for AI IP. In response to PETA’s case, the US Copyright Office stated that it will ‘refuse to register a claim if it determines that a human being did not create the work’.
As such, AI-generated IP may not be eligible for copyright protection in the US. At best, it could be extremely difficult to establish ownership of any materials created. Who has ownership? Is the creator of the algorithm the owner of any materials generated? The person who typed the command? Or (and this brings us neatly to our next challenge), the artists the AI drew upon to create the work? What about in cases where there has been a collaboration between different parties when training an AI?
Intellectual property law is already complex. Adding AI into the mix could make things very tricky because it challenges conventional ideas about copyright law. Only a handful of countries have so far begun to tackle these issues and provide some protection for AI-created work.
The UK takes the position that the author of a computer-generated work is ‘the person by whom the arrangements necessary for the creation of the work are undertaken’. This separates authorship from creativity. Artistic works created by a computer can, therefore, benefit from copyright protection for the benefit of the human individual who made the arrangements necessary to create the work. Who exactly is considered to have made the ‘necessary arrangements’ may be contentious in situations where multiple parties were involved (eg the person that programmed and trained an AI model and the person who entered the specific prompt that led to the relevant output).
In the UK, the Supreme Court is currently also considering whether an AI system can be an ‘inventor’ for the purposes of a patent application.
Depending on the specific circumstances of a situation, identifying distinct parties responsible for a piece of AI-created work may prove difficult in practice.
In addition, businesses that create AI models need to make sure that they legally own, or have permission to use, the data on which they train their models. They might, for example, need to obtain intellectual property licences covering either just the use of the data for training or also its use in the creation of new material (ie in the AI’s output). This could protect the business if the AI creates content that would otherwise infringe copyright existing in the data used to train it. However, this is a rapidly developing area of law. For example, the UK government is considering allowing businesses to train their AIs without licences for the copyrighted materials used.
AI users should also be wary of secondary copyright infringement. This can occur when a user makes use of AI output whose creation was itself a copyright infringement. In this situation, the user could be liable for copyright infringement themselves. Users should be aware that AI companies do not generally offer protection against such liability.
Overall, regulation of AI and IP is still developing and, across the world, there are many legal battles yet to come.
For more information on copyright law and AI, read What does AI mean for copyright law?
2. Plagiarism
So, IP generated by AI may not, in many countries, have an immediately clear legal owner. But there’s another problem: AI works by drawing from pre-existing materials. What if the ‘new’ IP it creates is too similar to something created by a human artist?
AI is relatively unproblematic when doing things like running security audits, analysing data, and answering customer service questions as part of the latest call centre technology trends. But when it comes to generating creative materials, we run into problems, especially as AI is incapable of creating entirely original material.
Plagiarism is generally considered to be the presentation of another person’s ideas as one’s own without fair acknowledgement, either as part of or the whole of a new work. There’s an old saying: ‘When you copy from one source, it’s plagiarism. When you copy from several, it’s research’. In theory, AI does what any human writer or artist does when creating new material – it ‘researches’ pre-existing works and puts what it has learned together in ways that fulfil whatever command it has been given.
However, AI cannot have an ‘original thought’. All it can do is run pre-existing material through its algorithms and recombine it in ways that answer the command it’s been given. If the original artists feel that AI-generated material presents their work as a new author’s own without fair attribution, this material may veer into plagiarism. Although plagiarism isn’t always illegal, it is unethical, may breach the rules of organisations (eg universities), and may also constitute unlawful IP infringement. So things can rapidly become difficult.
3. Liability
Who is liable if AI-generated material goes wrong? This blog has already established that it can be hard to pinpoint ‘ownership’ of AI-generated materials. This means that it’s also hard to figure out who is liable for any legal issues that arise.
In the UK, AIs are not considered legal persons (at least not yet) and so cannot be litigated against.
Let’s say that an AI writes something defamatory. Who would the victim bring a case against? The publisher? The person who typed the command? The person who commissioned the piece? The sources the AI used to ‘research’ the piece? Identifying who is legally responsible is not straightforward.
This can be particularly challenging if an AI generates hate speech (something that has indeed happened). Who should be held to account if an AI starts inciting violence against minorities?
Businesses should address these liability concerns in their privacy policies to outline the responsibilities and potential risks associated with AI-generated content.
These ethical challenges stemming from a lack of clear responsibility for the effects of AI are among the most complicated aspects of creative AI, and they need a lot of consideration.
4. Ethical challenges
Ethical challenges may not always directly intersect with legal challenges, but it’s always worth considering the ethical implications of any creative AI.
This blog has already discussed how unethical materials generated by AI can bring up questions about liability. It has also spoken about the plagiarism problem, in which AI-generated material could be subject to accusations of IP theft. But the ethical issues run a lot deeper than this.
Creative AI is often accused of putting human artists out of work. At the time of writing, the Writers Guild of America (WGA) and the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) are on strike, with both citing concerns about AI-generated material in their lists of demands.
If anti-AI protests are successful, it’s possible that anti-AI clauses could be written into future contracts in this industry. Anti-AI legislation could even be passed into law.
All in all, the ethical challenges of creative AI have the potential to pose significant legal challenges, both now and in the future.
5. International discrepancies
As the world wrestles with the challenges of creative AI, it’s likely that the development of AI-related legislation will move at different paces in different countries. This could make the cohesive international use of creative AI difficult.
Let’s say, for example, that the European Union passes legislation mandating that AI must not replace humans in certain jobs, while the US does not. If a multinational company based in the US uses AI for customer services in its business call centre, it may be unable to offer that service to European customers.
It is notable that at the moment the UK, in contrast to the EU, is not planning on introducing any broadly applicable AI-specific regulations. Instead, the UK government plans to rely on existing regulators and structures issuing guidance for their sectors. This approach poses its own challenges. For example, it assumes that such regulators have the necessary expertise and resources.
This gets even more complicated when you consider that artificial intelligence can cross international borders very easily. AI often operates across a variety of geopolitical borders. For example, the .ai domain (the country-code domain for Anguilla) is popular with AI businesses purely for the fun of the name. So, an AI algorithm may be developed in the US, be used by people all over the world, and operate from Anguilla.
As AI legislation and regulations develop, there is potential for international discrepancies to cause serious legal complications.
Also, as the use of creative AI continues to evolve, there are increasing concerns about privacy and data protection. AI algorithms often rely on vast amounts of data, including personal information, to generate creative outputs. This raises questions about how this data is collected, stored, and used, and who has access to it.
Regulating data privacy becomes crucial in the context of AI-generated content, to ensure that personal data is handled in compliance with privacy laws and ethical standards.
AI legal issues now and in the future
AI is already a contentious issue, as the WGA and SAG-AFTRA strikes show. It’s likely to become even more complicated in the near future, as AI advances and encroaches on traditionally ‘human’ territory even further. From IP issues to concerns about liability, there are a lot of legal challenges presented by AI.
Do not hesitate to Ask a lawyer if you have any questions about legally compliant use of AI.
If you run a business, consider adopting an AI policy to outline when and how your staff members can use AI in the workplace.