What is generative AI?
AI is, in simple terms, the simulation of human intelligence in machines. Creative AI is the simulation of human creativity in machines. Tools like ChatGPT can write articles, stories and poems at an often convincingly human level. AI models that create new content like this are called ‘generative AI’ or ‘creative AI’. Art-focused generative AI models can compose pictures, stories, songs and even videos.
What challenges and issues arise from creative AI?
AI opens up a lot of new legal challenges, especially when used to generate creative material. Legal issues surrounding AI and creativity can get very complicated very quickly. Some of the biggest legal challenges associated with generative AI include:
1. Ownership of intellectual property (IP)
Intellectual property law is already complex. Adding AI into the mix could make things very tricky because it challenges conventional ideas about copyright law. Only a handful of countries have so far begun to tackle these issues and provide some protection for AI-created work.
The UK takes the position that the author of a computer-generated work is ‘the person by whom the arrangements necessary for the creation of the work are undertaken’. This separates authorship from creativity. Artistic works created by a computer can, therefore, benefit from copyright protection for the benefit of the human individual who made the arrangements necessary to create the work. Who exactly is considered to have made the ‘necessary arrangements’ may be contentious in situations where multiple parties were involved (eg the person who programmed and trained an AI model and the person who entered the specific prompt that led to the relevant output).
In the UK, the Supreme Court has also considered whether an AI system can be an ‘inventor’ for the purposes of a patent application, ruling in Thaler v Comptroller-General (2023) that an inventor must be a natural person.
Depending on the specific circumstances of a situation, identifying distinct parties responsible for a piece of AI-created work may prove difficult in practice.
In addition, businesses that create AI models need to make sure that they legally own, or have permission to use, the data on which they train their models. They might, for example, need to obtain intellectual property licences covering the use of the data for training alone, or additionally its use in the creation of new material (ie in the AI’s output). This could protect the business if the AI creates content that would otherwise infringe copyright subsisting in the data used to train it. However, this is a rapidly developing area of law. For example, the UK government is considering allowing businesses to train their AIs without licences for the copyrighted materials used.
AI users should also be wary of secondary copyright infringement. Secondary infringement can occur when a user deals with AI output whose creation was itself a copyright infringement. In this situation, the user could be liable for copyright infringement. Users should be aware that AI companies do not generally offer protection against liability for such infringements.
Overall, regulation of AI and IP is still developing and, across the world, there are many legal battles yet to come.
For more information on copyright law and AI, read What does AI mean for copyright law?.
2. Plagiarism
In many countries, IP generated by AI may not have an immediately clear legal owner. But there’s another problem: AI works by drawing from pre-existing materials. What if the ‘new’ IP it creates is too similar to something created by a human artist?
AI is relatively unproblematic when doing things like running security audits, analysing data, and answering customer service questions as part of the latest call centre technology trends. But when it comes to generating creative materials, problems arise, especially as AI is incapable of creating entirely original material.
Plagiarism is generally considered to be the presentation of another person’s ideas as one’s own without fair acknowledgement, either as part of or the whole of a new work. There’s an old saying: ‘When you copy from one source, it’s plagiarism. When you copy from several, it’s research’. In theory, AI does what any human writer or artist does when creating new material: it ‘researches’ pre-existing works and puts what it has learned together in ways that fulfil whatever commands it has been given. However, AI cannot have an ‘original thought’. All it can do is run pre-existing material through its algorithm and recreate it in ways that answer the command it’s been given. If AI-generated material presents the original artists’ work as a new author’s own without fair attribution, it may veer into plagiarism.
Although plagiarism isn’t always illegal, it is immoral, may breach the rules of organisations (eg universities), and may also constitute IP infringement. So things can rapidly become difficult.
3. Assigning liability
Who is liable if AI-generated material goes wrong? Just as it can be hard to pinpoint ownership of AI-generated materials, it can be hard to determine who is liable for any legal issues they cause.
In the UK, AIs are currently not considered legal persons and so cannot be litigated against.
Let’s say that an AI writes something defamatory or discriminatory. Who would the victim bring a case against? The publisher? The person who typed the command? The person who commissioned the piece? The sources the AI used to ‘research’ the piece? Identifying who is legally responsible is not straightforward.
Businesses should address these liability concerns in their privacy policies to outline the responsibilities and potential risks associated with AI-generated content. Businesses providing AI-based software products should adopt clear AI terms of service to clarify how liability is allocated between them and their users.
4. Ethical challenges
Ethical challenges may not always directly intersect with legal challenges, but it’s worth considering the ethical implications of any creative use of AI.
Creative AI is often accused of putting human artists out of work. For example, between July and November 2023, the Writers Guild of America (WGA) and the Screen Actors Guild‐American Federation of Television and Radio Artists (SAG-AFTRA) were on strike, citing concerns about AI-generated material in their lists of demands. When the strikes ended, one of the points included in the provisional agreements placed limits on the use of artificial intelligence. If these types of anti-AI protests continue to be successful, anti-AI clauses may be written into other contracts in future and anti-AI legislation could even be passed into law.
5. International discrepancies
As the world wrestles with the challenges of creative AI, it’s likely that the development of AI-related legislation will move at different paces in different countries. This could make the cohesive international use of creative AI difficult.
Let’s say, for example, that the European Union (EU) passes legislation mandating that AI does not replace humans in certain jobs while the US does not. A multinational company based in the US that uses AI for customer services in its business call centre may then be unable to offer that service to EU customers, if customer service roles are among those protected by the EU legislation.
At the moment the UK, in contrast to the EU, is not definitively planning on introducing any broadly applicable AI-specific regulations. Instead, the UK government plans to rely on existing regulators and structures issuing guidance for their sectors. This approach poses its own challenges (eg it assumes that such regulators have the necessary expertise and resources).
This gets even more complicated when you consider that AI crosses international borders very easily, often operating across a variety of geopolitical boundaries at once. For example, the .ai domain (the country code for Anguilla) is popular with AI businesses purely for the fun of the name. So an AI algorithm may be developed in the US, be used by people all over the world, and operate from Anguilla. As AI legislation and regulations develop, there is potential for such international discrepancies to cause serious legal complications.
6. Data protection
As the use of creative AI continues to evolve, there are increasing concerns about privacy and data protection. AI algorithms often rely on vast amounts of data, including personal information, to generate creative outputs. This raises questions about how this data is collected, stored, and used, and who has access to it.
Regulating data privacy becomes crucial in the context of AI-generated content, to ensure that personal data is handled in compliance with privacy laws and ethical standards.
AI legal issues now and in the future
AI is already a contentious issue, as the WGA and SAG-AFTRA strikes have shown. It’s likely to become even more complicated in the near future, as AI advances and encroaches on traditionally ‘human’ territory even further. From IP issues to concerns about liability, there are a lot of legal challenges presented by AI.
Do not hesitate to Ask a lawyer if you have any questions about the legally compliant use of AI.
If you run a business, consider adopting an AI policy to outline when and how your staff members can use AI in the workplace.