
What is AI?

‘Artificial intelligence’ is a broad term that covers various types of computer programs designed to perform in ways that mimic human cognition and intelligence. These range vastly in complexity. The AIs currently in vogue are generally ‘machine learning’ models, which use algorithms to learn from many examples of a specific type of content (eg written information or images). This learning process is referred to as ‘training’ an AI. Trained AIs can make predictions and extrapolations based on their learning to, for example, generate new content in response to a user’s prompt. Of particular interest in the legal services industry are large language models (LLMs), which train on and generate text.
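
To make ‘training’ and ‘generation’ a little more concrete, here is a deliberately tiny Python sketch (using made-up example text, and nothing like a real LLM’s internals) that learns which words tend to follow which, then reuses those patterns to produce new text:

```python
from collections import Counter, defaultdict

# A drastically simplified illustration: real LLMs use neural networks trained
# on billions of examples, but the core idea of learning statistical patterns
# from example text and reusing them to generate new text is the same.

training_text = (
    "a contract must be signed by both parties . "
    "a contract must be in writing ."
)

# 'Training': count which word tends to follow which word (bigram counts).
counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

# 'Generation': starting from a prompt word, repeatedly append the most
# likely next word according to the learned counts.
def generate(prompt: str, length: int = 6) -> str:
    output = [prompt]
    for _ in range(length):
        followers = counts.get(output[-1])
        if not followers:
            break
        output.append(followers.most_common(1)[0][0])
    return " ".join(output)

print(generate("contract"))  # -> "contract must be signed by both parties"
```

Real LLMs replace these simple counts with neural networks and vast amounts of data, but the output is still driven by learned statistical patterns rather than verified facts.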

It can simply be wrong

Providing valuable legal services, whether advice, documents, advocacy, or something else, always requires accuracy. Provide incorrect legal information or ineffective contracts and a legal services provider’s clients are going to be very unhappy and may suffer serious losses (which may, in turn, be passed on to the provider if it has given the client some sort of guarantee). So are LLMs up to the task?

In short, no (or at least not yet). LLMs can sometimes simply be wrong. Their output is informed purely by the data they’ve been trained on, and that training data may be out of date, limited (ie if it simply doesn’t contain necessary information), or just wrong. AIs that train on data available online are also vulnerable to ‘data poisoning’, ie when a malicious party intentionally exposes an AI to incorrect or confusing data. In any of these situations, an LLM may have learnt incorrect patterns from its training data, which it will in turn present to users as truth.

Even if an AI’s training data is all correct, it may still produce incorrect output. AI creates output based not on a deep, nuanced understanding of a topic, but on statistical patterns and probabilities: whatever answer the patterns in its training data suggest is most likely is the answer the AI will give. Sometimes such answers are wrong because, in trying to produce useful output, the AI essentially invents something that seems to fit. This is sometimes referred to as the AI ‘hallucinating’.
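
As a loose illustration of this failure mode, the Python sketch below (with entirely invented questions and answers) shows a toy ‘model’ that always returns its closest-matching known answer, so it responds confidently even to questions it has never seen:

```python
from difflib import SequenceMatcher

# Illustrative sketch only, with invented questions and answers. This toy
# 'model' returns the answer to whichever known question looks most similar,
# however poor the match. Real LLMs are far more sophisticated, but the
# failure mode (plausible output produced beyond the data) is similar in spirit.

training_answers = {
    "what is the notice period under this lease": "One month's written notice.",
    "who must sign a witness statement": "The witness must sign it.",
}

def answer(question: str) -> str:
    best_match = max(
        training_answers,
        key=lambda known: SequenceMatcher(None, question, known).ratio(),
    )
    return training_answers[best_match]

# A question the 'model' was never trained on still gets a confident answer,
# which happens to be wrong.
print(answer("what is the limitation period for a breach of contract claim"))
```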

These inaccuracies leave a gap in the market for human legal professionals to fill. AI does not (yet?) appear able to offer the level of practical experience and commercial expertise, or the ability to check reliable sources, that a knowledgeable human lawyer can.

AI can perpetuate societal biases

As well as being simply incorrect, inaccurate AI output can be harmful. If an AI is trained on data that contains systematic biases (eg assumptions about individuals belonging to certain population groups), it will absorb those biases and reproduce them in its output without any ethical evaluation, perpetuating the biases by presenting users with bias-informed results. This could affect legal services provision: for example, an AI might be used to create a contract that allocates risk between different groups (eg in insurance situations) using risk data that was originally collected via a biased procedure (eg where the data collector made unconscious judgements about survey participants belonging to certain population groups).
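
The following Python sketch (using entirely invented data) shows how a model that simply learns historical patterns can end up reproducing a bias baked into those patterns:

```python
from collections import Counter

# Illustrative sketch with invented data. If historical training examples embed
# a bias (here, one group was rated 'high risk' more often for reasons
# unrelated to actual risk), a model that simply learns the historical pattern
# will reproduce that bias in its output, with no ethical evaluation.

historical_ratings = [
    ("group_a", "low risk"), ("group_a", "low risk"), ("group_a", "high risk"),
    ("group_b", "high risk"), ("group_b", "high risk"), ("group_b", "low risk"),
]

# 'Training': learn the most common historical rating for each group.
learned: dict[str, Counter] = {}
for group, rating in historical_ratings:
    learned.setdefault(group, Counter())[rating] += 1

def predict(group: str) -> str:
    return learned[group].most_common(1)[0][0]

print(predict("group_a"))  # -> "low risk"
print(predict("group_b"))  # -> "high risk", purely because the past data said so
```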

AI can over-generalise

Epidemiology often relies on a concept of ‘generalisability’. This refers to the extent to which the outcomes of statistical analyses of health data within a given sample (ie a small group within a population) can accurately be said to apply to the general population from which the sample was taken. If a finding is over-generalised (ie you assume it applies to the general population when the sample is actually qualitatively different from that population), applying the data to the general population will produce inaccurate results.

AI faces an analogous issue when it relies on proxy measurements in its output. A proxy measurement uses data about one matter to provide information about another, on the assumption that the first matter maps reliably onto the second. For example, an AI might be trained on a study finding that businesses with more experience (however measured) as subcontractors in their industries are less likely to cause main contractors losses under Subcontracting agreements. An LLM drafting a new subcontracting agreement for a main contractor might then include a limitation of liability that it considers allocates the main contractor a commercially appropriate amount of risk, based on how long the new subcontractor has been operating in its industry. If the subcontractor has been in the industry for a long time but does not actually have much experience (eg because it had a very small market share for years and is only now expanding), the main contractor could end up agreeing to take on an inappropriately large amount of risk, all because the AI conflated length of time with amount of experience. By relying on proxy data in such ways, AIs can provide legal services based on inaccurate information that may lead to costly missteps for clients.
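
A rough numeric sketch of this proxy problem, with figures and formulas invented purely for illustration, might look like this:

```python
# A toy numeric sketch of the proxy problem described above; all figures and
# relationships are invented. Suppose a model has learned that claim risk
# falls as 'years in industry' rises, because years usually tracks experience.

def risk_from_years_in_industry(years: float) -> float:
    # Hypothetical learned relationship: more years -> lower estimated risk.
    return max(0.05, 0.40 - 0.02 * years)

def risk_from_projects_delivered(projects: int) -> float:
    # A measure closer to actual experience, for comparison.
    return max(0.05, 0.40 - 0.03 * projects)

# A long-established subcontractor that has actually delivered very few projects:
years_in_industry = 15
projects_delivered = 3

print(f"Risk suggested by the proxy (years): {risk_from_years_in_industry(years_in_industry):.0%}")
print(f"Risk suggested by actual experience: {risk_from_projects_delivered(projects_delivered):.0%}")
# The proxy suggests around 10%; the experience-based measure suggests around 31%.
```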

AI use could break data protection laws 

LLMs are trained using large volumes of data. This data may, whether accidentally or because an AI developer is unaware of data protection law, include people’s personal data (ie information about them from which they can be identified). The UK’s data protection laws restrict how such data can be processed (eg used and stored). If such data is inadvertently included in an AI’s output, for example to illustrate a point in generated legal advice, its use may infringe data protection law.

Businesses may also input personal data that they control into an AI. For example, they may input their customers’ details when asking an LLM to generate contracts for use with those customers.

Businesses in either of these situations remain responsible for their own data protection compliance and will need to take into account the risks specifically associated with processing data using AI. For example, the UK’s Information Commissioner’s Office (ICO) has recently released guidance on factors that businesses should consider when processing data using AI, and specifically when carrying out a Data protection impact assessment (DPIA).

For more information on data protection compliance, read Complying with GDPR.

AI still offers a lot of promise

Despite the multiple limitations outlined above, AI models offer a lot of promise for enhancing legal services provision. It’s indisputable that their speed and computing power offer time and cost efficiencies that, if used in the right way, may give clients of legal services more cost-effective legal solutions.

For example, AI models can be used as a tool to expedite the initial stages of tasks like contract and document drafting. This can cut down the time a human lawyer takes to complete a task, as they may, for example, only need to check and improve upon clauses that have been created for them. Less complex and nuanced legal tasks could be automated even further using AI, for example by answering people’s legal questions and signposting them to more reliable, human-updated sources. Innovations like this could greatly increase the accessibility of legal services.

Better yet, AI enterprises and users are constantly working to mitigate some of the issues highlighted above, meaning AI may become increasingly reliable and better able to assist with legal services provision. Measures include: 

  • training models on high-quality, well-maintained datasets designed specifically for legal services purposes

  • putting in place guardrails, ie programmed interventions that minimise the chances of an LLM hallucinating or producing inappropriate output (a simple sketch of this idea follows this list)

  • using ‘red teams’ within an AI business, ie groups who purposefully try to make a model behave poorly to identify its vulnerabilities, so that these may be overcome 
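
To give a flavour of the guardrails idea mentioned above, here is a minimal Python sketch (with hypothetical rules and wording, not any provider’s real system) of a check applied to a model’s draft output before it reaches the user:

```python
# A minimal sketch of one kind of 'guardrail': a programmed check applied to a
# model's draft output before it reaches the user. The banned phrases and the
# disclaimer here are hypothetical placeholders.

DISCLAIMER = "This is general information, not legal advice."

def apply_guardrails(draft: str) -> str:
    banned_phrases = ["you will definitely win", "guaranteed outcome"]
    # Block drafts that assert certainty about litigation outcomes.
    if any(phrase in draft.lower() for phrase in banned_phrases):
        return "I can't predict the outcome of a claim. Please speak to a lawyer."
    # Otherwise pass the draft through with a safety disclaimer attached.
    return f"{draft}\n\n{DISCLAIMER}"

print(apply_guardrails("You will definitely win this claim."))
print(apply_guardrails("A subcontracting agreement usually limits the parties' liability."))
```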

What’s next?

AI technology is developing so quickly that various industry representatives have called for a slowdown. Whatever is next, we can expect it to be more powerful and, although perhaps more dangerous, to offer even greater potential for creating efficiencies and accessibility in the legal market. As long as LLMs’ limitations are taken into account, tackled, and mitigated, the future of AI in the legal services industry looks positive.

That’s not to say AI doesn’t pose issues in other areas of the legal world. For an example, read about AI and copyright law.

If you run a business and want to work with AI, whether to help with your own legal needs or to help you provide services yourself, consider Asking a lawyer for legal assistance so that your advice or documents are checked for accuracy and you have contracts in place to protect your business’s endeavours. You should also consider adopting an AI Policy to set out how staff members can use AI in the workplace.


India Hyams
Content Acquisition Manager at Rocket Lawyer UK

India manages legal content for Rocket Lawyer UK. She has an MA in Law and, as an undergraduate, studied Psychology and English Literature at the University of Auckland and King’s College London.

She is interested in commercial law, particularly that related to intellectual property, tech, and the life sciences sector.
