AI policies and governance: A guide for in-house lawyers

Sally Mewies, partner and head of the technology and digital group at Walker Morris, explains the risks of AI and what’s needed for an effective AI policy and governance framework.

2023 was the year artificial intelligence (AI) captivated the world, and we haven’t stopped talking about it.

With recent rapid developments in AI technology, the possibilities are seemingly endless. For in-house lawyers, working out how best to manage risk can be a daunting prospect.

Let’s go right back to basics. What is AI and how is it relevant to you?

AI is a term that can be used to mean different things to different people depending on the context.

When we talk about AI in a legal or business context, we generally mean any machine that processes data or carries out tasks with some level of autonomy – ie, it makes decisions or finds answers to queries based on its own ‘machine learning’. This ‘machine learning’ is possible because of the algorithms that form part of the AI system and the data the system is trained on.

It’s important to understand that AI technology isn’t new and has been in use for years – for example, many streaming services use it to predict the kind of content a viewer might like. Interest has surged over the last 12 months or so because of a new breed of AI – Generative AI (GenAI) – which can create video, text and audio at the touch of a button, and in a way that (at least visually) looks polished. Examples include ChatGPT and Google Bard.

Many GenAI systems are built on what are known as ‘foundation models’. These are attracting a lot of attention because they can process vast amounts of unstructured data and perform a wide range of tasks. Foundation models can be used as a base technology to build an AI system for a specific purpose – and providers often allow organisations to build on top of their capabilities via application programming interface (API) access as part of their business model.
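
To make the ‘build on top via API access’ point concrete, below is a minimal sketch (in Python) of how an application might send a prompt to a hosted foundation model over the internet. The endpoint, model name and response fields are hypothetical placeholders, not any particular provider’s API.

    import requests

    # Hypothetical endpoint and key – each real provider has its own API and terms of use.
    API_URL = "https://api.example-ai-provider.com/v1/generate"
    API_KEY = "YOUR_API_KEY"

    def summarise_clause(clause_text: str) -> str:
        """Ask a hosted foundation model to summarise a contract clause."""
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "example-foundation-model",  # placeholder model name
                "prompt": "Summarise the key obligations in:\n" + clause_text,
                "max_tokens": 300,
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["text"]  # response field assumed for illustration

The point to note is that everything passed to such a call leaves the business’s own environment and is processed on the provider’s infrastructure – which is where many of the IP, confidentiality and data protection risks discussed below arise.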

GenAI in particular can have a transformational impact on businesses. Tasks that take a human hours to complete can be done in seconds. In-house lawyers need to be ahead of the game when it comes to AI because, in many sectors, their business teams will be very keen to implement these systems. There are associated risks and legal considerations, and in-house lawyers will be called on to prepare their businesses for the impact.

There’s a lot of talk about the risks of AI. What are the key legal and compliance issues that in-house legal teams need to be aware of?

It depends on the type of AI system being used, but let’s take the foundation model as an example. There are various risks for a business of using AI systems built with this kind of model. These include:

Risk one: IP and confidentiality

The data the system was trained on could include a third party’s IP or confidential information. If the correct consents and/or licences haven’t been obtained, this could lead to action being taken against the user by the owner of the IP or confidential information. The extent to which training an AI system on third-party IP is an infringement is the subject of cases in many jurisdictions. In early 2023, Getty initiated legal proceedings against Stability AI over the alleged use of Getty’s images to train Stable Diffusion, a model owned by Stability. The legal position will remain unclear until we have judgments in these cases.

It’s also unclear who owns the IP in the content generated by an AI system (the output). For example, when a business uses an AI system to write an article, the human writer (or their employer) would ordinarily own the IP in the article. But where the article has been wholly written by a machine, with minimal prompts from a human, there are questions about whether there’s sufficient creativity in the output for it to satisfy the criteria for IP protection. We’ve already seen a number of cases on whether an AI system can be named as the inventor on a patent. The current trend is that it can’t.

The law around trade secrets is likely to become highly relevant. That’s because both the data input into an AI system as a query and the ensuing output may be best protected as the user’s trade secrets – treated more like confidential information than like IP rights such as a patent or a copyright work.

Crucially, if a business inputs confidential information into a public AI system like ChatGPT, the model could, in theory, use that information to train itself. The information is arguably disclosed at that stage. There’s then a risk that the information is now public, which could be in breach of any confidentiality obligations the business has towards its staff or business partners.

Risk two: data protection

Businesses using AI systems still need to comply with their data protection law obligations. With so much attention on high-profile IP cases, the complexities of using AI systems that process personal data are at risk of being overlooked.

Obligations include being fair and transparent, building data protection into the design of systems, and honouring certain rights of individuals – for example, the right to have personal data erased and rights related to automated decision making.

Principles that must be adhered to include purpose limitation, data minimisation and accuracy. These are clearly very challenging principles to adhere to in systems that require huge swathes of data, and where human understanding of how they work can be limited. Personal data should only be processed in ways that an individual reasonably expects, and how that processing is carried out and the effect of it must be explained. With a complex AI system that uses a sophisticated algorithm (or algorithms) it’s unlikely this’ll be an easy task.

The UK’s data protection regulator, the Information Commissioner’s Office (ICO), has AI firmly on its radar. It recently published a round-up of all of its AI guidance and resources, including a detailed overview of how to apply data protection law principles to the use of information in AI systems. The guidance was updated last year after requests from UK industry to clarify requirements for fairness in AI. Fairness interacts with bias and discrimination, two key societal concerns associated with the use of AI. These are ethical considerations that all businesses will need to anticipate from the start, in particular if they themselves are involved in the development of AI systems. The ICO and The Alan Turing Institute have also jointly produced a guide on explaining decisions made with AI, which has some useful practical advice and tips.

It’s really important to take data protection issues into account at a very early stage of deployment. If the AI system is being brought in from a third party, discuss these issues with them from the start.

Risk three: accuracy

Because AI-generated output tends to be professionally presented and seemingly confident, with few typographical or spelling errors, people can be led to trust that it’s error-free. But reported stories of ‘AI hallucinations’ – where content presented articulately as fact has simply been made up, including entire legal cases – show that this trust can be misplaced.

It’s important to understand that most GenAI systems, despite being trained on vast amounts of data, are essentially prediction machines – they’re designed to forecast the most probable answer to a prompt.

Unfortunately, this means that if the model doesn’t have the data needed to generate a correct response, it may confidently provide an incorrect one. This is more likely in highly technical fields, where general models might not have enough specific training to give accurate answers.
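
As a toy illustration of the ‘prediction machine’ point, the sketch below (Python, with invented probabilities) always returns the most likely continuation it has learned for a prompt – even where none of the options it knows about is actually correct.

    # Toy next-word predictor with made-up probabilities, purely to illustrate
    # why a generative model can sound confident while being wrong.
    learned_probabilities = {
        "The capital of France is": {"Paris": 0.92, "Lyon": 0.05, "Lille": 0.03},
        # The model has poor data for this prompt, but it still has to pick something:
        "The leading authority on this point is": {
            "Smith v Jones": 0.40, "Re Example Ltd": 0.35, "Doe v Roe": 0.25,
        },
    }

    def predict(prompt: str) -> str:
        """Return the most probable continuation the model has learned."""
        options = learned_probabilities[prompt]
        return max(options, key=options.get)

    print(predict("The capital of France is"))                # Paris – correct
    print(predict("The leading authority on this point is"))  # stated just as confidently, but invented

Real systems work with probabilities over billions of parameters rather than a lookup table, but the underlying behaviour – pick the most plausible continuation, whether or not it’s true – is the same.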

Even the smartest AI can get things wrong if it’s not armed with the right information. If staff within organisations believe that the tech they’re using produces high-quality, accurate results, they might be tempted to rely on it for certain aspects of their job, which could – depending on the nature of their role – be a risk for the business.

So, what action does the board and senior management need to take now?

A suitable internal structure should be put in place to lead on AI strategy and ensure consistent messaging and awareness throughout the business. As this is a rapidly evolving area, it’s more important than ever to keep up to date with technological and legal/regulatory requirements and guidance. Policies should be regularly reviewed and updated, and staff training will be key.

Carrying out a review and risk assessment of how AI is currently used in the business is an important first step.

All businesses need to understand how they might use AI systems. Ultimately, some may not have a use for AI operationally in their day-to-day business activities, but they must all consider that their employees may still be using freely available AI systems like ChatGPT. Employees need direction on how AI systems can and can’t be used. Otherwise, businesses may find that IP and confidential information (either theirs or that of their business partners) is being put into these systems, resulting in loss of control and possible breach of licences, confidentiality agreements and other obligations. There’s also the risk of reliance on outputs that aren’t accurate and/or may be tainted by inherent bias or discrimination. This is where a staff policy comes into play.

A new ISO AI standard was recently published (ISO 42001) which has similar scope and purpose to the well-known ISO 27001 standard for best practice in information security management. Businesses may want to consider adopting this standard when implementing their policies and governance frameworks.

Consider whether a log should be kept of the use of AI systems for specific tasks, so that you have a record for governance purposes. More generally, documenting the steps taken to identify and mitigate risks, and the reasoning behind decisions made, may be a legal and/or regulatory requirement (and is sound business practice in any event).
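
As a sketch of what such a log might look like in practice, the snippet below (Python, with illustrative field names – the fields a business actually needs will depend on its own risk assessment) appends one record per use of an AI system to a simple audit file.

    import json
    from datetime import datetime, timezone

    LOG_FILE = "ai_usage_log.jsonl"  # hypothetical location – one JSON record per line

    def log_ai_use(user: str, ai_system: str, purpose: str,
                   data_categories: list[str], human_reviewed: bool) -> None:
        """Append a governance record for a single use of an AI system."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "ai_system": ai_system,
            "purpose": purpose,
            "data_categories": data_categories,  # e.g. ["no personal data"] or ["client confidential"]
            "human_reviewed": human_reviewed,    # was the output checked before anyone relied on it?
        }
        with open(LOG_FILE, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # Example entry:
    log_ai_use("j.smith", "enterprise GenAI assistant", "first draft of marketing copy",
               ["no personal data"], human_reviewed=True)

Even a simple record like this gives the business evidence of what was done, with what data, and whether a human checked the output before it was relied on.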

What else do in-house legal teams need to consider when contracting for AI? What protections should be included?

Contracting for a new AI system is much like contracting for any new IT solution. All the usual considerations around scope of your licence to use the system and appropriate indemnities and allocation of liabilities apply. But there are some additional points you’ll need to think about:

  • Data protection – if the system will process any personal data, then it’s a priority to map out how the business will comply with its data protection obligations. It’s likely that a data protection impact assessment (DPIA) will be needed. You’ll need to think about privacy notice updates if the system will result in personal data being collected for new purposes and processed in different ways.
    Consider whether the third-party AI provider is a data controller or processor for data protection law purposes. Usually, in relation to IT solutions, it’s a standard controller-to-processor relationship – but the complexity of how AI systems are trained and kept up to date means this may need a more thorough analysis.
  • IP and confidentiality – you’ll want to seek comfort that the input data on which the AI system has been trained doesn’t misuse the confidential information or trade secrets of another person, or infringe their IP rights, and that the processes used to develop the system are legally and ethically sound. However, because of the uncertainty around IP infringement in relation to training data, many AI system suppliers will be reluctant to give this protection at this stage.
  • Stopping internal data being used for model training – for obvious reasons it’s crucial to ensure that the AI system isn’t trained on any data provided by your business or its suppliers. Keeping your data in your hands and out of the AI’s learning process protects confidentiality for both your business and
    those that you work with.
  • Who owns the data – we’ve already discussed the current legal challenges surrounding ownership of output. It’s essential that, between you and the supplier, your contract is clear about who has the rights over the output and what each party can do with it.
  • Model maintenance – AI systems need to be kept up-to-date and trained on up-to-date data. The contract needs to be clear on how this’ll be achieved and who’ll take the lead on this important issue. This is likely to be an additional support service in some contracts.

What are the key points an AI staff policy should cover?

Different policies and instructions will be needed for a publicly available AI system like ChatGPT than for an enterprise version that’s been commissioned by the business.

Not all businesses allow the use of publicly available AI systems, but those that do need to be clear what can and can’t be done with them. As already discussed, confidential information and IP shouldn’t be inputted into a publicly available AI system, and staff need to understand that these systems aren’t always accurate. Care needs to be taken both in terms of the prompts used to generate an output and the reliance on that output.

A business implementing an enterprise AI system should develop specific instructions based on its capabilities and how the business intends it to be used. Typically, when a business implements a new software system, its use is confined to its functionality. AI systems can be different in that their uses can be wide and varied, some of which can be low-risk and others high-risk. Depending on the kind of system you implement you’ll need to be clear about what tasks it can be used for.

Other areas to cover include an outline of the benefits and risks, the consequences of non-compliance with the policy, details of available training and other support, use of AI systems on personal devices, and monitoring.

Want to talk through any of the answers provided by Sally?
Contact her at sally.mewies@walkermorris.co.uk.

About Walker Morris

Walker Morris LLP is a commercial law firm that provides tailored, long-term, full-service advice to clients around the world.

Our diverse collection of lawyers and professionals carry an entrepreneurial spirit and work hard to help clients forge their own paths and achieve greater success. At our firm, the owners of the business do the work. They have ‘skin in the game’ and assemble their own teams to suit each requirement. This highly flexible approach has enabled us to consistently provide the commercial and sustainable advice our clients look for.