Moving towards regulation of AI

Charlotte Halford and Omar Kamal at DAC Beachcroft discuss the global and UK approaches to regulating AI.


Since the sudden growth of generative AI in early 2023, barely a day has passed without AI hitting the headlines. In some instances, it has been proclaimed an existential threat to humanity and decried for its role in spreading disinformation in recent elections. In others, it has been billed as a revolution, with the British Computer Society publishing a letter calling for AI to be seen as a ‘force for good’.

So what is it? Why should you care? And how are regulators dealing with it?

What is AI?

Essentially, Artificial Intelligence (AI) refers to systems or machines which appear to display elements of human-like intelligence. Not all AI is created equal; different types of AI vary in complexity – from basic ‘reactive’ machines (which respond to inputs in a predictable way) to ‘generative AI’ (which can teach itself and create new, original content).

When considering the risks and opportunities presented by a specific use of AI, it is crucial to understand the type of AI involved, both to ensure that all relevant risks are properly addressed and, conversely, that the approach taken is not overly risk averse in the context.

Why does it matter?

The use of AI can offer considerable benefits and opportunities, from simple business efficiencies to competitive commercial advantages. However, it can also carry significant risks related to legal and regulatory enforcement, data protection, confidentiality and intellectual property.

Importantly, these risks should be considered at each stage of the AI lifecycle: design; data collection and selection; development and evaluation; deployment and monitoring; and decommissioning.

How is AI being regulated?

So what are regulators doing to help manage these risks?

Global approach

Of course, a range of existing legal and regulatory obligations (such as data protection, copyright, equality and product liability laws) already governs the use of AI. More broadly, issues such as data ethics and reputational protection are increasingly important.

However, alongside this, governments and regulators – both in the UK and internationally – are moving at pace to develop approaches to AI regulation, many even committing to work together under the world-first Bletchley Declaration (the Declaration), agreed by countries including the UK, US, Canada, France, Germany, China, Japan and India.

Common themes have emerged in AI regulation around the world, with the Declaration affirming the importance of a global approach. Key themes from the Declaration and beyond include safety and security (particularly the protection of individual rights), transparency, and risk-based approaches and assessments.

However, there is still no uniform global approach to exactly how AI is regulated. Some countries (such as China) already have laws in force, some have decided not to introduce binding regulation as yet (for example, Japan, which has so far focused on non-binding guidance), and others (such as the EU, US and UK) plan to regulate in some form.

The EU, in particular, is taking a robust, risk-based approach. The draft EU AI Act, on which political agreement was reached on 9 December 2023, focuses on the protection of rights and on transparency: certain AI use cases are prohibited outright, while others are classed as high risk and subject to additional measures, such as conformity assessments.

UK approach

In contrast to the EU approach, the UK Government has proposed a principles-based, sector-led approach in its White Paper, ‘A pro-innovation approach to AI regulation’, published on 29 March 2023. Rather than implementing a new AI regulatory framework, the UK plans to take a context-specific approach under which existing regulators will govern the use of AI by those within their remit. This approach has support both from industry and across government departments.

However, in August 2023, the Interim Reports of the Department for Science, Innovation and Technology (DSIT) and the Department for Digital, Culture, Media & Sport (DCMS) raised questions about this approach, focusing on the ability of the current regulators to perform their proposed roles in regulating AI within their sectors. The existing regulators have cited a lack of training, funding, staff and coordination as potential challenges.

DSIT considered a clear AI governance framework imperative and, in its view, a ‘tightly focused’ AI Bill should have been put forward in the following King’s Speech. As readers will likely be aware, when the Speech was delivered on 7 November 2023, this recommendation was not followed. The new session of Parliament would have been the last opportunity before the General Election for the UK to legislate on the governance of AI; following the election, it is unlikely that any new legislation could be enacted before 2025.

Nonetheless, it should be noted that a Private Members’ Bill, the Artificial Intelligence (Regulation) Bill (the Bill), was introduced into the House of Lords on 22 November 2023. The Bill is an attempt to establish, by statutory instrument, a framework for producing more detailed AI regulation. Private Members’ Bills rarely become law, but, due to the media attention they often receive, they can encourage the government to take action.

In addition, on 27 November 2023 the UK published the first globally agreed guidelines on the secure development of AI technology. The initiative was led by GCHQ’s National Cyber Security Centre and developed with the US Cybersecurity and Infrastructure Security Agency, industry experts and other international agencies. As the first guidelines of their kind to be agreed globally, they are further evidence of continuing international collaboration on AI. The aforementioned Declaration was also a diplomatic success for the UK and is intended to form the foundation of international cooperation and collaboration on AI safety.

What’s next?

There are likely to be a considerable number of developments in the regulation of AI in 2024, and we will be watching this space with interest. In the absence of further certainty, UK organisations should continue to follow current relevant regulatory guidance, for example the ICO’s ‘AI and data protection risk toolkit’.

*Please note that this article is accurate as of 9 January 2024.