Artificial Intelligence in the United Kingdom

Law / proposed law in the United Kingdom

The UK has not yet implemented a specific law addressing AI.

Two Private Members' Bills relating to the regulation of AI systems are currently progressing through the legislative process. The first, the Public Authority Algorithmic and Automated Decision-Making Systems Bill, concerns decision-making processes in the public sector and was introduced to the House of Lords by Lord Clement-Jones on 9 September 2024. The second is Lord Holmes' Artificial Intelligence (Regulation) Bill, introduced on 4 March 2025 (although a version of the Bill had existed in the prior Parliamentary session, before the 2024 General Election), which would establish a central AI Authority and regulatory sandboxes, and would require an AI officer for organisations deploying AI.

In the King's Speech of 17 July 2024, the UK Government announced that it would seek to:

"establish the most appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models" (para. 7, page 7).

Whilst there had been some speculation in 2025 that the UK might move more strongly towards broader cross-sector AI regulation focussed on managing the risks of AI, this has not materialised. In the latter half of the year, the UK Government reaffirmed its sector-based approach and, in particular, its message that it sees AI as a critical component of UK economic growth.

In October 2025, the Government announced its blueprint for AI regulation, which identified some of the tools it sees as necessary to deliver this growth and drive modernisation of key UK sectors. The proposals include the use of regulatory sandboxes in key sectors (such as healthcare, professional services, transport, and robotics in advanced manufacturing) to foster responsible development of AI. While the proposals are cross-sector in nature, the focus appears to be more on reducing barriers to growth. The Government launched a call for evidence on the AI Growth Lab, which closed on 7 January 2026, so more concrete proposals can be expected later in the year.

There are many UK laws beyond the scope of this resource (relating to data protection, intellectual property, human rights, equalities, employment laws, etc.) that impact various aspects of AI development, deployment and use.

On data protection for example, the Data (Use and Access) Act 2025 (DUAA) received Royal Assent on 19 June 2025. Although not an AI-specific statute, the DUAA is expected to play a significant role in the UK's AI ecosystem by improving access to and use of data across regulated sectors, in turn, supporting AI development and innovation.

The most relevant amendments impacting the use of AI in the UK are those related to automated decision making, which took effect on 5 February 2026. The previous regime generally prohibited solely automated decisions (with no meaningful human involvement), including profiling, that had a significant legal effect, unless there was explicit consent or it was necessary for the entry into or performance of a contract. The DUAA moves the dial to a more permissive framework, aimed at reducing compliance burdens while in parallel mandating new safeguards (outlined in more detail in our guide to Data Protection Laws of the World).

Automated decision making is now permitted where those new safeguards are implemented, unless special category data (e.g. health data) is involved. Organisations can also now rely on legitimate interests as a lawful basis (instead of consent, which is hard to obtain, or contractual necessity, which was often difficult to establish for efficiency gains).

Notably, the DUAA clarifies that human review must be "substantive and informed", i.e. a human must be able to challenge or override an AI-driven decision or profile generation, but need not be involved at all stages. This is important, as the Information Commissioner's Office has indicated that enforcement action may be prioritised where automated decision-making systems fail to offer meaningful human intervention, or where the lack of these safeguards could lead to significant discrimination or unfair treatment of individuals.
