Artificial Intelligence in the United Kingdom
Law / proposed law in the United Kingdom
A specific law addressing AI has not been implemented in the UK yet.
Two Private Members' Bills relating to the regulation of the use of AI systems are currently progressing through the legislative system. The first relates to decision-making processes in the public sector: the Public Authority Algorithmic and Automated Decision-Making Systems Bill, introduced to the House of Lords by Lord Clement-Jones on 9 September 2024. The second is Lord Holmes' Artificial Intelligence (Regulation) Bill, introduced on 4 March 2025 (although a version of the Bill had existed in the prior Parliamentary session before the 2024 General Election), which would establish a central AI Authority and regulatory sandboxes, and would require organisations deploying AI to appoint an AI officer.
In the King's Speech of 17 July 2024, the UK Government announced that it will seek to:
"establish the most appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models" (para. 7, page 7).
Whilst there had been some speculation in 2025 that the UK might move towards broader cross-sector AI regulation focused on managing the risks of AI, this has not materialised. In the latter half of the year, the UK Government reaffirmed its sector-based approach and, in particular, its message that it sees AI as a critical component of UK economic growth.
In October 2025, the Government announced its blueprint for AI regulation, which identified some of the tools it sees as necessary to deliver this growth and drive the modernisation of key UK sectors. These proposals include the use of regulatory sandboxes in key sectors (such as healthcare, professional services, transport, and the use of robotics in advanced manufacturing) to foster responsible development of AI. While the proposals are cross-sector in nature, the focus appears to be more on reducing barriers to growth. The Government launched a call for evidence on the AI Growth Lab, which closed on 7 January 2026, so more concrete proposals can be expected later in the year.
There are many UK laws beyond the scope of this resource (relating to data protection, intellectual property, human rights, equalities, employment laws, etc.) that impact various aspects of AI development, deployment and use.
On data protection, for example, the Data (Use and Access) Act 2025 (DUAA) received Royal Assent on 19 June 2025. Although not an AI-specific statute, the DUAA is expected to play a significant role in the UK's AI ecosystem by improving access to and use of data across regulated sectors, in turn supporting AI development and innovation.
The most relevant amendments impacting the use of AI in the UK are those related to automated decision making, which took effect on 5 February 2026. The previous regime generally prohibited solely automated decisions (with no meaningful human involvement), including profiling, that had a significant legal effect, unless there was explicit consent or it was necessary for the entry into or performance of a contract. The DUAA moves the dial to a more permissive framework, aimed at reducing compliance burdens while in parallel mandating new safeguards (outlined in more detail in our guide to Data Protection Laws of the World).
Automated decision-making is now permitted provided those new safeguards are implemented, unless special category data (e.g. health data) is involved, and organisations can now rely on legitimate interests as a lawful basis (i.e. instead of consent, which is hard to obtain, or contractual necessity, which was often difficult to establish for efficiency gains).
Notably, the DUAA clarifies that human review must be "substantive and informed", i.e. a human must be able to challenge or override an AI-driven decision or profile generation, but need not be involved at every stage. This is important, as the Information Commissioner's Office has indicated that enforcement action may be prioritised where automated decision-making systems fail to offer meaningful human intervention, or where the lack of these safeguards could lead to significant discrimination or unfair treatment of individuals.
Regulatory guidance / voluntary codes in the United Kingdom
On 31 January 2025, the UK Government published a Code of Practice for the Cyber Security of AI (Code) setting out cyber security requirements applying throughout the lifecycle of AI systems. The Code consists of thirteen principles to be voluntarily applied by relevant groups within the AI supply chain, namely system operators, developers, data custodians, end-users and other affected entities, with each principle linked to a particular stage of the AI system lifecycle.
On 13 January 2025, the UK Government announced an AI Opportunities Action Plan (Action Plan), its roadmap towards harnessing AI opportunities to enhance growth and productivity for the UK, focusing heavily on investment in infrastructure and skills.
The Bletchley Declaration, dated 1 November 2023, was the outcome of the UK's AI Safety Summit held by the previous UK Government and was signed by several international governments, each affirming that AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible. The UK delegation notably joined the USA in declining to sign the declaration on 'inclusive' AI at the Paris AI Summit in 2025.
On 29 March 2023, the UK Government published a White Paper: A pro-innovation approach to AI regulation (White Paper) elaborating on the approach to AI set out in its 18 July 2022 AI Governance and Regulation Policy Statement. The White Paper set out proposals for implementing a proportionate, future-proof and pro-innovation legislative framework for regulating AI and identified five key principles (para. 48, section 3.2.3):
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance; and
- Contestability and redress.
On 31 July 2025, BSI launched the world’s first international standard for independent audits of AI systems, aiming to ensure consistent evaluation of AI reliability, fairness and safety.
The Government is planning to legislate to grant the AI Safety Institute statutory independence by late 2025, making voluntary safety pledges legally binding.
Additionally, in July 2025, the Government signed non-binding arrangements with several frontier AI model providers to foster adoption in public services, including deployment in ‘AI Growth Zones’.
Appointed supervisory authority in the United Kingdom
No supervisory authority for AI has yet been appointed by statute in the UK. However, Lord Holmes' proposed AI (Regulation) Bill seeks to create a central statutory AI Authority, which would coordinate oversight across sectors and set governance standards.
Definitions in the United Kingdom
A specific law addressing AI has not been introduced in the UK yet.
Prohibited activities in the United Kingdom
A specific law addressing AI has not been introduced in the UK yet.
High-risk AI in the United Kingdom
A specific law addressing AI has not been introduced in the UK yet. Sector regulators are examining the risks posed by AI in their sectors; the financial, communications, healthcare and other sectoral regulators (the FCA, Ofcom and the MHRA) are increasingly embedding AI principles into existing frameworks. Some have expressed concerns about the pace of adoption, with the FCA warning in June 2025 that the speed at which AI is evolving will require adaptive enforcement.
Controls on generative AI in the United Kingdom
There is no single statute addressing AI in the UK yet. Existing principles under e.g. the Equality Act 2010, the Data Protection Act 2018, the UK GDPR and, now, the Data (Use and Access) Act 2025 are therefore to be considered.
Enforcement / fines in the United Kingdom
There is no single statute addressing AI in the UK yet. Existing powers available to the CMA, the ICO, the FCA (for the financial services sector) and Ofcom (for the media and online services sector) are therefore to be considered. It is worth noting that the Digital Regulation Cooperation Forum (DRCF) was set up in the UK to facilitate collaboration between the CMA, ICO, FCA and Ofcom on cross-sector digital risks, and has established a dedicated AI and Digital Hub for innovators.
Where deployment of AI might result in redundancies, it is important to be aware of changes introduced by the Employment Rights Act 2025.
User transparency in the United Kingdom
There is no single statute addressing AI in the UK yet. Existing principles under e.g. the Data Protection Act 2018 and the UK GDPR should be considered. The principle of appropriate transparency and explainability identified in the White Paper specifies that AI systems should be appropriately transparent and explainable, on the basis that transparency can increase public trust, which can be a significant driver of AI adoption.
Fairness / unlawful bias in the United Kingdom
There is no single statute addressing AI in the UK yet. Deployment of AI systems with specific biases could breach existing laws, including the Equality Act 2010, the Data Protection Act 2018 and/or various employment laws, depending on context.
The principle of fairness identified in the White Paper specifies that AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Since AI can have a significant impact on people’s lives, the principle states that AI-enabled decisions with high impact outcomes should not be arbitrary and should be justifiable.
The Interim AI Report identified the 'Bias challenge', i.e. that AI can introduce or perpetuate biases that society finds unacceptable.
Human oversight in the United Kingdom
There is no single statute addressing AI in the UK yet. Existing principles under e.g. the Equality Act 2010, the Data Protection Act 2018, the UK GDPR and, now, the Data (Use and Access) Act 2025 must therefore be considered. As noted under Law / Proposed Law, the DUAA has resulted in a more permissive approach to automated decision-making, allowing decisions to be made in reliance on legitimate interests provided safeguards are in place (unless special category data is involved). Please see our guide to Data Protection Laws of the World for a summary of the new Articles 22A-22D of the UK GDPR.