
Artificial Intelligence in the European Union
Law / proposed law in the European Union
Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions only apply from later dates:
- 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
- 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
- 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
- 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).
A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.
Regulatory guidance / voluntary codes in the European Union
In order to ensure the consistent, effective, and uniform application of the EU AI Act across the European Union, the European Commission has approved guidelines (which are non-binding, since only the Court of Justice of the European Union has authoritative interpretation powers) on the following provisions of the text:
- Prohibited AI practices, on 4 February 2025 (however, the Commission has not yet formally adopted them); and
- Definition of an AI system, on 6 February 2025 (however, the Commission has not yet formally adopted them).
Further guidelines on high-risk AI systems are expected, and are currently under consultation. The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.
Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, may adopt voluntary codes of conduct (Article 95) in order to apply, on a non-binding basis, technical solutions and industry best practices. Because of this, it is expected that the AI Office will issue further codes of conduct (which will be distinct from the GPAI Code of Practice).
In May 2024, the Council of Europe published a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, whilst being conducive to technological progress and innovation.
Appointed supervisory authority in the European Union
European Level
The European Commission established the European AI Office (AI Office) on 24 January 2024. The AI Office is a European Commission function and forms part of the Directorate-General for Communications Networks, Content and Technology; it must therefore operate in accordance with the Commission's internal processes. The AI Office is responsible for assisting the European Commission with the oversight, monitoring and enforcement of requirements for GPAI models and systems. It is primarily made up of full-time staff from a range of backgrounds, including technology specialists, economists, policy specialists and lawyers.
In addition, the European Artificial Intelligence Board (AI Board) has also been established. The AI Board's core responsibility is to advise and assist the Commission and Member States to facilitate the consistent and effective application of the EU AI Act. The AI Board will include a representative from each Member State and the AI Office and the European Data Protection Supervisor shall participate as non-voting observers.
Member State Level
Article 70 of the EU AI Act concerns the designation of national competent authorities by EU Member States. It specifies that each Member State shall establish or designate as national competent authorities at least one notifying authority and at least one market surveillance authority for the purposes of the general supervision and enforcement of the EU AI Act. Where multiple market surveillance authorities are appointed, one of the market surveillance authorities must act as the single point of contact. The authorities must operate independently and without bias. Member States are expected to notify the Commission of their appointed authorities and must provide publicly available information on how to contact them by 2 August 2025.
Definitions in the European Union
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology-neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish an AI system from more traditional software. The central characteristics are the level of autonomy and adaptiveness with which the system operates and its ability to infer how to generate outputs. An AI system must therefore be able to operate independently at some level (like many existing technologies) but must also be able to draw conclusions from the input it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which process data along more fixed and pre-determined paths. The outputs must influence physical or virtual environments, whether by making decisions or through other means.
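To illustrate the distinction the definition draws, the following minimal sketch contrasts a traditional rule-based system, whose behaviour is explicitly programmed, with a system that infers its decision logic from data. The loan scenario, feature names and model choice are purely illustrative assumptions and are not drawn from the EU AI Act.

```python
# Illustrative contrast (hypothetical loan example): a fixed, pre-determined
# rule versus a system that infers how to generate outputs from data.
from sklearn.tree import DecisionTreeClassifier

def rule_based_decision(income: float, debts: float) -> str:
    # Traditional software: a fixed path explicitly written by a human.
    return "approve" if income - debts > 20_000 else "reject"

# Toy training data: [income, debts] -> 1 (repaid) or 0 (defaulted).
training_inputs = [[55_000, 5_000], [30_000, 25_000], [80_000, 10_000], [25_000, 20_000]]
training_labels = [1, 0, 1, 0]

# The mapping from inputs to outputs is inferred from the data rather than
# explicitly programmed - a key feature of an 'AI system' under Article 3(1).
model = DecisionTreeClassifier().fit(training_inputs, training_labels)

def inferred_decision(income: float, debts: float) -> str:
    return "approve" if model.predict([[income, debts]])[0] == 1 else "reject"

print(rule_based_decision(45_000, 30_000))  # follows the hand-written rule
print(inferred_decision(45_000, 30_000))    # output inferred from training data
```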
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for placing the AI on the market, either by making it available on the market for the first time or by putting it into service directly for its own purposes, in each case under its own name or trademark. An organisation may also become a downstream provider if it makes substantial modifications to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a “substantial modification”. At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Whether payment is made is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
Prohibited activities in the European Union
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of a person or group of persons by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions except for medical or safety reasons.
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
High-risk AI in the European Union
Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or products themselves, regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:
- Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
- Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development, as well as AI systems used for monitoring and detecting prohibited behaviour during tests.
- Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
- Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare, as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or in the risk assessment and pricing of life and health insurance.
- Law enforcement: AI systems used by law enforcement for risk assessments and for predicting criminal activity (for example, the risk of individuals becoming victims of crime or of (re-)offending, or profiling in the course of criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and for assessing the reliability of evidence.
- Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
- Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referenda or voting behaviour.
- Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.
These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.
The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).
Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such a system are relieved of the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).
Controls on generative AI in the European Union
General-Purpose AI Models
Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:
"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."
GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.
General-Purpose AI Models with Systemic Risk
Article 3(65) of the EU AI Act defines 'systemic risk' as:
"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".
Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for training exceeds 10^25 floating-point operations, although this can also be established through other indicators and benchmarks) or based on a decision of the Commission.
Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.
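As a rough illustration of the computational threshold in Article 51, the sketch below estimates training compute using the commonly cited rule of thumb of roughly six floating-point operations per parameter per training token and compares it with the 10^25 FLOP presumption. Both the rule of thumb and the example figures are assumptions for illustration; the Act itself only states the threshold.

```python
# Sketch: checking a training run against the 10^25 FLOP presumption for
# "high-impact capabilities" (Article 51). The 6 * parameters * tokens
# approximation and the example figures are illustrative assumptions only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Common rule of thumb: ~6 FLOPs per parameter per training token.
    return 6 * parameters * training_tokens

def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
flops = estimated_training_flops(100e9, 10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed to have high-impact capabilities:", presumed_high_impact(100e9, 10e12))
```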
Importantly, the requirements applicable to a GPAI model (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.
Enforcement / fines in the European Union
The EU AI Act enforces compliance through a structured framework of fines and sanctions, varying in severity based on the nature of the non-compliance.
For non-compliance with prohibited AI practices, fines can reach up to EUR 35 million or 7% of the total worldwide annual turnover, whichever is higher.
This includes practices like manipulative AI systems, exploiting vulnerabilities, social scoring by public authorities, and unauthorized biometric identification in public spaces.
Breaches of high-risk AI system requirements can incur fines up to EUR 15 million or 3% of the total worldwide annual turnover.
These requirements include risk management, data governance, technical documentation, transparency, and cybersecurity. Other non-compliance, such as providing incorrect or misleading information, can result in fines of up to EUR 7.5 million or 1% of the total worldwide annual turnover. This tier applies to breaches not covered by the higher sanction levels.
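Purely as a worked illustration of the "whichever is higher" mechanism, the sketch below computes the maximum fine for each tier against a hypothetical worldwide annual turnover; the turnover figure is an assumption.

```python
# Illustrative calculation: the maximum fine per tier is the higher of a
# fixed amount and a percentage of total worldwide annual turnover.

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_fraction: float) -> float:
    return max(fixed_cap_eur, turnover_fraction * turnover_eur)

annual_turnover = 2_000_000_000  # hypothetical worldwide annual turnover (EUR)

tiers = {
    "Prohibited AI practices": (35_000_000, 0.07),
    "High-risk AI system requirements": (15_000_000, 0.03),
    "Incorrect or misleading information": (7_500_000, 0.01),
}

for tier, (cap, fraction) in tiers.items():
    print(f"{tier}: up to EUR {max_fine(annual_turnover, cap, fraction):,.0f}")
```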
User transparency in the European Union
Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:
- Providers of AI systems must ensure that natural persons interacting with an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
- Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws (one possible marking approach is sketched after this list).
- Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
- Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
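One of many conceivable approaches to machine-readable marking is attaching provenance metadata to generated content, as in the minimal sketch below. The field names and sidecar-file approach are illustrative assumptions only; they do not represent any particular standard or a compliant implementation of Article 50.

```python
# Minimal sketch: attaching machine-readable provenance metadata to a
# generated file via a JSON sidecar. Field names and the sidecar approach
# are illustrative assumptions, not a standard or a compliant implementation.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_sidecar(content_path: str, generator_name: str) -> Path:
    content = Path(content_path).read_bytes()
    record = {
        "artificially_generated": True,            # marks the output as synthetic
        "generator": generator_name,               # hypothetical model identifier
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(content_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar

# Usage (assuming "output.png" was produced by a generative model):
# write_provenance_sidecar("output.png", "example-image-model")
```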
Fairness / unlawful bias in the European Union
At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.
Within the EU AI Act, non-discrimination and fairness are incorporated within the following:
- Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
- Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets (a simple illustration of such a check follows this list).
- Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).
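As a very simple illustration of the kind of examination Article 10 contemplates, the sketch below compares positive-outcome rates across groups in a toy training set. The demographic-parity metric, the 0.1 flagging threshold, and the field names are assumptions for illustration, not requirements of the EU AI Act.

```python
# Illustrative bias check on a training data set: compare positive-outcome
# rates across a protected group. Metric, threshold and field names are
# assumptions only.
from collections import defaultdict

def positive_rate_by_group(records, group_key="group", label_key="label"):
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += record[label_key]
    return {group: positives[group] / totals[group] for group in totals}

training_data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

rates = positive_rate_by_group(training_data)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")
if gap > 0.1:
    print("Possible bias flagged for further examination")
```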
The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination') and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered this approach to have the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.
Human oversight in the European Union
Human oversight is crucial for preventing and mitigating risks associated with the AI system's operation. Providers must also ensure that operators are adequately trained to oversee the AI system, understand its functionalities, and respond appropriately to any issues. Effective human oversight enhances the safety and reliability of high-risk AI systems, ensuring they operate within acceptable parameters and can be controlled in case of unexpected behaviour or malfunctions.
Article 14 of the EU AI Act deals with human oversight, stating that providers must implement measures to ensure effective human oversight of high-risk AI systems. This involves designing the system with mechanisms that allow human operators to monitor, intervene, and deactivate the AI system if necessary. Providers of high-risk AI systems are required to ensure that systems falling under their responsibility are compliant with this requirement (Article 16(a)) and to include the human oversight measures within the "instructions for use" for the high-risk AI system (Article 13(3)(d)).
In addition, deployers of high-risk AI systems are required to comply with the provider's 'instructions for use' and to assign human oversight to persons who have the necessary competence, training and authority, as well as the necessary support (Article 26(1) and (2)).
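As a purely illustrative sketch of the kind of mechanism Articles 14 and 26 describe, the wrapper below routes low-confidence automated outputs to a human reviewer and exposes a deactivation switch. The class names, confidence threshold and review workflow are assumptions and are not prescribed by the EU AI Act.

```python
# Illustrative human-in-the-loop wrapper: low-confidence outputs are escalated
# to a human reviewer, and a human operator can deactivate the system.
# Names, the 0.9 threshold and the workflow are assumptions only.
from typing import Callable, Tuple

class OverseenAISystem:
    def __init__(self, model: Callable[[dict], Tuple[str, float]],
                 human_review: Callable[[dict, str], str],
                 confidence_threshold: float = 0.9):
        self.model = model
        self.human_review = human_review
        self.confidence_threshold = confidence_threshold
        self.active = True

    def deactivate(self) -> None:
        # "Stop" control: lets a human operator halt the system.
        self.active = False

    def decide(self, case: dict) -> str:
        if not self.active:
            raise RuntimeError("System deactivated by human operator")
        decision, confidence = self.model(case)
        if confidence < self.confidence_threshold:
            # Escalate uncertain outputs for human intervention.
            return self.human_review(case, decision)
        return decision

# Usage with toy callables: the low confidence routes the case to a human.
system = OverseenAISystem(
    model=lambda case: ("approve", 0.72),
    human_review=lambda case, suggestion: f"human-reviewed: {suggestion}",
)
print(system.decide({"applicant_id": 1}))
```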
Finally, recital 27 of the EU AI Act includes seven principles for trustworthy AI, including ensuring that AI systems apply human agency and oversight. This means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and functions in a way that can be appropriately controlled and overseen by humans.