Artificial Intelligence in the European Union

Controls on generative AI in the European Union

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across many domains and contexts. Because of this broad applicability and the wide range of tasks these models can perform, the Act sets requirements to ensure that they adhere to high ethical and safety standards. Note that not all AI models are GPAI models, and the Act's model-level obligations apply only to the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities, assessed on the basis of appropriate technical tools and methodologies, including indicators and benchmarks (currently, high-impact capabilities are presumed when the cumulative amount of computation used for training exceeds 10^25 floating-point operations), or based on a decision of the Commission.
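To illustrate how a provider might check its position against the compute presumption, here is a minimal sketch. The 6 × parameters × training-tokens approximation of training compute is a common heuristic from the machine-learning literature, not a method prescribed by the Act, and the figures in the example are hypothetical.

```python
# Hypothetical sketch: checking a training run against the EU AI Act's
# 10^25 FLOP presumption threshold for systemic risk (Article 51).
# The 6 * parameters * tokens estimate is a common heuristic, not a rule
# set out in the Act itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold under Article 51


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of cumulative training compute in floating-point operations."""
    return 6 * parameters * training_tokens


def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimate_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical example: a 500-billion-parameter model trained on 10 trillion tokens.
    flops = estimate_training_flops(500e9, 10e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Presumed high-impact capabilities:", presumed_high_impact(500e9, 10e12))
```

In practice, the classification also depends on other indicators, benchmarks and Commission decisions, so exceeding the threshold triggers a presumption rather than an automatic, final classification.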

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applying to a GPAI model or system (i.e., irrespective of a specific use case) and the requirements applying to an AI system based on its risk profile (which depends on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates that model into a high-risk AI system, the rules for both GPAI models and high-risk AI systems must be complied with.
