Artificial Intelligence in France

Controls on generative AI in France

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Note that not all AI models qualify as GPAI models, and the EU AI Act's model-level obligations apply only to the latter.

Generative AI guidance in France

In France, the CSPLA Report specifies that AI model providers should put in place a policy to comply with European copyright law, respecting authors' opt-out rights, and make available to rights holders and the public a sufficiently detailed summary of the content used to train their AI models. Templates were proposed for these summaries but were not retained in the final version of the GPAI Code of Practice.

The Senate Report highlights that today’s AI ecosystem includes “more or less open” models, a distinction that has become central to regulatory debates because openness affects transparency, auditability, and safety oversight. It explains that the EU AI Act now regulates not only AI uses but also foundation models (i.e., GPAI models) themselves, introducing a stricter regime for those deemed systemic-risk models due to their scale, dual-use potential, and the difficulty of supervising them. The report emphasises that generative AI still suffers from core reliability issues (e.g., hallucinations, opacity, and multi-layered bias) that persist even with mitigation techniques such as Retrieval-Augmented Generation (RAG), positioning these technical limitations as key reasons regulators now impose model-level obligations in addition to downstream application controls.

With regard to privacy aspects, the CNIL Generative AI Guidance provides recommendations on how to ensure privacy safeguards when using Generative AI, including: start with specific needs rather than deploying AI without clear purpose; define allowed and prohibited uses, especially regarding personal data; acknowledge system limitations and risks; choose secure deployment methods, preferably using local, specialized systems; train end users on proper usage and risks; and implement appropriate governance ensuring GDPR compliance with all stakeholders involved.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as posing systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a Commission decision.
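As a purely illustrative sketch (the function name and interface are invented for this example, and real classification also weighs other indicators and Commission decisions), the compute-based presumption in Article 51 reduces to a simple numeric comparison:

```python
# Hypothetical illustration, not an official compliance tool: Article 51
# presumes high-impact capabilities when cumulative training compute
# exceeds 10^25 floating-point operations (FLOPs).
SYSTEMIC_RISK_FLOP_THRESHOLD = 10 ** 25

def presumed_high_impact(training_flops: int) -> bool:
    """Return True if cumulative training compute exceeds the threshold.

    Note: the Act also allows classification via other indicators,
    benchmarks, or a Commission decision, which this check ignores.
    """
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_high_impact(5 * 10 ** 24))  # below threshold -> False
print(presumed_high_impact(2 * 10 ** 25))  # above threshold -> True
```

The threshold is a rebuttable presumption triggering additional obligations, not the sole route to classification.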

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model or system (i.e., irrespective of any specific use case) and those applicable to an AI system based on its risk profile (which depends on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, the rules for both GPAI models and high-risk AI systems must be complied with.
