Artificial Intelligence in France

Regulatory guidance / voluntary codes in France

To ensure the consistent, effective, and uniform application of the EU AI Act across the European Union, the European Commission has adopted guidelines (non-binding, since only the Court of Justice of the European Union has authoritative interpretation powers) on the following provisions of the text:

Further guidelines on high-risk AI systems are expected and are currently under consultation. The Commission is also expected to adopt harmonized standards and common specifications for both high-risk AI systems and general-purpose AI models, giving organizations further tools that carry a presumption of conformity.

The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.

The Commission has also released the first draft of its Code of Practice on Transparency of AI-Generated Content. The Code is planned to be finalized by June 2026. If approved, the final code will be a voluntary tool for providers and deployers to demonstrate compliance with their obligations for marking and labelling AI-generated content under the EU AI Act.

Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, may adopt voluntary codes of conduct (Article 95) in order to apply, on a non-binding basis, technical solutions and industry best practices. The AI Office is therefore expected to issue further codes of conduct for this purpose, distinct from the GPAI Code of Practice and the Code of Practice on Transparency.

To support organisations in identifying and implementing AI literacy initiatives, the Commission launched a repository of AI literacy practices. The repository was updated in November 2025 to improve the searchability of practices.

In May 2024, the Council of Europe adopted a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy, and the rule of law, whilst being conducive to technological progress and innovation.

AI compliance in France

In France, many governmental reports and guidelines from independent authorities have been issued on AI. The main ones shaping the AI framework are presented below.

In September 2017, Deputy Cédric Villani was tasked with leading a mission to develop a French and European AI strategy. Its conclusions were presented in a report named 'Making sense of artificial intelligence', known as the 'Villani Report', which covers various aspects of AI, including economic policy, research, employment, ethics, and social cohesion. Additionally, five annexes focus on the risks and opportunities of AI in specific areas: education, health, agriculture, transport, and defense and security. This report led the French government to build a national AI strategy in 2018, which was last updated on 7 February 2025.

In June 2020, the French Prudential Supervision and Resolution Authority (ACPR) issued a study on the 'Governance of artificial intelligence algorithms in the financial sector' (ACPR AI Governance Study). This study highlights the need for evaluation and governance of AI algorithms.

On 7 April 2022, the French national advisory commission on human rights (CNCDH) issued an 'Opinion on the impact of AI on fundamental rights' (CNCDH Opinion), which urges public authorities to establish a strong legal framework for AI. The document highlights how algorithms can perpetuate human biases and recommends measures for ensuring algorithmic transparency and fairness.

On 13 March 2024, the French Artificial Intelligence Commission (a governmental body) published a report, 'AI: our ambition for France', containing twenty-five recommendations to make France a major player in the AI technological revolution, notably by facilitating access to personal data (in particular health data) and adopting an 'AI exception' for public research.

On 28 November 2024, the French Senate's Office for the Evaluation of Scientific and Technological Choices (OPECST) issued a wide-ranging report, 'ChatGPT, and after? Assessment and perspectives of artificial intelligence' (the Senate Report). The report traces the evolution and mechanics of AI (from symbolic systems to deep learning and Transformer-based 'foundation models'), assesses economic, societal, cultural, and security implications, benchmarks France's national AI strategy against roughly twenty other jurisdictions, and surveys emerging models of national, EU, and global governance. It culminates in 18 recommendations, including several to be advanced at forthcoming international AI fora, emphasizing innovation, risk management, transparency, and democratic oversight to ensure AI serves the public interest while safeguarding sovereignty and fundamental rights.

The French national data protection authority (CNIL) has issued non-binding AI fact sheets (CNIL AI Fact Sheets) that focus on the development phase of AI systems and models and highlight the need to comply with privacy requirements at all stages of development. The CNIL has also developed tools and best practices for using AI tools and models in compliance with privacy laws, e.g., a risk assessment to be carried out before the use of an AI system (CNIL AI Risk Assessment). In addition, the CNIL has published guidance on the use of generative AI systems, with a related Q&A (CNIL Generative AI Guidance), that aims to help organizations deploy such systems responsibly.

The French national agency for the security of information systems (ANSSI) published guidance on 29 April 2024 setting out security recommendations for generative AI systems (ANSSI Generative AI Security Guidance). The guidance sets out good practices to implement across the three stages of the generative AI lifecycle: training; integration and deployment; and operational production. These practices should be adapted to the choice of providers (for hosting, training, testing, etc.), the sensitivity of the data used, and the criticality of the intended use case of the AI system.

On 12 July 2024, the French competition authority (Autorité de la Concurrence) issued an opinion on the competitive functioning of the generative artificial intelligence sector, focusing on strategies by major digital players to consolidate market power in the design, training, and specialization of large language models. Following this opinion, the Authority announced that it was opening an ex officio investigation into the competitive functioning of the conversational agents (chatbots) sector. The Authority also intends to examine emerging issues, particularly those linked to the use of conversational agents in the online retail sector (also referred to as 'agentic commerce'), by launching a public consultation in 2026.

The French High Council for Literary and Artistic Property (CSPLA), which acts as an observatory for the exercise and enforcement of copyright and neighboring rights, was tasked with clarifying the EU AI Act transparency requirements for AI model providers (Article 53). Its findings were made public in a report published on 11 December 2024 (CSPLA Report).

In 2025, CIGREF (a non-profit association bringing together major French companies and public administrations) issued a set of five guides to help large organizations adopt AI responsibly and in compliance with the EU AI Act, offering practical guidance on key obligations, governance structures, legal issues, and contractual impacts. The guides also provide best practices and enterprise feedback on generative AI adoption, highlighting organizational readiness, risks, and responsible use patterns.
