Artificial Intelligence in France

User transparency in France

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation does not apply to AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the system's synthetic outputs are marked in a machine-readable format and detectable as artificially generated or manipulated (again excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences), and must process data in accordance with other relevant EU law.

  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.

In addition, the French Influencer Law requires influencers to include warnings on images that have been modified using filters or AI: such images must carry a "retouched images" or "virtual images" label.

The CNCDH Opinion recommends extending the EU AI Act transparency obligations so that people are systematically informed when they are exposed to, or required to interact with, an AI system and, when they are the subject of a decision, that the decision is based in whole or in part on algorithmic processing, including where the decision is taken by a private organisation (currently, this information requirement on AI-assisted decision-making applies in France only to public bodies).

The Senate Report also flags several transparency-related issues, including (i) the "black box" and explainability problem, i.e. the difficulty of understanding model reasoning, which motivates transparency and interpretability requirements in policy frameworks, and (ii) deepfake watermarking and labelling, noting the growing policy push for watermarking or equivalent measures that enable users to recognise synthetic media.

Criminalization of unauthorized deepfakes in France

The French Digital Space Law criminalizes publishing deepfakes of other persons, i.e. content that modifies their image and/or voice by AI, without their consent. Offenders face up to one year's imprisonment and a fine of up to 15,000 euros, with increased penalties where the deepfake is shared through an online platform or involves sexually explicit content.
