Artificial Intelligence in France

Human oversight in France

Human oversight is crucial for preventing and mitigating risks associated with an AI system's operation. Providers must ensure that operators are adequately trained to oversee the AI system, understand its functionalities, and respond appropriately to any issues. Effective human oversight enhances the safety and reliability of high-risk AI systems, ensuring they operate within acceptable parameters and can be controlled in case of unexpected behaviour or malfunctions.

Article 14 of the EU AI Act addresses human oversight, requiring providers to implement measures that ensure effective human oversight of high-risk AI systems. This involves designing the system with mechanisms that allow human operators to monitor, intervene in, and deactivate the AI system if necessary. Providers of high-risk AI systems must ensure that systems falling under their responsibility comply with this requirement (Article 16(a)) and must include the human oversight measures in the "instructions for use" for the high-risk AI system (Article 13(3)(d)).

In addition, deployers of high-risk AI systems are required to comply with the provider's instructions for use and to assign human oversight to persons who have the necessary competence, training and authority, as well as the necessary support (Article 26(1) and (2)).

Finally, recital 27 of the EU AI Act sets out seven principles for trustworthy AI, including human agency and oversight.

This means that AI systems should be developed and used as tools that serve people, respect human dignity and personal autonomy, and function in a way that can be appropriately controlled and overseen by humans.

Guidance on human oversight in France

The CNCDH Opinion recommends implementing supervision of the AI system, following a procedure that may vary according to the risks of infringement of fundamental rights identified in a prior impact assessment. This process should help the AI system's user maintain ongoing vigilance over the system's effects, including any potential discriminatory effects.

The CNIL AI Risk Assessment addresses human oversight as a means of preventing systematic errors and misuse. It specifies that organizations must implement clear, effective, and sustainable measures for human intervention, and that data controllers should establish a framework ensuring these oversight conditions are met.

The Senate Report strongly endorses the EU approach to human oversight measures and the need for governance ensuring that AI complies with human rights, through education and oversight capacities built across society, and highlights the role of human control over system deployment and use. In its chapter on EU governance, it welcomes the institutional set-ups and soft-law instruments (e.g., the AI Pact) designed to structure oversight and accountability around model and system risks, consistent with the human-in-the-loop expectations of the risk-based scheme it presents.
