Artificial Intelligence in the European Union

Human oversight in the European Union

Human oversight is crucial for preventing and mitigating risks arising from a high-risk AI system's operation. Providers must ensure that operators are adequately trained to oversee the AI system, understand its functionalities, and respond appropriately to any issues. Effective human oversight enhances the safety and reliability of high-risk AI systems, ensuring they operate within acceptable parameters and can be controlled in the event of unexpected behaviour or malfunctions.

Article 14 of the EU AI Act addresses human oversight, requiring providers to implement measures that ensure effective human oversight of high-risk AI systems. This involves designing the system with mechanisms that allow human operators to monitor it, intervene in its operation, and deactivate it if necessary. Providers of high-risk AI systems must ensure that systems under their responsibility comply with this requirement (Article 16(a)) and must describe the human oversight measures in the 'instructions for use' for the high-risk AI system (Article 13(3)(d)).
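
To make this concrete, the short sketch below shows, in Python, one way such monitor, intervene and deactivate mechanisms could be exposed to a human overseer. It is purely illustrative: Article 14 does not prescribe any particular technical design, and the CreditScoringModel class, its toy scoring rule and the oversight hooks are hypothetical assumptions made for this example.

# Illustrative sketch only -- not prescribed by the EU AI Act. It wraps a
# hypothetical high-risk AI component with the kind of monitor / intervene /
# deactivate hooks a provider might expose to a human overseer.
from dataclasses import dataclass, field


@dataclass
class OversightLog:
    """Simple audit trail the human overseer can inspect (monitor)."""
    entries: list[str] = field(default_factory=list)

    def record(self, message: str) -> None:
        self.entries.append(message)


class CreditScoringModel:
    """Hypothetical high-risk AI system wrapped with human-oversight controls."""

    def __init__(self, log: OversightLog) -> None:
        self.log = log
        self.active = True                        # deactivate() flips this kill switch
        self.override_score: float | None = None  # intervene() can set this

    def deactivate(self) -> None:
        """Human operator halts the system, akin to the 'stop' mechanism in Article 14(4)(e)."""
        self.active = False
        self.log.record("system deactivated by human operator")

    def intervene(self, score: float) -> None:
        """Human operator overrides the next automated decision."""
        self.override_score = score
        self.log.record(f"human override set to {score}")

    def predict(self, applicant_income: float) -> float:
        if not self.active:
            raise RuntimeError("system has been deactivated by a human operator")
        if self.override_score is not None:
            score, self.override_score = self.override_score, None
            self.log.record(f"returned human-overridden score {score}")
            return score
        score = min(1.0, applicant_income / 100_000)  # toy scoring rule
        self.log.record(f"automated score {score:.2f} for income {applicant_income}")
        return score


if __name__ == "__main__":
    log = OversightLog()
    model = CreditScoringModel(log)
    print(model.predict(45_000))   # automated decision
    model.intervene(0.9)           # human overrides the next decision
    print(model.predict(45_000))   # returns the human-set score
    model.deactivate()             # human stops the system
    print("\n".join(log.entries))  # monitor: inspect the audit trail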

In addition, deployers of high-risk AI systems must follow the provider's 'instructions for use' and assign human oversight to persons who have the necessary competence, training and authority, as well as the necessary support (Article 26(1) and (2)).

Finally, Recital 27 of the EU AI Act sets out seven principles for trustworthy AI, one of which is human agency and oversight. This means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and functions in a way that can be appropriately controlled and overseen by humans.
