Artificial Intelligence in Australia

Human oversight

Information not provided.

Last modified 25 July 2025

Human oversight is crucial for preventing and mitigating risks associated with an AI system's operation. Providers must ensure that operators are adequately trained to oversee the AI system, understand its functionalities, and respond appropriately to any issues. Effective human oversight enhances the safety and reliability of high-risk AI systems, ensuring they operate within acceptable parameters and can be controlled in the event of unexpected behaviour or malfunctions.

Article 14 of the EU AI Act deals with human oversight, stating that providers must implement measures to ensure effective human oversight of high-risk AI systems. This involves designing the system with mechanisms that allow human operators to monitor, intervene in, and deactivate the AI system if necessary. Providers of high-risk AI systems are required to ensure that systems falling under their responsibility comply with this requirement (Article 16(a)) and to include the human oversight measures in the "instructions for use" for the high-risk AI system (Article 13(3)(d)).

In addition, deployers of high-risk AI systems are required to comply with the provider's instructions for use and to assign human oversight to persons who have the necessary competence, training and authority, as well as the necessary support (Article 26(1) and (2)).

Finally, recital 27 of the EU AI Act sets out seven principles for trustworthy AI, including human agency and oversight.

This means that AI systems should be developed and used as tools that serve people, respect human dignity and personal autonomy, and function in a way that can be appropriately controlled and overseen by humans.
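The monitor / intervene / deactivate mechanisms described above can be sketched as a thin wrapper around a model. This is purely an illustration and not a pattern prescribed by the Act; the `Decision` type, confidence threshold, and reviewer callback are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    output: str
    confidence: float

class OversightWrapper:
    """Wrap a model so an operator can monitor, intervene in, and deactivate it."""

    def __init__(self, model: Callable[[str], Decision], review_threshold: float = 0.8):
        self.model = model
        self.review_threshold = review_threshold  # below this, a human must review
        self.active = True
        self.audit_log = []  # record of every case handled, for monitoring

    def deactivate(self) -> None:
        """'Stop button': halt all automated decisions."""
        self.active = False

    def run(self, case: str, human_review: Callable[[Decision], Decision]) -> Optional[Decision]:
        if not self.active:
            return None  # an operator has deactivated the system
        decision = self.model(case)
        reviewed = decision.confidence < self.review_threshold
        if reviewed:
            decision = human_review(decision)  # the human may override the output
        self.audit_log.append({"case": case, "reviewed": reviewed})
        return decision
```

In this sketch, the audit log supports monitoring, the review callback supports intervention, and `deactivate()` supports switching the system off, mirroring the three capabilities Article 14 describes.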
Last modified 18 July 2025


Page 5 of the Summary of the Brazilian Artificial Intelligence Strategy notes that the Strategy repeatedly states that systems must be designed in a way that respects human rights, democratic values and diversity, and must include appropriate safeguards enabling human intervention, whenever necessary, to guarantee a just society.

Last modified 31 July 2025


The Voluntary Code specifies under its Human Oversight and Monitoring principle that signatories should (with varying levels of obligation, as indicated, depending on whether the signatory is a developer or a manager of a generative AI system and on whether the system is available for public use):

  • monitor the operation of the system for harmful uses or impacts after it is made available, including through the use of third-party feedback channels, and inform the developer and/or implement usage controls as needed to mitigate harm; and
  • maintain a database of reported incidents after deployment, and provide updates as needed to ensure effective mitigation measures.
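The incident-database obligation above could, at its simplest, be a small register that records reports from feedback channels and tracks mitigation measures. The class, field names, and methods below are assumptions for illustration, not anything the Voluntary Code prescribes:

```python
import datetime

class IncidentRegister:
    """Minimal post-deployment incident database: record reports, track mitigations."""

    def __init__(self):
        self._incidents = []

    def report(self, source: str, description: str) -> int:
        """Log a reported incident and return its id."""
        incident = {
            "id": len(self._incidents),
            "source": source,  # e.g. a third-party feedback channel
            "description": description,
            "reported_at": datetime.datetime.now(datetime.timezone.utc),
            "mitigations": [],
        }
        self._incidents.append(incident)
        return incident["id"]

    def add_mitigation(self, incident_id: int, measure: str) -> None:
        """Attach a mitigation measure to an existing incident."""
        self._incidents[incident_id]["mitigations"].append(measure)

    def open_incidents(self):
        """Incidents with no mitigation recorded yet."""
        return [i for i in self._incidents if not i["mitigations"]]
```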
Last modified 11 July 2025

Article 4 of the Chilean AI Bill establishes the main principles applicable to AI systems, and Article 4 a) states the following:

Human intervention and monitoring

AI systems will be developed and used as a tool in the service of human beings, respecting human dignity and personal autonomy, and operated in a way that can be adequately controlled and monitored by human beings.

In addition, Article 8 f) of the Chilean AI Bill establishes the following rule applicable to High-Risk AI Systems:

Human oversight mechanisms

High-risk AI systems shall be designed and developed so that they can be supervised by natural persons technically qualified for this function as appropriate for the respective implementation scenario, and in a proportional manner with the associated risks, with the aim of preventing or minimising risks to health, safety, fundamental rights, democracy, and/or the environment, which may arise when a High-Risk AI system is used as intended or when it is likely to be misused.
Last modified 23 July 2025

The Recommendation Algorithms Provisions specify that to comply, businesses must (amongst other things) establish and improve relevant management systems and technical measures (including for algorithm mechanism review) and employ professional staff and technical support that is appropriate to the scale of the algorithm recommendation service.

Last modified 26 January 2026









Guidance on human oversight in France

The CNCDH Opinion recommends implementing supervision of the AI system, under a procedure that may vary with the risks of infringement of fundamental rights identified in a prior impact assessment. This process should help the user of the AI system maintain ongoing vigilance over the system's effects, including any potential discriminatory effects.

The CNIL AI Risk Assessment provides elements on human oversight to prevent systematic errors and misuse. It specifies that organizations must implement clear, effective, and sustainable measures for human intervention, and that data controllers should establish a framework ensuring these oversight conditions are met.

The Senate Report strongly approves of the EU approach to human oversight and of the need for governance ensuring that AI complies with human rights, through education and oversight capacities built across society, pointing to the role of human control over system deployment and use. In its EU governance chapter, it welcomes institutional set-ups and soft-law instruments (e.g., the AI Pact) designed to structure oversight and accountability around model and system risks, consistent with human-in-the-loop expectations under the risk-based scheme it presents.

Last modified 5 February 2026



Laws specifically addressing AI have not yet been introduced in Hong Kong. 

Human oversight is:

  • one of the twelve ethical AI principles set out in the Ethical AI Framework, which says that the degree of human intervention required as part of an AI application's decision-making or operations should be dictated by the perceived severity of the ethical issues involved;
  • one of the seven ethical AI principles set out in the Guidance. The human oversight principle in the Guidance specifies that users of AI systems should be able to take informed and autonomous actions regarding the recommendations or decisions of those systems, and that the level of human involvement should be proportionate to the risks and impact of using them. In particular, human oversight is required for higher-risk uses of AI; and
  • a key measure for mitigating the risks of using AI according to the Model Framework, proportionate to the risk and critical for AI systems with a higher risk profile. The Model Framework stresses that human oversight should not merely be a gesture: human reviewers should be able to properly assess, interpret, exercise discretion over, and veto AI recommendations, or flag problematic content or decisions. It is clear that, "ultimately, human actors should be held accountable for the decisions and output made by AI".

The GenAI Guideline specifies that human oversight is critical to the trust and accountability framework of generative AI systems, and that the appropriate degree of human oversight should in each case be based on the impact of the different stages involved (e.g., data collection, model training, and output generation), with a stronger need for human oversight where the impact is greater. Human oversight is required for "high-risk" applications deployed in critical infrastructure contexts.
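The proportionality idea in the GenAI Guideline might be sketched as a simple mapping from assessed impact to an oversight mode. The tiers, names, and the critical-infrastructure override below are hypothetical illustrations, not drawn from the Guideline itself:

```python
# Hypothetical oversight tiers; the Guideline does not prescribe this mapping.
OVERSIGHT_BY_IMPACT = {
    "low": "spot-check",         # periodic sampling of outputs
    "medium": "review-on-flag",  # human review when the system flags uncertainty
    "high": "approve-each",      # a human must approve every output
}

def required_oversight(impact: str, critical_infrastructure: bool = False) -> str:
    """Pick an oversight mode proportionate to assessed impact.

    Critical-infrastructure deployments are always treated as high impact,
    reflecting the Guideline's requirement of human oversight for "high-risk"
    applications in those contexts.
    """
    if critical_infrastructure:
        return OVERSIGHT_BY_IMPACT["high"]
    return OVERSIGHT_BY_IMPACT[impact]
```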

Last modified 25 July 2025




Currently, there are no laws in Japan that specifically address this point.

Last modified 31 July 2025

Guidance on human oversight in Latvia

The Law on the Artificial Intelligence Centre establishes the Artificial Intelligence Centre, with the aim of enhancing national competitiveness and societal well-being by developing public-private partnerships in the field of AI and promoting public engagement. The Centre will foster the development, application and management of responsible and trustworthy AI solutions, identify and manage associated risks, and enhance AI skills within society and public administration. The Centre’s board will fulfil supervisory functions, including oversight of the Centre’s work, as well as advisory functions.

Last modified 14 July 2025

Human oversight in Malta

Human autonomy is another ethical AI principle set out by the National Framework. This principle aims to ensure humans maintain complete and effective control over their own decisions and actions. This principle includes establishing appropriate human oversight in processes carried out by AI systems.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in Mauritius yet. 

Last modified 26 June 2025

Laws specifically addressing AI have not been introduced in Mexico yet. However, Article 15 of the AI Bill states that the objective of human oversight shall be to prevent or minimise risks to health, safety or fundamental rights that may arise from the use of AI systems.

Last modified 29 July 2025

Laws specifically addressing AI have not been introduced in New Zealand yet, so there are no statutory human oversight requirements. However, the OPC's AI Guidance stresses the importance of developing processes for the human review of AI decisions, and of empowering and adequately resourcing the people conducting these reviews. Both the OPC AI Guidance and the AI Guidance for Business highlight the importance of human oversight where automated decision-making directly affects people's outcomes. Both also warn that having a "human in the loop" may not be sufficient to uphold the accuracy principle, given the risk of automation blindness. On this topic, the AI Guidance for Business concludes with a reminder that businesses are responsible for their decisions, regardless of the supporting technology used.

Last modified 14 July 2025

Laws specifically addressing AI have not been introduced in Nigeria yet.

Last modified 17 June 2025

The content on Human oversight in the European Union applies in Norway.

Last modified 9 October 2025

Laws specifically addressing human oversight in relation to AI have not been introduced in Peru yet.

Last modified 20 July 2025

Laws specifically addressing AI have not yet been introduced in Singapore. 

The Model Framework outlines a detailed approach for determining the appropriate extent of human oversight in AI-augmented decision-making, based on the probability and severity of the harm to an individual (or organisation) that may result from a decision an organisation makes about that individual (or organisation).
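
The probability-severity approach can be sketched as a small decision table. The three oversight labels (human-in-the-loop, human-over-the-loop, human-out-of-the-loop) appear in the Model Framework; the particular cell assignments below are an illustrative assumption, not the Framework's prescribed matrix.

```python
def oversight_level(severity_of_harm: str, probability_of_harm: str) -> str:
    """Suggest a degree of human involvement from a harm assessment.

    Inputs are "high" or "low"; the mapping is illustrative only.
    """
    high_severity = severity_of_harm == "high"
    high_probability = probability_of_harm == "high"
    if high_severity and high_probability:
        return "human-in-the-loop"      # a person reviews and approves each decision
    if not high_severity and not high_probability:
        return "human-out-of-the-loop"  # fully automated, with periodic review
    return "human-over-the-loop"        # a person supervises and can intervene

print(oversight_level("high", "high"))  # human-in-the-loop
```

In practice an organisation would calibrate this matrix to its own context, as the Model Framework contemplates.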

Last modified 28 July 2025

The AI Act does not specifically mandate human oversight as an obligation. However, it does require certain safety and reliability measures in relation to high-impact AI. For AI systems where the cumulative compute used for training surpasses a certain threshold (to be set by Presidential Decree), the AI Act requires AI business operators to identify, assess and mitigate risks throughout the AI life cycle, and to establish a risk management system (Article 32). While the specific details of these safety and reliability measures and risk management systems have not yet been stipulated, a certain level of human oversight and monitoring may be introduced in the future through Presidential Decrees or separate regulations. 

Last modified 29 July 2025

Human oversight in Spain

The Spanish government, in collaboration with the European Commission, has launched a controlled testing environment to assess the compliance of certain AI systems that may pose risks to security, health and fundamental rights.

The objective of the testing procedure is for participating AI providers to implement and demonstrate compliance with key requirements, including risk management, data quality, transparency, accuracy, cybersecurity and human oversight of AI systems.

  • To participate in the testing procedure, the Spanish government will publish calls for participation, to which interested AI providers will submit their applications.
  • Once accepted, the participating AI providers, with guidance and personalised support from the competent authority, will carry out the necessary actions to meet the requirements specified in Article 11 of the AI Testing Royal Decree.

After completing the required tests, the AI provider will conduct a self-assessment to ensure the implementation is consistent with the established guidelines, and the competent authority will examine the documents submitted by the AI provider, focusing in particular on the quality management system, technical documentation and post-market monitoring plan. This review will ensure that all the necessary requirements have been met and properly documented. Once the testing phase is completed, the AI provider will need to implement a post-market monitoring system to ensure ongoing compliance with the established requirements and the continued safe operation of the AI system in the market.

Entities that have successfully completed all stages of the controlled testing environment will receive a certificate of participation. Additionally, they will receive an evaluation report summarizing the results of their testing activities. This certification will confirm that the AI systems meet the required standards and have undergone the necessary assessments.

Last modified 21 July 2025

Laws specifically addressing AI have not been introduced in Thailand yet.

Last modified 25 July 2025

Laws specifically addressing AI have not been introduced in Turkey yet. NAIS sets out an ‘AI Principle’ of ‘Responsibility and Accountability’, as follows (page 61 of NAIS):

“Person(s) and organizations involved in the lifecycle of AI systems are ultimately responsible for the proper functioning of AI systems and the application of AI principles. In line with their roles in the lifecycle, the context of the system and technological possibilities, these actors and their ethical responsibilities should be able to be related to their liabilities regarding their decisions and actions. Accountability should be appropriately distributed among actors. Necessary mechanisms for human audit, impact analysis and risk assessment should be established. Technical and organizational design should guarantee auditing and traceability of compliance with AI values. Audit data should be available for third parties to research and review behavior patterns of the AI system, in accordance with their mandate.”
Last modified 30 July 2025

There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no binding obligations in relation to human oversight).

However, the AI Ethics Guide requires that rules and standards should be adopted to ensure effective human control over decisions. The Guide also contains a principle of accountability which provides that:

  • Accountability for the outcomes of an AI System lies not with the system itself but is apportioned between those who design, develop and deploy it.
  • Developers should make efforts to mitigate the risks inherent in the systems they design.
  • AI Systems should have built-in appeals procedures whereby users can challenge significant decisions.
  • AI Systems should be developed by diverse teams which include experts in the area in which the system will be deployed.
  • AI Systems should be subject to external audit and decision quality assurance.

The DIFC’s Data Protection Regulations also provide that AI Systems must have mechanisms in place to ensure responsibility and accountability for outcomes. Such mechanisms may include internal governance and control frameworks for regularly monitoring the AI System, its processes and projects, or regular auditing by an external organisation, enabling the assessment of algorithms, data and design processes.

Last modified 4 August 2025

There is no single statute addressing AI in the UK yet. Existing principles under, for example, the Equality Act 2010, the Data Protection Act 2018, the UK GDPR and, now, the Data Use and Access Act must therefore be considered. As noted under Law / Proposed Law, the Data Use and Access Act has resulted in a more permissive approach to automated decision-making, allowing decisions to be made in reliance on legitimate interests, provided safeguards are in place (unless special category data is involved). Please see our guide to Data Protection Laws of the World for a summary of the new Articles 22A-22D of the UK GDPR. 

Last modified 23 February 2026

Human oversight of AI systems is not federally mandated in the U.S., but some states have passed related laws, particularly for high-risk applications.

At the federal level, the NIST AI RMF encourages organizations to implement human oversight mechanisms throughout the AI lifecycle. It defines oversight as the ability for humans to understand, monitor, and, when necessary, intervene in AI system operations. While not legally binding, the framework is widely adopted across industries and referenced in agency guidance.

Since the Biden Administration, federal enforcement agencies such as the FTC and SEC have continued to stress the value of human accountability, particularly where AI is used to make decisions that affect consumers or investors. However, these expectations are grounded in broader legal principles rather than AI-specific statutes.

At the state level, human oversight is more explicitly addressed in certain laws, such as:

  • Colorado’s AI Act, which requires deployers of high-risk AI systems to implement appropriate levels of human oversight to ensure the system operates as intended and does not result in algorithmic discrimination
  • California’s Physicians Make Decisions Act, which prohibits healthcare coverage denials made on the sole basis of an AI or algorithmic tool
  • New York City’s Local Law 144, which mandates that employers using automated employment decision tools conduct bias audits and provide human-readable explanations of how such tools may influence hiring decisions
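
The bias audits mandated by Local Law 144 centre on "impact ratios": each group's selection rate relative to that of the most-selected group. The following is a minimal sketch of that calculation; the function name and data are hypothetical, and the law's implementing rules define the required metrics in full.

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Impact ratio per group: the group's selection rate divided by the
    selection rate of the most-selected group (1.0 for that group itself)."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Hypothetical audit data: fraction of applicants selected, by group.
print(impact_ratios({"group_a": 0.50, "group_b": 0.25}))
# {'group_a': 1.0, 'group_b': 0.5}
```

A markedly low ratio for a group is the kind of signal an auditor would flag for further review.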

These developments reflect a growing belief that human oversight may be a key to ensuring accountability, safety, and fairness in AI use.

Last modified 10 March 2026
