Artificial Intelligence in Australia
Human oversight
Regulatory guidance / voluntary codes in Australia
On 23 May 2025, the Australian Signals Directorate's Australian Cyber Security Centre, together with its counterparts in the US, UK and New Zealand, released guidance on best practices for AI Data Security. The guidance sets out the key data security risks in AI use and provides best practice guidelines, including, but not limited to, sourcing reliable data and tracking data provenance, verifying and maintaining data integrity during storage and transport, and encrypting data.
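Purely as an illustration of two of those practices (tracking provenance and verifying integrity), the sketch below records and later re-checks SHA-256 checksums for a dataset. It is not taken from the guidance itself; the directory and file names are hypothetical.

```python
# Illustrative only: one way to track data provenance and verify integrity
# with checksums, in the spirit of the joint AI Data Security guidance.
# The guidance does not prescribe this code; paths are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    """Record file -> checksum at ingestion time as a provenance manifest."""
    entries = {str(p.relative_to(data_dir)): sha256_of(p)
               for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest.write_text(json.dumps(entries, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> list[str]:
    """Return files that are missing or whose checksum no longer matches."""
    recorded = json.loads(manifest.read_text())
    return [name for name, checksum in recorded.items()
            if not (data_dir / name).is_file()
            or sha256_of(data_dir / name) != checksum]

if __name__ == "__main__":
    data_dir = Path("training_data")           # hypothetical dataset directory
    manifest = Path("data_manifest.json")
    write_manifest(data_dir, manifest)          # at ingestion
    bad = verify_manifest(data_dir, manifest)   # before training / after transport
    print("integrity check failed for:", bad or "nothing")
```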
In March 2025, the Commonwealth Ombudsman released an Automated Decision Making Better Practice Guide. The Guide is intended to inform the selection, adoption and use of AI by government agencies to ensure their compliance with Australian laws, including administrative law. Appendix A of the Guide features a comprehensive checklist which may assist government and non-government entities with decision-making about their use of AI.
Also in March 2025, the Australian Government Digital Transformation Agency released AI and Cyber Risk model clauses for procuring or developing AI models.
On 21 October 2024, the Office of the Australian Information Commissioner (OAIC), the national regulator for privacy and freedom of information, released two guidance documents relating to AI:
- Guidance on privacy and the use of commercially available AI products – This guidance document is intended to assist organisations deploying and using commercially available AI systems to comply with their privacy obligations. It specifies that privacy obligations apply to any personal information input into an AI system, as well as to any output generated by the AI system where that output contains personal information. The OAIC also recommends that no personal information be entered into publicly available generative AI tools (see the illustrative sketch after this list).
- Guidance on privacy and developing and training generative AI models – This guidance document recommends that AI developers take reasonable steps to ensure accuracy in generative AI models. With respect to privacy obligations, it notes that personal information includes inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes). It also reminds developers that data which is publicly available or accessible may not automatically be used lawfully to train or fine-tune generative AI models or systems.
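The sketch below makes the OAIC's recommendation concrete in a deliberately naive way: screening prompts for obvious personal information before they are sent to a public generative AI tool. The regular expressions and function names are our own hypothetical examples, not from the OAIC guidance, and real PII detection requires far more than pattern matching.

```python
# Naive illustration of the OAIC recommendation that no personal information
# be entered into publicly available generative AI tools. The patterns below
# are hypothetical and catch only the most obvious identifiers.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "au_phone": re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"),
    "tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for a candidate prompt."""
    hits = [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]
    return (not hits, hits)

allowed, hits = screen_prompt("Summarise the complaint from jane@example.com")
if not allowed:
    print(f"Blocked: prompt appears to contain personal information ({', '.join(hits)})")
```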
In September 2024, Australia's Department of Industry, Science and Resources published a Proposal Paper for introducing mandatory guardrails for AI in high-risk settings (Proposal Paper introducing mandatory guardrails). The paper identifies two broad categories of high-risk AI, namely (1) AI systems with known or foreseeable proposed uses that are considered high risk, and (2) advanced, highly capable general-purpose AI (GPAI) models that are capable of being used, or adapted for use, for a variety of purposes, whether used directly or integrated into other systems, where all possible applications and risks cannot be foreseen.
With respect to the first category listed above, in designating an AI system as high-risk, organisations must consider the risk of adverse impacts to:
- an individual's human rights, health or safety, and legal rights (e.g., legal effects, defamation or similarly significant effects on an individual);
- groups of individuals or collective rights of cultural groups; and
- the broader Australian economy, society, environment and rule of law,
as well as the severity and extent of the adverse impacts outlined above.
With respect to AI designated as high-risk, the Proposal Paper introducing mandatory guardrails sets out the following proposed mandatory guardrails for organisations developing or deploying high-risk AI systems (page 35):
- "Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
- Establish and implement a risk management process to identify and mitigate risks;
- Protect AI systems, and implement data governance measures to manage data quality and provenance;
- Test AI models and systems to evaluate model performance and monitor the system once deployed;
- Enable human control or intervention in an AI system to achieve meaningful human oversight;
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI generated content;
- Establish processes for people impacted by AI systems to challenge use or outcomes;
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
- Keep and maintain records to allow third parties to assess compliance with guardrails; and
- Undertake conformity assessments to demonstrate and certify compliance with guardrails."
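Guardrail 9 contemplates records that third parties can audit. As a rough sketch of what a machine-readable compliance record for a high-risk system might look like (the schema and field names are invented for illustration; the Proposal Paper prescribes no format):

```python
# Hypothetical record-keeping structure in the spirit of guardrail 9
# ("keep and maintain records to allow third parties to assess compliance").
# All field names are illustrative, not drawn from the Proposal Paper.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class GuardrailRecord:
    guardrail: int                 # 1-10, per the Proposal Paper's list
    description: str               # what was done to satisfy it
    evidence_uri: str              # pointer to test reports, assessments, sign-offs
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class AISystemRegisterEntry:
    system_name: str
    deployer: str
    high_risk: bool
    records: list[GuardrailRecord] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

entry = AISystemRegisterEntry("resume-screener", "Example Pty Ltd", high_risk=True)
entry.records.append(GuardrailRecord(
    guardrail=4,
    description="Pre-deployment bias and performance evaluation",
    evidence_uri="https://example.com/reports/eval-2025-q1",  # placeholder
))
print(entry.to_json())
```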
The definition of high-risk AI and the guardrails are expected to be refined based on feedback provided by Australian stakeholders to the Proposal Paper introducing mandatory guardrails.
On 5 September 2024, the Australian Government released the Voluntary AI Safety Standard, which sets out substantially similar guardrails to those in the Proposal Paper introducing mandatory guardrails, with the exception of guardrail 10, which states:
"Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness."
Whereas the Proposal Paper introducing mandatory guardrails applies to high-risk AI, the Voluntary AI Safety Standard sets out voluntary guidelines for developers and deployers of AI to, among other things, protect people and communities from harm, avoid reputational and financial risks to their organisations, increase organisational and community trust and confidence in AI systems, services and products, and align with legal obligations and expectations in Australia.
On 1 September 2024, the Policy for the Responsible Use of AI in Government (Policy) came into effect, aiming to empower the Australian Government to safely, ethically and responsibly engage with AI, strengthen public trust in the government's use of AI, and adapt to technological and policy changes over time.
In particular, the Policy requires government agencies to:
- designate accountability for compliance with the Policy to certain public officials; and
- publish and keep updated an AI transparency statement.
Additional recommendations include fundamental AI training for all staff, additional training for staff with roles or responsibilities in connection with AI, understanding and recording where and how AI is being used within agencies, integrating AI considerations into existing frameworks, participating in the Australian Government's AI assurance framework, monitoring AI use cases and keeping up to date with policy changes.
On 1 November 2023, Australia became a signatory to the Bletchley Declaration, which establishes a collective understanding among 28 countries and the European Union of the opportunities and risks posed by AI.
In November 2019, the Australian Government published its AI Ethics Principles (Ethics Principles), designed to ensure that AI is safe, secure and reliable and to:
- help achieve safer, more reliable and fairer outcomes for all Australians;
- reduce the risk of negative impact on those affected by AI applications; and
- assist businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
Definitions in Australia
Information not provided.
Prohibited activities in Australia
Information not provided.
Controls on generative AI in Australia
Information not provided.
User transparency in Australia
Information not provided.
Fairness / unlawful bias in Australia
Information not provided.
Human oversight in the European Union
Human oversight is crucial for preventing and mitigating risks associated with an AI system's operation. Providers must also ensure that operators are adequately trained to oversee the AI system, understand its functionalities, and respond appropriately to any issues. Effective human oversight enhances the safety and reliability of high-risk AI systems, ensuring they operate within acceptable parameters and can be controlled in case of unexpected behaviour or malfunctions.
Article 14 of the EU AI Act deals with human oversight, stating that providers must implement measures to ensure effective human oversight of high-risk AI systems. This involves designing the system with mechanisms that allow human operators to monitor, intervene, and deactivate the AI system if necessary. Providers of high-risk AI systems are required to ensure that systems falling under their responsibility are compliant with this requirement (Article 16(a)) and to include the human oversight measures within the "instructions for use" for the high-risk AI system (Article 13(3)(d)).
In addition, deployers of high-risk AI systems are required to comply with the provider's instructions for use and to assign human oversight to persons who have the necessary competence, training and authority, as well as the necessary support (Article 26(1) and (2)).
Finally, recital 27 of the EU AI Act sets out seven principles for trustworthy AI, including human agency and oversight.
This means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans.
Human oversight in Brazil
Page 5 of the Summary of the Brazilian Artificial Intelligence Strategy notes that the Brazilian AI Strategy frequently states that systems must be designed in a way that respects human rights, democratic values and diversity, and must include appropriate safeguards enabling human intervention, whenever necessary, to guarantee a just society.
Human oversight in Canada
The Voluntary Code specifies under its Human Oversight and Monitoring principle that signatories should (with varying levels of obligation, as indicated, depending on whether a signatory is a developer or a manager of a generative AI system and on whether the system is available for public use):
- monitor the operation of the system for harmful uses or impacts after it is made available, including through the use of third-party feedback channels, and inform the developer and/or implement usage controls as needed to mitigate harm; and
- maintain a database of reported incidents after deployment, and provide updates as needed to ensure effective mitigation measures.
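The "database of reported incidents" in the second bullet could be as simple as a small relational table. A minimal sketch using Python's standard-library sqlite3 follows; the schema and example entries are hypothetical, not drawn from the Voluntary Code:

```python
# Illustrative incident register for a deployed generative AI system, in the
# spirit of the Voluntary Code's post-deployment monitoring principle. The
# schema is hypothetical; the Code does not prescribe one.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("ai_incidents.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS incidents (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        reported_at TEXT NOT NULL,
        source TEXT NOT NULL,          -- e.g. user report, third-party channel
        description TEXT NOT NULL,
        mitigation TEXT,               -- updated as measures are applied
        resolved INTEGER DEFAULT 0
    )
""")

def report_incident(source: str, description: str) -> int:
    """Log a newly reported incident and return its row id."""
    cur = conn.execute(
        "INSERT INTO incidents (reported_at, source, description) VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), source, description))
    conn.commit()
    return cur.lastrowid

def record_mitigation(incident_id: int, mitigation: str) -> None:
    """Attach a mitigation measure to an incident and mark it resolved."""
    conn.execute(
        "UPDATE incidents SET mitigation = ?, resolved = 1 WHERE id = ?",
        (mitigation, incident_id))
    conn.commit()

iid = report_incident("user feedback form", "Model produced harmful instructions")
record_mitigation(iid, "Added output filter rule; usage controls tightened")
```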
Human oversight in Chile
Article 4 of the Chilean AI Bill establishes the main principles applicable to AI systems, and Article 4 a) states the following:
Human intervention and monitoring
AI systems will be developed and used as a tool in the service of human beings, respecting human dignity and personal autonomy, and operated in a way that can be adequately controlled and monitored by human beings.
In addition, Article 8 f) of the Chilean AI Bill establishes the following rule applicable to High-Risk AI Systems:
Human oversight mechanisms
High-risk AI systems shall be designed and developed so that they can be supervised by natural persons technically qualified for this function as appropriate for the respective implementation scenario, and in a proportional manner with the associated risks, with the aim of preventing or minimising risks to health, safety, fundamental rights, democracy, and/or the environment, which may arise when a High-Risk AI system is used as intended or when it is likely to be misused.
Human oversight in China
The Recommendation Algorithms Provisions specify that, to comply, businesses must (amongst other things) establish and improve relevant management systems and technical measures (including for algorithm mechanism review) and employ professional staff and technical support appropriate to the scale of the algorithm recommendation service.
Guidance on human oversight in France
The CNCDH Opinion recommends implementing supervision of the AI system under a procedure that may vary with the risks of infringement of fundamental rights identified by a prior impact assessment. This process should help the AI system user maintain ongoing vigilance regarding the effects of the system, including its potential discriminatory effects.
The CNIL AI Risk Assessment provides guidance on human oversight to prevent systematic errors and misuse. It specifies that organisations must implement clear, effective and sustainable measures for human intervention, and that data controllers should establish a framework ensuring these oversight conditions are met.
The Senate Report strongly endorses the EU approach to human oversight measures and the need for governance so that AI complies with human rights, through education and oversight capacities built across society, pointing to the role of human control over system deployment and use. In its chapter on EU governance, it welcomes institutional set-ups and soft-law instruments (e.g., the AI Pact) designed to structure oversight and accountability around model and system risks, consistent with the human-in-the-loop expectations of the risk-based scheme it presents.
Human oversight in Hong Kong
Laws specifically addressing AI have not yet been introduced in Hong Kong.
Human oversight is:
- one of the twelve ethical AI principles set out in the Ethical AI Framework, which states that the degree of human intervention required in an AI application's decision-making or operations should be dictated by the perceived severity of the ethical issues involved;
- one of the seven ethical AI principles set out in the Guidance. The human oversight principle in the Guidance specifies that users of AI systems should be able to take informed and autonomous actions regarding the recommendations or decisions of those systems. The level of human involvement should be proportionate to the risks and impact of using the AI systems; in particular, human oversight is required for higher-risk uses of AI; and
- a key measure for mitigating the risks of using AI under the Model Framework, proportionate to the risk and critical for AI systems with a higher risk profile. The Model Framework stresses that human oversight should not merely be a gesture: human reviewers should be able to properly assess, interpret, exercise discretion over, and veto AI recommendations, and flag problematic content or decisions. It is clear that, "ultimately, human actors should be held accountable for the decisions and output made by AI".
The GenAI Guideline specifies that human oversight is critical to the trust and accountability framework for generative AI systems, and that the appropriate degree of human oversight should in each case be based on the impact of the different stages (e.g., data collection, model training and output generation), with a stronger need for human oversight where the impact is greater. Human oversight is required for "high-risk" applications deployed in critical infrastructure contexts.
Human oversight in Japan
Currently, there are no laws in Japan that specifically address this point.
Guidance on human oversight in Latvia
The Law on the Artificial Intelligence Centre establishes the Artificial Intelligence Centre with the aim of enhancing national competitiveness and societal well-being by developing public-private partnerships in the field of AI and promoting public engagement. The Centre will foster the development, application and management of responsible and trustworthy AI solutions, identify and manage associated risks, and enhance AI skills within society and public administration. The Centre's board will fulfil supervisory functions, including oversight of the board's work, as well as advisory functions.
Human oversight in Malta
Human autonomy is among the ethical AI principles set out in the National Framework. This principle aims to ensure that humans maintain complete and effective control over their own decisions and actions, and it includes establishing appropriate human oversight of processes carried out by AI systems.
Human oversight in Mauritius
Laws specifically addressing AI have not yet been introduced in Mauritius.
Human oversight in Mexico
Laws specifically addressing AI have not yet been introduced in Mexico. However, Article 15 of the AI Bill states that the objective of human oversight shall be to prevent or minimise risks to health, safety or fundamental rights that may arise from the use of AI systems.
Human oversight in New Zealand
Laws specifically addressing AI have not yet been introduced in New Zealand, so there are no statutory human oversight requirements. However, the OPC's AI Guidance stresses the importance of developing processes for human review of AI decisions, and of empowering and adequately resourcing the people conducting those reviews. Both the OPC AI Guidance and the AI Guidance for Business highlight the importance of human oversight where automated decision-making directly affects outcomes for people. Both also warn that having a "human in the loop" may not be sufficient to uphold the accuracy principle, given the risk of automation blindness. On this topic, the AI Guidance for Business concludes with a reminder that businesses are responsible for their decisions, regardless of the supporting technology used.
Human oversight in Nigeria
Laws specifically addressing AI have not yet been introduced in Nigeria.
Human oversight in Norway
The content on Human oversight in the European Union applies in Norway.
Human oversight in Peru
Laws specifically addressing human oversight in relation to AI have not yet been introduced in Peru.
Human oversight in Singapore
Laws specifically addressing AI have not yet been introduced in Singapore.
The Model Framework outlines a detailed framework for determining the appropriate extent of human oversight in AI-augmented decision-making, based on the probability and severity of harm to an individual (or organisation) as a result of the decision made by an organisation about that individual (or organisation).
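The Model Framework pairs this probability/severity assessment with three oversight models: human-in-the-loop, human-over-the-loop and human-out-of-the-loop. As a rough sketch only (the framework leaves the calibration of "high" and "low" to each organisation, so the thresholds below are illustrative assumptions rather than the framework's own values):

```python
from enum import Enum

class Oversight(Enum):
    """The three degrees of human involvement described in the Model Framework."""
    HUMAN_IN_THE_LOOP = "human makes the final decision; AI only recommends"
    HUMAN_OVER_THE_LOOP = "human supervises and may intervene or override"
    HUMAN_OUT_OF_THE_LOOP = "AI decides without human involvement"

def recommended_oversight(probability_of_harm: str, severity_of_harm: str) -> Oversight:
    # Illustrative calibration only: the Model Framework leaves the meaning
    # of "high" and "low" to each organisation's own risk assessment.
    if severity_of_harm == "high":
        return Oversight.HUMAN_IN_THE_LOOP    # severe potential harm: human decides
    if probability_of_harm == "high":
        return Oversight.HUMAN_OVER_THE_LOOP  # frequent, less severe harm: supervise
    return Oversight.HUMAN_OUT_OF_THE_LOOP    # low on both axes

# Hypothetical example: a decision with low probability of harm but high severity
print(recommended_oversight("low", "high"))  # Oversight.HUMAN_IN_THE_LOOP
```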
Human oversight in South Korea
The AI Act does not specifically mandate human oversight as an obligation. However, it does require certain safety and reliability measures in relation to high-impact AI. For AI systems where the cumulative compute used for training exceeds a threshold (to be prescribed by Presidential Decree), the AI Act requires AI business operators to identify, assess and mitigate risks throughout the AI life cycle and to establish a risk management system (Article 32). While the specific details of these safety and reliability measures and risk management systems have not yet been stipulated, a certain level of human oversight and monitoring may be introduced in the future through Presidential Decrees or separate regulations.
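Pending the Decree, the Article 32 trigger can only be sketched with a placeholder. A minimal illustration (all names hypothetical; the actual threshold is deliberately left unset rather than invented):

```python
# Illustrative sketch only. The compute threshold that triggers Article 32
# has not yet been set and will be prescribed by Presidential Decree, so it
# is left as a placeholder here rather than filled with an invented figure.
COMPUTE_THRESHOLD_FLOPS = None  # to be prescribed by Presidential Decree

def article_32_applies(cumulative_training_flops: float) -> bool:
    """True if cumulative training compute meets the (future) threshold,
    triggering the duty to identify, assess and mitigate risks across the
    AI life cycle and to operate a risk management system."""
    if COMPUTE_THRESHOLD_FLOPS is None:
        raise ValueError("threshold not yet prescribed by Presidential Decree")
    return cumulative_training_flops >= COMPUTE_THRESHOLD_FLOPS
```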
Human oversight in Spain
The Spanish government, in collaboration with the European Commission, has launched a controlled testing environment to assess the compliance of certain AI systems that may pose risks to security, health and fundamental rights.
The objective of the testing procedure is for participating AI providers to implement and demonstrate compliance with key requirements, including risk management, data quality, transparency, accuracy, cybersecurity and human oversight of AI systems.
- To participate in the testing procedure, interested AI providers must submit applications in response to calls for participation published by the Spanish government.
- Once accepted, participating AI providers, with guidance and personalised support from the competent authority, will carry out the actions necessary to meet the requirements specified in Article 11 of the AI Testing Royal Decree.
After completing the required tests, the AI provider will conduct a self-assessment to confirm that its implementation is consistent with the established guidelines, and the competent authority will examine the documents submitted by the provider, focusing in particular on the quality management system, the technical documentation and the post-market monitoring plan. This review ensures that all necessary requirements have been met and properly documented. Once the testing phase is complete, the AI provider must implement a post-market monitoring system to ensure ongoing compliance with the established requirements and the continued safe operation of the AI system on the market.
Entities that have successfully completed all stages of the controlled testing environment will receive a certificate of participation, together with an evaluation report summarising the results of their testing activities. This certification confirms that the AI systems meet the required standards and have undergone the necessary assessments.
Human oversight in Thailand
Laws specifically addressing AI have not yet been introduced in Thailand.
Human oversight in Turkey
Laws specifically addressing AI have not yet been introduced in Turkey. However, NAIS sets out an ‘AI Principle’ of ‘Responsibility and Accountability’, as follows (page 61 of NAIS):
“Person(s) and organizations involved in the lifecycle of AI systems are ultimately responsible for the proper functioning of AI systems and the application of AI principles. In line with their roles in the lifecycle, the context of the system and technological possibilities, these actors and their ethical responsibilities should be able to be related to their liabilities regarding their decisions and actions. Accountability should be appropriately distributed among actors. Necessary mechanisms for human audit, impact analysis and risk assessment should be established. Technical and organizational design should guarantee auditing and traceability of compliance with AI values. Audit data should be available for third parties to research and review behavior patterns of the AI system, in accordance with their mandate.”
Human oversight in the UAE
There is no unified federal law or emirate-level law in the UAE that has a primary focus on regulating AI (and therefore no binding obligations in relation to human oversight).
However, the AI Ethics Guide requires that rules and standards be adopted to ensure effective human control over decisions. The Guide also contains a principle of accountability, which provides that:
- Accountability for the outcomes of an AI System lies not with the system itself but is apportioned between those who design, develop and deploy it.
- Developers should make efforts to mitigate the risks inherent in the systems they design.
- AI Systems should have built-in appeals procedures whereby users can challenge significant decisions.
- AI Systems should be developed by diverse teams which include experts in the area in which the system will be deployed.
- AI Systems should be subject to external audit and decision quality assurance.
The DIFC’s Data Protection Regulations also provide that AI Systems must have mechanisms in place to ensure responsibility and accountability for outcomes. Such mechanisms may include internal governance and control frameworks for regularly monitoring the AI System and its associated processes and projects, or regular audits by external organisations, enabling the assessment of algorithms, data and design processes.
Human oversight in the UK
There is no single statute addressing AI in the UK yet, so existing principles under, for example, the Equality Act 2010, the Data Protection Act 2018 and the UK GDPR, and now the Data Use and Access Act, must be considered. As noted under Law / Proposed Law, the Data Use and Access Act has introduced a more permissive approach to automated decision-making, allowing such decisions to be made in reliance on legitimate interests, provided safeguards are in place (unless special category data is involved). Please see our guide to Data Protection Laws of the World for a summary of the new Articles 22A-22D of the UK GDPR.
Human oversight in the US
Human oversight of AI systems is not federally mandated in the U.S., but some states have passed related laws, particularly for high-risk applications.
At the federal level, the NIST AI RMF encourages organizations to implement human oversight mechanisms throughout the AI lifecycle. It defines oversight as the ability for humans to understand, monitor, and, when necessary, intervene in AI system operations. While not legally binding, the framework is widely adopted across industries and referenced in agency guidance.
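To make "monitor and intervene" concrete, a minimal human-in-the-loop gate might look like the sketch below. The routing policy (a confidence floor plus escalation of adverse outcomes) and every name in it are illustrative assumptions, not anything the RMF prescribes:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    outcome: str       # e.g. "approve" or "deny"
    confidence: float  # model's self-reported confidence, 0..1

def oversee(decision: Decision,
            human_review: Callable[[Decision], str],
            confidence_floor: float = 0.9) -> str:
    """Route low-confidence or adverse model outputs to a human reviewer,
    whose determination is final. The routing rule here is an illustrative
    policy, not a NIST requirement."""
    if decision.confidence < confidence_floor or decision.outcome == "deny":
        return human_review(decision)  # the human may uphold or override
    return decision.outcome

# Hypothetical example: a reviewer who overturns this particular denial
result = oversee(Decision("applicant-42", "deny", 0.97),
                 human_review=lambda d: "approve")
print(result)  # "approve" (the human override is the final outcome)
```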
Beginning under the Biden Administration, some federal enforcement agencies, such as the FTC and SEC, have continued to stress the value of human accountability, particularly where AI is used to make decisions that affect consumers or investors. These expectations, however, are grounded in broader legal principles rather than AI-specific statutes.
At the state level, human oversight is more explicitly addressed in certain laws, such as:
- Colorado’s AI Act, which requires deployers of high-risk AI systems to implement appropriate levels of human oversight to ensure the system operates as intended and does not result in algorithmic discrimination;
- California’s Physicians Make Decisions Act, which prohibits healthcare coverage denials made on the sole basis of an AI or algorithmic tool; and
- New York City’s Local Law 144, which mandates that employers using automated employment decision tools have those tools independently audited for bias, publish a summary of the audit results, and notify candidates that such a tool will be used in assessing them (a simplified version of the audit’s impact-ratio calculation is sketched after this list).
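The arithmetic behind a Local Law 144 bias audit is simple enough to sketch. For selection-type tools, the implementing rules have the auditor compute each demographic category's selection rate and divide it by the highest category's rate. The sketch below assumes a simplified, single-attribute grouping and hypothetical data:

```python
from collections import Counter

def impact_ratios(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per category divided by the highest category's rate,
    the headline metric in Local Law 144 bias audits of selection-type
    tools. Input format and grouping are simplified for illustration."""
    totals: Counter = Counter()
    selected: Counter = Counter()
    for category, was_selected in outcomes:
        totals[category] += 1
        selected[category] += was_selected
    rates = {c: selected[c] / totals[c] for c in totals}
    top_rate = max(rates.values())
    return {c: rate / top_rate for c, rate in rates.items()}

# Hypothetical data: 2 of 4 candidates in group A advanced, 1 of 4 in group B
data = [("A", True), ("A", True), ("A", False), ("A", False),
        ("B", True), ("B", False), ("B", False), ("B", False)]
print(impact_ratios(data))  # {'A': 1.0, 'B': 0.5}
```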
These developments reflect a growing belief that human oversight may be key to ensuring accountability, safety and fairness in AI use.