Artificial Intelligence in Australia
High-risk AI
Regulatory guidance / voluntary codes in Australia
On 23 May 2025, the Australian Signals Directorate's Australian Cyber Security Centre, together with its counterparts in the US, UK and New Zealand, released guidance on best practices for AI data security. The guidance sets out key data security risks in AI use and provides a list of best practice guidelines, including, but not limited to, sourcing reliable data and tracking data provenance, verifying and maintaining data integrity during storage and transport, and encrypting data.
In March 2025, the Commonwealth Ombudsman released an Automated Decision Making Better Practice Guide. The Guide is intended to inform the selection, adoption and use of AI by government agencies to ensure their compliance with Australian laws, including administrative law. Appendix A of the Guide features a comprehensive checklist which may assist government and non-government entities with decision making surrounding their use of AI.
Also in March 2025, the Australian Government Digital Transformation Agency released AI and Cyber Risk model clauses for procuring or developing AI models.
On 21 October 2024, the Office of the Australian Information Commissioner (OAIC), the national regulator for privacy and freedom of information, released two guidance documents relating to AI:
- Guidance on privacy and the use of commercially available AI products – This guidance document is intended to assist organisations deploying and using commercially available AI systems in complying with their privacy obligations. The guidance document specifies that privacy obligations apply to any personal information input into an AI system and to any output generated by the AI system that contains personal information. The OAIC also recommends that no personal information be entered into publicly available generative AI tools.
- Guidance on privacy and developing and training generative AI models – This guidance document recommends that AI developers take reasonable steps to ensure accuracy in generative AI models. With respect to privacy obligations, it notes that personal information includes inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes). In addition, this guidance document reminds developers that publicly available or accessible data may not automatically be legally used to train or fine-tune generative AI models or systems.
In September 2024, Australia's Department of Industry, Science and Resources published a Proposal Paper for introducing mandatory guardrails for AI in high-risk settings (Proposal Paper introducing mandatory guardrails). The paper identifies two broad categories of high-risk AI: (1) AI systems with known or foreseeable uses that are considered to be high risk; and (2) advanced, highly capable general-purpose AI (GPAI) models that are capable of being used, or adapted for use, for a variety of purposes, both directly and through integration into other systems, where all possible applications and risks cannot be foreseen.
With respect to the first category listed above, organisations designating an AI system as high-risk must consider the risk of adverse impacts to:
- an individual's human rights, health, safety or legal rights (e.g. legal effects, defamation or similarly significant effects on an individual);
- groups of individuals or collective rights of cultural groups; and
- the broader Australian economy, society, environment and rule of law,
as well as the severity and extent of the adverse impacts outlined above.
With respect to AI designated as high-risk, the Proposal Paper introducing mandatory guardrails sets out the following proposed mandatory guardrails for organisations developing or deploying high-risk AI systems (page 35):
- "Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
- Establish and implement a risk management process to identify and mitigate risks;
- Protect AI systems, and implement data governance measures to manage data quality and provenance;
- Test AI models and systems to evaluate model performance and monitor the system once deployed;
- Enable human control or intervention in an AI system to achieve meaningful human oversight;
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI generated content;
- Establish processes for people impacted by AI systems to challenge use or outcomes;
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
- Keep and maintain records to allow third parties to assess compliance with guardrails; and
- Undertake conformity assessments to demonstrate and certify compliance with guardrails."
The definition of high-risk AI and the guardrails are expected to be refined based on feedback provided by Australian stakeholders to the Proposal Paper introducing mandatory guardrails.
On 5 September 2024, the Australian Government released the Voluntary AI Safety Standard, which sets out substantially similar guardrails to those in the Proposal Paper introducing mandatory guardrails, with the exception of guardrail 10, which states:
"Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness."
Whereas the Proposal Paper introducing mandatory guardrails applies to high-risk AI, the Voluntary AI Safety Standard sets out voluntary guidelines for developers and deployers of AI to protect people and communities from harms, avoid reputational and financial risks to their organisations, increase organisational and community trust and confidence in AI systems, services and products, and align with legal obligations and expectations in Australia, among other things.
On 1 September 2024, the Policy for the Responsible Use of AI in Government (Policy) came into effect, aiming to empower the Australian Government to safely, ethically and responsibly engage with AI, strengthen public trust in the government's use of AI, and adapt to technological and policy changes over time.
In particular, the Policy requires government agencies to:
- designate accountability for compliance with the policy to certain public officials, and
- publish and keep updated an AI transparency statement.
Additional recommendations include fundamental AI training for all staff, additional training for staff with roles or responsibilities in connection with AI, understanding and recording where and how AI is being used within agencies, integrating AI considerations into existing frameworks, participating in the Australian Government's AI assurance framework, monitoring AI use cases and keeping up to date with policy changes.
Australia has been a signatory to the Bletchley Declaration since 1 November 2023. The Declaration establishes a collective understanding between 28 countries and the European Union of the opportunities and risks posed by AI.
In November 2019, the Australian Government published its AI Ethics Principles (Ethics Principles), designed to ensure that AI is safe, secure and reliable and to:
- help achieve safer, more reliable and fairer outcomes for all Australians;
- reduce the risk of negative impact on those affected by AI applications; and
- assist businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
Definitions in Australia
Information not provided.
Prohibited activities in Australia
Information not provided.
Controls on generative AI in Australia
Information not provided.
User transparency in Australia
Information not provided.
Fairness / unlawful bias in Australia
Information not provided.
Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:
- Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
- Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
- Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
- Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
- Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
- Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
- Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda or voting behaviour.
- Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.
These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.
The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).
Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, its operators are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met; these conditions are currently difficult to interpret, and further guidelines from the Commission are expected.
Laws specifically addressing AI have not yet been introduced in Brazil. Draft Article 14 of the proposed Brazilian AI Bill specifies that high-risk AI systems include those used in critical infrastructure, education, employment, public services, financial services, emergency response, justice, healthcare, public security and migration management. The Brazilian AI Bill also proposes specific controls for high-risk AI systems, including a requirement that such systems undergo an algorithmic impact assessment, considering risks, benefits and mitigation measures, which must be updated periodically (draft Article 18).
National laws specifically addressing AI have not yet been passed in Canada.
Article 5 of the Chilean AI Bill divides AI systems into four risk classes. The second highest risk class is a 'High-Risk AI System', which is defined by Article 7 as an AI system that presents a significant risk of causing harm to health, safety, fundamental rights protected by the Constitution or the environment, as well as the rights of consumers, regardless of whether it has been introduced on the market or put into service, whether the AI system is intended to be used as a safety component of a product, or whether it is itself such a product.
Article 8 of the Chilean AI Bill establishes the rules applicable to 'High-Risk AI Systems':
- Establishment of risk management systems: High-Risk AI Systems will undergo a continuous iterative process of risk assessment to be conducted throughout the life cycle of the system, which will require periodic reviews and updates to seek its effectiveness and minimise the potential for failure or malfunction, based on the stated intended purpose.
- Data governance: High-Risk AI Systems using techniques that involve training models with data shall be subject to data governance commensurate with the context of use, as well as the intended purpose of the AI system, to the extent that this is technically feasible in accordance with the market segment or scope of application concerned. They should also seek to incorporate internationally accepted technical and data security standards.
- Technical documentation: The technical documentation attached to High-Risk AI Systems shall be intelligible and written in such a way as to demonstrate that the high-risk AI system complies with the rules set forth in the Chilean AI Bill.
- System of records: High-Risk AI Systems shall be designed and developed with capabilities to record safety information and events while in operation. These recording capabilities shall be in accordance with recognised common standards or specifications and the state of the art.
- Transparency mechanisms: High-Risk AI Systems shall be designed and developed with a level of transparency sufficient for operators and their intended users to reasonably understand the operation of the system, in accordance with its intended purpose.
- Human oversight mechanisms: High-Risk AI Systems shall be designed and developed so that they can be overseen by natural persons technically qualified for this function, as appropriate for the implementation scenario and in a manner proportionate to the associated risks, with the aim of preventing or minimising risks to health, safety, fundamental rights, democracy and/or the environment that may arise when a High-Risk AI System is used in accordance with its intended purpose or is put to reasonably foreseeable misuse.
- Accuracy, robustness and cybersecurity: High-Risk AI Systems shall be designed and developed following the principle of safety by design and by default, and shall have an adequate level of accuracy, robustness, security and cybersecurity, operating consistently, reliably and robustly throughout their life cycle.
The PRC has not yet established a formal categorisation of AI technologies based on their associated risk levels. Nevertheless, specific laws and regulations provide requirements for certain use cases and services with certain capabilities. For example:
- Each of the three pieces of regulation referred to above requires providers of services with public opinion attributes or social mobilisation capabilities to perform record-filing procedures and conduct security assessments in accordance with law.
- Under the Deep Synthesis Provisions, information generated or edited by certain deep synthesis services that may cause confusion among the public must be prominently labelled to indicate its deep synthesis status. These services include:
- smart dialogue or similar services that simulate a human to generate or edit texts;
- speech generation services (e.g. voice synthesis or voice imitation services);
- services that generate images or videos of people (e.g. face generation, face swapping, face manipulation or posture manipulation);
- immersive simulated scene generation, editing or other services;
- any other editing services that significantly alter personal identification characteristics; and
- any other services that generate or significantly alter information content.
The same labelling obligations are also reiterated in the GenAI Measures.
High-risk AI in France
In France, the ACPR AI Governance Study has highlighted the need for AI systems used in credit scoring, anti-money laundering and customer protection to be evaluated in a way that ensures (i) appropriate data processing, (ii) performance, (iii) stability and (iv) explainability. To that end, financial institutions must, from the algorithm design stage onwards, involve operational teams, provide for human verification of decisions, implement strong security measures and validation processes, and conduct regular audits.
High-risk AI in Hong Kong
Laws specifically addressing AI have not yet been introduced in Hong Kong.
The Ethical AI Framework identifies certain categories of AI application as "likely to result in high risk", for which CIO/IT Board approval is required, namely:
- the AI application has a high degree of autonomy;
- the AI application is used in a complex environment;
- the AI application uses sensitive personal data;
- the AI application processes personal data on a large scale and/or combines data sets, taking into account certain factors;
- the AI application can have a potentially sensitive impact on human beings;
- the AI application involves the evaluation or scoring of individuals;
- the AI application makes automated or complex decisions without human intervention, with significant impact and legal consequences; and
- the AI application involves systematic observation or monitoring.
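As an illustration only, the screening checklist above can be sketched as a simple any-indicator-triggers test. The indicator names are our own paraphrase of the Ethical AI Framework's categories, not official terminology, and the Framework itself calls for a qualitative, case-by-case judgement rather than a mechanical check.

```python
# Hypothetical shorthand for the Framework's high-risk indicators listed above.
HIGH_RISK_INDICATORS = [
    "high_degree_of_autonomy",
    "complex_environment",
    "uses_sensitive_personal_data",
    "large_scale_or_combined_data_sets",
    "potentially_sensitive_impact_on_humans",
    "evaluation_or_scoring_of_individuals",
    "significant_automated_decisions_without_human_intervention",
    "systematic_observation_or_monitoring",
]

def needs_cio_it_board_approval(application: dict) -> bool:
    """Flag an application as 'likely to result in high risk' (and hence
    requiring CIO/IT Board approval) if any indicator applies."""
    return any(application.get(flag, False) for flag in HIGH_RISK_INDICATORS)
```

So an application described as `{"complex_environment": True}` would be flagged, while one matching no indicator would not.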
The GenAI Guideline proposes a four-tier risk classification system, in which applications of generative AI in critical infrastructure contexts (e.g., healthcare diagnostics, autonomous vehicles) are classified as high-risk.
The Model Framework and Guidance indicate that AI systems have a higher risk profile if they are likely to have a significant impact on individuals.
Under the SFC Circular, the SFC generally considers the use of an AI language model to provide investment recommendations, investment advice or investment research to investors or clients to be a high-risk use case.
Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:
- Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
- Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
- Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
- Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
- Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
- Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
- Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts. As well as AI systems used for influencing the outcome of elections or referendum or voting behaviour.
- Biometric identification and categorisation: AI systems that perform remote biometric identification are used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.
These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks or operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.
The European Commission has the power to amend the above-mentioned categories of high-risk AI systems including to modify any existing use cases or add new ones (Article 7(1)) of the EU AI Act).
Where an AI system falls into one of the two categories above-mentioned but does not pose significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed for high-risk AI systems (except for the EU database registration). However, to benefit from such exemption, a thorough assessment must be documented and strict conditions must be met (however these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).
Currently, there are no laws in Japan that specifically address this point.
High-risk AI in Latvia
The annotation to the Law on the Artificial Intelligence Centre states that Latvia lacks the competencies and capabilities needed to manage AI risks, both in monitoring prohibited and high-risk uses of AI and in protecting against the malicious use of AI. As a solution, it proposes creating a supportive environment for the development, testing and implementation of safe and trustworthy AI solutions, including a regulatory sandbox to simplify the deployment of innovations. It is also planned to strengthen sectoral competencies, foster cooperation with international partners, develop methodologies and tools for risk management, and provide support to supervisory authorities. The Centre will serve as a platform for coordinating strategic projects and providing consultations, thereby promoting the understanding and management of AI-related risks in Latvia. The Centre has already been established and its supervising council appointed; a head of the Centre is currently being sought.
The content on High-risk AI in the European Union applies in Latvia.
Laws specifically addressing AI have not been introduced in Mauritius yet.
Laws specifically addressing AI have not been introduced in Mexico yet. However, Article 10 of the AI Bill establishes that AI systems used for the following purposes are considered to be high-risk:
- Real-time or delayed remote biometric identification of persons in private spaces.
- Management of water, electricity and gas supply.
- The allocation and determination of access to educational establishments and the assessment of students.
- The selection and recruitment of employees, as well as the assignment of tasks and the monitoring and evaluation of their performance and conduct.
- The assessment of individuals for access to benefits, services and social programmes.
- The assessment of the economic solvency of persons, or to establish their credit rating.
- The definition of priorities for the care of persons or groups of persons in emergency or disaster situations.
- Determining the risk of a person or persons committing an offence or reoffending.
- Use at any stage of criminal proceedings to investigate and interpret facts that could constitute an offence.
- The use for personalised or individualised management of migration, asylum and border control.
- Influencing the political-electoral preferences of citizens, or supplanting the voice or image of candidates or political leaders without making this explicit and undeniable.
Article 12 of the AI Bill specifies the obligations on providers of high-risk AI systems, including the following:
- To have a quality management system in place.
- To develop and disseminate the technical documentation of the AI system.
- To retain the log files automatically generated by their AI systems, when these are under their control.
- To ensure that AI systems are subject to human assessment and control procedures determined by the competent authority before being placed on the market or put into service.
Laws specifically addressing AI have not been introduced in New Zealand yet, so no AI uses are expressly specified as being high-risk. The non-binding OPC AI Guidance identifies the use of AI tools for automated decision making as being higher-risk, given the potential for the use to have direct impacts on outcomes for individuals.
Laws specifically addressing AI have not been introduced in Nigeria yet.
The content on High-risk AI in the European Union applies in Norway.
Laws specifically addressing high-risk uses of AI have not been introduced in Peru yet.
Laws specifically addressing AI have not yet been introduced in Singapore.
The Model Framework for GenAI cites as examples of high-risk AI use cases: (i) use for medical diagnosis or (ii) use with national security or societal implications.
The AI Act outlines several key obligations for AI business operators who aim to provide high-impact AI systems or products or services utilising such technology.
- High-Impact AI Definition: "High-Impact AI" systems are those that significantly influence or pose risks to the safety and fundamental rights of individuals. These are typically employed in critical decision-making or assessments with substantial impact on someone’s rights and responsibilities. Examples include applications in medical device development, recruitment processes, loan assessments, and educational evaluations (Article 2, Item 4).
- Preliminary Review Obligation: AI business operators must assess whether their AI technology qualifies as high-impact before deployment. They may seek confirmation from the Minister of MSIT if there is uncertainty regarding the classification of their AI system (Article 33). Non-compliance may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1).
- Advance Notification Obligation: AI business operators intending to deploy products or services using high-impact AI are obligated to inform users in advance (Article 31, Paragraph (1)). Non-compliance may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1).
- Safety and Reliability Measures: Operators offering high-impact AI systems must implement a comprehensive framework of safety and reliability measures to ensure these systems operate as intended without undue risk (Article 34).
- Impact Assessment Obligation: AI business operators are expected to proactively assess the potential impact of their high-impact AI on individuals’ fundamental rights. Public institutions, including national and local government entities, must prioritize AI solutions that have undergone such assessments (Article 35).
- Right to Explanation: Individuals affected by AI systems including high-impact AI have the right to request clear explanations of the logic and principles behind AI-generated outcomes, to the extent that this is technically and reasonably feasible (Article 3, Paragraph (2)).
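The operator obligations above amount to a compliance checklist that can be sketched in code. The class, attribute and label names below are hypothetical simplifications for illustration, not terms from the Act itself.

```python
from dataclasses import dataclass

@dataclass
class HighImpactAIChecklist:
    """Illustrative tracker for the high-impact AI obligations described above."""
    preliminary_review_done: bool = False      # Article 33
    users_notified_in_advance: bool = False    # Article 31(1)
    safety_measures_in_place: bool = False     # Article 34
    impact_assessment_done: bool = False       # Article 35

    def outstanding(self) -> list:
        """Return human-readable labels for obligations not yet met."""
        labels = {
            "preliminary_review_done": "preliminary review (Art. 33)",
            "users_notified_in_advance": "advance user notification (Art. 31(1))",
            "safety_measures_in_place": "safety and reliability measures (Art. 34)",
            "impact_assessment_done": "impact assessment (Art. 35)",
        }
        return [label for attr, label in labels.items()
                if not getattr(self, attr)]

checklist = HighImpactAIChecklist(preliminary_review_done=True)
print(checklist.outstanding())  # three obligations remain
```

Note that two of the listed obligations (Articles 31 and 33) carry administrative fines of up to KRW 30 million for non-compliance, so an operator would typically clear the preliminary review and notification items before deployment.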
Laws specifically addressing AI have not yet been introduced in Thailand.
Laws specifically addressing AI have not yet been introduced in Turkey.
There is no unified federal law or emirate-level law in the UAE that has a primary focus on regulating AI (and therefore no classification of AI into unacceptable risk, high risk, limited risk and minimal risk).
The DIFC’s Data Protection Regulations do not classify AI Systems into unacceptable risk, high risk, limited risk and minimal risk.
A specific law addressing AI has not yet been introduced in the UK. Sector regulators are examining the risks posed by AI in their own domains: the FCA (financial services), Ofcom (communications), the MHRA (healthcare) and other sectoral regulators are increasingly embedding AI principles into existing frameworks. Some have expressed concerns about the pace of adoption; in June 2025, the FCA warned that the speed at which AI is evolving will require adaptive enforcement.
Unlike in the EU, the risk categorization of AI technologies in the U.S. is not defined by a single, harmonized legislative or regulatory taxonomy. Whether a specific AI technology or use is considered “high-risk” will depend on, and will matter only if, jurisdiction-specific laws or rules include a relevant definition. Currently in the U.S., the Colorado AI Act is the only legislation that adopts a risk stratification system that categorizes certain uses of AI as “high-risk.”
The Colorado AI Act defines “high-risk” AI systems as those that make, or significantly contribute to making, a “consequential decision.” Under the Act, a consequential decision has a material legal or similarly significant effect on the provision, denial, cost, or terms of:
- Education enrollment or opportunity
- Employment or an employment opportunity
- A financial or lending service
- An essential government service
- Healthcare services
- Housing
- Insurance, or
- Legal services.
The definition excludes AI systems intended to perform a narrow procedural task or to detect deviations from prior decision-making patterns, provided such systems are not intended to replace or influence a previously completed human assessment without sufficient human review.
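The Colorado test described above can be approximated as a simple predicate. The domain labels mirror the statute's list of consequential-decision areas, but the function name and boolean flags are hypothetical simplifications; the statutory analysis is more nuanced than this sketch.

```python
# Domains in which a decision is "consequential" under the Colorado AI Act,
# per the list above (labels are informal shorthand, not statutory text).
CONSEQUENTIAL_DOMAINS = {
    "education",
    "employment",
    "financial_or_lending",
    "essential_government_service",
    "healthcare",
    "housing",
    "insurance",
    "legal_services",
}

def is_high_risk(domain: str,
                 makes_or_substantially_contributes: bool,
                 narrow_procedural_task: bool = False) -> bool:
    """Rough approximation: high-risk if the system makes, or significantly
    contributes to making, a consequential decision in a covered domain,
    unless the narrow-procedural-task exclusion applies."""
    if narrow_procedural_task:
        return False
    return makes_or_substantially_contributes and domain in CONSEQUENTIAL_DOMAINS

print(is_high_risk("housing", True))                                # True
print(is_high_risk("housing", True, narrow_procedural_task=True))   # False
print(is_high_risk("advertising", True))                            # False
```

The key structural point the sketch captures is that risk under the Colorado Act attaches to the *decision context* (the covered domain and the system's role in the decision), not to the underlying model technology.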