Artificial Intelligence in Australia

High-risk AI

Information not provided.

Last modified 25 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or are themselves products, regulated by existing EU product safety laws (listed in Annex I, e.g. medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 18 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or are themselves products, regulated by existing EU product safety laws (listed in Annex I, e.g. medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 8 July 2025

Laws specifically addressing AI have not yet been introduced in Brazil. Draft Article 14 of the proposed Brazilian AI Bill specifies that high-risk AI systems include those used in critical infrastructure, education, employment, public services, financial services, emergency response, justice, healthcare, public security, and migration management. The Brazilian AI Bill also proposes specific controls for high-risk AI systems, including a requirement that such systems undergo an algorithmic impact assessment, considering risks, benefits, and mitigation measures, which must be updated periodically (draft Article 18).

Last modified 31 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or are themselves products, regulated by existing EU product safety laws (listed in Annex I, e.g. medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 23 July 2025

National laws specifically addressing AI have not yet passed in Canada.

Last modified 11 July 2025

Article 5 of the Chilean AI Bill divides AI systems into four risk classes. The second highest risk class is a 'High-Risk AI System', which Article 7 defines as an AI system that presents a significant risk of causing harm to health, safety, fundamental rights protected by the Constitution or the environment, as well as to the rights of consumers. This classification applies regardless of whether the system has been introduced on the market or put into service, and regardless of whether the AI system is intended to be used as a safety component of a product or is itself such a product.

Article 8 of the Chilean AI Bill establishes the rules applicable to 'High-Risk AI Systems':

  • Establishment of risk management systems: High-Risk AI Systems will undergo a continuous iterative process of risk assessment to be conducted throughout the life cycle of the system, which will require periodic reviews and updates to seek its effectiveness and minimise the potential for failure or malfunction, based on the stated intended purpose.
  • Data governance: High-Risk AI Systems using techniques that involve training models with data shall be subject to data governance commensurate with the context of use, as well as the intended purpose of the AI system, to the extent that this is technically feasible in accordance with the market segment or scope of application concerned. They should also seek to incorporate internationally accepted technical and data security standards.
  • Technical documentation: The technical documentation attached to High-Risk AI Systems shall be intelligible and written in such a way as to demonstrate that the high-risk AI system complies with the rules set forth in the Chilean AI Bill.
  • System of records: High-Risk AI Systems shall be designed and developed with capabilities to record safety information and events while in operation. These recording capabilities shall be in accordance with recognised common standards or specifications and the state of the art.
  • Transparency mechanisms: High-Risk AI Systems shall be designed and developed with a level of transparency sufficient for operators and their intended users to reasonably understand the operation of the system, in accordance with its intended purpose.
  • Human oversight mechanisms: High-Risk AI Systems shall be designed and developed so that they can be overseen by natural persons technically qualified for this function, as appropriate for the implementation scenario and in a manner proportionate to the associated risks, with the aim of preventing or minimising risks to health, safety, fundamental rights, democracy and/or the environment that may arise when a High-Risk AI System is used in accordance with its intended purpose or is put to reasonably foreseeable misuse.
  • Accuracy, robustness and cybersecurity: High-Risk AI Systems shall be designed and developed following the principle of safety by design and by default, and shall have an adequate level of accuracy, robustness, security and cybersecurity, operating consistently, reliably and robustly throughout their life cycle.

Last modified 23 July 2025

The PRC has not yet established a formal categorisation of AI technologies based on their associated risk levels. Nevertheless, specific laws and regulations provide requirements for certain use cases and services with certain capabilities. For example:

  • Each of the three pieces of regulation referred to above requires providers of services with public opinion attributes or social mobilisation capabilities to perform record-filing procedures and conduct security assessments in accordance with applicable laws.
  • Under the Deep Synthesis Provisions, information generated or edited by certain deep synthesis services that may cause confusion among the public must be prominently labelled as to its deep synthesis status. The services concerned include:
    • smart dialogue or similar services that simulate a human to generate or edit texts;
    • speech generation services (e.g. voice synthesis or voice imitation services);
    • services that generate images or videos of people (e.g. face generation, face swapping, face manipulation or posture manipulation);
    • immersive simulated scene generation, editing or other services;
    • any other editing services that significantly alter personal identification characteristics; and
    • any other services that generate or significantly alter information content.

The same labelling obligations are also reiterated in the GenAI Measures.

Last modified 26 January 2026

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or are themselves products, regulated by existing EU product safety laws (listed in Annex I, e.g. medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 23 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or are themselves products, regulated by existing EU product safety laws (listed in Annex I, e.g. medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 14 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or are themselves products, regulated by existing EU product safety laws (listed in Annex I, e.g. medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 9 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or are themselves products, regulated by existing EU product safety laws (listed in Annex I, e.g. medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 21 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) AI systems that are safety components of products, or are themselves products, regulated by existing EU product safety laws (listed in Annex I, e.g. medical devices, automotive AI); or (ii) AI systems used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities in researching and interpreting facts and the law and in applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data according to sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification underscores the high standards and accountability required when deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two categories mentioned above but does not pose a significant risk of harm to health, safety or fundamental rights, operators of such systems are relieved of the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met; these conditions are currently difficult to interpret, and further guidelines from the Commission are expected.
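The two-limb test and the exemption described above amount to a simple decision flow, which can be sketched as follows. This is an illustrative model only: the class, field names, and simplified conditions are our own, and a real classification turns on a documented legal assessment, not a boolean flag.

```python
from dataclasses import dataclass
from typing import Optional

# Annex III areas as summarised above (illustrative labels, not official text)
ANNEX_III_AREAS = {
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border",
    "justice_democracy",
    "biometrics",
}


@dataclass
class AISystem:
    """Hypothetical record of the facts needed for an Article 6 screen."""
    is_annex_i_safety_component: bool  # limb (i): regulated product / safety component
    annex_iii_area: Optional[str]      # limb (ii): Annex III area, if any
    poses_significant_risk: bool       # outcome of the documented risk assessment


def classify(system: AISystem) -> str:
    """Return a rough classification under the simplified flow above."""
    if system.is_annex_i_safety_component:
        return "high-risk"
    if system.annex_iii_area in ANNEX_III_AREAS:
        if not system.poses_significant_risk:
            # Exempt from most high-risk obligations, but EU database
            # registration still applies and the assessment must be documented.
            return "annex-iii-exempt (registration still required)"
        return "high-risk"
    return "not high-risk under Article 6"


print(classify(AISystem(False, "employment", True)))  # high-risk
```

The sketch deliberately mirrors the order of the legal test: limb (i) is checked first, and the exemption is only reachable from limb (ii).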

Last modified 22 July 2025



High-risk AI in France

In France, the ACPR AI Governance Study has highlighted the need for credit scoring, anti-money laundering and customer protection AI systems to be evaluated to ensure (i) appropriate processing of data, (ii) performance, (iii) stability and (iv) explainability. To do so, financial institutions must take care, from the design of algorithms onwards, to involve operational teams, provide for human verification of decisions, implement strong security measures and validation processes, and conduct regular audits.

Last modified 5 February 2026


Laws specifically addressing AI have not yet been introduced in Hong Kong.

The Ethical AI Framework identifies certain categories of AI application as "likely to result in high risk", for which CIO/IT Board approval is required, namely:

  • the AI application has a high degree of autonomy;
  • the AI application is used in a complex environment;
  • the AI application uses sensitive personal data;
  • the AI application processes personal data on a large scale and/or combines data sets, taking into account certain factors;
  • the AI application can have a potentially sensitive impact on human beings;
  • the AI application involves the evaluation or scoring of individuals;
  • the AI application makes automated or complex decisions with significant impact and legal consequences without human intervention; and
  • the AI application involves systematic observation or monitoring.
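Read together, the factors above operate as a screening checklist. A minimal sketch, assuming (our assumption, not the Framework's wording) that any single factor present triggers escalation for CIO/IT Board approval:

```python
from typing import Dict

# Illustrative labels for the Ethical AI Framework factors listed above;
# the names and the any-one-factor-triggers rule are our own simplification.
HIGH_RISK_FACTORS = [
    "high_autonomy",
    "complex_environment",
    "sensitive_personal_data",
    "large_scale_or_combined_datasets",
    "sensitive_impact_on_humans",
    "evaluation_or_scoring_of_individuals",
    "automated_decisions_without_human_intervention",
    "systematic_observation_or_monitoring",
]


def needs_board_approval(application: Dict[str, bool]) -> bool:
    """True if any listed factor is flagged for the AI application."""
    return any(application.get(factor, False) for factor in HIGH_RISK_FACTORS)


chatbot = {"sensitive_personal_data": True}
print(needs_board_approval(chatbot))  # True
```

In practice an organisation would weight and document these factors rather than treat them as binary flags; the sketch only shows the shape of the screen.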

The GenAI Guideline proposes a four-tier risk classification system, in which applications of generative AI in critical infrastructure contexts (e.g., healthcare diagnostics, autonomous vehicles) are classified as high-risk.

The Model Framework and Guidance indicate that AI systems have a higher risk profile if they are likely to have a significant impact on individuals.

Under the SFC Circular, the SFC generally considers the use of an AI language model to provide investment recommendations, investment advice or investment research to investors or clients to be a high-risk use case.

Last modified 25 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts. As well as AI systems used for influencing the outcome of elections or referendum or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification are used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks or operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems including to modify any existing use cases or add new ones (Article 7(1)) of the EU AI Act).

Where an AI system falls into one of the two categories above-mentioned but does not pose significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed for high-risk AI systems (except for the EU database registration). However, to benefit from such exemption, a thorough assessment must be documented and strict conditions must be met (however these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 24 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts. As well as AI systems used for influencing the outcome of elections or referendum or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification are used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks or operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems including to modify any existing use cases or add new ones (Article 7(1)) of the EU AI Act).

Where an AI system falls into one of the two categories above-mentioned but does not pose significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed for high-risk AI systems (except for the EU database registration). However, to benefit from such exemption, a thorough assessment must be documented and strict conditions must be met (however these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 23 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidance from the Commission is expected).
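For readers who prefer a schematic view, the two-branch test in Article 6, together with the significant-risk carve-out described above, can be sketched as a simple decision function. This is an illustrative sketch only, not legal advice; the category names and result labels below are invented for the example and do not appear in the Act:

```python
from typing import Optional

# Illustrative shorthand for the Annex III areas listed above.
ANNEX_III_AREAS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration",
    "justice_democracy", "biometrics",
}

def classify(annex_i_safety_component: bool,
             annex_iii_area: Optional[str],
             poses_significant_risk: bool = True) -> str:
    """Return a coarse risk label mirroring the Article 6 two-branch test."""
    # Branch (i): safety component of (or itself) a product regulated
    # under the Annex I product safety laws.
    if annex_i_safety_component:
        return "high-risk"
    # Branch (ii): use in an Annex III area.
    if annex_iii_area in ANNEX_III_AREAS:
        # Article 6(3) carve-out: no significant risk of harm means
        # relief from most high-risk duties, but the EU database
        # registration obligation still applies.
        if not poses_significant_risk:
            return "annex-iii-exempt (registration still required)"
        return "high-risk"
    return "not high-risk under Article 6"

print(classify(False, "employment"))            # high-risk
print(classify(False, "employment",
               poses_significant_risk=False))   # annex-iii-exempt (...)
print(classify(False, None))                    # not high-risk under Article 6
```

Note that in practice the "poses significant risk" input is itself the outcome of the documented assessment mentioned above, not a simple boolean known in advance.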

Last modified 3 February 2026

Currently, there are no laws in Japan that specifically address this point.

Last modified 31 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidance from the Commission is expected).

High-risk AI in Latvia

The annotation to the Law on the Artificial Intelligence Centre states that Latvia lacks the competencies and capabilities related to the management of AI risks, both in terms of monitoring prohibited and high-risk uses of AI and in protecting against the malicious use of AI. As a solution, the creation of a supportive environment for the development, testing, and implementation of safe and trustworthy AI solutions is proposed, including the establishment of a regulatory sandbox to facilitate simplified innovation deployment. It is also planned to strengthen sectoral competencies, foster cooperation with international partners, and develop methodologies and tools for risk management, as well as provide support to supervisory authorities. The Centre will serve as a platform for coordinating strategic projects and providing consultations, thereby promoting understanding and management of AI-related risks in Latvia. The Centre has already been established, the supervisory council has been appointed, and the head of the Centre is currently being sought.

Last modified 14 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidance from the Commission is expected).

Last modified 24 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidance from the Commission is expected).

Last modified 23 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidance from the Commission is expected).

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in Mauritius yet.

Last modified 26 June 2025

Laws specifically addressing AI have not been introduced in Mexico yet. However, Article 10 of the AI Bill establishes that AI systems used for the following purposes are considered high-risk:

  • Real-time or delayed remote biometric identification of persons in private spaces.
  • Management of water, electricity and gas supply.
  • The allocation and determination of access to educational establishments and the assessment of students.
  • The selection and recruitment of employees, as well as the assignment of tasks and the monitoring and evaluation of their performance and conduct.
  • The assessment of individuals for access to benefits, services and social programmes.
  • The assessment of the economic solvency of persons, or to establish their credit rating.
  • The definition of priorities for the care of persons or groups of persons in emergency or disaster situations.
  • The use to determine the risk of a person or persons committing or reoffending.
  • The use at any stage of the investigation and interpretation of facts that could constitute an offence during criminal proceedings.
  • The use for personalised or individualised management of migration, asylum and border control.
  • Influencing the political-electoral preferences of citizens, or supplanting the voice or image of candidates or political leaders without explicitly and unmistakably disclosing it.

Article 12 of the AI Bill specifies the obligations on providers of high-risk AI systems, including the following:

  • To have a quality management system in place.
  • To develop and disseminate the technical documentation of the AI system.
  • To retain log files automatically generated by their AI systems, when those files are under their control.
  • To ensure that AI systems are subject to human assessment and control procedures determined by the competent authority before being placed on the market or put into service.

Last modified 29 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidance from the Commission is expected).

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in New Zealand yet, so no AI uses are expressly specified as being high-risk. The non-binding OPC AI Guidance identifies the use of AI tools for automated decision-making as higher-risk, given the potential for direct impacts on outcomes for individuals.

Last modified 14 July 2025

Laws specifically addressing AI have not been introduced in Nigeria yet.

Last modified 17 June 2025

The content on High-risk AI in the European Union applies in Norway.

Last modified 9 October 2025

Laws specifically addressing high-risk uses of AI have not been introduced in Peru yet.  

Last modified 20 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidance from the Commission is expected).

Last modified 23 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used for influencing the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify existing use cases or add new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). However, to benefit from this exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidance from the Commission is expected).

Last modified 22 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts. As well as AI systems used for influencing the outcome of elections or referendum or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification are used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).
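The two-route classification and the derogation described above can be sketched as a simple decision flow. This is an illustrative sketch only, not legal advice: the category names, function shape and labels below are our own, and the Act's actual tests require case-by-case legal analysis rather than a lookup.

```python
# Toy decision helper mirroring the Article 6 classification flow described
# above. Area keys are shorthand for the Annex III headings; hypothetical.
ANNEX_III_AREAS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration",
    "justice_democracy", "biometrics",
}

def classify(is_annex_i_safety_component, annex_iii_area=None,
             poses_significant_risk=True):
    """Return a rough classification label for an AI system."""
    if is_annex_i_safety_component:
        # Route (i): safety component of a product covered by Annex I.
        return "high-risk (Annex I product safety route)"
    if annex_iii_area in ANNEX_III_AREAS:
        if poses_significant_risk:
            # Route (ii): listed Annex III use case.
            return "high-risk (Annex III use case)"
        # Derogation: documented assessment required; EU database
        # registration still applies.
        return "exempt from high-risk obligations (registration still required)"
    return "not high-risk under Article 6"
```

In practice the `poses_significant_risk` flag is the hard part: it stands in for the documented assessment against the strict exemption conditions mentioned above.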

Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Singapore. 

The Model Framework for GenAI cites as examples of high-risk AI use cases: (i) use for medical diagnosis or (ii) use with national security or societal implications.

Last modified 28 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 29 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 14 July 2025

The AI Act outlines several key obligations for AI business operators who aim to provide high-impact AI systems or products or services utilising such technology.

  • High-Impact AI Definition: "High-Impact AI" systems are those that significantly influence or pose risks to the safety and fundamental rights of individuals. These are typically employed in critical decision-making or assessments with substantial impact on someone’s rights and responsibilities. Examples include applications in medical device development, recruitment processes, loan assessments, and educational evaluations (Article 2, Item 4).
  • Preliminary Review Obligation: AI business operators must assess whether their AI technology qualifies as high-impact before deployment. They may seek confirmation from the Minister of MSIT if there is uncertainty regarding the classification of their AI system (Article 33). Non-compliance may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1).
  • Advance Notification Obligation: AI business operators intending to deploy products or services using high-impact AI are obligated to inform users in advance (Article 31, Paragraph (1)). Non-compliance may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1).
  • Safety and Reliability Measures: A comprehensive framework of safety and reliability measures must be implemented by operators offering high-impact AI systems to ensure these systems operate as intended without undue risk (Article 34).
  • Impact Assessment Obligation: AI business operators are expected to proactively assess the potential impact of their high-impact AI on individuals’ fundamental rights. Public institutions, including national and local government entities, must prioritize AI solutions that have undergone such assessments (Article 35).
  • Right to Explanation: Individuals affected by AI systems, including high-impact AI, have the right to request clear explanations of the logic and principles behind AI-generated outcomes, to the extent that this is technically and reasonably feasible (Article 3, Paragraph (2)).

Last modified 29 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 21 July 2025

Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referenda, or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.

These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed on high-risk AI systems (except for the EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).

Last modified 7 July 2025

Laws specifically addressing AI have not yet been introduced in Thailand.

Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Turkey.

Last modified 30 July 2025

There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no classification of AI into unacceptable risk, high risk, limited risk and minimal risk).

The DIFC’s Data Protection Regulations do not classify AI Systems into unacceptable risk, high risk, limited risk and minimal risk.

Last modified 4 August 2025

A specific law addressing AI has not yet been introduced in the UK. Sectoral regulators are examining the risks posed by AI in their sectors: the FCA (financial services), Ofcom (communications), the MHRA (healthcare products) and others are increasingly embedding AI principles into existing frameworks. Some have expressed concerns about the pace of adoption, with the FCA warning in June 2025 that the speed at which AI is evolving will require adaptive enforcement.

Last modified 23 February 2026

Unlike in the EU, the risk categorization of AI technologies in the U.S. is not defined by a single, harmonized legislative or regulatory taxonomy. Whether a specific AI technology or use is considered “high-risk” will depend on, and will matter only if, jurisdiction-specific laws or rules include a relevant definition. Currently in the U.S., the Colorado AI Act is the only legislation that adopts a risk stratification system that categorizes certain uses of AI as “high-risk.”

The Colorado AI Act defines “high-risk” AI systems as those that make, or significantly contribute to making, a “consequential decision.” Under the Act, a consequential decision has a material legal or similarly significant effect on the provision, denial, cost, or terms of:

  • Education enrollment or opportunity
  • Employment or an employment opportunity
  • A financial or lending service
  • An essential government service
  • Healthcare services
  • Housing
  • Insurance, or
  • Legal services.

The definition excludes AI systems intended to perform a narrow procedural task, or to detect deviations in decision-making patterns, provided the system is not intended to replace or influence a previously completed human assessment without sufficient human review.
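The Colorado test described above can be sketched as a simple check. This is an illustrative sketch only, not legal advice: the domain keys and exclusion labels below are our own shorthand for the statutory list, and a real determination also turns on whether the system makes, or is a substantial factor in making, the consequential decision.

```python
# Toy checker for the Colorado AI Act's "high-risk" test as summarised
# above. Keys are hypothetical shorthand for the statutory domains.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending",
    "essential_government_service", "healthcare", "housing",
    "insurance", "legal_services",
}

# Uses excluded from the definition (narrow procedural tasks and
# deviation detection, subject to the human-review proviso).
NARROW_EXCLUSIONS = {"narrow_procedural_task", "deviation_detection"}

def is_high_risk(domain, excluded_purpose=None):
    """True if the use falls in a consequential-decision domain and no
    narrow exclusion applies."""
    if excluded_purpose in NARROW_EXCLUSIONS:
        return False
    return domain in CONSEQUENTIAL_DOMAINS
```

The sketch makes the structure of the definition visible: first ask whether an exclusion applies, then whether the decision domain is on the statutory list.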

Last modified 10 March 2026
