Artificial Intelligence in the European Union

High-risk AI in the European Union

Article 6 of the EU AI Act sets out the classification rules for high-risk AI systems. An AI system is high-risk if it falls within one of two categories: (i) it is a safety component of a product, or is itself a product, regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) it is used in one of the specified areas listed in Annex III, namely:

  • Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
  • Education and vocational training: AI systems that determine access to education or training or otherwise affect a person's future opportunities and career development, and AI systems used for monitoring and detecting prohibited behaviour during tests.
  • Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
  • Access to essential private and public services: AI systems that evaluate eligibility for essential public services such as social security and healthcare, AI systems for evaluating and classifying emergency calls and dispatching emergency services, and AI systems used to evaluate creditworthiness or in the risk assessment and pricing of life and health insurance.
  • Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
  • Border control and migration: AI systems used to assess visa applications, asylum claims and border security, including polygraphs (i.e. 'lie detectors' or similar tools) and systems for detecting, recognising or identifying individuals in migration contexts.
  • Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and with applying the law to a set of facts, as well as AI systems used to influence the outcome of elections or referendums or voting behaviour.
  • Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition.

These systems must adhere to stringent requirements to ensure that they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification reflects the importance of high standards and accountability when deploying AI in sensitive and impactful areas.

The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including by modifying existing use cases or adding new ones (Article 7(1) of the EU AI Act).

Where an AI system falls into one of the two categories mentioned above but does not pose a significant risk of harm to health, safety or fundamental rights, its operators are relieved of the requirements imposed on high-risk AI systems (except for EU database registration). To benefit from this exemption, however, a thorough assessment must be documented and strict conditions must be met; these conditions are currently difficult to interpret, and further guidance from the Commission is expected.
