Artificial Intelligence in Australia

Prohibited activities

Information not provided.

Last modified 25 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
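For teams building an internal compliance checklist, the eight Article 5 categories above can be captured as a simple data structure with a rough first-pass triage over use-case descriptions. This is a hypothetical sketch, not legal advice: the category names, keyword map and `triage` helper are our own, and a keyword match only flags a use case for human legal review.

```python
from enum import Enum

class Article5Category(Enum):
    """Hypothetical labels for the eight prohibited practices listed above."""
    SUBLIMINAL_TECHNIQUES = "subliminal or manipulative techniques"
    EXPLOITING_VULNERABILITIES = "exploiting vulnerabilities of specific groups"
    SOCIAL_SCORING = "social scoring"
    CRIME_PROFILING = "predictive crime profiling of individuals"
    FACE_DB_SCRAPING = "untargeted scraping for facial recognition databases"
    EMOTION_INFERENCE = "emotion inference at work or in education"
    BIOMETRIC_CATEGORISATION = "biometric categorisation for sensitive traits"
    REALTIME_BIOMETRIC_ID = "real-time remote biometric identification in public"

# Illustrative keyword map (deliberately incomplete) for a first-pass triage.
TRIAGE_KEYWORDS = {
    Article5Category.FACE_DB_SCRAPING: ["scraping", "cctv", "facial images"],
    Article5Category.SOCIAL_SCORING: ["social score", "social scoring"],
    Article5Category.EMOTION_INFERENCE: ["emotion", "workplace monitoring"],
}

def triage(description):
    """Return the categories whose keywords appear in the description.

    A match means "escalate to legal review", never a definitive finding.
    """
    text = description.lower()
    return [cat for cat, words in TRIAGE_KEYWORDS.items()
            if any(word in text for word in words)]
```

A checklist like this cannot decide lawfulness; it only makes it harder for a clearly in-scope use case to slip past review unflagged.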
Last modified 18 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
Last modified 8 July 2025

Brazil has not yet enacted laws specifically addressing AI. Draft Article 13 of the proposed Brazilian AI Bill would prohibit AI systems that employ subliminal techniques, exploit the vulnerabilities of specific groups, or are used by public authorities for illegitimate or disproportionate social scoring.

Last modified 31 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
Last modified 23 July 2025

National laws specifically addressing AI have not yet been passed in Canada. At the provincial level, the use of AI to screen and evaluate job candidates, or to make decisions based on personal information, should be reviewed carefully.

Last modified 11 July 2025

Article 5 of the Chilean AI Bill divides AI systems into four risk classes. The highest risk class is the 'Unacceptable Risk AI System': an AI system that is incompatible with respect for, and the guarantee of, fundamental human rights, and whose placing on the market or putting into service is prohibited.

Article 6 of the Chilean AI Bill lists the systems that shall be considered as 'Unacceptable Risk AI Systems':

  • Subliminal manipulation system: AI systems that use techniques that are imperceptible to the affected persons and that have the direct purpose or effect of inducing actions that cause damage to the physical and/or mental health of the people involved.
  • Systems that exploit people's vulnerabilities to generate harmful behaviours: AI systems that exploit any vulnerabilities of a person or a specific group of persons - including known characteristics of that person's or group's personality traits, social or economic situation, age, and physical or mental capacity - that are intended to substantially alter their behaviour or limit their will and cause actual or potential harm to that person or to third parties.
  • Systems of biometric categorisation of persons based on sensitive personal data: biometric categorisation systems that classify and identify natural persons based on sensitive personal data, or on an inference related to such attributes or characteristics, where the categorisation results in prejudicial or unjustified discriminatory treatment of them.
  • Generic social rating systems: AI systems whose purpose is to evaluate or classify individuals or groups of individuals based on their social behaviour, socioeconomic status, or known or inferred personal or personality characteristics, where the resulting classification leads to prejudicial or unjustifiably discriminatory treatment of those individuals or groups.
  • Remote biometric identification systems in public access spaces in real time: AI systems for video image analysis in public access spaces using real-time remote biometric identification systems.
  • Systems for non-selective extraction of facial images: AI systems that create or extend facial recognition databases by non-selectively extracting facial images from the internet or CCTV images.
  • Systems for the evaluation of a person's emotional states: AI systems that infer the emotions of a natural person in the fields of criminal law enforcement, criminal procedure and border management, in workplaces and in educational institutions.
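The Bill's tiered approach implies a simple gate at the top tier: an Unacceptable Risk AI System may not be placed on the market or put into service at all, while lower tiers proceed to their own requirements. A minimal sketch, noting that only the 'unacceptable risk' tier is named in the excerpt above and the other three tier names are placeholders of our own:

```python
from enum import Enum, auto

class RiskClass(Enum):
    # Only UNACCEPTABLE is named in the Bill text quoted above; the
    # remaining tier names are placeholders for its other three classes.
    UNACCEPTABLE = auto()
    HIGH = auto()
    LIMITED = auto()
    MINIMAL = auto()

def may_be_placed_on_market(risk):
    """Apply the Article 5 rule from the excerpt: unacceptable-risk systems
    are barred outright; all other tiers move on to their own (separate)
    obligations rather than being cleared unconditionally.
    """
    return risk is not RiskClass.UNACCEPTABLE
```

The point of the sketch is only the shape of the regime: one tier is an absolute prohibition, and classification must therefore happen before any market-entry analysis.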
Last modified 23 July 2025

Under the GenAI Measures, any organisation or individual is prohibited from:

  • using generative AI services to generate any illegal content, including content that endangers national security, national sovereignty or the socialist system, content that propagates terrorism or ethnic hatred, violent or obscene content, or false or harmful information; and
  • exploiting advantages in algorithms, data or platforms (from an intellectual property perspective) to engage in monopolistic or unfair competition practices.

Under the Deep Synthesis Provisions, any organisation or individual is prohibited from:

  • using deep synthesis services to engage in illegal activities including those that endanger national security and public interests, disrupt economic and social order, and infringe upon the legitimate rights and interests of others, etc.;
  • using deep synthesis services to produce or distribute fake news; and
  • using technical means to delete, alter or conceal labels added to information generated or edited using deep synthesis services.
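The last bullet protects the integrity of labels on synthetically generated content: once a deep-synthesis label is applied, it must not be deleted, altered or concealed. A minimal illustration of the idea (the marker text and helper functions are hypothetical; the Provisions do not prescribe any particular format):

```python
# Hypothetical marker text; real deployments follow the format their
# regulator or platform specifies, not this one.
AI_LABEL = "[AI-generated]"

def label_generated_content(text):
    """Prepend a conspicuous label to deep-synthesis output so that the
    marking is visible to users and detectable downstream."""
    return f"{AI_LABEL} {text}"

def is_labelled(text):
    """Check whether content still carries the label; stripping it is the
    kind of tampering the Provisions prohibit."""
    return text.startswith(AI_LABEL)
```

In practice labelling may also use metadata or watermarks rather than visible text, but the compliance logic is the same: apply the mark at generation time and never remove it afterwards.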

Under the Recommendation Algorithms Provisions, service providers are prohibited from:

  • using recommendation algorithm-based services to engage in illegal activities including those that endanger national security and public interests, disrupt economic and social order, and infringe upon the legitimate rights and interests of others, etc.;
  • using recommendation algorithm-based services to disseminate information prohibited by laws and administrative regulations (and there is also a positive obligation for service providers to prevent the dissemination of bad information);
  • setting up algorithm models that lead users into addiction or excessive consumption, or that are illegal or against ethics and morals;
  • creating or producing false or misleading news content, or sharing news from sources outside the parameters set by the government;
  • using algorithms to create fake accounts, engage in illegal account trading, manipulate user accounts, or generate false likes, comments or shares;
  • using algorithms to block information, make excessive recommendations, manipulate rankings in search results, control trends, or otherwise disrupt the presentation of information in a way that influences public opinion online or evades supervisory or regulatory oversight; and
  • engaging in monopolistic or unfair competitive practices by using algorithms to place unreasonable restrictions on other internet information service providers or disrupt their lawful operations.
Last modified 26 January 2026

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
Last modified 23 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
Last modified 14 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
Last modified 9 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.

Prohibited activities in Denmark

In accordance with Denmark's opt-out from EU justice and home affairs, as set out in Protocol (No 22) on the position of Denmark, certain provisions of Article 5 of the EU AI Act do not apply in Denmark. This includes the use of AI for biometric categorisation and emotion recognition in the context of police cooperation and criminal justice. Additionally, Article 5(1)(h) and paragraphs 2–6 of Article 5 are excluded. These exemptions reflect Denmark's specific legal position within the EU.

Last modified 21 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
Last modified 22 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
Last modified 11 February 2026

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.
Last modified 22 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting the behaviour of people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered, or treatment that is unjustified or disproportionate to their social behaviour or its gravity, or both.
  • Crime profiling: Assessing the risk of an individual committing a crime based on profiling of that person or an assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except in specific exempt circumstances.

Prohibited activities in France

In France, the CNCDH Opinion recommends banning: (i) the use of choice interfaces whenever their purpose or effect is to manipulate users to their detriment by exploiting their vulnerabilities; (ii) all types of “social scoring”, whether implemented by public authorities or by any company, public or private; and (iii) the use of emotion recognition technologies, except where they are intended to reinforce the autonomy of individuals or, more broadly, the effectiveness of their fundamental rights.

Last modified 5 February 2026

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Last modified 3 February 2026

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Last modified 19 July 2025

Laws specifically addressing AI have not yet been introduced in Hong Kong.  

Although the GenAI Guideline is non-binding and therefore does not strictly prohibit any activities, it states that systems posing existential threats (e.g. uses causing harm or affecting human safety, or subliminal manipulation) carry an unacceptable level of risk and should be prohibited. Technology Developers should bear legal liability for creating such unacceptable risks in the development or deployment of such generative AI technologies.

Last modified 25 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Last modified 24 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Prohibited activities in Ireland

The Irish Data Protection Commission has been active in opening a number of consultations and investigations on certain AI systems.

  • In 2024, it ordered X to suspend training of an AI chatbot after issuing High Court proceedings pursuant to Section 134 of the Data Protection Act 2018.
  • In 2024, it launched a statutory inquiry into Google Ireland under Section 110 of the Data Protection Act 2018, regarding compliance with data protection obligations under Article 35 GDPR for its Pathways Language Model 2 (PaLM 2).
  • In 2023/24, the DPC engaged with Meta in relation to the training of its LLM using public content shared across the EU, which led to the DPC seeking a formal GDPR opinion on the matter from the EDPB. Engagement with Meta and oversight of its implemented measures and improvements is ongoing.

In February 2025, the DPC also became one of five signatory data protection authorities to the Paris declaration, reaffirming their commitment to implementing data governance that promotes innovative and privacy-protecting AI.

Last modified 23 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Last modified 3 February 2026

Currently, there are no laws in Japan that specifically address this point.

Last modified 31 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Last modified 14 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Last modified 24 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Last modified 23 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Last modified 23 July 2025

As stated above, laws specifically addressing AI have not been introduced in Mauritius yet.

Last modified 26 June 2025

Laws specifically addressing AI have not been introduced in Mexico yet. However, Article 9 of the AI Bill establishes that certain AI systems that cause, or are capable of causing, serious physical or psychological harm to people when used (including use for biometric identification) are considered to pose an unacceptable risk. This covers the marketing, sale, distribution and use, even free of charge, of unacceptable-risk AI systems specifically intended to:

  • Alter the behaviour of any person in a way that causes, or is likely to cause, physical or psychological harm.
  • Take advantage of the vulnerabilities of specific groups of people, whether due to age or physical or mental disability, to substantially alter their behaviour in a way that causes, or is likely to cause, physical or psychological harm.
  • Classify people in a way that results in harm or damage to one or more people.
  • Carry out remote biometric identification in real time in publicly accessible spaces without the authorisation of the affected person, except in cases of public interest or national security.
  • Alter voice or image files of any person in any way, modifying their original content without the authorisation of the affected person or the owner of the relevant property rights.

Last modified 29 July 2025

Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI. 

Under Article 5, these uses and technologies include:

  • Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, which have the effect or objective of materially distorting the behaviour of a person or group of people by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
  • Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
  • Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to either or both of: unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered; or unfavourable treatment that is unjustified or disproportionate to their social behaviour or its gravity.
  • Crime profiling: Assessing the risk of an individual committing a crime based solely on the profiling of that person or the assessment of their personality traits (as opposed to using such systems to support a human assessment of a person's involvement in criminal activity).
  • Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
  • Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
  • Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except where based on lawfully acquired datasets (including in law enforcement).
  • Biometric identification: Using ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.

Prohibited activities in the Netherlands

In 2024, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) opened a consultation on certain categories of prohibited AI systems in the Netherlands. On its website, it has published summaries of the consultation responses regarding prohibited AI systems for emotion recognition in the workplace or educational institutions and for manipulative and exploitative AI systems, and it has indicated that additional guidance will follow, though this has not yet been published. No results have yet been published for the consultations on prohibited AI systems for risk assessment of criminal offences and for social scoring.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in New Zealand yet, so no AI activities are expressly prohibited. We note that the draft Biometric Processing Privacy Code dated December 2024 will, once passed, prohibit biometric categorisation using automated processes; however, AI is not the primary focus of that legislation.

Last modified 14 July 2025

Laws specifically addressing AI have not been introduced in Nigeria yet.

Last modified 17 June 2025

The content on Prohibited activities in the European Union applies in Norway.

Last modified 9 October 2025

Laws specifically prohibiting activities in relation to AI have not been introduced in Peru yet.  

Last modified 20 July 2025

The content on Prohibited activities in the European Union applies in this jurisdiction.

Last modified 23 July 2025

The content on Prohibited activities in the European Union applies in this jurisdiction.

Last modified 22 July 2025

The content on Prohibited activities in the European Union applies in this jurisdiction.

Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Singapore.

Last modified 28 July 2025

The content on Prohibited activities in the European Union applies in this jurisdiction.

Last modified 29 July 2025

The content on Prohibited activities in the European Union applies in this jurisdiction.

Last modified 14 July 2025

The AI Act does not enumerate or stipulate any specific prohibited activities (in contrast to the treatment of prohibited AI practices under the EU AI Act). However, activities that are already prohibited under existing laws and regulations, such as infringement of copyright or privacy and the distribution or publication of illegal information and content, may still be problematic in relation to AI-related services.

Last modified 29 July 2025

The content on Prohibited activities in the European Union applies in Spain.

Prohibited activities in Spain

Under the Spanish Draft AI Bill, prior judicial authorization will be required to use real-time remote biometric identification in public spaces. To this effect, Article 11 of the draft specifies that such authorizations will be granted by administrative courts. For each use, the requesting authority must submit a written request containing detailed information, including:

  • a reference to the system’s registration in the EU database (or justification for any delay due to urgency);
  • justification for any prior use without prior authorization, if applicable;
  • specific technical and operational details of the system;
  • the identity of the individuals targeted;
  • the geographic and temporal scope of the measure (which cannot exceed one month, renewable);
  • the legal basis and facts justifying the use; and
  • the proposed data handling measures once authorization ends.

Data relating to individuals not identified in the authorization must not be processed and must be promptly deleted. Moreover, data collected during authorized use may only be used in the context of the specific investigation for which the authorization was granted. Once transferred to the requesting law enforcement authority, which is responsible for its custody as evidence under applicable law, the data must be destroyed without undue delay.
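For illustration only, the contents of an Article 11 authorization request could be modelled as a simple record with a completeness check. All names below are invented for this sketch and do not appear in the draft bill:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch: mirrors the information a written request must contain
# under Article 11 of the Spanish Draft AI Bill. Field names are ours.
@dataclass
class BiometricIdAuthorizationRequest:
    eu_database_registration: Optional[str]  # reference, or None if delayed by urgency
    urgency_justification: Optional[str]     # required when the registration reference is missing
    prior_use_justification: Optional[str]   # only if the system was used before authorization
    technical_details: str
    targeted_individuals: list[str]
    geographic_scope: str
    temporal_scope_days: int                 # cannot exceed one month (renewable)
    legal_basis: str
    post_authorization_data_handling: str

    def completeness_issues(self) -> list[str]:
        """Return a list of missing or invalid elements; empty means complete."""
        issues = []
        if self.eu_database_registration is None and not self.urgency_justification:
            issues.append("missing EU database registration reference or urgency justification")
        if self.temporal_scope_days > 31:
            issues.append("temporal scope exceeds one month")
        for field_name in ("technical_details", "geographic_scope", "legal_basis",
                           "post_authorization_data_handling"):
            if not getattr(self, field_name):
                issues.append(f"missing {field_name}")
        if not self.targeted_individuals:
            issues.append("missing identity of targeted individuals")
        return issues
```

A record-plus-validation structure like this can help a requesting authority confirm a submission is complete before filing; the legal sufficiency of each element remains for the administrative court to assess.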

Last modified 21 July 2025

The content on Prohibited activities in the European Union applies in this jurisdiction.

Last modified 7 July 2025

Laws specifically addressing AI have not been introduced in Thailand yet.  

Last modified 25 July 2025

Laws specifically addressing AI have not been introduced in Turkey yet.

Last modified 30 July 2025

There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no prohibited activities).

The DIFC’s Data Protection Regulations do not classify AI Systems into unacceptable risk, high risk, limited risk and minimal risk categories, nor do they expressly prohibit any practices. However, the regulations do prohibit the use, operation or provision of an AI System to engage in high risk processing activities unless the DIFC’s Commissioner for Data Protection has established audit and certification requirements for such AI Systems. ‘High risk processing activities’ is defined as processing of personal data where one or more of the following applies:

  • processing that includes the adoption of new or different technologies or methods, which creates a materially increased risk to the security or rights of a data subject or renders it more difficult for a data subject to exercise their rights;
  • a considerable amount of personal data will be processed (including staff and contractor personal data) and where such processing is likely to result in a high risk to the data subject, including due to the sensitivity of the personal data or risks relating to the security, integrity or privacy of the personal data;
  • the processing will involve a systematic and extensive evaluation of personal aspects relating to natural persons, based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person; or
  • a material amount of special categories of personal data is to be processed.

Given that no audit and certification requirements have been established at present, there is a de facto prohibition on the use of AI Systems for high risk processing activities. The Commissioner has confirmed that no AI System may be used for such activities until those requirements are in place.
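Note that the DIFC definition is disjunctive: meeting any one of the four limbs is enough to make a processing activity "high risk". A minimal illustrative sketch (function and parameter names are ours, not drawn from the Regulations):

```python
# Illustrative sketch of the DIFC 'high risk processing activities' test.
# The four boolean parameters paraphrase the four limbs of the definition;
# all names are invented for this example.
def is_high_risk_processing(
    new_technology_increases_risk: bool,
    considerable_volume_with_high_risk: bool,
    systematic_automated_evaluation_with_legal_effects: bool,
    material_special_category_data: bool,
) -> bool:
    """An activity is high risk if ANY one limb of the definition applies."""
    return any([
        new_technology_increases_risk,
        considerable_volume_with_high_risk,
        systematic_automated_evaluation_with_legal_effects,
        material_special_category_data,
    ])
```

The practical consequence of the disjunctive test is that an AI System processing only a material amount of special category data is caught even if the other three limbs clearly do not apply.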

Last modified 4 August 2025

A specific law addressing AI has not been introduced in the UK yet.

Last modified 23 February 2026

As noted, the U.S. has not enacted a comprehensive federal law that explicitly outlines prohibited uses of AI. However, certain AI-related activities are restricted or prohibited under existing laws and proposed legislation. Enforcement actions have been taken under broader legal authorities such as consumer protection, civil rights, and securities laws. 

At the federal level, two of the many proposed bills aiming to prohibit specific AI practices are:

  • The Preventing Algorithmic Collusion Act (2025), which would ban the use of pricing algorithms – including those powered by AI – to incorporate nonpublic competitor data to facilitate price-fixing
  • The Transparency and Responsibility for Artificial Intelligence Networks Act (TRAIN Act) (2025), which would create an administrative subpoena process allowing copyright owners to compel AI developers to disclose copies of, or records sufficient to identify, copyrighted works used to train generative artificial intelligence models

While these bills have not become law, federal agencies have used existing statutes that prohibit deceptive or harmful AI practices. For example:

  • The FTC has taken enforcement action against companies for “AI washing” (misleading claims about AI capabilities) and is studying the business practices of companies that offer companion chatbots, focusing on their effect on children
  • The SEC has charged firms for misrepresenting the role of AI in investment strategies
  • The DOJ has pursued criminal charges in cases involving fraudulent claims about AI functionality

At the state level, some jurisdictions have enacted laws that explicitly prohibit certain AI uses, such as:

  • Colorado’s AI Act, which prohibits the deployment of high-risk AI systems without reasonable safeguards to prevent algorithmic discrimination
  • Utah’s AI Policy Act, which prohibits the undisclosed use of generative AI in regulated occupations (e.g., legal, medical), requires clear disclosure when AI is used in consumer interactions, and holds individuals liable for AI-driven misconduct under state consumer protection laws
  • New York City’s Local Law 144, which prohibits the use of automated employment decision tools without prior bias audits and candidate notification
  • California and Illinois, which have passed laws restricting the unauthorized use of AI-generated digital replicas and requiring transparency in political advertising

Overall, while the U.S. lacks a unified list of federally prohibited AI activities, a growing patchwork of federal enforcement actions and state-level statutes is continuing to define the boundaries of acceptable AI use.

Last modified 10 March 2026
