Artificial Intelligence in Australia
Regulatory guidance / voluntary codes in Australia
On 23 May 2025, the Australian Signals Directorate's Australian Cyber Security Centre, together with its counterparts in the US, UK and New Zealand, released guidance on best practices for AI data security. The guidance sets out the key data security risks arising from AI use and provides a list of best-practice guidelines, including, but not limited to, sourcing reliable data and tracking data provenance, verifying and maintaining data integrity during storage and transport, and encrypting data.
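The provenance and integrity practices named in the guidance can be illustrated with a minimal sketch: a hypothetical provenance record keyed to a SHA-256 digest of the data. The record shape and function names here are illustrative assumptions, not anything prescribed by the guidance.

```python
import hashlib
from datetime import datetime, timezone

def record_provenance(data: bytes, source: str) -> dict:
    """Create a provenance record for a data artefact.

    The SHA-256 digest lets later consumers verify that the data
    has not been altered in storage or transit.
    """
    return {
        "source": source,
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_integrity(data: bytes, record: dict) -> bool:
    """Re-hash the data and compare against the stored digest."""
    return hashlib.sha256(data).hexdigest() == record["sha256"]

record = record_provenance(b"example dataset bytes", source="vendor-feed")
assert verify_integrity(b"example dataset bytes", record)
assert not verify_integrity(b"tampered bytes", record)
```

In practice such records would be kept alongside the dataset and re-checked whenever the data crosses a trust boundary, which is the behaviour the guidance's "verify and maintain data integrity during storage and transport" point gestures at.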
In March 2025, the Commonwealth Ombudsman released an Automated Decision Making Better Practice Guide. The Guide is intended to inform the selection, adoption and use of AI by government agencies to ensure their compliance with Australian laws, including administrative law. Appendix A of the Guide features a comprehensive checklist which may assist government and non-government entities with decision making surrounding their use of AI.
Also in March 2025, the Australian Government Digital Transformation Agency released AI and Cyber Risk model clauses for procuring or developing AI models.
On 21 October 2024, the Office of the Australian Information Commissioner (OAIC), the national regulator for privacy and freedom of information, released two guidance documents relating to AI:
- Guidance on privacy and the use of commercially available AI products – This guidance document is intended to assist organisations deploying and using commercially available AI systems to comply with their privacy obligations. It specifies that privacy obligations apply to any personal information input into an AI system, as well as to any output generated by the AI system that contains personal information. The OAIC also recommends that no personal information be entered into publicly available generative AI tools.
- Guidance on privacy and developing and training generative AI models – This guidance document recommends that AI developers take reasonable steps to ensure accuracy in generative AI models. With respect to privacy obligations, it notes that personal information includes inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes). In addition, this guidance document reminds developers that publicly available or accessible data may not automatically be legally used to train or fine-tune generative AI models or systems.
In September 2024, Australia's Department of Industry, Science and Resources published a Proposal Paper for introducing mandatory guardrails for AI in high-risk settings (Proposal Paper introducing mandatory guardrails). The paper identifies two broad categories of high-risk AI: (1) AI systems with known or foreseeable proposed uses that are considered high risk, and (2) advanced, highly capable general-purpose AI (GPAI) models that are capable of being used, or adapted for use, for a variety of purposes, both for direct use and for integration into other systems, where all possible applications and risks cannot be foreseen.
With respect to the first category listed above, the principles that organisations must consider in designating an AI system as high-risk are the risk of adverse impacts to:
- an individual's human rights, health or safety, and legal rights (e.g. legal effects, defamation or similarly significant effects on an individual);
- groups of individuals or collective rights of cultural groups; and
- the broader Australian economy, society, environment and rule of law,
as well as the severity and extent of the adverse impacts outlined above.
With respect to AI designated as high-risk, the Proposal Paper introducing mandatory guardrails sets out the following proposed mandatory guardrails for organisations developing or deploying high-risk AI systems (page 35):
- "Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
- Establish and implement a risk management process to identify and mitigate risks;
- Protect AI systems, and implement data governance measures to manage data quality and provenance;
- Test AI models and systems to evaluate model performance and monitor the system once deployed;
- Enable human control or intervention in an AI system to achieve meaningful human oversight;
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI generated content;
- Establish processes for people impacted by AI systems to challenge use or outcomes;
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
- Keep and maintain records to allow third parties to assess compliance with guardrails; and
- Undertake conformity assessments to demonstrate and certify compliance with guardrails."
The definition of high-risk AI and the guardrails are expected to be refined based on feedback provided by Australian stakeholders on the Proposal Paper introducing mandatory guardrails.
On 5 September 2024, the Australian Government released a Voluntary AI Safety Standard publication that sets out substantially similar guardrails as those in the Proposal Paper introducing mandatory guardrails, with the exception of guardrail 10, which states:
"Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness."
Whereas the Proposal Paper introducing mandatory guardrails applies to high-risk AI, the Voluntary AI Safety Standard sets out voluntary guidelines for developers and deployers of AI to protect people and communities from harm, avoid reputational and financial risks to their organisations, increase organisational and community trust and confidence in AI systems, services and products, and align with legal obligations and expectations in Australia, among other things.
On 1 September 2024, the Policy for the Responsible Use of AI in Government (Policy) came into effect, aiming to empower the Australian Government to safely, ethically and responsibly engage with AI, strengthen public trust in the government's use of AI, and adapt to technological and policy changes over time.
In particular, the Policy requires government agencies to:
- designate accountability for compliance with the policy to certain public officials, and
- publish and keep updated an AI transparency statement.
Additional recommendations include fundamental AI training for all staff, additional training for staff with roles or responsibilities in connection with AI, understanding and recording where and how AI is being used within agencies, integrating AI considerations into existing frameworks, participating in the Australian Government's AI assurance framework, monitoring AI use cases and keeping up to date with policy changes.
Australia has been a signatory to the Bletchley Declaration since 1 November 2023, which establishes a collective understanding between 28 countries and the European Union on the opportunities and risks posed by AI.
In November 2019, the Australian Government published its AI Ethics Principles (Ethics Principles), designed to ensure that AI is safe, secure and reliable and to:
- help achieve safer, more reliable and fairer outcomes for all Australians;
- reduce the risk of negative impact on those affected by AI applications; and
- assist businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
Definitions in Australia
Information not provided.
Prohibited activities in Australia
Information not provided.
Controls on generative AI in Australia
Information not provided.
User transparency in Australia
Information not provided.
Fairness / unlawful bias in Australia
Information not provided.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal, manipulative or deceptive techniques that have the effect or objective of materially distorting the behaviour of a person or group by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), where this leads to unfavourable treatment of them or others either in social contexts unrelated to the context in which the data was originally gathered, or that is unjustified or disproportionate to their social behaviour or its gravity.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Brazil has not yet introduced laws specifically addressing AI. Draft Article 13 of the proposed Brazilian AI Bill prohibits AI systems that employ subliminal techniques, exploit the vulnerabilities of specific groups, or are used by public authorities for illegitimate or disproportionate social scoring.
National laws specifically addressing AI have not yet been passed in Canada. At the provincial level, the use of AI to screen and evaluate job candidates, or to make decisions based on personal information, should be reviewed carefully.
Article 5 of the Chilean AI Bill divides AI systems into four risk classes. The highest risk class is the 'Unacceptable Risk AI System', defined as an AI system incompatible with respect for, and the guarantee of, fundamental human rights; placing such a system on the market or putting it into service is prohibited.
Article 6 of the Chilean AI Bill lists the systems that shall be considered as 'Unacceptable Risk AI Systems':
- Subliminal manipulation system: AI systems that use techniques that are imperceptible to the affected persons and that have the direct purpose or effect of inducing actions that cause damage to the physical and/or mental health of the people involved.
- Systems that exploit people's vulnerabilities to generate harmful behaviours: AI systems that exploit any vulnerabilities of a person or a specific group of persons - including known characteristics of that person's or group's personality traits, social or economic situation, age, and physical or mental capacity - that are intended to substantially alter their behaviour or limit their will and cause actual or potential harm to that person or to third parties.
- Systems of biometric categorisation of persons based on sensitive personal data: biometric categorisation systems that classify and identify natural persons based on sensitive personal data, or that are based on an inference related to such attributes or characteristics, in a way that such categorisation results in prejudicial or unjustified discriminatory treatment against them.
- Generic social rating systems: AI systems whose purpose is to evaluate or classify individuals or groups of individuals based on their social behaviour, socioeconomic status, or known or inferred personal or personality characteristics, in a way that the resulting classification results in prejudicial or unjustifiably discriminatory treatment of such individuals or groups of individuals.
- Remote biometric identification systems in public access spaces in real time: AI systems for video image analysis in public access spaces using real-time remote biometric identification systems.
- Systems for non-selective extraction of facial images: AI systems that create or extend facial recognition databases by non-selectively extracting facial images from the internet or CCTV images.
- Systems for the evaluation of a person's emotional states: AI systems that infer the emotions of a natural person in the fields of criminal law enforcement, criminal procedure and border management, in workplaces and in educational institutions.
Under the GenAI Measures, any organisation or individual is prohibited from:
- using generative AI services to generate any illegal content, including content that endangers national security, national sovereignty or the socialist system, content that propagates terrorism, ethnic hatred, violence or obscenity, and false or harmful information; and
- exploiting advantages in terms of algorithms, data, platforms (from an intellectual property perspective) to carry out monopoly or unfair competition practices.
Under the Deep Synthesis Provisions, any organisation or individual is prohibited from:
- using deep synthesis services to engage in illegal activities including those that endanger national security and public interests, disrupt economic and social order, and infringe upon the legitimate rights and interests of others, etc.;
- using deep synthesis services to produce or distribute fake news; and
- using technical means to delete, alter or conceal labels added to information generated or edited using deep synthesis services.
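The prohibition on deleting, altering or concealing labels presupposes that labels are tamper-evident. As an illustration only (the Deep Synthesis Provisions do not prescribe any particular scheme, and the key, label format and function names below are assumptions), a provider-held key can bind a label to the content with an HMAC so that any change to either is detectable:

```python
import hashlib
import hmac

# Hypothetical signing key held by the deep synthesis service provider.
SECRET_KEY = b"provider-held-signing-key"

def label_content(content: bytes) -> bytes:
    """Attach an 'AI-GENERATED' label plus a MAC over label and content."""
    label = b"AI-GENERATED"
    tag = hmac.new(SECRET_KEY, label + content, hashlib.sha256).hexdigest().encode()
    return label + b"|" + tag + b"|" + content

def label_intact(labelled: bytes) -> bool:
    """Check that the label and its MAC still match the content."""
    try:
        label, tag, content = labelled.split(b"|", 2)
    except ValueError:
        return False  # label structure was stripped entirely
    expected = hmac.new(SECRET_KEY, label + content, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

labelled = label_content(b"synthetic video bytes")
assert label_intact(labelled)
assert not label_intact(labelled.replace(b"AI-GENERATED", b"ORIGINAL"))
```

Under a scheme like this, the "technical means to delete, alter or conceal labels" that the Provisions prohibit would leave a verifiable mismatch between the label and its MAC.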
Under the Recommendation Algorithms Provisions, service providers are prohibited from:
- using recommendation algorithm-based services to engage in illegal activities including those that endanger national security and public interests, disrupt economic and social order, and infringe upon the legitimate rights and interests of others, etc.;
- using recommendation algorithm-based services to disseminate information prohibited by laws and administrative regulations (and there is also a positive obligation for service providers to prevent the dissemination of bad information);
- setting up algorithm models that lead users into addiction or excessive consumption, or that are illegal or against ethics and morals;
- creating or producing false or misleading news content, or sharing news from sources outside the parameters set by the government;
- using algorithms to create fake accounts, engage in illegal account trading, manipulate user accounts, or generate false likes, comments or shares;
- using algorithms to block information, make excessive recommendations, manipulate rankings in search results, control trends, or otherwise disrupt information presentation in a way that influences public opinion online or evades supervisory or regulatory oversight; and
- engaging in monopolistic or unfair competitive practices by using algorithms to place unreasonable restrictions on other internet information service providers or disrupt their lawful operations.
Prohibited activities in Denmark
Under Denmark's opt-out from EU justice and home affairs measures, set out in Protocol (No 22) on the position of Denmark, certain provisions of Article 5 of the EU AI Act do not apply in Denmark. These include the prohibitions on the use of AI for biometric categorisation and emotion recognition in the context of police cooperation and criminal justice, as well as Article 5(1)(h) and Article 5(2) to (6). These exemptions reflect Denmark's specific legal position within the EU.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques or techniques that are manipulative or deceptive and have the effect or objective of materially distorting those people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others either in social contexts unrelated to the context in which the data was originally gathered, or that is unjustified or disproportionate to their social behaviour or its gravity (or both).
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Prohibited activities in France
In France, the CNCDH Opinion recommends banning: (i) the use of choice interfaces whose purpose or effect is to manipulate users to their detriment by exploiting their vulnerabilities; (ii) all forms of “social scoring” implemented by public authorities or by any company, public or private; and (iii) the use of emotion recognition technologies, except where they are intended to reinforce the autonomy of individuals or, more broadly, the effectiveness of their fundamental rights.
Laws specifically addressing AI have not yet been introduced in Hong Kong.
Although not strictly prohibited, given the GenAI Guideline’s non-binding nature, the Guideline states that systems posing existential threats (e.g., uses causing harm or affecting human safety, or subliminal manipulation) carry an unacceptable level of risk and should therefore be prohibited. Technology Developers should bear legal liability for creating such unacceptable risks when developing or deploying generative AI technologies.
Prohibited activities in Ireland
The Irish Data Protection Commission (DPC) has actively opened a number of consultations and investigations into certain AI systems.
- In 2024, it ordered X to suspend training of an AI chatbot after issuing High Court proceedings pursuant to Section 134 of the Data Protection Act 2018.
- Also in 2024, it launched a statutory inquiry into Google Ireland under Section 110 of the DPA 2018, regarding compliance with data protection obligations under Article 35 GDPR for its Pathways Language Model 2 (PaLM 2).
- In 2023–24, the DPC engaged with Meta in relation to the training of its large language model using public content shared across the EU, which led to the DPC seeking a formal GDPR opinion on the matter from the EDPB. Engagement with Meta and oversight of its implemented measures and improvements is ongoing.
In February 2025, the DPC also became one of five data protection authorities to sign the Paris declaration, reaffirming their commitment to implementing data governance that promotes innovative and privacy-protecting AI.
Currently, there are no laws in Japan that specifically address this point.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques or techniques that are manipulative or deceptive and have the effect or objective of materially distorting those people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted) leading to either or both, unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered or that is unjustified or disproportionate to their social behaviour or its gravity.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expending facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medial or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques or techniques that are manipulative or deceptive and have the effect or objective of materially distorting those people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted) leading to either or both, unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered or that is unjustified or disproportionate to their social behaviour or its gravity.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expending facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medial or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques or techniques that are manipulative or deceptive and have the effect or objective of materially distorting those people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted) leading to either or both, unfavourable treatment of them or others in social contexts unrelated to the context in which the data was originally gathered or that is unjustified or disproportionate to their social behaviour or its gravity.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expending facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medial or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques or techniques that are manipulative or deceptive and have the effect or objective of materially distorting those people by impairing their ability to make an informed decision, causing them to make a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
As stated above, laws specifically addressing AI have not yet been introduced in Mauritius.
Laws specifically addressing AI have not yet been introduced in Mexico. However, Article 9 of the AI Bill establishes that certain AI systems that cause, or are capable of causing, serious physical or psychological harm to people when used, including use for biometric identification, are considered to be of unacceptable risk. The Bill prohibits the marketing, sale, distribution and use, even free of charge, of AI systems of unacceptable risk specifically intended to:
- Alter the behaviour of any person, in such a way that causes, or is likely to cause, physical or psychological harm.
- Take advantage of the vulnerabilities of specific groups of people, whether due to age or physical or mental disability, to substantially alter their behaviour in a way that causes, or is likely to cause, physical or psychological harm.
- Classify people, in such a way that results in harm or damage to one or more people.
- Carry out remote biometric identification in real time in publicly accessible spaces without the authorisation of the affected person, except in cases of public interest or national security.
- Alter in any way voice or image files of any person in order to modify their original content without authorisation of the affected person or whoever is the owner of the property rights.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Prohibited activities in the Netherlands
In 2024, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) opened a consultation on certain categories of prohibited AI systems within the Netherlands. On its website, the Dutch Data Protection Authority has published summaries of the consultation responses regarding prohibited AI systems for emotion recognition in the workplace or educational institutions and for manipulative and exploitative AI systems. The Dutch Data Protection Authority has indicated it will provide additional guidance, which has not yet been published. No results have yet been published for the consultations on prohibited AI systems for risk assessment of criminal offences and for social scoring.
Laws specifically addressing AI have not yet been introduced in New Zealand, so no AI activities are expressly prohibited. We note that the draft Biometric Processing Privacy Code dated December 2024, once in force, will prohibit biometric categorisation using automated processes; however, AI is not the primary focus of this code.
Laws specifically addressing AI have not been introduced in Nigeria yet.
The content on Prohibited activities in the European Union applies in Norway.
Laws specifically prohibiting activities in relation to AI have not been introduced in Peru yet.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Laws specifically addressing AI have not yet been introduced in Singapore.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
The AI Act does not enumerate or stipulate any specific prohibited actions (in contrast to the treatment of prohibited AI practices under the EU AI Act). However, actions already prohibited under existing laws and regulations – such as infringement of copyright or privacy, and the distribution and publication of illegal information and content – may still be problematic in relation to AI-related services.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Prohibited activities in Spain
Under the Spanish Draft AI Bill, prior judicial authorization will be required to use real-time remote biometric identification in public spaces. To this effect, article 11 of the draft specifies that such authorizations will be granted by the administrative courts. For each use, the requesting authority must submit a written request containing detailed information, including:
- a reference to the system’s registration in the EU database (or justification for any delay due to urgency);
- justification for any prior use without prior authorization, if applicable;
- specific technical and operational details of the system;
- the identity of the individuals targeted;
- the geographic and temporal scope of the measure (which cannot exceed one month, renewable);
- the legal basis and facts justifying the use; and
- the proposed data handling measures once authorization ends.
Data relating to individuals not identified in the authorization must not be processed and must be promptly deleted. Moreover, data collected during authorized use may only be used in the context of the specific investigation for which the authorization was granted. Once transferred to the requesting law enforcement authority, which is responsible for its custody as evidence under applicable law, the data must be destroyed without undue delay.
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviours or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that occurs in social contexts unrelated to the context in which the data was originally gathered, that is unjustified or disproportionate to their social behaviour or its gravity, or both.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation) except where based on lawfully acquired datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
Laws specifically addressing AI have not been introduced in Thailand yet.
Laws specifically addressing AI have not been introduced in Turkey yet.
There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no prohibited activities).
The DIFC’s Data Protection Regulations do not classify AI Systems into unacceptable risk, high risk, limited risk and minimal risk categories, nor do they expressly prohibit any practices. However, the regulations do prohibit the use, operation or provision of an AI System to engage in high risk processing activities unless the DIFC’s Commissioner for Data Protection has established audit and certification requirements for such AI Systems. ‘High risk processing activities’ is defined as processing of personal data where one or more of the following applies:
- processing that includes the adoption of new or different technologies or methods, which creates a materially increased risk to the security or rights of a data subject or renders it more difficult for a data subject to exercise their rights;
- a considerable amount of personal data will be processed (including staff and contractor personal data) and where such processing is likely to result in a high risk to the data subject, including due to the sensitivity of the personal data or risks relating to the security, integrity or privacy of the personal data;
- the processing will involve a systematic and extensive evaluation of personal aspects relating to natural persons, based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person; or
- a material amount of special categories of personal data is to be processed.
Given that no audit and certification requirements have been established at present, there is a de facto prohibition on the use of AI Systems for high risk processing activities. The Commissioner has confirmed that no AI System may be used for high risk processing activities until the audit and certification requirements have been established.
A specific law addressing AI has not been introduced in the UK yet.
As noted, the U.S. has not enacted a comprehensive federal law that explicitly outlines prohibited uses of AI. However, certain AI-related activities are restricted or prohibited under existing laws and proposed legislation. Enforcement actions have been taken under broader legal authorities such as consumer protection, civil rights, and securities laws.
At the federal level, two of the many proposed bills aiming to prohibit specific AI practices are:
- The Preventing Algorithmic Collusion Act (2025), which would ban the use of pricing algorithms – including those powered by AI – that incorporate nonpublic competitor data to facilitate price-fixing
- The Transparency and Responsibility for Artificial Intelligence Networks Act (TRAIN Act) (2025), which would create an administrative subpoena process allowing copyright owners to compel AI developers to disclose copies of, or records sufficient to identify, copyrighted works used to train generative artificial intelligence models
While these bills have not become law, federal agencies have used existing statutes that prohibit deceptive or harmful AI practices. For example:
- The FTC has taken enforcement action against companies for “AI washing” (misleading claims about AI capabilities) and is studying the business practices of companies that offer companion chatbots, focusing on their effect on children
- The SEC has charged firms for misrepresenting the role of AI in investment strategies
- The DOJ has pursued criminal charges in cases involving fraudulent claims about AI functionality
At the state level, some jurisdictions have enacted laws that explicitly prohibit certain AI uses, such as:
- Colorado’s AI Act, which prohibits the deployment of high-risk AI systems without reasonable safeguards to prevent algorithmic discrimination
- Utah’s AI Policy Act, which prohibits the undisclosed use of generative AI in regulated occupations (e.g., legal, medical), requires clear disclosure when AI is used in consumer interactions, and holds individuals liable for AI-driven misconduct under state consumer protection laws
- New York City’s Local Law 144, which prohibits the use of automated employment decision tools without prior bias audits and candidate notification
- Laws in California and Illinois, which restrict the unauthorized use of AI-generated digital replicas and require transparency in political advertising
Overall, while the U.S. lacks a unified list of federally prohibited AI activities, a growing patchwork of federal enforcement actions and state-level statutes is continuing to define the boundaries of acceptable AI use.