Artificial Intelligence in Australia

Fairness / unlawful bias

Information not provided.

Last modified 25 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are incorporated within the following provisions:

  • Recital 27 sets out seven principles for trustworthy AI, including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).
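
As an illustration of what a dataset-bias examination of the kind Article 10 contemplates might involve, the sketch below compares favourable-outcome rates across demographic groups in a data set and flags large disparities using the 'four-fifths' heuristic. The Act prescribes no particular metric, and the function names, threshold and record layout here are assumptions made purely for the example.

```python
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Favourable-outcome rate per demographic group in a data set.

    `records` is assumed to be a list of dicts; the key names are
    illustrative, not anything the AI Act itself specifies."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        if r[outcome_key]:
            favourable[r[group_key]] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the best
    group's rate (the 'four-fifths' heuristic, one common convention)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}
```

A group flagged here would then prompt the closer examination and assessment of the data set that Article 10 requires, and deployers could apply a similar check to input data under Article 26(4).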

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination') and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including debiasing datasets in research and development and developing rules on data processing. The European Parliament also considered this approach to have the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 18 July 2025

The Brazilian AI Strategy discusses the importance of establishing mechanisms that allow the prevention and elimination of biases, which can result both from the algorithms used as well as from the databases used for their training (para. 2, page 7, Summary of the Brazilian Artificial Intelligence Strategy).

Last modified 31 July 2025

The Voluntary Code specifies under its Fairness and Equity principle that signatories should (with varying levels of obligation, as indicated, depending on whether the signatory is a developer or a manager of a generative AI system and on whether the system is available for public use):

  • assess and curate datasets used for training to manage data quality and potential biases; and
  • implement diverse testing methods and measures to assess and mitigate risk of biased output prior to release.
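
A pre-release check of the second kind could be sketched as follows: run the same system over comparable inputs for each group and measure the gap in positive-output rates. Here `predict` stands in for whatever generative or classification system is being assessed, and the gap metric is one simple choice among many; the Voluntary Code does not mandate any specific measure.

```python
def output_parity_gap(predict, samples_by_group):
    """Gap between the highest and lowest positive-output rate across
    groups; a large gap suggests biased output worth mitigating before
    release. `predict` should return 1/0 (or True/False) per sample."""
    rates = {
        group: sum(predict(s) for s in samples) / len(samples)
        for group, samples in samples_by_group.items()
    }
    return max(rates.values()) - min(rates.values())
```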

Last modified 11 July 2025

Article 4 establishes the main principles applicable to AI systems, and Article 4(e) states the following:

Diversity, non-discrimination and equity

AI systems will be developed and used throughout their lifecycle, promoting equal access, gender equality and cultural diversity, whilst avoiding discriminatory effects and selection or information biases that could generate a discriminatory effect.

Last modified 23 July 2025

The GenAI Measures require that measures be taken to prevent discrimination on the basis of race, ethnicity, beliefs, nationality, region, gender, age, occupation, etc. in the process of algorithm design, training data selection, model generation and optimisation and provision of services. Further, the lawful rights and interests of others (including rights to likeness, reputation, honour, personal privacy and personal information) must also be respected.

Under the Recommendation Algorithms Provisions, service providers have a responsibility to safeguard specific protected groups, including minors and the elderly, by providing appropriate services in line with those groups' characteristics. Where a recommendation algorithm-based service is deployed for dispatching workers, service providers must also ensure workers' rights to compensation, rest and leave. Consumers' right to fair trading must likewise be protected where a service is deployed to provide goods or services to consumers.

Last modified 26 January 2026

Guidance on fairness / unlawful bias in France

The CNCDH Opinion considers that AI systems inherit biases from two sources: the development process and the training data. These biases can become self-reinforcing and amplify automatically, creating systematic discrimination. According to the CNCDH Opinion, continuous monitoring and adjustment are essential to prevent AI systems from perpetuating discrimination against marginalized communities.

In addition, the Senate Report discusses how multi-layer bias (arising through real-world data and programming choices) is linked to unequal or distorted outputs and a polarized “economy of attention,” reinforcing the policy need for bias detection and mitigation obligations (data governance, testing, documentation) within the EU framework.

Last modified 5 February 2026

Fairness / unlawful bias in Greece

Under Law 4961/2022, public sector bodies using AI systems in decision-making are also subject to certain obligations to mitigate discrimination and unlawful bias-related risks. Pursuant to Article 5, public sector bodies must conduct an algorithmic impact assessment before deploying AI systems. This assessment must evaluate the AI system's purpose, technical parameters, types of decisions supported, the data categories involved, potential risks to individuals' rights (particularly for vulnerable groups such as people with disabilities and chronic conditions), and the societal benefits of the system. Additionally, under Article 7(3), contractors responsible for developing or deploying AI systems for public sector bodies must ensure the system complies with legal standards, thereby protecting human dignity and privacy, preventing discrimination, promoting gender equality, and ensuring accessibility, among other rights.

Private sector bodies are obliged to address and prevent discrimination in the workplace. As stipulated in Article 9, businesses are required to provide clear and comprehensive information to employees or candidates regarding the criteria for taking AI-driven decisions in relation to recruitment, working conditions, or performance assessments. This obligation ensures that AI systems do not result in discrimination based on gender, race, ethnicity, disability, age, or other protected characteristics. Furthermore, Article 10 mandates medium and large enterprises to maintain a registry of AI systems with information such as operational parameters, technical specifications, and the data processed. This registry must also include the company's data ethics policy, outlining measures implemented to safeguard data integrity and prevent discriminatory outcomes.

Last modified 19 July 2025

Laws specifically addressing AI have not yet been introduced in Hong Kong.  

The GenAI Guideline requires risk assessments and strict controls to be implemented at each stage, from initial data collection through model training to content generation, to eliminate model biases. It also requires continuous monitoring and audits to identify and address bias.

The fairness principle within the AI Ethical Framework requires recommendations and results from the AI application to treat individuals within similar groups fairly, without favouritism or discrimination and without causing or resulting in harm. It further provides that this entails maintaining respect for the individuals behind the data and refraining from using datasets that contain discriminatory biases. It recommends various measures for mitigating these risks.

The fairness ethical principle set out in the Guidance specifies that individuals are entitled to be treated in a reasonably equal manner, without unjust bias or unlawful discrimination, and that differential treatments between different individuals or different groups of people should be justifiable with sound reasons. The Model Framework expands on this, including recommending certain measures to mitigate these risks, such as validation and testing.

Last modified 25 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 24 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 23 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 3 February 2026

The Social Principles state that the use of AI should not create inequality or social disadvantage. Relevant policy makers and businesses must have a thorough understanding of AI, along with the knowledge and ethical awareness to use AI appropriately. Furthermore, an educational environment that promotes learning and literacy must be made accessible to everyone.

Last modified 31 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Guidance on fairness / unlawful bias in Latvia

Section 1 of the Law on the Artificial Intelligence Centre stipulates that the purpose of the Law is to establish an artificial intelligence technology ecosystem and a legal framework for cooperation between the public sector, private sector, and higher education institutions, as well as to define the objectives, legal status, tasks, rights, organizational structure, sources of funding, and procedures for the use of funds of the foundation 'Artificial Intelligence Centre'.

Last modified 14 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 24 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 23 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Fairness / unlawful bias in Malta

Fairness is one of the ethical principles set out in the National Framework. Emphasis is placed on the fair development, deployment, use and operation of AI systems, given that AI raises risks such as biased automated decision-making and discrimination.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in Mauritius yet. However, the Data Protection Act 2017 provides that every controller or processor shall ensure that personal data are processed lawfully and fairly in relation to any data subject. The Blueprint aims to promote digital inclusion, ensure equitable access to services, and uphold human rights, including accessibility, data privacy and non-discrimination.

Last modified 26 June 2025

Laws specifically addressing AI have not been introduced in Mexico yet. Article 2 of the AI Bill states as follows:

"In any use of artificial intelligence systems, the protection of human rights must be guaranteed, and therefore any form of discrimination based on ethnic or national origin, gender, age, disabilities, social status, health conditions, religion, opinions, sexual preferences, marital status or any other practice that violates human dignity and aims to, or results in, nullifying or impairing the rights and freedoms of individuals is prohibited in their development and use."

Last modified 29 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in New Zealand yet, so there are no specific fairness and/or unlawful bias requirements. Fairness and unlawful bias requirements under existing legislation could be applied in the AI context, such as the Human Rights Act 1993 (Human Rights Act). While it does not specifically regulate AI, the Human Rights Act is to be read as applying as widely as possible to protect human rights. Therefore, if an AI decision is ultimately attributable to a company, the Human Rights Act would apply to that decision and create an obligation on the company to ensure that decision is not discriminatory.

Last modified 14 July 2025

Laws specifically addressing AI have not been introduced in Nigeria yet. However, other laws applicable to AI, such as the Nigerian Constitution 1999 (as amended) and the Nigeria Data Protection Act 2023, contain provisions addressing fairness and discrimination.

Last modified 17 June 2025

The content on Fairness / unlawful bias in the European Union applies in Norway.

Last modified 9 October 2025

Laws specifically addressing fairness/unlawful bias relating to AI have not been introduced in Peru yet.

Last modified 20 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 23 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 22 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Singapore.

Fairness is one of the guiding principles in the Model Framework. More specifically, it recommends:

  • ensuring that algorithmic decisions do not create discriminatory or unjust impact across different demographic lines (e.g. race, sex, etc.);
  • monitoring and accounting mechanisms to avoid unintentional discrimination when implementing decision-making systems; and
  • consulting a diversity of voices and demographics when developing systems, applications and algorithms.

The Principles recommend the following fairness and ethics practices:

  • individuals or groups of individuals must not be systematically disadvantaged through AI and data analytics (AIDA)-driven decisions unless these decisions can be justified;
  • use of personal attributes as input factors for AIDA-driven decisions is justified;
  • data and models used for AIDA-driven decisions must be regularly reviewed and validated for accuracy and relevance, and to minimize unintentional bias;
  • AIDA-driven decisions must be regularly reviewed so that models behave as designed and intended;
  • use of AIDA is aligned with the firm’s ethical standards, values and codes of conduct; and
  • AIDA-driven decisions are held to at least the same ethical standards as human-driven decisions.

Last modified 28 July 2025

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).
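Article 10 does not prescribe any tooling for examining possible bias in training, validation and testing data sets. Purely as a hypothetical sketch of one such examination, a provider might compare each group's share of a data set against its share of a reference population; the function below and its inputs are illustrative assumptions, not requirements of the Act:

```python
def representation_gaps(dataset_groups, population_shares):
    """Compare each group's share of a data set with its share of a
    reference population.

    `dataset_groups` is a list of group labels, one per record;
    `population_shares` maps group -> expected share (summing to 1).
    Returns {group: dataset_share - expected_share}, so a negative
    value means the group is under-represented in the data set.
    """
    n = len(dataset_groups)
    counts = {}
    for g in dataset_groups:
        counts[g] = counts.get(g, 0) + 1
    return {g: counts.get(g, 0) / n - share
            for g, share in population_shares.items()}
```

For instance, `representation_gaps(["x", "x", "x", "y"], {"x": 0.5, "y": 0.5})` yields `{"x": 0.25, "y": -0.25}`, showing group `"y"` under-represented. Representation is only one of the data-governance aspects Article 10 covers; what counts as "sufficiently representative" depends on the system's intended purpose.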

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to ‘Non-bias and non-discrimination’) and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including debiasing datasets in research and development and developing rules on data processing. The European Parliament also considered that this approach has the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and into a force for equal rights and positive social change.

Last modified 29 July 2025


Currently, the AI Act does not clearly stipulate this, but it is recommended in the above-mentioned National Guidelines for AI Ethics and other similar documents.

Last modified 29 July 2025



Laws specifically addressing AI have not been introduced in Thailand yet.

Last modified 25 July 2025

Laws specifically addressing AI have not been introduced in Turkey yet. NAIS sets out an 'AI Principle' of 'Fairness', as follows (page 60 of NAIS):

"AI systems should be designed to provide an equal and fair service to all stakeholders while adhering to the rule of law and fundamental rights and freedoms. The fairness of AI systems means that the benefits of AI technology are shared at local, national and international levels, while taking into account the specific needs of different age groups, different cultural systems, different language groups, people with disabilities, and disadvantaged, marginalized and vulnerable segments of the society. It should be ensured that decisions made based on algorithms do not give rise to discriminatory or unfair effects on different demographic populations. In order to prevent the emergence of unintentional discrimination in decision-making processes, monitoring and accountability mechanisms should be developed and those mechanisms should be included in the implementation process."

Last modified 30 July 2025

There is no unified federal law or emirate-level law in the UAE that has a primary focus on regulating AI (and therefore no binding obligations in relation to fairness and bias).

However, the AI Ethics Guide contains a principle of fairness which provides that:

  • Data ingested should, where possible, be accurate and representative of the population.
  • Algorithms should avoid non-operational bias.
  • Steps should be taken to mitigate and disclose the biases inherent in datasets.
  • Significant decisions should be provably fair.
  • All personnel involved in the development, deployment and use of AI Systems have a role and responsibility to operationalize AI fairness and should be educated accordingly.

The DIFC’s Data Protection Regulations also provide that AI Systems must be designed in accordance with the principle of fairness. In particular, AI Systems should be designed to treat all individuals equally and fairly, regardless of race, gender, or other specifically subjective factors; and AI Systems should be designed to avoid potential biases, including unjust bias, or where possible, mitigate bias that could lead to unfair outcomes.

Last modified 4 August 2025

There is no single statute addressing AI in the UK yet. Deployment of AI systems with specific biases could breach existing laws, including the Equality Act 2010, the Data Protection Act 2018 and/or various employment laws, depending on context.

The principle of fairness identified in the White Paper specifies that AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Since AI can have a significant impact on people’s lives, the principle states that AI-enabled decisions with high impact outcomes should not be arbitrary and should be justifiable.

The Interim AI Report identified the 'Bias challenge', i.e. that AI can introduce or perpetuate biases that society finds unacceptable. 

Last modified 23 February 2026

As with transparency, there is no federal law in the U.S. that specifically addresses fairness, bias, or other forms of algorithmic discrimination in AI systems. Under the Biden Administration, federal agencies sought to address these issues by applying existing civil rights, employment, and consumer protection laws to AI use cases. However, this activity has almost entirely ended, and agency-issued guidance on these subjects has in some cases been removed from public websites. Meanwhile, however, several states have enacted or proposed legislation to directly address algorithmic discrimination. For example:

  • California’s Fair Employment and Housing Act applies to employers’ use of “[AI], algorithms, and other automated-decision systems” in employment decisions;
  • Colorado’s AI Act prohibits the deployment of high-risk AI systems without reasonable safeguards to prevent algorithmic discrimination, with enforcement led by the state Attorney General;
  • Illinois has enacted workplace AI legislation that prohibits the use of AI in hiring or employment decisions that could result in discrimination;
  • New Jersey issued guidance clarifying that the New Jersey Law Against Discrimination (LAD) applies to “algorithmic discrimination” resulting from the use of AI and other decision-making tools, including in employment; and
  • New York City’s Local Law 144 requires annual bias audits for automated employment decision tools and mandates candidate notification.
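Bias audits of the kind Local Law 144 requires typically report, for each demographic group, a selection rate and its ratio to the rate of the most-selected group. The sketch below illustrates that ratio calculation only; the function name and data shape are illustrative assumptions, and the precise metrics and categories an audit must report are defined by the law and its implementing rules, not by this example:

```python
def impact_ratios(selections):
    """Selection rate of each group relative to the group with the
    highest rate (often called the 'impact ratio' in bias audits).

    `selections` maps group -> (selected_count, total_count).
    Returns {group: rate / highest_rate}, so the most-selected
    group always scores 1.0.
    """
    rates = {g: sel / tot for g, (sel, tot) in selections.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}
```

For example, `impact_ratios({"a": (8, 10), "b": (4, 10)})` returns `{"a": 1.0, "b": 0.5}`. A common heuristic from the employment-discrimination context treats ratios below 0.8 as warranting scrutiny, but that threshold is not itself part of Local Law 144.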

These state and local efforts, combined with prior federal activity, may reflect a growing – though not entirely shared – belief that AI systems can perpetuate or amplify existing societal biases, and that legal frameworks are evolving to ensure fairness, particularly in domains like employment, housing, and healthcare.

Last modified 10 March 2026
