Artificial Intelligence in Australia
Fairness / unlawful bias
Regulatory guidance / voluntary codes in Australia
On 23 May 2025, the Australian Signals Directorate's Australian Cyber Security Centre, together with its counterparts in the US, UK and New Zealand, released guidance on best practices for AI data security. The guidance sets out key data security risks in AI use and provides a list of best practice guidelines, including, but not limited to, sourcing reliable data and tracking data provenance, verifying and maintaining data integrity during storage and transport, and encrypting data.
In March 2025, the Commonwealth Ombudsman released an Automated Decision Making Better Practice Guide. The Guide is intended to inform the selection, adoption and use of AI by government agencies to ensure their compliance with Australian laws, including administrative law. Appendix A of the Guide features a comprehensive checklist that may assist government and non-government entities with decision-making about their use of AI.
Also in March 2025, the Australian Government Digital Transformation Agency released AI and Cyber Risk model clauses for procuring or developing AI models.
On 21 October 2024, the Office of the Australian Information Commissioner (OAIC), the national regulator for privacy and freedom of information, released two guidance documents relating to AI:
- Guidance on privacy and the use of commercially available AI products – This guidance document is intended to assist organisations deploying and using commercially available AI systems in complying with their privacy obligations. The guidance document specifies that privacy obligations apply to any personal information input into an AI system and to the output generated by the AI system (where the output contains personal information). The OAIC also recommends that no personal information be entered into publicly available generative AI tools.
- Guidance on privacy and developing and training generative AI models – This guidance document recommends that AI developers take reasonable steps to ensure accuracy in generative AI models. With respect to privacy obligations, it notes that personal information includes inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes). In addition, this guidance document reminds developers that publicly available or accessible data may not automatically be legally used to train or fine-tune generative AI models or systems.
In September 2024, Australia's Department of Industry, Science and Resources published a Proposal Paper for introducing mandatory guardrails for AI in high-risk settings (Proposal Paper introducing mandatory guardrails). The paper identifies two broad categories of high-risk AI, namely (1) AI systems with known or foreseeable proposed uses that are considered to be high risk, and (2) advanced, highly capable general-purpose AI (GPAI) models that are capable of being used, or of being adapted for use, for a variety of purposes, both for direct use and for integration into other systems, where all possible applications and risks cannot be foreseen.
With respect to the first category listed above, the principles that organisations must consider in designating an AI system as high-risk relate to the risk of adverse impacts to:
- an individual's human rights, health or safety, and legal rights (e.g. legal effects, defamation or similarly significant effects on an individual);
- groups of individuals or collective rights of cultural groups; and
- the broader Australian economy, society, environment and rule of law,
as well as the severity and extent of the adverse impacts outlined above.
With respect to AI designated as high-risk, the Proposal Paper introducing mandatory guardrails sets out the following proposed mandatory guardrails for organisations developing or deploying high-risk AI systems (page 35):
- "Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
- Establish and implement a risk management process to identify and mitigate risks;
- Protect AI systems, and implement data governance measures to manage data quality and provenance;
- Test AI models and systems to evaluate model performance and monitor the system once deployed;
- Enable human control or intervention in an AI system to achieve meaningful human oversight;
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI generated content;
- Establish processes for people impacted by AI systems to challenge use or outcomes;
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
- Keep and maintain records to allow third parties to assess compliance with guardrails; and
- Undertake conformity assessments to demonstrate and certify compliance with guardrails."
The definition of high-risk AI and the guardrails are expected to be refined based on feedback provided by Australian stakeholders in response to the Proposal Paper introducing mandatory guardrails.
On 5 September 2024, the Australian Government released a Voluntary AI Safety Standard publication that sets out substantially similar guardrails as those in the Proposal Paper introducing mandatory guardrails, with the exception of guardrail 10, which states:
"Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness."
Whereas the Proposal Paper introducing mandatory guardrails applies to high-risk AI, the Voluntary AI Safety Standard sets out voluntary guidelines for developers and deployers of AI to, among other things, protect people and communities from harm, avoid reputational and financial risks to their organisations, increase organisational and community trust and confidence in AI systems, services and products, and align with legal obligations and expectations in Australia.
On 1 September 2024, the Policy for the Responsible Use of AI in Government (Policy) came into effect, aiming to empower the Australian Government to safely, ethically and responsibly engage with AI, strengthen public trust in the government's use of AI, and adapt to technological and policy changes over time.
In particular, the Policy requires government agencies to:
- designate accountability for compliance with the Policy to certain public officials; and
- publish and keep updated an AI transparency statement.
Additional recommendations include:
- fundamental AI training for all staff;
- additional training for staff with roles or responsibilities in connection with AI;
- understanding and recording where and how AI is being used within agencies;
- integrating AI considerations into existing frameworks;
- participating in the Australian Government's AI assurance framework; and
- monitoring AI use cases and keeping up to date with policy changes.
On 1 November 2023, Australia became a signatory to the Bletchley Declaration, which establishes a collective understanding between 28 countries and the European Union on the opportunities and risks posed by AI.
In November 2019, the Australian Government published its AI Ethics Principles (Ethics Principles), designed to ensure that AI is safe, secure and reliable and to:
- help achieve safer, more reliable and fairer outcomes for all Australians;
- reduce the risk of negative impact on those affected by AI applications; and
- assist businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
Definitions in Australia
Information not provided.
Prohibited activities in Australia
Information not provided.
Controls on generative AI in Australia
Information not provided.
User transparency in Australia
Information not provided.
Fairness / unlawful bias in Australia
Information not provided.
At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.
Within the EU AI Act, non-discrimination and fairness are incorporated within the following:
- Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
- Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
- Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).
The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to 'Non-bias and non-discrimination') and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered this approach to have the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.
The Brazilian AI Strategy discusses the importance of establishing mechanisms that allow the prevention and elimination of biases, which can result both from the algorithms used as well as from the databases used for their training (para. 2, page 7, Summary of the Brazilian Artificial Intelligence Strategy).
The Voluntary Code specifies under its Fairness and Equity principle that signatories should (with varying levels of obligation, as indicated, depending on whether a signatory is a developer or a manager of a generative AI system and on whether the system is available for public use):
- assess and curate datasets used for training to manage data quality and potential biases; and
- implement diverse testing methods and measures to assess and mitigate risk of biased output prior to release.
Article 4 establishes the main principles applicable to AI systems, and Article 4(e) states the following:
Diversity, non-discrimination and equity
AI systems will be developed and used throughout their lifecycle, promoting equal access, gender equality and cultural diversity, whilst avoiding discriminatory effects and selection or information biases that could generate a discriminatory effect.
The GenAI Measures require that measures be taken to prevent discrimination on the basis of race, ethnicity, beliefs, nationality, region, gender, age, occupation, etc. in the process of algorithm design, training data selection, model generation and optimisation and provision of services. Further, the lawful rights and interests of others (including rights to likeness, reputation, honour, personal privacy and personal information) must also be respected.
Under the Recommendation Algorithms Provisions, service providers have a responsibility to safeguard specific protected groups, including minors and the elderly, by providing appropriate services that are in line with such groups' characteristics. Where a recommendation algorithm-based service is deployed in an employee work dispatching use case, they must also ensure workers' rights to compensation, rest and leave. Consumers' right to fair trading must also be protected where a service is deployed to provide goods or services to consumers.
Guidance on fairness / unlawful bias in France
The CNCDH Opinion considers that AI systems inherit biases from two sources: the development process and the training data. These biases can self-reinforce and amplify automatically, producing systematic discrimination. According to the CNCDH Opinion, continuous monitoring and adjustment are therefore essential to prevent AI systems from perpetuating discrimination against marginalized communities.
In addition, the Senate Report discusses how multi-layer bias (arising from both real-world data and programming choices) is linked to unequal or distorted outputs and a polarized "economy of attention," reinforcing the policy need for bias detection and mitigation obligations (data governance, testing, documentation) within the EU framework.
Fairness / unlawful bias in Greece
Under Law 4961/2022, public sector bodies using AI systems in decision-making are also subject to certain obligations to mitigate discrimination and unlawful bias-related risks. Pursuant to Article 5, public sector bodies must conduct an algorithmic impact assessment before deploying AI systems. This assessment must evaluate the AI system's purpose, technical parameters, the types of decisions supported, the data categories involved, potential risks to individuals' rights (particularly for vulnerable groups, such as people with disabilities and chronic conditions) and the societal benefits of the system. Additionally, under Article 7(3), contractors responsible for developing or deploying AI systems for public sector bodies must ensure the system complies with legal standards, thereby protecting human dignity and privacy, preventing discrimination, promoting gender equality and ensuring accessibility, among other rights.
Private sector bodies are obliged to address and prevent discrimination in the workplace. As stipulated in Article 9, businesses are required to provide clear and comprehensive information to employees or candidates regarding the criteria for taking AI-driven decisions in relation to recruitment, working conditions, or performance assessments. This obligation ensures that AI systems do not result in discrimination based on gender, race, ethnicity, disability, age, or other protected characteristics. Furthermore, Article 10 mandates medium and large enterprises to maintain a registry of AI systems with information such as operational parameters, technical specifications, and the data processed. This registry must also include the company's data ethics policy, outlining measures implemented to safeguard data integrity and prevent discriminatory outcomes.
Laws specifically addressing AI have not yet been introduced in Hong Kong.
The GenAI Guideline requires risk assessments and strict controls to be implemented at each stage (from initial data collection through model training to content generation) to eliminate model biases. It also requires continuous monitoring and audits to identify and address bias.
The fairness principle within the AI Ethical Framework requires that recommendations or results produced by the AI application treat individuals within similar groups fairly, without favouritism or discrimination and without causing or resulting in harm. It further provides that this entails maintaining respect for the individuals behind the data and refraining from using datasets that contain discriminatory biases, and it recommends various measures for mitigating these risks.
The fairness ethical principle set out in the Guidance specifies that individuals are entitled to be treated in a reasonably equal manner, without unjust bias or unlawful discrimination, and that differential treatments between different individuals or different groups of people should be justifiable with sound reasons. The Model Framework expands on this, including recommending certain measures to mitigate these risks, such as validation and testing.
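By way of illustration, the kind of output validation and testing recommended by the Model Framework is often operationalised with simple group-rate comparisons. The Python sketch below applies the four-fifths (80%) rule of thumb to a model's approval decisions; the threshold, group labels and sample data are assumptions for illustration, not anything the Guidance or the Model Framework prescribes.

```python
# Minimal sketch: four-fifths rule check on model approval rates.
# Group labels ("A", "B"), the 0.8 threshold and the sample data are
# hypothetical illustrations, not regulatory requirements.

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    counts, approved = {}, {}
    for group, ok in decisions:
        counts[group] = counts.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / counts[g] for g in counts}

def passes_four_fifths(decisions):
    """True if the lowest group's approval rate is at least 80% of the highest."""
    rates = approval_rates(decisions)
    return min(rates.values()) >= 0.8 * max(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(passes_four_fifths(decisions))  # False here: 1/3 < 0.8 * 2/3
```

A failed check of this kind would typically trigger further investigation of the model and its data rather than serving as a legal conclusion in itself.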
The Social Principles state that the use of AI should not create inequality or social disadvantage. Relevant policy makers and businesses must have a thorough understanding of AI, along with the knowledge and ethical awareness to use AI appropriately. Furthermore, an educational environment that promotes learning and literacy must be made accessible to everyone.
Guidance on fairness / unlawful bias in Latvia
Section 1 of the Law on the Artificial Intelligence Centre stipulates that the purpose of the Law is to establish an artificial intelligence technology ecosystem and a legal framework for cooperation between the public sector, private sector, and higher education institutions, as well as to define the objectives, legal status, tasks, rights, organizational structure, sources of funding, and procedures for the use of funds of the foundation 'Artificial Intelligence Centre'.
Fairness / unlawful bias in Malta
Fairness is one of the ethical principles set out in the National Framework. Emphasis is placed on the fair development, deployment, use and operation of AI systems, given that AI raises risks such as biased automated decision-making and discrimination.
Laws specifically addressing AI have not been introduced in Mauritius yet. However, the Data Protection Act 2017 provides that every controller or processor shall ensure that personal data are processed lawfully and fairly in relation to any data subject. The Blueprint aims to promote digital inclusion, ensure equitable access to services and uphold human rights, including accessibility, data privacy and non-discrimination.
Laws specifically addressing AI have not been introduced in Mexico yet. Article 2 of the AI Bill states as follows:
"In any use of artificial intelligence systems, the protection of human rights must be guaranteed, and therefore any form of discrimination based on ethnic or national origin, gender, age, disabilities, social status, health conditions, religion, opinions, sexual preferences, marital status or any other practice that violates human dignity and aims to, or results in, nullifying or impairing the rights and freedoms of individuals is prohibited in their development and use."
Laws specifically addressing AI have not been introduced in New Zealand yet, so there are no specific fairness and/or unlawful bias requirements. Fairness and unlawful bias requirements under existing legislation could be applied in the AI context, such as the Human Rights Act 1993 (Human Rights Act). While it does not specifically regulate AI, the Human Rights Act is to be read as applying as widely as possible to protect human rights. Therefore, if an AI decision is ultimately attributable to a company, the Human Rights Act would apply to that decision and create an obligation on the company to ensure that decision is not discriminatory.
Laws specifically addressing AI have not been introduced in Nigeria yet. However, other laws applicable to AI, such as the Nigerian Constitution 1999 (as amended) and the Nigeria Data Protection Act 2023, contain provisions addressing fairness and discrimination.
The content on Fairness / unlawful bias in the European Union applies in Norway.
Laws specifically addressing fairness/unlawful bias relating to AI have not been introduced in Peru yet.
At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.
Within the EU AI Act, non-discrimination and fairness is incorporated withing the following:
- Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
- Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
- Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).
The Framework addresses the issue of bias (most notably in paragraphs 27-37 relating to ‘Non-bias and non-discrimination') and highlighted that AI has the potential to create and reinforce biases and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including debiasing datasets in research and development and by the development of rules on data processing. The European Parliament also considered this approach to have the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.
Laws specifically addressing AI have not yet been introduced in Singapore.
Fairness is one of the guiding principles in the Model Framework. More specifically, it recommends:
- ensuring that algorithmic decisions do not create discriminatory or unjust impact across different demographic lines (e.g. race, sex, etc.);
- monitoring and accounting mechanisms to avoid unintentional discrimination when implementing decision-making systems; and
- consulting a diversity of voices and demographics when developing systems, applications and algorithms.
The Principles recommend the following fairness and ethics practices:
- individuals or groups of individuals must not be systematically disadvantaged through AI and data analytics (AIDA)-driven decisions unless these decisions can be justified;
- use of personal attributes as input factors for AIDA-driven decisions is justified;
- data and models used for AIDA-driven decisions must be regularly reviewed and validated for accuracy and relevance, and to minimise unintentional bias;
- AIDA-driven decisions must be regularly reviewed so that models behave as designed and intended;
- use of AIDA is aligned with the firm’s ethical standards, values and codes of conduct; and
- AIDA-driven decisions are held to at least the same ethical standards as human-driven decisions.
Currently, the AI Act does not expressly stipulate fairness requirements, but fairness is recommended in the above-mentioned National Guidelines for AI Ethics and other similar documents.
Laws specifically addressing AI have not yet been introduced in Thailand.
Laws specifically addressing AI have not yet been introduced in Turkey. However, NAIS sets out an 'AI Principle' of 'Fairness', as follows (page 60 of NAIS):
"AI systems should be designed to provide an equal and fair service to all stakeholders while adhering to the rule of law and fundamental rights and freedoms. The fairness of AI systems means that the benefits of AI technology are shared at local, national and international levels, while taking into account the specific needs of different age groups, different cultural systems, different language groups, people with disabilities, and disadvantaged, marginalized and vulnerable segments of the society. It should be ensured that decisions made based on algorithms do not give rise to discriminatory or unfair effects on different demographic populations. In order to prevent the emergence of unintentional discrimination in decision-making processes, monitoring and accountability mechanisms should be developed and those mechanisms should be included in the implementation process."
There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no binding obligations in relation to fairness and bias).
However, the AI Ethics Guide contains a principle of fairness which provides that:
- Data ingested should, where possible, be accurate and representative of the population.
- Algorithms should avoid non-operational bias.
- Steps should be taken to mitigate and disclose the biases inherent in datasets.
- Significant decisions should be provably fair.
- All personnel involved in the development, deployment and use of AI Systems have a role and responsibility to operationalize AI fairness and should be educated accordingly.
The DIFC’s Data Protection Regulations also provide that AI Systems must be designed in accordance with the principle of fairness. In particular, AI Systems should be designed to treat all individuals equally and fairly, regardless of race, gender, or other specifically subjective factors; and AI Systems should be designed to avoid potential biases, including unjust bias, or where possible, mitigate bias that could lead to unfair outcomes.
There is no single statute addressing AI in the UK yet. Deployment of AI systems with specific biases could breach existing laws, including the Equality Act 2010, the Data Protection Act 2018 and/or various employment laws, depending on context.
The principle of fairness identified in the White Paper specifies that AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. Since AI can have a significant impact on people’s lives, the principle states that AI-enabled decisions with high impact outcomes should not be arbitrary and should be justifiable.
The Interim AI Report identified the 'Bias challenge', i.e. that AI can introduce or perpetuate biases that society finds unacceptable.
As with transparency, there is no federal law in the U.S. that specifically addresses fairness, bias, or other forms of algorithmic discrimination in AI systems. Under the Biden Administration, federal agencies sought to address these issues by applying existing civil rights, employment, and consumer protection laws to AI use cases. However, this activity has almost entirely ended, and agency-issued guidance on these subjects has in some cases been removed from public websites. Meanwhile, however, several states have enacted or proposed legislation to directly address algorithmic discrimination. For example:
- California’s Fair Employment and Housing Act applies to employers’ use of “[AI], algorithms, and other automated-decision systems” in employment decisions;
- Colorado’s AI Act prohibits the deployment of high-risk AI systems without reasonable safeguards to prevent algorithmic discrimination, with enforcement led by the state Attorney General;
- Illinois has enacted workplace AI legislation that prohibits the use of AI in hiring or employment decisions that could result in discrimination;
- New Jersey issued guidance clarifying that the New Jersey Law Against Discrimination (LAD) applies to “algorithmic discrimination” resulting from the use of AI and other decision-making tools, including in employment; and
- New York City’s Local Law 144 requires annual bias audits for automated employment decision tools and mandates candidate notification.
These state and local efforts, combined with prior federal activity, may reflect a growing – though not entirely shared – belief that AI systems can perpetuate or amplify existing societal biases, and that legal frameworks are evolving to ensure fairness, particularly in domains like employment, housing, and healthcare.
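The bias audits required by New York City's Local Law 144, for example, centre on selection rates and impact ratios (a category's selection rate divided by the selection rate of the most-selected category). A minimal sketch of that calculation, assuming binary selected/not-selected outcomes and candidate data already labelled by category:

```python
def impact_ratios(selections, groups):
    """Selection rate per category and each category's impact ratio relative
    to the most-selected category, the core metric reported in Local Law 144
    bias audits. `selections`: 1 if the candidate was selected or advanced,
    else 0; `groups`: the corresponding category labels."""
    totals, selected = {}, {}
    for outcome, group in zip(selections, groups):
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + outcome
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    # Return (selection_rate, impact_ratio) per category.
    return {g: (rates[g], rates[g] / top_rate) for g in rates}
```

This hypothetical helper omits details the implementing rules address, such as the treatment of very small categories, and is a sketch of the metric rather than a compliant audit.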