Artificial Intelligence in the European Union

Fairness / unlawful bias in the European Union

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are incorporated within the following:

  • Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to 'Non-bias and non-discrimination') and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including debiasing datasets in research and development and developing rules on data processing. The European Parliament also considered this approach to have the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

