Artificial Intelligence in Greece

Fairness / unlawful bias in the EU

At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.

Within the EU AI Act, non-discrimination and fairness are addressed in the following provisions:

  • Recital 27 sets out seven principles for trustworthy AI, including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
  • Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
  • Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).

The Framework addresses the issue of bias (most notably in paragraphs 27-37, on 'Non-bias and non-discrimination') and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including debiasing datasets in research and development and developing rules on data processing. The European Parliament also considered that this approach could turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.

Fairness / unlawful bias in Greece

Under Law 4961/2022, public sector bodies using AI systems in decision-making are also subject to certain obligations to mitigate discrimination and unlawful bias-related risks. Pursuant to Article 5, public sector bodies must conduct an algorithmic impact assessment before deploying AI systems. This assessment must evaluate the AI system's purpose, technical parameters, types of decisions supported, the data categories involved, potential risks to individuals' rights (particularly for vulnerable groups, such as people with disabilities and chronic conditions), and the societal benefits of the system. Additionally, under Article 7(3), contractors responsible for developing or deploying AI systems for public sector bodies must ensure that the system complies with legal standards, thereby protecting human dignity and privacy, preventing discrimination, promoting gender equality, and ensuring accessibility, among other rights.

Private sector bodies are obliged to address and prevent discrimination in the workplace. As stipulated in Article 9, businesses are required to provide clear and comprehensive information to employees or candidates regarding the criteria used in making AI-driven decisions in relation to recruitment, working conditions, or performance assessments. This obligation ensures that AI systems do not result in discrimination based on gender, race, ethnicity, disability, age, or other protected characteristics. Furthermore, Article 10 requires medium-sized and large enterprises to maintain a registry of AI systems containing information such as operational parameters, technical specifications, and the data processed. This registry must also include the company's data ethics policy, outlining the measures implemented to safeguard data integrity and prevent discriminatory outcomes.
