
Artificial Intelligence in Greece
Law / proposed law in Greece
Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions only apply from specific dates:
- 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapters I and II).
- 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
- 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
- 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).
AI compliance in Greece
In Greece, the primary source of regulation of AI is Law 4961/2022, entitled 'Emerging information and communication technologies, strengthening digital governance and other provisions' (Law 4961/2022). Law 4961/2022 was enacted before the EU AI Act came into force and remains applicable. Articles 1-14 of Law 4961/2022 introduce a comprehensive framework for the utilization of AI by public and private entities, aiming at transparency, accountability and the protection of citizens' rights. Law 4961/2022 includes provisions for the secure use of AI systems, the protection of personal data, and transparency in decision-making processes. Public entities are required to conduct algorithmic impact assessments and maintain AI system registries, while in the private sector, rules are established to ensure the proper use of AI in employment relations and data management. Additionally, specialized bodies are established under Law 4961/2022, such as the Coordinating Committee for AI, which oversees the implementation of the National Strategy, and the AI Observatory, which monitors AI-related activities in Greece, identifies best practices and assesses their impact. These provisions aim to safeguard fundamental rights, promote innovation, and ensure compliance with ethical principles, equality and privacy.
A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.
Product Liability Directive in Greece
Greece has not yet transposed the Product Liability Directive into national law.
Regulatory guidance / voluntary codes in Greece
In order to ensure the consistent, effective, and uniform application of the EU AI Act across the European Union, the European Commission has approved guidelines (non-binding, since only the Court of Justice of the European Union has authoritative interpretation powers) on the following provisions of the text:
- Prohibited AI practices, on 4 February 2025 (however, the Commission has not yet formally adopted them); and
- The definition of an AI system, on 6 February 2025 (however, the Commission has not yet formally adopted them).
Further guidelines on high-risk AI systems are expected, and are currently under consultation. The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.
Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, have the possibility to adopt voluntary codes of conduct (Article 95) in order to adopt, on a non-binding basis, technical solutions and industry best practices. Because of this, it is expected that the AI Office will issue further codes of conduct (which will be distinct from the GPAI Code of Practice).
In May 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, whilst being conducive to technological progress and innovation.
AI compliance in Greece
The High-Level Advisory Committee on Artificial Intelligence, established in November 2023 under the supervision of the Greek Prime Minister, developed Greece’s national AI strategy entitled 'A Blueprint for Greece's AI Transformation' (AI Strategy) in November 2024.
The AI Strategy sets a comprehensive set of principles for ensuring that AI systems are effective and are developed and used responsibly throughout their whole lifecycle. These principles:
- stress the importance of first confirming that AI is truly necessary for the given solution, ensuring that the project is feasible, and using high-quality data for training algorithms;
- emphasize the need for clear processes and rules governing data access, alignment among stakeholders, and defining success through key performance indicators and risk-value assessments;
- highlight the importance of appropriate infrastructure, organization, and workforce for the deployment of AI, while continually evaluating the interpretability and added value of the AI system; and
- address crucial aspects of responsible AI, including monitoring security risks, complying with legal and data regulations, ensuring ethical alignment, and fostering environmental sustainability throughout the AI system’s development and implementation.
The report 'Generative AI Greece 2030', authored by the National Center for Social Research (EKKE) and the National Center for Scientific Research with backing from the Special Secretariat of Foresight, examines the future landscape of generative AI in Greece by 2030. The report proposes co-creating non-mandatory guidelines for public authorities, social partners, and other stakeholders to ensure that AI development aligns with ethical principles and to mitigate the risk of socio-economic divides arising from unequal access to AI. To this end, the report calls for the creation of ethical guidelines and supervision mechanisms for AI that promote societal values, safety, transparency, innovation, and human welfare, while addressing issues like digital inequality and algorithmic discrimination.
The National Commission for Bioethics & Technoethics of Greece has issued an 'Opinion on the applications of Artificial Intelligence in Health in Greece' that includes guidelines emphasizing that AI applications must align with fundamental ethical principles, such as:
- Autonomy: Respect patients' right to informed decision-making while ensuring privacy and consent;
- Beneficence and no harm: Improve health outcomes or diagnostics without causing harm;
- Safety: Implement strict quality control to prevent errors;
- Fairness: Ensure fair distribution of AI benefits in healthcare;
- Equality: Provide equitable access to AI-based healthcare for all;
- Prevention & Precaution: Stop AI use if risks are identified or uncertain;
- Explainability: Ensure AI decisions are transparent, interpretable, and accountable;
- Complementarity: AI supports, but does not replace, human medical judgement.
In March 2025, the National Commission for Bioethics & Technoethics of Greece issued another 'Opinion on the use of Artificial Intelligence in Greek schools'. The opinion contains ethical guidelines and policy recommendations for the use of AI in primary and secondary education. The Commission identifies the following ethical principles as fundamental to the introduction of any AI application in schools: respect for human dignity, autonomy, beneficence and no harm, equitable access, complementarity, transparency, sustainability, augmentation over automation, and inventiveness over repetition.
Regarding tertiary education, certain faculties of Greek universities, such as the University of Crete, the National and Kapodistrian University of Athens, the University of Macedonia, the Aristotle University of Thessaloniki and the University of Western Attica, have published guidelines on the use of AI tools by students and faculty members (both administrative and teaching staff). Those guidelines emphasize that the use of AI in Greek universities is permitted as an assistive tool, provided there is full disclosure of AI involvement, critical evaluation of the output, and respect for intellectual property. Submitting AI-generated content as original work without acknowledgment constitutes academic misconduct comparable to plagiarism. Violations may lead to institutional sanctions.
Furthermore, the Hellenic Federation of Enterprises (SEV) has issued a Guide on the use of AI for businesses. This guide aims to help Greek enterprises understand the impact of AI and integrate AI effectively. It focuses on practical changes, business benefits (like productivity, revenue increase, and cost reduction), and employee empowerment. The guide also covers strategy, challenges, and prerequisites for successful implementation, detailing widespread applications across various sectors.
Finally, the Hellenic Association of Communication Agencies (EDEE) and the Hellenic Advertisers Association (SDE) have jointly issued a Best Practice Guide titled ‘10 Principles for the Responsible Use of Artificial Intelligence in Advertising’, addressed to advertising agencies and individuals advertising their products/services.
Appointed supervisory authority in Greece
European Level
The European Commission established the European AI Office (AI Office) on 24 January 2024. The AI Office is a European Commission function and forms part of the Directorate-General for Communications Networks, Content and Technology; it must therefore operate in accordance with the Commission's internal processes. The AI Office is responsible for assisting the European Commission with the oversight, monitoring and enforcement of requirements for GPAI models and systems. It is primarily made up of hired full-time staff from a range of backgrounds such as technology specialists, economists, policy specialists and lawyers.
In addition, the European Artificial Intelligence Board (AI Board) has also been established. The AI Board's core responsibility is to advise and assist the Commission and Member States to facilitate the consistent and effective application of the EU AI Act. The AI Board includes a representative from each Member State, while the AI Office and the European Data Protection Supervisor participate as non-voting observers.
Member State Level
Article 70 of the EU AI Act concerns the designation of national competent authorities by EU Member States. It specifies that each Member State shall establish or designate as national competent authorities at least one notifying authority and at least one market surveillance authority for the purposes of the general supervision and enforcement of the EU AI Act. Where multiple market surveillance authorities are appointed, one of the market surveillance authorities must act as the single point of contact. The authorities must operate independently and without bias. Member States are expected to notify the Commission of their appointed authorities and must provide publicly available information on how to contact them by 2 August 2025.
Supervisory authority in Greece
Greece has not yet appointed any competent authority pursuant to Article 70 of the EU AI Act.
However, in November 2024, the Ministry of Digital Governance published the list of the national authorities and bodies designated to supervise and enforce the respect for fundamental rights, including the right to non-discrimination, in relation to the high-risk AI systems according to Article 77 (2) of the EU AI Act. The list has been submitted to the European Commission and will be updated when required. The authorities are the following:
- The Hellenic Data Protection Authority (ΑΠΔΠΧ);
- The Greek Ombudsman (Συνήγορος του Πολίτη);
- The Hellenic Authority for Communication Security and Privacy (ΑΔΑΕ); and
- The National Commission for Human Rights (EEΔΑ).
The designated authorities will cooperate with the national competent authority/ies of Article 70 of the EU AI Act and will be granted powers including, inter alia, requesting access to documentation produced or maintained by AI system operators to demonstrate compliance with the EU AI Act. They will also be able to address a request to the market surveillance authority to organize the testing of a high-risk AI system through technical means in cases where the documentation provided by AI system operators is not sufficient for determining a potential infringement of fundamental rights. These powers will take effect on 2 August 2026.
In June 2025, the Prime Minister announced the renaming of the Ministry of Digital Governance so as to include a reference to Artificial Intelligence, while the relevant Minister presented the establishment of a new Special Secretariat for AI and Data Governance within the Ministry.
Definitions in Greece
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for putting the AI on the market, either by making it available on the market for the first time or by putting it into service directly for its own purposes under its own name or trademark. An organisation may also become a downstream provider if it makes substantial modifications to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant; the definition therefore also captures GPAI models supplied on the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
Prohibited activities in Greece
Certain AI practices are banned outright under Article 5 of the EU AI Act due to their potential for harm and ethical concerns. These prohibitions aim to protect EU citizens from the most intrusive and potentially abusive uses of AI.
Under Article 5, these uses and technologies include:
- Subliminal techniques: Deploying subliminal techniques, or techniques that are manipulative or deceptive, with the effect or objective of materially distorting a person's behaviour by impairing their ability to make an informed decision, causing them to take a decision they would not otherwise have taken, in a manner that causes significant harm to them or others (or is reasonably likely to).
- Exploiting vulnerabilities: Exploiting vulnerabilities of specific groups due to age, disability, or social or economic situation – as with subliminal techniques, this must have the effect or objective of materially distorting behaviour and cause significant harm to them or others (or be reasonably likely to).
- Social scoring: Evaluating or classifying natural persons or groups based on their social behaviour or personality characteristics (known, inferred or predicted), leading to unfavourable treatment of them or others that either occurs in social contexts unrelated to the context in which the data was originally gathered, or is unjustified or disproportionate to their social behaviour or its gravity.
- Crime profiling: Assessing the risk of an individual committing a crime, based on the profiling of that person and assessing their personality traits (as opposed to using such systems to support a human assessment of the involvement of a person).
- Facial recognition databases: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
- Inferring emotions: Inferring emotions in workplaces and educational institutions (except for medical or safety reasons).
- Biometric categorisation: Categorising natural persons based on their biometric data to deduce or infer sensitive information about them (i.e. their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation), except for the labelling or filtering of lawfully acquired biometric datasets (including in law enforcement).
- Biometric identification: Engaging in ‘real-time’ biometric identification systems in publicly accessible spaces for law enforcement purposes, except under specific exempt circumstances.
High-risk AI in Greece
Article 6 of the EU AI Act sets out classification rules for high-risk AI systems, stating that high-risk AI systems fall within two categories: (i) safety components of products or products themselves regulated by existing EU product safety laws (listed in Annex I, e.g., medical devices, automotive AI); or (ii) used in specified areas (listed in Annex III), namely:
- Critical infrastructure: AI systems used as safety components in the management or operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity.
- Education and vocational training: AI systems that determine access to education or training or otherwise impact a person's future opportunities and career development and AI systems used for monitoring and detecting prohibited behaviour during tests.
- Employment and worker management: AI systems used in hiring (including the placement of targeted job advertisements), performance evaluation, promotion or termination decisions.
- Access to essential private and public services: AI systems that evaluate eligibility for essential public services, such as social security and healthcare as well as AI systems for evaluating and classifying emergency calls and dispatching emergency services. Additionally, AI systems used to evaluate creditworthiness or during the risk assessment and pricing of life and health insurance.
- Law enforcement: AI systems used by law enforcement for risk assessments, predicting criminal activities (the risk of individuals becoming victims of crime, risk of (re-)offending or otherwise during criminal investigations), for polygraphs (i.e. 'lie detectors' or similar tools), and assessing reliability of evidence.
- Border control and migration: AI systems used to assess visa applications, asylum claims, and border security including for polygraphs (i.e. 'lie detectors' or similar tools) and for detecting, recognising or identifying individuals in migration contexts.
- Judicial and democratic processes: AI systems assisting judicial authorities with researching and interpreting facts and the law and applying the law to a set of facts. As well as AI systems used for influencing the outcome of elections or referendum or voting behaviour.
- Biometric identification and categorisation: AI systems that perform remote biometric identification, AI systems used to categorise individuals based on biometric data or other sensitive or protected attributes, and AI systems used for emotion recognition purposes.
These systems must adhere to stringent requirements to ensure they do not pose unacceptable risks and that they operate in a manner that protects individuals' rights and safety. The classification emphasises the importance of high standards and accountability in deploying AI in sensitive and impactful areas.
The European Commission has the power to amend the above-mentioned categories of high-risk AI systems, including to modify any existing use cases or add new ones (Article 7(1) of the EU AI Act).
Where an AI system falls into one of the two above-mentioned categories but does not pose a significant risk of harm to health, safety or fundamental rights, the operators of such AI systems are relieved from the requirements imposed for high-risk AI systems (except for the EU database registration). However, to benefit from such exemption, a thorough assessment must be documented and strict conditions must be met (these conditions are currently difficult to interpret, and further guidelines from the Commission are expected).
Controls on generative AI in Greece
General-Purpose AI Models
Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:
"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."
GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.
General-Purpose AI Models with Systemic Risk
Article 3(65) of the EU AI Act defines 'systemic risk' as:
"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".
Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed where the cumulative amount of computation used for training exceeds 10^25 floating-point operations, though this may also be established through other indicators and benchmarks) or based on a decision of the Commission.
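To make the compute threshold concrete, the following is a minimal Python sketch of the Article 51 presumption. Only the 10^25 FLOP figure comes from the Act; the 6 × parameters × tokens estimate is a common rule of thumb for dense-model training compute (not something the Act prescribes), and all names and example figures are illustrative.

```python
# Minimal sketch of the Article 51 systemic-risk presumption.
# The 10^25 FLOP threshold comes from the EU AI Act; the
# 6 * parameters * tokens estimate is a common rule of thumb for
# dense-model training compute and is NOT prescribed by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough rule-of-thumb training-compute estimate."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute meets the Act's threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 500-billion-parameter model trained on 10
# trillion tokens uses roughly 3e25 FLOPs, above the 1e25 threshold.
print(presumed_systemic_risk(parameters=5e11, training_tokens=1e13))  # True
```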
Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.
Importantly, the requirements of a GPAI model / system (i.e., without a specific use case) and the requirement of an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model in a high-risk AI system, then the rules for both GPAI models and high-risk AI systems should be complied with.
Enforcement / fines in Greece
The EU AI Act enforces compliance through a structured framework of fines and sanctions, varying in severity based on the nature of the non-compliance.
For non-compliance with prohibited AI practices, fines can reach up to EUR 35 million or 7% of the total worldwide annual turnover, whichever is higher. This covers practices such as manipulative AI systems, exploiting vulnerabilities, social scoring by public authorities, and unauthorized biometric identification in public spaces.
Breaches of high-risk AI system requirements can incur fines of up to EUR 15 million or 3% of the total worldwide annual turnover, whichever is higher. These requirements include risk management, data governance, technical documentation, transparency, and cybersecurity.
Other non-compliance issues, such as providing incorrect or misleading information, can result in fines of up to EUR 7.5 million or 1% of the total worldwide annual turnover, whichever is higher. This applies to breaches not covered by the higher tiers of sanctions.
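Each tier above reduces to a simple "whichever is higher" comparison. The following is a minimal sketch under that reading: the amounts and percentages come from the Act, while the tier names, function and example turnover are assumptions for illustration.

```python
# Minimal sketch of the EU AI Act fine ceilings described above: each tier
# is capped at a fixed amount or a percentage of total worldwide annual
# turnover, whichever is higher. The figures come from the Act; the tier
# names, function, and example turnover are illustrative only.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # EUR 35m or 7%
    "high_risk_requirements": (15_000_000, 0.03),  # EUR 15m or 3%
    "incorrect_information": (7_500_000, 0.01),    # EUR 7.5m or 1%
}


def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the applicable ceiling: fixed cap or turnover share, whichever is higher."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_annual_turnover_eur)


# For a company with EUR 2 billion turnover, 7% (EUR 140m) exceeds the
# EUR 35m fixed cap, so the higher figure applies.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```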
Enforcement / fines in Greece
Law 4961/2022 outlines administrative and criminal sanctions for private sector entities that fail to meet their obligations to disclose AI usage in the workplace. These can include administrative fines ranging from €300 to €50,000, issued by the Labor Inspection Body (ΣΕΠΕ), and temporary suspension of operations.
In addition, Article 9 of Law 4961/2022 refers to the imposition of the criminal sanctions of Law 3996/2011, i.e. imprisonment of at least six months and/or financial penalties (of at least €900), on employers who violate their obligation to inform current and prospective employees about the use of AI systems, and on individuals who prevent inspections by the Labor Inspection Body.
User transparency in Greece
Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:
- Providers of AI systems must ensure that natural persons interacting with an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
- Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws; a minimal illustration of machine-readable marking follows this list.
- Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
- Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
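As a purely illustrative sketch of the machine-readable marking obligation in the second bullet above: the Act does not mandate any particular technique (its recitals mention watermarks, metadata identification and cryptographic methods, among others), but one simple approach is to attach provenance metadata to the output file. The keys and values below are hypothetical.

```python
# Purely illustrative sketch: attaching a machine-readable "AI-generated"
# marker to a PNG image via text metadata, using Pillow. Article 50 does
# not mandate any particular technique, and the key/value names below are
# hypothetical.

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Save a copy of the image with hypothetical provenance metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical key
    metadata.add_text("generator", "example-model-v1")  # hypothetical key
    image.save(dst_path, pnginfo=metadata)


# Usage (paths are placeholders):
# mark_as_ai_generated("output.png", "output_marked.png")
```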
User transparency in Greece
Certain transparency obligations under Law 4961/2022 apply to both public and private sector bodies regarding the AI systems they use. With regard to public sector bodies using AI systems, Article 4 permits the use of AI for decision-making or issuing administrative acts that affect individual or legal entity rights, provided that such use is explicitly authorized by law and safeguards are implemented to protect those rights. Article 6 requires public entities to disclose, in an accessible manner, information to addressees of administrative acts and any other affected legal entities or individuals about the operational parameters, capabilities, and technical characteristics of AI systems, as well as the types of decisions or actions that these systems support.
Article 7 plays a crucial role in ensuring transparency by imposing obligations on contractors who develop AI systems for public sector bodies. These contractors must provide detailed information on the operation of AI systems, so that public sector bodies are able to fulfil their aforementioned obligations according to Article 6. Article 8 mandates public sector bodies to maintain an updated registry of their AI systems, which should be accessible to the National Transparency Authority upon request.
In the private sector, pursuant to Article 10, medium and large enterprises must maintain an updated electronic registry of AI systems used for profiling consumers or assessing employees, which includes information on the operational parameters, the number of individuals affected and safety measures in place. Additionally, businesses must establish ethical data usage policies, which form part of corporate governance disclosures, where applicable. Pursuant to Article 9, private entities using AI systems in employment decisions are required to inform employees or candidates in advance about the system’s role and decision-making parameters.
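For illustration only, a registry entry of the kind Article 10 appears to contemplate might look as follows; Law 4961/2022 prescribes no schema, so every field name below is an assumption.

```python
# Purely hypothetical sketch of what an Article 10 (Law 4961/2022) registry
# entry might capture for a medium or large enterprise. The law does not
# prescribe a schema; every field name here is an assumption.

from dataclasses import dataclass


@dataclass
class AISystemRegistryEntry:
    system_name: str
    purpose: str                  # e.g. consumer profiling, employee assessment
    operational_parameters: dict  # key technical / operational settings
    individuals_affected: int     # number of persons affected by the system
    safety_measures: list         # safeguards in place
    data_ethics_policy_ref: str   # reference to the company's data ethics policy


registry = [
    AISystemRegistryEntry(
        system_name="cv-screening-tool",
        purpose="employee assessment",
        operational_parameters={"model": "example-ranker", "threshold": 0.7},
        individuals_affected=1200,
        safety_measures=["human review of rejections", "annual bias audit"],
        data_ethics_policy_ref="internal data ethics policy v2 (hypothetical)",
    ),
]
```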
Fairness / unlawful bias in Greece
At its core, the EU AI Act is driven by the imperative to safeguard the fundamental rights of EU citizens. The rapid advancement of AI technologies has introduced significant benefits but also potential risks, such as biases in decision-making systems and privacy infringements. The AI Act aims to mitigate these risks by establishing clear rules that ensure AI systems respect the rights enshrined in the EU Charter of Fundamental Rights. This focus on human-centric AI seeks to enhance trust and acceptance among the public, thereby promoting wider adoption of AI technologies in a responsible manner.
Within the EU AI Act, non-discrimination and fairness are incorporated within the following:
- Recital 27 includes seven principles for trustworthy AI including ensuring that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
- Article 10 sets out data and data governance requirements for high-risk AI systems and includes a requirement to examine and assess possible bias in training, validation and testing data sets.
- Deployers are required to ensure that any input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system (Article 26(4)).
The Framework addresses the issue of bias (most notably in paragraphs 27-37, relating to 'Non-bias and non-discrimination') and highlights that AI has the potential to create and reinforce biases, and that bias and discrimination by AI can cause manifest harm to individuals and to society. The European Parliament stated that regulation should encourage the development and sharing of strategies to counter these risks, including by debiasing datasets in research and development and by developing rules on data processing. The European Parliament also considered this approach to have the potential to turn software, algorithms and data into an asset in fighting bias and discrimination in certain situations, and a force for equal rights and positive social change.
Fairness / unlawful bias in Greece
Under Law 4961/2022, public sector bodies using AI systems in decision-making are also subject to certain obligations to mitigate discrimination and unlawful bias-related risks. Pursuant to Article 5, public sector bodies must conduct an algorithmic impact assessment before deploying AI systems. This assessment must evaluate the AI system's purpose, technical parameters, types of decisions supported, the data categories involved, potential risks to individuals' rights, particularly for vulnerable groups, such as people with disabilities and chronic conditions, and the societal benefits of the system. Additionally, under Article 7(3), contractors responsible for developing or deploying AI systems for public sector bodies must ensure the system complies with legal standards, thus protecting human dignity, privacy, preventing discrimination, promoting gender equality, and ensuring accessibility, among other rights.
Private sector bodies are obliged to address and prevent discrimination in the workplace. As stipulated in Article 9, businesses are required to provide clear and comprehensive information to employees or candidates regarding the criteria for taking AI-driven decisions in relation to recruitment, working conditions, or performance assessments. This obligation ensures that AI systems do not result in discrimination based on gender, race, ethnicity, disability, age, or other protected characteristics. Furthermore, Article 10 mandates medium and large enterprises to maintain a registry of AI systems with information such as operational parameters, technical specifications, and the data processed. This registry must also include the company's data ethics policy, outlining measures implemented to safeguard data integrity and prevent discriminatory outcomes.
Human oversight in Greece
Human oversight is crucial for preventing and mitigating risks associated with the AI system's operation. Providers must also ensure that operators are adequately trained to oversee the AI system, understand its functionalities, and respond appropriately to any issues. Effective human oversight enhances the safety and reliability of high-risk AI systems, ensuring they operate within acceptable parameters and can be controlled in case of unexpected behaviour or malfunctions.
Article 14 of the EU AI Act deals with human oversight, stating that providers must implement measures to ensure effective human oversight of high-risk AI systems. This involves designing the system with mechanisms that allow human operators to monitor, intervene, and deactivate the AI system if necessary. Providers of high-risk AI systems are required to ensure that systems falling under their responsibility are compliant with this requirement (Article 16(a)) and to include the human oversight measures within the "instructions for use" for the high-risk AI system (Article 13(3)(d)).
In addition, deployers of high-risk AI systems are required to comply with the providers 'instructions for use' and to assign human oversight to persons that have the necessary competence, training and authority as well as necessary support (Article 26(1) and (2)).
Finally, Recital 27 of the EU AI Act includes seven principles for trustworthy AI, including human agency and oversight. This means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and functions in a way that can be appropriately controlled and overseen by humans.