Artificial Intelligence in Australia

Law / proposed law

Information not provided.

Last modified 25 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).
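For readers tracking these deadlines programmatically, the staged schedule above can be expressed as a simple date lookup. The sketch below is illustrative only, not legal advice; the labels summarise the bullets above and the function name is our own:

```python
from datetime import date

# Staged applicability dates of the EU AI Act (Regulation (EU) 2024/1689),
# paired with the groups of provisions that start to apply on each date.
APPLICABILITY_SCHEDULE = [
    (date(2025, 2, 2), "General provisions, prohibited AI practices, AI literacy"),
    (date(2025, 8, 2), "General-purpose AI (GPAI) model provisions"),
    (date(2026, 8, 2), "Most other provisions, incl. Annex III high-risk systems"),
    (date(2027, 8, 2), "High-risk AI systems covered by Annex I"),
]

def applicable_provisions(on: date) -> list[str]:
    """Return the groups of EU AI Act provisions that already apply on `on`."""
    return [label for start, label in APPLICABILITY_SCHEDULE if on >= start]

# Example: by 1 September 2025 the first two tranches apply.
print(applicable_provisions(date(2025, 9, 1)))
```
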

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease compliance for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 18 July 2025


Laws specifically addressing AI have not yet been introduced in Brazil. The Brazilian Federal Senate's Bill No. 2338 of 2023 (Brazilian AI Bill), which seeks to legislate for the development, implementation and responsible use of AI systems in Brazil, was proposed on 3 May 2023 and is progressing through the legislative process. The Brazilian AI Bill was approved by the Federal Senate on 10 December 2024 and has been passed to the Chamber of Deputies for analysis, where it awaits discussion, possible amendment and a vote. If approved, it will be submitted to the President of the Republic for sanction. The proposal has been the subject of extensive debate involving various entities in the technology sector and both houses of the National Congress. However, it is not yet possible to predict when the Bill will be enacted into law. The Brazilian AI Bill emphasises the protection of fundamental rights and the implementation of safe and reliable AI systems, focusing on human centrality, respect for human rights and the support of democratic values. It also encourages innovation through regulatory sandboxes, allowing entities to experiment with AI technologies under specific conditions (draft Article 55).

Last modified 31 July 2025


AI compliance in Bulgaria

There is no proposed legislation or any legislation in force to regulate AI in Bulgaria yet. It remains to be seen how AI will be regulated at a national level.

Last modified 23 July 2025

AI-specific laws have not yet passed in Canada. An Artificial Intelligence and Data Act (AIDA) was proposed as part 3 of Bill C-27 in June 2022 but did not come into force. It had a stated purpose of:

  • regulating international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and
  • prohibiting certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.

(Section 4, AIDA).

AIDA died on the order paper in January 2025 when Prime Minister Trudeau prorogued Parliament, and it is not clear when or if it (or any AI-related laws) will be re-introduced at the national level. However, we can learn from the government’s experience with AIDA, which faced significant criticism on two key fronts during the legislative process. First, AIDA was introduced as part of a sweeping set of privacy reforms, which diluted parliamentary focus on AI-specific issues and may have created regulatory overlap with the broader omnibus legislation. Second, the government’s lack of meaningful public consultation prior to the introduction of AIDA resulted in an unusual scenario where, because the initial framework deferred critical details to yet-to-be-written regulations, regulators were essentially required to re-write core components to add needed nuance and detail while Parliament debated the original proposal.

Even though there is no AI-specific legislation, AI-related rules appear in some other provincial legislation. For example:

  • Quebec’s privacy legislation imposes transparency and disclosure obligations on organizations that use personal information to render a decision based exclusively on the automated processing of that personal information.
  • As of January 2026, Ontario’s Employment Standards Act, 2000 will require employers to disclose in publicly advertised job postings whether they use AI to screen, assess or select applicants (see Ontario confirms new regulations addressing pay transparency and job posting requirements).
  • In November 2024, Ontario’s government passed the Strengthening Cyber Security and Building Trust in the Public Sector Act, establishing public sector transparency, accountability and risk management frameworks for the use of AI.

Last modified 11 July 2025

Laws specifically addressing AI have not yet been introduced in Chile. On 7 May 2024, a bill to regulate artificial intelligence systems (Chilean AI Bill) was proposed, seeking to promote the creation, development, innovation and implementation of AI systems that serve human beings, with respect for democratic principles and for the fundamental rights of individuals against the harmful effects that certain uses could have on them.

Last modified 23 July 2025

There is no single comprehensive AI law in the People's Republic of China (PRC). Instead, rules relating to the use and deployment of AI are found in a number of specific laws, regulations and mandatory national standards that regulate different subcategories of AI technologies and services. These include:

  • The Interim Measures for the Management of Generative Artificial Intelligence Services (GenAI Measures), which came into force on 15 August 2023 and are the first piece of generative AI-specific regulation for the PRC, regulating the development and application of generative AI technology. The GenAI Measures apply to the use of generative AI technology to provide services that generate contents (including any texts, images, audios, and videos) to the 'public within the PRC' (which has a very wide interpretation). The GenAI Measures outline service providers' obligations in various areas, including model training, content management, service management and user protection.
  • The Administrative Provisions on Deep Synthesis in Internet-based Information Services (Deep Synthesis Provisions) came into force on 10 January 2023 and apply to the provision of internet-based information services using deep synthesis technologies within the PRC. Deep synthesis technology is broadly defined, and includes any technology that employs deep learning, virtual reality, or other algorithms that are synthetic or generative (such as text/Q&A generation, image generation and voice attribute editing). The Deep Synthesis Provisions impose compliance obligations on various players, including providers of deep synthesis services, providers of technical support for deep synthesis services and users of such services. Particularly, there is a requirement for deep synthesis service providers to verify the real identity of users (by way of mobile phone number, ID card number, unified social credit code or national online identity authentication services) before they can release the services to the users.
  • The Administrative Provisions on Recommendation Algorithms in Internet-based Information Services of the Cyberspace Administration of China came into force on 1 March 2022 (Recommendation Algorithms Provisions) and apply to any entity that uses recommendation algorithm technologies to provide internet-based information services within PRC. This includes the use of algorithm technologies, including generation and synthesis technology, personalised pushing technology and ranking and selection technology, etc., to provide users with information. The Recommendation Algorithms Provisions also emphasise the protection of the user. Service providers are required to inform users about the provision of algorithm services, including the principles behind them, their intended purposes and how they operate.
  • The Measures for the Labeling of Artificial Intelligence Generated and Synthesized Content (AIGC Labelling Measures) took effect on 1 September 2025. The AIGC Labelling Measures apply to online information service providers that offer AI generative and synthetic services. Such providers are required to add different types of explicit and/or implicit labels to AI-generated content based on context, and to restrict the dissemination of non-labelled content via their service platforms, either through user terms or by implementing technical measures.
  • The mandatory national standard the Cybersecurity Technology—Labelling Method for Content Generated by Artificial Intelligence (AIGC Labelling Standard) took effect on 1 September 2025. The AIGC Labelling Standard implements the AIGC Labelling Measures and sets out detailed standards, specifications, and operational procedures for labelling AI-generated content. 
  • The Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services (Draft for Public Comments) were released on 27 December 2025 to solicit public comment until 25 January 2026. The draft applies to AI services that present simulated human personality traits, thinking patterns and communication styles, and that interact with users emotionally through text, images, audio, video or other means. It has a particular focus on addressing psychological risks by requiring providers to warn users against excessive use and to intervene when users show signs of addiction.
  • The Amendment to the Cybersecurity Law took effect on 1 January 2026. A general clause on AI is introduced, stating that the government will improve ethical norms for AI while strengthening AI risk monitoring and assessment and safety oversight — potentially paving the way for further AI regulations.

During the 2025 National People’s Congress, several delegates proposed the drafting of a specific AI law to address emerging risks, encourage innovation and establish a more consistent AI governance system. In particular, the possibility of classifying AI services into different risk categories and regulating them accordingly has been discussed.

Last modified 26 January 2026


AI compliance in Croatia

In January 2025, the 'Draft Plan for the Harmonization of Croatian Legislation with the European Union Acquis for 2025' was published. This document anticipates the adoption of an implementing law for the EU AI Act in 2025.

Last modified 23 July 2025


AI compliance in Cyprus

Cyprus has commenced the requisite actions for the implementation of the EU AI Act, including the preparation of the relevant national legislation. 

Last modified 14 July 2025


AI compliance in the Czech Republic

Within the Czech Republic, no material steps have been taken yet to implement the Product Liability Directive. However, to implement the EU AI Act, the Czech Government approved a non-legislative document entitled 'Proposal for the Implementation of the AI Act in the Czech Republic' at a meeting on 28 May 2025. The document includes the setup of a key tool for AI development, known as a 'regulatory sandbox', the establishment of a precise legal framework and a supervisory mechanism to ensure that high-risk AI systems comply with all legal requirements, particularly in the areas of safety and ethics. Along with this, the responsibility for implementing the EU AI Act is currently entrusted to the Ministry of Industry and Trade (MIT), which will prepare a draft law on AI and establish an AI Competence Centre for eGovernment.

As far as the legal regulation of AI is concerned, Czech law regulates only text and data mining, following the introduction of exceptions to copyright law that allow for the lawful implementation of rapid data analysis and processing. This is a transposition of Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market (DSM Directive), through Sections 39c and 39d of Act No. 121/2000 Sb. Both provisions are based entirely on the criteria, and almost the literal diction, of the DSM Directive. Therefore, for example, the question of remuneration of authors (and other rights holders) has not yet been addressed. And, while authors may be able to opt out of the exemption, this is likely to be possible only in cases where the opt-out can be expressed by machine-readable means, such as through the use of metadata.

There is one higher-court ruling on the issue of AI: the judgment of the Municipal Court in Prague No. 10 C 13/2023-16, which states that "An image created by artificial intelligence does not constitute a work of authorship under Section 2 of the Copyright Act, as it does not meet the conceptual characteristics of a work of authorship, because it is not a unique result of the creative activity of a natural person - the author".

Last modified 9 July 2025


AI compliance in Denmark

In Denmark, the primary rules governing AI are set out in the EU AI Act, which applies directly within the Danish legal framework without the need for national implementing legislation. Apart from that, Denmark has adopted Law no. 467 of 14 May 2025, which provides supplementary provisions to the EU AI Act, including the designation of national competent authorities and the establishment of oversight mechanisms. The law enters into force on 2 August 2025.

Last modified 21 July 2025


AI compliance in Estonia

In Estonia, the Ministry of Justice and Digital Affairs has developed a draft law proposal. The proposal intends to amend the Estonian Personal Data Protection Act so that the processing of personal data for scientific research purposes includes technological development, which also covers the development of artificial intelligence.

Last modified 22 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 11 February 2026

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in Finland

Most AI-related rules in Finland originate from the EU AI Act, owing to its direct applicability. However, some national laws have been proposed to supplement the Act. Implementation in Finland is still in progress, and no national acts have been adopted yet.

There is a government proposal (HE 46/2025) to enact an Act on the supervision of certain artificial intelligence systems and amend several existing laws: the Act on the Market Surveillance of Certain Products, the Act on the Financial Supervisory Authority, the Act on the Energy Agency, and the Act on the Enforcement of Fines. This proposed legislation relates to the first phase of AI Act implementation in Finland and addresses requirements that will apply from 2 August 2025 onwards.

For the second phase of implementation, which covers requirements applicable from 2 August 2026, another government proposal is anticipated. This proposal would introduce national legislation to establish regulatory testbeds for AI and a national register of high-risk AI systems related to critical infrastructure, as well as other provisions necessary to implement the EU AI Act. The proposal has not yet been submitted to Parliament, and no further details are currently available.

Last modified 22 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in France

In France, the main rules applicable to AI are those of the EU AI Act, which is directly applicable in the French legal framework without the need for transposition. Beyond this text, the French Parliament has enacted a few laws that impose specific rules on the use of AI in certain cases.

Firstly, Law no. 2023-451 of 9 June 2023 on the regulation of commercial influence on social networks requires influencers to include a warning notice on images modified by AI.

Secondly, Law no. 2024-449 of 21 May 2024 on the digital space (French Digital Space Law) introduces a new criminal offence for persons who publish deepfakes of other persons in a way that modifies their image and/or voice without their consent.

Last modified 5 February 2026

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

AI compliance in Germany

Germany has not yet adopted the specific implementing rules required under the EU AI Act relating to the appointment of competent national authorities responsible for monitoring AI compliance and applying penalties or other sanctions in case of breach. Germany missed the August 2025 implementation deadline due to elections and the formation of a new government. The Federal Ministry for Digital Transformation and Government Modernisation (Bundesministerium für Digitales und Staatsmodernisierung) published a draft implementing act in September 2025, kick-starting a consultation phase with the Federal States (Länder) and industry associations. No clear timeline has been set at this stage, but the government has emphasized that it is working to get the next legislative steps underway quickly.

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 3 February 2026

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

AI compliance in Greece

In Greece, the primary source of regulation of AI is Law 4961/2022 entitled 'Emerging information and communication technologies, strengthening digital governance and other provisions' (Law 4961/2022). Law 4961/2022 was enacted before the EU AI Act came into force and remains applicable. Articles 1-14 of Law 4961/2022 introduce a comprehensive framework for the utilization of AI by public and private entities, aiming at transparency, accountability and the protection of citizens' rights. Law 4961/2022 includes provisions for the secure use of AI systems, the protection of personal data, and transparency in decision-making processes. Public entities are required to conduct algorithmic impact assessments and maintain AI system registries, while in the private sector, rules are established to ensure the proper use of AI in employment relations and data management. Additionally, specialized bodies are established under Law 4961/2022, such as the Coordinating Committee for AI, which oversees the implementation of the National Strategy, and the AI Observatory, which monitors AI-related activities in Greece, identifies best practices and assesses their impact. These provisions aim to safeguard fundamental rights, promote innovation, and ensure compliance with ethical principles, equality and privacy.

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Product Liability Directive in Greece

Greece has not yet transposed the Product Liability Directive into national law.

Last modified 19 July 2025

Laws specifically addressing AI have not yet been introduced in Hong Kong.  

Last modified 25 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 24 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in Ireland

In Ireland, the Product Liability Directive has not yet been implemented into law.

Last modified 23 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

AI compliance in Italy

On 10 October 2025, the Italian Artificial Intelligence Law (Law No. 132/2025) entered into force, completing the national AI regulatory framework and complementing the EU AI Act without introducing divergent definitions or obligations. The law consists of 28 articles and is grounded in the core principles of the EU AI Act, including the protection of fundamental rights, proportionality, safety, and transparency. Articles 3 and 4 set out the general principles applicable to all AI systems, notably human oversight, the prevention of discriminatory or harmful effects, and compliance with constitutional and fundamental rights, including the protection of personal data.

Unlike the EU AI Act, however, the Italian AI Law adopts a sector-specific regulatory approach rather than a general risk-based framework. In the healthcare sector (Art. 7), AI systems can only support medical decision-making, may not determine access to services, and require patient notification, while full responsibility remains with medical professionals. Article 8 governs scientific research, permitting the processing of personal and sensitive data without consent for public-interest AI development, subject to prior notification to the Data Protection Authority, and allowing anonymisation or data synthesis for research purposes. In line with the current obligations under Italian employment law, Article 11 requires employers to inform workers of the use of AI systems, while Article 12 establishes a National AI Observatory tasked with monitoring the impact of AI on employment and informing regulatory strategies. Intellectual professions (Art. 13) are prohibited from fully delegating professional activities to AI systems and must ensure clear disclosure to clients. For public administration and justice (Arts. 14–15), AI may be used solely as a decision-support tool and cannot replace human evaluation, alongside measures promoting training for responsible use. Finally, the law addresses intellectual property and enforcement issues. Article 25 protects works created by humans with AI assistance (provided that the work is a product of human intellectual labor) and permits text and data mining for AI training purposes. Procedural and criminal provisions assign exclusive jurisdiction for AI-related disputes to ordinary courts, introduce AI-related aggravating circumstances under the Italian Criminal Code, and criminalize the unlawful dissemination of deepfakes.

Notwithstanding the foregoing, Article 24 grants the government broad delegated powers to adopt legislative decrees on sector-specific AI regulation, public administration, system deployment, and sanctions. Given the central role assigned to delegated legislation in specifying the concrete powers of the designated authorities, including the sanctioning powers provided for under the AI Act, several provisions remain contingent upon implementing measures. Consequently, economic operators must continue to rely primarily on the EU AI Act while closely monitoring the evolving Italian regulatory framework.

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 3 February 2026

On 16 February 2024, the ruling party's project team on the 'Evolution and Implementation of AI' released a rough draft of the 'Basic Law for the Promotion of Responsible AI'. This rough draft proposes legal governance for frontier AI models, which are recognised as high-performance, general-purpose AI models capable of performing a wide variety of tasks and as capable as, or more capable than, today's most advanced models. The draft law seeks to minimise risks and maximise benefits through appropriate AI governance.

Subsequently, the Act on Promotion of Research and Development, and Utilization of AI-related Technology (the “AI Act”) was enacted on 28 May 2025, and promulgated on 4 June 2025, with many of its provisions (except for certain sections) coming into force on the same day. While the AI Act primarily serves as a basic framework law targeting the national and local governments, it also sets forth responsibilities for AI-utilizing business operators to proactively strive to improve the efficiency and sophistication of their business operations and to create new industries through the use of AI-related technologies, as well as to cooperate with measures implemented by national and local governments.

Last modified 31 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

AI compliance in Latvia

In Latvia, the Ministry of Smart Administration and Regional Development has prepared an Information Report on how the requirements of the EU AI Act may be implemented in Latvia. Most of the amendments to existing laws and regulations suggested in the Information Report are intended to give the competent authorities the power to monitor compliance with the EU AI Act. For example, amendments to Cabinet of Ministers Regulations No. 309, "By-laws of the Health Inspectorate", are planned to define, in accordance with the Inspectorate's competences, its functions as a market surveillance authority.

The Law on the Artificial Intelligence Centre entered into force on 20 March 2025. This law establishes the Artificial Intelligence Centre in Latvia with the aim of fostering cooperation between the public and private sectors and higher education institutions in the field of artificial intelligence. The Centre will coordinate innovation projects, provide consultations, promote public competence, and ensure the ethical and secure use of AI. Its operations will be financed through the state budget, donations, and other sources, and it will also serve as a platform for a special regulatory environment for testing AI systems. The law also sets out strict conditions for the processing of personal data within this environment.

The Ministry of Smart Administration and Regional Development has also prepared a draft Cabinet Regulation entitled "Procedures by which the Artificial Intelligence Centre, in cooperation with competent institutions, organizes the special regulatory environment, and procedures for the processing of personal data within the framework of the special regulatory environment". The draft Regulation sets out how the Artificial Intelligence Centre, in cooperation with competent institutions, organizes a special regulatory environment for the development, testing, and deployment of AI systems, and how personal data may be processed within that environment. These rules are intended to ensure the safe, innovative, and socially responsible implementation of AI solutions in Latvia, while fulfilling the mandate set out in the Law on the Artificial Intelligence Centre, which authorizes the Cabinet to regulate the functioning of the regulatory environment, the involvement of competent institutions, and the conditions for data processing.

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 14 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance less burdensome for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 24 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in Luxembourg

On 23 December 2024, the Luxembourg government introduced Bill No. 8476 (Luxembourg Bill), which is still in a very early stage but represents a significant step toward aligning national legislation with the EU AI Act.

Last modified 23 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

AI compliance in Malta

At the time of writing, Malta has no legal framework which specifically regulates AI beyond the EU AI Act, nor legislation giving further effect to the EU AI Act. The Malta Digital Innovation Authority (MDIA) is the entity tasked with implementing the EU AI Act in Malta. Although the Malta Digital Innovation Authority Act (Chapter 591 of the Laws of Malta) does not give effect to the EU AI Act, it is the legislation establishing the MDIA.

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in Mauritius yet.

Blueprint

The Government of Mauritius, through the Ministry of Information Technology, Communication and Innovation (MITCI), has published A Blueprint for Mauritius – A Bridge to the Future – Digital Transformation 2025-2029, dated May 2025 (Blueprint), as a strategic framework to guide the country’s digital transformation, including the development of an AI governance structure.

The Blueprint announces the formulation of the Mauritius National Data Strategy (2025-2030), which, inter alia, aims to prepare Mauritius for the safe, trustworthy and ethical adoption of Artificial Intelligence (AI).

The National Data Strategy will include:

  • Creation of a Data Management Office
  • Adoption of a Data Governance Framework
  • Adoption of a Data Sharing Policy and Protocol
  • Adoption of a Data Retention Policy
  • Creation of a National Government Data Warehouse, which will in turn create:
    • Data Assurance
    • Data Sovereignty
    • Open Data and access to public information
    • Data Architecture and Harmonisation v. Data Usability
    • Data Literacy and Skills

This strategic direction is structured around four pillars and five enablers, which are as follows:

The Pillars:

  1. The Foundation: State-of-the-Art Information Structure;
  2. Human Capital: Digital Skills for All;
  3. Economy: Innovation and Private Sector Growth; and
  4. Planet: A Sustainable and Resilient Digital Future.

The Enablers:

  1. Digital public infrastructure;
  2. Legal and regulatory reform;
  3. Institutional governance;
  4. Cyber resilience and trust; and
  5. Data governance and AI.

The four strategic pillars represent Mauritius’ core objectives for digital transformation across government, economy, society and infrastructure, while the five enablers provide the essential conditions that support the successful implementation of these pillars.

National AI Strategy

As per the Blueprint, and in line with the Government Programme 2025-2029, a National Artificial Intelligence (AI) Strategy will be formulated. Its core objective is to leverage the potential of AI to significantly propel economic growth and enhance efficiency across various sectors of the economy and society.

Last modified 26 June 2025

Laws specifically addressing AI have not been introduced in Mexico yet. A draft bill to regulate AI (AI Bill) was presented to the Senate on 2 April 2024 by Ricardo Monreal Ávila, a member of the Parliamentary Group of the MORENA Party. The AI Bill aims to create the first legal framework for artificial intelligence systems in Mexico, seeking to allow the country to take advantage of the benefits of using AI in various fields of application while protecting the rights of third parties, users and the general public.

Article 1 of the AI Bill states that its objectives are to:

  • Regulate the development, marketing and use of artificial intelligence systems;
  • Ensure respect for the human rights of consumers and users and avoid any form of discrimination when using artificial intelligence systems;
  • Protect intellectual property rights; and
  • Facilitate the national development of artificial intelligence systems.

The AI Bill further aims to regulate AI using a risk-based approach, classifying AI systems as (i) unacceptable risk; (ii) high risk; or (iii) low risk. The AI Bill also provides that an authorisation must be obtained from the Federal Telecommunications Institute to market such AI systems in Mexico.

To date, the project has not been discussed by the Science and Technology Commission and the Legislative Studies Commission of the Senate.

Last modified 29 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in the Netherlands

In the Netherlands, the Product Liability Directive has not yet been implemented into Dutch law.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in New Zealand, and, based on the current Government's recent policy decisions in respect of AI, New Zealand is unlikely to see any economy-wide specific AI legislative reforms in the foreseeable future. Instead, AI is regulated through existing technology-neutral laws, such as the Privacy Act 2020 (Privacy Act), consumer protection laws, and New Zealand's intellectual property and human rights legislation.

Last modified 14 July 2025

Laws specifically addressing AI have not been introduced in Nigeria yet.

In February 2025, the Artificial Intelligence Management and Finance Institute (AIMFIN) (Establishment) Bill, 2025 (HB. 2063) was brought before the House of Representatives for first reading. In May 2025, another bill, the National Institute of Artificial Intelligence and Robotic Studies (Establishment) Bill, 2025 (HB. 2243), was brought for first reading on the floor of the House of Representatives.

A number of Nigerian laws, by virtue of their provisions and scope, indirectly apply to AI. These include laws on data protection, fundamental rights, intellectual property, and employment and labour, among others.

Last modified 17 June 2025

The Ministry of Digitalisation and Public Governance has issued a proposal for a law implementing Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) into Norwegian law. The proposal was open for public hearing until 30 September 2025.

The proposed law is named the "Artificial Intelligence Act" ("Lov om kunstig intelligens" in Norwegian) and will implement the EU AI Act in full.

Last modified 9 October 2025

In November 2024, the Presidency of the Council of Ministers, through the Secretariat of Government and Digital Transformation (National Authority), published the draft Regulation of the AI Law (Bill of Regulation). The Bill of Regulation develops various regulatory aspects of the responsible use of AI, such as prohibited activities and high-risk matters. It has been made available for public consultation to receive recommendations on its content and is therefore subject to further changes.

On 5 July 2023, Law No. 31814, Law Promoting the Use of Artificial Intelligence (AI Law) was published, which aims to promote and guarantee the ethical, sustainable, transparent and responsible use of AI within the framework of the national digital transformation process.

Last modified 20 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in Poland

The Draft AI Systems Act (Draft Act), which mainly establishes the procedural rules for the enforcement of the EU AI Act, was proposed by the Polish Ministry of Digital Affairs on 16 October 2024. The Draft Act sets out:

  • the organisation and conduct of oversight of the market for AI systems and general-purpose AI models;
  • the rules for imposing administrative fines;
  • infringement proceedings;
  • the conditions and procedure for the authorisation of conformity assessment bodies;
  • the manner of reporting serious incidents related to the use of AI systems; and
  • the types of public activities in support of the development of AI systems.

Last modified 23 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

AI compliance in Portugal

In Portugal, no legislation to give further effect to the EU AI Act has been published yet.

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Product Liability Directive in Portugal

No national law transposing the Product Liability Directive in Portugal has been published yet.

Last modified 22 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Singapore.

Last modified 28 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in the Slovak Republic

A Slovak draft law has been prepared that aims to regulate the institutional conditions, the competences of authorities, and the rights and obligations of subjects in connection with the use of AI systems. The draft law is intended to implement certain provisions of Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules in the field of AI. The consultancy stage of the legislative procedure for the draft law was expected to commence in March 2025.

The Ministry of Investment, Regional Development and Informatization of the Slovak Republic submitted a draft law amending Act No. 95/2019 Coll. on Information Technology in Public Administration, as amended, to the consultancy stage on 12 May 2025. This draft law provides that AI systems also count as information technology and establishes basic principles for the use of AI systems in public administration. At the same time, the draft law expands the scope of administrative offenses and increases the upper limits of fines in the area of project management and IT operations to reflect the use of AI. The aim of this draft law is to respond to practical application problems and ensure the responsible deployment of AI in public administration. The evaluation of the consultancy stage commenced on 31 May 2025.

Last modified 29 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in Slovenia

Currently, no legislation specifically pertaining to AI exists in Slovenia beyond the EU legislation referred to above.

Last modified 14 July 2025

On 26 December 2024, the 'Framework Act on the Development of Artificial Intelligence and the Establishment of Foundation for Reliability' (AI Act) passed the plenary session of the National Assembly. The AI Act is expected to take effect around January 2026.

The AI Act is designed to advance AI development and promote self-regulation by establishing a framework for the following initiatives: (i) formulating a master plan for AI by the Minister of Science and ICT (MSIT), creating the National AI Committee (NAIC) under the President’s office, establishing the AI Policy Center, and establishing the legal foundation for the AI Safety Institute’s operations; (ii) supporting industries related to the development and promotion of AI technology, including establishing standards for AI technology; and (iii) enacting and announcing the “AI Ethics Principles” to support self-verification and certification by AI-related organizations, thus ensuring the safety and reliability of AI, and establishing the legal basis for autonomous ethics committees in the private sector.

Furthermore, the AI Act stipulates various obligations for AI business operators, such as operators involved with high-impact AI, businesses offering generative AI products or services, and operators whose AI training compute usage surpasses a prescribed threshold. It also requires operators lacking a domicile or business location within Korea to appoint a domestic agent to comply with the regulatory framework, and empowers the Minister of MSIT to conduct fact-finding inspections and to issue suspension or corrective orders where necessary.

The AI Act is the first statute to govern legal requirements specific to AI technologies and products in Korea. 

Please note that AI in South Korea will still be regulated by existing rules governing personal information, copyright, and telecommunications. Therefore, the existing obligations and requirements under these laws and regulations will continue to apply to AI-related business and services.

Last modified 29 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to ease the compliance burden on businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of the rules for high-risk AI systems and of certain transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

AI compliance in Spain

On 11 March 2025, the Spanish Council of Ministers approved the Draft Bill on the Proper Use and Governance of Artificial Intelligence (Spanish Draft AI Bill). The bill is currently being processed through an expedited procedure and will undergo the necessary legislative steps before returning to the Council of Ministers and the Spanish Parliament for final approval.

Overall, the proposed legislation aims to designate the competent supervisory authorities and provide them with the necessary enforcement powers to ensure compliance with the EU AI Act. It also seeks to regulate controlled testing environments for AI systems and to define the conditions under which the use of real-time remote biometric identification systems may be authorized in publicly accessible spaces, specifically when necessary to safeguard fundamental rights.

Additionally, Spain has enacted two key pieces of legislation that, while not directly developing the EU AI Act or introducing new standards for AI deployment, are of significant relevance: (i) Royal Decree 729/2023 of 22 August 2023, which establishes the Statute of the Spanish Agency for the Supervision of Artificial Intelligence (Supervisory Agency Royal Decree), and (ii) Royal Decree 817/2023 of 8 November 2023, which creates a controlled testing environment to evaluate AI systems' compliance with the proposed European Parliament and Council Regulation on harmonized AI standards (AI Testing Royal Decree). The AI Testing Royal Decree regulates the operation of controlled testing environments designed to assess the compliance of AI systems that may pose risks to security, health and fundamental rights.

Last modified 21 July 2025

Regulation (EU) 2024/1689 of the European Parliament and of the Council on harmonised rules on artificial intelligence (EU AI Act) was published in the Official Journal of the European Union on 12 July 2024. It entered into force on 1 August 2024, although many of its provisions come into force on specific dates:

  • 2 February 2025: General provisions and provisions relating to prohibited AI practices and AI literacy (Chapter 1 and Chapter 2).
  • 2 August 2025: Provisions relating to general-purpose AI (GPAI) models (e.g. generative AI).
  • 2 August 2026: Most other provisions (including requirements for Annex III high-risk AI systems).
  • 2 August 2027: Provisions relating to high-risk AI systems that are safety components of products or products themselves (i.e. AI systems covered by Annex I).

A new EU Product Liability Directive, Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products (Product Liability Directive), was published in the Official Journal of the European Union on 18 November 2024 and entered into force on 8 December 2024. Member States have until 9 December 2026 to implement the Product Liability Directive into national law. The Product Liability Directive modernises the EU-level strict product liability regime, preserving the core principles of the previous law while adapting to new technologies by extending the scope to include software and AI. This regime is still limited to certain types of damages and applies only to consumers and other natural persons.

As part of its Digital Omnibus package, the European Commission has proposed amendments to the EU AI Act. While the Act’s core structure remains unchanged, the revisions aim to make compliance more manageable for businesses, including by shifting the burden of the AI literacy requirement, expanding reliefs for SMEs and "small mid-caps", and providing a new exemption from EU database registration. The proposals also suggest delaying the applicability of rules for high-risk AI systems and some transparency requirements. Further updates are expected later in 2026 as the proposal makes its way through the European legislative process.

Last modified 7 July 2025

Laws specifically addressing AI have not yet been introduced in Thailand. Previously, a Draft Royal Decree on Artificial Intelligence System Service Business was published for public hearing in October 2022. A Draft Act on the Promotion and Support of AI Innovations in Thailand, together with its subordinate regulations – a Draft Notification Regarding AI Innovation Testing Center (AI Sandbox) and a Draft Notification Regarding Guideline for Setting Criteria and Risk Assessment Methods from the Use of Artificial Intelligence System – was also published, with the latest public hearing held in August 2023. At present, the Electronic Transactions Development Agency (ETDA) has been assigned to revisit those draft laws and subsequently released a Draft Principles of the Law on Artificial Intelligence for public hearing in June 2025.

Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Turkey. However, in June 2024, a draft AI bill was presented to the Turkish Parliament for committee review, aiming to construct a regulatory framework for the use of AI. If the draft bill passes the review, it will be proposed to the Grand National Assembly of Turkey for adoption as law. The draft bill consists of eight articles that address risk management and assessment, compliance and audit, and violations and sanctions.

Last modified 30 July 2025

There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI. Instead, AI is governed through a patchwork of non-AI-specific laws and non-binding guidelines.

However, the Dubai International Financial Centre (DIFC) – a financial free zone in the UAE that has its own civil and commercial laws, based on English common law – has introduced specific provisions in the DIFC’s Data Protection Regulations to address the processing of personal data in connection with AI systems (including as part of the training process).

The legal framework in the UAE is complex and comprises multiple jurisdictions. There are federal laws, emirate level laws and laws in specific free zones, as well as binding sectoral rules and regulations. This guide does not cover sectoral rules and regulations.

Last modified 4 August 2025

A specific law addressing AI has not been implemented in the UK yet.

Two Private Members' Bills relating to the regulation of the use of AI systems are currently progressing through the legislative process. The first, the Public Authority Algorithmic and Automated Decision-Making Systems Bill, relates to decision-making processes in the public sector and was introduced to the House of Lords by Lord Clement-Jones on 9 September 2024. The second is Lord Holmes' Artificial Intelligence (Regulation) Bill, introduced on 4 March 2025 (although a version of the Bill had existed in the prior Parliamentary session before the 2024 General Election), which would establish a central AI Authority and regulatory sandboxes, and require an AI officer for organisations deploying AI.

In the King's Speech of 17 July 2024, the UK Government announced that it will seek to:

"establish the most appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models" (para. 7, page 7).

Whilst there had been some speculation in 2025 that the UK might move more strongly towards a broader cross-sector AI regulation focussed on managing the risks of AI, this has not been the case. In the latter half of the year, the UK government reaffirmed its sector-based approach and in particular the message that it sees AI as a critical component of UK economic growth. 

In October 2025, the Government announced its blueprint for AI regulation, which identified some of the tools it sees as necessary to deliver this growth and drive modernisation of key UK sectors. These proposals include the use of regulatory sandboxes in key sectors (such as healthcare, professional services, transport, and the use of robotics in advanced manufacturing) to foster responsible development of AI. While the proposals are cross-sector in nature, the focus appears to be more on reducing barriers to growth. The government launched a call for evidence on the AI Growth Lab, which closed on 7 January 2026, so we can expect to see more concrete proposals later in the year.

There are many UK laws beyond the scope of this resource (relating to data protection, intellectual property, human rights, equalities, employment laws, etc.) that impact various aspects of AI development, deployment and use.

On data protection for example, the Data (Use and Access) Act 2025 (DUAA) received Royal Assent on 19 June 2025. Although not an AI-specific statute, the DUAA is expected to play a significant role in the UK's AI ecosystem by improving access to and use of data across regulated sectors, in turn, supporting AI development and innovation.

The most relevant amendments impacting the use of AI in the UK are those related to automated decision making, which took effect on 5 February 2026. The previous regime generally prohibited solely automated decisions (with no meaningful human involvement), including profiling, that had a significant legal effect, unless there was explicit consent or it was necessary for the entry into or performance of a contract. The DUAA moves the dial to a more permissive framework, aimed at reducing compliance burdens while in parallel mandating new safeguards (outlined in more detail in our guide to Data Protection Laws of the World).

Automated decision making is now permitted with those new safeguards implemented, unless special category data (e.g. health data) is involved, and organisations can now rely on legitimate interests as a lawful basis (i.e. instead of consent, which is hard to obtain, or contractual necessity, which was often difficult to establish for efficiency gains).

Notably, the DUAA clarifies that human review must be "substantive and informed," i.e. a human must be able to challenge or override an AI-driven decision or profile generation, but they don't necessarily need to be involved at all stages. This is important, as the Information Commissioner's Office has indicated that enforcement action may be prioritised where automated decision-making systems fail to offer meaningful human intervention, or where the lack of these safeguards could lead to significant discrimination or unfair treatment of individuals.

Last modified 23 February 2026

AI laws and proposed laws

In the U.S., artificial intelligence (AI) is regulated at both the federal and state levels. While the U.S. lacks a unified federal AI law, the states have been active in modifying existing laws to account for AI and, in some cases, passing targeted AI-specific legislation.

This section outlines the major enacted laws at both federal and state levels, highlighting how states have taken the lead in adapting existing legal frameworks and introducing AI-specific laws in the absence of a comprehensive federal approach.

Federal AI legislation landscape

The federal regulatory landscape for AI remains limited in scope. Although a significant volume of AI-related legislation has been introduced in Congress, only one standalone statute intended to regulate the posting and distribution of AI-generated content has been enacted to date: 

  • Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act): Although the statute does not regulate AI systems directly, it requires online platforms to delete flagged non-consensual intimate imagery, including AI-generated deepfakes, within 48 hours. The law creates criminal penalties for distributing content and empowers the Federal Trade Commission (FTC) to enforce compliance. 

Accordingly, federal policy is more defined by proposals than binding obligations, and most operational guidance continues to come from executive actions and agency-level enforcement.

Potential federal framework

On 11 December 2025, President Trump signed an Executive Order (EO) aimed at creating a national policy framework for AI to ensure American dominance in the field. The EO seeks to replace the existing patchwork of state laws—which the Trump Administration views as burdensome and detrimental to innovation—with a unified standard.

To achieve this, the EO outlines a two-pronged strategy: challenging existing state AI laws in court and establishing a new federal regulatory framework that would preempt them. A new task force, led by the Attorney General (AG), will be created to raise legal challenges to state laws that are viewed as unconstitutional or otherwise conflicting with federal regulations. Additionally, the EO directs the Secretary of Commerce to evaluate state AI legislation. The proposed federal framework will focus on key areas such as child safety, censorship prevention, and copyright protection, while preempting conflicting state-level regulations.

State-level AI legislation landscape

The lack of comprehensive federal AI legislation has led to a proliferation of state-level laws and regulations, with many more bills working their way through state legislatures in 2026. Some of these laws establish frameworks and requirements that impact both public- and private-sector use of AI technologies.

In 2025, all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI-related legislation. According to the National Conference of State Legislatures, 38 states adopted or enacted approximately 100 AI‑related measures. What materially changes in 2026 is enforceability: several major state AI laws take effect, significantly increasing the need for cross‑state governance frameworks, comprehensive inventories, and demonstrable evidence of controls.

These laws and regulations impose transparency and disclosure obligations, prohibit deceptive uses of generative AI, and seek to mitigate algorithmic discrimination in certain domains. The following list includes the principal state laws shaping AI regulation in the U.S. and a few examples of some of the narrower AI-focused laws:

California

  • California has enacted significant AI-related legislation, establishing new requirements for transparency, safety, and accountability across various AI applications. California’s SB53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), was signed into law on 29 September 2025, and took effect on 1 January 2026. It requires large frontier AI developers to publish transparency reports and annually update a public frontier AI safety framework describing how they assess and mitigate “catastrophic risk,” secure unreleased model weights, and respond to critical safety incidents. Further, California’s AB2013, Generative Artificial Intelligence: Training Data Transparency Act (TDTA) was signed into law 28 September 2024, and took effect 1 January 2026. The TDTA requires AI developers to publicly post a high-level summary of the datasets used to train generative AI systems or services made available to the public since January 2022, enumerating specific categories of required disclosures.
  • During 2025, the California state legislature continued to pass many AI-related bills, most of which took effect on 1 January 2026. Signed into law on 13 October 2025, the California AI Transparency Act (AB853) mandates that developers of generative AI must embed “provenance data” into digital content to verify its authenticity and origin. (This law has staggered effective dates through 1 January 2028.) AB489, signed into law on 11 October 2025, prohibits the use of AI to falsely imply that advice or services are being provided by a licensed healthcare professional. Further, enacted on 13 October 2025, SB243 imposes specific safety protocols on “companion bots,” requiring them to prevent harmful conversations and regularly remind users that they are interacting with an AI. Other new laws, also enacted on 13 October 2025, create liability for services that enable deepfake pornography (AB621) and bar defendants from claiming an AI “autonomously caused the harm” in civil actions (AB316).

Colorado 

  • Colorado enacted the Consumer Protections for Interactions with AI Act (Colorado AI Act) in May 2024, and it is scheduled to take effect on 30 June 2026. It is recognized as the first comprehensive statute in the U.S. specifically targeting “high-risk” AI systems. The law requires developers and deployers of qualifying AI applications to exercise reasonable care in preventing algorithmic discrimination, mandates clear documentation of AI activities, and holds entities accountable for the outputs of their AI systems. By categorizing certain AI deployments as “high-risk,” the Colorado AI Act imposes heightened responsibilities in critical areas such as employment, healthcare, lending, housing, and government services.

Illinois

  • In August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act, which imposes significant restrictions on the use of AI in mental healthcare. The law, effective immediately, broadly prohibits any entity without a professional license from offering therapy services, a rule that explicitly includes services delivered via AI, and bars licensed healthcare professionals from delegating therapeutic decisions to AI systems.

Kentucky

  • Signed and effective on 24 March 2025, Kentucky’s AI Governance Act (SB4) establishes a comprehensive framework for AI use within state government. It calls for adoption of uniform AI policy standards and creates a governance committee to oversee ethical, transparent, and responsible AI use across state agencies. It includes provisions for human oversight, public disclosure, and protection of personal and business information.

Nevada

  • On 5 June 2025, Nevada enacted AB406, which makes it a deceptive trade practice to misrepresent the capabilities of AI in mental healthcare. The law prohibits offering AI systems that are programmed to perform services that would constitute the practice of professional mental healthcare if done by a person. Furthermore, providers are barred from marketing or otherwise representing that their AI systems are capable of delivering such care. AB406 took effect on 1 July 2025.

New York

  • New York enacted the Responsible AI Safety and Education (RAISE) Act on 19 December 2025, which establishes a comprehensive regulatory framework for developers of large-scale “frontier” AI models. Effective on 1 January 2027, this law requires large developers to implement and publicly disclose a detailed “safety and security protocol” designed to mitigate the risk of “critical harm,” defined as events causing mass injury or over USD 1 billion in damages. It also requires developers to report any “safety incident” that demonstrates an increased risk of such harm to the state attorney general within 72 hours.
  • Further, New York enacted a first-of-its-kind law requiring advertisers to disclose the use of AI-generated individuals in commercial advertising on 11 December 2025. The law mandates a conspicuous disclosure when a “synthetic performer”—a digitally created asset made with generative AI to resemble a human who is not an identifiable person—is featured in a visual or audiovisual advertisement. This rule is narrowly targeted at AI-generated actors and does not apply to audio-only ads, deepfakes of real performers, or AI enhancements of real performers. This law takes effect on 9 June 2026.
  • Enacted on 11 December 2021, New York City’s Local Law 144 regulates the use of “automated employment decision tools” (AEDTs) in hiring and promotion decisions. Effective since 5 July 2023, the law imposes three core obligations on employers: they must conduct an annual independent bias audit to assess whether the tool has a disparate impact on candidates based on race, ethnicity, or sex; they must post a summary of the audit results publicly on their websites; and they must provide notice to candidates that an AEDT is being used and of their right to request an alternative screening process.

Texas

  • Texas enacted the Texas Responsible AI Governance Act (TRAIGA) on 22 June 2025, establishing foundational duties for state agencies, developers, and deployers of AI systems operating within Texas. The law went into effect on 1 January 2026, and prohibits state agencies from certain uses of social scoring and biometric data. Developers and deployers face prohibitions on the intentional misuse of AI for certain types of behavioral manipulation, unlawful discrimination, deepfakes, and infringement of constitutional rights. TRAIGA provides protections for organizations that follow recognized frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, as well as a 60-day cure period for violations, and the creation of a regulatory sandbox.

Utah 

  • Utah took a relatively comprehensive approach to AI oversight by adopting the AI Policy Act (SB149), effective 1 May 2024. This legislation requires professionals in regulated occupations – such as law, medicine, and financial services – to disclose their use of generative AI tools during high-risk interactions, such as when providing sensitive advice or handling personal data. Additionally, consumers must be informed if they explicitly inquire whether they are interacting with AI.

A handful of other U.S. states have considered and rejected broad AI laws. In addition, numerous other states and localities have enacted specific statutes or municipal ordinances that regulate discrete aspects of AI. The following list includes several such examples:  

Maine

  • Maine enacted “An Act to Ensure Transparency in Consumer Transactions Involving Artificial Intelligence” (the Maine AI Chatbot Disclosure Act) on 12 June 2025. Effective 23 September 2025, the law establishes targeted disclosure requirements for AI‑driven interactions. It generally prohibits businesses (and other persons) from using an AI chatbot – or similar text‑ or voice‑based computer technology – in trade or commerce in a manner that may mislead or deceive a reasonable consumer into believing they are interacting with a human, unless the business provides a clear and conspicuous disclosure that the interaction involves AI.

Maryland

  • Maryland enacted HB820 on 20 May 2025, regulating how health insurance plans and related entities may use AI in coverage and treatment determinations made in utilization management and review. Effective 1 October 2025, the law requires covered entities to ensure that the AI tool’s determinations are grounded in the enrollee’s individual clinical information, do not replace the role of a health care provider, and are applied fairly and equitably without resulting in unfair discrimination.

Pennsylvania

  • On 7 July 2025, Pennsylvania enacted Act 35 (formerly SB649) to address the malicious use of AI-generated deepfakes. Effective 5 September 2025, the law establishes criminal penalties for generating (or creating and distributing) a forged digital likeness with intent to defraud or injure, or with knowledge and intent to facilitate fraud or injury by another – including where the actor knows or reasonably should know the audio or visual at issue is forged.

Illinois

  • Illinois enacted HB3773 on 9 August 2024, amending the Illinois Human Rights Act to regulate the use of AI in employment decisions, prohibiting discriminatory practices. Effective 1 January 2026, the law requires employers to provide notice to applicants and workers if they use AI for hiring, discipline, discharge, or other workplace-related purposes.

This continued surge in state legislative activity reflects a wide range of approaches and priorities – from establishing task forces to study AI’s impact to imposing specific obligations on companies deploying AI systems. This dynamic landscape may underscore the growing value of state-level action in the absence of federal guidance, and organizations are encouraged to closely monitor both enacted laws and pending legislation in the jurisdictions in which they operate.

Last modified 10 March 2026
