Artificial Intelligence in Australia

Controls on generative AI

Information not provided.

Last modified 25 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk either because it has high-impact capabilities (currently presumed where the cumulative amount of computation used for training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be used) or on the basis of a decision of the Commission.
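As a rough illustration of the scale of the 10^25 threshold (this heuristic is not part of the Act; the "6 × parameters × tokens" estimate and the model sizes below are assumptions for illustration only):

```python
# Common back-of-the-envelope heuristic (an assumption, not an EU AI Act rule):
# training compute is roughly 6 * parameters * training tokens.
def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Estimate cumulative training compute in floating-point operations."""
    return 6 * parameters * tokens

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption threshold

# Hypothetical model: 100 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(100e9, 15e12)
print(f"{flops:.2e}")                      # 9.00e+24
print(flops > EU_SYSTEMIC_RISK_THRESHOLD)  # False: just under the presumption
```

Under this heuristic, a hypothetical model of that size falls just below the presumption, while doubling either the parameter count or the training data would cross it; in practice the Commission may also rely on other indicators.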

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applying to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applying to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 18 July 2025

Last modified 8 July 2025

Laws specifically addressing AI have not yet been introduced in Brazil. The Brazilian AI Bill defines generative artificial intelligence (generative AI) as an:

"AI model specifically designed to generate or significantly modify, with varying degrees of autonomy, text, images, audio, video or software code."

Last modified 31 July 2025

Last modified 23 July 2025

National laws specifically addressing AI have not yet been passed in Canada. Canada's export control regime is primarily based on the multilateral Wassenaar Arrangement, which does not itself explicitly list AI in its current control lists (although high-performance computing systems, encryption tools, network intrusion software, and certain imaging or machine vision sensors that may form part of AI technologies may meet the criteria for control).

Due to stalls in global consensus on updating the Wassenaar Arrangement, on July 20, 2024, Canada unilaterally added certain quantum computing and advanced semiconductor technologies to its Export Control List, effectively prohibiting their export to any destination other than the United States without an export permit. The controlled goods added are set out in Export Control List Order SOR/2024-112. The attached Regulatory Impact Analysis Statement specifically mentions the addition of gate-all-around field-effect transistors (GAAFETs), based on their application in creating microchips that run faster and consume less power, thereby enabling more powerful and efficient artificial intelligence applications, including for military systems.

Under the national security powers of the Investment Canada Act, it is advisable to work with counsel to develop a strategy for managing the requisite notification to, and/or review by, the government for all new businesses, acquisitions of businesses, or other foreign direct or indirect investments involving significant foreign control, to the extent they involve artificial intelligence resources. In addition, Canada has recently launched a “Sovereign AI Compute Strategy” to foster investment in Canadian-sourced artificial intelligence compute power.

Last modified 11 July 2025

The Chilean AI Bill does not address generative AI in a special way, so there are no specific controls in this regard. 

Last modified 23 July 2025

The main regulatory requirements are set out in the 'Law/proposed law' section.

In particular, if an AI service provider intends to provide AI services to external users located in China, it may need to pass certain security assessments conducted by the Chinese authorities and complete the required filings with the Chinese authorities.

Under the AI Security Standard, as well as the standards mentioned in the “Regulatory Guidance / Voluntary Code” section, AI service providers are required to ensure the security of their services, focusing mainly on the following aspects:

  • Training data security: service providers are responsible for ensuring the security of training data through effective data source due diligence, content moderation, privacy protection and annotation process management.
  • Model security: service providers should take effective measures to ensure the security of the AI model throughout its entire lifecycle. This includes secure model training, output control, ongoing monitoring and evaluation, updates and upgrades, and protection of the model’s operating environment.
  • Operation security: service providers should implement comprehensive safeguards concerning the provision of services, the transparency of service operations, the collection of input data, the mechanisms for handling complaints and reports, and business continuity planning.

Last modified 26 January 2026

Last modified 23 July 2025

Last modified 14 July 2025

Last modified 9 July 2025

Last modified 21 July 2025

Last modified 22 July 2025

Last modified 11 February 2026

Last modified 22 July 2025

Generative AI guidance in France

In France, the CSPLA Report specifies that AI model providers should publish a policy of compliance with European copyright law that respects authors' opt-out rights, and should make available to rights holders and the public a sufficiently detailed summary of the content used to train AI models. Templates were provided for these summaries but were not kept in the final version of the GPAI Code of Practice.

The Senate Report highlights that today's AI ecosystem includes "more or less open" models, a distinction that has become central to regulatory debates because openness affects transparency, auditability and safety oversight. It explains that the EU AI Act now regulates not only AI uses but also foundation models (i.e., GPAI models) themselves, introducing a stricter regime for those deemed systemic-risk models due to their scale, dual-use potential and difficulty of supervision. The report emphasises that generative AI still suffers from core reliability issues (e.g., hallucinations, opacity and multi-layered bias) which persist even with mitigation techniques such as Retrieval-Augmented Generation (RAG), positioning these technical limitations as key reasons regulators now impose model-level obligations in addition to downstream application controls.

With regard to privacy aspects, the CNIL Generative AI Guidance provides recommendations on how to ensure privacy safeguards when using generative AI, including:

  • start with specific needs rather than deploying AI without a clear purpose;
  • define allowed and prohibited uses, especially regarding personal data;
  • acknowledge system limitations and risks;
  • choose secure deployment methods, preferably using local, specialised systems;
  • train end users on proper usage and risks; and
  • implement appropriate governance ensuring GDPR compliance with all stakeholders involved.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.
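The Article 51 compute criterion is a simple numeric threshold. As a hedged illustration only (the Act does not prescribe any estimation method, and the figures below are hypothetical), the widely used "6 × parameters × training tokens" rule of thumb can be used to sketch how cumulative training compute compares against the 10^25 FLOP presumption:

```python
# Hypothetical sketch: comparing an estimated training-compute figure against
# the EU AI Act's Article 51 presumption threshold of 10**25 FLOPs.
# The 6*N*D approximation (6 FLOPs per parameter per training token) is a
# common engineering rule of thumb, not a method prescribed by the Act.

THRESHOLD_FLOPS = 10**25  # Article 51 presumption threshold

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough cumulative training compute via the 6*N*D rule of thumb."""
    return 6 * num_parameters * num_tokens

def presumed_systemic_risk(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimate exceeds the Article 51 presumption threshold."""
    return estimated_training_flops(num_parameters, num_tokens) > THRESHOLD_FLOPS

# A hypothetical 70-billion-parameter model trained on 15 trillion tokens
# lands at roughly 6.3e24 FLOPs, below the 1e25 presumption threshold.
flops = estimated_training_flops(70e9, 15e12)
```

Note that even below the threshold, a model may still be designated as posing systemic risk by Commission decision or on the basis of other indicators.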

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 5 February 2026

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 3 February 2026

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 19 July 2025

Laws specifically addressing AI have not yet been introduced in Hong Kong.

The GenAI Guideline addresses the technical limitations and service risks of using generative AI, and sets out a governance framework based on five dimensions, namely: personal data privacy, intellectual property, crime prevention, reliability and trustworthiness, and system security. It further outlines key principles of governance, which are in line with international practices, such as:

  • compliance with laws and regulations;
  • security and transparency;
  • accuracy and reliability;
  • fairness and objectivity; and
  • practicality and efficiency.

Although non-binding, the GenAI Guideline provides practical recommendations to three main types of stakeholders (i.e., Technology Developers, Service Providers and Service Users) based on their respective roles and responsibilities.

For organisations that are regulated by the HKMA and/or the SFC, please refer to the specific guidelines on the use of generative AI.

Last modified 25 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 24 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 23 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 3 February 2026

Currently, there are no laws in Japan that specifically address this point.

Last modified 31 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 14 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 24 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 23 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in Mauritius yet.

Last modified 26 June 2025

Laws specifically addressing AI have not been introduced in Mexico yet.

Last modified 29 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applicable to a GPAI model / system (i.e., irrespective of a specific use case) and the requirements applicable to an AI system based on its risk profile (depending on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates its model into a high-risk AI system, then the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in New Zealand yet, so there are no statutory controls on the use of generative AI.

The GenAI Guidelines (summarised under Regulatory guidance / voluntary codes) are relevant for the New Zealand public sector's use of generative AI tools.

Additionally, the OPC's Gen AI Guidance summarises privacy risks arising from the use of generative AI, which organisations subject to the Privacy Act are expected to appropriately mitigate. The risks identified are:

  • privacy risks associated with the training data used by generative AI (e.g., how it was collected and whether it was collected with sufficient transparency);
  • confidentiality of information entered into generative AI tools;
  • accuracy of personal information created by generative AI; and
  • individuals' ability to exercise their data subject rights to access and correction of their personal information held in or processed by generative AI tools.

Last modified 14 July 2025

Laws specifically addressing AI have not been introduced in Nigeria yet.

Last modified 17 June 2025

The content on Controls on generative AI in the European Union applies in Norway.

Last modified 9 October 2025

Laws specifically addressing controls on generative AI have not been introduced in Peru yet.

Last modified 20 July 2025

General-Purpose AI Models

Article 3(63) of the EU AI Act defines a GPAI (general-purpose AI) model as an:

"AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market."

GPAI models are versatile and can be applied across various domains and contexts. The Act sets requirements to ensure that these specific models, due to their broad applicability and the wide range of tasks they can complete, adhere to high ethical and safety standards. Please note that not all AI models are GPAI models, and the EU AI Act only regulates the latter.

General-Purpose AI Models with Systemic Risk

Article 3(65) of the EU AI Act defines 'systemic risk' as:

"a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain".

Article 51 of the EU AI Act classifies a GPAI model as having systemic risk if it has high-impact capabilities (currently presumed when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, although other indicators and benchmarks may also be taken into account) or on the basis of a decision of the Commission.

Systemic risk involves the broader, cumulative impact of GPAI models on society. This encompasses scenarios where GPAI models could lead to significant disruptions or risks, necessitating a regulatory focus to prevent widespread adverse effects and ensure resilience across sectors. In view of the higher risks, the Act sets additional requirements for GPAI models with systemic risk.

Importantly, the requirements applying to a GPAI model or system (i.e., irrespective of a specific use case) and those applying to an AI system based on its risk profile (which depends on the use case at stake) can be cumulative. For instance, if the provider of a GPAI model integrates that model into a high-risk AI system, the rules for both GPAI models and high-risk AI systems must be complied with.

Last modified 23 July 2025

Laws specifically addressing AI have not yet been introduced in Singapore. 

The Model Framework for GenAI sets out nine dimensions for consideration (see 'Regulatory Guidance / Voluntary Codes' above) in relation to generative AI.

Last modified 28 July 2025

The AI Act imposes several obligations on AI business operators that intend to offer products or services utilising generative AI.

  • Definition of Generative AI: This term refers to AI systems that produce content such as text, audio, images, and other outputs by mimicking the structure of input data (Article 2, Item 5).
  • Advance Notification Obligation: AI business operators must notify users in advance that their products or services are powered by generative AI (Article 31, Paragraph (1)). Non-compliance may result in an administrative fine of up to KRW 30 million (Article 43, Paragraph (1), Item 1).
  • Labelling Obligation: Products or services must be clearly labelled as being created by generative AI (Article 31, Paragraph (2)).
  • Deepfake Content: AI business operators providing synthetic outputs that may be mistaken for real content (often referred to as “deepfakes”) must ensure these are clearly labelled. If labelled content qualifies as artistic or creative expression, the manner of labelling should not hinder its appreciation (Article 31, Paragraph (3)).
  • Compliance Guidance: The specifics of notification and labelling, including potential exceptions, will be detailed in a forthcoming Presidential Decree (Article 31, Paragraph (4)).
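
The notification and labelling duties above amount to a short pre-release checklist a provider can run against an offering. A minimal sketch follows; the field names are hypothetical, and the detailed criteria (including exceptions) await the Presidential Decree under Article 31, Paragraph (4):

```python
from dataclasses import dataclass

# Hypothetical Article 31 checklist for Korea's AI Act.
# Field names are illustrative only; the binding criteria will be
# set out in a forthcoming Presidential Decree.

@dataclass
class GenAiOffering:
    users_notified_in_advance: bool   # Article 31(1): advance notification
    output_labelled_as_ai: bool       # Article 31(2): generative-AI labelling
    contains_deepfake_content: bool
    deepfake_clearly_labelled: bool   # Article 31(3): deepfake labelling

def article_31_issues(offering: GenAiOffering) -> list[str]:
    """Return the Article 31 duties the offering appears to miss."""
    issues = []
    if not offering.users_notified_in_advance:
        issues.append("Article 31(1): notify users that the service uses generative AI")
    if not offering.output_labelled_as_ai:
        issues.append("Article 31(2): label outputs as created by generative AI")
    if offering.contains_deepfake_content and not offering.deepfake_clearly_labelled:
        issues.append("Article 31(3): clearly label deepfake content")
    return issues
```

An offering that notifies users, labels its outputs, and either contains no deepfakes or labels them clearly would return an empty list under this sketch.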
Last modified 29 July 2025

Laws specifically addressing AI have not yet been introduced in Thailand.

Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Turkey.

Last modified 30 July 2025

There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no specific controls on generative AI).

The DIFC’s Data Protection Regulations do not contain any specific controls on generative AI.

Last modified 4 August 2025

There is no single statute addressing AI in the UK yet. Existing principles under legislation such as the Equality Act 2010, the Data Protection Act 2018, the UK GDPR and, now, the Data Use and Access Act must therefore be considered.

Last modified 23 February 2026

As the U.S. does not have a comprehensive federal law regulating generative AI, controls on generative AI are emerging through a combination of enforcement actions, state and local legislation, and agency rules or guidance. 

At the federal level, several agencies, including the FTC and SEC, have taken enforcement actions against deceptive claims about AI. The FTC will be enforcing the TAKE IT DOWN Act, which covers certain types of deepfakes, and has issued rules about impersonation scams and fake reviews that would cover the use of generative AI tools.

At the state level, several jurisdictions have enacted targeted controls on generative AI. These laws include transparency obligations on AI developers, prohibitions on AI-generated deepfakes, disclosure requirements for consumer-bot interactions, and restrictions on chatbot use for mental health or companionship, among other things. Three examples are:

  • California’s Generative AI Training Data Transparency Act, which requires disclosure of high-level details about the training data used in generative AI systems
  • Colorado’s AI Act, which includes provisions requiring developers and deployers of high-risk AI systems, including generative models, to exercise reasonable care to prevent algorithmic discrimination
  • Utah’s AI Policy Act, which prohibits the undisclosed use of generative AI in regulated occupations and mandates clear disclosure when AI is used in consumer interactions
Last modified 10 March 2026
