Artificial Intelligence in Australia
Regulatory guidance / voluntary codes in Australia
On 23 May 2025, the Australian Signals Directorate's Australian Cyber Security Centre, together with its counterparts in the US, UK and New Zealand, released guidance on best practices for AI data security. The guidance sets out key data security risks in AI use and provides best practice guidelines, including, but not limited to, sourcing reliable data and tracking data provenance, verifying and maintaining data integrity during storage and transport, and encrypting data.
In March 2025, the Commonwealth Ombudsman released an Automated Decision Making Better Practice Guide. The Guide is intended to inform the selection, adoption and use of AI by government agencies to ensure their compliance with Australian laws, including administrative law. Appendix A of the Guide features a comprehensive checklist which may assist government and non-government entities with decision making surrounding their use of AI.
Also in March 2025, the Australian Government Digital Transformation Agency released AI and Cyber Risk model clauses for procuring or developing AI models.
On 21 October 2024, the Office of the Australian Information Commissioner (OAIC), the national regulator for privacy and freedom of information, released two guidance documents relating to AI:
- Guidance on privacy and the use of commercially available AI products – This guidance document is intended to assist organisations deploying and using commercially available AI systems in complying with their privacy obligations. It specifies that privacy obligations apply to any personal information input into an AI system, as well as to any AI-generated output that contains personal information. The OAIC also recommends that no personal information be entered into publicly available generative AI tools.
- Guidance on privacy and developing and training generative AI models – This guidance document recommends that AI developers take reasonable steps to ensure accuracy in generative AI models. With respect to privacy obligations, it notes that personal information includes inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes). In addition, this guidance document reminds developers that publicly available or accessible data may not automatically be legally used to train or fine-tune generative AI models or systems.
In September 2024, Australia's Department of Industry, Science and Resources published a Proposal Paper for introducing mandatory guardrails for AI in high-risk settings (Proposal Paper introducing mandatory guardrails). The paper identifies two broad categories of high-risk AI: (1) AI systems with known or foreseeable proposed uses that are considered high risk; and (2) advanced, highly capable general-purpose AI (GPAI) models that are capable of being used, or adapted for use, for a variety of purposes, whether directly or through integration in other systems, where all possible applications and risks cannot be foreseen.
With respect to the first category listed above, the principles that organisations must consider in designating an AI system as high-risk are the risk of adverse impacts to:
- an individual's human rights, health or safety, and legal rights e.g. legal effects, defamation or similarly significant effects on an individual;
- groups of individuals or collective rights of cultural groups; and
- the broader Australian economy, society, environment and rule of law,
as well as the severity and extent of the adverse impacts outlined above.
With respect to AI designated as high-risk, the Proposal Paper introducing mandatory guardrails sets out the following proposed mandatory guardrails for organisations developing or deploying high-risk AI systems (page 35):
- "Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
- Establish and implement a risk management process to identify and mitigate risks;
- Protect AI systems, and implement data governance measures to manage data quality and provenance;
- Test AI models and systems to evaluate model performance and monitor the system once deployed;
- Enable human control or intervention in an AI system to achieve meaningful human oversight;
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI generated content;
- Establish processes for people impacted by AI systems to challenge use or outcomes;
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
- Keep and maintain records to allow third parties to assess compliance with guardrails; and
- Undertake conformity assessments to demonstrate and certify compliance with guardrails."
The definition of high-risk AI and the guardrails are expected to be refined based on feedback provided by Australian stakeholders on the Proposal Paper introducing mandatory guardrails.
On 5 September 2024, the Australian Government released a Voluntary AI Safety Standard publication that sets out substantially similar guardrails as those in the Proposal Paper introducing mandatory guardrails, with the exception of guardrail 10, which states:
"Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness."
Whereas the Proposal Paper introducing mandatory guardrails applies to high-risk AI, the Voluntary AI Safety Standard sets out voluntary guidelines for developers and deployers of AI to, among other things, protect people and communities from harm, avoid reputational and financial risks to their organisations, increase organisational and community trust and confidence in AI systems, services and products, and align with legal obligations and expectations in Australia.
On 1 September 2024, the Policy for the Responsible Use of AI in Government (Policy) came into effect, aiming to empower the Australian Government to safely, ethically and responsibly engage with AI, strengthen public trust in the government's use of AI, and adapt to technological and policy changes over time.
In particular, the Policy requires government agencies to:
- designate accountability for compliance with the policy to certain public officials, and
- publish and keep updated an AI transparency statement.
Additional recommendations include fundamental AI training for all staff, additional training for staff with roles or responsibilities in connection with AI, understanding and recording where and how AI is being used within agencies, integrating AI considerations into existing frameworks, participating in the Australian Government's AI assurance framework, monitoring AI use cases and keeping up to date with policy changes.
Australia has been a signatory to the Bletchley Declaration since 1 November 2023. The Declaration establishes a collective understanding between 28 countries and the European Union of the opportunities and risks posed by AI.
In November 2019, the Australian Government published its AI Ethics Principles (Ethics Principles), designed to ensure that AI is safe, secure and reliable and to:
- help achieve safer, more reliable and fairer outcomes for all Australians;
- reduce the risk of negative impact on those affected by AI applications; and
- assist businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
Definitions in Australia
Information not provided.
Prohibited activities in Australia
Information not provided.
Controls on generative AI in Australia
Information not provided.
User transparency in Australia
Information not provided.
Fairness / unlawful bias in Australia
Information not provided.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for putting the AI on the market, either by first making it available on the market or by putting the AI into service for its own purposes under its own name or trademark. An organisation may also become a downstream provider if it makes substantial modifications to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
Laws specifically addressing AI have not yet been introduced in Brazil.
Draft Article 4 of the proposed Brazilian AI Bill proposes the following definitions:
Artificial intelligence system
"Machine-based system that, with different degrees of autonomy and for explicit or implicit purposes, infers, from a set of data or information it receives, how to generate results, in particular, predictions, content, recommendations or decisions that can influence the virtual, physical or real environment" (draft Article 4, I)."
Developer
"A natural or legal person, whether public or private, who develops an artificial intelligence system, directly or on commission, with a view to placing it on the market or applying it to a service provided by them, under their own name or brand, for a fee or free of charge" (draft Article 4, V)."
National laws specifically addressing AI have not yet been passed in Canada. The Privacy Principles differentiate between developers and providers (individuals or organizations that develop or train foundation models or generative AI systems, or that put such systems onto the market) and organizations that use generative AI as part of their activities.
Article 3 of the Chilean AI Bill provides the following definitions:
AI System
"Machine-based system that, by explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, contents, recommendations or decisions that can influence physical or virtual environments. Different AI systems can vary in their levels of autonomy and adaptability after implementation."
Provider
"Any natural or legal person or government agency that develops an AI system with a view to introducing it on the market or putting it into service, whether free of charge or for a fee."
The PRC does not yet have an omnibus AI regulation; instead, it has adopted a sector-driven approach. That said, relevant definitions appear under specific laws and regulations.
Under the GenAI Measures:
- "Generative AI technology" refers to "models and related technologies that have the ability to generate texts, images, audios, videos or other content."
- "Generative AI service provider" refers to "any organisation or individual that uses generative AI technology to provide generative AI services (including providing such services through providing programming interfaces or other means)."
- "Generative AI service user" refers to "any organisation or individual who uses generative AI services to generate content."
Under the Deep Synthesis Provisions:
- "Deep synthesis technology" refers to "any technology that employs deep learning, virtual reality or any other generative or synthetic algorithm to generate texts, images, audio, video, virtual scenes or other network information."
- "Deep Synthesis services provider" refers to "any organisation or individual who provides deep synthesis services."
- "Providers of technical support for deep synthesis services" refers to "any organisation or individual who provides technical support for deep synthesis services."
- "User of deep synthesis services" refers to "any organisation or individual who uses deep synthesis services to generate, reproduce, release or distribute information."
- "Training data" refers to "labelled or benchmark datasets used for training machine learning models."
Under the Recommendation Algorithms Provisions:
- "Application of recommendation algorithm technologies" refers to "using algorithm technologies such as generation and synthesis technology, personalised pushing technology, ranking and selection technology, retrieval and filtering technology, and dispatching and decision-making technology to provide users with information."
Under the AI Data Standard:
- "Pre-training" refers to "the training process in which a generative AI model acquires general knowledge using large-scale datasets."
- "Fine-tuning" refers to "the training process in which a generative AI model, based on pre-training, acquires context-specific service capabilities using data from specific sources."
Definitions in France
In the CNIL AI Fact Sheets, the CNIL considers it likely that AI providers creating training databases are acting as data controllers, unless an AI provider entrusts the creation of the database to a third party – in which case, the AI provider is the data controller only if it reuses the existing database for its own purposes. Conversely, where an AI provider develops an AI system on behalf of customers (the deployers) and under their instructions, it is more likely to be deemed a data processor. In any event, the CNIL notes that a case-by-case analysis should be conducted.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for putting the AI on the market either by making it first available in the market or directly puts the AI into use for its own purposes and under its own name or trademark. An organisation may also become a downstream provider if it makes substantial modifications to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a “substantial modification”. At this stage, the only conclusive criteria is that such modification must not have been foreseen by the provider in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will impact GPAI models supplied onto the market on an open source or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a 'provider' bear significant responsibility for ensuring compliance with the EU AI Act, so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for placing the AI on the market, either by first making it available on the market or by putting the AI into service for its own purposes under its own name or trademark. An organisation may also become a downstream provider if it makes a substantial modification to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Whether payment is made is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
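The role definitions above operate as a classification rule: an organisation's obligations depend on which (possibly several) operator roles it holds. The following sketch is an illustrative simplification only, not legal advice; all names and fields are hypothetical, and it models just the provider test (Article 3(3)), the downstream-provider trigger (Article 25(1)) and the deployer test (Article 3(4)) described above.

```python
# Hypothetical, simplified model of EU AI Act operator-role classification.
# Not legal advice: the real tests involve further conditions and guidance.
from dataclasses import dataclass

@dataclass
class Organisation:
    develops_system: bool           # develops the AI system or GPAI model
    places_on_market: bool          # supplies it under its own name or trademark
    uses_under_authority: bool      # uses the system under its own authority
    personal_use_only: bool         # purely personal, non-professional activity
    substantial_modification: bool  # Art. 25(1): unforeseen substantial modification

def operator_roles(org: Organisation) -> set[str]:
    """Return the (possibly overlapping) operator roles an organisation holds."""
    roles: set[str] = set()
    # Art. 3(3): develops and places on the market / puts into service -> provider.
    if org.develops_system and org.places_on_market:
        roles.add("provider")
    # Art. 25(1): a substantial modification creates a downstream provider.
    if org.substantial_modification:
        roles.add("provider")
    # Art. 3(4): use under own authority, outside personal activity -> deployer.
    if org.uses_under_authority and not org.personal_use_only:
        roles.add("deployer")
    return roles

# The same operator can hold more than one role at once, e.g. an organisation
# that builds an AI system, markets it, and also uses it internally:
org = Organisation(develops_system=True, places_on_market=True,
                   uses_under_authority=True, personal_use_only=False,
                   substantial_modification=False)
print(sorted(operator_roles(org)))  # ['deployer', 'provider']
```

The overlapping-roles result mirrors the point above that the same operator may qualify simultaneously as both provider and deployer.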
Laws specifically addressing AI have not yet been introduced in Hong Kong.
The Ethical AI Framework provides the following definition:
- AI System: a collection of interrelated technologies used to help solve problems autonomously and perform tasks to achieve defined objectives without explicit guidance from a human being.
The GenAI Guideline provides the following definitions:
- Technology Developer: organisations and individuals who create, train and maintain the foundational models and algorithms that power generative AI systems. (That is, technology developers and those who commission the development of technology or determine the use of technology.)
- Service Provider: entities that deploy generative AI technologies as customer-facing applications or services, acting as intermediaries between developers and end users. (That is, service providers, platform providers, and individuals who provide service with additional features and tools based on existing technology.)
- Service User: end users of AI systems, content creators, and disseminators of generative content.
Under the AI Act, “AI-related technology” is defined as technology necessary to realize functions that substitute for human intellectual abilities such as cognition, reasoning, and judgment through artificial means, as well as technology related to information processing systems that utilize such functions to process input data and produce outputs.
The AI Act also defines “AI-utilizing business operators” as those who intend to develop or provide products or services utilizing AI-related technologies, or otherwise intend to utilize such technologies in their business activities.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for putting the AI on the market either by making it first available in the market or directly puts the AI into use for its own purposes and under its own name or trademark. An organisation may also become a downstream provider if it makes substantial modifications to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a “substantial modification”. At this stage, the only conclusive criteria is that such modification must not have been foreseen by the provider in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will impact GPAI models supplied onto the market on an open source or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for putting the AI on the market either by making it first available in the market or directly puts the AI into use for its own purposes and under its own name or trademark. An organisation may also become a downstream provider if it makes substantial modifications to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a “substantial modification”. At this stage, the only conclusive criteria is that such modification must not have been foreseen by the provider in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will impact GPAI models supplied onto the market on an open source or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for putting the AI on the market either by making it first available in the market or directly puts the AI into use for its own purposes and under its own name or trademark. An organisation may also become a downstream provider if it makes substantial modifications to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a “substantial modification”. At this stage, the only conclusive criteria is that such modification must not have been foreseen by the provider in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will impact GPAI models supplied onto the market on an open source or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider either places the AI on the market, by making it available on the market for the first time, or puts it into service for its own purposes, in each case under its own name or trademark. An organisation may also become a downstream provider if it makes a substantial modification to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
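The interaction of these role definitions can be sketched as a simple decision rule. The sketch below is purely illustrative (the function name, parameters and simplifications are our own, not terms drawn from the Act), and any real classification exercise is fact-specific:

```python
# Illustrative sketch only: a simplified decision rule for the (possibly
# overlapping) roles an organisation may hold under the EU AI Act.
# Parameter names are our own shorthand for the statutory tests.

def classify_roles(develops_or_markets_under_own_name: bool,
                   uses_system_professionally: bool,
                   substantially_modifies: bool) -> set:
    """Return the set of operator roles the organisation may hold."""
    roles = set()
    if develops_or_markets_under_own_name:
        roles.add("provider")   # Article 3(3): develops, or places on the
                                # market / puts into service under own name
    if substantially_modifies:
        roles.add("provider")   # Article 25(1): downstream provider
    if uses_system_professionally:
        roles.add("deployer")   # Article 3(4): use under its authority,
                                # excluding personal non-professional use
    return roles

# The same entity can hold both roles at once:
roles = classify_roles(develops_or_markets_under_own_name=True,
                       uses_system_professionally=True,
                       substantially_modifies=False)
print(sorted(roles))  # ['deployer', 'provider']
```

As the final call shows, the roles are cumulative rather than mutually exclusive, which mirrors the point above that one operator can be provider and deployer simultaneously.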
Laws specifically addressing AI have not been introduced in Mauritius yet.
Laws specifically addressing AI have not yet been introduced in Mexico. However, Article 4 of the AI Bill sets out the following definitions:
- Artificial Intelligence Systems: those involving the use and exploitation of information technologies to create computer programmes capable of performing calculations, operations, research or reasoning comparable to those performed by the human mind.
- Developer: any natural or legal person who creates or develops artificial intelligence systems.
- Supplier: any person, whether natural or legal, who markets or distributes artificial intelligence systems, either on a paid or pro bono basis.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider either places the AI on the market, by making it available on the market for the first time, or puts it into service for its own purposes, in each case under its own name or trademark. An organisation may also become a downstream provider if it makes a substantial modification to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
As there is no AI-specific legislation in New Zealand, there are no relevant statutory definitions. As New Zealand is an adherent to the OECD's AI Principles, the definition of "AI system" in those principles is relevant – namely:
"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."
This is the definition used in the AI Strategy, Framework, AI Guidance for Business and GenAI Guidelines.
The OPC AI Guidance broadly defines AI as computer systems where one or more of the following applies:
- machine learning systems developed or refined by processing training data;
- classifier systems used to put information into categories (e.g. captioning images);
- interpreter systems that turn noisy input data into standardised outputs (e.g. deciding what words are present in speech or handwriting);
- generative systems used to create text, images, computer code, or something else; and/or
- automation where computers take on tasks that people have done up until recently.
Laws specifically addressing AI have not been introduced in Nigeria yet.
The definitions of AI System, Provider and Deployer applicable in the European Union are also applicable in Norway.
Article 3(b) of the AI Law defines an 'Artificial intelligence-based system' as follows:
"An electronic-mechanical system that can, for a set of human-defined objectives, make predictions, recommendations or decisions, influencing real or virtual environments. It is designed to operate with different levels of autonomy."
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider either places the AI on the market, by making it available on the market for the first time, or puts it into service for its own purposes, in each case under its own name or trademark. An organisation may also become a downstream provider if it makes a substantial modification to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider either places the AI on the market, by making it available on the market for the first time, or puts it into service for its own purposes, in each case under its own name or trademark. An organisation may also become a downstream provider if it makes a substantial modification to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider either places the AI on the market, by making it available on the market for the first time, or puts it into service for its own purposes, in each case under its own name or trademark. An organisation may also become a downstream provider if it makes a substantial modification to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
Laws specifically addressing AI have not yet been introduced in Singapore. That said, there are relevant definitions under specific guidance documents.
Under the Model Framework for GenAI:
- Generative AI is referred to as "AI models capable of generating text, images or other media types. They learn the patterns and structure of their input training data and generate new data with similar characteristics. Advances in transformer-based deep neural networks enable generative AI to accept natural language prompts as input, including large language models (LLM) such as GPT-4, Gemini, Claude and LLaMA."
Under the Model Artificial Intelligence Governance Framework:
- "AI" refers to "a set of technologies that seek to simulate human traits such as knowledge, reasoning, problem solving, perception, learning and planning, and, depending on the AI model, produce an output or decision (such as a prediction, recommendation, and/or classification). AI technologies rely on AI algorithms to generate models. The most appropriate model(s) is/are selected and deployed in a production system."
- "AI Solution Providers" refers to those who "develop AI solutions or application systems that make use of AI technology. These include not just commercial off-the-shelf products, online services, mobile applications, and other software that consumers can use directly, but also B2B2C applications, e.g. AI-powered fraud detection software sold to financial institutions. They also include device and equipment manufacturers that integrate AI-powered features into their products, and those whose solutions are not standalone products but are meant to be integrated into a final product. Some organisations develop their own AI solutions and can be their own solution providers."
- "Organisations" refers to "companies or other entities that adopt or deploy AI solutions in their operations, such as backroom operations (e.g. processing applications for loans), front-of-house services (e.g. e-commerce portal or ride-hailing app), or the sale or distribution of devices that provide AI-powered features (e.g. smart home appliances)."
- "Individuals" refers to those who "can, depending on the context, refer to persons to whom organisations intend to supply AI products and/or services, or persons who have already purchased the AI products and/or services. These may be referred to as “consumers” or “customers” as well."
Under the MOH Guidelines:
- "Developers" refer to "organisations or individuals who plan, fund, develop and/or maintain AI-MD, including standalone software medical devices that can interact with patients directly, or AI-MD intended to be used as part of healthcare service provision by organisations or individual healthcare professionals."
- "Implementers" refer to "organisations or individuals who use AI-MD to deliver healthcare services."
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider either places the AI on the market, by making it available on the market for the first time, or puts it into service for its own purposes, in each case under its own name or trademark. An organisation may also become a downstream provider if it makes a substantial modification to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider either places the AI on the market, by making it available on the market for the first time, or puts it into service for its own purposes, in each case under its own name or trademark. An organisation may also become a downstream provider if it makes a substantial modification to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a "substantial modification". At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
Under the AI Act, the term “artificial intelligence” or “AI” is defined as the electronic manifestation of human intellectual capabilities such as learning, inference, perception, judgement, and language comprehension.
“AI system,” the principal subject of regulation, is defined as an AI-based system with varying degrees of autonomy and adaptability, capable of influencing physical or virtual environments through its predictions, recommendations, and decisions.
Furthermore, the AI Act applies to AI business operators, defined as individuals or entities engaged in AI-related activities. These operators are categorised into two groups (Article 2, Item 7): (i) 'AI Developers' (corporations, organisations, individuals, and national institutions involved in the development and provision of AI); and (ii) 'AI User Businesses' (corporations, organisations, individuals, and national institutions that offer AI products or services using AI developed by others).
AI System
Article 3(1) of the EU AI Act defines an 'AI system' as follows:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
The EU AI Act uses a technology neutral definition, focusing on the effect of the system rather than the techniques used. There are several key features of the definition which, acting together, distinguish the AI system from more traditional software systems. The central characteristics are the level of autonomy and adaptiveness in how the system operates and the ability for the system to infer how to generate outputs. So, an AI system must be able to operate independently at some level (like many existing technologies) but must also be able to apply logic to draw conclusions from data it is given. It may also adapt after deployment, in effect by continuing to "learn". These features are more akin to human capability than traditional technology systems, which operate using more fixed and pre-determined paths to process data. These outputs must influence physical or virtual environments, whether by making decisions or through other means.
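Read as a decision procedure, the Article 3(1) definition sets out cumulative elements that must all be present. The sketch below is an illustrative mnemonic only, not legal advice: the field and function names are our own shorthand, not statutory terms, and real classification requires case-by-case legal analysis.

```python
from dataclasses import dataclass

# Illustrative shorthand for the cumulative elements of the Article 3(1)
# definition. These labels are our own, not terms from the Act.
@dataclass
class SystemProfile:
    machine_based: bool           # runs on hardware/software
    some_autonomy: bool           # operates with some level of independence
    infers_outputs: bool          # derives outputs from inputs, rather than
                                  # following only fixed, pre-programmed rules
    influences_environment: bool  # outputs can affect physical or virtual
                                  # environments

def may_be_ai_system(p: SystemProfile) -> bool:
    """All listed elements are cumulative. Adaptiveness after deployment
    is optional in the definition ("may exhibit"), so it is deliberately
    not a gating element here."""
    return (p.machine_based and p.some_autonomy
            and p.infers_outputs and p.influences_environment)

# A conventional rules engine with entirely hard-coded logic lacks the
# inference element, so it falls outside this sketch of the definition:
legacy_rules = SystemProfile(True, True, False, True)
print(may_be_ai_system(legacy_rules))  # False
```

Note the design choice of omitting adaptiveness as a condition: the definition says an AI system "may" exhibit adaptiveness after deployment, so a system that never updates itself can still qualify.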
The EU AI Act also sets out specific rules for GPAI models. GPAI models differ from AI systems; they can be an essential component integrated into an AI system, but do not themselves constitute an AI system until further components are added (such as an interface). For more information, please see Controls on generative AI.
Provider
Article 3(3) of the EU AI Act defines a 'provider' as follows:
"a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system, or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge".
Those falling within this definition as a ‘provider’ have significant responsibility for ensuring compliance with the EU AI Act, and so identifying the provider will be crucial for businesses and may well influence their choice of business/deployment model.
The provider is responsible for placing the AI on the market, either by making it available on the market for the first time or by putting it into service directly for its own purposes under its own name or trademark. An organisation may also become a downstream provider if it makes substantial modifications to a system or changes its intended purpose (Article 25(1)). Guidance from the European Commission is expected on what counts as a “substantial modification”. At this stage, the only conclusive criterion is that such a modification must not have been foreseen in the initial conformity assessment carried out by the provider.
Payment is not relevant, which will affect GPAI models supplied to the market on an open-source basis or under free commercial terms.
Deployer
Article 3(4) of the EU AI Act defines a 'deployer' as follows:
"a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity".
In simple terms, a 'deployer' is an entity that uses an AI system other than for personal, non-professional use. Although the burden of responsibility on a deployer is not as great as on 'providers', there are still obligations that it must fulfil.
Note that the EU AI Act also implements requirements for organisations performing other roles (as distributor, importer, product manufacturer, and authorised representative). Together with the deployer and provider, such organisations are referred to as 'operators' of AI. Importantly, the same operator may qualify simultaneously as more than one of these roles if they meet the respective conditions. For instance, it is possible to be both the provider and the deployer of an AI system at the same time.
Laws specifically addressing AI have not been introduced in Thailand yet.
Laws specifically addressing AI have not been introduced in Turkey yet.
NAIS defines an "AI system" as:
"A system designed to perform a function, consisting of solely software or a combination of software and hardware, by collecting and interpreting structured and unstructured data using AI technologies. It can be created by embedding AI technologies into an existing system or be entirely based on AI technologies" (page 90 of NAIS).
Whilst there is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no definitions established by law), the following terms are defined in the AI Ethics Guide:
- AI System: A product, service, process, or decision-making methodology whose operation or outcome is materially influenced by artificially intelligent functional units (being a function unit that performs functions that are generally associated with human intelligence such as reasoning, learning and self-improvement).
- Developer: An entity that designs, builds, maintains, or tunes an AI System, or determines its purpose.
- Operator: An entity that uses AI Systems in operations or decision-making, provides services via AI Systems, or evaluates AI System use cases.
The DIFC’s Data Protection Regulations define the following terms:
- (AI) System: Any machine-based system operating in an autonomous or semi-autonomous manner that can: (i) process personal data for human-defined purposes or purposes that the system itself defines, or both; and (ii) generate output as a result of or on the basis of such processing.
- Deployer: with respect to an AI System, the natural or legal person: (i) under whose authority or on whose direction or for whose benefit the AI System is operated; or (ii) who receives the benefit of the operation of the AI System or any output generated by the AI System, in each case without regard to whether or not the AI System is operated, supervised or hosted by such person, or such person defines or determines any of the purposes of which personal data is processed by such AI System.
- Operator: A Provider that operates or supervises an AI System on behalf or otherwise for the benefit, and on the direction of, a Deployer, in each case without regard to whether or not that Provider exercises any control over the processing of personal data by the AI System.
- Provider: A natural or legal person that develops an AI System, or procures that an AI System is developed for or on behalf of such person, in each case with a view to providing, commercialising or otherwise making such AI System available to Operators or Deployers.
A specific law addressing AI has not been introduced in the UK yet.
In the U.S., the definition of AI varies across jurisdictions and legal frameworks.
At the federal level, definitions of AI have appeared in several laws, including the National AI Initiative Act, reflected in 15 U.S.C. § 9401, which defines AI as follows:
“(3) ARTIFICIAL INTELLIGENCE – The term ‘artificial intelligence’ means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to:
(A) perceive real and virtual environments;
(B) abstract such perceptions into models through analysis in an automated manner; and
(C) use model inference to formulate options for information or action.”
State laws have also used different definitions of AI. Below are two variants.
Colorado’s AI Act does not provide a standalone definition of AI; instead, it defines the AI systems it regulates as:
“Any machine-based system that, for any explicit or implicit objective, infers from the inputs the system received how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” (C.R.S. § 6-1-1701(2))
Utah’s AI Policy Act separately defines generative AI as:
“An artificial system that: (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates nonscripted outputs similar to outputs created by a human, with limited or no human oversight.” (Utah Code § 13-2-12(1)(a))