Artificial Intelligence in Australia
User transparency
Regulatory guidance / voluntary codes in Australia
On 23 May 2025, the Australian Signals Directorate's Australian Cyber Security Centre, together with its counterparts in the US, UK and New Zealand, released guidance on best practices for AI Data Security. The guidance sets out key data security risks in AI use and provides a list of best practice guidelines, including, but not limited to, sourcing reliable data and tracking data provenance, verifying and maintaining data integrity during storage and transport, and data encryption.
In March 2025, the Commonwealth Ombudsman released an Automated Decision Making Better Practice Guide. The Guide is intended to inform the selection, adoption and use of AI by government agencies to ensure their compliance with Australian laws, including administrative law. Appendix A of the Guide features a comprehensive checklist which may assist government and non-government entities with decision making surrounding their use of AI.
Also in March 2025, the Australian Government Digital Transformation Agency released AI and Cyber Risk model clauses for procuring or developing AI models.
On 21 October 2024, the Office of the Australian Information Commissioner (OAIC), the national regulator for privacy and freedom of information, released two guidance documents relating to AI:
- Guidance on privacy and the use of commercially available AI products – This guidance document is intended to assist organisations deploying and using commercially available AI systems in complying with their privacy obligations. The guidance document specifies that privacy obligations apply to any personal information input into an AI system and to the output generated by the AI system (where the output contains personal information). The OAIC also recommends that no personal information be entered into publicly available generative AI tools.
- Guidance on privacy and developing and training generative AI models – This guidance document recommends that AI developers take reasonable steps to ensure accuracy in generative AI models. With respect to privacy obligations, it notes that personal information includes inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes). In addition, this guidance document reminds developers that publicly available or accessible data may not automatically be legally used to train or fine-tune generative AI models or systems.
In September 2024, Australia's Department of Industry, Science and Resources published a Proposal Paper for introducing mandatory guardrails for AI in high-risk settings (Proposal Paper introducing mandatory guardrails). The paper identifies two broad categories of high-risk AI: (1) AI systems with known or foreseeable uses that are considered to be high risk; and (2) advanced, highly capable general-purpose AI (GPAI) models that are capable of being used, or being adapted for use, for a variety of purposes, both for direct use and for integration in other systems, and whose possible applications and risks cannot all be foreseen.
With respect to the first category listed above, the principles that organisations must consider in designating an AI system as high-risk are the risk of adverse impacts to:
- an individual's human rights, health or safety, and legal rights e.g. legal effects, defamation or similarly significant effects on an individual;
- groups of individuals or collective rights of cultural groups; and
- the broader Australian economy, society, environment and rule of law,
as well as the severity and extent of the adverse impacts outlined above.
With respect to AI designated as high-risk, the Proposal Paper introducing mandatory guardrails sets out the following proposed mandatory guardrails for organisations developing or deploying high-risk AI systems (page 35):
- "Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
- Establish and implement a risk management process to identify and mitigate risks;
- Protect AI systems, and implement data governance measures to manage data quality and provenance;
- Test AI models and systems to evaluate model performance and monitor the system once deployed;
- Enable human control or intervention in an AI system to achieve meaningful human oversight;
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI generated content;
- Establish processes for people impacted by AI systems to challenge use or outcomes;
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
- Keep and maintain records to allow third parties to assess compliance with guardrails; and
- Undertake conformity assessments to demonstrate and certify compliance with guardrails."
The definition of high-risk AI and the guardrails are expected to be refined based on feedback provided by Australian stakeholders in response to the Proposal Paper introducing mandatory guardrails.
On 5 September 2024, the Australian Government released the Voluntary AI Safety Standard, which sets out substantially similar guardrails to those in the Proposal Paper introducing mandatory guardrails, with the exception of guardrail 10, which states:
"Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness."
Whereas the Proposal Paper introducing mandatory guardrails applies to high-risk AI, the Voluntary AI Safety Standard sets out voluntary guidelines for developers and deployers of AI to protect people and communities from harm, avoid reputational and financial risks to their organisations, increase organisational and community trust and confidence in AI systems, services and products, and align with legal obligations and expectations in Australia, among other things.
On 1 September 2024, the Policy for the Responsible Use of AI in Government (Policy) came into effect, aiming to empower the Australian Government to safely, ethically and responsibly engage with AI, strengthen public trust in the government's use of AI, and adapt to technological and policy changes over time.
In particular, the Policy requires government agencies to:
- designate accountability for compliance with the policy to certain public officials, and
- publish and keep updated an AI transparency statement.
Additional recommendations include fundamental AI training for all staff, additional training for staff with roles or responsibilities in connection with AI, understanding and recording where and how AI is being used within agencies, integrating AI considerations into existing frameworks, participating in the Australian Government's AI assurance framework, monitoring AI use cases and keeping up to date with policy changes.
Australia has been a signatory to the Bletchley Declaration, which establishes a collective understanding between 28 countries and the European Union on the opportunities and risks posed by AI, since 1 November 2023.
In November 2019, the Australian Government published its AI Ethics Principles (Ethics Principles), designed to ensure that AI is safe, secure and reliable and to:
- help achieve safer, more reliable and fairer outcomes for all Australians;
- reduce the risk of negative impact on those affected by AI applications; and
- assist businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
Definitions in Australia
Information not provided.
Prohibited activities in Australia
Information not provided.
Controls on generative AI in Australia
Information not provided.
User transparency in Australia
Information not provided.
Fairness / unlawful bias in Australia
Information not provided.
Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:
- Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
- Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
- Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
- Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
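Article 50 does not prescribe any particular marking technique for synthetic outputs. Purely as an illustrative sketch (assuming Python and the Pillow imaging library, with hypothetical metadata keys that are not mandated by the Act), one way a provider might attach a machine-readable marker to a generated image, and check for it later, is:

```python
# Illustrative sketch only: the EU AI Act does not prescribe this technique.
# Embeds a machine-readable text chunk into a PNG flagging it as AI-generated.
from PIL import Image, PngImagePlugin  # Pillow library assumed

def save_with_ai_marker(image: Image.Image, path: str, generator: str) -> None:
    """Save a PNG with metadata identifying it as artificially generated."""
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical key name
    meta.add_text("generator", generator)  # e.g. the model or service used
    image.save(path, "PNG", pnginfo=meta)

def is_marked_ai_generated(path: str) -> bool:
    """Read the marker back, i.e. the output remains detectable as generated."""
    with Image.open(path) as img:
        return img.info.get("ai_generated") == "true"
```

In practice, providers may rely on standardised provenance or watermarking schemes rather than ad hoc metadata; the sketch only illustrates the idea that the marking must be machine-readable and detectable.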
The Brazilian AI Strategy identifies as a key issue to be addressed that organisations and individuals playing an active role in the AI lifecycle should commit to transparency and responsible disclosure in relation to AI systems: providing relevant, state-of-the-art information that promotes general understanding of AI systems; making people aware of their interactions with AI systems; allowing those affected by an AI system to understand the results produced; and allowing those adversely affected by an AI system to contest its outcome (para. 5, page 2, Summary of the Brazilian Artificial Intelligence Strategy).
National laws specifically addressing AI have not been yet passed in Canada.
The Voluntary Code specifies under its Transparency principle that signatories should (with varying levels of obligation, as indicated, depending on whether the signatory is a developer or a manager of a generative AI system and on whether the system is available for public use):
- publish information on capabilities and limitations of the system;
- develop and implement a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking);
- publish a description of the types of training data used to develop the system, as well as measures taken to identify and mitigate risks; and
- ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.
The Privacy Principles specify that organizations that develop, provide, or use generative AI technologies must be open and transparent about the collection, use, and disclosure of personal information and the potential risks to individuals’ privacy.
Article 4 of the Chilean AI Bill establishes the main principles applicable to AI systems. Article 4 d) states as follows:
Transparency and explainability
AI systems shall be developed and used by providing adequate traceability and explainability, so that humans can clearly and accurately know and be aware that they are communicating or interacting with an AI system, in those cases where such knowledge would help them make decisions about their rights, safety or privacy, informing recipients, where appropriate, how the system has obtained its predictions or results, as well as the capabilities and limitations of such AI system.
Article 8 of the Chilean AI Bill establishes the rule of Transparency mechanisms referred to above.
Finally, Article 13 of the Chilean AI Bill establishes the following transparency obligation for Limited-Risk AI Systems: Providers and implementers shall try to ensure that these systems are designed and developed in such a way that the AI system, the provider itself or the user clearly, intelligibly and in a timely manner informs such natural persons exposed to an AI system that they are interacting with an AI system, except in situations where this is obvious due to the circumstances and context of use.
The GenAI Measures require service providers to employ effective measures to increase the transparency in generative AI services and to improve the accuracy and reliability of generated content, based on the types and characteristics of the services.
The Deep Synthesis Provisions require deep synthesis services providers to develop and disclose their management rules and platform conventions.
The Recommendation Algorithms Provisions require businesses to formulate and disclose the relevant principles, purposes and key operating mechanisms of their recommendation algorithm-based services. Users have the right to opt out of algorithmic recommendation services or to request that the service provider provide services not targeting their personal characteristics. Service providers must give users a convenient option to switch off algorithmic recommendation services; if a user does so, the provider must immediately cease providing those services.
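The Provisions do not specify how the opt-out must be implemented. As a minimal hypothetical sketch (Python, with invented names), a service might gate personalised ranking on a per-user flag and fall back to a non-personalised feed as soon as the user switches recommendations off:

```python
# Hypothetical sketch only: the Recommendation Algorithms Provisions do not
# prescribe an implementation. Personalised ranking is gated on an opt-out
# flag; once a user opts out, a non-personalised feed is served immediately.
from typing import Dict, List

class FeedService:
    def __init__(self) -> None:
        self.opted_out: Dict[str, bool] = {}  # user_id -> recommendations switched off

    def set_recommendations_enabled(self, user_id: str, enabled: bool) -> None:
        # The switch is meant to be easy to reach and to take effect at once.
        self.opted_out[user_id] = not enabled

    def get_feed(self, user_id: str, items: List[dict]) -> List[dict]:
        if self.opted_out.get(user_id, False):
            # Non-personalised fallback, e.g. reverse-chronological ordering.
            return sorted(items, key=lambda it: it["published_at"], reverse=True)
        return self._rank_personalised(user_id, items)

    def _rank_personalised(self, user_id: str, items: List[dict]) -> List[dict]:
        # Placeholder for the personalised recommendation model.
        return items
```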
Under the AIGC Labelling Measures, AI-generated content shall be marked with explicit labels and/or implicit labels, depending on the functionality of the underlying AI services and how the AI-generated content can be used:
- "Explicit labels" refer to"visible indicators—such as text, audio, or graphics—added to the AI-generated content or interactive interface, which can be clearly perceived by users."
- "Implicit labels" refer to "technical markers embedded in the data of AI-generated content files, which are not easily perceived by users."
Implicit labels should be embedded in the metadata of generated content files. Explicit labels should be added to AI-generated dialogue simulating natural human interaction, synthetic voices significantly altering personal characteristics, human face images generated or altered by AI, and immersive scenes, as well as other high-risk use cases.
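The Measures describe the two label types functionally rather than technically. The following is a hypothetical sketch (Python, with invented field names and wording) of how a provider might pair a user-visible notice (explicit label) with a machine-readable marker embedded in an exported file (implicit label):

```python
# Hypothetical sketch only: the AIGC Labelling Measures do not fix an encoding.
# A chat reply carries a visible notice (explicit label), and the exported file
# embeds a machine-readable marker in its metadata (implicit label).
import json

EXPLICIT_NOTICE = "[AI-generated content]"  # illustrative wording, not mandated

def label_chat_reply(reply_text: str) -> str:
    """Prepend a visible notice to AI-generated dialogue (explicit label)."""
    return f"{EXPLICIT_NOTICE} {reply_text}"

def export_with_implicit_label(content: str, path: str, service_name: str) -> None:
    """Write the content together with an embedded machine-readable marker."""
    payload = {
        "content": content,
        "metadata": {              # hypothetical field names
            "ai_generated": True,
            "service_provider": service_name,
        },
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False, indent=2)
```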
Criminalization of unauthorized deepfakes in France
Please note that the French Digital Space Law criminalizes the publication of deepfakes of other persons that modify, by means of AI, their image and/or voice without their consent. An offender may face imprisonment (up to one year) and financial penalties (up to 15,000 euros). Such penalties increase when deepfakes are shared through online platforms or involve sexually explicit content.
User transparency in France
Please note that the French Influencer Law imposes requirements on influencers to include warnings on images that have been modified using AI. Images modified using filters or AI must carry "retouched images" or "virtual images" labels.
In France, the CNCDH Opinion recommends extending the EU AI Act transparency obligations so that people are systematically informed when they are exposed to, or required to interact with, an AI system and, when they are the subject of a decision, that the decision is based in part or in full on algorithmic processing, even when undertaken by private organisations (currently in France, this information requirement relating to AI decision-making applies only to public bodies).
The Senate Report also flags multiple transparency-related issues, including (i) the "black box" / explainability problem, i.e. the difficulty of understanding model reasoning, which motivates transparency and interpretability requirements in policy frameworks, and (ii) deepfake watermarking and labelling, noting the growing policy push for watermarking or equivalent measures that enable users to recognise synthetic media.
User transparency in Greece
Certain transparency obligations under Law 4961/2022 apply to both public and private sector bodies regarding the AI systems they use. With regard to public sector bodies using AI systems, Article 4 permits the use of AI for decision-making or for issuing administrative acts that affect the rights of individuals or legal entities, provided that such use is explicitly authorized by law and safeguards are implemented to protect these rights. Article 6 requires public sector bodies to disclose, in an accessible manner, to the addressees of administrative acts and to any other affected individuals or legal entities, information about the operational parameters, capabilities and technical characteristics of AI systems, as well as the types of decisions or actions that these systems support.
Article 7 plays a crucial role in ensuring transparency by imposing obligations on contractors who develop AI systems for public sector bodies. These contractors must provide detailed information on the operation of AI systems, so that public sector bodies are able to fulfil their aforementioned obligations according to Article 6. Article 8 mandates public sector bodies to maintain an updated registry of their AI systems, which should be accessible to the National Transparency Authority upon request.
In the private sector, pursuant to Article 10, medium and large enterprises must maintain an updated electronic registry of AI systems used for profiling consumers or assessing employees, which includes information on the operational parameters, the number of individuals affected and safety measures in place. Additionally, businesses must establish ethical data usage policies, which form part of corporate governance disclosures, where applicable. Pursuant to Article 9, private entities using AI systems in employment decisions are required to inform employees or candidates in advance about the system’s role and decision-making parameters.
Laws specifically addressing AI have not yet been introduced in Hong Kong.
Transparency and Interpretability is the first principle within the Ethical AI Framework, and is described as fundamental. It requires organisations to be able to explain the decision-making processes of the AI applications to humans in a clear and comprehensible manner, and provides guidance on how to do this.
The GenAI Guideline emphasises that AI systems must fulfil transparency obligations (explainable AI), including regarding data sources and processing methods, and as regards personal data privacy (in accordance with Hong Kong's data protection law). Service Users should explicitly indicate whether generative AI has been involved in content generation or decision-making.
The transparency and interpretability ethical principle set out in the Guidance specifies that organisations should clearly and prominently disclose their use of AI and the relevant data privacy practices while striving to improve the interpretability of automated and AI-assisted decisions, and that transparency and interpretability are instrumental in demonstrating accountability as well as protecting individuals’ rights, freedom and interests in the use of AI. The Model Framework supplements this by stressing that an organisation's use of AI should be transparent to stakeholders, with the level of transparency varying depending on the stakeholder. It specifies: (i) clearly and prominently disclosing the use of AI systems (unless the use is obvious in the context/circumstances); (ii) providing adequate information on the purposes, benefits, limitations and effects of using AI systems in their products/services; and (iii) disclosing the results of risk assessment of the AI systems.
The Social Principles state that appropriate explanations should be given in suitable cases, for example as to how AI data is obtained and used. In addition, to allow people to understand the proposals made by AI and to make informed decisions, open dialogue may be required regarding the use, adoption and operation of AI.
User transparency in Latvia
The Law on the Artificial Intelligence Centre provides for the establishment of the Artificial Intelligence Centre. The Centre consists of a council with nine members (including representatives from ministries, universities and the private sector), a director acting as the executive body (an appointee is currently being sought), and a secretariat to be provided by the State Agency for Digital Development.
User transparency in Malta
Malta's National Framework highlights the importance of fostering transparency in AI development. The guidance document notes that the greater the potential consequences of an AI prediction or decision for individuals' lives, the higher the level of transparency that must be upheld about its use and impact. Organisations are expected to clearly inform users about any potential limitations or risks associated with their AI systems that may adversely impact them.
To promote transparency, the MDIA introduced a certification framework in October 2019. Through this initiative, applicants are provided with market recognition that their AI systems have been developed with transparency, fostering trust among users and consumers.
Laws specifically addressing AI have not been introduced in Mauritius yet. However, the Mauritian Data Protection Act 2017 provides that every controller or processor shall ensure that personal data are processed in a transparent manner in relation to any data subject.
The transparency principles identified in the Blueprint emphasise that information will be collected once with the citizen's consent, used responsibly, and protected with the highest privacy and security standards. Citizens will always retain transparency and control over how their personal data is used. Information will flow securely through the "whole-of-Government", ensuring that forms are pre-filled with verified data to simplify user interactions.
The Blueprint further provides that the Government of Mauritius will undertake a comprehensive update of its data protection and privacy laws to strengthen trust in the digital environment. This includes:
- Updating the Data Protection Act 2017 to realign it with the European Union General Data Protection Regulation (GDPR). It is to be noted that, as per the budgetary speech dated 5 June 2025, the Data Protection Act 2017 will be amended to fully align its provisions with international and regional standards, including the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.
- The enactment of regulations relating to data protection officers and e-privacy to cater for the protection of data processed through electronic communications networks;
- The Freedom of Information Act to cater for access to public information; and
- The revision of the constitutional right to privacy to cater for data protection and freedom of information.
Laws specifically addressing AI have not been introduced in Mexico yet.
Laws specifically addressing AI have not been introduced in New Zealand yet, so there are no specific AI transparency requirements. However, the OPC AI Guidance highlights the importance of transparency when using AI tools in order to mitigate the risk of breaching Information Privacy Principle 3 of the Privacy Act. The AI Guidance for Business provides a transparency checklist for businesses to disclose their AI use.
Laws specifically addressing AI have not been introduced in Nigeria yet.
The content on User transparency in the European Union applies in Norway.
Laws specifically addressing user transparency in relation to AI have not been introduced in Peru yet. One of the strategic axes of the National Strategy is the promotion of the adoption of ethical guidelines for a sustainable, transparent and replicable use of AI with clear definitions on responsibilities and data protection.
Laws specifically addressing AI have not yet been introduced in Singapore.
Explainability and transparency is one of the guiding principles in the Model Framework. It suggests specific practices such as:
- providing general information on whether AI is used in products and/or services;
- disclosing the manner in which an AI decision may affect an individual consumer; and
- considering the information needs of consumers as they go through the journey of interacting with AI.
The PDPC Guidelines state that, where an AI system is deployed to provide recommendations, predictions or decisions based on personal data, the organisation must comply with the consent and notification obligations under the PDPA, unless exceptions apply.
The Principles recommend providing explanations regarding:
- what data is used to make AI / data analytics-driven decisions;
- how the data affects such decisions; and
- the potential consequences of such decisions.
Certain notification, labelling and/or explanation obligations are required for high-impact AI and generative AI, as commented above.
Laws specifically addressing AI have not been introduced in Thailand yet.
Laws specifically addressing AI have not been introduced in Turkey yet. NAIS sets out an 'AI Principle' of 'Transparency and Explainability', as follows (page 61 of NAIS):
"Person(s) and organizations involved in the lifecycle of AI systems should ensure that the AI system is transparent and explainable in accordance with its context. People have the right to be informed of a decision that was made based on AI algorithms and to request explanatory information from public institutions and private sector organizations in such cases. It should be possible to explain to the end user and other stakeholders in non-technical terms and in plain language, why, how, where and for what purpose the decisions made based on automatic and algorithmic decisions, the data leading to said decisions and the information obtained from that data are used."
There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no binding obligations in relation to user transparency).
However, the AI Ethics Guide contains a principle of transparency which provides that:
- Developers should build systems whose failures can be traced and diagnosed.
- People should be told when significant decisions about them are being made by AI.
- Within the limits of privacy and the preservation of intellectual property, those who deploy AI Systems should be transparent about the data and algorithms they use.
- Responsible disclosures should be provided in a timely manner and provide reasonable justifications for AI Systems outcomes. This includes information that helps people understand outcomes, like key factors used in decision making.
The DIFC’s Data Protection Regulations also provide that AI Systems must be designed in accordance with the principle of transparency. In particular, AI Systems must ensure that processing of personal data is explainable to data subjects and other stakeholders in non-technical terms, with appropriate supporting evidence. There are also specific notice, evidence and information requirements imposed on Deployers or Operators in relation to applications and website services that employ AI Systems to process personal data.
There is no single statute addressing AI in the UK yet, so existing principles under, for example, the Data Protection Act 2018 and the UK GDPR should be considered. The principle of appropriate transparency and explainability identified in the White Paper specifies that AI systems should be appropriately transparent and explainable, on the basis that transparency can increase public trust, which can in turn be a significant driver of AI adoption.
In the context of AI, transparency may involve different types of disclosures, such as the use of a machine learning tool to make consequential decisions about consumers or the use of a chatbot to interact with consumers. The U.S. does not currently have a federal law that specifically mandates transparency in AI systems. Some laws of general applicability, like broad consumer protection laws, may require disclosures about AI to avoid consumer deception. On the state and local level, however, a patchwork of laws has developed requiring transparency in different situations. For example:
- California’s Generative AI: Training Data Transparency Act mandates disclosure of high-level details about the training data used in generative AI systems
- California’s TFAIA requires large “frontier” AI developers to publish transparency reports and annually update a public frontier AI safety framework describing how they assess and mitigate “catastrophic risk,” secure unreleased model weights, and respond to critical safety incidents
- Colorado’s AI Act requires developers and deployers of high-risk AI systems to maintain documentation that demonstrates reasonable care in preventing algorithmic discrimination
- Utah’s AI Policy Act mandates verbal or written disclosure when consumers interact with generative AI in regulated service contexts
- New York’s RAISE Act requires large developers to implement and publicly disclose a “safety and security protocol” and report any “safety incident” to mitigate risk
- New York City’s Local Law 144 requires employers to notify candidates when automated employment decision tools are used, and to publish the results of bias audits
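For context on the bias audits referenced above, the rules implementing Local Law 144 measure selection outcomes using selection rates and impact ratios (each category's selection rate divided by the rate of the most selected category). A simplified, illustrative calculation:

```python
# Simplified illustration of the impact-ratio calculation used in NYC Local
# Law 144 bias audits for selection outcomes; not a complete audit methodology.
from typing import Dict, Tuple

def impact_ratios(counts: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """counts maps a category to (number selected, total candidates)."""
    rates = {cat: sel / total for cat, (sel, total) in counts.items() if total > 0}
    highest = max(rates.values())  # selection rate of the most selected category
    return {cat: rate / highest for cat, rate in rates.items()}

# Example: {"A": (50, 100), "B": (30, 100)} yields A: 1.0 and B: 0.6.
```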
These efforts may reflect a growing consensus that transparency is key to responsible AI deployment, particularly in applications such as employment, healthcare, and consumer services. However, the scope and enforcement of transparency obligations vary significantly across jurisdictions, contributing to a fragmented compliance landscape.