Artificial Intelligence in Australia

User transparency

Information not provided.

Last modified 25 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
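The machine-readable marking obligation in the second bullet can be illustrated with a minimal sketch. This is illustrative only, with hypothetical field names; Article 50 requires that synthetic output be marked in a machine-readable, detectable format but does not prescribe any particular scheme:

```python
import hashlib
from datetime import datetime, timezone

def mark_synthetic_output(content: str, generator_id: str) -> dict:
    """Wrap AI-generated text with a machine-readable provenance record.

    Illustrative sketch only: the field names ("synthetic", "generator",
    etc.) are assumptions, not mandated by the EU AI Act.
    """
    provenance = {
        "synthetic": True,  # mechanical flag a downstream tool can test
        "generator": generator_id,  # hypothetical system identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
    }
    return {"content": content, "provenance": provenance}

def is_marked_synthetic(artifact: dict) -> bool:
    """Detect the marking mechanically, without human inspection."""
    return bool(artifact.get("provenance", {}).get("synthetic"))
```

A downstream deployer or platform could call `is_marked_synthetic` on received content to satisfy detectability, while the content hash ties the record to a specific output.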
Last modified 18 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 8 July 2025

The Brazilian AI Strategy identifies as a key issue that organisations and individuals playing an active role in the AI lifecycle should commit to transparency and responsible disclosure in relation to AI systems. They should provide relevant, state-of-the-art information that promotes a general understanding of AI systems, makes people aware of their interactions with AI systems, allows those affected by an AI system to understand the results produced, and allows those adversely affected by an AI system to contest its outcome (para. 5, page 2, Summary of the Brazilian Artificial Intelligence Strategy).

Last modified 31 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 23 July 2025

National laws specifically addressing AI have not yet been passed in Canada.

The Voluntary Code specifies under its Transparency principle that signatories should (with varying levels of obligation, as indicated, depending on whether a signatory is a developer or a manager of a generative AI system and whether the system is available for public use):

  • publish information on capabilities and limitations of the system;
  • develop and implement a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking);
  • publish a description of the types of training data used to develop the system, as well as measures taken to identify and mitigate risks; and
  • ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems.

The Privacy Principles specify that organizations that develop, provide, or use generative AI technologies must be open and transparent about the collection, use, and disclosure of personal information and the potential risks to individuals’ privacy.

Last modified 11 July 2025

Article 4 of the Chilean AI Bill establishes the main principles applicable to AI systems. Article 4 d) states as follows:

Transparency and explainability

AI systems shall be developed and used by providing adequate traceability and explainability, so that humans can clearly and accurately know and be aware that they are communicating or interacting with an AI system, in those cases where such knowledge would help them make decisions about their rights, safety or privacy, informing recipients, where appropriate, how the system has obtained its predictions or results, as well as the capabilities and limitations of such AI system.

Article 8 of the Chilean AI Bill establishes the rule of Transparency mechanisms referred to above.

Finally, Article 13 of the Chilean AI Bill establishes the following transparency obligation for Limited-Risk AI Systems: Providers and implementers shall try to ensure that these systems are designed and developed in such a way that the AI system, the provider itself or the user clearly, intelligibly and in a timely manner informs such natural persons exposed to an AI system that they are interacting with an AI system, except in situations where this is obvious due to the circumstances and context of use.

Last modified 23 July 2025

The GenAI Measures require service providers to employ effective measures to increase the transparency in generative AI services and to improve the accuracy and reliability of generated content, based on the types and characteristics of the services.

The Deep Synthesis Provisions require deep synthesis services providers to develop and disclose their management rules and platform conventions.

The Recommendation Algorithms Provisions specify that businesses must formulate and disclose the relevant principles, purposes and key operation mechanisms of their recommendation algorithm-based services. Users have the right to opt out of algorithmic recommendation services or to request that the service provider deliver services not targeting their personal characteristics. Service providers must give users a convenient option to switch off algorithmic recommendation services and, if a user does so, must immediately cease providing those services.

Under the AIGC Labelling Measures, AI-generated content shall be marked with explicit labels and/or implicit labels, depending on the functionality of the underlying AI services and how the AI-generated content can be used:

  • "Explicit labels" refer to "visible indicators—such as text, audio, or graphics—added to the AI-generated content or interactive interface, which can be clearly perceived by users."
  • "Implicit labels" refer to "technical markers embedded in the data of AI-generated content files, which are not easily perceived by users."

Implicit labels should be embedded in the metadata of generated content files. Explicit labels should be added to AI-generated dialogue simulating natural human interaction, synthetic voices significantly altering personal characteristics, human face images generated or altered by AI, and immersive scenes, as well as other high-risk use cases.
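The distinction between explicit and implicit labels can be sketched as follows. This is a minimal illustration with hypothetical field names; the AIGC Labelling Measures do not prescribe this exact format:

```python
def label_aigc(content: str, service_id: str) -> dict:
    """Attach both label types described by the AIGC Labelling Measures.

    Illustrative sketch only; the field names and notice wording are
    assumptions, not taken from the Measures themselves.
    """
    explicit_notice = "AI-generated content"  # visible indicator for users
    implicit_label = {
        # technical marker intended for file metadata, not user-visible
        "aigc": True,
        "service": service_id,
    }
    return {
        "display": f"[{explicit_notice}] {content}",  # explicit label
        "metadata": implicit_label,                   # implicit label
    }
```

In practice the implicit label would be written into the generated file's metadata (e.g. image or audio container metadata) rather than carried alongside the content as shown here.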

Last modified 26 January 2026

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 23 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 14 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 9 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 21 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 22 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 11 February 2026

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 22 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.

Criminalization of unauthorized deepfakes in France

Please note that the French Digital Space Law criminalizes publishing deepfakes that use AI to modify another person's image and/or voice without their consent. Offenders face imprisonment of up to one year and fines of up to 15,000 euros; penalties increase when deepfakes are shared through online platforms or involve sexually explicit content.

  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.

User transparency in France

Please note that the French Influencer Law imposes requirements on influencers to include warnings on images that have been modified using AI. Modified images using filters or AI must include "retouched images" or "virtual images" labels.

In France, the CNCDH Opinion recommends extending the EU AI Act's transparency obligations so that people are systematically informed when they are exposed to, or required to interact with, an AI system and, when they are the subject of a decision, that the decision is based in part or in full on algorithmic processing, even when undertaken by private organisations (currently in France, this information requirement on AI decision-making applies only to public bodies).

Also, the Senate Report flags several transparency-adjacent issues, including (i) the "black box" problem of explainability, i.e. the difficulty of understanding model reasoning, which motivates transparency and interpretability requirements in policy frameworks, and (ii) deepfake watermarking and labelling, noting the growing policy push for watermarking or equivalent measures that enable users to recognise synthetic media.

Last modified 5 February 2026

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 3 February 2026

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.

User transparency in Greece

Certain transparency obligations under Law 4961/2022 apply to both public and private sector bodies regarding the AI systems they use. With regard to public sector bodies using AI systems, Article 4 permits the use of AI for decision-making or for issuing administrative acts that affect the rights of individuals or legal entities, provided that such use is explicitly authorized by law and safeguards are implemented to protect those rights. Article 6 requires public entities to disclose, in an accessible manner, information to addressees of administrative acts and any other affected individuals or legal entities about the operational parameters, capabilities, and technical characteristics of AI systems, as well as the types of decisions or actions that these systems support.

Article 7 plays a crucial role in ensuring transparency by imposing obligations on contractors who develop AI systems for public sector bodies. These contractors must provide detailed information on the operation of AI systems, so that public sector bodies are able to fulfil their aforementioned obligations according to Article 6. Article 8 mandates public sector bodies to maintain an updated registry of their AI systems, which should be accessible to the National Transparency Authority upon request.

In the private sector, pursuant to Article 10, medium and large enterprises must maintain an updated electronic registry of AI systems used for profiling consumers or assessing employees, which includes information on the operational parameters, the number of individuals affected and safety measures in place. Additionally, businesses must establish ethical data usage policies, which form part of corporate governance disclosures, where applicable. Pursuant to Article 9, private entities using AI systems in employment decisions are required to inform employees or candidates in advance about the system’s role and decision-making parameters.

Last modified 19 July 2025

Laws specifically addressing AI have not yet been introduced in Hong Kong.  

Transparency and Interpretability is the first principle within the Ethical AI Framework, and is described as fundamental. It requires organisations to be able to explain the decision-making processes of the AI applications to humans in a clear and comprehensible manner, and provides guidance on how to do this.

The GenAI Guideline emphasises that AI systems must fulfil transparency obligations (explainable AI), including regarding data sources and processing methods, and as regards personal data privacy (in accordance with Hong Kong's data protection law). Service Users should explicitly indicate whether generative AI has been involved in content generation or decision-making.

The transparency and interpretability ethical principle set out in the Guidance specifies that organisations should clearly and prominently disclose their use of AI and the relevant data privacy practices while striving to improve the interpretability of automated and AI-assisted decisions, and that transparency and interpretability are instrumental in demonstrating accountability as well as protecting individuals’ rights, freedom and interests in the use of AI. The Model Framework supplements this by stressing that an organisation's use of AI should be transparent to stakeholders, with the level of transparency varying depending on the stakeholder. It specifies: (i) clearly and prominently disclosing the use of AI systems (unless the use is obvious in the context/circumstances); (ii) providing adequate information on the purposes, benefits, limitations and effects of using AI systems in their products/services; and (iii) disclosing the results of risk assessment of the AI systems.

Last modified 25 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 24 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 23 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 3 February 2026

The Social Principles state that appropriate explanations should be given in suitable cases, such as how AI data is obtained and used. Also, to allow people to understand AI proposals and make informed decisions, open dialogue may be required regarding the use, adoption and operation of AI.

Last modified 31 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.

User transparency in Latvia

The Law on the Artificial Intelligence Centre provides for the establishment of the Artificial Intelligence Centre. The Centre consists of a council with nine members (including representatives from ministries, universities, and the private sector), a director acting as the executive body (currently being recruited), and a secretariat provided by the State Agency for Digital Development.

Last modified 14 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 24 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 23 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.

User transparency in Malta

Malta’s National Framework highlights the importance of fostering transparency in AI development. The guidance document notes that when AI systems pose significant consequences to individuals’ lives, a higher level of transparency must be upheld: the greater the impact of an AI prediction or decision, the more essential transparency about its use and effect becomes. Organisations are expected to clearly inform users about any potential limitations or risks associated with their AI systems that may adversely impact them.

To promote transparency, the MDIA introduced a certification framework in October 2019. Through this initiative, applicants are provided with market recognition that their AI systems have been developed with transparency, fostering trust among users and consumers.

Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in Mauritius yet. However, the Data Protection Act 2017 provides that every controller or processor shall ensure that personal data are processed in a transparent manner in relation to any data subject.

The principles of transparency identified in the Blueprint emphasise that information will be collected once with the citizen’s consent, used responsibly, and protected with the highest privacy and security standards. Citizens will always retain transparency and control over how their personal data is used. Information will flow securely through the “whole-of-Government”, ensuring that forms are pre-filled with verified data to simplify user interactions.

The Blueprint further provides that the Government of Mauritius will undertake a comprehensive update of its data protection and privacy laws to strengthen trust in the digital environment. This includes:

  • Updating the Data Protection Act 2017 to realign it with the European Union General Data Protection Regulation (GDPR). It is to be noted that, as per the budget speech of 5 June 2025, the Data Protection Act 2017 will also be amended to fully align its provisions with international and regional standards, including the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data;
  • The enactment of regulations relating to data protection officers and e-privacy to cater for the protection of data processed through electronic communications networks;
  • The Freedom of Information Act to cater for access to public information; and
  • The revision of the constitutional right to privacy to cater for data protection and freedom of information.
Last modified 26 June 2025

Laws specifically addressing AI have not been introduced in Mexico yet.

Last modified 29 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 23 July 2025

Laws specifically addressing AI have not been introduced in New Zealand yet, so there are no specific AI transparency requirements. However, the OPC AI Guidance highlights the importance of transparency when using AI tools in order to mitigate the risk of breaching Information Privacy Principle 3 of the Privacy Act. The AI Guidance for Business provides a transparency checklist for businesses to disclose their AI use.

Last modified 14 July 2025

Laws specifically addressing AI have not been introduced in Nigeria yet.

Last modified 17 June 2025

The content on User transparency in the European Union applies in Norway.

Last modified 9 October 2025

Laws specifically addressing user transparency in relation to AI have not been introduced in Peru yet. One of the strategic axes of the National Strategy is the promotion of the adoption of ethical guidelines for a sustainable, transparent and replicable use of AI with clear definitions on responsibilities and data protection.

Last modified 20 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 23 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 22 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Singapore.

Explainability and transparency are among the guiding principles in the Model Framework, which suggests specific practices such as:

  • providing general information on whether AI is used in products and/or services;
  • disclosing the manner in which an AI decision may affect an individual consumer; and
  • considering the information needs of consumers as they go through the journey of interacting with AI.

The PDPC Guidelines state that, where an AI system is deployed to provide recommendations, predictions or decisions based on personal data, the organisation must comply with the consent and notification obligations under the PDPA, unless exceptions apply.

The Principles recommend providing explanations regarding:

  • what data is used to make AI/data analytics-driven decisions;
  • how the data affects such decisions; and
  • the potential consequences of such decisions.
Last modified 28 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 29 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 14 July 2025

Certain notification, labelling and/or explanation obligations apply to high-impact AI and generative AI, as noted above.

Last modified 29 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 21 July 2025

Article 50 of the EU AI Act sets out transparency obligations for providers and deployers of certain AI systems, including the following:

  • Providers of AI systems must ensure that natural persons using an AI system are informed that they are interacting with an AI system, unless this is obvious to the natural person (this obligation excludes AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences).
  • Providers of AI systems must ensure that the synthetic outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated (excluding AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences) and must process data in accordance with other relevant EU laws.
  • Deployers of emotion recognition or biometric categorisation systems must inform the affected natural persons.
  • Deployers of AI systems that generate or manipulate image, audio or video content constituting deep fakes must disclose that the content has been artificially generated or manipulated.
Last modified 7 July 2025

Laws specifically addressing AI have not been introduced in Thailand yet. 

Last modified 25 July 2025

Laws specifically addressing AI have not been introduced in Turkey yet. NAIS sets out an 'AI Principle' of 'Transparency and Explainability', as follows (page 61 of NAIS):

"Person(s) and organizations involved in the lifecycle of AI systems should ensure that the AI system is transparent and explainable in accordance with its context. People have the right to be informed of a decision that was made based on AI algorithms and to request explanatory information from public institutions and private sector organizations in such cases. It should be possible to explain to the end user and other stakeholders in non-technical terms and in plain language, why, how, where and for what purpose the decisions made based on automatic and algorithmic decisions, the data leading to said decisions and the information obtained from that data are used."
Last modified 30 July 2025

There is no unified federal law or emirate level law in the UAE that has a primary focus on regulating AI (and therefore no binding obligations in relation to user transparency).

However, the AI Ethics Guide contains a principle of transparency which provides that:

  • Developers should build systems whose failures can be traced and diagnosed.
  • People should be told when significant decisions about them are being made by AI.
  • Within the limits of privacy and the preservation of intellectual property, those who deploy AI Systems should be transparent about the data and algorithms they use.
  • Responsible disclosures should be provided in a timely manner and provide reasonable justifications for AI Systems outcomes. This includes information that helps people understand outcomes, like key factors used in decision making.

The DIFC’s Data Protection Regulations also provide that AI Systems must be designed in accordance with the principle of transparency. In particular, AI Systems must ensure that processing of personal data is explainable to data subjects and other stakeholders in non-technical terms, with appropriate supporting evidence. There are also specific notice, evidence and information requirements imposed on Deployers or Operators in relation to applications and website services that employ AI Systems to process personal data.

Last modified 4 August 2025

There is no single statute addressing AI in the UK yet; existing principles under, for example, the Data Protection Act 2018 and the UK GDPR should be considered. The principle of appropriate transparency and explainability identified in the White Paper specifies that AI systems should be appropriately transparent and explainable, on the basis that transparency can increase public trust, which can be a significant driver of AI adoption.

Last modified 23 February 2026

In the context of AI, transparency may involve different types of disclosures, such as the use of a machine learning tool to make consequential decisions about consumers or the use of a chatbot to interact with consumers. The U.S. does not currently have a federal law that specifically mandates transparency in AI systems. Some laws of general applicability, like broad consumer protection laws, may require disclosures about AI to avoid consumer deception. On the state and local level, however, a patchwork of laws has developed requiring transparency in different situations. For example:

  • California’s Generative AI: Training Data Transparency Act mandates disclosure of high-level details about the training data used in generative AI systems
  • California’s TFAIA requires large “frontier” AI developers to publish transparency reports and annually update a public frontier AI safety framework describing how they assess and mitigate “catastrophic risk,” secure unreleased model weights, and respond to critical safety incidents
  • Colorado’s AI Act requires developers and deployers of high-risk AI systems to maintain documentation that demonstrates reasonable care in preventing algorithmic discrimination
  • Utah’s AI Policy Act mandates verbal or written disclosure when consumers interact with generative AI in regulated service contexts
  • New York’s RAISE Act requires large developers to implement and publicly disclose a “safety and security protocol” and report any “safety incident” to mitigate risk
  • New York City’s Local Law 144 requires employers to notify candidates when automated employment decision tools are used, and to publish the results of bias audits

These efforts may reflect a growing consensus that transparency is key to responsible AI deployment, particularly in applications such as employment, healthcare, and consumer services. However, the scope and enforcement of transparency obligations vary significantly across jurisdictions, contributing to a fragmented compliance landscape.

Last modified 10 March 2026
