Artificial Intelligence in Australia

Regulatory guidance / voluntary codes

On 23 May 2025, the Australian Signals Directorate's Australian Cyber Security Centre, together with its counterparts in the US, UK and New Zealand, released guidance on best practices for AI Data Security. The guidance sets out key data security risks in AI use and provides a list of best-practice guidelines, including, but not limited to, sourcing reliable data and tracking data provenance, verifying and maintaining data integrity during storage and transport, and encrypting data.
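By way of illustration only, the following minimal Python sketch shows one way an organisation might implement two of the practices the guidance highlights: recording data provenance and verifying dataset integrity with cryptographic hashes. The file layout, manifest format and function names are our own assumptions; the guidance does not prescribe any particular tooling.

```python
# Illustrative sketch only: record where training data came from and verify it
# has not changed in storage or transport. Paths and the manifest schema are
# hypothetical; the joint guidance does not mandate any specific implementation.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_provenance(data_dir: Path, source: str, manifest_path: Path) -> None:
    """Write a manifest recording each dataset file's origin and hash."""
    manifest = {
        "source": source,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "files": {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))},
    }
    manifest_path.write_text(json.dumps(manifest, indent=2))


def verify_integrity(data_dir: Path, manifest_path: Path) -> bool:
    """Re-hash the files and compare them against the stored manifest."""
    manifest = json.loads(manifest_path.read_text())
    return all(
        sha256_of(data_dir / name) == expected
        for name, expected in manifest["files"].items()
    )


if __name__ == "__main__":
    data = Path("training_data")        # hypothetical dataset directory
    manifest = Path("provenance.json")  # hypothetical manifest location
    record_provenance(data, source="vendor X, licensed May 2025", manifest_path=manifest)
    print("integrity OK" if verify_integrity(data, manifest) else "integrity FAILED")
```

Encryption of the data at rest and in transit, the third practice mentioned above, would sit alongside such checks rather than replace them.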

In March 2025, the Commonwealth Ombudsman released an Automated Decision Making Better Practice Guide. The Guide is intended to inform the selection, adoption and use of AI by government agencies so as to ensure compliance with Australian laws, including administrative law. Appendix A of the Guide contains a comprehensive checklist that may assist both government and non-government entities in decision-making around their use of AI.

Also in March 2025, the Australian Government Digital Transformation Agency released AI and Cyber Risk model clauses for procuring or developing AI models.

On 21 October 2024, the Office of the Australian Information Commissioner (OAIC), the national regulator for privacy and freedom of information, released two guidance documents relating to AI: 

  1. Guidance on privacy and the use of commercially available AI products – This guidance document is intended to assist organisations deploying and using commercially available AI systems in complying with their privacy obligations. It specifies that privacy obligations apply to any personal information input into an AI system as well as to any output generated by the AI system that contains personal information. The OAIC also recommends that personal information not be entered into publicly available generative AI tools (a crude illustrative pre-submission screen is sketched after this list).
  2. Guidance on privacy and developing and training generative AI models – This guidance document recommends that AI developers take reasonable steps to ensure accuracy in generative AI models. With respect to privacy obligations, it notes that personal information includes inferred, incorrect or artificially generated information produced by AI models (such as hallucinations and deepfakes). It also reminds developers that it is not automatically lawful to use publicly available or accessible data to train or fine-tune generative AI models or systems.
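To make the OAIC's recommendation against entering personal information into public generative AI tools concrete, here is a deliberately crude Python sketch of a pre-submission screen. The regular expressions are hypothetical examples and far from exhaustive; a pattern filter is no substitute for proper data-handling processes, and nothing in this sketch is drawn from the OAIC guidance itself.

```python
# Crude illustration only: block prompts containing obvious personal information
# before they reach a publicly available generative AI tool. The patterns are
# hypothetical and nowhere near exhaustive.
import re

PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AU mobile number": re.compile(r"\b(?:\+61|0)4\d{2}[ -]?\d{3}[ -]?\d{3}\b"),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the kinds of likely personal information found in the prompt."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(prompt)]


prompt = "Summarise this complaint from jane.doe@example.com, ph 0412 345 678."
findings = screen_prompt(prompt)
if findings:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
else:
    print("No obvious personal information detected; human review is still required.")
```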

In September 2024, Australia's Department of Industry, Science and Resources published a Proposal Paper for introducing mandatory guardrails for AI in high-risk settings (Proposal Paper introducing mandatory guardrails). The paper identifies two broad categories of high-risk AI: (1) AI systems with known or foreseeable proposed uses that are considered high-risk; and (2) advanced, highly capable general-purpose AI (GPAI) models that can be used, or adapted for use, for a variety of purposes, whether directly or integrated into other systems, and whose possible applications and risks cannot all be foreseen.

With respect to the first category listed above, the principles that organisations must consider in designating an AI system as high-risk are the risk of adverse impacts to:

  1. an individual's human rights, health or safety, and legal rights (e.g. legal effects, defamation or similarly significant effects on an individual);
  2. groups of individuals or collective rights of cultural groups; and
  3. the broader Australian economy, society, environment and rule of law,

as well as the severity and extent of the adverse impacts outlined above.

For AI designated as high-risk, the Proposal Paper introducing mandatory guardrails sets out the following proposed mandatory guardrails for organisations developing or deploying such systems (page 35):

  1. "Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance;
  2. Establish and implement a risk management process to identify and mitigate risks;
  3. Protect AI systems, and implement data governance measures to manage data quality and provenance;
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed;
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight;
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI generated content;
  7. Establish processes for people impacted by AI systems to challenge use or outcomes;
  8. Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks;
  9. Keep and maintain records to allow third parties to assess compliance with guardrails; and 
  10. Undertake conformity assessments to demonstrate and certify compliance with guardrails." 

The definition of high-risk AI and the guardrails are expected to be refined based on feedback provided by Australian stakeholders on the Proposal Paper introducing mandatory guardrails.

On 5 September 2024, the Australian Government released the Voluntary AI Safety Standard, which sets out substantially similar guardrails to those in the Proposal Paper introducing mandatory guardrails, with the exception of guardrail 10, which instead states:

"Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness."

Whereas the Proposal Paper introducing mandatory guardrails applies to high-risk AI, the Voluntary AI Safety Standard sets out voluntary guidelines for developers and deployers of AI generally. Among other things, the guidelines are intended to protect people and communities from harm, avoid reputational and financial risks to organisations, increase organisational and community trust and confidence in AI systems, services and products, and align with legal obligations and expectations in Australia.

On 1 September 2024, the Policy for the Responsible Use of AI in Government (Policy) came into effect, aiming to empower the Australian Government to safely, ethically and responsibly engage with AI, strengthen public trust in the government's use of AI, and adapt to technological and policy changes over time. 

In particular, the Policy requires government agencies to: 

  • designate accountability for compliance with the Policy to certain public officials; and
  • publish and keep updated an AI transparency statement. 

Additional recommendations include fundamental AI training for all staff, additional training for staff with roles or responsibilities in connection with AI, understanding and recording where and how AI is being used within agencies, integrating AI considerations into existing frameworks, participating in the Australian Government's AI assurance framework, monitoring AI use cases and keeping up to date with policy changes. 
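As a purely hypothetical sketch of the recommendation to understand and record where and how AI is being used within an agency, the following Python snippet models one possible entry in an internal AI use-case register. The field names are our own; the Policy does not prescribe any schema.

```python
# Hypothetical AI use-case register entry; the Policy recommends recording AI
# use but does not prescribe any particular format or fields.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AIUseCase:
    name: str
    business_unit: str
    purpose: str
    accountable_official: str   # the Policy requires designated accountability
    uses_personal_information: bool
    human_oversight: str        # how staff can review or override outputs
    risks_and_mitigations: list[str] = field(default_factory=list)


register = [
    AIUseCase(
        name="Correspondence triage assistant",
        business_unit="Client Services",
        purpose="Suggest routing categories for incoming correspondence",
        accountable_official="Chief Information Officer",
        uses_personal_information=True,
        human_oversight="Staff confirm or correct every suggested category",
        risks_and_mitigations=["Mis-routing", "Quarterly accuracy sampling"],
    ),
]

# A register like this could feed the agency's published AI transparency statement.
print(json.dumps([asdict(u) for u in register], indent=2))
```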

Australia is a signatory to the Bletchley Declaration of 1 November 2023, which establishes a collective understanding among 28 countries and the European Union of the opportunities and risks posed by AI.

In November 2019, the Australian Government published its AI Ethics Principles (Ethics Principles), designed to ensure that AI is safe, secure and reliable and to:

  • help achieve safer, more reliable and fairer outcomes for all Australians;
  • reduce the risk of negative impact on those affected by AI applications; and
  • assist businesses and governments to practise the highest ethical standards when designing, developing and implementing AI.
Last modified 25 July 2025

AI compliance in the European Union

In order to ensure the consistent, effective and uniform application of the EU AI Act across the European Union, the European Commission has adopted several guidelines on provisions of the Act. These guidelines are non-binding, since only the Court of Justice of the European Union has authoritative interpretation powers.

Further guidelines on high-risk AI systems are expected and are currently under consultation. The Commission is also expected to provide harmonized standards and common specifications for both high-risk AI systems and general-purpose AI models, giving organizations further tools that carry a presumption of conformity.

The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.

The Commission has also released the first draft of its Code of Practice on Transparency of AI-Generated Content. The Code is planned to be finalized by June 2026. If approved, the final code will be a voluntary tool for providers and deployers to demonstrate compliance with their obligations for marking and labelling AI-generated content under the EU AI Act.

Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, may adopt voluntary codes of conduct (Article 95) in order to apply, on a non-binding basis, technical solutions and industry best practices. Accordingly, the AI Office is expected to issue further codes of conduct for this purpose, distinct from the GPAI Code of Practice and the Code of Practice on Transparency.

To support organisations in identifying and implementing AI literacy initiatives, the Commission has launched a repository of AI literacy practices. The repository was updated in November 2025 to improve the searchability of practices.

In May 2024, the Council of Europe published a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, whilst being conducive to technological progress and innovation.

Last modified 18 July 2025

The Brazilian AI Strategy, published in 2021 and accompanied by a Summary of the Brazilian Artificial Intelligence Strategy published the same year, states that AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being. It also states that AI systems should be designed in a manner that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair society. The strategy further makes clear that organisations and individuals who play an active role in the AI lifecycle should commit to transparency and responsible disclosure in relation to AI systems, and that AI systems should operate in a robust, safe and protected manner.

Last modified 31 July 2025

AI compliance in Bulgaria

In December 2020, the Council of Ministers in Bulgaria adopted a Concept for the Development of Artificial Intelligence in Bulgaria until 2030 (Concept). The Concept is a national strategic policy document outlining the main prerequisites and challenges the country faces in developing and implementing AI systems. It also comments on the economic sectors where the implementation of AI would be beneficial (e.g. science, education, public administration, electronic healthcare), and sets out the main strategic objectives to be pursued. The Concept signals an intention to facilitate and encourage business and research activities in the field of AI and to stimulate the natural course of the technology's development in Bulgaria by limiting the administrative burden. It is noteworthy, however, that the document is dated and was adopted by the previous administration, whose policy may not necessarily be continued.

Last modified 23 July 2025

In September 2023, the Canadian Minister of Innovation, Science and Industry announced a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (Voluntary Code) to provide Canadian companies with common standards until formal regulation is in effect, enabling them to demonstrate voluntarily that they are developing and using generative AI systems responsibly. The Voluntary Code sets out measures that companies commit to adhere to when they nominate themselves as signatories, relating to the following:

  • Accountability – Firms understand their role with regard to the systems they develop or manage, put in place appropriate risk management systems, and share information with other firms as needed to avoid gaps.
  • Safety – Systems are subject to risk assessments, and mitigations needed to ensure safe operation are put in place prior to deployment.
  • Fairness and Equity – Potential impacts with regard to fairness and equity are assessed and addressed at different phases of development and deployment of the systems.
  • Transparency – Sufficient information is published to allow consumers to make informed decisions and for experts to evaluate whether risks have been adequately addressed.
  • Human Oversight and Monitoring – System use is monitored after deployment, and updates are implemented as needed to address any risks that materialize.
  • Validity and Robustness – Systems operate as intended, are secure against cyber attacks, and their behaviour in response to the range of tasks or situations to which they are likely to be exposed is understood.

The level of obligation in respect of each measure varies depending on whether a signatory is a developer or a manager of a generative AI system, and on whether the system is available for public use.

In December 2023, Canadian privacy regulators announced 'Principles for responsible, trustworthy and privacy-protected generative AI technologies' (Privacy Principles) to help organizations that are developing, providing, or using generative AI technologies apply key Canadian privacy principles:

  • Legal Authority and Consent – Organizations should ensure they have legal authority for collecting and using personal information (and when consent is the legal authority, it should be valid and meaningful).
  • Appropriate Purposes – Organizations should only collect, use, and disclose personal information for appropriate purposes.
  • Necessity and Proportionality – Organizations should establish the necessity and proportionality of using generative AI, and of personal information within generative AI, to achieve the intended purposes.
  • Openness – Organizations should be open and transparent about the collection, use, and disclosure of personal information and the potential risks to individuals’ privacy.
  • Accountability – Organizations should establish accountability for compliance with privacy legislation and principles and make AI tools explainable.
  • Individual Access – Organizations should facilitate individuals’ right to access their personal information by developing procedures that enable it to be meaningfully exercised.
  • Limiting Collection, Use, and Disclosure – Organizations should limit the collection, use, and disclosure of personal information to only what is needed to fulfil the explicitly specified, appropriate identified purposes.
  • Accuracy – Organizations should ensure that personal information is as accurate, complete, and up-to-date as is necessary for purposes for which it is to be used.
  • Safeguards – Organizations should establish safeguards to protect personal information and mitigate potential privacy risks.
Last modified 11 July 2025

In May 2024, the Chilean government introduced the latest version of its AI National Policy (Policy), setting out objectives and priority actions that the country must undertake over the following decade. The Policy centres on three cross-cutting principles:

  1. Ethical and responsible use of people-centred AI.
  2. AI serving sustainable development.
  3. AI in international and multi-stakeholder articulation.

The Policy is also structured along three axes:

  • Enabling Factors: This refers to the structural elements that enable the existence and deployment of AI, such as the development of tools, technological infrastructure and data.
  • Development and Adoption: This covers the space where AI is created and deployed, i.e. those who generate, provide and demand its different applications and techniques, including academia, the state, the productive sector and civil society.
  • Governance and Ethics: This addresses the new discussions and challenges that have arisen regarding the interaction between people and AI. It includes elements to advance in the development, use and implementation of AI systems, to protect people from their potential impacts and to support the social, economic and environmental transformations associated with these systems.
Last modified 23 July 2025

In addition to enacting laws specifically pertaining to AI-related technologies, the PRC also regulates AI through recommended national standards and regulatory guidance.

Last modified 26 January 2026

AI compliance in the Czech Republic

The Czech Government has approved the 'National Strategy for Artificial Intelligence of the Czech Republic 2030', which provides a strategic framework for the development and use of trusted AI in the Czech Republic and which will be put into practice through an Action Plan. The Action Plan will include specific initiatives such as grant programmes, manuals for businesses, retraining courses and the introduction of new AI solutions. Its current form has been prepared by the Ministry of Industry and Trade (MIT), in particular with the promoters of each key area, and will be submitted to the government as part of the Digital Czech Republic Implementation Plans.

One of the areas of focus is the legal and ethical aspects of AI, in relation to which the government of the Czech Republic commits to, among other things:

  • Support the development of non-binding soft-law tools for artificial intelligence in the private and public spheres to ensure its ethical use;
  • Provide advice and recommendations on human rights standards and ethical approaches to the use of AI, particularly in public administration; and
  • Consider the environmental impacts of AI systems in national policies, including an emphasis on energy and, where appropriate, resource efficiency of AI systems.
Last modified 9 July 2025

AI compliance in Denmark

In March 2019, the Danish government – led by the Ministry of Finance and the Agency for Digital Government – launched Denmark’s first national AI strategy. The strategy emphasizes an ethical and human-centered basis for AI, while highlighting the potential of AI to support research, innovation, and public services.

In January 2024, the Agency for Digital Government published three practical guides aimed at public authorities, businesses, and citizens. These guides support the responsible use of generative AI by offering advice on safe and ethical practices, including transparency, data protection, legal compliance, and digital literacy. The guides are designed to evolve over time, incorporating input from both public and private stakeholders.

On 8 February 2024, the Danish Government and all parties in the Danish Parliament (Folketinget) adopted a digitalisation strategy introducing two key AI initiatives: strategic efforts for AI and a regulatory sandbox for AI.

  • The strategic efforts aim to establish an ambitious and responsible framework for AI use in Denmark. This includes investments in Danish language resources for training language models and the potential development of a Danish language model.
  • The regulatory sandbox for AI aims to create a clear legal framework for the use of AI, which is considered a prerequisite for responsible deployment.

In March 2024, the regulatory sandbox for AI was officially launched, jointly managed by the Danish Data Protection Agency and the Agency for Digital Government. It provides free, practical guidance on GDPR and selected aspects of the EU AI Act to both public and private actors. The sandbox supports responsible and lawful AI development by helping organizations clarify legal requirements early and bring AI projects to market responsibly.

Last modified 21 July 2025

AI compliance in Estonia

In Estonia, the Ministry of Economic Affairs and Communications, the Ministry of Justice and Digital Affairs, and the Ministry of Education and Research have published an 'Action Plan on Artificial Intelligence for the years 2024–2026' (Action Plan). The Action Plan serves as Estonia's national AI strategy within the framework of the EU's coordinated action plan on AI. It provides an overview of the activities planned for the coming years to further increase the adoption of AI-based solutions in Estonia, aiming to enhance the personalisation, user-friendliness and accessibility of e-services, and the efficiency of the state. It covers the development and/or implementation of AI in the public and private sectors, as well as in education and research, along with the legislative changes needed for 2024–2026.

For example, there is a plan to amend the law to provide general rules for issuing automatic administrative acts and performing automatic administrative operations. Although some Estonian laws, such as the Taxation Act, the Environmental Charges Act and the Unemployment Insurance Act, already enable the issuing of automatic administrative acts, Estonia lacks a uniform regulation. Additionally, the Action Plan provides that the EU AI Act shall be transposed into Estonian law where relevant.

According to the Estonian Government's action plan and coalition agreement, a draft regulation on artificial intelligence is planned for the fourth quarter of 2025. In addition, principles for the implementation of artificial intelligence in healthcare are planned for the second quarter of 2026.

Last modified 22 July 2025

AI compliance in Finland

In Finland, national implementation of the AI Act is still underway, so few official guidelines are available yet. However, some reports and guidelines have been published, and these are presented below.

In 2021, the Non-Discrimination Ombudsman issued its observations on the effects of artificial intelligence on equality. The Ombudsman discusses both the risks of discrimination related to the use of artificial intelligence and the potential of AI to promote equality, and raises the need for proactive impact assessment and supervision in the use of artificial intelligence. As the use of AI systems and algorithmic decision-making continues to grow, so does the significance of equality considerations in their use.

In 2022, the Ministry of Economic Affairs and Employment of Finland issued a report called Finland as a leader in the twin transition – Final report of the Artificial Intelligence 4.0 programme. The report states that the vision of the programme is to make Finland a winner in the twin transition, i.e. a simultaneous digital and green transition. To achieve this vision, three areas of development were identified: (i) strengthening high-level research on key technologies, as well as development activities and investments; (ii) increasing the adoption of digital capabilities and technologies that accelerate the twin transition in industrial SMEs; and (iii) making Finland an international frontrunner in the twin transition.

In 2023, the Ministry of Economic Affairs and Employment of Finland issued a study on the Impacts of the EU's Proposed Regulation of Artificial Intelligence on the Business Environment of Finnish Companies. The study focused on assessing the regulatory burden, changes in business opportunities and the clarity of the requirements of the AI Act.

Further, in May 2025, the Finnish Data Protection Ombudsman issued guidelines on ensuring data protection in the development and use of AI systems. The guidelines explain how organisations can ensure that personal data is processed lawfully in AI systems. They are not exhaustive, and organisations are still expected to assess the requirements arising from legislation on a case-by-case basis.

Last modified 22 July 2025

AI compliance in France

In France, many governmental reports and guidelines from independent authorities have been issued on AI. The main ones shaping the AI framework are presented below.

In September 2017, Deputy Cédric Villani was tasked with leading a mission to shape a French and European AI strategy. The mission's findings were presented in a report named 'Making sense of artificial intelligence', known as the 'Villani Report', which covers various aspects of AI, including economic policy, research, employment, ethics and social cohesion. Five annexes focus on the risks and opportunities of AI in specific areas: education, health, agriculture, transport, and defence and security. The report led the French government to build a national AI strategy in 2018, which was last updated on 7 February 2025.

In June 2020, the French Banking Authority (ACPR) issued a study on the 'Governance of artificial intelligence algorithms in the financial sector' (ACPR AI Governance Study). This study highlights the need for AI algorithm evaluation and governance.

On 7 April 2022, the French national advisory commission on human rights (CNCDH) issued an 'Opinion on the impact of AI on fundamental rights' (CNCDH Opinion), which urges public authorities to establish a strong legal framework for AI. The document highlights how algorithms can perpetuate human biases and recommends measures for ensuring algorithmic transparency and fairness.

On 13 March 2024, the French Artificial Intelligence Commission (governmental body) published a report 'AI: our ambition for France' containing twenty-five recommendations to make France a major player in the AI technological revolution, notably by facilitating access to personal data (in particular health data) and adopting an “AI exception” for public research.

On 28 November 2024, the French Senate's Office for the Evaluation of Scientific and Technological Choices (OPECST) issued a wide-ranging report called 'ChatGPT, and after? Assessment and perspectives of artificial intelligence' (the Senate Report). The report traces the evolution and mechanics of AI (from symbolic systems to deep learning and Transformer-based 'foundation models'), assesses economic, societal, cultural and security implications, benchmarks France's national AI strategy against roughly twenty other jurisdictions, and surveys emerging models of national, EU and global governance. It culminates in 18 recommendations, including several to be advanced at forthcoming international AI fora, emphasising innovation, risk management, transparency and democratic oversight to ensure AI serves the public interest while safeguarding sovereignty and fundamental rights.

The French national data protection authority (CNIL) has issued non-binding AI fact sheets (CNIL AI Fact Sheets) that focus on the development phase of AI systems and models and highlight the need to comply with privacy requirements at all stages of development. The CNIL has also built tools and best practices for using AI tools and models in compliance with privacy laws, e.g. a risk assessment to be carried out before using an AI system (CNIL AI Risk Assessment). In addition, the CNIL has published guidance on the use of generative AI systems, with a related Q&A (CNIL Generative AI Guidance), which aims to help organisations deploy such systems responsibly.

The French agency for the security of IT systems (ANSSI) published guidance on 29 April 2024 setting out security recommendations for generative AI systems (ANSSI Generative AI Security Guidance). The guidance sets out good practices to implement across the three stages of the generative AI lifecycle: training; integration and deployment; and operational production. These practices should be adapted to the choice of providers (for hosting, training, testing, etc.), the sensitivity of the data used, and the criticality of the intended use case of the AI system.

On 12 July 2024, the French Competition Regulator (Autorité de la Concurrence) issued an opinion on the competitive functioning of the generative artificial intelligence sector. The opinion focuses on strategies by major digital players to consolidate market power in the design, training and specialisation of large language models. Following this opinion, the Authority announced that it was opening an ex officio investigation into the competitive functioning of the conversational agents (or chats) sector. The Authority also intends to examine emerging issues, particularly those linked to the use of conversational agents in the online retail sector (also referred to as 'agentic commerce'), by launching a public consultation in 2026.

The French High Council for Literary and Artistic Property (CSPLA), which acts as an observatory for the exercise and enforcement of copyright and neighbouring rights, was tasked with clarifying the EU AI Act's transparency requirements for AI model providers (Article 53). Its findings were made public in a report published on 11 December 2024 (CSPLA Report).

In 2025, CIGREF (a non-profit association bringing together major French companies and administrations) issued a set of five guides to help large organisations adopt AI responsibly and in compliance with the EU AI Act, offering practical guidance on key obligations, governance structures, legal issues and contractual impacts. The guides also provide best practices and enterprise feedback on generative AI adoption, highlighting organisational readiness, risks and responsible use patterns.

Last modified 5 February 2026

AI compliance in Greece

The High-Level Advisory Committee on Artificial Intelligence, established in November 2023 under the supervision of the Greek Prime Minister, developed Greece’s national AI strategy entitled 'A Blueprint for Greece's AI Transformation' (AI Strategy) in November 2024.

The AI Strategy sets out a comprehensive set of principles for ensuring that AI systems are effective and are developed and used responsibly throughout their whole lifecycle. These principles:

  • stress the importance of first confirming that AI is truly necessary for the given solution, ensuring that the project is feasible, and using high-quality data for training algorithms;
  • emphasize the need for clear processes and rules governing data access, alignment among stakeholders, and defining success through key performance indicators and risk-value assessments;
  • highlight the importance of appropriate infrastructure, organization, and workforce for the deployment of AI, while continually evaluating the interpretability and added value of the AI system; and
  • address crucial aspects of responsible AI, including monitoring security risks, complying with legal and data regulations, ensuring ethical alignment, and fostering environmental sustainability throughout the AI system’s development and implementation.

The report 'Generative AI Greece 2030', authored by the National Center for Social Research (EKKE) and the National Center for Scientific Research with backing from the Special Secretariat of Foresight, examines the future landscape of generative AI in Greece by 2030. The report proposes co-creating non-mandatory guidelines for public authorities, social partners, and other stakeholders to ensure AI development aligns with ethical principles and mitigate risks of socio-economic divides due to unequal access to AI. To this end, the report calls for the creation of ethical guidelines and supervision mechanisms for AI that promote societal values, safety, transparency, innovation, and human welfare, while addressing issues like digital inequality and algorithmic discrimination.

The National Commission for Bioethics & Technoethics of Greece has issued an Opinion on 'The Applications of Artificial Intelligence in Health in Greece', which includes guidelines emphasizing that AI applications must align with fundamental ethical principles, such as:

  • Autonomy: Respect patients' right to informed decision-making while ensuring privacy and consent;
  • Beneficence and no harm: Improve health outcomes or diagnostics without causing harm;
  • Safety: Implement strict quality control to prevent errors;
  • Fairness: Ensure fair distribution of AI benefits in healthcare;
  • Equality: Provide equitable access to AI-based healthcare for all;
  • Prevention & Precaution: Stop AI use if risks are identified or uncertain;
  • Explainability: Ensure AI decisions are transparent, interpretable, and accountable;
  • Complementarity: AI supports, but does not replace, human medical judgement.

In March 2025, the National Commission for Bioethics & Technoethics of Greece issued another Opinion, on 'The Use of Artificial Intelligence in Greek Schools'. The opinion contains ethical guidelines and policy recommendations for the use of AI in primary and secondary education. The Commission identifies the following ethical principles as fundamental to the introduction of any AI application in schools: respect for human dignity, autonomy, beneficence and no harm, equitable access, complementarity, transparency, sustainability, augmentation over automation and inventiveness over repetition.

Regarding tertiary education, certain faculties of Greek universities, such as the University of Crete, the National and Kapodistrian University of Athens, the University of Macedonia, the Aristotle University of Thessaloniki and the University of West Attica, have published guidelines on the use of AI tools by students and faculty members (both administrative and teaching staff). These guidelines emphasize that AI may be used in Greek universities as an assistive tool, always with full disclosure of AI involvement, critical evaluation of the output and respect for intellectual property. Submitting AI-generated content as original work without acknowledgment constitutes academic misconduct comparable to plagiarism, and violations may lead to institutional sanctions.

Furthermore, the Hellenic Federation of Enterprises (SEV) has issued a Guide on the use of AI for businesses. This guide aims to help Greek enterprises understand the impact of AI and integrate AI effectively. It focuses on practical changes, business benefits (like productivity, revenue increase, and cost reduction), and employee empowerment. The guide also covers strategy, challenges, and prerequisites for successful implementation, detailing widespread applications across various sectors.

Finally, the Hellenic Association of Communication Agencies (EDEE) and the Hellenic Advertisers Association (SDE) have jointly issued a Best Practice Guide titled ‘10 Principles for the Responsible Use of Artificial Intelligence in Advertising’, addressed to advertising agencies and individuals advertising their products/services.

Last modified 19 July 2025

There are a number of (non-binding) guidelines.

Ethical AI Framework: The Digital Policy Office of the Hong Kong SAR Government (DPO) issued an Ethical Artificial Intelligence Framework (Ethical AI Framework) in July 2024. The Ethical AI Framework sets out a tailored framework for the ethical use of AI and big data analytics in IT projects, together with an assessment template for evaluating the implications of AI applications. It seeks to establish a common approach and structure to govern the development and deployment of AI applications and to maximise the benefits of AI in IT projects, and sets out twelve ethical principles. The Ethical AI Framework was initially developed for the Government's internal adoption of AI, before being customised and released more widely as guidance for all organisations which utilise AI or big data analytics in IT projects.

GenAI Guideline: The DPO published the Hong Kong Generative Artificial Intelligence Technical and Application Guideline (GenAI Guideline) in April 2025, to promote the safe and responsible application of generative AI technologies.

AI and Data Privacy: The Office of the Privacy Commissioner for Personal Data (PCPD), Hong Kong's privacy regulator, has published:

  • the Checklist on Guidelines for the Use of Generative AI by Employees issued by the PCPD in March 2025;
  • the Artificial Intelligence: Model Personal Data Protection Framework (Model Framework) issued by the PCPD in June 2024. The Model Framework provides a set of recommended measures to assist compliance with the requirements of Hong Kong's data protection law when implementing AI systems, as well as adherence to the three data stewardship values and seven ethical principles for AI advocated in the Guidance (see below). The measures cover:
  1. Establishing AI strategy and governance;
  2. Conducting risk assessment and human oversight;
  3. Customisation of AI models and implementation and management of AI; and
  4. Communication and engagement with stakeholders;
  • the 10 Tips for Users of AI issued by the PCPD in September 2023; and
  • the 'Guidance on the Ethical Development and Use of Artificial Intelligence' (Guidance) in August 2021. The Guidance applies to the development and use of AI systems that involve the use of personal data or the identification, assessment or monitoring of individuals, either of which would potentially impact the privacy of individuals in relation to personal data. The objectives of the Guidance are to facilitate the healthy development and use of AI in Hong Kong and assist organisations in complying with Hong Kong laws applying to personal data.

The Guidance specifies three data stewardship values, together with seven ethical principles for AI, namely:

  1. Accountability.
  2. Human oversight.
  3. Transparency and interpretability.
  4. Data privacy.
  5. Fairness.
  6. Beneficial AI.
  7. Reliability, robustness and security.

Industry-specific: Various regulators in Hong Kong have also issued industry-specific guidance for their sectors.

Last modified 25 July 2025

AI compliance in Italy

By Ministerial Decree No. 180 of 17 December 2025, the Ministry of Labour formally adopted the Guidelines for the implementation of artificial intelligence in the employment sector. The guidelines aim to foster the responsible adoption of AI by safeguarding workers' rights, promoting sustainable innovation, and ensuring compliance with applicable legal frameworks. Subject to updates by the Observatory on the adoption of AI systems in the workplace, established under Article 12 of Law No. 132/2025, the guidelines provide practical tools for enterprise digitalisation, including by specifying training requirements in AI and AI safety and outlining available economic incentives for AI adoption.

The guidelines also set out key principles for responsible AI use in the workplace, including by listing the provisions of labour laws and regulations that apply to AI systems. These include, among others, the prohibition on remote monitoring using AI tools that track workers' behaviour, productivity or movements, unless such systems are covered by an agreement with trade union representatives or authorised by the labour inspectorate. The guidelines further promote the mitigation of bias and discrimination through clear internal rules; the protection of privacy and worker dignity, including limits on AI-based surveillance and safeguards against automation-related stress; and equitable access to AI technologies for large enterprises, SMEs, and self-employed professionals.

Last modified 3 February 2026

On 19 April 2024, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry released the 'AI Guidelines for Business Ver1.0' (AI Guidelines). Although non-binding, the AI Guidelines aim to establish guiding principles for business operators using AI, promoting innovation and the use of AI while reducing the social risks it poses. The AI Guidelines are abstract in nature, outlining basic principles and approaches for ensuring the safety of AI use whilst maximising its benefits. To complement them, the 'Appendix to the AI Guidelines' provides specific desirable approaches for AI-related business entities.

In addition to the AI Guidelines, there are several guidelines that establish legal interpretations of AI-related matters within the existing legal framework. For example, on 15 March 2024, the Agency for Cultural Affairs released the 'General Understanding on AI and Copyright in Japan' to clarify its view on the copyrightability of AI-generated works. The copyrightability of such works is determined on a case-by-case basis, considering factors such as the quantity and content of instructions and inputs (such as prompts), the number of attempts to generate works and the selection process from among multiple generated works.

The LDP Headquarters for the Promotion of Digital Society Project Team on the Evolution and Implementation of AIs published an 'AI White Paper 2024: New Strategies in Stage II - Toward the world's most AI-friendly country' in April 2024 (2024 White Paper). The 2024 White Paper reflects on the rapid evolution of the AI landscape since the publication of the 2023 White Paper (referred to below) and sets out a Stage II strategy to enhance Japan's AI competitiveness and safety, promote AI R&D and utilisation, and lead international AI rulemaking, whilst fostering cooperation with Asian countries and the global south.

The LDP Headquarters for the Promotion of Digital Society Project Team on the Evolution and Implementation of AIs published an 'AI White Paper: Japan's National Strategy in the New Era of AI' in April 2023 (2023 White Paper). The 2023 White Paper outlined Japan's strategy in the new era of AI, focusing on the impact of large-scale language models like ChatGPT, the need for a new national strategy and the importance of international competitiveness and regulatory frameworks.

In January 2019 the Cabinet Office of Japan published 'Social Principles of Human Centric AI' (Social Principles), introducing a collection of social principles for AI and highlighting some factors to be considered in the research and development of AI, as well as its implementation in society.

Last modified 31 July 2025

AI compliance in Malta

In October 2019, the Maltese Government developed and published a National AI Strategy titled 'Strategy and Vision for Artificial Intelligence in Malta 2030' (National AI Strategy). While not legally binding, the National AI Strategy aims to position Malta highly among nations with a national AI programme. The National AI Strategy comprises over seventy actions across three strategic pillars: Investment, Start-Ups & Innovation; Public Sector Adoption; and Private Sector Adoption. In addition, the National AI Strategy also includes three strategic enablers: Education & Workforce; Legal & Ethical Framework; and Ecosystem Infrastructure. The Malta Digital Innovation Authority (MDIA) is tasked with monitoring and governing the implementation of the National AI Strategy, ensuring that the process aligns with the respective objectives and timelines. The National AI Strategy prioritises the public sector, highlighting AI integration to enhance healthcare, education, traffic management and tourism. Simultaneously, the National AI Strategy also emphasises the integration of legal and ethical considerations into AI systems, which is vital to safeguard national security, protect citizens' rights, advance commercial interests, and ensure trustworthy AI technology.

In October 2019, Malta also developed an Ethical AI Framework (National Framework), which outlines principles and governance practices for creating reliable AI systems. The National Framework forms an integral part of the National AI Strategy to establish key ethical principles, including human autonomy, harm prevention, and fairness. The National Framework aims to guide AI practitioners in identifying risks and following high ethical standards. While not legally binding, the National Framework acts as AI guidance that is in alignment with emerging international standards, including the Ethics Guidelines for Trustworthy AI published by the High-Level Expert Group on Artificial Intelligence as set up by the European Commission.

Last modified 23 July 2025

The UNESCO AI Readiness Assessment for Mauritius recommended the formulation of a modern AI policy. In response, the Blueprint embeds the development of a National AI Policy governing the trustworthy and ethical development and use of AI. The policy will be grounded in the principles of transparency, fairness and accountability, and will be human-centric and innovation-friendly.

The Blueprint also mentions that the following actions shall be taken in the field of AI:

  • Accelerate the implementation of intelligent automation, virtual assistants, and predictive analytics within Government, for instance through AI-powered job matching.
  • Build on Mauritius' leadership in AI readiness in Africa to position the country as a regional hub for ethical AI, supporting startups and SMEs in developing further AI-driven solutions in Fintech, Agritech, Edtech, and Climate Action.
  • Create regulatory AI sandboxes and regional/mobile Fab Labs to disseminate knowledge on AI and build AI capacity and awareness across the island. AI shall be introduced into the curriculum from the upper primary level.
  • Encourage Public Private Partnership initiatives to set up an AI Tech Park for research & development, startups and innovation.
  • Apply AI to monitor environmental risks, optimise resource use, and enable smarter urban planning and agriculture through data-driven systems.

Last modified 26 June 2025

In 2018, Mexico presented an Artificial Intelligence strategy and founded its 'IA2030Mx' coalition comprising nine institutions across various sectors, leading to the development and publication of the Mexican National Agenda for Artificial Intelligence in September 2020. The Agenda discusses several key thematic axes, including 'data, digital infrastructure and ethics' and 'skills, capacities and education', and provides recommendations for a future pathway relating to each axis.

Last modified 29 July 2025

Guidance from the Office of the Privacy Commissioner

The Office of the Privacy Commissioner (OPC) issued the Artificial Intelligence and the Information Privacy Principles guidance in September 2023 (OPC AI Guidance), which provides non-binding guidance on compliance with the Information Privacy Principles (the key obligations under the Privacy Act) when adopting AI-enabled solutions. The OPC AI Guidance builds on the OPC's Generative Artificial Intelligence guidance dated June 2023 (OPC's Gen AI Guidance).

At a high-level, the OPC AI Guidance:

  • recommends undertaking a Privacy Impact Assessment before deploying AI;
  • emphasises the importance of good governance, which requires involvement of senior leadership;
  • highlights the importance of transparency and explainability, accuracy, robustness and security, accountability and human values and fairness.  Also consistent with international regulation is the OPC's call for a 'privacy-by-design' approach to implementing AI;
  • identifies a need to consider te ao Māori perspectives on privacy (broadly, te ao Māori is the Māori worldview including tikanga Māori - Māori customs and protocols).  Specific concerns identified in the OPC AI Guidance include:
    • bias from systems developed overseas that do not work accurately for Māori;
    • collection of Māori information without work to build relationships of trust, leading to inaccurate representation of Māori taonga that fail to uphold tapu and tikanga; and
    • exclusion from processes and decisions of building and adopting AI tools that affect Māori whānau, hapū, and iwi, including use of these tools by the public sector; and
  • identifies some use cases for AI as higher-risk and requiring more care, for example the use of AI tools for automated decision-making.

For more information on the OPC AI Guidance, see DLA Piper's update here: New Zealand's Privacy Commissioner follows global trends with latest guidance on AI | DLA Piper.

New Zealand Government Cabinet Paper

The Minister of Science, Innovation and Technology – Hon Judith Collins KC published the Approach to work on Artificial Intelligence Cabinet paper in July 2024, seeking agreement from Cabinet’s Economic Policy Committee on a strategic approach for New Zealand’s use of AI. The Minister proposed a “light-touch, proportionate and risk-based approach to AI regulation”. The approach would leverage existing laws as guardrails and only introduce new regulation to “unlock innovation or address acute risks”. Cabinet has focussed on the following five key domains:

  • setting a strategic approach to AI;
  • enabling safe AI innovation in the public service;
  • harnessing AI in the New Zealand economy (with the Ministry of Business, Innovation and Employment (MBIE) instructed to formulate AI guidance for firms to utilise);
  • prioritising engagement on international rules and norms; and
  • coordinating with work on national security.

OECD AI Principles

The New Zealand Government has adopted the Organisation for Economic Co-operation and Development (OECD) AI Principles, first agreed in 2019 and updated in 2024 (OECD AI Principles), to guide the development of trustworthy, innovative, and democratic AI in New Zealand, aligning with other OECD member states.

National AI Strategy

In July 2025, the New Zealand Government released New Zealand’s Strategy for Artificial Intelligence: Investing with confidence (AI Strategy) aiming to accelerate private sector AI adoption and innovation. The AI Strategy commits to stable and enabling policy for AI, involving a light-touch and principles-based approach that relies on existing legislation and the OECD AI principles. In the AI Strategy, the Government outlines that it will reduce barriers to adoption, provide clear regulatory guidance, build necessary capabilities, and ensure that adoption occurs responsibly. The AI Strategy emphasises the opportunities in AI adoption and application rather than foundational AI development.

The Ministry of Business, Innovation and Employment published a Responsible AI Guidance for Businesses (AI Guidance for Business) alongside the AI Strategy to assist with its practical application. The AI Guidance for Business is a non-binding guide of good practices and actions that can support businesses to adopt AI. It identifies and discusses various types of considerations for businesses using or developing AI systems, including risks to cybersecurity, privacy, human rights, workplace culture, the environment, intellectual property and creators, and physical safety.

Public Service AI Framework

The New Zealand Government has introduced the Public Service AI Framework (Framework) to guide the responsible use of AI across the public sector. While not legally binding, the Framework sets out best practice principles for AI adoption. Its vision is the responsible adoption of AI "to modernise public services and deliver better outcomes for all New Zealanders."

The Framework is guided by five AI principles:

  • Inclusive, sustainable development – Public Service AI systems should contribute to inclusive growth, sustainable development and the reduction of economic, social, gender and other inequalities, including by reference to access to technology.
  • Human-centred values – Public Service AI should respect the rule of law, democratic values, human and labour rights, including personal data protection and privacy, ensuring ethical appropriate use.
  • Transparency and explainability – Those using, or interacting with, Public Service AI should be aware of, and understand, how the Public Service is using that AI. Public Service agencies should therefore disclose when AI is used, how those systems were developed and how they affect outcomes.
  • Security and safety – The security of customers and staff is a core business requirement. Public Service AI should apply a robust risk management approach and ensure the traceability of data.
  • Accountability – Public Service AI should be subject to oversight. Capability should therefore keep up with technological changes, including to relevant regulatory and governance frameworks.

The Framework's principles are informed by the OECD AI Principles, as well as the UK's Generative AI Framework for HMG dated January 2024 (since withdrawn), the Algorithm Charter for Aotearoa New Zealand dated July 2020, and the AI Forum's AI Principles dated March 2020.

The Government Chief Digital Officer is leading a Public Service AI work programme to support the implementation of the Framework’s vision while working closely with MBIE to compile a cross-portfolio policy work programme. The programme is guided by six pillars:

  • Governance – supporting transparency and human accountability in Public Service AI use.
  • Guardrails – enabling safe and responsible Public Service AI use.
  • Capability – building internal and external AI knowledge and skills.
  • Innovation – providing pathways that enable safe AI testing and innovation.
  • Social licence – ensuring New Zealanders have trust and confidence in Public Service AI use.
  • Global voice – ensuring international counterparts see New Zealand as a trusted AI partner.

Public Service Generative AI Guidance

The New Zealand Government published the Responsible AI Guidance for the Public Service: GenAI dated February 2025 (GenAI Guidelines) to support the New Zealand Public Service to explore generative AI systems in ways that are safe, transparent and responsible. The GenAI Guidelines outline foundational aspects of supporting public sector agencies in the utilisation and adoption of generative AI and give examples of how each aspect can be implemented.

The GenAI Guidelines also highlight key considerations for generative AI systems that affect customers' experience with the New Zealand Government, including transparency, accessibility, ethical considerations to address bias, Māori and indigenous data considerations, and privacy. Public Service agencies are expected to ensure transparency and accountability in their use of generative AI, enhance employee skills and capabilities, and follow best practices in procurement to align generative AI solutions with business needs and regulatory compliance.

AI Forum publications

The Artificial Intelligence Forum of New Zealand - Te Kāhui Atamai Iahiko o Aotearoa (AI Forum), released its AI Blueprint for Aotearoa dated July 2024, designed as a strategy to highlight current industry investments in AI in New Zealand and help guide strategic investments over the next five years to support AI technologies. It proposes a mechanism to leverage existing industry initiatives and programmes to help drive results.

The AI Forum also released its Trustworthy AI in Aotearoa AI Principles dated March 2020 (AI Forum AI Principles). The AI Forum AI Principles are organised under five subheadings, namely: fairness and justice; reliability, security and privacy; transparency; human oversight and accountability; and wellbeing.

Reserve Bank report

The Reserve Bank of New Zealand (RBNZ) issued its Financial Stability Report dated May 2025 (Report), which included a special topic "Rise of the machines – How could artificial intelligence impact financial stability?". The Report outlines the current use of AI within the financial sector, explores its potential benefits and challenges and provides an overview of the evolving regulatory landscape. The report identifies AI-driven risks to financial stability, including errors, data privacy concerns, market distortions, and increased exposure to cyber attacks, all of which could amplify existing systemic risks.

Additionally, the Report flags current and upcoming legislation and binding standards that are or will be relevant to mitigating AI-driven risks. These include:

  • the proposed Risk Management Standard and Operational Resilience Standards for deposit takers, slated to take effect in 2028; and
  • the Financial Markets (Conduct of Institutions) Amendment Act 2022 (commonly known as CoFI), which aims to ensure that financial institutions treat consumers fairly. The RBNZ considers that this will serve as an important framework for regulating the conduct risk associated with AI.

Last modified 14 July 2025

Nigeria is a signatory to the Bletchley Declaration, having joined 27 other countries and the European Union at the AI Safety Summit held at Bletchley Park in November 2023. The declaration is an agreement to establish a shared understanding of the opportunities and risks posed by frontier AI systems and commits signatories to international cooperation for the safe and responsible development of artificial intelligence.

In August 2024, the National Centre for Artificial Intelligence and Robotics (NCAIR), a specialized arm of the National Information Technology Development Agency (NITDA), published a draft of Nigeria's National Artificial Intelligence Strategy (NAIS). The effort was coordinated under the leadership of the Federal Ministry of Communications, Innovation and Digital Economy. In April 2025, Nigerian media reported the release of the approved and finalised version of the NAIS; however, there has been no confirmation from official government sources.

In March 2025, NITDA launched its AI Transformation Roadmap 2025, which aims to guide its journey into a smart organization that integrates human expertise with AI capabilities. The roadmap focuses on practical steps for AI adoption within NITDA, emphasizing capacity building, ethical AI use and innovation acceleration.

In August 2024, the Nigerian Bar Association (NBA), during its Annual General Meeting of its Section on Legal Practice, issued Guidelines for the use of Artificial Intelligence in the legal profession, 2024 (NBA AI Guidelines). The document focuses on transparency, data privacy, human oversight and responsible AI adoption by lawyers. However, the NBA AI Guidelines are sector-specific and not issued by a government authority.

Last modified 17 June 2025

The content on Regulatory guidance / voluntary codes in the European Union applies in Norway.

Last modified 9 October 2025

Peru has a National Strategy for Artificial Intelligence for the period 2021-2026 (National Strategy), the purpose of which is to:

  • Propose axes, objectives and actions that promote research, development and adoption of AI;
  • Help create solutions to national problems based on AI; and
  • Generate new opportunities for the country's development, prioritising productive sectors and public services aligned with national strategies and policies.

The National Strategy also includes the creation of the National Centre for Innovation and Artificial Intelligence and the National Centre for High Performance Computing.

Last modified 20 July 2025

AI compliance in Portugal

Within the framework of the Portuguese National Digital Strategy, approved by Council of Ministers Resolution no. 207/2024 of 30 December 2024, a national AI Agenda was expected to be presented by the Portuguese Government by the end of the first quarter of 2025, including the development of a Portuguese-language LLM (approved by Council of Ministers Resolution no. 201/2024 of 30 December). Despite this initial schedule, however, the formal presentation of the agenda has been postponed due to the ongoing government transition.

The Portuguese Labour Code was amended by Law no. 13/2023 of 3 April (within the framework of the so-called 'Decent Work Agenda' (Agenda do Trabalho Digno)), which regulates the use of algorithms and AI systems in labour relations.

In particular:

  • Article 24 (3) of the Portuguese Labour Code provides that any decision regarding employees and job candidates based on algorithms or other AI systems may not favour, benefit, disadvantage or deprive them of any right, nor exempt them from duties, on grounds of ancestry, age, sex, sexual orientation, gender identity, marital status, family situation, economic situation, education, origin or social condition, genetic heritage, reduced labour capacity, disability, chronic illness, nationality, ethnic origin or race, territory of origin, language, religion, political or ideological convictions or trade union membership; and
  • Article 106 (3) (s) of the Portuguese Labour Code provides that, among the information to be provided by employers to employees upon hiring, the employer must disclose the parameters, criteria, rules and instructions on which algorithms or other AI systems that affect decision-making on access to and maintenance of employment, as well as working conditions (including profiling and monitoring), are based.

In addition, in 2023 the Portuguese Agency for Administrative Modernisation (Agência para a Modernização Administrativa / AMA) published a 'Guide to Ethical, Transparent and Responsible Artificial Intelligence in Public Administration' as part of the GuIA Responsável Project. The Guide provides principles and practical guidance for the ethical and responsible use of AI in the public sector, focusing on ethics, transparency, explainability, fairness and accountability. It also includes an ethical risk assessment tool to help public entities evaluate and manage AI systems, serving as a strategic reference for the responsible use of AI in Portugal’s public administration.

Last modified 22 July 2025

In order to ensure the consistent, effective and uniform application of the EU AI Act across the European Union, the European Commission has adopted a number of guidelines (non-binding, since only the Court of Justice of the European Union has the power of authoritative interpretation) covering several provisions of the text.

Further guidelines on high-risk AI systems are expected and are currently under consultation. The Commission is also expected to provide harmonised standards and common specifications for both high-risk AI systems and general-purpose AI models, giving organisations further tools that confer a presumption of conformity.

The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.

The Commission has also released the first draft of its Code of Practice on Transparency of AI-Generated Content. The Code is planned to be finalized by June 2026. If approved, the final code will be a voluntary tool for providers and deployers to demonstrate compliance with their obligations for marking and labelling AI-generated content under the EU AI Act.

Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, may adopt voluntary codes of conduct (Article 95) in order to apply, on a non-binding basis, technical solutions and industry best practices. The AI Office is therefore expected to facilitate further codes of conduct, distinct from the GPAI Code of Practice and the Code of Practice on Transparency.

To support organisations in identifying and implementing AI literacy initiatives, the Commission has launched a repository of AI literacy practices. The repository was updated in November 2025 to improve the searchability of practices.

In May 2024, the Council of Europe published a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, whilst being conducive to technological progress and innovation.

AI compliance in Romania

Romania's government approved the National Artificial Intelligence Strategy for 2024-2027 in July 2024. The strategy aims to accelerate the adoption and use of AI across various sectors and to make AI one of the main national priorities, promoting economic growth, social well-being and national security through the responsible and ethical use of the technology.

Last modified 25 July 2025

There are a number of (non-binding) guidelines:

National AI Strategy: the Singapore government published its first National AI Strategy in 2019, outlining plans to enhance the integration of AI for the transformation of its economy. Building on this foundation, the government launched the Singapore National AI Strategy 2.0 to address the recent technological advances, particularly in generative AI, and to create a robust AI ecosystem, enhance workforce skills, ensure sufficient infrastructure and promote a safe environment for innovation.

Model AI Governance Frameworks: the Infocomm Media Development Authority of Singapore (IMDA) and the AI Verify Foundation (a not-for-profit foundation wholly owned by IMDA) released on 30 May 2024 a Model AI Governance Framework for Generative AI (Model Framework for GenAI) that builds on the Model Artificial Intelligence Governance Framework (Model Framework) that was first published on 23 January 2019. The Model Framework for GenAI provides practical guidance for private sector entities to tackle ethical and governance challenges when implementing AI solutions. It aims to foster public understanding and confidence in technologies by explaining how AI systems work, establishing robust data accountability measures and ensuring open and transparent communication. The Model Framework for GenAI sets out nine dimensions for fostering a trusted AI ecosystem:

  1. Accountability - putting in place the right incentive structure for different players in the AI system development life cycle to be responsible to end-users.
  2. Data - ensuring data quality and addressing potentially contentious training data in a pragmatic way, as data is core to model development.
  3. Trusted development and deployment - enhancing transparency around baseline safety and hygiene measures based on industry best practices in development, evaluation and disclosure.
  4. Incident reporting - implementing an incident management system for timely notification, remediation and continuous improvements, as no AI system is foolproof.
  5. Testing and assurance - providing external validation and added trust through third-party testing and developing common AI testing standards for consistency.
  6. Security - addressing new threat vectors that arise through generative AI models.
  7. Content provenance - transparency about where content comes from, as useful signals for end-users.
  8. Safety and alignment R&D - accelerating R&D through global cooperation among AI Safety Institutes to improve model alignment with human intention and values.
  9. AI for public good - responsible AI includes harnessing AI to benefit the public by democratising access, improving public sector adoption, upskilling workers and developing AI systems sustainably.

PDPC Guidelines: the Personal Data Protection Commission (PDPC), Singapore's privacy regulator, published the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems (PDPC Guidelines) on 1 March 2024. The PDPC Guidelines offer clarity on using personal data for AI development, outline consumer consent requirements, specify obligations for third-party AI developers under the Personal Data Protection Act (PDPA), and provide best practices for business compliance with the PDPA. The PDPC Guidelines are not intended to be legally binding, but rather will act as a point of advisory guidance for the interpretation of the PDPA.

Guidelines and Companion Guide on Securing AI Systems: the Cyber Security Agency of Singapore (CSA) published the Guidelines on Securing AI Systems on 15 October 2024 to help system owners secure AI throughout its lifecycle. These guidelines aim to help protect AI systems against traditional cybersecurity risks, such as supply chain attacks, as well as novel risks such as adversarial machine learning. Further, to support system owners, the CSA has collaborated with AI and cybersecurity practitioners to develop a Companion Guide on Securing AI Systems. The key considerations for system owners include:

  • taking a lifecycle approach - businesses should adopt a comprehensive approach to AI security across five key stages: (1) planning and design, (2) development, (3) deployment, (4) operations and maintenance, and (5) end of life. Businesses should conduct thorough risk assessments, prioritise identified risks, implement appropriate security measures, and continuously evaluate any residual risks throughout the AI system's lifecycle.
  • planning and design - businesses are encouraged to be proactive in securing their AI systems. This involves staying informed about the latest security developments, and regularly updating risk management strategies to address emerging threats.
  • development - businesses must assess and monitor potential security risks of the AI system's supply chain across its life cycle, and consider security benefits and trade-offs when selecting the appropriate model to use. They should also recognise the importance of AI-related assets such as models, data, prompts, logs and assessments, and establish procedures to monitor, verify and manage versions, protect these assets and secure the AI development environment.
  • deployment - it is important to secure the deployment infrastructure and environment of AI systems (e.g. establishing access controls and logging/monitoring, segregating environments, etc.). Incident management procedures should be developed and maintained.
  • operations and maintenance - businesses should monitor AI system outputs and behaviour closely, to detect any anomalies or issues. It is important to have a robust process for vulnerability disclosure, to quickly address any potential security concerns.
  • end of life - there should be proper and secure disposal/destruction of data and model, especially when training AI models on large volumes of data. Businesses must securely destroy sensitive customer data to prevent data breaches and comply with relevant regulations.
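
The CSA materials are governance-oriented rather than code-level, but the stage-by-stage structure above lends itself to simple tooling. The following minimal Python sketch tracks whether controls have been implemented at each of the five lifecycle stages; the control names are assumptions for demonstration, not items taken from the Guidelines or the Companion Guide.

```python
# Illustrative sketch only: a minimal structure for tracking security
# controls against the five lifecycle stages named above. The control
# names are invented for demonstration, not drawn from the CSA documents.
LIFECYCLE_STAGES = [
    "planning_and_design",
    "development",
    "deployment",
    "operations_and_maintenance",
    "end_of_life",
]

# Each stage maps control names to whether they have been implemented.
controls = {
    "planning_and_design": {"threat_model_reviewed": True},
    "development": {"supply_chain_assessed": True, "model_assets_versioned": False},
    "deployment": {"access_controls_in_place": True, "incident_runbook_written": False},
    "operations_and_maintenance": {"output_anomaly_monitoring": True},
    "end_of_life": {"secure_data_disposal_plan": False},
}

def outstanding_controls(state: dict) -> list[tuple[str, str]]:
    """Return (stage, control) pairs that are not yet implemented."""
    return [
        (stage, name)
        for stage in LIFECYCLE_STAGES
        for name, done in state.get(stage, {}).items()
        if not done
    ]

for stage, name in outstanding_controls(controls):
    print(f"TODO [{stage}]: {name}")
```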

Proposed Guide on Synthetic Data Generation: the PDPC and the IMDA released a Proposed Guide on Synthetic Data Generation on 15 July 2024. The guide seeks to assist organisations in understanding synthetic data (SD) generation techniques and possible use cases (especially for AI), and outlines recommended governance, contractual and technical controls to reduce the privacy risk of potential re-identification of synthetic data.
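
To make the idea of a technical control concrete, the sketch below shows a simple distance-to-closest-record check, a common heuristic for flagging synthetic rows that sit suspiciously close to a real record. It assumes only NumPy; the data, features and threshold are invented for demonstration and are not drawn from the Proposed Guide.

```python
# Illustrative sketch only: a distance-to-closest-record check, one
# example of a technical control against re-identification risk. The
# data and threshold below are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "real" dataset (age, income) and a naive synthetic sample
# drawn from independently fitted marginals.
real = rng.normal(loc=[40.0, 60_000.0], scale=[10.0, 15_000.0], size=(500, 2))
synthetic = np.column_stack([
    rng.normal(real[:, 0].mean(), real[:, 0].std(), size=500),
    rng.normal(real[:, 1].mean(), real[:, 1].std(), size=500),
])

# Scale features so distances are comparable, then find each synthetic
# row's distance to its nearest real record.
scale = real.std(axis=0)
dists = np.linalg.norm((synthetic[:, None, :] - real[None, :, :]) / scale, axis=2)
closest = dists.min(axis=1)

threshold = 0.05  # illustrative cut-off, not a value from the guide
flagged = int((closest < threshold).sum())
print(f"{flagged} of {len(closest)} synthetic rows flagged for re-identification review")
```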

Industry-specific: various industry regulators in Singapore have issued guidance specific to AI, including:

  • In the financial sector, the Monetary Authority of Singapore (MAS) published in 2018 (and updated in 2019) the 'Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector' (Principles), which outline essential principles for firms providing financial products and services regarding the responsible use of artificial intelligence and data analytics (AIDA). These principles also aim to enhance internal governance related to data management and usage, strengthening public confidence in AIDA practices. Further, in 2024 MAS published an information paper entitled 'Artificial Intelligence (AI) Model Risk Management' outlining good practices for managing AI and generative AI model risks identified during its review of banks' AI model risk management practices. The paper emphasises governance and oversight, key risk management systems and processes, and the development and deployment of AI. MAS suggested that these best practices be considered and followed not only by banks but by all financial institutions.
  • In the healthcare sector, the Ministry of Health (MOH), the Health Sciences Authority (HSA) and Synapxe co-developed the Artificial Intelligence in Healthcare Guidelines (MOH Guidelines) in 2021 to support patient safety and trust in AI applications within the healthcare industry by sharing good practices with AI developers and implementers. The MOH Guidelines also complement the existing regulatory requirements for AI medical devices under the HSA.

Last modified 28 July 2025

In order to ensure the consistent, effective and uniform application of the EU AI Act across the European Union, the European Commission has adopted a number of guidelines (non-binding, since only the Court of Justice of the European Union has the power of authoritative interpretation) covering several provisions of the text.

Further guidelines on high-risk AI systems are expected and are currently under consultation. The Commission is also expected to provide harmonised standards and common specifications for both high-risk AI systems and general-purpose AI models, giving organisations further tools that confer a presumption of conformity.

The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.

The Commission has also released the first draft of its Code of Practice on Transparency of AI-Generated Content. The Code is planned to be finalized by June 2026. If approved, the final code will be a voluntary tool for providers and deployers to demonstrate compliance with their obligations for marking and labelling AI-generated content under the EU AI Act.

Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, may adopt voluntary codes of conduct (Article 95) in order to apply, on a non-binding basis, technical solutions and industry best practices. The AI Office is therefore expected to facilitate further codes of conduct, distinct from the GPAI Code of Practice and the Code of Practice on Transparency.

To support organisations in identifying and implementing AI literacy initiatives, the Commission has launched a repository of AI literacy practices. The repository was updated in November 2025 to improve the searchability of practices.

In May 2024, the Council of Europe published a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, whilst being conducive to technological progress and innovation.

AI compliance in the Slovak Republic

On 2 November 2020, the Permanent Commission for Ethics and Regulation of Artificial Intelligence (CERAI) was established by the Ministry of Investment, Regional Development and Informatization of the Slovak Republic as an independent expert and advisory body. The role of CERAI is to consider the ethical, social and legal issues related to the research, development, deployment and use of technologies using AI components and AI systems. At its second meeting on 25 May 2021, CERAI approved the Commission's Outline and Focus.

In 2019, the Slovak Centre for Artificial Intelligence Research was founded. It is a neutral, independent and non-profit platform that, in addition to networking actors in the field of AI research and application, serves mainly as a platform of excellence in this rapidly developing field.

Last modified 29 July 2025

In order to ensure the consistent, effective and uniform application of the EU AI Act across the European Union, the European Commission has adopted a number of guidelines (non-binding, since only the Court of Justice of the European Union has the power of authoritative interpretation) covering several provisions of the text.

Further guidelines on high-risk AI systems are expected and are currently under consultation. The Commission is also expected to provide harmonised standards and common specifications for both high-risk AI systems and general-purpose AI models, giving organisations further tools that confer a presumption of conformity.

The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.

The Commission has also released the first draft of its Code of Practice on Transparency of AI-Generated Content. The Code is planned to be finalized by June 2026. If approved, the final code will be a voluntary tool for providers and deployers to demonstrate compliance with their obligations for marking and labelling AI-generated content under the EU AI Act.

Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, may adopt voluntary codes of conduct (Article 95) in order to apply, on a non-binding basis, technical solutions and industry best practices. The AI Office is therefore expected to facilitate further codes of conduct, distinct from the GPAI Code of Practice and the Code of Practice on Transparency.

To support organisations in identifying and implementing AI literacy initiatives, the Commission has launched a repository of AI literacy practices. The repository was updated in November 2025 to improve the searchability of practices.

In May 2024, the Council of Europe published a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, whilst being conducive to technological progress and innovation.

Last modified 14 July 2025

Various governmental authorities such as MSIT, the Personal Information Protection Commission (PIPC) and the Korea Communications Commission (KCC) are creating regulatory guidance. For general guidance on AI, the following can be considered:

'National Guidelines for AI Ethics' were prepared by MSIT in 2020, to provide comprehensive standards that should be followed by all members of society to implement 'human-centered AI'. The National Guidelines for AI Ethics highlight three basic principles that should be considered during the development and utilisation of AI to achieve 'AI for humanity':

  1. Respect for human dignity;
  2. The common good of society; and
  3. Proper use of technology.

They also list ten key requirements that should be met throughout the AI system lifecycle to abide by the three basic principles, including safeguarding human rights, protection of privacy, prevention of harm, transparency, respect for diversity, and accountability, among others.

'AI Ethics Self-Checklists' were also prepared by MSIT and the Korea Information Society Development Institute (KISDI) in 2023 to help AI actors examine their adherence to the National Guidelines for AI Ethics in practice. They cover philosophical and social considerations, including ethical issues concerning the development and utilisation of AI, as well as the social norms and values to be pursued. The AI Ethics Self-Checklists provide both a general-purpose checklist and field-specific checklists that can be used by different AI actors, with the latter covering AI chatbots, AI for writing, and AI image recognition systems.

'Guidebooks for Development of Trustworthy AI' were prepared by MSIT and the Telecommunications Technology Association (TTA) in 2023 and 2024, providing development requirements and verification items to serve as reference materials for ensuring trustworthiness in the process of developing AI products and services. Eight sector-specific versions of the Guidebooks, covering the medical, autonomous driving, public and social, general AI, smart security and hiring sectors, build on the requirements and assessment questions of the general version with specialised use cases to enhance practical use. The sector-specific versions recommend selecting the requirements and assessment questions appropriate to the characteristics of the AI service during trustworthiness assurance activities.

A 'Strategy to Realize Artificial Intelligence Trustworthy for Everyone' was also announced by MSIT in 2021. It seeks to realise trustworthy AI for everyone by applying three pillars (technology, system and ethics) across ten action plans.

Last modified 29 July 2025

In order to ensure the consistent, effective and uniform application of the EU AI Act across the European Union, the European Commission has adopted a number of guidelines (non-binding, since only the Court of Justice of the European Union has the power of authoritative interpretation) covering several provisions of the text.

Further guidelines on high-risk AI systems are expected and are currently under consultation. The Commission is also expected to provide harmonised standards and common specifications for both high-risk AI systems and general-purpose AI models, giving organisations further tools that confer a presumption of conformity.

The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.

The Commission has also released the first draft of its Code of Practice on Transparency of AI-Generated Content. The Code is planned to be finalized by June 2026. If approved, the final code will be a voluntary tool for providers and deployers to demonstrate compliance with their obligations for marking and labelling AI-generated content under the EU AI Act.

Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, may adopt voluntary codes of conduct (Article 95) in order to apply, on a non-binding basis, technical solutions and industry best practices. The AI Office is therefore expected to facilitate further codes of conduct, distinct from the GPAI Code of Practice and the Code of Practice on Transparency.

To support organisations in identifying and implementing AI literacy initiatives, the Commission has launched a repository of AI literacy practices. The repository was updated in November 2025 to improve the searchability of practices.

In May 2024, the Council of Europe published a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, whilst being conducive to technological progress and innovation.

Last modified 21 July 2025

In order to ensure the consistent, effective and uniform application of the EU AI Act across the European Union, the European Commission has adopted a number of guidelines (non-binding, since only the Court of Justice of the European Union has the power of authoritative interpretation) covering several provisions of the text.

Further guidelines on high-risk AI systems are expected and are currently under consultation. The Commission is also expected to provide harmonised standards and common specifications for both high-risk AI systems and general-purpose AI models, giving organisations further tools that confer a presumption of conformity.

The Commission released the final version of its general-purpose AI Code of Practice on 10 July 2025, and followed it up by publishing Guidelines on the scope of obligations for general-purpose AI model providers on 18 July 2025.

The Commission has also released the first draft of its Code of Practice on Transparency of AI-Generated Content. The Code is planned to be finalized by June 2026. If approved, the final code will be a voluntary tool for providers and deployers to demonstrate compliance with their obligations for marking and labelling AI-generated content under the EU AI Act.

Under the EU AI Act, providers of AI systems that do not fall under the high-risk classification, as well as deployers, may adopt voluntary codes of conduct (Article 95) in order to apply, on a non-binding basis, technical solutions and industry best practices. The AI Office is therefore expected to facilitate further codes of conduct, distinct from the GPAI Code of Practice and the Code of Practice on Transparency.

To support organisations in identifying and implementing AI literacy initiatives, the Commission has launched a repository of AI literacy practices. The repository was updated in November 2025 to improve the searchability of practices.

In May 2024, the Council of Europe published a Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Framework). It is an international, legally binding treaty aiming to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, whilst being conducive to technological progress and innovation.

Last modified 7 July 2025

The Thailand AI Ethics Guidelines were published in February 2022, seeking to ensure that AI technology is developed, applied and used in an ethical way. The Generative AI Governance Guideline for Organizations followed in 2024, introducing the concept and definition of generative AI, outlining its limitations and risks, and providing a framework for the ethical use of generative AI.

Last modified 25 July 2025

Laws specifically addressing AI have not yet been introduced in Turkey. The Digital Transformation Office and the Ministry of Industry and Technology of the Republic of Turkey prepared the National Artificial Intelligence Strategy 2021-2025 (NAIS), published in August 2021. NAIS identifies several key AI values and principles, which include proportionality, safety and security, fairness, transparency and explainability, and responsibility and accountability. It also outlines strategic priorities, objectives and measures relating to these principles.

Last modified 30 July 2025

The Ministry of AI has published various non-binding guides, including an AI Ethics Guide and The UAE Charter for the Development and Use of Artificial Intelligence (AI Charter).

The AI Ethics Guide promotes the ethical design and deployment of AI systems in the public and private sectors. It sets out key principles to be followed when designing, developing and deploying AI, including fairness, accountability, transparency, explainability, security, ethics, sustainability and privacy, together with specific guidelines to ensure each principle is adhered to. Whilst the guide is non-binding, the Ministry of AI intends for it to evolve into a universal framework followed by public and private sector entities. It is a living document, and the government is taking a collaborative approach whereby AI stakeholders can engage in ongoing dialogue.

The AI Charter envisions the transformation of the UAE into a global leader in the ethical oversight and use of AI and seeks to establish a guiding framework to protect the rights of the UAE community in the development and use of AI. It establishes general principles for the development and use of AI in the UAE, which are aligned with the principles referred to in the AI Ethics Guide, and emphasizes the importance of complying with applicable laws when developing and using AI.

Last modified 4 August 2025

On 31 January 2025, the UK Government published a Code of Practice for the Cyber Security of AI (Code) setting out cyber security requirements that apply throughout the lifecycle of AI systems. The Code consists of thirteen principles to be applied voluntarily by relevant groups within the AI supply chain, namely system operators, developers, data custodians, end-users and other affected entities, with each principle linked to a particular stage of the AI system lifecycle.

On 13 January 2025, the UK Government announced an AI Opportunities Action Plan (Action Plan), its roadmap towards harnessing AI opportunities to enhance growth and productivity for the UK, focusing heavily on investment in infrastructure and skills.

The Bletchley Declaration dated 1 November 2023 was the outcome of the UK's AI Safety Summit held by the previous UK Government and signed by several international governments, each affirming that AI should be designed, developed, deployed and used in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible. The UK delegation notably joined the USA in declining to sign the declaration on 'inclusive' AI at the Paris AI Summit in 2025.

On 29 March 2023, the UK Government published a White Paper: A pro-innovation approach to AI regulation (White Paper) elaborating on the approach to AI set out in its 18 July 2022 AI Governance and Regulation Policy Statement. The White Paper set out proposals for implementing a proportionate, future-proof and pro-innovation legislative framework for regulating AI and identified five key principles (para. 48, section 3.2.3):

  1. Safety, security and robustness;
  2. Appropriate transparency and explainability;
  3. Fairness;
  4. Accountability and governance; and
  5. Contestability and redress.

On 31 July 2025, the British Standards Institution (BSI) launched the world’s first international standard for independent audits of AI systems, aiming to ensure consistent evaluation of AI reliability, fairness and safety.

The government is planning to legislate to grant the AI Safety Institute statutory independence by late 2025, making voluntary safety pledges legally binding. 

Additionally, in July 2025, the government signed non-binding arrangements with several frontier AI model providers to foster adoption in public services, including deployment in ‘AI Growth Zones’.

Last modified 23 February 2026

Over many years, and especially from 2022 onward, the U.S. federal government issued Presidential Executive Orders, voluntary frameworks and reports, and agency-level enforcement and guidance to set priorities and shape AI governance. States have also issued guidance and voluntary codes of conduct.

Presidential Executive Orders and Official Statements

In addition to the December 2025 Executive Order described above, the Trump Administration has also issued other orders and documents focusing on AI, the most significant of which are described below.

In January 2025, the Trump Administration issued EO 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked an executive order from the Biden Administration that had focused in part on civil rights and algorithmic discrimination. The new EO called for the elimination or revision of prior AI-related policies deemed inconsistent with promoting innovation and leadership in the U.S. It emphasized the development of AI systems that are “free from ideological bias or engineered social agendas,” and directed agencies to align their policies accordingly within 180 days.

In July 2025, the White House released “America’s AI Action Plan” which establishes a strategic framework for achieving U.S. global dominance in AI. The plan identifies over 90 federal policy actions across three pillars: accelerating AI innovation through deregulation and support for open-source models, building American AI infrastructure including energy capacity and semiconductor manufacturing, and leading in international AI diplomacy while securing strategic advantages over adversaries. The plan emphasizes removing regulatory barriers that hinder private sector innovation, empowering American workers to benefit from AI opportunities, and ensuring AI systems reflect American values and free speech principles.

Voluntary AI-related frameworks

In parallel, voluntary frameworks continue to guide ethical and responsible AI development. Most notably:

  • AI Bill of Rights (October 2022): Issued by the White House Office of Science and Technology Policy (OSTP) during the Biden Administration, the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” is a set of principles aimed at guiding ethical AI use and protecting the public from harmful AI practices. While not enforceable, its core principles have influenced corporate ethics policies and state-level legislation. The Trump Administration has moved away from the principles expressed therein.
  • NIST AI Risk Management Framework (AI RMF 1.0) (January 2023): This voluntary and non-binding framework, released by the U.S. Department of Commerce’s NIST, is designed to mitigate AI risks. Widely adopted by both private companies and government agencies as a best-practice guide, the Risk Management Framework (RMF) encourages organizations to assess and mitigate risks based on the context and potential impact of the AI system. Notably, the Trump Administration, through the White House’s July 2025 AI Action Plan, recommends that NIST revise the AI RMF 1.0 to remove references to certain topics including misinformation, DEI, and climate change.
  • NIST Generative AI Profile (July 2024): NIST released this voluntary guide as a supplement to the RMF. It tailors the RMF’s core principles – “map,” “measure,” “manage,” and “govern” – to the risks of generative AI, such as misinformation, deepfakes, and IP concerns. It offers over 400 recommended actions across the generative AI lifecycle and emphasizes stakeholder engagement, transparency, and responsible deployment.
  • NIST AI 100-4 (November 2024): In furtherance of the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, NIST issued the report “Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency” as a technical overview of methods to increase transparency and reduce the risks associated with AI-generated content. It provides foundational guidance for developing future standards and applies its concepts to the AI RMF. It aims to improve trust in digital media by examining technical approaches for content authentication, provenance tracking, synthetic content detection, and the prevention of harmful AI-generated materials.
  • NIST Cybersecurity Framework AI Profile (December 2025): Issued as a preliminary draft, NIST’s Cyber AI Profile provides guidelines for managing cybersecurity risks associated with AI systems and for leveraging AI to improve cybersecurity capabilities. It applies the core functions of the NIST Cybersecurity Framework (CSF) 2.0 to help organizations strategically adopt AI while addressing emerging cybersecurity risks. It organizes its guidance into three focus areas: securing AI components, using AI for cyber defense, and thwarting AI-enabled attacks.
  • NIST Possible Approach for Evaluating AI Standards Development (January 2026): NIST issued the grant contractor report, “A Possible Approach for Evaluating AI Standards Development,” as a conceptual paper proposing a framework to measure the effectiveness and impact of AI standards. While the report presents a non-prescriptive approach intended to foster discussion, it introduces a formal “theory of change” model to help stakeholders evaluate how AI standards achieve goals such as promoting innovation and public trust. It outlines a process for identifying the inputs, activities, outputs, and outcomes of standards development and measuring their impact against a “counterfactual,” or what would have happened in the absence of the standard.  

Federal agency action

Several federal agencies are also leveraging their statutory authorities to address emerging risks, ensure compliance, and hold organizations accountable for the misuse or misrepresentation of AI technologies. Enforcement actions by agencies such as the FTC, Securities and Exchange Commission (SEC), Department of Justice (DOJ), Food and Drug Administration (FDA), and Department of Health & Human Services (HHS) have aimed to help shape responsible AI practices. These actions span a range of issues – from consumer protection and investor transparency to employment discrimination and medical device safety. The following outlines the roles of some of the key federal agencies in AI oversight and highlights their regulatory focus areas.

FTC

The FTC’s mission is to protect consumers and promote fair competition. The agency has targeted deceptive practices and misleading claims about AI, often referred to as “AI washing,” and has brought numerous enforcement actions against companies that exaggerate the capabilities of their AI systems or falsely market products as AI-powered to gain consumer trust. The FTC has also focused its enforcement on privacy issues with AI systems and the misuse of generative AI for scams and fake reviews. In addition, the agency has explored antitrust issues relating to algorithmic pricing and the market for cloud computing.

SEC

The SEC’s regulatory focus on AI centers on ensuring transparency, managing conflicts of interest, and protecting investors. It requires firms to clearly disclose how AI is used, particularly when it influences investment decisions or client interactions, and polices false or misleading AI statements made to investors or clients. The SEC addresses organizational claims about AI capabilities that mislead investors and requires compliance with existing securities laws, applying a technology-neutral, risk-based approach to oversight. Like the FTC, the SEC has brought enforcement actions relating to “AI washing.”

DOJ

The DOJ enforces a broad array of federal criminal and civil laws and has intensified its focus on misconduct related to AI, particularly “AI washing.” In April 2025, the DOJ, working in parallel with the SEC, brought securities and wire fraud charges against the former CEO of a technology startup for allegedly defrauding investors of over USD 42 million by falsely claiming his company used advanced AI when its services were actually performed manually. This enforcement posture underscores the significance of the DOJ’s late-2024 guidance on how companies should manage risks associated with AI and other emerging technologies; in certain cases, when considering punishment for criminal wrongdoing, federal prosecutors may use the guidance to assess the efficacy of a company’s relevant compliance program. The agency has also brought enforcement actions involving AI-related mistakes and misuse, sometimes working with agencies such as the SEC. Because the DOJ also enforces civil rights laws, it has signaled that AI systems used in areas like housing, employment, and lending must comply with anti-discrimination statutes.

FDA 

The FDA plays a central role in regulating AI in both the medical device and drug development contexts, proactively establishing regulatory infrastructure to ensure compliance and safety. In January 2025, the agency released a draft guidance, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products,” which introduces a risk-based credibility assessment framework for AI models used in this context. It outlines a seven-step process for assessing AI model credibility, discusses challenges such as data quality and algorithmic bias, and highlights the need for life cycle maintenance of AI models to ensure their continued reliability. The FDA has also adopted a separate risk-based framework for the regulation of Software as a Medical Device (SaMD), focusing on the intended use of the software and the potential impact on patient health, which includes evaluating the software’s clinical functionality, reliability, and performance. The FDA strongly encourages sponsors to engage with the agency early in the development process to discuss the use of AI in the context of drug development.

HHS

HHS, through its Office for Civil Rights (OCR), plays a central role in governing the use of AI and other advanced technologies that implicate protected health information. OCR administers and enforces the HIPAA Privacy, Security, and Breach Notification Rules, and has increasingly applied these authorities to account for evolving technological and cybersecurity risks. In particular, OCR has moved to modernize HIPAA Security Rule requirements to reflect changes in the digital health ecosystem, explicitly citing the growing sophistication of cyber threats, the expanded use of automated and data-intensive systems, and the need for stronger safeguards around electronic protected health information.

Last modified 10 March 2026
