Artificial Intelligence in the United States

Regulatory guidance / voluntary codes in the United States

For many years, and especially since 2022, the U.S. federal government has issued Presidential Executive Orders, voluntary frameworks and reports, and agency-level enforcement actions and guidance to set priorities and shape AI governance. States have also issued guidance and voluntary codes of conduct.

Presidential Executive Orders and Official Statements

In addition to the December 2025 Executive Order described above, the Trump Administration has issued other orders and documents focused on AI, the most significant of which are described below.

In January 2025, the Trump Administration issued EO 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked an executive order from the Biden Administration that had focused in part on civil rights and algorithmic discrimination. The new EO called for the elimination or revision of prior AI-related policies deemed inconsistent with promoting innovation and leadership in the U.S. It emphasized the development of AI systems that are “free from ideological bias or engineered social agendas,” and directed agencies to align their policies accordingly within 180 days.

In July 2025, the White House released “America’s AI Action Plan,” which establishes a strategic framework for achieving U.S. global dominance in AI. The plan identifies over 90 federal policy actions across three pillars: accelerating AI innovation through deregulation and support for open-source models; building American AI infrastructure, including energy capacity and semiconductor manufacturing; and leading in international AI diplomacy while securing strategic advantages over adversaries. The plan emphasizes removing regulatory barriers that hinder private sector innovation, empowering American workers to benefit from AI opportunities, and ensuring AI systems reflect American values and free speech principles.

Voluntary AI-related frameworks

In parallel, voluntary frameworks continue to guide ethical and responsible AI development. Most notably:

  • AI Bill of Rights (October 2022): Issued by the White House Office of Science and Technology Policy (OSTP) during the Biden Administration, the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” is a set of principles aimed at guiding ethical AI use and protecting the public from harmful AI practices. While not enforceable, its core principles have influenced corporate ethics policies and state-level legislation. The Trump Administration has moved away from the principles expressed therein.
  • NIST AI Risk Management Framework (AI RMF 1.0) (January 2023): This voluntary, non-binding framework, released by the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, is designed to mitigate AI risks. Widely adopted by both private companies and government agencies as a best-practice guide, the AI RMF encourages organizations to assess and mitigate risks based on the context and potential impact of the AI system. Notably, the Trump Administration, through the White House’s July 2025 AI Action Plan, recommends that NIST revise the AI RMF 1.0 to remove references to certain topics, including misinformation, DEI, and climate change.
  • NIST Generative AI Profile (July 2024): NIST released this voluntary guide as a supplement to the RMF. It tailors the RMF’s core principles – “map,” “measure,” “manage,” and “govern” – to the risks of generative AI, such as misinformation, deepfakes, and IP concerns. It offers over 400 recommended actions across the generative AI lifecycle and emphasizes stakeholder engagement, transparency, and responsible deployment.
  • NIST AI 100-4 (November 2024): In furtherance of the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, NIST issued the report “Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency,” a technical overview of methods to increase transparency and reduce the risks associated with AI-generated content. It provides foundational guidance for developing future standards and applies its concepts to the AI RMF. It aims to improve trust in digital media by examining technical approaches to content authentication, provenance tracking, synthetic content detection, and the prevention of harmful AI-generated materials (a simplified illustration of provenance tracking appears after this list).
  • NIST Cybersecurity Framework AI Profile (December 2025): Issued as a preliminary draft, NIST’s Cyber AI Profile provides guidelines for managing cybersecurity risks associated with AI systems and for leveraging AI to improve cybersecurity capabilities. It applies the core functions of the NIST Cybersecurity Framework (CSF) 2.0 to help organizations strategically adopt AI while addressing emerging cybersecurity risks. It organizes its guidance into three focus areas: securing AI components, using AI for cyber defense, and thwarting AI-enabled attacks.
  • NIST Possible Approach for Evaluating AI Standards Development (January 2026): NIST published the contractor report, “A Possible Approach for Evaluating AI Standards Development,” a conceptual paper proposing a framework to measure the effectiveness and impact of AI standards. While the report presents a non-prescriptive approach intended to foster discussion, it introduces a formal “theory of change” model to help stakeholders evaluate how AI standards achieve goals such as promoting innovation and public trust. It outlines a process for identifying the inputs, activities, outputs, and outcomes of standards development and measuring their impact against a “counterfactual,” or what would have happened in the absence of the standard.
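
Although NIST AI 100-4 is a survey rather than a specification, the basic mechanic behind provenance tracking and content authentication can be illustrated briefly. The Python sketch below is purely illustrative and is not drawn from the report: it binds metadata about a piece of content to a cryptographic hash and signs the record, so that later tampering with either the content or its metadata becomes detectable. The signing key, function names, and “generator” field are hypothetical, and production provenance systems (for example, those built on the C2PA standard) rely on public-key signatures and standardized manifests rather than a shared symmetric key.

    import hashlib
    import hmac
    import json

    # Hypothetical shared secret for illustration only; real provenance
    # systems use public-key signatures, not a symmetric key.
    SIGNING_KEY = b"example-key-for-illustration-only"

    def attach_provenance(content: bytes, generator: str) -> dict:
        """Bind metadata to a content hash and sign the resulting record."""
        record = {"sha256": hashlib.sha256(content).hexdigest(),
                  "generator": generator}
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        return record

    def verify_provenance(content: bytes, record: dict) -> bool:
        """Check that neither the metadata nor the content has been altered."""
        claimed = {k: v for k, v in record.items() if k != "signature"}
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        if not hmac.compare_digest(expected, record["signature"]):
            return False  # the provenance record itself was tampered with
        return claimed["sha256"] == hashlib.sha256(content).hexdigest()

    media = b"example AI-generated image bytes"
    rec = attach_provenance(media, generator="example-model-v1")
    assert verify_provenance(media, rec)          # intact content verifies
    assert not verify_provenance(b"edited", rec)  # altered content fails

Synthetic content detection, the report’s other focus, is a statistical classification problem rather than a cryptographic one and is not sketched here.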

Federal agency action

Several federal agencies are also leveraging their statutory authorities to address emerging risks, ensure compliance, and hold organizations accountable for the misuse or misrepresentation of AI technologies. Enforcement actions by the Federal Trade Commission (FTC), Securities and Exchange Commission (SEC), Department of Justice (DOJ), Food and Drug Administration (FDA), and Department of Health & Human Services (HHS) have helped shape responsible AI practices. These actions span a range of issues – from consumer protection and investor transparency to employment discrimination and medical device safety. The following outlines the roles of some of the key federal agencies in AI oversight and highlights their regulatory focus areas.

FTC

The FTC’s mission is to protect consumers and promote fair competition. The agency has targeted deceptive practices and misleading claims about AI – often referred to as “AI washing” – and has brought numerous enforcement actions against companies that exaggerate the capabilities of their AI systems or falsely market products as AI-powered to gain consumer trust. The FTC has also focused its enforcement on privacy issues in AI systems and on the misuse of generative AI for scams and fake reviews. In addition, the agency has explored antitrust issues relating to algorithmic pricing and the market for cloud computing.

SEC

The SEC’s regulatory focus on AI centers on ensuring transparency, managing conflicts of interest, and protecting investors. It requires firms to clearly disclose how AI is used, particularly when it influences investment decisions or client interactions, and it polices false or misleading statements about AI capabilities made to investors or clients. The SEC requires compliance with existing securities laws, applying a technology-neutral, risk-based approach to oversight. Like the FTC, the SEC has brought enforcement actions relating to “AI washing.”

DOJ

The DOJ enforces a broad array of federal criminal and civil laws and has intensified its focus on misconduct related to AI, particularly “AI washing.” In April 2025, the DOJ, working in parallel with the SEC, brought securities and wire fraud charges against the former CEO of a technology startup for allegedly defrauding investors of over USD 42 million by falsely claiming his company used advanced AI when its services were actually performed manually. This enforcement posture underscores the significance of the DOJ’s late 2024 guidance on how companies should manage risks associated with AI and other emerging technologies: in certain cases, when considering punishment for criminal wrongdoing, federal prosecutors will weigh the efficacy of a company’s relevant compliance program under this guidance. The DOJ has also brought other enforcement actions involving AI-related mistakes and misuse, sometimes in coordination with agencies like the SEC. Because the DOJ also enforces civil rights laws, it has signaled that AI systems used in areas like housing, employment, and lending must comply with anti-discrimination statutes.

FDA 

The FDA plays a central role in regulating AI in both the medical device and drug development contexts, proactively establishing regulatory infrastructure to ensure compliance and safety. In January 2025, the agency released a draft guidance, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products,” which introduces a risk-based credibility assessment framework for AI models used in this context. It outlines a seven-step process for assessing AI model credibility, discusses challenges such as data quality and algorithmic bias, and highlights the need for life cycle maintenance of AI models to ensure their continued reliability. The FDA has also adopted a separate risk-based framework for the regulation of Software as a Medical Device (SaMD), focusing on the intended use of the software and the potential impact on patient health, which includes evaluating the software’s clinical functionality, reliability, and performance. The FDA strongly encourages sponsors to engage with the agency early in the development process to discuss the use of AI in the context of drug development.

HHS

HHS, through its Office for Civil Rights (OCR), plays a central role in governing the use of AI and other advanced technologies that implicate protected health information. OCR administers and enforces the HIPAA Privacy, Security, and Breach Notification Rules, and has increasingly applied these authorities to account for evolving technological and cybersecurity risks. In particular, OCR has moved to modernize HIPAA Security Rule requirements to reflect changes in the digital health ecosystem, explicitly citing the growing sophistication of cyber threats, the expanded use of automated and data-intensive systems, and the need for stronger safeguards around electronic protected health information.
