Artificial Intelligence in the United States

AI Laws and Proposed Laws

In the U.S., artificial intelligence (AI) is regulated at both the federal and state levels. While the U.S. lacks a unified federal AI law, the states have been active in modifying existing laws to account for AI and, in some cases, passing targeted AI-specific legislation.

This section outlines the major enacted laws at both federal and state levels, highlighting how states have taken the lead in adapting existing legal frameworks and introducing AI-specific laws in the absence of a comprehensive federal approach.

Federal AI legislation landscape

The federal regulatory landscape for AI remains limited in scope. Although a significant volume of AI-related legislation has been introduced in Congress, only one standalone statute intended to regulate the posting and distribution of AI-generated content has been enacted to date: 

  • Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act): Although the statute does not regulate AI systems directly, it requires online platforms to delete flagged non-consensual intimate imagery, including AI-generated deepfakes, within 48 hours. The law creates criminal penalties for distributing such content and empowers the Federal Trade Commission (FTC) to enforce compliance.

Accordingly, federal policy is more defined by proposals than binding obligations, and most operational guidance continues to come from executive actions and agency-level enforcement.

Potential federal framework

On 11 December 2025, President Trump signed an Executive Order (EO) aimed at creating a national policy framework for AI to ensure American dominance in the field. The EO seeks to replace the existing patchwork of state laws—which the Trump Administration views as burdensome and detrimental to innovation—with a unified standard.

To achieve this, the EO outlines a two-pronged strategy: challenging existing state AI laws in court and establishing a new federal regulatory framework that would preempt them. A new task force, led by the Attorney General (AG), will be created to raise legal challenges to state laws that are viewed as unconstitutional or otherwise conflicting with federal regulations. Additionally, the EO directs the Secretary of Commerce to evaluate state AI legislation. The proposed federal framework will focus on key areas such as child safety, censorship prevention, and copyright protection, while preempting conflicting state-level regulations.

State-level AI legislation landscape

The lack of comprehensive federal AI legislation has led to a proliferation of state-level laws and regulations, with many more bills working their way through state legislatures in 2026. Some of these laws establish frameworks and requirements that impact both public- and private-sector use of AI technologies.

In 2025, all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced AI-related legislation. According to the National Conference of State Legislatures, 38 states adopted or enacted approximately 100 AI‑related measures. What materially changes in 2026 is enforceability: several major state AI laws take effect, significantly increasing the need for cross‑state governance frameworks, comprehensive inventories, and demonstrable evidence of controls.

These laws and regulations impose transparency and disclosure obligations, prohibit deceptive uses of generative AI, and seek to mitigate algorithmic discrimination in certain domains. The following list includes the principal state laws shaping AI regulation in the U.S., along with a few examples of narrower AI-focused laws:

California

  • California has enacted significant AI-related legislation, establishing new requirements for transparency, safety, and accountability across various AI applications. California’s SB53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), was signed into law on 29 September 2025 and took effect on 1 January 2026. It requires large frontier AI developers to publish transparency reports and annually update a public frontier AI safety framework describing how they assess and mitigate “catastrophic risk,” secure unreleased model weights, and respond to critical safety incidents. Further, California’s AB2013, the Generative Artificial Intelligence: Training Data Transparency Act (TDTA), was signed into law on 28 September 2024 and took effect on 1 January 2026. The TDTA requires AI developers to publicly post a high-level summary of the datasets used to train generative AI systems or services made available to the public since January 2022, enumerating specific categories of required disclosures.
  • During 2025, the California state legislature continued to pass many AI-related bills, most of which took effect on 1 January 2026. Signed into law on 13 October 2025, the California AI Transparency Act (AB853) requires developers of generative AI to embed “provenance data” into digital content to verify its authenticity and origin. (This law has staggered effective dates through 1 January 2028.) AB489, signed into law on 11 October 2025, prohibits the use of AI to falsely imply that advice or services are being provided by a licensed healthcare professional. Further, enacted on 13 October 2025, SB243 imposes specific safety protocols on “companion bots,” requiring them to prevent harmful conversations and regularly remind users that they are interacting with an AI. Other new laws, also enacted on 13 October 2025, create liability for services that enable deepfake pornography (AB621) and bar defendants from claiming an AI “autonomously caused the harm” in civil actions (AB316).

Colorado 

  • Colorado enacted the Consumer Protections for Interactions with AI Act (Colorado AI Act) in May 2024, and it is scheduled to take effect on 30 June 2026. It is recognized as the first comprehensive statute in the U.S. specifically targeting “high-risk” AI systems. The law requires developers and deployers of qualifying AI applications to exercise reasonable care in preventing algorithmic discrimination, mandates clear documentation of AI activities, and holds entities accountable for the outputs of their AI systems. By categorizing certain AI deployments as “high-risk,” the Colorado AI Act imposes heightened responsibilities in critical areas such as employment, healthcare, lending, housing, and government services.

Illinois

  • In August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act, which imposes significant restrictions on the use of AI in mental healthcare. The law, effective immediately, broadly prohibits any entity without a professional license from offering therapy services, a rule that explicitly includes services delivered via AI, and bars licensed healthcare professionals from delegating therapeutic decisions to AI systems.

Kentucky

  • Signed and effective on 24 March 2025, Kentucky’s AI Governance Act (SB4) establishes a comprehensive framework for AI use within state government. It calls for adoption of uniform AI policy standards and creates a governance committee to oversee ethical, transparent, and responsible AI use across state agencies. It includes provisions for human oversight, public disclosure, and protection of personal and business information.

Nevada

  • On 5 June 2025, Nevada enacted AB406, which makes it a deceptive trade practice to misrepresent the capabilities of AI in mental healthcare. The law prohibits offering AI systems that are programmed to perform services that would constitute the practice of professional mental healthcare if done by a person. Furthermore, providers are barred from marketing or otherwise representing that their AI systems are capable of delivering such care. AB406 took effect on 1 July 2025.

New York

  • New York enacted the Responsible AI Safety and Education (RAISE) Act on 19 December 2025, which establishes a comprehensive regulatory framework for developers of large-scale “frontier” AI models.[1] Effective on 1 January 2027, this law requires large developers to implement and publicly disclose a detailed “safety and security protocol” designed to mitigate the risk of “critical harm,” defined as events causing mass injury or over USD 1 billion in damages. It also requires developers to report any “safety incident” that demonstrates an increased risk of such harm to the state attorney general within 72 hours.
  • Further, New York enacted a first-of-its-kind law requiring advertisers to disclose the use of AI-generated individuals in commercial advertising on 11 December 2025. The law mandates a conspicuous disclosure when a “synthetic performer”—a digitally created asset made with generative AI to resemble a human who is not an identifiable person—is featured in a visual or audiovisual advertisement. This rule is narrowly targeted at AI-generated actors and does not apply to audio-only ads, deepfakes of real performers, or AI enhancements of real performers. This law takes effect on 9 June 2026.
  • Enacted on 11 December 2021, New York City’s Local Law 144 regulates the use of “automated employment decision tools” (AEDTs) in hiring and promotion decisions. Effective since 5 July 2023, the law imposes three core obligations on employers: they must conduct an annual independent bias audit to assess whether the tool has a disparate impact on candidates based on race, ethnicity, or sex; they must post a summary of the audit results publicly on their websites; and they must provide notice to candidates that an AEDT is being used and of their right to request an alternative screening process.

Texas

  • Texas enacted the Texas Responsible AI Governance Act (TRAIGA) on 22 June 2025, establishing foundational duties for state agencies, developers, and deployers of AI systems operating within Texas. The law went into effect on 1 January 2026 and prohibits state agencies from certain uses of social scoring and biometric data. Developers and deployers face prohibitions on the intentional misuse of AI for certain types of behavioral manipulation, unlawful discrimination, deepfakes, and infringement of constitutional rights. TRAIGA provides protections for organizations that follow recognized frameworks, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, as well as a 60-day cure period for violations, and the creation of a regulatory sandbox.

Utah 

  • Utah took a relatively comprehensive approach to AI oversight with the AI Policy Act (SB149), effective 1 May 2024. This legislation requires professionals in regulated occupations – such as law, medicine, and financial services – to disclose their use of generative AI tools during high-risk interactions, such as when providing sensitive advice or handling personal data. Additionally, consumers must be informed if they explicitly inquire whether they are interacting with AI.

A handful of other U.S. states have considered and rejected broad AI laws. In addition, numerous other states and localities have enacted specific statutes or municipal ordinances that regulate discrete aspects of AI. The following list includes several such examples:  

Maine

  • Maine enacted “An Act to Ensure Transparency in Consumer Transactions Involving Artificial Intelligence” (the Maine AI Chatbot Disclosure Act) on 12 June 2025. Effective 23 September 2025, the law establishes targeted disclosure requirements for AI‑driven interactions. It generally prohibits businesses (and other persons) from using an AI chatbot – or similar text‑ or voice‑based computer technology – in trade or commerce in a manner that may mislead or deceive a reasonable consumer into believing they are interacting with a human, unless the business provides a clear and conspicuous disclosure that the interaction involves AI.

Maryland

  • Maryland enacted HB820 on 20 May 2025, regulating how health insurance plans and related entities may use AI in coverage and treatment decisions made during utilization management and review. Effective 1 October 2025, the law requires covered entities to ensure that the AI tool’s determinations are grounded in the enrollee’s individual clinical information, do not replace the role of a health care provider, and are applied fairly and equitably without resulting in unfair discrimination.

Pennsylvania

  • On 7 July 2025, Pennsylvania enacted Act 35 (formerly SB649) to address the malicious use of AI-generated deepfakes. Effective 5 September 2025, the law establishes criminal penalties for generating (or creating and distributing) a forged digital likeness with intent to defraud or injure, or with knowledge and intent to facilitate fraud or injury by another – including where the actor knows or reasonably should know the audio or visual at issue is forged.

Illinois

  • Illinois enacted HB3773 on 9 August 2024, amending the Illinois Human Rights Act to regulate the use of AI in employment decisions, prohibiting discriminatory practices. Effective 1 January 2026, the law requires employers to provide notice to applicants and workers if they use AI for hiring, discipline, discharge, or other workplace-related purposes.

This continued surge in state legislative activity reflects a wide range of approaches and priorities – from establishing task forces to study AI’s impact to imposing specific obligations on companies deploying AI systems. This dynamic landscape underscores the growing importance of state-level action in the absence of federal guidance, and organizations are encouraged to closely monitor both enacted laws and pending legislation in the jurisdictions in which they operate.
