Artificial Intelligence in the United States

Prohibited activities in the United States

As noted, the U.S. has not enacted a comprehensive federal law that explicitly outlines prohibited uses of AI. However, certain AI-related activities are restricted or prohibited under existing laws and proposed legislation. Enforcement actions have been taken under broader legal authorities such as consumer protection, civil rights, and securities laws. 

At the federal level, two of the many proposed bills aiming to prohibit specific AI practices are:

  • The Preventing Algorithmic Collusion Act (2025), which would ban the use of pricing algorithms – including those powered by AI – that incorporate nonpublic competitor data to facilitate price-fixing
  • The Transparency and Responsibility for Artificial Intelligence Networks Act (TRAIN Act) (2025), which would create an administrative subpoena process allowing copyright owners to compel AI developers to disclose copies of, or records sufficient to identify, copyrighted works used to train generative artificial intelligence models

While these bills have not become law, federal agencies have applied existing statutes to deceptive or harmful AI practices. For example:

  • The FTC has taken enforcement action against companies for “AI washing” (misleading claims about AI capabilities) and is studying the business practices of companies that offer companion chatbots, focusing on their effect on children
  • The SEC has charged firms for misrepresenting the role of AI in investment strategies
  • The DOJ has pursued criminal charges in cases involving fraudulent claims about AI functionality

At the state level, some jurisdictions have enacted laws that explicitly prohibit certain AI uses, such as:

  • Colorado’s AI Act, which prohibits the deployment of high-risk AI systems without reasonable safeguards to prevent algorithmic discrimination
  • Utah’s AI Policy Act, which prohibits the undisclosed use of generative AI in regulated occupations (e.g., legal, medical), requires clear disclosure when AI is used in consumer interactions, and holds individuals liable for AI-driven misconduct under state consumer protection laws
  • New York City’s Local Law 144, which prohibits the use of automated employment decision tools without prior bias audits and candidate notification
  • Laws in California and Illinois, which restrict the unauthorized use of AI-generated digital replicas and require transparency in political advertising

Overall, while the U.S. lacks a unified list of federally prohibited AI activities, a growing patchwork of federal enforcement actions and state-level statutes is continuing to define the boundaries of acceptable AI use.
