Artificial Intelligence in the United States

Controls on generative AI in the United States

As the U.S. does not have a comprehensive federal law regulating generative AI, controls are instead emerging through a combination of enforcement actions, state and local legislation, and agency rules or guidance.

At the federal level, several agencies, including the FTC and SEC, have taken enforcement actions against deceptive claims about AI. The FTC will enforce the TAKE IT DOWN Act, which covers certain types of deepfakes, and has issued rules on impersonation scams and fake reviews that extend to the use of generative AI tools.

At the state level, several jurisdictions have enacted targeted controls on generative AI. These laws include transparency obligations on AI developers, prohibitions on AI-generated deepfakes, disclosure requirements for consumer-bot interactions, and restrictions on chatbot use for mental health or companionship, among other things. Three examples are:

  • California’s Generative AI Training Data Transparency Act, which requires disclosure of high-level details about the training data used in generative AI systems
  • Colorado’s AI Act, which includes provisions requiring developers and deployers of high-risk AI systems, including generative models, to exercise reasonable care to prevent algorithmic discrimination
  • Utah’s AI Policy Act, which prohibits the undisclosed use of generative AI in regulated occupations and mandates clear disclosure when AI is used in consumer interactions
