Artificial Intelligence in the United States

Human oversight in the United States

Human oversight of AI systems is not federally mandated in the U.S., but some states have passed related laws, particularly for high-risk applications.

At the federal level, the NIST AI RMF encourages organizations to implement human oversight mechanisms throughout the AI lifecycle. It defines oversight as the ability for humans to understand, monitor, and, when necessary, intervene in AI system operations. While not legally binding, the framework is widely adopted across industries and referenced in agency guidance.

Beginning under the Biden Administration, federal enforcement agencies such as the FTC and SEC have stressed the value of human accountability, particularly where AI is used to make decisions that affect consumers or investors. These expectations, however, are grounded in broader legal principles rather than AI-specific statutes.

At the state level, human oversight is more explicitly addressed in certain laws, such as:

  • Colorado’s AI Act, which requires deployers of high-risk AI systems to implement appropriate levels of human oversight to ensure the system operates as intended and does not result in algorithmic discrimination
  • California’s Physicians Make Decisions Act, which prohibits healthcare coverage denials made on the sole basis of an AI or algorithmic tool
  • New York City’s Local Law 144, which mandates that employers using automated employment decision tools conduct bias audits and provide human-readable explanations of how such tools may influence hiring decisions

These developments reflect a growing view among lawmakers and regulators that human oversight is key to ensuring accountability, safety, and fairness in AI use.
