Artificial Intelligence in the United States

Fairness / unlawful bias in the United States

As with transparency, there is no federal law in the U.S. that specifically addresses fairness, bias, or other forms of algorithmic discrimination in AI systems. Under the Biden Administration, federal agencies sought to address these issues by applying existing civil rights, employment, and consumer protection laws to AI use cases. That activity has since almost entirely ended, and agency-issued guidance on these subjects has in some cases been removed from public websites. Meanwhile, several states have enacted or proposed legislation directly addressing algorithmic discrimination. For example:

  • California’s Fair Employment and Housing Act applies to employers’ use of “[AI], algorithms, and other automated-decision systems” in employment decisions
  • Colorado’s AI Act prohibits the deployment of high-risk AI systems without reasonable safeguards to prevent algorithmic discrimination, with enforcement led by the AG
  • Illinois has enacted workplace AI legislation that prohibits the use of AI in hiring or employment decisions that could result in discrimination
  • New Jersey issued guidance clarifying that the New Jersey Law Against Discrimination (LAD) applies to “algorithmic discrimination” resulting from the use of AI and other decision-making tools, including in employment
  • New York City’s Local Law 144 requires annual bias audits for automated employment decision tools and mandates candidate notification

These state and local efforts, combined with prior federal activity, may reflect a growing, though not universally shared, belief that AI systems can perpetuate or amplify existing societal biases. They also show legal frameworks evolving to ensure fairness, particularly in domains such as employment, housing, and healthcare.
