Artificial Intelligence in the United States

High-risk AI in the United States

Unlike in the EU, the risk categorization of AI technologies in the U.S. is not defined by a single, harmonized legislative or regulatory taxonomy. Whether a specific AI technology or use is considered “high-risk” will depend on, and will matter only if, jurisdiction-specific laws or rules include a relevant definition. Currently in the U.S., the Colorado AI Act is the only legislation that adopts a risk stratification system that categorizes certain uses of AI as “high-risk.”

The Colorado AI Act defines “high-risk” AI systems as those that make, or significantly contribute to making, a “consequential decision.” Under the Act, a consequential decision has a material legal or similarly significant effect on the provision, denial, cost, or terms of:

  • Education enrollment or opportunity
  • Employment or an employment opportunity
  • A financial or lending service
  • An essential government service
  • Healthcare services
  • Housing
  • Insurance, or
  • Legal services.

The definition excludes AI systems that are intended to perform a narrow procedural task, or to detect decision-making patterns or deviations from prior decision-making patterns, so long as the system is not intended to replace or influence a previously completed human assessment without sufficient human review.
