Dive Brief:
- Effective Feb. 1, 2026, Colorado deployers of “high-risk artificial intelligence systems,” including employers and other entities involved in the provision or denial of employment opportunities, must take “reasonable care” to protect consumers from algorithmic discrimination, according to a law signed Friday by Gov. Jared Polis.
- Under the law, Senate Bill 24-205, deployers must create AI risk management policies and programs. They also must complete impact assessments for all AI systems annually and within 90 days after any modifications are made. Consumers must be notified about the AI’s deployment, and deployers must publish statements disclosing the AI systems they deploy and the information those systems collect.
- Employers that use AI must give consumers the opportunity to correct inaccurate personal data processed by an AI system and to appeal adverse consequential decisions made by one. If a deployer discovers that algorithmic discrimination has occurred, it must report the discovery to the state attorney general within 90 days.
Dive Insight:
Colorado’s law defines algorithmic discrimination to include any condition in which the use of AI results in unlawful differential treatment or impact that disfavors an individual or group on the basis of a wide range of characteristics, including age, color, disability, ethnicity and race, among others.
However, this definition excludes the offer, license or use of an AI system for the sole purpose of a developer or deployer’s self-interest to identify, mitigate or prevent discrimination; ensure federal law compliance; or expand applicant pools to increase diversity or redress historical discrimination.
Additionally, some of the law’s requirements do not apply when the deployer employs fewer than 50 full-time employees and does not use its own data to train the AI; the AI system meets certain exemption criteria; and the deployer makes an impact assessment of the AI available to consumers.
If Colorado’s attorney general brings an enforcement action under the law, deployers may have an affirmative defense if they discover and cure the alleged violations or are otherwise in compliance with a nationally or internationally recognized AI risk management framework, such as the one published by the National Institute of Standards and Technology. Deployers bear the burden of demonstrating that they meet these requirements, however.
The Centennial State’s law joins a small but growing list of AI laws in the U.S. At least three other jurisdictions, including Illinois, Maryland and New York City, have passed laws specifically regulating AI use in hiring.
In a statement accompanying the bill signing, Polis wrote that the law “is among the first in the country to attempt to regulate the burgeoning [AI] industry on such a scale.” He added that he was concerned about the effects the law could have on AI development in Colorado, and said that a “needed cohesive federal approach” could preempt the state’s law.
“Stakeholders, including industry leaders, must take the intervening two years before this measure takes effect to fine tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado that can improve the lives of individuals across our state,” Polis said.