On May 17, 2024, Colorado Governor Jared Polis signed into law Senate Bill 24-205, known colloquially as the Colorado Artificial Intelligence Act (hereinafter the CAIA).1 (The legislation is formally titled “An Act Concerning Consumer Protections for Interactions with Artificial Intelligence,” but many practitioners refer to it as the “Colorado AI Act” or the “Colorado Artificial Intelligence Act.”) This legislation, set to take effect on February 1, 2026, makes Colorado the second U.S. state to enact a major artificial intelligence consumer protection law, reflecting another significant step toward state-level regulation of AI. Colorado’s law follows one enacted in Utah in March 2024,2 as well as draft regulations under the California Privacy Protection Agency (CPPA) (see infra note 39) and a New York City law passed in 2021 affecting automated employment decisions.3 However, the CAIA imposes farther-reaching requirements, including a new general duty of care for developers and deployers of AI to protect individuals from algorithmic discrimination—which Colorado defines as any differential “treatment or impact” resulting from the use of an artificial intelligence system—on the basis of protected characteristics.4 The CAIA has been compared to the European Union’s recent AI Act, although it is less stringent and more narrowly drawn in several key ways.5 The CAIA governs predictive artificial intelligence systems that make decisions, not newer generative artificial intelligence systems, like ChatGPT, that create content.
In the absence of congressional action, Colorado’s law may help to set the tone for predictive artificial intelligence regulation nationwide, and it may shape the behavior of developers and deployers across state lines as they seek compliance with Colorado’s requirements. Practitioners note that Colorado’s law may influence other states’ pending legislation,6 and that disputes concerning its requirements may first arise in the employment context.7
Definitions and Obligations
The CAIA’s goal is to protect individuals from algorithmic discrimination by artificial intelligence systems operating in Colorado. Specifically, it protects people from artificial intelligence systems that are “high-risk” because they make, or substantially help to make, “consequential decisions” regarding humans.8 Consequential decisions include the decision to provide or deny education, employment, lending, government services, health care, housing, insurance, or legal services.9 Under the CAIA, it is unlawful for the use of an AI system to cause discriminatory treatment of, or a disparate impact on, individuals on the basis of a protected characteristic with respect to a consequential decision.10 Artificial intelligence systems are not considered “high-risk” if they are narrowly drawn to a particular purpose or do not replace a human assessment. Also expressly excluded are the applications commonly included on most devices, such as calculators and anti-malware and anti-virus protections, to name a few.11 Notably, the CAIA’s requirements apply both to the companies that develop artificial intelligence systems and to the companies that deploy them as end users.12 There are some exceptions to certain requirements for small-business deployers; otherwise, the law applies across the board.13
Developer and Deployer Obligations
The CAIA creates and imposes a general duty of reasonable care for both developers and deployers of high-risk artificial intelligence systems.14 For developers, this duty extends to any foreseeable risks of algorithmic discrimination from the intended and contracted uses of their AI product.15 This provision is the first such duty established by any state law.16 The law also requires developers to provide a detailed product description17 and deployers to disclose their use of high-risk AI systems and the known or reasonably foreseeable risks from their use, and to draft impact assessments;18 it further requires deployers to provide individuals with additional disclosures before a “consequential decision” is made and in the event of an adverse decision;19 it requires that individuals be informed of a statutory right (granted by the Colorado Privacy Act20) to opt out from having their personal data processed by an AI system;21 it grants individuals a right to an explanation of an adverse consequential decision, a right to correct any inaccurate information, and a right to appeal the matter for human review;22 and it requires deployers to undertake risk management and other best practices, such as an annual review for algorithmic discrimination.23 By complying with the CAIA’s documentation, disclosure, risk management, and other listed requirements, developers and deployers establish a rebuttable presumption that they have used reasonable care with respect to their high-risk artificial intelligence systems.24
Enforcement and Penalties
The CAIA is enforced exclusively by the Colorado Attorney General; violations of its requirements are deemed to be an unfair trade practice under the Colorado Consumer Protection Act,25 with penalties of up to $20,000 per violation.26 Developers and deployers have an affirmative defense if they discover and cure violations on their own or through feedback they solicit from users, so long as they are also otherwise in compliance with the latest Artificial Intelligence Risk Management Framework (AI RMF) published by the National Institute of Standards and Technology (NIST) or another designated framework.27 The Colorado Attorney General may also further develop the CAIA’s framework through rulemaking regarding developer documentation, impact assessments, notice and disclosure requirements, acceptable risk management standards, and the requirements for affirmative defenses and establishing rebuttable presumptions.28
Conclusion
Because algorithmic discrimination laws are so new, Colorado’s approach could serve as a model for other states29 and influence future federal legislation or regulation.30 On the one hand, the bill is a strong first step toward ensuring continued human control and human verification over automated decisions that affect people, and toward mitigating forms of bias and discrimination that may exist in AI systems. On the other hand, the bill imposes new restrictions and state-of-the-art requirements on all businesses regarding a still-emerging field of technology that is highly dynamic and not perfectly understood by the general public.
Opponents of the bill concede that regulation may be needed but argue that the bill is not the correct vehicle. The U.S. Chamber of Commerce objected to the legislation, saying that it may hamper small business adoption of AI and that a gap-filling approach would be better than the CAIA’s broad application.31 Similar objections came from tech industry groups like the Chamber of Progress and the Consumer Technology Association,32 who urged Governor Polis to veto SB 205 in favor of strengthening other existing consumer protection and civil rights laws to target discrimination itself. These groups cited the difficulty of “pinpointing” where discriminatory outcomes originate—noting, for example, that they could be buried within an AI model’s training data.33
Advocates from groups like the Center for Democracy & Technology reply that the CAIA reaffirms a “central tenet of our civil rights laws” in establishing a discriminatory impact standard, and that it does not necessarily create new or higher standards for companies to prevent their AI decisions from being discriminatory.34 For example, they note, the federal Equal Employment Opportunity Commission (EEOC) has already taken the position that both deployers and developers should be liable under Title VII for the disparate impact of AI decisions in employment discrimination,35 and the Federal Trade Commission has taken a similarly strong posture.36 Other groups, such as Consumer Reports, urged the signing of the bill while calling for further strengthening of particular provisions, citing potential “loopholes” in the bill’s “overbroad” protection of trade secrets and its exemption of systems performing undefined “narrow procedural tasks” from the definition of “consequential decisions.”37
Governor Polis signed the bill with reservations, stating his hope that the CAIA would be revised during the two years before its implementation, or that it would help to jumpstart an “overdue” national conversation on regulation and lead to preemptive federal action creating an “even playing field” across states.38 His signing statement expressed particular concern with the disparate impact standard.22
Governor Polis’s description of the legislation as a conversation starter may be apt. Colorado’s law joins Utah’s, as well as draft regulations by the California Privacy Protection Agency (CPPA)39 and a raft of proposed bills in statehouses across the country.40 The CAIA’s sponsors stated that their goal was to “lay a foundation,” one which could presumably be refined and built upon—in Colorado and elsewhere. An evolving patchwork of regulation may also incentivize greater federal attention and possible preemption. However, federal legislation is never a guarantee, and states may chart their own path as they seek to simultaneously develop modern, AI-cognizant consumer protections alongside favorable business conditions for the technology industry.
The author would like to thank Steiger Fellow Ivan Garcia for significant contributions to this piece. This article was authored by Alex Siegal, a second-year Harvard Law student, and Steiger Fellow Ivan Garcia as part of their clerkships at the National Association of Attorneys General.
ENDNOTES
- Act of May 17, 2024, ch. 198, 2024 Colo. Sess. Laws 1199 (codified at Colo. Rev. Stat. § 6-1-1701 et seq. (2024)).
- Act of Mar. 13, 2024, ch. 186, 2024 Utah Laws (codified in sections of titles 13, 63I, and 76 of Utah Code Ann. (LexisNexis 2024)).
- Local Law No. 144 of 2021, 2020 N.Y. City Council Int. No. 1894-A (codified at N.Y. City Admin. Code § 20-870 et seq. (2024)).
- Colo. Rev. Stat. § 6-1-1701(1)(a).
- See, e.g., Adam Aft et al., North America: From Brussels to Boulder – Colorado enacts comprehensive AI law on the heels of European Union’s AI Act with significant obligations for businesses and employers, Baker McKenzie (May 22, 2024); Marian A. Waldmann Agarwal et al., Navigating New Frontiers: Colorado’s Groundbreaking AI Consumer Protection Law, Morrison Foerster (May 31, 2024).
- Cf. Artificial Intelligence 2024 Legislation, Nat’l Conf. of State Legislatures (Jun. 3, 2024).
- Cf. Kevin Angle & Rachel Marmor, Colorado Legislature Approves AI Bill Targeting “High-Risk” Systems and AI Labeling, Holland & Knight (May 16, 2024).
- See Colo. Rev. Stat. § 6-1-1701(1)–(9).
- See id. § 6-1-1701(3).
- See id. §§ 6-1-1701, 6-1-1702, 6-1-1703.
- See id. § 6-1-1701(9).
- See id. §§ 6-1-1702, 6-1-1703.
- See Colo. Rev. Stat. § 6-1-1703(6). Some practitioners note that the breadth of the law is unusual compared to a typical “‘comprehensive’ privacy law,” which often may exempt healthcare or financial institutions. See, e.g., Angle & Marmor, supra note 7; see also Vivek Mohan et al., Colorado’s Mile High AI Act: 6 Key Takeaways, Gibson Dunn (May 10, 2024). However, the CAIA’s purpose extends beyond privacy: it targets healthcare and lending decisions explicitly as “consequential decisions” in which algorithmic discrimination could pose high-risk harms.
- See Colo. Rev. Stat. §§ 6-1-1702(1), 6-1-1703(1).
- See id. § 6-1-1702(1).
- See Tatiana Rice et al., The Colorado Artificial Intelligence Act: FPF U.S. Legislation Policy Brief 1 (2024) (Future of Privacy Forum policy brief); see also Mohan et al., supra note 13; Angle & Marmor, supra note 7; Mallory Culhane, Two Unlikely States Are Leading The Charge on Regulating AI, Politico: The Fifty (May 15, 2024).
- See Colo. Rev. Stat. § 6-1-1702(2), (5). The CAIA provides that the developer must disclose the intended purpose of the AI system and its intended uses and benefits, as well as “known or reasonably foreseeable limitations of the high-risk system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the high-risk system.” Under its documentation requirement, a developer must sufficiently describe the type of data used to train the AI system, how the AI system was evaluated for performance and relevant information, the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases and appropriate mitigation, the intended outputs, and the measures the developer has taken to mitigate reasonably foreseeable risks of algorithmic discrimination. A developer must also provide deployers and the Colorado Attorney General with information on how the developer has mitigated any known or reasonably foreseeable risks of algorithmic discrimination within 90 days of being informed of any such risks.
- See Colo. Rev. Stat. § 6-1-1703(3).
- See id. §§ 6-1-1703(4)–(5), (7), (9), 6-1-1704.
- See id. § 6-1-1306(1)(a)(I)(C).
- See id. § 6-1-1703(4).
- See id.
- See id. § 6-1-1703(2)–(3); see also id. § 6-1-1702(4) (requiring developers to provide information that will assist with risk management). The risk management policy and program must specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.
- See id. §§ 6-1-1702(1), 6-1-1703(1).
- See id. § 6-1-1706(1)–(2).
- See id. § 6-1-112(1).
- See id. § 6-1-1706(3).
- See id. § 6-1-1707.
- See, e.g., Culhane, supra note 16.
- Adam S. Forman et al., Colorado SB-24-205: On the Verge of Addressing AI Risk with Sweeping Consumer Protection Law, Nat’l Law Rev. (May 17, 2024).
- See U.S. Chamber of Commerce Letter to Governor Polis, U.S. Chamber of Com. (May 7, 2024).
- See Marissa Ventrelli, Organizations Urge Colorado Governor to Veto AI Bill, Say It Will Hurt Small Businesses, Colo. Pol. (May 17, 2024).
- Letter from Kouri Marshall, Chamber of Progress, to Governor Jared Polis (May 10, 2024).
- Matt Scherer, Colorado’s Artificial Intelligence Act is a Step in the Right Direction. It Must be Strengthened, Not Weakened, Ctr. for Democracy & Tech. (May 22, 2024).
- See EEOC Releases New Resource on Artificial Intelligence and Title VII, U.S. Equal Emp. Opportunity Comm’n (May 18, 2023) (press release linking to new technical assistance document).
- See Kirk J. Nahra et al., FTC Hosts Tech Summit on Artificial Intelligence, WilmerHale (Feb. 1, 2024) (summarizing relevant aspects of the January 2024 FTC Tech Summit).
- Grace Gedye, Consumer Reports Backs Signing of High-Risk AI Bill, Calls for Colorado General Assembly to Strengthen It Before It Goes into Effect, Consumer Reps. (May 18, 2024).
- Letter from Governor Jared Polis to the Colorado General Assembly (May 17, 2024) (signing statement).
- See generally Cal. Privacy Protection Agency, Draft Automated Decisionmaking Technology Regulations (2023).
- Cf. Artificial Intelligence 2024 Legislation, supra note 6.