Colorado AI Act (SB 24-205): What SaaS Companies Need to Know
Published March 16, 2026 · 18 min read
Compliance deadline approaching
The Colorado AI Act is now in effect as of February 1, 2026. Deployers must complete impact assessments by June 30, 2026. Run npx codepliant go to scan your codebase for AI services and generate NIST AI RMF aligned governance documentation.
Colorado Senate Bill 24-205, known as the Colorado AI Act, is the first comprehensive state-level AI regulation in the United States. Signed into law in May 2024, it took effect on February 1, 2026, and companies have until June 30, 2026 to complete their initial impact assessments. If your SaaS application uses AI to make or substantially influence decisions that affect Colorado residents, this law applies to you.
While the EU AI Act gets most of the attention, the Colorado AI Act is more immediately relevant for US-based SaaS companies. It creates concrete obligations around algorithmic discrimination prevention, impact assessments, transparency disclosures, and governance practices. And unlike federal AI guidance, it has enforcement teeth: the Colorado Attorney General can bring actions against non-compliant companies.
This guide covers what the Colorado AI Act requires, who it applies to, what the deadlines are, how to detect AI services in your codebase, and what your engineering and product teams need to do to comply.
What is the Colorado AI Act (SB 24-205)?
The Colorado AI Act regulates "high-risk artificial intelligence systems" — AI systems that make or are a substantial factor in making consequential decisions about Colorado residents. It applies to both developers (companies that build AI systems) and deployers (companies that use AI systems in their products or operations).
The Act focuses specifically on preventing algorithmic discrimination — the use of AI in ways that result in unlawful differential treatment based on protected characteristics including age, race, sex, disability, religion, sexual orientation, gender identity, and veteran status.
Unlike broader AI regulations, the Colorado AI Act is narrow in scope but deep in requirements. It does not attempt to regulate all AI — only high-risk systems that affect consequential decisions. But for those systems, the obligations are substantial.
Who does the Colorado AI Act apply to?
The Act applies to two categories of entities:
Developers
A developer is any entity that creates, codes, or substantially modifies an AI system. If you build AI features into your SaaS product, you are likely a developer under the Act. This includes companies that fine-tune foundation models, build custom ML models, or create AI-powered features that influence decisions.
Deployers
A deployer is any entity that uses a high-risk AI system to make consequential decisions. If your SaaS product is used by businesses in Colorado to make decisions about their customers or employees, both you and your customers may be deployers. This is critical for B2B SaaS: even if your company is not based in Colorado, if your customers use your AI features to make decisions about Colorado residents, the Act applies.
What counts as a consequential decision?
The Act defines consequential decisions as those with material legal or similarly significant effects on individuals in these areas:
Employment
Hiring, termination, promotion, compensation, performance evaluation, disciplinary actions. AI-powered resume screening, candidate ranking, and performance analytics all qualify.
Education
Admissions, financial aid, grading, disciplinary decisions, academic opportunity allocation. AI-driven tutoring systems that determine curriculum paths may qualify.
Financial services
Lending, credit, insurance underwriting, investment advice. AI-powered credit scoring, fraud detection that blocks transactions, and insurance pricing all qualify.
Housing
Tenant screening, rental pricing, mortgage qualification, property insurance. AI-powered tenant scoring tools and dynamic pricing algorithms qualify.
Healthcare
Treatment recommendations, insurance coverage decisions, resource allocation. AI diagnostic tools and triage systems qualify.
Legal services
Bail determinations, sentencing recommendations, case outcome predictions used to advise clients.
Government services
Benefits eligibility, licensing, permit approvals. AI systems used by government agencies to process applications.
Key deadlines for SB 24-205 compliance
February 1, 2026 — Act takes effect
The Colorado AI Act's provisions take effect and all obligations begin to apply. Companies should already have compliance programs in progress.
June 30, 2026 — Initial impact assessments due
Deployers must complete their first impact assessments for all high-risk AI systems in use. This is the hard compliance deadline most companies need to plan for.
Ongoing — Annual updates
Impact assessments must be updated at least annually, or whenever significant modifications are made to a high-risk AI system.
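This cadence can be expressed as a simple check. A minimal sketch, with field names of our own choosing (the Act does not prescribe a data model):

```typescript
// Decide whether a high-risk AI system is due for a new impact assessment:
// at least annually, or whenever a significant modification has occurred
// since the last assessment. Field names are illustrative, not statutory.
interface AssessmentState {
  lastAssessed: Date;           // when the most recent impact assessment was completed
  lastSignificantChange?: Date; // most recent significant modification, if any
}

const ONE_YEAR_MS = 365 * 24 * 60 * 60 * 1000;

function reassessmentDue(state: AssessmentState, now: Date): boolean {
  const annualDue = now.getTime() - state.lastAssessed.getTime() >= ONE_YEAR_MS;
  const changedSinceAssessment =
    state.lastSignificantChange !== undefined &&
    state.lastSignificantChange > state.lastAssessed;
  return annualDue || changedSinceAssessment;
}
```

In practice you would run a check like this against your AI system inventory on a schedule, and treat a model swap, retraining, or major prompt change as a "significant modification."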
Obligations for AI developers
If you build AI systems (including AI features in your SaaS product), the Colorado AI Act requires:
1. Reasonable care to protect against algorithmic discrimination
This is the Act's core requirement. You must use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination. "Reasonable care" is assessed based on the totality of circumstances, including the size and complexity of the AI system, the nature and severity of potential harms, and the feasibility and cost of mitigation measures.
Practically, this means testing your AI systems for bias across protected characteristics, documenting the training data and its known limitations, implementing safeguards against discriminatory outputs, and monitoring system behavior in production.
2. Documentation and disclosure
Developers must make available to deployers and other developers:
- A general description of the AI system, including its intended uses and known limitations
- Documentation of the training data, including known biases or gaps
- The types of data the system processes and outputs it generates
- Known risks of algorithmic discrimination and mitigation measures
- How the system should be used, monitored, and maintained
This documentation enables deployers to conduct their own impact assessments and implement appropriate safeguards.
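One way to keep this documentation consistent across AI systems is to treat it as a typed record. A sketch under our own naming (the Act lists the required topics but does not prescribe a format):

```typescript
// Developer-to-deployer documentation as a typed record. Field names are
// illustrative; the Act specifies the content areas, not a schema.
interface DeveloperDisclosure {
  systemDescription: string;     // general description and intended uses
  knownLimitations: string[];
  trainingDataSummary: string;   // including known biases or gaps
  inputDataTypes: string[];
  outputTypes: string[];
  discriminationRisks: string[]; // known risks of algorithmic discrimination
  mitigations: string[];
  operatingGuidance: string;     // how to use, monitor, and maintain the system
}

// Hypothetical example for an AI hiring feature.
const example: DeveloperDisclosure = {
  systemDescription: "Resume-ranking model for candidate screening",
  knownLimitations: ["English-language resumes only"],
  trainingDataSummary: "Historical hiring data, 2018-2023; underrepresents part-time applicants",
  inputDataTypes: ["resume text", "job description"],
  outputTypes: ["ranked candidate list", "match score"],
  discriminationRisks: ["proxy features correlated with age"],
  mitigations: ["feature audit", "quarterly bias testing"],
  operatingGuidance: "Use as a screening aid only; require human review of rejections",
};
```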
3. Public disclosure on your website
Developers must publish a statement on their website describing the types of high-risk AI systems they develop and how they manage known or foreseeable risks of algorithmic discrimination. Codepliant's AI Disclosure Generator can help you create this statement based on the actual AI services detected in your codebase.
Obligations for AI deployers
If your company uses high-risk AI systems (including AI features in SaaS products you subscribe to), you have deployer obligations:
1. Risk management policy
Deployers must implement a risk management policy and governance framework for high-risk AI systems. This includes designating personnel responsible for AI governance, establishing processes for identifying and mitigating discrimination risks, and training employees who interact with high-risk AI systems. See our AI Governance Framework Generator for NIST AI RMF aligned documentation.
2. Impact assessments
This is the most substantial deployer obligation. Before deploying a high-risk AI system — and at least annually thereafter — you must complete an impact assessment that includes:
- The purpose, intended use cases, and deployment context of the AI system
- An analysis of whether the system poses risks of algorithmic discrimination
- The categories of data processed by the system
- Metrics used to evaluate system performance and fairness
- A description of the transparency measures provided to consumers
- Post-deployment monitoring plans
Impact assessments must be retained for at least three years and provided to the Colorado Attorney General upon request.
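The content areas above can be captured as a structured record, with the three-year retention rule enforced in code. A sketch, not an official template:

```typescript
// An impact assessment record covering the content areas the Act lists,
// plus a helper for the three-year retention window. Shape is illustrative.
interface ImpactAssessment {
  systemName: string;
  purpose: string;
  deploymentContext: string;
  discriminationRiskAnalysis: string;
  dataCategories: string[];
  fairnessMetrics: string[];            // e.g. demographic parity, equalized odds
  consumerTransparencyMeasures: string[];
  monitoringPlan: string;
  completedAt: Date;
}

const RETENTION_YEARS = 3;

// True while the record must still be retained and producible to the
// Colorado Attorney General on request.
function mustRetain(assessment: ImpactAssessment, now: Date): boolean {
  const expiry = new Date(assessment.completedAt);
  expiry.setFullYear(expiry.getFullYear() + RETENTION_YEARS);
  return now < expiry;
}
```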
3. Consumer transparency
Deployers must notify consumers before a high-risk AI system makes a consequential decision about them. The notice must include:
- That a high-risk AI system is being used to make or substantially assist in making a consequential decision
- A description of the system and how it is used in the decision
- Contact information for the deployer
- A description of the consumer's right to opt out (where applicable) and appeal
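If your product surfaces these notices in-app, the required elements map naturally onto a payload plus a renderer. A hypothetical shape (the Act specifies what must be communicated, not a wire format):

```typescript
// Pre-decision consumer notice. Field names are our own.
interface ConsumerNotice {
  usesHighRiskAI: true;      // disclosure that AI makes or assists the decision
  systemDescription: string; // what the system is and how it is used
  deployerContact: string;
  optOutAvailable: boolean;  // where applicable
  appealInstructions: string;
}

function renderNotice(n: ConsumerNotice): string {
  return [
    "An automated system is used to make or substantially assist in making this decision.",
    `How it is used: ${n.systemDescription}`,
    `Contact: ${n.deployerContact}`,
    n.optOutAvailable ? "You may opt out of automated processing." : "",
    `To appeal: ${n.appealInstructions}`,
  ].filter(Boolean).join("\n");
}
```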
4. Consumer rights
When a high-risk AI system makes an adverse consequential decision, consumers have the right to:
- Receive an explanation of the decision, including the principal factors and logic that led to the outcome
- Appeal the decision and request human review
- Correct inaccurate data that was used in the decision
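An explanation endpoint that supports these rights needs to expose the principal factors behind the decision. A sketch with illustrative field names, not a mandated format:

```typescript
// Explanation payload for an adverse consequential decision, exposing the
// principal factors so the consumer can correct inaccurate inputs or appeal.
interface PrincipalFactor {
  name: string;         // e.g. "debt-to-income ratio"
  value: string;        // the data point as used in the decision
  contribution: number; // relative weight, for the "logic" portion
}

interface AdverseDecisionExplanation {
  decision: string;
  principalFactors: PrincipalFactor[];
  correctionEndpoint: string;    // where to dispute inaccurate input data
  humanReviewAvailable: boolean; // appeal with human review
}

// Surface the factors that mattered most, largest contribution first,
// without mutating the stored record.
function topFactors(e: AdverseDecisionExplanation, n: number): PrincipalFactor[] {
  return [...e.principalFactors]
    .sort((a, b) => b.contribution - a.contribution)
    .slice(0, n);
}
```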
Detecting AI services in your codebase
The first step toward Colorado AI Act compliance is understanding what AI services your application uses. Codepliant scans your dependencies, imports, and environment variables to detect AI integrations automatically.
AI providers Codepliant detects
| Provider | Package / Import | Env variable |
|---|---|---|
| OpenAI | openai | OPENAI_API_KEY |
| Anthropic | @anthropic-ai/sdk | ANTHROPIC_API_KEY |
| Google AI | @google-ai/generativelanguage | GOOGLE_AI_API_KEY |
| LangChain | langchain | LANGCHAIN_API_KEY |
| Vercel AI SDK | ai | OPENAI_API_KEY |
| Cohere | cohere-ai | COHERE_API_KEY |
| Together AI | together-ai | TOGETHER_API_KEY |
| Replicate | replicate | REPLICATE_API_TOKEN |
Scan your codebase
Run Codepliant to identify all AI services and generate an inventory for your impact assessment:
$ npx codepliant go
Scanning project...
Detected services:
✓ OpenAI (openai) — AI / Machine Learning
✓ Anthropic (@anthropic-ai/sdk) — AI / Machine Learning
✓ Stripe (@stripe/stripe-js) — Payment Processing
✓ PostHog (posthog-js) — Analytics
✓ Sentry (@sentry/nextjs) — Error Tracking
✓ Prisma (prisma) — Database
Generated documents:
✓ legal/privacy-policy.md
✓ legal/terms-of-service.md
✓ legal/ai-disclosure.md
✓ legal/cookie-policy.md
✓ legal/ai-governance.md
Done in 1.2s
Get structured output for impact assessments
Use the JSON output mode to extract AI service data for your impact assessment documentation:
$ npx codepliant scan --json | jq '.services[] | select(.category == "AI / Machine Learning")'
{
  "name": "OpenAI",
  "package": "openai",
  "category": "AI / Machine Learning",
  "dataCollected": [
    "user prompts",
    "conversation history",
    "API usage metadata"
  ],
  "detectedVia": ["dependency", "import", "env"]
}
{
  "name": "Anthropic",
  "package": "@anthropic-ai/sdk",
  "category": "AI / Machine Learning",
  "dataCollected": [
    "user prompts",
    "conversation history",
    "API usage metadata"
  ],
  "detectedVia": ["dependency", "import"]
}
Automate compliance in CI/CD
Keep your impact assessment documentation current by running Codepliant in your CI/CD pipeline. This ensures documentation is regenerated whenever your AI integrations change:
name: Compliance Docs
on:
  push:
    branches: [main]
    paths:
      - 'package.json'
      - 'requirements.txt'
      - '.env.example'
jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npx codepliant go
      - name: Commit updated docs
        run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add legal/
          git diff --cached --quiet || git commit -m "Update compliance docs"
          git push
The affirmative defense: NIST AI RMF alignment
The Colorado AI Act provides an important affirmative defense. Developers and deployers that comply with a recognized AI risk management framework — specifically the NIST AI Risk Management Framework (AI RMF) or a substantially equivalent framework — can use that compliance as a defense against enforcement actions.
This is significant because it gives companies a clear path to compliance. If you implement the NIST AI RMF and can demonstrate adherence, you have a strong defense even if an algorithmic discrimination issue arises. Codepliant generates AI governance documentation aligned with the NIST AI RMF, giving you a starting point for this defense.
The NIST AI RMF consists of four core functions: Govern (establish policies and accountability), Map (identify and categorize AI risks), Measure (assess and analyze risks), and Manage (prioritize and act on risks). When you run npx codepliant go, the generated ai-governance.md document maps your detected AI services to these four functions, providing a concrete starting point for NIST AI RMF compliance.
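The four functions can anchor a lightweight governance plan for each detected AI service. The function names below come from the NIST AI RMF; the activities listed are our own examples, not the framework's text:

```typescript
// Map governance activities onto the four NIST AI RMF core functions.
// Activities shown are illustrative starting points, not RMF requirements.
type RmfFunction = "govern" | "map" | "measure" | "manage";

const rmfPlan: Record<RmfFunction, string[]> = {
  govern: ["Designate an AI governance owner", "Adopt a written risk management policy"],
  map: ["Inventory AI services (e.g. via npx codepliant go)", "Classify each system as high-risk or not"],
  measure: ["Run fairness metrics on each release", "Monitor model behavior in production"],
  manage: ["Prioritize identified discrimination risks", "Track mitigations to closure"],
};
```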
Colorado AI Act vs. EU AI Act: key differences
If you are also preparing for the EU AI Act deadline on August 2, 2026, here is how the two regulations compare:
Scope
The EU AI Act covers all AI systems with a risk-based classification. The Colorado Act focuses exclusively on high-risk AI systems that make consequential decisions.
Primary concern
The EU AI Act addresses safety, transparency, and fundamental rights broadly. The Colorado Act focuses specifically on algorithmic discrimination.
Impact assessments
Both require impact assessments for high-risk systems. The Colorado Act requires annual updates and three-year retention. The EU Act has more detailed conformity assessment procedures.
Enforcement
The EU AI Act is enforced by national authorities with fines up to 7% of global turnover. The Colorado Act is enforced by the state Attorney General under existing consumer protection authority.
Affirmative defense
The Colorado Act provides an explicit affirmative defense for NIST AI RMF compliance. The EU AI Act has no equivalent safe harbor.
Data privacy overlap
The Colorado Act intersects with the Colorado Privacy Act (CPA). The EU AI Act intersects with GDPR. Both require understanding how personal data flows through AI systems.
For companies that need to comply with both regulations, Codepliant generates documentation that covers overlapping requirements. See our guides on the EU AI Act and GDPR for developers for more detail on international compliance.
Compliance action plan for SaaS companies
With the June 30, 2026 impact assessment deadline approaching, here is what your team should do now:
- Identify your high-risk AI systems. Review every AI feature in your product. Does it make or substantially influence decisions about employment, education, finance, housing, healthcare, or other consequential areas? If yes, it is a high-risk system under the Act. Run npx codepliant go to generate an AI inventory from your codebase.
- Determine your role. Are you a developer (building the AI), a deployer (using the AI), or both? Most SaaS companies that build AI features into their products are both.
- Conduct impact assessments. For each high-risk AI system, document its purpose, data inputs, decision outputs, discrimination risks, fairness metrics, and mitigation measures. This assessment must be completed by June 30, 2026.
- Test for bias. Evaluate your AI systems for differential treatment across protected characteristics. Use statistical fairness metrics appropriate to your use case (demographic parity, equalized odds, calibration).
- Implement transparency notices. Build consumer-facing disclosures that inform users when high-risk AI is used in decisions affecting them. Include appeal and opt-out mechanisms. See our AI Disclosure Generator for a starting point.
- Establish governance. Designate AI governance responsibility, create risk management policies, and train relevant staff. Use the NIST AI RMF as your framework to take advantage of the affirmative defense.
- Generate documentation. Use Codepliant to generate AI governance documentation aligned with NIST AI RMF. This documentation supports both compliance and the affirmative defense.
- Plan for ongoing compliance. Impact assessments must be updated annually and whenever significant system changes occur. Integrate Codepliant into your CI/CD pipeline to keep documentation current.
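Of the fairness metrics named in the bias-testing step, demographic parity is the simplest to compute: the absolute difference in positive-outcome rates between two groups. A minimal sketch for illustration only; real bias testing should use a dedicated library and multiple metrics chosen for your use case:

```typescript
// Demographic parity difference:
// |P(positive outcome | group A) - P(positive outcome | group B)|
interface Outcome {
  group: string;
  positive: boolean; // e.g. "candidate advanced", "loan approved"
}

function positiveRate(outcomes: Outcome[], group: string): number {
  const members = outcomes.filter(o => o.group === group);
  if (members.length === 0) return NaN;
  return members.filter(o => o.positive).length / members.length;
}

function demographicParityDiff(outcomes: Outcome[], a: string, b: string): number {
  return Math.abs(positiveRate(outcomes, a) - positiveRate(outcomes, b));
}
```

A difference near zero suggests parity on this metric; what threshold counts as acceptable is context-dependent, and parity on one metric does not rule out disparities on others (such as equalized odds or calibration).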
Beyond Colorado: the US state AI regulation landscape
Colorado is the first state to enact comprehensive AI regulation, but it will not be the last. Several other states have introduced or are considering similar legislation:
- Illinois: The Illinois AI Video Interview Act already regulates AI in hiring. Broader AI legislation is under consideration.
- California: Multiple AI bills were introduced in 2024-2025, including proposals for algorithmic impact assessments and AI transparency requirements.
- New York City: Local Law 144 regulates automated employment decision tools, requiring annual bias audits.
- Connecticut: Enacted an AI governance framework in 2023 with disclosure and assessment requirements for state agencies.
The trend is clear: AI regulation is expanding across the United States. Building compliance infrastructure now — impact assessment frameworks, transparency systems, governance processes — prepares you for the regulations that follow Colorado. If you also handle personal data of EU residents, read our GDPR compliance guide and privacy policy guide for SaaS to cover the data privacy side.
Prepare for the Colorado AI Act deadline
Scan your codebase to detect AI services and generate governance documentation aligned with the NIST AI RMF. Free, open source, no account required.
Detects OpenAI, Anthropic, Google AI, LangChain, Vercel AI SDK, Cohere, Replicate, Together AI, and more.
Related resources
AI Governance Framework Generator
Generate NIST AI RMF aligned governance documentation for your application.
EU AI Act: What Developers Need to Know
Comprehensive guide to the EU AI Act deadline on August 2, 2026.
GDPR Compliance for Developers
Developer-focused guide to GDPR compliance with code examples.
Privacy Policy for SaaS
How to generate a privacy policy based on your actual codebase.
SOC 2 for Startups
Developer survival guide to SOC 2 compliance with a 30-day readiness timeline.
AI Disclosure Generator
Generate AI transparency disclosures for your SaaS product.
Data Privacy Compliance Hub
Overview of all compliance frameworks Codepliant supports.