

EU AI Act: What Developers Need to Know Before August 2, 2026

Published March 15, 2026 · Updated March 16, 2026 · 14 min read

Deadline approaching

Article 50 transparency obligations become enforceable on August 2, 2026. If your SaaS uses any AI provider — OpenAI, Anthropic, Google AI, LangChain, Vercel AI SDK — you need to disclose it. Run npx codepliant go to check what AI services your codebase uses.

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. If you build software that uses AI — and in 2026, that means most SaaS products — you need to understand what the Act requires and when those requirements kick in. The most immediately relevant deadline for developers is August 2, 2026, when Article 50 transparency obligations become enforceable.

This guide covers everything developers need to know: the regulatory timeline, risk classification system, specific obligations by risk tier, how to detect AI services in your codebase, practical compliance steps, and what happens if you do not comply. Whether you are integrating OpenAI APIs, running fine-tuned models, or building AI features with LangChain and Vercel AI SDK, this article will help you prepare.

The EU AI Act timeline: key dates for developers

The EU AI Act was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024. But the obligations phase in gradually over three years:

February 2, 2025 · Prohibited AI practices

In effect

AI systems that pose unacceptable risks are banned. This includes social scoring systems, real-time biometric identification in public spaces (with exceptions), manipulation techniques that exploit vulnerabilities, and emotion recognition in workplaces and education.

August 2, 2025 · Governance and general-purpose AI

In effect

National competent authorities must be designated. Rules for general-purpose AI (GPAI) models begin to apply, including transparency and copyright obligations for GPAI providers like OpenAI and Anthropic.

August 2, 2026 · Transparency obligations (Article 50)

Upcoming

This is the critical deadline for most developers. All AI systems that interact with people, generate synthetic content, or make decisions affecting individuals must include transparency measures. If your SaaS uses AI, this applies to you.

August 2, 2027 · High-risk AI system requirements

Future

Full compliance requirements for high-risk AI systems, including conformity assessments, quality management systems, post-market monitoring, and registration in the EU database.

Understanding the risk classification system

The EU AI Act takes a risk-based approach. Your obligations depend on which risk tier your AI system falls into. Understanding your classification is the first step toward compliance.

Unacceptable Risk (Prohibited)

These AI practices are banned outright, effective February 2025. They include social scoring by governments, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), AI that manipulates behavior to cause harm, and systems that exploit vulnerabilities related to age, disability, or social and economic situation.

As a developer, verify that none of your AI features fall into this category. Most SaaS applications do not, but edge cases exist. For example, an AI-powered hiring tool that infers emotional states from video interviews could be classified as prohibited emotion recognition in the workplace.

High Risk

High-risk AI systems have the most extensive obligations. These include AI used in hiring and recruitment, credit scoring and insurance, education (admissions, grading), critical infrastructure, law enforcement, border control, and access to essential services.

If your AI system is classified as high-risk, you must implement a risk management system, ensure data governance with training data documentation, maintain technical documentation, enable human oversight, achieve accuracy and robustness standards, and register in the EU AI database. The full requirements take effect August 2, 2027, but preparation should start now.

Limited Risk (Transparency Obligations)

This is where most SaaS AI features land. Limited-risk systems are subject to Article 50 transparency obligations, effective August 2, 2026. These include:

  • AI-generated content labeling: Users must be informed when content (text, images, audio, video) is generated by AI. This applies to chatbots, AI writing assistants, image generators, and any feature that produces synthetic content.
  • Chatbot disclosure: When a user interacts with an AI system, they must be informed they are interacting with AI, not a human. This includes customer support chatbots, virtual assistants, and AI agents.
  • Deepfake labeling: AI-generated or manipulated images, audio, or video must be labeled as artificially generated. This includes AI avatars, voice synthesis, and image editing tools.
  • Emotion recognition disclosure: If your system detects emotions or biometric categorization, users must be informed (though many emotion recognition uses are prohibited entirely).
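To make the content-labeling obligation concrete, here is a minimal TypeScript sketch of how generated content might be tagged so the UI can render a disclosure. All names here are illustrative, not from any specific library or from the Act itself:

```typescript
// Sketch: tagging content by origin so the UI can render an Article 50 label.
// Types and wording are illustrative assumptions, not a mandated format.

type ContentOrigin = "human" | "ai-generated" | "ai-assisted";

interface LabeledContent {
  body: string;
  origin: ContentOrigin;
  /** Human-readable disclosure shown next to the content, or null for human content. */
  disclosure: string | null;
}

function labelContent(body: string, origin: ContentOrigin): LabeledContent {
  const disclosures: Record<ContentOrigin, string | null> = {
    human: null,
    "ai-generated": "This content was generated by AI.",
    "ai-assisted": "This content was drafted with AI assistance.",
  };
  return { body, origin, disclosure: disclosures[origin] };
}

// Example: a completion returned from an AI provider gets a visible label.
const reply = labelContent("Here is a summary of your document...", "ai-generated");
console.log(reply.disclosure); // "This content was generated by AI."
```

Carrying the origin through your data model, rather than bolting labels on in the UI, makes it easier to prove later which outputs were AI-generated.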

Minimal Risk

AI systems that pose minimal risk — spam filters, AI-powered search, recommendation engines, inventory management — have no mandatory obligations under the Act. However, the European Commission encourages voluntary codes of conduct for these systems, and best practice suggests documenting AI usage even when not legally required.

What Article 50 means in practice for your application

Article 50 is the section most relevant to SaaS developers in 2026. Let us break down what it requires in concrete terms.

  1. Disclose AI-generated content. If your application generates text, images, audio, or video using AI, you must clearly inform users. This applies whether you use OpenAI, Anthropic, Mistral, Llama, or any other provider. Practically, add visible labels near AI-generated content — a writing assistant should indicate AI-generated portions, an image tool must label AI outputs, and a code assistant should distinguish AI suggestions from human-written code.
  2. Identify AI interactions. If your application includes a chatbot or virtual assistant, users must be informed they are interacting with AI before or at the start of the interaction. A simple banner stating "You are interacting with an AI assistant" at the top of the chat interface satisfies this requirement.
  3. Mark synthetic media. AI-generated or substantially modified images, audio, and video must be labeled with both human-readable disclosures (visible labels) and machine-readable markings (metadata, watermarks).
  4. Document your AI systems. While not strictly an Article 50 requirement, the broader AI Act expects organizations to document what AI models they use, what data they process, what decisions they influence, and what safeguards are in place. Starting this documentation early makes compliance with future obligations significantly easier.
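For point 3 above, synthetic media needs both disclosure layers: a visible label and a machine-readable marking. The sketch below shows one way to produce both in TypeScript; the metadata shape is an illustrative assumption, and real deployments might instead embed C2PA manifests or IPTC digital-source-type fields:

```typescript
// Sketch: producing both disclosure layers for AI-generated media.
// The metadata key names are illustrative, not a standard schema.

interface SyntheticMediaMarking {
  visibleLabel: string;              // human-readable, rendered in the UI
  metadata: Record<string, string>;  // machine-readable, embedded in the file
}

function markSyntheticMedia(model: string, generatedAt: Date): SyntheticMediaMarking {
  return {
    visibleLabel: "AI-generated image",
    metadata: {
      "ai.generated": "true",
      "ai.model": model,
      "ai.generatedAt": generatedAt.toISOString(),
    },
  };
}

const marking = markSyntheticMedia("example-image-model", new Date("2026-08-02"));
console.log(marking.metadata["ai.generated"]); // "true"
```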

Detecting AI services in your codebase

The first step toward AI Act compliance is knowing exactly which AI services your application uses. In a growing codebase with dozens of dependencies, this is harder than it sounds. AI integrations can appear in direct dependencies, transitive dependencies, environment variables, and import statements scattered across hundreds of files.

Codepliant scans your codebase and detects AI services automatically. Here is how it works — and what it looks for.

What Codepliant detects

Codepliant recognizes these AI service integrations out of the box:

OpenAI — packages: openai | env: OPENAI_API_KEY, OPENAI_ORG
Anthropic — packages: @anthropic-ai/sdk | env: ANTHROPIC_API_KEY, CLAUDE_API_KEY
Google Generative AI — packages: @google/generative-ai | env: GOOGLE_AI_KEY, GEMINI_API_KEY
LangChain — packages: langchain | env: LANGCHAIN_API_KEY
Vercel AI SDK — packages: @vercel/ai, @ai-sdk/openai, @ai-sdk/anthropic, @ai-sdk/google | env: (inherits provider keys)
Cohere — packages: cohere, cohere-ai | env: COHERE_API_KEY
Together AI — packages: together-ai | env: TOGETHER_API_KEY
Replicate — packages: replicate | env: REPLICATE_API_TOKEN

Running the scan

To audit your project for AI services, run Codepliant in your project root:

terminal
$ npx codepliant go

Scanning /Users/you/your-saas-app...

Detected services:
  ├── openai (AI) — via package.json dependency + OPENAI_API_KEY in .env
  ├── @anthropic-ai/sdk (AI) — via import in src/lib/chat.ts
  ├── @vercel/ai (AI) — via package.json dependency
  ├── stripe (Payment) — via package.json dependency + STRIPE_SECRET_KEY
  ├── posthog-js (Analytics) — via import in src/app/providers.tsx
  └── @sendgrid/mail (Email) — via package.json dependency

AI services detected: 3
  → openai: collects "prompts, completions, user messages, usage metadata"
  → @anthropic-ai/sdk: collects "prompts, completions, user messages"
  → @vercel/ai: collects "prompts, completions, streaming responses"

Generating documents...
  ✓ legal/privacy-policy.md
  ✓ legal/ai-disclosure.md         ← EU AI Act Article 50
  ✓ legal/terms-of-service.md
  ✓ legal/cookie-policy.md
  ... and 31 more documents

Done. Generated 35 documents in legal/

For automated pipelines, the --json flag produces machine-readable output, which is useful for blocking deployments that introduce new AI services without updated disclosure documentation:

terminal
$ npx codepliant scan --json | jq '.services[] | select(.category == "ai")'
output.json
{
  "name": "openai",
  "category": "ai",
  "detectedVia": ["dependency", "envVariable"],
  "dataCollected": [
    "prompts",
    "completions",
    "user messages",
    "usage metadata"
  ],
  "evidence": [
    { "type": "dependency", "file": "package.json", "value": "openai@4.73.0" },
    { "type": "envVariable", "file": ".env", "key": "OPENAI_API_KEY" }
  ]
}
{
  "name": "@anthropic-ai/sdk",
  "category": "ai",
  "detectedVia": ["import"],
  "dataCollected": [
    "prompts",
    "completions",
    "user messages"
  ],
  "evidence": [
    { "type": "import", "file": "src/lib/chat.ts", "value": "import Anthropic from '@anthropic-ai/sdk'" }
  ]
}

You can add a compliance gate to your CI pipeline that fails if AI services are detected but no AI disclosure document exists:

.github/workflows/compliance.yml
name: Compliance Check
on: [push, pull_request]

jobs:
  compliance:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Codepliant scan
        run: npx codepliant go
      - name: Verify AI disclosure exists
        run: |
          if npx codepliant scan --json | jq -e '.services[] | select(.category == "ai")' > /dev/null 2>&1; then
            if [ ! -f "legal/ai-disclosure.md" ]; then
              echo "ERROR: AI services detected but no AI disclosure document found."
              echo "Run 'npx codepliant go' locally to generate compliance docs."
              exit 1
            fi
          fi

When Codepliant detects AI services, it generates an ai-disclosure.md document that covers the specific transparency requirements of Article 50. The document includes:

  • A list of all AI services integrated in your application, with their purpose and data handling practices
  • The types of AI-generated content your application produces
  • How users are informed about AI interactions (chatbot disclosures, content labels)
  • Data processing details specific to each AI provider (what data is sent to which provider, retention policies, geographic processing)
  • References to each AI provider's terms of service and data processing agreements

This is not a generic template. Because Codepliant scans your actual code, the disclosure document reflects your specific AI usage — not a checklist of possibilities. See the AI Disclosure Generator page for more details on the output format.

Extraterritorial scope: why this affects non-EU companies

Like GDPR before it, the EU AI Act has extraterritorial reach. Article 2 establishes that the Act applies to:

  • Providers who place AI systems on the EU market, regardless of where they are established
  • Deployers of AI systems located within the EU
  • Providers and deployers located outside the EU, where the output produced by the AI system is used in the EU

If your SaaS product is accessible to EU users and includes AI features, you are likely within scope. This mirrors how GDPR applies to any company processing data of EU residents, and companies that ignored GDPR's extraterritorial scope learned expensive lessons.

Penalties for non-compliance

The EU AI Act introduces a tiered penalty structure based on the severity of the violation:

Prohibited AI practices

Up to 35 million EUR or 7% of global annual turnover, whichever is higher

High-risk system obligations

Up to 15 million EUR or 3% of global annual turnover

Transparency obligations (Article 50)

Up to 15 million EUR or 3% of global annual turnover

Incorrect information to authorities

Up to 7.5 million EUR or 1% of global annual turnover

For SMEs and startups, each fine is capped at the lower of the fixed amount and the turnover percentage, rather than the higher. But even proportionally capped fines can be significant. More practically, non-compliance creates business risk: enterprise customers increasingly require AI governance as part of vendor assessments.

General-purpose AI model rules: what API consumers need to know

The AI Act introduces specific obligations for providers of general-purpose AI (GPAI) models — companies like OpenAI, Anthropic, Google, and Meta. These obligations include publishing model documentation, complying with EU copyright law, and implementing safety testing for models with systemic risk.

As a developer consuming these APIs, you do not bear the GPAI provider obligations directly. However, you benefit from understanding them. GPAI providers must give you sufficient documentation to fulfill your own downstream obligations. Practically, this means OpenAI and Anthropic will publish technical documentation, model cards, and usage policies that you should reference in your own compliance documentation.

Importantly, using a GPAI API does not exempt you from deployer obligations. You remain responsible for how you use the AI output, what disclosures you provide to your users, and what risks your specific application introduces.

Practical compliance steps for developers

With the August 2, 2026 deadline approaching, here is a practical checklist for engineering teams:

Step 1: Inventory your AI usage. Catalog every AI integration — direct API calls, embedded ML models, AI-powered features in third-party libraries, and automated decision-making systems. Codepliant checks three detection surfaces: package dependencies, import/require statements, and environment variable patterns, catching AI services that manual audits miss.

terminal
# Scan your project and output only AI services
$ npx codepliant scan --json | jq '[.services[] | select(.category == "ai")]'

Step 2: Classify your risk level. For each AI feature, determine which risk tier applies. Most SaaS AI features fall under limited risk with transparency obligations. If you are in hiring, credit, education, or healthcare, you may be high-risk. The AI Governance Framework page explains how to perform a risk assessment aligned with both the EU AI Act and the NIST AI RMF.

Step 3: Implement transparency measures. For the August 2026 deadline, focus on Article 50 transparency: add AI-generated content labels, chatbot disclosures, synthetic media marking, and general AI usage notices. These are UI changes your engineering team can implement directly.
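For the chatbot case, the key detail is that the disclosure must reach the user before or at the start of the interaction. A minimal TypeScript sketch, with illustrative names and wording that are assumptions rather than prescribed text:

```typescript
// Sketch: guaranteeing the AI disclosure appears before any assistant output.
// Role names and notice wording are illustrative assumptions.

interface ChatMessage {
  role: "system-notice" | "user" | "assistant";
  text: string;
}

const AI_NOTICE = "You are interacting with an AI assistant.";

function withDisclosure(history: ChatMessage[]): ChatMessage[] {
  // Prepend the notice exactly once, ahead of everything else in the thread.
  if (history.some((m) => m.role === "system-notice")) return history;
  return [{ role: "system-notice", text: AI_NOTICE }, ...history];
}

const session = withDisclosure([{ role: "user", text: "Hi!" }]);
console.log(session[0].text); // "You are interacting with an AI assistant."
```

Making the notice idempotent, as above, lets you call it on every render without stacking duplicate banners.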

Step 4: Generate compliance documentation. Document your AI systems, their purposes, risk assessments, and transparency measures. This serves as evidence of compliance and is required for high-risk systems.

terminal
# Generate all compliance documents including AI disclosure
$ npx codepliant go

# Output includes:
#   legal/ai-disclosure.md          — Article 50 transparency statement
#   legal/privacy-policy.md         — Updated with AI data processing
#   legal/data-processing-record.md — Record of processing activities
#   ... and 32 more documents
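If you maintain your own inventory alongside the generated documents, even a simple typed record per AI system helps. The shape below is a sketch; the field names are illustrative and not mandated by the Act or by Codepliant:

```typescript
// Sketch: a minimal per-system record for an internal AI inventory.
// Field names are illustrative assumptions.

interface AiSystemRecord {
  name: string;                 // internal feature name, e.g. "support-chatbot"
  provider: string;             // upstream AI provider, e.g. "openai"
  riskTier: "prohibited" | "high" | "limited" | "minimal";
  purpose: string;
  dataSentToProvider: string[];
  transparencyMeasures: string[];
}

const record: AiSystemRecord = {
  name: "support-chatbot",
  provider: "openai",
  riskTier: "limited",
  purpose: "Answers customer support questions",
  dataSentToProvider: ["user messages", "conversation history"],
  transparencyMeasures: ["chat banner: 'You are interacting with an AI assistant'"],
};

console.log(record.riskTier); // "limited"
```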

Step 5: Establish ongoing governance. Compliance is not a one-time event. Establish processes for reviewing AI usage when adding new features, updating documentation when integrations change, and monitoring AI system performance. Integrate Codepliant into your CI/CD pipeline to regenerate compliance documentation on every deployment.

Lessons from GDPR: why early preparation matters

The EU AI Act follows the same enforcement playbook as GDPR. When GDPR took effect in May 2018, many companies assumed enforcement would be slow and limited. Nearly eight years later, over 2,000 fines totaling more than 4.5 billion EUR have been issued. Meta alone has been fined over 2.5 billion EUR for GDPR violations.

The AI Act enforcement is expected to follow a similar pattern: initial guidance and warnings, followed by increasing enforcement activity. Companies that prepare now will avoid both financial penalties and the reputational damage of being an early enforcement target.

GDPR also showed that compliance creates competitive advantage. Companies with strong data privacy practices win enterprise deals faster. The same dynamic is already emerging with AI governance — procurement teams at large organizations are adding AI compliance requirements to their vendor assessment questionnaires. For more on GDPR compliance, see our GDPR guide for developers.

Impact by industry

The AI Act affects different industries differently. Here is how common SaaS verticals are impacted:

Developer tools and AI coding assistants

Limited risk. Must disclose AI-generated code suggestions and label AI-assisted outputs. Implement transparency notices in IDE plugins and API responses.

Customer support and chatbots

Limited risk. Must inform users they are interacting with AI before or at the start of the conversation. Requires clear chatbot labeling.

Content creation and marketing tools

Limited risk. Must label AI-generated text, images, and video. Both human-readable labels and machine-readable metadata required for media content.

HR tech and recruitment platforms

High risk. AI used in hiring decisions requires conformity assessments, risk management systems, human oversight, and registration in the EU database. Full requirements by August 2027.

Fintech and lending platforms

High risk. AI used in credit scoring or insurance decisions has extensive documentation, testing, and oversight requirements. Must enable human review of AI decisions.

Healthcare and medtech

High risk. AI in medical diagnosis, treatment recommendations, or health monitoring requires conformity assessments and is also subject to HIPAA in the US market.

What to do right now

You have less than five months until Article 50 obligations become enforceable. Here is what your engineering team should prioritize:

  1. Run an AI audit. Use Codepliant to scan your codebase and generate a complete AI inventory. Understand exactly what AI systems your application uses and how they process data.
  2. Add transparency disclosures. Implement AI-generated content labels, chatbot notices, and AI usage disclosures in your UI. These are the minimum requirements for August 2026.
  3. Generate compliance documentation. Use Codepliant to produce AI disclosure statements, risk assessments, and governance frameworks aligned with both the EU AI Act and the NIST AI RMF.
  4. Brief your legal team. Share this guide and your AI inventory with legal counsel. They can help determine your exact risk classification and identify any high-risk use cases that need additional preparation.
  5. Integrate into CI/CD. Add Codepliant to your deployment pipeline so compliance documentation updates automatically as your AI integrations evolve.

Check your AI compliance now

Scan your codebase to detect AI services and generate Article 50 compliant documentation. Free, open source, no account required.

Detects OpenAI, Anthropic, Google AI, LangChain, Vercel AI SDK, Cohere, Replicate, Together AI, and more.

npx codepliant go

View on GitHub · npm package · Documentation

Related resources