The clock is ticking
The EU AI Act entered into force in August 2024, with high-risk AI requirements taking effect in August 2026. Companies deploying AI in the EU — or serving EU users — need to be ready.
The European Union's Artificial Intelligence Act is the world's first comprehensive legal framework for AI. Unlike voluntary guidelines or industry self-regulation, this is binding law with real penalties. And while the law entered into force in August 2024, the most impactful provisions — the ones affecting "high-risk" AI systems — take effect in August 2026.
That's six months from now.
If your company deploys AI in the EU, or if your AI products serve EU users, this affects you. Here's what you need to know.
What's already in effect
Since February 2025, AI practices posing an "unacceptable risk" are banned in the EU, and providers and deployers of AI systems must ensure a sufficient level of AI literacy among their staff.
The AI Act has a phased rollout. Two provisions are already active:
Banned AI practices (since February 2025). The EU has outright banned certain AI practices considered an "unacceptable risk": social scoring (by public or private actors), real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), AI that exploits the vulnerabilities of specific groups, and emotion recognition in workplaces and schools.
AI literacy requirements (since February 2025). Providers and deployers of AI systems must ensure their staff have a sufficient level of AI literacy — meaning they understand the capabilities, limitations, and risks of the AI systems they use.
What is "high-risk" AI?
High-risk AI systems include those used in hiring, credit scoring, education, law enforcement, critical infrastructure, and border control (categories defined in Annex III of the AI Act), as well as AI used as a safety component of regulated products such as medical devices.
The heart of the AI Act is its classification of "high-risk" AI systems. These face the heaviest requirements. Annex III defines eight categories:
High-Risk AI Categories (Annex III)
1. Biometric identification, categorisation, and emotion recognition
2. Critical infrastructure (energy, transport, water, digital infrastructure)
3. Education and vocational training
4. Employment, worker management, and access to self-employment
5. Access to essential private and public services (including credit scoring)
6. Law enforcement
7. Migration, asylum, and border control management
8. Administration of justice and democratic processes
If you're building or deploying AI that touches any of these domains — even indirectly — your system may be classified as high-risk. And that means you'll need to comply with nine specific requirements.
The 9 requirements for high-risk AI
High-risk AI systems must meet 9 requirements: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy and robustness, post-market monitoring, and cybersecurity.
Articles 9 through 15 of the AI Act, together with the post-market monitoring obligations in Article 72, specify what high-risk AI providers must do:
1. Risk management system. A continuous, iterative process to identify, analyze, and mitigate risks throughout the AI system's lifecycle. Not a one-time checklist — an ongoing practice.
2. Data governance. Training, validation, and testing datasets must meet quality criteria. Bias testing is mandatory. Data provenance must be documented.
3. Technical documentation. Detailed documentation of the system's design, development, and intended use — sufficient for authorities to assess compliance.
4. Record-keeping. Automatic logging of events during the system's operation, with logs retained for a period appropriate to the system's intended purpose (a minimal logging sketch follows this list).
5. Transparency. Users must be informed they are interacting with an AI system. Instructions for use must be clear, including the system's capabilities and limitations.
6. Human oversight. The system must be designed to allow effective human oversight, including the ability to intervene, override, or stop the system.
7. Accuracy and robustness. The system must achieve appropriate levels of accuracy and be resilient to errors and inconsistencies.
8. Post-market monitoring. Providers must establish a system to monitor the AI's performance after deployment and report serious incidents.
9. Cybersecurity. Appropriate levels of cybersecurity to protect against attacks that could compromise the AI system's behavior.
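To make the record-keeping requirement concrete, here is a minimal sketch of the kind of timestamped, append-only event log a high-risk system could emit for each decision. The field names and the JSON-lines format are illustrative assumptions, not anything the Act prescribes.

```python
import json
import uuid
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class InferenceEvent:
    """One automatically logged event for a high-risk AI system (illustrative schema)."""
    system_id: str                 # internal identifier of the AI system
    model_version: str             # exact model version that produced the output
    input_reference: str           # pointer to the input data, not the raw data itself
    output_summary: str            # short description of the output or decision
    human_reviewer: str | None     # who reviewed or overrode the decision, if anyone
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_event(event: InferenceEvent, log_path: str = "ai_event_log.jsonl") -> None:
    """Append one event as a JSON line; retention is handled by a separate policy."""
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(event)) + "\n")

append_event(InferenceEvent(
    system_id="credit-scoring-v2",
    model_version="2026-01-15",
    input_reference="application/48213",
    output_summary="score=412, declined",
    human_reviewer=None,
))
```

The same record structure also supports human oversight (requirement 6): the reviewer field documents who intervened and when.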
Penalties
Violations carry fines up to €35 million or 7% of global annual revenue, whichever is higher — exceeding even GDPR penalties.
The EU is serious about enforcement. The penalty structure exceeds even GDPR:
AI Act Penalty Framework
Prohibited AI practices (Article 5): up to €35 million or 7% of global annual turnover, whichever is higher.
Non-compliance with other obligations, including the high-risk requirements: up to €15 million or 3%.
Supplying incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1%.
For SMEs and startups, penalties are adjusted to be proportionate — but they're still significant. The message is clear: AI compliance isn't optional.
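For a sense of how the "whichever is higher" cap plays out, here is a one-line calculation using the top-tier figures above (the function name is just for illustration):

```python
# The top-tier fine is the higher of a fixed cap and a share of worldwide annual turnover.
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

print(max_fine_eur(2_000_000_000))  # 140000000.0 -> 7% of 2 billion euros exceeds the 35 million cap
```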
It's not just Europe
Over 70 countries are developing AI regulations. South Korea, Japan, Brazil, Canada, and Colorado (US) all have AI governance frameworks in progress or enacted.
The EU AI Act doesn't exist in a vacuum. A global wave of AI regulation is building:
South Korea passed its AI Basic Act in January 2025, creating a framework for high-impact AI with notification and impact assessment requirements.
Japan has taken a lighter-touch approach with its AI governance guidelines, but is actively considering binding legislation.
Brazil is advancing its AI regulatory framework (PL 2338/2023), which takes significant inspiration from the EU approach.
Canada's proposed AIDA (Artificial Intelligence and Data Act), introduced as part of Bill C-27, would address high-impact AI systems with requirements for risk assessment and mitigation.
Colorado (US) enacted SB 24-205, the first US state law specifically addressing AI discrimination in high-risk decisions, effective February 2026.
Companies building AI products for global markets will increasingly face a patchwork of requirements. Preparing for the EU AI Act now positions you for compliance worldwide.
How verification helps
Cross-model verification directly supports 5 of the 9 high-risk AI requirements: risk management, documentation, record-keeping, accuracy, and post-market monitoring.
Cross-model AI verification isn't just a feature — it's becoming a compliance tool. As we explored in The Verification Paradox, manual verification destroys the speed advantage of AI. Automated cross-model checking resolves that — and happens to map directly to several AI Act requirements:
Requirement 1 — Risk management. Cross-model verification is a risk mitigation measure. By checking AI outputs against multiple models, you identify potential errors and hallucinations before they reach end users. This is exactly what the AI Act means by "appropriate measures to manage risks."
Requirement 3 — Documentation. Every verification session generates a record: which models were queried, what they agreed on, where they disagreed, and what confidence score was assigned. This documentation is compliance-ready.
Requirement 4 — Record-keeping. Verification logs serve as automatic event records — timestamped, with model versions and outputs. These are exactly the kind of logs Article 12 requires.
Requirement 7 — Accuracy. Cross-model verification directly measures output reliability. When models disagree, the system flags uncertainty. When they agree, confidence is quantified (a minimal sketch of this agreement scoring follows this mapping). This is a practical implementation of "appropriate levels of accuracy."
Requirement 8 — Post-market monitoring. Continuous verification provides real-time monitoring of AI output quality. Drift in model accuracy, new types of hallucinations, or emerging failure modes are detected automatically — not months later in a manual review.
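To illustrate how requirements 3, 4, and 7 above can be served by a single artifact, here is a minimal sketch of a verification record that captures a timestamp, per-model answers, and an agreement-based confidence score. This is not CrossCheck's actual implementation; the schema, the threshold, and the stand-in model callables are assumptions for illustration.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class VerificationRecord:
    """One cross-model verification event (illustrative schema)."""
    timestamp: str
    question: str
    answers: dict[str, str]    # model name -> normalized answer
    agreement: float           # share of models backing the majority answer
    flagged: bool              # True when agreement falls below the threshold

def verify(question: str,
           models: dict[str, Callable[[str], str]],   # model name -> query function
           threshold: float = 0.67) -> VerificationRecord:
    answers = {name: ask(question).strip().lower() for name, ask in models.items()}
    majority_count = Counter(answers.values()).most_common(1)[0][1]
    agreement = majority_count / len(answers)
    return VerificationRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        question=question,
        answers=answers,
        agreement=agreement,
        flagged=agreement < threshold,
    )

# Toy usage with stand-in "models" (real usage would wrap actual model API calls):
record = verify("Does the policy apply to EU users?", {
    "model_a": lambda q: "yes",
    "model_b": lambda q: "yes",
    "model_c": lambda q: "no",
})
print(record.agreement, record.flagged)   # 0.666..., True -> disagreement is flagged for review
```

Appending these records to the same kind of retained log shown earlier gives a single audit trail that covers documentation, record-keeping, and accuracy evidence at once.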
This doesn't mean verification is a silver bullet for compliance. But it addresses some of the hardest requirements — the ones that demand ongoing, systematic quality assurance rather than one-time documentation.
What to do now
Start by classifying your AI systems, auditing your documentation, implementing verification processes, and building a compliance roadmap before August 2026.
Six months is not a lot of time. Here's a practical starting checklist:
Classify your AI systems. Map every AI system you deploy or develop to the AI Act's risk categories (see the inventory sketch after this checklist). Be honest about edge cases — if there's doubt, assume high-risk.
Audit your documentation. The AI Act requires detailed technical documentation. Start documenting now: training data sources, model architectures, testing procedures, known limitations.
Implement verification. Whether you use CrossCheck or build your own process, start verifying AI outputs systematically. Don't wait for a compliance deadline to discover your models hallucinate in critical domains.
Establish human oversight. Ensure every high-risk AI decision has a clear human review process. Document who reviews what, how they can intervene, and what override procedures exist.
Build your compliance roadmap. The AI Act requires ongoing compliance, not one-time certification. Plan for continuous monitoring, regular audits, and incident reporting procedures.
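For the first item, a structured inventory is often the simplest starting point. The sketch below is hypothetical; the schema, the example systems, and the field names are assumptions, not a format the Act prescribes.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"   # banned practices (Article 5)
    HIGH = "high"                   # Annex III domains or regulated products
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no specific obligations

@dataclass
class AISystemEntry:
    name: str
    purpose: str
    annex_iii_domain: str | None    # e.g. "employment" or "credit scoring"; None if not applicable
    risk: RiskCategory
    owner: str                      # team accountable for compliance

inventory = [
    AISystemEntry("resume-screener", "Rank job applicants", "employment",
                  RiskCategory.HIGH, "hr-engineering"),
    AISystemEntry("support-chatbot", "Answer customer questions", None,
                  RiskCategory.LIMITED, "support-platform"),
]

high_risk_systems = [s.name for s in inventory if s.risk is RiskCategory.HIGH]
print(high_risk_systems)   # ['resume-screener']
```

Even a list this simple forces the classification conversation and gives auditors a single place to start.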
References
Regulation (EU) 2024/1689 of the European Parliament and of the Council — the EU AI Act. Official Journal of the European Union, 12 July 2024.
European Commission AI Act Timeline and Implementation Guidance, 2024–2026.
South Korea AI Basic Act (January 2025); Brazil PL 2338/2023; Colorado SB 24-205 (effective February 2026).
For background on cross-model verification research, see Does Asking 3 AIs Beat Trusting 1?
For data on how users experience AI trust gaps, see 654 Comments About AI Hallucinations — What We Found.
Verification as compliance infrastructure
CrossCheck AI provides automated cross-model verification with timestamped audit trails — built for teams preparing for the EU AI Act.