EU AI Act High-Risk Rules Take Effect: What Changes in August 2025
The EU AI Act's high-risk classification requirements are now enforceable. Here's what's actually required, who's affected, and how to prepare your compliance strategy.
The milestone: As of 2 August 2025, the EU AI Act's high-risk AI system requirements are enforceable. Companies deploying AI systems in high-risk categories must now comply with mandatory requirements around risk management, data governance, transparency, and human oversight.
Why this matters: This is the first major enforcement phase of the world's most comprehensive AI regulation. Companies selling to EU customers - regardless of where they're headquartered - need to understand their obligations.
The builder's question: Is your AI system classified as high-risk? If so, what specifically do you need to do to comply?
The August 2025 deadline activates requirements for high-risk AI systems defined in Annex III of the Act. These include AI used in:

- Biometric identification and categorisation
- Management of critical infrastructure
- Education and vocational training
- Employment, worker management, and access to self-employment
- Access to essential private and public services (including credit scoring)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
General-purpose AI models (like GPT-4 or Claude) have separate timelines - their requirements take effect in August 2025 for providers, with additional systemic risk requirements phased in later.
Your AI system is high-risk if it falls within these use case categories AND involves automated decision-making that significantly affects individuals:
| Category | Examples | High-risk? |
|---|---|---|
| CV screening | Automated resume filtering | Yes |
| Content recommendations | Social media feeds | No |
| Credit decisions | Loan approval automation | Yes |
| Customer support | Chatbot answering questions | No |
| Fraud detection | Transaction blocking | Depends on implementation |
Not every AI system touching a high-risk domain is automatically high-risk. The Act includes exemptions for systems that:

- Perform only a narrow procedural task
- Improve the result of a previously completed human activity
- Detect decision-making patterns or deviations without replacing or influencing the human assessment
- Perform purely preparatory tasks for an assessment
This is where legal interpretation matters. A recruitment chatbot that schedules interviews probably isn't high-risk. One that ranks candidates almost certainly is.
Whatever you conclude, document the reasoning. Regulators will want to see that you've conducted a good-faith assessment. The documentation should include:

- The system's intended purpose and the Annex III categories you considered
- Any exemptions you relied on and why they apply
- Who made the classification decision, and when
- A trigger for reassessment if the system's purpose or capabilities change
If your system is classified as high-risk, you must implement:

- A risk management system
- Data governance for training, validation, and testing data
- Technical documentation
- Automatic logging of operation
- Human oversight mechanisms
- Transparency towards users and deployers

## Risk Management Requirements

A documented, ongoing process for identifying and mitigating risks:
1. **Risk identification:** Systematic analysis of risks to health, safety, and fundamental rights
2. **Risk estimation:** Assessment of likelihood and severity
3. **Risk mitigation:** Measures to eliminate or reduce identified risks
4. **Residual risk:** Documentation of risks that remain after mitigation
5. **Testing:** Validation that mitigation measures work as intended
6. **Monitoring:** Ongoing surveillance for new risks post-deployment
This isn't a one-time exercise. The risk management system must be maintained throughout the AI system's lifecycle.
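To make that concrete, here's a minimal sketch of what a risk register entry might look like in code. The `RiskEntry` shape and its field names are illustrative assumptions, not terminology from the Act.

```typescript
// Illustrative sketch only: field names are assumptions, not terms from the Act.
type RiskSeverity = "low" | "medium" | "high" | "critical";

interface RiskEntry {
  id: string;
  description: string;          // e.g. "Model under-ranks candidates with employment gaps"
  affectedRights: string[];     // health, safety, or fundamental rights at stake
  likelihood: RiskSeverity;     // estimated probability of the harm occurring
  severity: RiskSeverity;       // estimated impact if it does
  mitigations: string[];        // measures taken to eliminate or reduce the risk
  residualRisk: RiskSeverity;   // what remains after mitigation
  validatedBy?: string;         // who tested that the mitigation works as intended
  lastReviewed: string;         // ISO date; the register is a living document
}

// Post-deployment monitoring can surface new risks that re-open the cycle.
function needsReview(entry: RiskEntry, maxAgeDays: number, now = new Date()): boolean {
  const ageDays = (now.getTime() - new Date(entry.lastReviewed).getTime()) / 86_400_000;
  return ageDays > maxAgeDays || entry.residualRisk === "critical";
}
```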
## Data Governance Requirements

Requirements for training, validation, and testing datasets:

- Documented data collection processes and data provenance
- Data that is relevant, sufficiently representative, and as free of errors and complete as possible for the intended purpose
- Examination for possible biases, and measures to detect and mitigate them
- Identification of data gaps or shortcomings, and how they're addressed
For systems using third-party models (OpenAI, Anthropic, etc.), you're responsible for data governance in your fine-tuning and evaluation datasets, not the base model training data.
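As a sketch of how that dataset documentation might be captured alongside the code, consider something like the record below; the `DatasetRecord` shape is an assumption, not a format prescribed by the Act.

```typescript
// Hypothetical record for a fine-tuning or evaluation dataset; not a format defined by the Act.
interface DatasetRecord {
  name: string;
  purpose: "training" | "validation" | "testing";
  source: string;                  // provenance: where the data came from and under what terms
  collectionPeriod: string;        // e.g. "2023-01 to 2024-06"
  size: number;                    // number of examples
  representativenessNotes: string; // how the data covers the population the system affects
  knownGaps: string[];             // identified shortcomings and how they're handled
  biasChecks: { check: string; result: string; date: string }[];
  preparationSteps: string[];      // cleaning, labelling, de-duplication, anonymisation
}
```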
## Technical Documentation Requirements

Comprehensive documentation covering:

- A general description of the system and its intended purpose
- Design specifications, system architecture, and the development process
- Training methodologies and the data used
- Validation and testing procedures, with results
- The risk management measures applied
The documentation must be sufficient for conformity assessment bodies to evaluate your system.
## Logging Requirements

Automatic logging of system operation:
```typescript
interface AISystemLog {
  timestamp: string;                  // ISO 8601 time of the decision
  inputData: Record<string, unknown>; // Or a reference to stored input
  outputDecision: string;             // What the system decided or recommended
  confidenceScore?: number;           // If the model exposes one
  modelVersion: string;               // Which model/weights produced the output
  humanOverrideApplied?: boolean;     // Whether a human changed the outcome
  overrideReason?: string;            // Why, if so
}
```
Logs must be retained for a period appropriate to the system's intended purpose - at least six months, and longer where the product lifecycle or other retention regulations require it.
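Building on the `AISystemLog` interface above, here's a minimal sketch of writing a record with retention metadata; `persistLog` is a placeholder for whatever durable, append-only store you use.

```typescript
// Hypothetical helper: the storage backend and retention policy are yours to choose.
async function recordDecision(
  log: AISystemLog,
  persistLog: (entry: AISystemLog & { retainUntil: string }) => Promise<void>,
  retentionMonths = 6 // the minimum; regulated domains often require longer
): Promise<void> {
  const retainUntil = new Date();
  retainUntil.setMonth(retainUntil.getMonth() + retentionMonths);
  await persistLog({ ...log, retainUntil: retainUntil.toISOString() });
}
```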
## Human Oversight Requirements

Mechanisms enabling human oversight of the AI system:

- Overseers can understand the system's capabilities and limitations
- Outputs can be correctly interpreted, with automation bias guarded against
- A human can decide not to use the system, or to disregard or reverse its output
- The system can be interrupted or stopped through a clearly defined procedure
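In an architecture, oversight often shows up as a review gate in front of consequential decisions. The sketch below is illustrative; the `ReviewQueue` abstraction and the confidence threshold are assumptions, not requirements from the Act.

```typescript
// Illustrative review gate: names and thresholds are assumptions, not mandated by the Act.
interface Decision {
  outcome: string;
  confidenceScore?: number;
}

interface ReviewQueue {
  // Returns the human reviewer's final call, which may confirm, amend, or reverse the output.
  submit(decision: Decision, context: Record<string, unknown>): Promise<Decision>;
}

async function decideWithOversight(
  decision: Decision,
  queue: ReviewQueue,
  context: Record<string, unknown>,
  confidenceFloor = 0.9
): Promise<Decision> {
  // Route low-confidence (or otherwise flagged) outputs to a human reviewer.
  if (decision.confidenceScore === undefined || decision.confidenceScore < confidenceFloor) {
    return queue.submit(decision, context);
  }
  return decision;
}
```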
## Transparency Requirements

Users must be informed that they're interacting with an AI system. Additionally, deployers must receive instructions for use covering:

- The system's intended purpose, capabilities, and limitations
- Its level of accuracy, robustness, and cybersecurity
- The human oversight measures it supports
- How to interpret its output, and known conditions that degrade performance
## Conformity Assessment

High-risk systems must undergo conformity assessment before market placement. For most categories, self-assessment is permitted - you don't need external certification.
The self-assessment process:

1. Verify the system meets the requirements above and that your quality management system is in place
2. Compile the technical documentation
3. Draw up an EU declaration of conformity
4. Affix the CE marking
5. Register the system in the EU database before placing it on the market
Some categories (biometric identification, critical infrastructure) require third-party assessment by notified bodies.
## Penalties

The AI Act includes significant penalties:
| Violation | Maximum fine |
|---|---|
| Prohibited AI practices | €35M or 7% global turnover |
| High-risk requirement violations | €15M or 3% global turnover |
| Providing incorrect information | €7.5M or 1% global turnover |
For larger companies, the applicable maximum is whichever of the two figures is higher; for SMEs and start-ups, it's whichever is lower.
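A quick sketch of that cap logic, using the tiers from the table above (illustration only, not legal advice):

```typescript
// Illustrative only: computes the maximum possible fine for a given tier and company.
interface FineTier {
  fixedCapEur: number; // e.g. 15_000_000 for high-risk requirement violations
  turnoverPct: number; // e.g. 0.03
}

function maxFine(tier: FineTier, globalAnnualTurnoverEur: number, isSme: boolean): number {
  const pctCap = tier.turnoverPct * globalAnnualTurnoverEur;
  // Larger companies face whichever figure is higher; SMEs whichever is lower.
  return isSme ? Math.min(tier.fixedCapEur, pctCap) : Math.max(tier.fixedCapEur, pctCap);
}

// Example: €1B turnover, high-risk violation tier (€15M / 3%)
// maxFine({ fixedCapEur: 15_000_000, turnoverPct: 0.03 }, 1_000_000_000, false) === 30_000_000
```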
Beyond fines, non-compliant systems can be ordered off the market - a potentially more significant consequence for commercial viability.
Don't over-engineer compliance. If your system isn't clearly high-risk, document why and move on. The Act is designed to be proportionate.
Use existing frameworks. ISO 42001 (AI management systems) and NIST AI RMF align well with Act requirements. Implementing these standards provides strong compliance evidence.
Build logging from day one. Retrofitting comprehensive logging is painful. Design it into your architecture now.
Conduct an inventory. Most large organisations don't know all the AI systems they're using. Start with discovery.
Centralise governance. Distributed AI deployments need centralised oversight. Establish an AI governance function with clear authority.
Engage legal early. The boundary between high-risk and not-high-risk involves legal judgment. Get legal teams involved in classification decisions.
Provide compliance support. Customers using your AI in high-risk contexts need documentation, logging capabilities, and transparency features. Build these as product features.
Clarify the responsibility split. The Act distinguishes between providers (who build AI) and deployers (who use it). Be explicit about which obligations fall where.
This is phase one. Coming milestones:
| Date | Milestone |
|---|---|
| August 2025 | High-risk system requirements enforceable (now) |
| August 2025 | GPAI provider obligations begin |
| August 2026 | General-purpose AI model systemic risk requirements |
| August 2027 | Requirements for AI systems already on market |
The regulatory landscape will continue evolving. Build compliance capabilities that can adapt.
The EU AI Act is workable. Yes, it adds compliance overhead - but the requirements are reasonable for systems making consequential decisions about people's lives.
For most AI builders, the practical impact is:

- Working out whether your system is high-risk, and documenting that reasoning
- Formalising the risk management, data governance, and testing you should be doing anyway
- Building logging, human oversight, and transparency into the product itself
Companies that have been building AI responsibly won't find compliance dramatically burdensome. Those cutting corners on documentation, testing, and oversight have work to do.
The Act rewards good engineering practices. That's not a terrible outcome.