News · 15 Aug 2025 · 8 min read

EU AI Act High-Risk Rules Take Effect: What Changes in August 2025

The EU AI Act's high-risk classification requirements are now enforceable. Here's what's actually required, who's affected, and how to prepare your compliance strategy.

Max Beech
Head of Content

The milestone: As of 2 August 2025, the EU AI Act's high-risk AI system requirements are enforceable. Companies deploying AI systems in high-risk categories must now comply with mandatory requirements around risk management, data governance, transparency, and human oversight.

Why this matters: This is the first major enforcement phase of the world's most comprehensive AI regulation. Companies selling to EU customers - regardless of where they're headquartered - need to understand their obligations.

The builder's question: Is your AI system classified as high-risk? If so, what specifically do you need to do to comply?

What's now enforceable

The August 2025 deadline activates requirements for high-risk AI systems defined in Annex III of the Act. These include AI used in:

  • Employment: Recruitment, performance evaluation, promotion decisions
  • Education: Student assessment, exam proctoring, admissions decisions
  • Essential services: Credit scoring, insurance pricing, social benefits allocation
  • Law enforcement: Evidence analysis, crime prediction, border control
  • Justice: Legal research assistance, case outcome prediction
  • Critical infrastructure: Energy, water, transport management

General-purpose AI models (like GPT-4 or Claude) have separate timelines - their requirements take effect in August 2025 for providers, with additional systemic risk requirements phased in later.

The high-risk determination

Step 1: Check Annex III categories

Your AI system is high-risk if it falls within these use case categories AND involves automated decision-making that significantly affects individuals:

| Category | Examples | High-risk? |
| --- | --- | --- |
| CV screening | Automated resume filtering | Yes |
| Content recommendations | Social media feeds | No |
| Credit decisions | Loan approval automation | Yes |
| Customer support | Chatbot answering questions | No |
| Fraud detection | Transaction blocking | Depends on implementation |

Step 2: Apply the materiality test

Not every AI touching a high-risk domain is automatically high-risk. The Act includes exemptions for:

  • Systems performing "narrow procedural tasks"
  • Systems that "assist but do not replace" human decision-making
  • Systems with "de minimis" impact on individual rights

This is where legal interpretation matters. A recruitment chatbot that schedules interviews probably isn't high-risk. One that ranks candidates almost certainly is.
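
To make the two-step logic concrete, here's a minimal TypeScript sketch of how a team might encode its own determination. The category labels, exemption flags, and function are our own illustrative shorthand, not terminology from the Act, and the output is a prompt for review rather than a legal conclusion.

```typescript
// Illustrative only: labels and flags are our own shorthand, not legal definitions.
type AnnexIIICategory =
  | "employment"
  | "education"
  | "essential-services"
  | "law-enforcement"
  | "justice"
  | "critical-infrastructure"
  | "none";

interface DeterminationInput {
  category: AnnexIIICategory;
  significantlyAffectsIndividuals: boolean; // automated decisions with material effect on people
  narrowProceduralTask: boolean;            // candidate exemption
  assistsButDoesNotReplaceHuman: boolean;   // candidate exemption
}

function provisionalClassification(
  input: DeterminationInput
): "high-risk" | "not-high-risk" | "needs-legal-review" {
  if (input.category === "none" || !input.significantlyAffectsIndividuals) {
    return "not-high-risk";
  }
  // Listed category with significant effect: exemptions are a legal judgment call.
  if (input.narrowProceduralTask || input.assistsButDoesNotReplaceHuman) {
    return "needs-legal-review";
  }
  return "high-risk";
}
```

The point isn't to automate a legal conclusion; it's to make the inputs to the decision explicit and reviewable.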

Step 3: Document your determination

Whatever you conclude, document the reasoning. Regulators will want to see that you've conducted a good-faith assessment. The documentation should include:

  • System description and intended purpose
  • Analysis against Annex III categories
  • Rationale for high-risk or non-high-risk classification
  • Review date and responsible party
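
One way to keep that record consistent across systems is a simple structured type. A minimal sketch, with field names chosen by us rather than prescribed by the Act:

```typescript
interface ClassificationRecord {
  systemDescription: string;
  intendedPurpose: string;
  annexIIIAnalysis: string;   // categories considered and why they do or don't apply
  classification: "high-risk" | "not-high-risk";
  rationale: string;
  reviewDate: string;         // ISO date of the next scheduled review
  responsibleParty: string;   // named owner of the assessment
}
```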

Compliance requirements for high-risk systems

If your system is classified as high-risk, you must implement:

Risk management system

A documented, ongoing process for identifying and mitigating risks:

```markdown
## Risk Management Requirements

1. **Risk identification:** Systematic analysis of risks to health, safety, fundamental rights
2. **Risk estimation:** Assessment of likelihood and severity
3. **Risk mitigation:** Measures to eliminate or reduce identified risks
4. **Residual risk:** Documentation of risks that remain after mitigation
5. **Testing:** Validation that mitigation measures work as intended
6. **Monitoring:** Ongoing surveillance for new risks post-deployment
```

This isn't a one-time exercise. The risk management system must be maintained throughout the AI system's lifecycle.
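
A risk register doesn't need specialised tooling to get started. A minimal sketch of a single entry, with fields we've chosen to mirror the steps above:

```typescript
interface RiskRegisterEntry {
  id: string;
  description: string;                   // risk to health, safety, or fundamental rights
  likelihood: "low" | "medium" | "high";
  severity: "low" | "medium" | "high";
  mitigations: string[];                 // measures taken to eliminate or reduce the risk
  residualRisk: string;                  // what remains after mitigation and why it's acceptable
  testEvidence: string[];                // links to test runs validating the mitigations
  lastReviewed: string;                  // ISO date; update as post-deployment monitoring surfaces new risks
}
```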

Data governance

Requirements for training, validation, and testing datasets:

  • Relevance: Data must be appropriate for the intended purpose
  • Representativeness: Datasets should reflect the populations the system will serve
  • Completeness: Sufficient coverage of relevant scenarios
  • Error-free: Reasonable measures to detect and correct errors
  • Bias examination: Explicit analysis of potential biases

For systems using third-party models (OpenAI, Anthropic, etc.), you're responsible for data governance in your fine-tuning and evaluation datasets, not the base model training data.
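
For the bias-examination point in particular, even a rough quantitative check beats none. Here's a hedged sketch of a per-group selection-rate comparison over an evaluation set; the group labels, the notion of "favourable outcome", and any threshold you apply to the result are assumptions you'd set for your own context.

```typescript
interface EvalRecord {
  group: string;     // e.g. a demographic segment in your evaluation dataset
  selected: boolean; // whether the system produced the favourable outcome
}

// Ratio of each group's selection rate to the highest group's rate.
function selectionRateRatios(records: EvalRecord[]): Record<string, number> {
  const totals: Record<string, { selected: number; count: number }> = {};
  for (const r of records) {
    totals[r.group] ??= { selected: 0, count: 0 };
    totals[r.group].count += 1;
    if (r.selected) totals[r.group].selected += 1;
  }
  const rates = Object.fromEntries(
    Object.entries(totals).map(([group, t]) => [group, t.selected / t.count])
  );
  const best = Math.max(...Object.values(rates));
  return Object.fromEntries(
    Object.entries(rates).map(([group, rate]) => [group, best > 0 ? rate / best : 1])
  );
}
```

A ratio well below 1 for any group is a flag for closer examination, not an automatic verdict.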

Technical documentation

Comprehensive documentation covering:

  • System architecture and design choices
  • Training methodologies and data sources
  • Performance metrics and validation results
  • Known limitations and appropriate use cases
  • Hardware and software requirements

The documentation must be sufficient for conformity assessment bodies to evaluate your system.

Logging and traceability

Automatic logging of system operation:

```typescript
interface AISystemLog {
  timestamp: string;
  inputData: Record<string, unknown>; // Or a reference to stored input
  outputDecision: string;
  confidenceScore?: number;
  modelVersion: string;
  humanOverrideApplied?: boolean;
  overrideReason?: string;
}
```

Logs must be retained for a period appropriate to the system's purpose - typically the longer of the system's operational lifecycle and the period required by applicable retention regulations.
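
A sketch of how a deployer might emit such entries, assuming the AISystemLog interface above; the writeLog sink here is hypothetical and would be replaced by your own audit store.

```typescript
// Hypothetical sink: in practice this would append to an audit store
// (WORM storage, a dedicated audit table, etc.).
async function writeLog(entry: AISystemLog): Promise<void> {
  console.log(JSON.stringify(entry));
}

async function recordDecision(
  input: Record<string, unknown>,
  decision: string,
  modelVersion: string
): Promise<void> {
  const entry: AISystemLog = {
    timestamp: new Date().toISOString(),
    inputData: input,
    outputDecision: decision,
    modelVersion,
  };
  await writeLog(entry);
}
```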

Human oversight

Mechanisms enabling human oversight of the AI system:

  • Clear display of system capabilities and limitations to operators
  • Ability for humans to interpret outputs (explainability)
  • Capacity to override, interrupt, or reverse AI decisions
  • "Stop button" functionality for high-consequence scenarios

Transparency obligations

Users must be informed that they're interacting with an AI system. Additionally:

  • Clear instructions for use
  • Contact information for the provider
  • Information about the system's purpose and decision factors
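
Much of this is static information you can ship alongside the product. A small sketch of the fields you might surface, structured however your UI or documentation requires; the shape is our own, not mandated by the Act:

```typescript
interface AIDisclosure {
  isAISystem: true;               // users must be told they're interacting with AI
  providerContact: string;
  purpose: string;
  keyDecisionFactors: string[];   // the main inputs that influence outputs
  instructionsForUseUrl: string;
}
```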

Conformity assessment

High-risk systems must undergo conformity assessment before market placement. For most categories, self-assessment is permitted - you don't need external certification.

The self-assessment process:

  1. Technical documentation review: Verify completeness
  2. Quality management system: Confirm processes are in place
  3. Testing: Validate system meets requirements
  4. Declaration: Sign EU Declaration of Conformity
  5. CE marking: Apply marking to product/documentation
  6. Registration: Enter system in EU database

Some categories (biometric identification, critical infrastructure) require third-party assessment by notified bodies.

Penalties for non-compliance

The AI Act includes significant penalties:

| Violation | Maximum fine |
| --- | --- |
| Prohibited AI practices | €35M or 7% of global turnover |
| High-risk requirement violations | €15M or 3% of global turnover |
| Providing incorrect information | €7.5M or 1.5% of global turnover |

For SMEs and start-ups, each fine is capped at whichever of the fixed amount or the percentage of turnover is lower.

Beyond fines, non-compliant systems can be ordered off the market - a potentially more significant consequence for commercial viability.

Practical compliance strategies

For startups

Don't over-engineer compliance. If your system isn't clearly high-risk, document why and move on. The Act is designed to be proportionate.

Use existing frameworks. ISO 42001 (AI management systems) and NIST AI RMF align well with Act requirements. Implementing these standards provides strong compliance evidence.

Build logging from day one. Retrofitting comprehensive logging is painful. Design it into your architecture now.

For enterprises

Conduct an inventory. Most large organisations don't know all the AI systems they're using. Start with discovery.

Centralise governance. Distributed AI deployments need centralised oversight. Establish an AI governance function with clear authority.

Engage legal early. The boundary between high-risk and not-high-risk involves legal judgment. Get legal teams involved in classification decisions.

For AI service providers

Provide compliance support. Customers using your AI in high-risk contexts need documentation, logging capabilities, and transparency features. Build these as product features.

Clarify the responsibility split. The Act distinguishes between providers (who build AI) and deployers (who use it). Be explicit about which obligations fall where.

What's next

This is phase one. Coming milestones:

| Date | Milestone |
| --- | --- |
| August 2025 | High-risk system requirements enforceable (now) |
| August 2025 | GPAI provider obligations begin |
| August 2026 | General-purpose AI model systemic risk requirements |
| August 2027 | Requirements for AI systems already on market |

The regulatory landscape will continue evolving. Build compliance capabilities that can adapt.

Our take

The EU AI Act is workable. Yes, it adds compliance overhead - but the requirements are reasonable for systems making consequential decisions about people's lives.

For most AI builders, the practical impact is:

  1. Document your classification reasoning - even if you conclude you're not high-risk
  2. Implement sensible logging - you should be doing this anyway
  3. Enable human oversight - also good practice regardless of regulation
  4. Be transparent - tell users when they're interacting with AI

Companies that have been building AI responsibly won't find compliance dramatically burdensome. Those cutting corners on documentation, testing, and oversight have work to do.

The Act rewards good engineering practices. That's not a terrible outcome.

