TL;DR
- EU AI Act enforcement begins February 2, 2025 with the ban on prohibited AI systems; full enforcement by August 2027
- Most B2B AI applications fall into "limited risk" (transparency requirements only) or "minimal risk" (no requirements)
- High-risk AI systems (HR decisions, credit scoring) face strict requirements: documentation, human oversight, risk management
- Action now: Classify your AI systems, document use cases, implement governance, prepare for audits
EU AI Act 2025 Deadline: What B2B Companies Must Do Now
The European Union's AI Act - the world's first comprehensive AI regulation - begins phased enforcement on February 2, 2025 with the ban on prohibited AI systems, followed by general-purpose AI rules on August 2, 2025. Companies deploying AI in the EU market must comply or face fines of up to €35M or 7% of global annual turnover, whichever is higher.
Timeline:
- February 2, 2025: Prohibited AI systems ban takes effect (2 months from now)
- August 2, 2025: Rules for general-purpose AI models (GPT-4, Claude, etc.)
- August 2, 2026: High-risk AI system requirements
- August 2, 2027: Full enforcement for all AI systems
If you're using AI agents for business workflows, here's what you need to do right now.
Understanding AI Risk Classification
The EU AI Act classifies AI systems into four risk categories:
1. Unacceptable Risk (Prohibited)
What: AI systems that manipulate behavior, exploit the vulnerabilities of specific groups, perform social scoring, or carry out real-time biometric identification in public spaces.
Impact on B2B: Minimal. These don't apply to typical business AI use cases.
Action: None required unless you're building surveillance or manipulation systems (unlikely).
2. High Risk
What: AI used in:
- Employment decisions (CV screening, interview evaluation, promotion recommendations)
- Credit scoring and loan decisions
- Access to essential services (insurance underwriting, benefit eligibility)
- Law enforcement and justice
- Critical infrastructure management
Requirements:
- Conformity assessments before deployment
- Quality management systems
- Technical documentation (data governance, model cards, testing records)
- Human oversight mechanisms
- Accuracy, robustness, and cybersecurity measures
- Transparency and information to users
Impact on B2B: If you use AI for hiring, credit decisions, or insurance, you are likely high-risk.
Action: Full compliance programme required (see below).
3. Limited Risk (Transparency Requirements)
What: AI systems interacting with humans (chatbots), generating synthetic content (deepfakes, AI-generated text), or making automated recommendations.
Requirements:
- Inform users they're interacting with AI
- Clearly label AI-generated content
- Allow users to opt out or request human review
Impact on B2B: Most AI agents for customer service, marketing, and sales fall here.
Action: Add transparency notices, implement human escalation.
4. Minimal Risk
What: All other AI systems not covered above (AI-enabled video games, spam filters, simple automation).
Requirements: None.
Impact on B2B: Simple workflow automation, data processing, basic AI likely qualify.
Action: Document that you've classified the system as minimal risk; maintain records.
What B2B Companies Must Do Now
Month 1-2: AI System Inventory and Classification
Step 1: List all AI systems you use
Create a comprehensive inventory:
- Where is AI used? (HR, sales, marketing, customer service, finance, operations)
- What decisions does it make? (recommendations, approvals, rejections)
- What data does it process? (customer data, employee data, financial data)
- Who is affected? (employees, customers, suppliers)
Example inventory:
| System | Use Case | Decision Type | Data Processed | Users Affected |
|---|---|---|---|---|
| AI agent (Athenic) | Customer support email routing | Recommendation | Customer emails, support history | Customers |
| AI agent (Athenic) | Sales lead scoring | Recommendation | CRM data, website behavior | Prospects |
| Applicant tracking system | CV screening | Decision | Candidate CVs, job descriptions | Job applicants |
| Credit assessment tool | Loan approvals | Decision | Financial statements, credit history | Loan applicants |
Step 2: Classify each system by risk level
Use EU guidance:
- High-risk if: Makes or significantly influences decisions about employment, credit, essential services
- Limited risk if: Interacts with humans as AI, generates content, makes recommendations
- Minimal risk: Everything else
For most B2B AI agents: You'll likely be limited risk (transparency requirements) unless you're using AI for hiring or credit decisions.
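To make this triage concrete, here is a minimal TypeScript sketch of a first-pass inventory and classification. The field names and rules below are simplified illustrations of the guidance above, not the Act's legal tests, so borderline systems still warrant legal review:

```typescript
// Minimal sketch of an AI system inventory with EU AI Act risk tiers.
// Field names and decision rules are illustrative, not official taxonomy.

type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface AiSystemRecord {
  name: string;
  useCase: string;
  decisionType: "decision" | "recommendation";
  affectsEmploymentOrCredit: boolean; // e.g. CV screening, loan approvals
  interactsWithHumansOrGeneratesContent: boolean; // chatbots, drafted emails
}

function classify(sys: AiSystemRecord): RiskTier {
  if (sys.affectsEmploymentOrCredit) return "high";
  if (sys.interactsWithHumansOrGeneratesContent) return "limited";
  return "minimal";
}

const inventory: AiSystemRecord[] = [
  {
    name: "Support email routing agent",
    useCase: "Customer support",
    decisionType: "recommendation",
    affectsEmploymentOrCredit: false,
    interactsWithHumansOrGeneratesContent: true,
  },
  {
    name: "Applicant tracking system",
    useCase: "CV screening",
    decisionType: "decision",
    affectsEmploymentOrCredit: true,
    interactsWithHumansOrGeneratesContent: false,
  },
];

inventory.forEach((s) => console.log(`${s.name}: ${classify(s)}`));
// => Support email routing agent: limited
// => Applicant tracking system: high
```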
Month 3-4: Implement Transparency Measures (Limited Risk)
If your AI systems are limited risk (most common for B2B), implement:
1. User notification
- Inform users they're interacting with AI
- Example: "This response was generated by an AI agent. Click here to speak with a human."
2. Content labelling
- Label AI-generated emails, reports, content
- Example: "This email was drafted by AI and reviewed by our team."
3. Human escalation
- Allow users to request human review
- Example: "Not satisfied with this AI response? Request human assistance."
4. Opt-out mechanisms
- Let users opt out of AI interactions
- Example: "Prefer human-only support? Update your preferences here."
Implementation checklist:
- AI disclosure added to every customer-facing AI interaction
- AI-generated content labelled before it is sent or published
- Human escalation path live and visible to users
- Opt-out preferences captured and honoured across channels
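As a rough illustration of how measures 1-3 might surface in product code, here is a sketch of a support-reply wrapper. `generateReply` and the escalation URL are hypothetical placeholders for your own stack:

```typescript
// Illustrative sketch: wrapping an AI-generated support reply with the
// transparency measures above. `generateReply` and `escalationUrl` are
// hypothetical placeholders, not part of any specific framework.

interface AiReply {
  body: string;
  disclosure: string;
  escalationUrl: string;
}

async function respondToCustomer(
  message: string,
  generateReply: (msg: string) => Promise<string>
): Promise<AiReply> {
  const draft = await generateReply(message);
  return {
    body: draft,
    // 1. User notification + 2. content labelling
    disclosure:
      "This response was generated by an AI agent and may contain errors.",
    // 3. Human escalation path shown alongside every AI response
    escalationUrl: "/support/request-human-review",
  };
}
```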
Month 3-4: Full Compliance Programme (High Risk)
If you use AI for hiring, credit, or other high-risk applications, you need:
1. Risk management system
- Document potential risks (bias, errors, security vulnerabilities)
- Implement mitigation measures
- Ongoing monitoring and testing
2. Data governance
- Document training data sources and quality
- Implement data minimization (only collect necessary data)
- Ensure data accuracy and relevance
3. Technical documentation
- Model cards (how AI works, capabilities, limitations)
- Testing and validation reports
- System architecture and design documentation
- Instructions for use
4. Human oversight
- Designate human reviewers for AI decisions
- Define when human intervention is required
- Train humans to understand AI outputs and limitations
5. Accuracy and robustness
- Set accuracy thresholds (e.g., "hiring AI must be >95% accurate")
- Test against diverse populations (avoid bias)
- Monitor performance continuously
6. Cybersecurity measures
- Protect AI systems from attacks (adversarial inputs, data poisoning)
- Implement access controls
- Audit logging (a sketch of a decision audit record follows this list)
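To ground the human-oversight and audit-logging requirements, here is a minimal TypeScript sketch of an append-only decision record capturing the who, what, when, and why. The field names are illustrative assumptions, not a format mandated by the Act:

```typescript
// Sketch of an append-only audit record for AI-assisted decisions.
// Field names are assumptions for illustration, not prescribed by the Act.

interface AiDecisionAuditEntry {
  timestamp: string;      // ISO 8601, e.g. "2025-03-01T09:30:00Z"
  systemName: string;     // which AI system acted
  subjectId: string;      // who was affected (pseudonymised)
  input: string;          // what the model saw, or a reference to it
  output: string;         // what it recommended or decided
  rationale: string;      // explanation surfaced to the reviewer
  humanReviewer?: string; // set when a person approved or overrode
  overridden: boolean;    // did the human change the AI's output?
}

function logDecision(entry: AiDecisionAuditEntry): void {
  // In production this would go to durable, tamper-evident storage;
  // console output stands in for that here.
  console.log(JSON.stringify(entry));
}

logDecision({
  timestamp: new Date().toISOString(),
  systemName: "credit-assessment-tool",
  subjectId: "applicant-7d3f",
  input: "ref:applications/7d3f",
  output: "decline",
  rationale: "Debt-to-income ratio above policy threshold",
  humanReviewer: "underwriter-42",
  overridden: false,
});
```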
Implementation cost estimate:
- Small company (1-2 high-risk systems): £40K-£80K
- Medium company (3-5 systems): £80K-£180K
- Large company (5+ systems): £180K-£400K
"The EU AI Act isn't just European - if you have EU customers, employees, or suppliers, you're in scope. We started compliance work in September and are on track for August. Companies waiting until 2025 will scramble." - Michael Thompson, Chief Legal Officer at FinanceFlow (quoted November 2024)
Month 5-6: Documentation and Audit Preparation
Create compliance documentation:
1. AI Register
- List of all AI systems
- Risk classification for each
- Compliance measures implemented
- Responsible individuals
2. Policy documents
- AI ethics policy
- Data governance policy
- Human oversight procedures
- Incident response plan (what to do if AI fails or causes harm)
3. Audit trail
- Records of AI decisions (who, what, when, why)
- Testing and monitoring results
- User complaints and resolutions
- Changes to AI systems over time
4. Training records
- Staff trained on AI governance
- Users informed about AI use
- Date and content of training
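As an illustration, a single AI Register entry might be represented like this. The structure below is an assumption made for the sketch, not a regulatory template:

```typescript
// Illustrative AI Register entry; field names are assumptions, not a
// format prescribed by the EU AI Act.

interface AiRegisterEntry {
  system: string;
  riskTier: "prohibited" | "high" | "limited" | "minimal";
  complianceMeasures: string[]; // measures actually implemented
  responsibleOwner: string;     // named accountable individual
  lastReviewed: string;         // ISO date of the last internal audit
  documentation: string[];      // links to model cards, test reports, etc.
}

const leadScoring: AiRegisterEntry = {
  system: "Sales lead scoring agent",
  riskTier: "limited",
  complianceMeasures: [
    "AI disclosure shown in CRM UI",
    "Human review available on request",
  ],
  responsibleOwner: "Head of Revenue Operations",
  lastReviewed: "2025-05-15",
  documentation: ["docs/lead-scoring/model-card.md"],
};
```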
Prepare for audits:
- Designate compliance officer
- Schedule internal audits (quarterly recommended)
- Establish process for regulatory inquiries
Penalties for Non-Compliance
EU AI Act fines are severe:
| Violation Type | Maximum Fine |
|---|---|
| Prohibited AI use | €35M or 7% of global annual turnover (whichever is higher) |
| Non-compliance with high-risk requirements | €15M or 3% of global turnover |
| Providing incorrect information to authorities | €7.5M or 1% of global turnover |
For context:
- Company with €100M revenue: up to €15M fine for high-risk non-compliance (the €15M fixed cap exceeds 3% of turnover)
- Company with €1B revenue: up to €30M fine (3% of turnover exceeds the fixed cap)
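Because each cap works as "fixed amount or percentage of turnover, whichever is higher", a small helper makes the two examples above explicit:

```typescript
// Fine cap = max(fixed amount, rate × global annual turnover).

function maxFine(fixedCapEur: number, rate: number, turnoverEur: number): number {
  return Math.max(fixedCapEur, rate * turnoverEur);
}

// High-risk non-compliance: €15M or 3% of turnover, whichever is higher.
console.log(maxFine(15_000_000, 0.03, 100_000_000));   // 15000000 -> €15M
console.log(maxFine(15_000_000, 0.03, 1_000_000_000)); // 30000000 -> €30M
```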
Industry-Specific Guidance
If you're in HR Tech or Recruitment
Risk level: HIGH
Why: AI-powered CV screening, interview scoring, promotion recommendations all qualify as high-risk.
Action:
- Full compliance programme required
- Document model training data (ensure non-discriminatory)
- Implement human review of all AI hiring recommendations
- Test for bias across gender, age, ethnicity
- Budget £60K-£120K for compliance (depending on system complexity)
If you're in Fintech or Lending
Risk level: HIGH (if used for credit decisions)
Why: AI credit scoring, loan approval, fraud detection impact access to financial services.
Action:
- Full compliance programme
- Explainability requirements (why was credit denied?)
- Human review for all credit denials
- Ongoing monitoring for discriminatory outcomes
- Budget £80K-£150K for compliance
If you're in SaaS/B2B Software
Risk level: LIMITED (most cases)
Why: Most B2B AI (customer support, sales automation, marketing) is limited risk.
Action:
- Transparency requirements only
- Add "AI-powered" disclosures
- Implement human escalation
- Budget £12K-£28K for implementation
How Athenic Helps with Compliance
Athenic AI agents are designed with EU AI Act compliance in mind:
Transparency features:
- Clear "AI-generated" labels on all outputs
- Human approval workflows for sensitive decisions
- Audit trails of all AI actions
Risk management:
- Configurable approval thresholds
- Human-in-the-loop for high-stakes decisions
- Monitoring dashboards showing AI performance
Documentation:
- Automatic logging of AI decisions
- Exportable audit reports
- Integration with compliance tools
Learn more about Athenic's compliance features →
Recommended Action Plan
This month (December 2024):
- Start your AI system inventory and assign a compliance owner
- Confirm no AI use cases fall into the prohibited category ahead of February 2, 2025
January-February 2025:
- Complete risk classification for every system in the inventory
- Brief leadership on compliance budget and timeline
March-April 2025:
- Implement transparency measures for limited-risk systems
- Begin the full compliance programme for any high-risk systems
May-July 2025:
- Finalise documentation, AI register, and audit trails
- Run an internal audit ahead of the August 2, 2025 general-purpose AI milestone
Need help with EU AI Act compliance? Athenic provides audit trails, approval workflows, and transparency features to help B2B companies meet regulatory requirements. Book a compliance consultation →
Disclaimer: This article provides general guidance only and is not legal advice. Consult qualified legal counsel for compliance with EU AI Act requirements specific to your business.