AI Act Compliance Checklist

On 2 August 2025, the EU AI Act's governance rules, penalty framework, and obligations for general-purpose AI model providers begin to apply, with the core obligations for high-risk AI systems following on 2 August 2026. Here's a breakdown of what that actually means, plus a checklist to help you understand your risks and obligations under the AI Act.


AI Act Key Dates: August 2025 and August 2026

2 August 2025 is when the Act's governance rules, penalty framework, and obligations for general-purpose AI model providers take effect. 2 August 2026 is the main compliance deadline for providers and deployers of high-risk AI systems. After that point:

☑️ Companies developing or deploying high-risk AI (as defined in the Act) must ensure their systems meet strict requirements.

☑️ CE marking will be mandatory for high-risk AI systems before they can enter the EU market or continue operating legally.

☑️ Post-market monitoring, transparency, documentation, and human oversight obligations kick in.

☑️ National authorities and market surveillance bodies will begin formal enforcement, with the power to impose fines for non-compliance (up to €35 million or 7% of global annual turnover, whichever is higher, depending on the breach).

Who the AI Act's High-Risk Obligations Apply To

☑️ AI developers (“providers”) who place high-risk AI systems on the EU market.

☑️ Companies using high-risk AI systems in sectors like:

  • Recruitment and employment
  • Education and exams
  • Credit scoring and banking
  • Biometric identification
  • Law enforcement and border control
  • Healthcare and diagnostics

 


If you fall into these categories and your system hasn't been assessed and documented according to the EU AI Act's criteria, you could face legal and financial penalties once the high-risk obligations begin to apply in August 2026.

EU AI Act Checklist (May 2025)


Determine Applicability

🔲 Classify your AI system under the EU AI Act (Prohibited / High-risk / Limited-risk / Minimal-risk).

🔲  Map all AI use cases within your organization.

🔲  Identify if you are a provider, deployer, importer, or distributor of AI systems under the law’s definitions.

🔲  Confirm whether your AI system is being placed on the EU market or used within the EU.

Does at least one statement apply to you? Then you may be in scope - continue with the checklist!

Data Governance

🔲 Ensure training, validation, and testing data are relevant, representative, and examined for possible biases.

🔲 Validate data quality, integrity, and diversity.

🔲 Implement data governance protocols including documentation and versioning.
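Documentation and versioning can start small. As a minimal sketch (the dataset, field names, and `fingerprint` helper below are our own illustrative assumptions, not AI Act requirements), each dataset version gets a content hash so any change to the underlying data is detectable and auditable:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    """Minimal documentation record for one dataset version."""
    name: str
    split: str          # "training", "validation", or "testing"
    source: str
    collected: str      # ISO date of collection
    content_hash: str   # fingerprint of the underlying data

def fingerprint(rows: list[dict]) -> str:
    """Deterministic SHA-256 hash of the dataset contents, so any
    change to the data produces a new, auditable version."""
    canonical = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

rows = [{"applicant_id": 1, "outcome": "hired"},
        {"applicant_id": 2, "outcome": "rejected"}]

record = DatasetRecord(
    name="recruitment-screening-v1",
    split="training",
    source="internal ATS export",
    collected="2025-05-01",
    content_hash=fingerprint(rows),
)
```

Storing these records alongside your model artifacts gives regulators (and your own team) a verifiable answer to "which data trained which model?"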

Technical Documentation

🔲 Prepare comprehensive technical documentation before placing the system on the market.

🔲 Include system architecture, development methods, training data details, and post-market plans.

Transparency and Information Disclosure

🔲 Inform users that they are interacting with an AI system.

🔲 Provide clear instructions for use, limitations, and human oversight responsibilities.

🔲 Ensure the system can generate logs and audit trails for traceability.
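One lightweight way to satisfy the logging point is to write every AI-assisted decision as a structured, append-only record. A minimal sketch in Python - the `log_decision` helper and its field names are hypothetical choices of ours, not prescribed by the Act:

```python
import json
import logging
from datetime import datetime, timezone

# Structured audit logger: every AI-assisted decision is emitted as one
# JSON line so compliance teams and regulators can trace it later.
audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)

def log_decision(system_id: str, input_ref: str, output: str,
                 model_version: str, operator: str) -> dict:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
        "model_version": model_version,
        "operator": operator,
    }
    audit.info(json.dumps(entry))
    return entry

entry = log_decision("cv-screener", "application-8841", "shortlisted",
                     "2025.05.1", "hr-team-a")
```

Logging a reference to the input rather than the input itself keeps the audit trail useful without duplicating personal data.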

Human Oversight

🔲 Design AI systems to allow effective human intervention and control.

🔲 Define clear human oversight procedures and fail-safes (e.g. override or shut-down features).
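In practice, human oversight often means routing low-confidence outputs to a reviewer instead of deciding automatically. A minimal sketch - the 0.85 threshold and function name are illustrative assumptions, not values from the Act:

```python
def decide_with_oversight(score: float, threshold: float = 0.85) -> dict:
    """Route low-confidence model outputs to a human reviewer
    rather than auto-deciding; the threshold is illustrative."""
    if score >= threshold:
        return {"decision": "auto-approve", "needs_human": False}
    # Below threshold: the system defers instead of acting on its own.
    return {"decision": "escalate", "needs_human": True}
```

The same pattern extends naturally to an always-available override: a human can reverse an auto-approval, but the system can never suppress an escalation.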

Accuracy, Robustness, and Cybersecurity

🔲 Validate accuracy, reliability, and robustness through testing.

🔲 Implement cybersecurity measures to prevent adversarial attacks or model manipulation.

🔲 Monitor for performance degradation over time (post-market monitoring).
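Degradation monitoring can be as simple as comparing rolling production accuracy against the level you validated before market placement. A minimal sketch, with the baseline, window size, and tolerance chosen purely for illustration:

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy in production and flag when it drops
    below the level validated pre-market (minus a tolerance)."""

    def __init__(self, baseline: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.results = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool) -> None:
        self.results.append(1 if correct else 0)

    def degraded(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False  # not enough post-market data yet
        rolling = sum(self.results) / len(self.results)
        return rolling < self.baseline - self.tolerance

# Baseline taken from your pre-market conformity testing (illustrative value).
monitor = DriftMonitor(baseline=0.92)
```

A flag from `degraded()` would then feed your incident-handling and post-market monitoring procedures rather than silently retraining the model.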


Legal and Certification Requirements

🔲 Conduct a conformity assessment (internal or via notified body depending on the risk).

🔲 Affix the CE marking to the system before placing it on the market.

🔲  Register your high-risk AI system in the EU database (to be set up by the European Commission).

Post-Market Monitoring and Incident Reporting

🔲  Create a post-market monitoring system for tracking performance and risks.

🔲 Establish procedures for reporting serious incidents and malfunctions to national authorities within 15 days of becoming aware of them.

🔲  Keep documentation and logs up to date and available for inspection by regulators.

AI Act Readiness Operations

🔲  Appoint or upskill a compliance lead or external legal advisor familiar with AI regulations.

🔲  Update contracts with third-party developers or providers to include AI Act compliance obligations.

🔲  Train relevant teams (product, legal, compliance, engineering) on AI Act requirements.

Optional but Strategic: ISO 42001 & Trust Frameworks

🔲  Align your AI governance system with ISO/IEC 42001 (AI management systems).

🔲  Implement ethics and accountability reviews, even if not mandated.

🔲 Prepare internal policies on AI fairness, transparency, and redress mechanisms.

Keep Track of Upcoming Standards and Guidance

🔲  Monitor releases from the European Commission, ENISA, and national authorities.

🔲  Subscribe to updates on harmonized standards and implementation guidelines.

🔲  Review sector-specific guidance if you operate in healthcare, financial services, law enforcement, etc.

Bonus: Quick Questions to Assess AI Act Readiness

🔲  Do you know what risk category your AI use cases fall into?

🔲 Can you show traceability for your training and testing data?

🔲 Have you documented your model lifecycle in line with technical documentation requirements?

🔲 Are your users properly informed that they’re interacting with AI?

🔲 Have you registered high-risk systems in the upcoming EU AI database?

Next Steps: From AI Act Checklist to Action

Congratulations — completing the AI Act compliance checklist is a strong step toward future-proofing your AI systems. Now, turn compliance into a competitive advantage. We can help you automate key compliance tasks. Let’s talk about how to simplify, scale, and sustain your AI Act compliance program.
