☑️ AI developers (“providers”) who place high-risk AI systems on the EU market.
☑️ Companies using high-risk AI systems in sectors such as healthcare, financial services, law enforcement, HR and recruitment, education, and critical infrastructure.
🔲 Classify your AI system under the EU AI Act (Prohibited / High-risk / Limited-risk / Minimal-risk); see the triage sketch below.
🔲 Map all AI use cases within your organization.
🔲 Identify if you are a provider, deployer, importer, or distributor of AI systems under the law’s definitions.
🔲 Confirm whether your AI system is being placed on the EU market or used within the EU.
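Many teams track this triage as structured data from day one, so every classification decision carries a written rationale. Below is a minimal Python sketch of such an inventory record; the `UseCase` fields and example entries are illustrative assumptions, not legal advice, though the four risk tiers and four operator roles come straight from the Act.

```python
# Minimal sketch of an internal risk-triage record for the EU AI Act's four
# tiers. Tier and role names follow the Act; the UseCase structure and
# example entries are illustrative assumptions, not a legal test.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"      # Art. 5 banned practices
    HIGH_RISK = "high-risk"        # Annex III use cases / safety components
    LIMITED_RISK = "limited-risk"  # transparency obligations only
    MINIMAL_RISK = "minimal-risk"  # no new obligations


@dataclass
class UseCase:
    name: str
    role: str        # "provider", "deployer", "importer", or "distributor"
    on_eu_market: bool
    tier: RiskTier
    rationale: str   # why this tier was assigned; keep for your audit trail


inventory = [
    UseCase("CV screening assistant", "deployer", True,
            RiskTier.HIGH_RISK, "Employment use case listed in Annex III"),
    UseCase("Internal spell-checker", "provider", True,
            RiskTier.MINIMAL_RISK, "No Annex III category applies"),
]

for uc in inventory:
    print(f"{uc.name}: {uc.tier.value} ({uc.role}) - {uc.rationale}")
```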
🔲 Ensure training, validation, and testing data are relevant, representative, and free from bias.
🔲 Validate data quality, integrity, and diversity.
🔲 Implement data governance protocols including documentation and versioning.
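To make the data-governance items above auditable, a common pattern is to fingerprint every dataset version and record a basic balance profile alongside it. The sketch below assumes a simple JSON-serializable row format, and the 10% imbalance threshold is illustrative; real bias audits need domain-specific statistical tests.

```python
# Sketch of dataset versioning plus a crude representativeness check.
# The manifest format and the 10% threshold are assumptions.
import hashlib
import json
from collections import Counter
from datetime import date


def fingerprint(rows: list[dict]) -> str:
    """Stable content hash so every training run traces to exact data."""
    blob = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]


def class_balance(rows: list[dict], label_key: str) -> dict[str, float]:
    counts = Counter(r[label_key] for r in rows)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}


rows = ([{"text": "...", "label": "approve"}] * 95
        + [{"text": "...", "label": "reject"}] * 5)

manifest = {
    "version": fingerprint(rows),
    "created": date.today().isoformat(),
    "balance": class_balance(rows, "label"),
}
print(json.dumps(manifest, indent=2))

# Flag heavy imbalance as a prompt for human review.
if min(manifest["balance"].values()) < 0.10:
    print("WARNING: a class falls below 10% -- review representativeness.")
```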
🔲 Prepare comprehensive technical documentation before placing the system on the market.
🔲 Include system architecture, development methods, training data details, and post-market plans.
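One way to keep this documentation from going stale is to store its skeleton alongside the code and gate releases on its completeness. The section names below loosely paraphrase the Annex IV headings and are an assumption; check them against the Act's text before relying on them.

```python
# Hedged sketch: an Annex IV-style documentation skeleton kept as structured
# data so it can be versioned with the system. Section names are paraphrased
# assumptions, not the official list.
tech_doc = {
    "general_description": "Intended purpose, provider, versions, hardware",
    "system_architecture": "Components, how they interact, compute used",
    "development_process": "Design choices, training methodology, tooling",
    "training_data": "Datasets used, provenance, labelling, cleaning steps",
    "human_oversight": "Measures enabling interpretation and intervention",
    "accuracy_robustness": "Metrics, test protocols, known limitations",
    "risk_management": "Identified risks and mitigations",
    "post_market_plan": "How performance and incidents are monitored",
}

# A release check: refuse to ship while any section is empty.
missing = [k for k, v in tech_doc.items() if not v.strip()]
assert not missing, f"Documentation incomplete: {missing}"
```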
🔲 Inform users that they are interacting with an AI system.
🔲 Provide clear instructions for use, limitations, and human oversight responsibilities.
🔲 Ensure the system can generate logs and audit trails for traceability.
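A minimal approach to the logging item above is one structured, append-only record per automated decision. The field set in this sketch is an assumption to adapt to Article 12 and your own retention policy.

```python
# Sketch of structured decision logging for traceability. Field names
# are illustrative assumptions.
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")


def log_decision(model_version: str, inputs: dict, output: str) -> str:
    """Write one traceable record per automated decision; return its ID."""
    record_id = str(uuid.uuid4())
    logging.info(json.dumps({
        "id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,          # consider redacting personal data here
        "output": output,
    }))
    return record_id


rid = log_decision("credit-scorer-1.4.2", {"income": 52000}, "approved")
print(f"logged decision {rid}")
```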
🔲 Design AI systems to allow effective human intervention and control.
🔲 Define clear human oversight procedures and fail-safes (e.g. override or shut-down features).
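The sketch below illustrates one possible shape for these fail-safes: a confidence floor that escalates decisions to a human reviewer, plus a kill switch any operator can trip. The 0.85 threshold and the escalation mechanics are assumed for illustration, not prescribed by the Act.

```python
# Sketch of a human-in-the-loop gate with a hard kill switch. Threshold
# and mechanics are illustrative assumptions; the point is that the system
# defers to a person instead of failing silently.
import threading

KILL_SWITCH = threading.Event()  # any operator can set this to halt decisions
CONFIDENCE_FLOOR = 0.85          # below this, a human must decide (assumed)


def decide(prediction: str, confidence: float) -> str:
    if KILL_SWITCH.is_set():
        return "HALTED: system disabled by operator"
    if confidence < CONFIDENCE_FLOOR:
        return f"ESCALATED: '{prediction}' queued for human review"
    return f"AUTO: {prediction}"


print(decide("approve", 0.97))   # AUTO: approve
print(decide("reject", 0.60))    # ESCALATED: queued for human review
KILL_SWITCH.set()                # operator triggers the shut-down feature
print(decide("approve", 0.99))   # HALTED: system disabled by operator
```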
🔲 Validate accuracy, reliability, and robustness through testing.
🔲 Implement cybersecurity measures to prevent adversarial attacks or model manipulation.
🔲 Monitor for performance degradation over time (post-market monitoring).
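For the degradation item above, one simple pattern compares a rolling window of live outcomes against the accuracy recorded during pre-market testing. The window size and the 5-point alert threshold below are assumptions to tune per system.

```python
# Sketch of post-market drift detection against a pre-market baseline.
# Window size and alert threshold are illustrative assumptions.
from collections import deque

BASELINE_ACCURACY = 0.92      # measured during pre-market testing
WINDOW = deque(maxlen=500)    # rolling window of recent outcomes
ALERT_DROP = 0.05             # alert if accuracy falls 5 points below baseline


def record_outcome(correct: bool) -> None:
    WINDOW.append(correct)
    if len(WINDOW) == WINDOW.maxlen:
        live = sum(WINDOW) / len(WINDOW)
        if live < BASELINE_ACCURACY - ALERT_DROP:
            print(f"ALERT: live accuracy {live:.2%} vs baseline "
                  f"{BASELINE_ACCURACY:.2%} -- investigate degradation")


# Simulate a degrading model: 84% of recent predictions correct.
for i in range(500):
    record_outcome(i % 100 < 84)
```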
🔲 Conduct a conformity assessment (internal or via notified body depending on the risk).
🔲 Obtain and display the CE marking for the system.
🔲 Register your high-risk AI system in the EU database (to be set up by the European Commission).
🔲 Create a post-market monitoring system for tracking performance and risks.
🔲 Establish procedures for reporting serious incidents and malfunctions to authorities within 15 days.
🔲 Keep documentation and logs up to date and available for inspection by regulators.
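To make the 15-day reporting clock hard to miss, some teams compute the deadline the moment an incident is opened. The record fields below are assumptions, and note that the Act sets shorter windows for certain incident categories, so verify the applicable deadline against the final text.

```python
# Sketch of a serious-incident record that computes its reporting deadline.
# Fields are illustrative; 15 days is the Act's headline window, with
# shorter deadlines for some categories.
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class IncidentReport:
    system: str
    description: str
    became_aware: date
    reported_to_authority: bool = False
    deadline: date = field(init=False)

    def __post_init__(self):
        self.deadline = self.became_aware + timedelta(days=15)

    def days_remaining(self, today: date | None = None) -> int:
        return (self.deadline - (today or date.today())).days


incident = IncidentReport(
    system="credit-scorer-1.4.2",
    description="Systematic denial of a protected group detected in audit",
    became_aware=date(2025, 3, 1),
)
print(f"Report due by {incident.deadline} "
      f"({incident.days_remaining(date(2025, 3, 10))} days left)")
```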
🔲 Appoint or upskill a compliance lead or external legal advisor familiar with AI regulations.
🔲 Update contracts with third-party developers or providers to include AI Act compliance obligations.
🔲 Train relevant teams (product, legal, compliance, engineering) on AI Act requirements.
🔲 Align your AI governance system with ISO/IEC 42001 (AI management systems).
🔲 Implement ethics and accountability reviews, even if not mandated.
🔲 Prepare internal policies on AI fairness, transparency, and redress mechanisms.
🔲 Monitor releases from the European Commission, ENISA, and national authorities.
🔲 Subscribe to updates on harmonized standards and implementation guidelines.
🔲 Review sector-specific guidance if you operate in healthcare, financial services, law enforcement, etc.
🔲 Do you know what risk category your AI use cases fall into?
🔲 Can you show traceability for your training and testing data?
🔲 Have you documented your model lifecycle in line with technical documentation requirements?
🔲 Are your users properly informed that they’re interacting with AI?
🔲 Have you registered high-risk systems in the upcoming EU AI database?
Copyright © 2025 Eyre.ai. All rights reserved.