Purpose and Scope
This AI Risk and Fairness Policy sets out how eyre.ai designs, develops, and operates artificial intelligence and automated systems within the eyre.ai European sovereign meeting platform.
The policy is intended to demonstrate alignment with the EU AI Act, the GDPR, the UK GDPR, and related international best practices, and is suitable for use in enterprise and public-sector procurement reviews, audits, and grant reporting.
This policy applies to all AI-enabled functionality provided by eyre.ai.
Governance Principles
eyre.ai is built on the following core principles:
- European Sovereignty: Data processing, AI operation, and governance are designed to align with European legal, regulatory, and ethical expectations.
- Risk-Based Design: AI systems are designed and assessed based on their potential impact on individuals, organisations, and fundamental rights.
- Fairness and Non-Discrimination: AI systems are designed to avoid unjustified bias and discriminatory outcomes.
- Accountability and Auditability: All AI-related decisions and outputs are traceable, explainable, and auditable.
- Security and Confidentiality: AI systems operate within strict security boundaries and are protected against unauthorised access or data leakage.
AI System Scope and Use
eyre.ai uses AI to support platform functionality such as meeting assistance, governance automation, and operational optimisation.
AI systems are designed to support users and organisational processes. They do not make legally binding decisions on behalf of customers, nor do they replace human judgement in governance, legal, or compliance contexts.
Human oversight remains available for all AI-supported workflows.
Risk Management and Impact Assessment
eyre.ai applies a structured risk management approach aligned with the EU AI Act and GDPR principles.
This includes:
- Identification of foreseeable risks associated with AI use
- Assessment of potential impact on individuals and organisations
- Implementation of mitigation measures proportionate to identified risks
- Periodic review of AI behaviour and outcomes
Where required, Data Protection Impact Assessments (DPIAs) or AI risk assessments are conducted to evaluate and document risk controls.
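As a purely illustrative sketch of "mitigation measures proportionate to identified risks", a risk assessment of the kind described above is often operationalised as likelihood-times-impact scoring. The scales and banding thresholds below are hypothetical policy choices, not values prescribed by the EU AI Act or used by eyre.ai.

```python
def risk_score(likelihood, impact):
    """Score a risk on classic 1-5 likelihood and impact scales.

    Returns the numeric score and a band that drives proportionate
    mitigation. Band cut-offs are illustrative policy choices.
    """
    score = likelihood * impact
    if score >= 15:
        return score, "high"     # escalate; mitigation required before release
    if score >= 8:
        return score, "medium"   # mitigation plan plus periodic review
    return score, "low"          # accept with routine monitoring
```

The banding makes "proportionate" concrete: the same review cadence is not applied to every risk, and high-band items block release until mitigated.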
Fairness and Bias Mitigation
eyre.ai is committed to ensuring that AI systems operate fairly and without unjustified bias.
Measures include:
- Use of clearly defined, documented logic for AI-supported processes
- Avoidance of sensitive attributes unless strictly necessary and lawfully justified
- Regular review of outputs to identify and address unintended bias
- Human review mechanisms for critical or sensitive use cases
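The "regular review of outputs" above is typically backed by a quantitative disparity check. The sketch below is one minimal, hypothetical way to do this (a demographic-parity style comparison of outcome rates across groups); the function names, the choice of metric, and the flagging threshold are assumptions for illustration, not eyre.ai's actual tooling.

```python
from collections import defaultdict

def outcome_rates(records):
    """Compute the positive-outcome rate per group from (group, outcome) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += 1 if outcome else 0
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(records, threshold=0.1):
    """Flag groups whose outcome rate trails the best-performing group by
    more than the threshold (a policy choice, set here to 10 points)."""
    rates = outcome_rates(records)
    reference = max(rates.values())
    return {g: r for g, r in rates.items() if reference - r > threshold}
```

Flagged groups would then feed the human review mechanism rather than trigger automated remediation, consistent with the oversight principles above.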
Transparency and Explainability
eyre.ai ensures transparency in how AI systems are used within the platform.
This includes:
- Clear documentation of AI-supported features
- Explanation of system behaviour and outputs at an appropriate level of detail
- Traceable records of AI-assisted actions where relevant
AI outputs are designed to be understandable and interpretable by users and auditors.
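One common way to make "traceable records of AI-assisted actions" tamper-evident, sketched below under stated assumptions, is a hash-chained audit log: each entry embeds the hash of its predecessor, so any later edit breaks the chain. The field names and schema are hypothetical illustrations, not eyre.ai's actual audit format.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_record(log, actor, feature, summary):
    """Append a tamper-evident record to an in-memory audit log."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # who triggered the AI-assisted action
        "feature": feature,      # which AI-supported feature was used
        "summary": summary,      # human-readable description of the output
        "prev_hash": prev_hash,  # links this entry to its predecessor
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash; return True only if the chain is intact."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev_hash"] != prev or recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

An auditor can run the verification pass independently of the system that wrote the log, which is what makes the records auditable rather than merely stored.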
Data Protection and Privacy
AI systems operated by eyre.ai comply with GDPR and UK GDPR principles, including data minimisation, purpose limitation, and storage limitation.
Personal data is processed only as necessary to deliver platform functionality and is protected by appropriate technical and organisational measures.
Special categories of personal data are not intentionally processed unless explicitly enabled and controlled by the customer.
Data Localisation and International Transfers
eyre.ai is designed with a European sovereign architecture.
- AI systems operate within the eyre.ai controlled environment.
- Training, inference, and data processing occur within the platform perimeter.
- eyre.ai does not integrate customer data with third-party AI tools or external model providers.
- Customer data is not used to train external models and is not shared outside the eyre.ai environment.
Where international data transfers are required (for example, customer access from outside the EU), such transfers are governed by:
- EU Standard Contractual Clauses
- UK International Data Transfer Addendum, where applicable
Supplementary measures are applied to ensure an equivalent level of protection.
Security and Operational Controls
eyre.ai implements security controls appropriate to the sensitivity of AI-enabled processing, including:
- Encryption in transit and at rest
- Role-based access controls
- Logging and monitoring of AI-related operations
- Incident response and escalation procedures
AI systems are subject to ongoing monitoring and review.
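Of the controls above, role-based access control is the one most often asked about in procurement reviews. A minimal deny-by-default sketch is shown below; the role names and permission strings are hypothetical examples, since eyre.ai's actual role model is not described in this policy.

```python
# Hypothetical role-to-permission mapping for illustration only.
ROLE_PERMISSIONS = {
    "viewer": {"read_transcript"},
    "editor": {"read_transcript", "edit_minutes"},
    "admin":  {"read_transcript", "edit_minutes", "export_audit_log"},
}

def is_authorised(role, action):
    """Deny by default: allow an action only if the role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default shape matters: an unrecognised role or an unlisted action is refused rather than silently permitted.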
Third-Party Dependencies
eyre.ai minimises reliance on third-party AI components.
- AI functionality is developed and operated within the eyre.ai perimeter.
- No third-party AI tools receive customer data for training or inference.
- Where infrastructure providers are used, they are subject to contractual and security controls consistent with this policy.
Compliance and Review
This policy is reviewed periodically to reflect regulatory developments, including the EU AI Act, and evolving best practices.
Compliance with this policy is monitored as part of eyre.ai’s internal governance and risk management processes.
Contact and Accountability
Questions regarding this AI Risk and Fairness Policy may be directed to:
eyre.ai – Governance and Compliance Team
This policy is published for transparency and accountability purposes.
Last updated: January 2026
Annex IV – EU AI Act Compliance Mapping
This Annex provides technical and organisational information aligned with Annex IV of the EU Artificial Intelligence Act, mapping eyre.ai AI systems to applicable regulatory requirements. This Annex is intended to support audits, procurement reviews, grant evaluations, and regulatory assessments.
General Information
| Annex IV Reference | Description | eyre.ai Implementation |
|---|---|---|
| Annex IV, 1(a) | Name and description of the AI system | AI-enabled functionality within the eyre.ai European sovereign meeting platform supporting governance, compliance automation, and operational workflows. |
| Annex IV, 1(b) | Intended purpose of the AI system | To assist users with governance, compliance, and operational tasks. AI systems do not make autonomous or legally binding decisions. |
| Annex IV, 1(c) | Version and lifecycle | AI systems are continuously maintained and updated under controlled change management processes. |
Risk Classification and Intended Use
| Reference | Description | eyre.ai Implementation |
|---|---|---|
| Article 5 | Prohibited practices | eyre.ai does not deploy AI systems for manipulation, social scoring, or exploitation of vulnerabilities. |
| Article 6 | High-risk classification | eyre.ai's AI systems do not fall within the high-risk categories listed in Annex III and are therefore treated as non-high-risk. |
| Annex IV, 2 | Reasonably foreseeable misuse | Misuse risks are assessed and mitigated through access controls, documentation, and human oversight. |
Risk Management System (Article 9)
| Reference | Description | eyre.ai Implementation |
|---|---|---|
| Article 9(1) | Risk identification and analysis | Documented risk identification covering operational, legal, and fundamental rights impacts. |
| Article 9(2) | Risk mitigation measures | Proportionate mitigation measures implemented and reviewed periodically. |
| Article 9(3) | Continuous review | Ongoing monitoring and periodic reassessment of AI behaviour. |
Data Governance and Management (Article 10)
| Reference | Description | eyre.ai Implementation |
|---|---|---|
| Article 10(1) | Data governance practices | Data minimisation, purpose limitation, and access controls applied. |
| Article 10(2) | Training, validation, and testing data | AI training and inference occur solely within the eyre.ai controlled environment. No third-party model training. |
| Article 10(3) | Bias and representativeness | Sensitive attributes avoided unless explicitly enabled and lawfully justified. |
Technical Documentation and Record-Keeping (Articles 11–12)
| Reference | Description | eyre.ai Implementation |
|---|---|---|
| Article 11 | Technical documentation | Documentation available to support audits, procurement, and regulatory review. |
| Article 12 | Logging and traceability | Audit logs and records maintained for AI-supported actions. |
Transparency and Information to Users (Article 13)
| Reference | Description | eyre.ai Implementation |
|---|---|---|
| Article 13(1) | Transparency obligations | AI-supported features are clearly disclosed to users. |
| Article 13(2) | Interpretability | Outputs designed to be understandable by users and auditors. |
Human Oversight (Article 14)
| Reference | Description | eyre.ai Implementation |
|---|---|---|
| Article 14(1) | Human oversight measures | Human review available for all AI-supported workflows. |
| Article 14(4) | Override and intervention | Organisational controls enable intervention and oversight. |
Accuracy, Robustness, and Cybersecurity (Article 15)
| Reference | Description | eyre.ai Implementation |
|---|---|---|
| Article 15(1) | Accuracy and robustness | AI systems monitored and reviewed for reliable operation. |
| Article 15(2) | Cybersecurity | Encryption, access controls, monitoring, and incident response implemented. |
Data Localisation and International Transfers
| Reference | Description | eyre.ai Implementation |
|---|---|---|
| GDPR & AI Act alignment | Data localisation | Data processed primarily within the EEA under European sovereign architecture. |
| GDPR Chapter V | International transfers | Transfers governed by EU Standard Contractual Clauses and UK IDTA, with supplementary measures. |
Third-Party Dependencies
| Annex IV Reference | Description | eyre.ai Implementation |
|---|---|---|
| Annex IV, 1(e) | Third-party components | No third-party AI tools receive customer data for training or inference. AI operates fully within the eyre.ai perimeter. |
Conformity and Review
| Annex IV Reference | Description | eyre.ai Implementation |
|---|---|---|
| Annex IV, 1(f) | Compliance assurance | Policy and controls reviewed periodically and updated in line with regulatory developments. |