EU AI Act — Extended Articles 10, 13, 14, 15, 16, 43, 52, 61
This mapping extends the Article 12 baseline with the transparency, human-oversight, accuracy/robustness, data-governance, conformity-assessment, and post-market-monitoring provisions that complete the high-risk AI system obligations. Ledgix evidences each article with a combination of the ledger, signed impact assessments, model cards, dataset sheets, bias audits, training-data lineage, and incidents.
Status: Full — with Phase 2 (model cards, dataset sheets, incidents), Phase 3 (bias audits), Phase 4 (impact assessments), and Phase 9 (training-data lineage) shipped, every control resolves to an artifact Ledgix produces today.
Scope
These articles apply to providers and deployers of high-risk AI systems in the EU. Coverage spans transparency to deployers (Art. 13), human oversight design (Art. 14), accuracy/robustness/cybersecurity (Art. 15), provider obligations including log retention (Art. 16), data governance for training/validation/testing (Art. 10), conformity assessment (Art. 43), transparency when interacting with natural persons (Art. 52), and post-market monitoring (Art. 61). These high-risk obligations become applicable on 2026-08-02.
Controls covered
| Control | Evidence types | Article provision | Evidence summary |
|---|---|---|---|
| Art13(1) | events_jsonl / framework_mapping | Transparency and Provision of Information to Deployers | Per-decision reason, citations, and confidence scores plus the framework mapping document. |
| Art13(3) | policy_snapshots / model_cards / dataset_sheets | Instructions for Use — Characteristics, Limitations, Expected Output | Versioned policies plus signed model cards and dataset sheets. |
| Art14(1) | events_jsonl | Human Oversight — Design for Effective Oversight | HITL-reviewed events and human-principal linkages. |
| Art14(4) | events_jsonl | Human Oversight — Ability to Override or Stop | Denied events reflect operational intervention points. |
| Art10(1) | training_data_lineage / policy_snapshots | Data Governance — Training, Validation and Testing Data | Signed lineage records per model_ref plus governing data-governance policies. |
| Art10(3) | training_data_lineage / dataset_sheets | Data Governance — Relevance, Representativeness, Accuracy, and Completeness | Quality-check metadata, filters, and dataset sheets articulating sampling and representativeness. |
| Art10(2)(f) | bias_audits / dataset_sheets | Data Governance — Examination for Biases | Signed bias audits plus dataset sheets documenting examined biases. |
| Art10(2)(g) | bias_audits / incidents / policy_snapshots | Data Governance — Bias Mitigation Measures | Bias audits, incident records capturing mitigation actions, and policy revisions following findings. |
| Art15(1) | events_jsonl / key_history | Accuracy, Robustness and Cybersecurity | Confidence distribution supports accuracy metrics; signing-key lifecycle evidences cybersecurity posture. |
| Art15(3) | checkpoint_chain | Resilience Against Errors, Faults or Inconsistencies | Continuous Merkle integrity sequence supporting resilience claims. |
| Art16(a) | policy_snapshots | Provider Obligations — Quality Management System | Versioned policies represent operating QMS artifacts. |
| Art16(g) | events_jsonl / proof_index | Provider Obligations — Keep Automatically Generated Logs | Automatic per-decision log available for the six-month retention period. |
| Art61 | events_jsonl / incidents / checkpoint_chain | Post-market Monitoring System | Operational telemetry, signed incident records, and tamper-evident checkpoint sequence. |
| Art43 | impact_assessments / policy_snapshots | Conformity Assessment for High-Risk AI Systems | Signed conformity assessment records (ia_type=conformity_eu_ai_act) per system. |
| Art52 | impact_assessments / events_jsonl | Transparency Obligations for AI Systems Interacting with Natural Persons | Transparency measures documented per conformity assessment; per-decision reasons support user-facing transparency. |
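To illustrate how the Art15(3) row resolves to evidence, a tamper check over a checkpoint chain might look like the sketch below. It assumes each checkpoint record commits to its predecessor's hash; the field names (`seq`, `merkle_root`, `prev_hash`) are hypothetical, not the documented Ledgix schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel prev_hash for the first checkpoint

def _digest(cp):
    # Hash the canonical (sorted-key) serialization of the checkpoint body.
    return hashlib.sha256(json.dumps(cp, sort_keys=True).encode()).hexdigest()

def append_checkpoint(chain, seq, merkle_root):
    """Append a checkpoint that commits to the previous checkpoint's digest."""
    prev = _digest(chain[-1]) if chain else GENESIS
    cp = {"seq": seq, "merkle_root": merkle_root, "prev_hash": prev}
    chain.append(cp)
    return cp

def verify_checkpoint_chain(chain):
    """Walk the chain and confirm every record's prev_hash matches the
    digest of its predecessor, so a mid-chain edit breaks all later links."""
    prev = GENESIS
    for cp in chain:
        if cp["prev_hash"] != prev:
            return False
        prev = _digest(cp)
    return True
```

Because each link covers the full predecessor record, editing any checkpoint invalidates every subsequent `prev_hash`, which is what makes the sequence tamper-evident.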
Evidence types referenced
- events_jsonl — per-decision reason, citations, confidence, and oversight rationale.
- policy_snapshots — versioned policies as QMS artifacts and capability limits.
- model_cards — signed model cards enumerating intended use, performance, limitations.
- dataset_sheets — signed dataset sheets disclosing composition and known biases.
- training_data_lineage — signed lineage records per model_ref.
- bias_audits — per-window bias audit reports with four-fifths and chi-square findings.
- impact_assessments — signed conformity assessments (Art. 43) and transparency measures (Art. 52).
- incidents — post-market monitoring incidents with detection, root-cause, and remediation.
- checkpoint_chain — continuous integrity record supporting resilience and monitoring claims.
- key_history — cybersecurity posture via signing-key lifecycle.
- proof_index — Merkle inclusion index supporting integrity of retained logs.
- framework_mapping — deployer-facing control-to-evidence map.
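The four-fifths finding referenced under bias_audits can be sketched as follows. This is an illustrative computation of the classic disparate-impact screen, not the Ledgix implementation; the group and outcome field names are assumed.

```python
def four_fifths_check(decisions, group_key="subject_group", outcome_key="allowed"):
    """Four-fifths rule: each group's favorable-outcome rate must be at
    least 80% of the most favored group's rate. Returns, per group, the
    impact ratio and whether it clears the 0.8 threshold."""
    counts = {}  # group -> (total, favorable)
    for d in decisions:
        total, favorable = counts.get(d[group_key], (0, 0))
        counts[d[group_key]] = (total + 1, favorable + (1 if d[outcome_key] else 0))
    selection = {g: fav / tot for g, (tot, fav) in counts.items()}
    reference = max(selection.values())  # most favored group's rate
    return {g: (rate / reference, rate / reference >= 0.8)
            for g, rate in selection.items()}
```

A chi-square test of the same contingency table would typically accompany this ratio to flag whether the observed disparity is statistically significant.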
Known gaps (if any)
None — every control resolves to an artifact Ledgix produces today. Note that conformity assessments (Art. 43) require tenant-authored AIA records; the admin console provides a conformity-EU template pre-populated from operational data.
Audit pack workflow
Export an evidence ZIP for this framework from the admin console's Evidence Exports panel by selecting EU AI Act — Extended Articles 10, 13, 14, 15, 16, 43, 52, 61 and a time window. Each control's evidence_locators[] in the included framework_mapping.json points to the corresponding file in the ZIP.
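A deployer-side sanity check of an exported pack could look like the following sketch. It assumes the layout described above — framework_mapping.json at the ZIP root, with per-control evidence_locators[] naming member files; the exact JSON shape (`controls`, `id` keys) is an assumption, not the documented export schema.

```python
import json
import zipfile

def check_audit_pack(zip_source):
    """Confirm every evidence locator in framework_mapping.json resolves to
    a member of the exported ZIP. Returns {control_id: [missing locators]}."""
    missing = {}
    with zipfile.ZipFile(zip_source) as zf:
        members = set(zf.namelist())
        mapping = json.loads(zf.read("framework_mapping.json"))
        for control in mapping["controls"]:
            absent = [loc for loc in control["evidence_locators"]
                      if loc not in members]
            if absent:
                missing[control["id"]] = absent
    return missing
```

An empty result means every locator resolved; anything else identifies exactly which controls lack their referenced evidence files.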
References
- Framework mapping JSON: vault/internal/compliance/frameworks/eu_ai_act_extended.json
- Canonical source: EU AI Act (Regulation (EU) 2024/1689) — EUR-Lex