Customer developer docs

EU AI Act — Extended Articles 10, 13, 14, 15, 16, 43, 52, 61

EU AI Act Arts. 10, 13–16, 43, 52, 61 — transparency, human oversight, accuracy, data governance, conformity assessment, post-market monitoring.

This mapping extends the Article 12 baseline with the transparency, human-oversight, accuracy/robustness, data-governance, conformity-assessment, and post-market-monitoring provisions that complete the high-risk AI system obligations. Ledgix evidences each article with a combination of the ledger, signed impact assessments, model cards, dataset sheets, bias audits, training-data lineage, and incidents.

Status: Full — every control resolves to an artifact Ledgix produces today, now that Phase 2 (model cards, dataset sheets, incidents), Phase 3 (bias audits), Phase 4 (impact assessments), and Phase 9 (training-data lineage) have shipped.

Scope

These articles apply to providers and deployers of high-risk AI systems in the EU. Coverage spans transparency to deployers (Art. 13), human oversight design (Art. 14), accuracy/robustness/cybersecurity (Art. 15), provider obligations including log retention (Art. 16), data governance for training/validation/testing (Art. 10), conformity assessment (Art. 43), transparency when interacting with natural persons (Art. 52), and post-market monitoring (Art. 61). The regulation becomes applicable on 2026-08-02.

Controls covered

| Control | Evidence types | Requirement | Description |
| --- | --- | --- | --- |
| Art13(1) | events_jsonl / framework_mapping | Transparency and Provision of Information to Deployers | Per-decision reason, citations, and confidence scores plus the framework mapping document. |
| Art13(3) | policy_snapshots / model_cards / dataset_sheets | Instructions for Use — Characteristics, Limitations, Expected Output | Versioned policies plus signed model cards and dataset sheets. |
| Art14(1) | events_jsonl | Human Oversight — Design for Effective Oversight | HITL-reviewed events and human-principal linkages. |
| Art14(4) | events_jsonl | Human Oversight — Ability to Override or Stop | Denied events reflect operational intervention points. |
| Art10(1) | training_data_lineage / policy_snapshots | Data Governance — Training, Validation and Testing Data | Signed lineage records per model_ref plus governing data-governance policies. |
| Art10(3) | training_data_lineage / dataset_sheets | Data Governance — Relevance, Representativeness, Accuracy, and Completeness | Quality-check metadata, filters, and dataset sheets articulating sampling and representativeness. |
| Art10(2)(f) | bias_audits / dataset_sheets | Data Governance — Examination for Biases | Signed bias audits plus dataset sheets documenting examined biases. |
| Art10(2)(g) | bias_audits / incidents / policy_snapshots | Data Governance — Bias Mitigation Measures | Bias audits, incident records capturing mitigation actions, and policy revisions following findings. |
| Art15(1) | events_jsonl / key_history | Accuracy, Robustness and Cybersecurity | Confidence distribution supports accuracy metrics; signing-key lifecycle evidences cybersecurity posture. |
| Art15(3) | checkpoint_chain | Resilience Against Errors, Faults or Inconsistencies | Continuous Merkle integrity sequence supporting resilience claims. |
| Art16(a) | policy_snapshots | Provider Obligations — Quality Management System | Versioned policies represent operating QMS artifacts. |
| Art16(g) | events_jsonl / proof_index | Provider Obligations — Keep Automatically Generated Logs | Automatic per-decision log available for the six-month retention period. |
| Art61 | events_jsonl / incidents / checkpoint_chain | Post-market Monitoring System | Operational telemetry, signed incident records, and tamper-evident checkpoint sequence. |
| Art43 | impact_assessments / policy_snapshots | Conformity Assessment for High-Risk AI Systems | Signed conformity assessment records (ia_type=conformity_eu_ai_act) per system. |
| Art52 | impact_assessments / events_jsonl | Transparency Obligations for AI Systems Interacting with Natural Persons | Transparency measures documented per conformity assessment; per-decision reasons support user-facing transparency. |
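As a quick local check of the Art15(3) checkpoint_chain evidence, a verifier can confirm that consecutive checkpoints link without gaps. This is a minimal sketch; the field names (`seq`, `merkle_root`, `prev_root`) are illustrative assumptions about the export's JSONL schema, not a documented contract:

```python
import json

def verify_checkpoint_chain(path: str) -> bool:
    """Walk a checkpoint_chain export and confirm each record links to its
    predecessor. Field names (seq, merkle_root, prev_root) are assumed."""
    prev = None
    with open(path) as fh:
        for line in fh:
            cp = json.loads(line)
            if prev is not None:
                if cp["prev_root"] != prev["merkle_root"]:
                    return False  # broken linkage: tamper signal
                if cp["seq"] != prev["seq"] + 1:
                    return False  # gap in the checkpoint sequence
            prev = cp
    return True
```

A full verification would also recompute each Merkle root from the underlying events; the sketch only checks the hash linkage between checkpoints.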

Evidence types referenced

  • events_jsonl — per-decision reason, citations, confidence, and oversight rationale.
  • policy_snapshots — versioned policies as QMS artifacts and capability limits.
  • model_cards — signed model cards enumerating intended use, performance, limitations.
  • dataset_sheets — signed dataset sheets disclosing composition and known biases.
  • training_data_lineage — signed lineage records per model_ref.
  • bias_audits — per-window bias audit reports with four-fifths and chi-square findings.
  • impact_assessments — signed conformity assessments (Art. 43) and transparency measures (Art. 52).
  • incidents — post-market monitoring incidents with detection, root-cause, and remediation.
  • checkpoint_chain — continuous integrity record supporting resilience and monitoring claims.
  • key_history — cybersecurity posture via signing-key lifecycle.
  • proof_index — Merkle inclusion index supporting integrity of retained logs.
  • framework_mapping — deployer-facing control-to-evidence map.
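For example, the confidence distribution backing Art15(1) accuracy metrics can be summarized straight from an events_jsonl export. The sketch below assumes each line is a JSON object with a numeric `confidence` field, per the evidence-type description above; verify the exact schema against a real export:

```python
import json
from collections import Counter

def confidence_histogram(path: str) -> Counter:
    """Bucket per-decision confidence scores from an events_jsonl export
    into deciles (0.0-0.9). The 'confidence' field name is an assumption
    based on the evidence-type description, not a documented schema."""
    hist = Counter()
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            conf = event.get("confidence")
            if conf is None:
                continue  # skip events without a confidence score
            bucket = min(int(conf * 10), 9) / 10  # clamp 1.0 into 0.9 bucket
            hist[bucket] += 1
    return hist
```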

Known gaps (if any)

None — every control resolves to an artifact Ledgix produces today. One dependency to note: conformity assessments (Art. 43) require tenant-authored impact-assessment records; the admin console provides a conformity-EU template pre-populated from operational data.

Audit pack workflow

Export an evidence ZIP for this framework from the admin console's Evidence Exports panel by selecting EU AI Act — Extended Articles 10, 13, 14, 15, 16, 43, 52, 61 and a time window. Each control's evidence_locators[] in the included framework_mapping.json points to the corresponding file in the ZIP.
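A minimal sketch of consuming such an export, assuming framework_mapping.json contains a `controls[]` array whose entries carry an `id` and an `evidence_locators[]` list (an inferred layout, not a documented schema):

```python
import json
import zipfile

def list_control_evidence(zip_path: str) -> dict:
    """Map each control in framework_mapping.json to its evidence files and
    whether each file is actually present in the export ZIP. The mapping
    layout (controls[] with id / evidence_locators[]) is an assumption."""
    with zipfile.ZipFile(zip_path) as zf:
        names = set(zf.namelist())
        mapping = json.loads(zf.read("framework_mapping.json"))
        resolved = {}
        for control in mapping["controls"]:
            resolved[control["id"]] = [
                (loc, loc in names) for loc in control["evidence_locators"]
            ]
        return resolved
```

Flagging locators that are missing from the archive gives an early integrity check before handing the pack to an auditor.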
