Liora Audit Chain provides tamper evidence for AI decision records, backed by mechanized formal verification: the core integrity theorems are machine-checked in Lean 4 with zero unproved placeholders. Every record is cryptographically chained, independently verifiable, and signed with quantum-resistant digital signatures. The system detects modification, deletion, insertion, reordering, and truncation of records through mathematical proof, not policy.
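The chaining idea can be illustrated with a minimal sketch (this is not the Liora API; the record fields, genesis value, and SHA-256 choice here are assumptions for illustration). Because each record's hash covers its payload plus the previous record's hash, changing, deleting, inserting, or reordering any record invalidates every hash that follows it:

```python
# Illustrative tamper-evident hash chain -- NOT the Liora Audit Chain API.
# Assumed for the sketch: JSON payloads, SHA-256, an all-zero genesis value.
import hashlib
import json

GENESIS = "0" * 64  # assumed genesis value for the first record

def record_hash(payload: dict, prev_hash: str) -> str:
    """Hash a record's canonicalized payload together with the previous hash."""
    body = json.dumps(payload, sort_keys=True) + prev_hash
    return hashlib.sha256(body.encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], GENESIS
    for p in payloads:
        h = record_hash(p, prev)
        chain.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = GENESIS
    for rec in chain:
        if rec["prev"] != prev or record_hash(rec["payload"], prev) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = build_chain([{"verdict": "approve", "confidence": 0.97},
                     {"verdict": "deny", "confidence": 0.88}])
assert verify_chain(chain)
chain[0]["payload"]["verdict"] = "deny"   # tamper with an early record
assert not verify_chain(chain)            # detected: its hash no longer matches
```

A verifier needs only the genesis value and the records themselves, which is what makes the check independent of the system that wrote the log.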
Key Capabilities:
- Tamper-evident hash chains with Merkle inclusion proofs for independent verification — formally verified integrity properties
- NIST FIPS 204 digital signatures (ML-DSA) for quantum-resistant non-repudiation
- NIST FIPS 203 key encapsulation (ML-KEM) with forward-secure key ratcheting
- Verified entropy sovereignty: cryptographic randomness from a formally verified software source, independent of hardware RNG
- AI-native schemas for decision verdicts, confidence scores, and proof dimensions
- Extensible proof attachments for application-layer authentication (computational identity, attestation tokens, third-party signatures)
- Regulatory compliance mapping: EU AI Act (Article 12), NIST AI RMF, FDA AI/ML, DoD AI Ethical Principles, SR 11-7 model risk guidance
- Deployment: Embedded C++ library, sidecar process, gateway service, or Docker container. Air-gapped capable.
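The Merkle inclusion proofs mentioned above let an auditor who holds only the tree root verify that a single record is present, using a logarithmic-size proof path. A minimal sketch of the mechanism (not the Liora API; the leaf encoding and SHA-256 choice are assumptions for illustration):

```python
# Illustrative Merkle inclusion proof -- NOT the Liora Audit Chain API.
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def _pad(level):
    """Duplicate the last node when a level has odd length."""
    return level + [level[-1]] if len(level) % 2 else level

def merkle_root(leaves):
    level = [H(l) for l in leaves]
    while len(level) > 1:
        level = _pad(level)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Collect the sibling hash at each level, tagged with its side."""
    level, proof = [H(l) for l in leaves], []
    while len(level) > 1:
        level = _pad(level)
        proof.append((level[index ^ 1], index % 2 == 0))  # (sibling, sibling-is-right?)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    """Recompute the root from the leaf and the proof path; compare."""
    node = H(leaf)
    for sibling, sibling_is_right in proof:
        node = H(node + sibling) if sibling_is_right else H(sibling + node)
    return node == root

leaves = [b"rec0", b"rec1", b"rec2", b"rec3"]
root = merkle_root(leaves)
proof = inclusion_proof(leaves, 2)
assert verify_inclusion(b"rec2", proof, root)
assert not verify_inclusion(b"tampered", proof, root)
```

For a log of n records the proof contains about log2(n) hashes, so the auditor never needs the full record set, only the published root.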
Regulatory Compliance
Liora Audit Chain is designed to address AI logging, traceability, and auditability requirements across multiple regulatory and governance frameworks:
- EU AI Act — Article 12 record-keeping requirements for high-risk AI systems
- NIST AI Risk Management Framework (AI RMF 1.0) — Govern, Map, Measure, and Manage functions
- FDA AI/ML Guidance — Good Machine Learning Practice principles and 21 CFR Part 11
- DoD AI Ethical Principles — Traceable, Reliable, and Governable
- Federal Reserve SR 11-7 (adopted by the OCC as Bulletin 2011-12) — model risk management for financial institutions
Detailed compliance mapping with requirement-by-requirement technical coverage is available under NDA.