Five trains. One defense.
Regulated Enterprise
The EU AI Act, ISO/IEC 42001, NIST AI RMF, ISO/IEC 23894, and US sector enforcers (SEC, FTC, DoD) are converging on AI decisions. They don't accept dashboards as evidence. They accept signed records that reconstruct what the system knew, reasoned, and authorized. Anything else is legal exposure.
Regulated enterprise AI governance requires evidence that survives an enforcement action — not dashboards, logs, or policy documents. The regulatory regimes converging on AI — anchored today by the EU AI Act, ISO/IEC 42001, NIST AI RMF, ISO/IEC 23894, and sector enforcers, with further frameworks queued behind them — evaluate an AI system on whether it can reconstruct, for any specific decision, what the system knew, what reasoning it applied, what evidence it was authorized to use, and why the output was fit to emit — verifiably, without trusting the AI vendor's own reporting. Kenshiki Labs produces a signed Claim Ledger entry for every governed decision, mapped to each regime's specific obligation and replayable by third-party auditors. The mapping expands as new frameworks come into force; the artifact contract does not change.
If the system cannot show what policy was in scope, which evidence was authorized, whether each claim held up, and why the output was fit to emit — under adversarial scrutiny, verifiable without trusting the vendor — fluent enterprise automation becomes a direct regulatory liability. For a company with $100B in global revenue, a single 7% EU AI Act fine is a $7B exposure. You don't vibe your way out of that. You prove your way out of it. Or you don't.
The five-train convergence (and counting)
Regulated AI governance isn't one law. It's five trains converging on the same evidentiary standard today — and the list of frameworks pressing on AI decisions isn't closed: state AI laws, follow-on ISO standards, NIST AI RMF 2.0, DoD RAI updates, and a queue of international frameworks sit behind the five (see the full list below). Every new framework adds cross-references to every existing one, so the cost of mapping compliance grows faster than headcount can absorb. This is why the architecture below — the same Claim Ledger contract regardless of which framework is asking — matters more than per-regime point solutions.
- EU AI Act (Reg 2024/1689): binding law, extraterritorial, fines up to 7% of global turnover. High-risk systems must have a Quality Management System, logging, human oversight, and risk-management documentation that survives enforcement review.
- ISO/IEC 42001: first international AI management system standard. Enterprises are audited against it to prove AI governance is designed, operated, and measured.
- NIST AI RMF 1.0 (2.0 queued): US federal and enterprise gold standard. Govern, Map, Measure, Manage. Most programs fail the Measure and Manage functions because they lack an evidence layer.
- ISO/IEC 23894: AI risk management technical baseline. Bridges IT risk to AI-specific failure modes. The technical controls that back the ISO 42001 management story.
- Sector enforcers: SEC and FTC actions against AI washing; DoD RAI Strategy for federal contracts. Traceability and reliability aren't policy goals — they're procurement gates.
- Queued: state AI laws (CO, CA, NY, IL, TX), ISO 5338/25059/24029, OECD principles, UK AISI, Singapore MGF, Canada AIDA, EU DORA, FDA AI/ML guidance, and the Council of Europe AI Convention — each adding cross-edges to every regime above. The regulatory crosswalk compounds non-linearly with every new framework.
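The "compounds non-linearly" claim can be made concrete with a simple pairwise model. This sketch assumes, as an illustration only, that every framework must be cross-mapped against every other one, so the crosswalk has n(n-1)/2 edges:

```python
def crosswalk_edges(n: int) -> int:
    # Pairwise cross-references among n frameworks (simplified model:
    # assumes every framework cross-references every other one).
    return n * (n - 1) // 2

print(crosswalk_edges(5))   # 10 edges for the five anchor regimes
print(crosswalk_edges(16))  # 120 edges once the queue lands
```

Tripling the framework count roughly twelvefolds the mapping work — which is why a single artifact contract beats per-regime point solutions.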
Who this is for
Chief Risk, Compliance, Security, and Audit leaders
preparing the company's defense against EU AI Act enforcement, ISO 42001 certification, NIST AI RMF assessment, and sector-regulator scrutiny — with evidence that survives an adversarial audit, not a dashboard that hopes it won't be challenged.
External auditors, regulators, and enterprise buyers
evaluating the company's AI governance under diligence. They need to verify — without trusting the vendor's own reporting — that a specific decision was properly scoped, evidenced, and authorized. Kenshiki's signed records let them recompute the hashes and confirm the chain themselves.
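The recompute-and-confirm step is, at its core, a hash-chain check. A minimal sketch follows, assuming a simplified entry layout (`body`, `prev_digest`, `digest`); the field names and genesis sentinel are illustrative assumptions, not Kenshiki's actual record format, and a real verifier would also check the publisher's signature over each digest:

```python
import hashlib
import json

GENESIS = "0" * 64  # genesis sentinel (assumption, not the actual format)

def entry_digest(body: dict, prev_digest: str) -> str:
    """SHA-256 over a canonical JSON encoding of the entry body,
    chained to the previous entry's digest."""
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_digest + canonical).encode()).hexdigest()

def append_entry(ledger: list, body: dict) -> None:
    """Append a new decision record, linking it to the chain head."""
    prev = ledger[-1]["digest"] if ledger else GENESIS
    ledger.append({"body": body, "prev_digest": prev,
                   "digest": entry_digest(body, prev)})

def verify_chain(ledger: list) -> bool:
    """Auditor-side check: recompute every digest from the raw bodies
    and confirm each entry points at its predecessor."""
    prev = GENESIS
    for e in ledger:
        if e["prev_digest"] != prev or entry_digest(e["body"], prev) != e["digest"]:
            return False
        prev = e["digest"]
    return True

ledger = []
append_entry(ledger, {"decision": "loan-123", "policy": "v4.2", "claims_passed": 7})
append_entry(ledger, {"decision": "loan-124", "policy": "v4.2", "claims_passed": 5})
print(verify_chain(ledger))             # True: the chain recomputes cleanly
ledger[0]["body"]["claims_passed"] = 9  # tamper with a historical record
print(verify_chain(ledger))             # False: the recomputed digest no longer matches
```

The point of the exercise: because the check runs entirely on the auditor's side from the raw record bodies, nothing in it depends on trusting the vendor's reporting.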
Go deeper
Regime × artifact compliance map
The page to send to general counsel — each of the five regulatory regimes mapped to the specific Kenshiki artifact that satisfies it, with replay instructions.
Public assurance ledger
Every authorization boundary tracked against formal specification, adversarial testing, hardware attestation, and independent replay — with commit hashes.
Claim Ledger
The per-inference record behind every regulatory defense — decomposed claims, signed evidence, audit-grade reconstruction.
Refinery
Private governed runtime for regulated enterprise workflows without giving up the Claim Ledger contract.
Deterministic Admissibility
The pre-flight obligation gate for version-current evidence before retrieval begins.
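A pre-flight gate of this kind can be sketched as a check that runs before any retrieval call is issued. The source names, version scheme, and `admissible` rule below are illustrative assumptions, not Kenshiki's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceSource:
    name: str
    version: str      # version of the evidence actually on hand
    authorized: bool  # whether policy puts this source in scope

def admissible(source: EvidenceSource, required_versions: dict) -> bool:
    """Pre-flight gate: a source may be retrieved only if it is both
    authorized and at the exact version the current policy requires."""
    return source.authorized and required_versions.get(source.name) == source.version

required = {"sanctions-list": "2025-06", "credit-policy": "v4.2"}
sources = [
    EvidenceSource("sanctions-list", "2025-06", True),
    EvidenceSource("credit-policy", "v4.1", True),   # stale: blocked before retrieval
    EvidenceSource("crm-notes", "n/a", False),       # unauthorized: blocked
]
cleared = [s.name for s in sources if admissible(s, required)]
print(cleared)  # ['sanctions-list']
```

The design point the gate illustrates: stale or out-of-scope evidence is excluded deterministically, before retrieval begins, rather than filtered after the model has already seen it.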