Kenshiki

Resources

Documentation

These documents are easier to use as guided tracks than as a flat reading shelf. Start with the problem, move into the runtime contract, and end on the failure patterns that make governed inference necessary in the first place.

Track 01

Why this exists

Start with the thesis. These papers explain why authority has to move outside the model, why scale does not rescue reliability, and why the market window is already open.

Track 02

How it works

Then read the runtime contract. This path moves from system shape to ingestion, prompt compilation, admissibility, claim checking, and the APIs that expose the stack.

Architecture spec [ FOUNDING RFC ]

Governed Intelligence Architecture

The canonical runtime contract for the whole system.

The unified architecture specification that integrates SIRE identity, air-gapped ingestion, CFPO prompt compilation, Tri-Pass inference, and the Claim Ledger into one deterministic, auditable pipeline.

Developer guide [ DEVELOPER GUIDE ]

Governed Intelligence API

How the contract shows up in the API surface.

How to integrate with the Kenshiki API: verified vs fallback responses, tenant isolation, ReBAC authorization, attestation, streaming, and error handling.
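The distinction between verified and fallback responses can be sketched as a client-side routing rule. This is a hypothetical illustration only: the field names (`status` and the status values) are assumptions, not the documented Kenshiki payload shape.

```python
# Hypothetical sketch of a client routing Kenshiki API responses by
# verification status. Field names are illustrative assumptions.

def route_response(payload: dict) -> str:
    """Decide how a client might treat a response: surface verified
    output, degrade fallback output to advisory, reject anything else."""
    status = payload.get("status")
    if status == "verified":
        # Verified responses carry claim-level attribution; safe to surface.
        return "surface"
    if status == "fallback":
        # Fallback responses lack full verification; mark as advisory.
        return "advisory"
    # Unknown status: fail closed rather than guessing.
    return "reject"

print(route_response({"status": "verified"}))  # surface
print(route_response({"status": "fallback"}))  # advisory
print(route_response({"status": "draft"}))     # reject
```

The design choice worth noting is the final branch: an unrecognized status is rejected rather than treated as usable output, mirroring the fail-closed posture described throughout these documents.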

Phase 0 [ FOUNDING RFC ]

The Ingestion Pipeline

How raw sources become governed evidence.

How raw documents become governed evidence: air-gapped parsing, deterministic chunking, streaming embeddings, and geometric boundary calculation — the Phase 0 that feeds Kura.
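"Deterministic chunking" can be illustrated with a minimal sketch: fixed-size splits whose identities are derived from content hashes, so re-ingesting identical bytes always yields identical chunk identities. The function name and chunk shape here are assumptions for illustration, not the Phase 0 implementation.

```python
import hashlib

def chunk_deterministically(text: str, size: int = 200) -> list[dict]:
    """Split text into fixed-size chunks and derive a stable ID for each.
    Identical input bytes always produce identical chunk identities."""
    chunks = []
    for start in range(0, len(text), size):
        body = text[start:start + size]
        # Content-derived ID: no randomness, no timestamps.
        chunk_id = hashlib.sha256(body.encode("utf-8")).hexdigest()[:16]
        chunks.append({"id": chunk_id, "offset": start, "text": body})
    return chunks

doc = "governed evidence " * 40
first = chunk_deterministically(doc)
second = chunk_deterministically(doc)
assert [c["id"] for c in first] == [c["id"] for c in second]
```

The point of the sketch is that chunk identity depends only on content, which is what makes downstream evidence tagging and auditing reproducible.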

Evidence identity [ FOUNDING RFC ]

The SIRE Identity System

How SIRE defines what a source is and where it can speak.

The deterministic tagging methodology that controls what evidence enters the retrieval boundary. SIRE defines the identity of every source document in Kura — not by what the model thinks, but by what the evidence actually is.

Phase 1 [ FOUNDING RFC ]

Prompt Governance

How the Prompt Compiler turns a loose prompt into a governed query.

The specification that defines how Kenshiki compiles prompts: CFPO ordering, evidence-to-zone mapping, compiler invariants, and the enforcement contract between the Prompt Compiler and the Claim Ledger.

Phase 2 [ FOUNDING RFC ]

The HAIC Framework

The HAIC architecture behind generate, decompose, verify.

The original architecture design that proposed externalizing the truth boundary, multi-pass causal verification, and cryptographic claim attribution — the intellectual foundation of the Kenshiki platform.

Admissibility [ FOUNDING RFC ]

Deterministic Admissibility Gating

How obligations and evidence sufficiency are enforced before emission.

The admissibility engine that resolves regulatory obligations to human-approved document versions, enforces transactional supersession, and fails closed with actionable remediation — the gate that runs before retrieval begins.
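The fail-closed behavior described above can be sketched as a simple gate: every obligation must resolve to a human-approved document version before anything proceeds, and any gap fails the request with remediation attached. The obligation identifiers and return shape are assumptions for illustration.

```python
def admissibility_gate(obligations: list[str],
                       approved_versions: dict[str, str]) -> dict:
    """Resolve each obligation to an approved document version.
    Any unresolved obligation fails the gate closed, with actionable
    remediation, before retrieval would begin."""
    missing = [o for o in obligations if o not in approved_versions]
    if missing:
        # Fail closed: no partial admission, and say what to fix.
        return {
            "admissible": False,
            "remediation": [f"approve a document version for {o}"
                            for o in missing],
        }
    return {
        "admissible": True,
        "resolved": {o: approved_versions[o] for o in obligations},
    }

# Hypothetical obligation ID, used only for illustration.
print(admissibility_gate(["SOX-404"], {}))
print(admissibility_gate(["SOX-404"], {"SOX-404": "v3"}))
```

The sketch keeps the two outcomes asymmetric on purpose: success returns the exact resolved versions for the audit trail, while failure returns instructions rather than degraded output.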

Observability [ FOUNDING RFC ]

How Kenshiki Reads the Model

How the Claim Ledger reads what the model actually relied on.

Inference-time observability: the signals Kenshiki uses to inspect token confidence, entailment, stability, and causal attribution before unsupported output reaches operations.

Track 03

What fails without it

Finish with the failure patterns. These documents make the cost of unsupported authority impossible to shrug off as a problem that exists only on paper.

Incident archive [ LIVE EVIDENCE ]

See the empirical failure record

The archive turns the abstract risk argument into sourced incidents across healthcare, finance, government, legal, and consumer harm. It is the fastest way to test whether the architecture is solving a real problem.

Next Step

Use the documents to pick your next proof surface.

If you understand the thesis and want to inspect the runtime contract, go to Architecture. If you want to see how the system is packaged and bought, go to Pricing. If you already know the fit, start in Workshop.