Kenshiki

About

We build AI systems that have to prove themselves.

Kenshiki Labs builds the governance control plane that sits between a question and a consequential answer. We do not ask people to trust fluent output on instinct. We require evidence, verification, and an explicit decision before anything leaves the system.

Why Kenshiki exists

AI rarely fails like normal software. There is no obvious crash, no red error state, no stack trace for the person relying on the answer. It fails by sounding authoritative before anyone has established whether the authority is real. Kenshiki exists because bigger models do not solve that problem. Better fluency only makes unsupported reasoning harder to spot.

What Kenshiki is

Kenshiki is an AI governance control plane. Kura defines what counts as admissible evidence. Kadai turns governed evidence into bounded answers. The Compiler constrains the request before generation, the Claim Ledger checks each emitted claim against evidence, and the Boundary Gate makes the final release decision. Two APIs. One contract. Same system whether you start on shared infrastructure or run inside your own boundary.

  • Kura stores governed source material with provenance, structure, and retrieval boundaries
  • Kadai returns answers bounded by what Kura can support
  • Claim Ledger and Boundary Gate decide what can actually be emitted

What Kenshiki is not

The wrong mental model slows people down. Kenshiki is not another model endpoint, not a content-moderation layer, and not a monitoring dashboard that tells you after the fact that something went wrong. It is the system that governs what the generation layer can rely on and what it is allowed to emit.

  • Not a model — it governs the generation layer instead of replacing it
  • Not a content filter — it checks evidence, not tone or topic
  • Not a monitoring tool — it intervenes before emission, not after reliance
  • Not a replacement for your data — it proves against your sources rather than hand-waving around them

What makes Kenshiki different

We are not trying to make a model look more responsible. We are moving authority out of the model entirely. The model can propose language. It cannot certify its own truth, invent its own provenance, or decide on its own that an answer is safe to rely on.

  • No evidence, no emission
  • Authority lives outside the model
  • Every consequential answer is reduced to claims that can be checked
  • The same governance contract holds across Workshop, Refinery, and Clean Room
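
The "no evidence, no emission" rule can be read as a release decision over those claims. Continuing the hypothetical types from the earlier sketch, and simplified to a presence check where the real Boundary Gate would also verify that the cited evidence actually supports each claim:

    def boundary_gate(answer: BoundedAnswer,
                      evidence_index: dict[str, EvidenceRecord]) -> BoundedAnswer:
        """Release an answer only if every claim cites evidence that is in scope."""
        for claim in answer.claims:
            cited = [evidence_index.get(eid) for eid in claim.evidence_ids]
            if not claim.evidence_ids or any(ev is None for ev in cited):
                answer.released = False  # no evidence, no emission
                return answer
        answer.released = True
        return answer

Nothing in that decision asks the model for its opinion. The gate consults only the claims and the evidence index, which is the sense in which authority lives outside the model.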

How it runs

Kenshiki is designed so teams can start where they are and deepen the proof boundary over time. Workshop lets you use shared Kadai or the public-model APIs you already have. Refinery moves the same system into your environment with stronger local control. Clean Room runs the full stack inside an air-gapped boundary when external connectivity is not an option.

  • Workshop: shared Kadai or BYOK/public API with governed retrieval and claim checking
  • Refinery: private deployment with stronger telemetry and chain of custody
  • Clean Room: air-gapped execution with signed, hardware-rooted attestation
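
One way to read the three tiers is as different values of a deployment profile over an unchanged pipeline. The sketch below is an assumption about how that could be expressed, not Kenshiki's actual configuration format:

    # Hypothetical deployment profiles: the governance steps stay the same,
    # only where they run and how the boundary is enforced changes.
    DEPLOYMENT_PROFILES = {
        "workshop": {
            "kadai": "shared or BYOK",
            "network": "internet",
            "attestation": "none",
        },
        "refinery": {
            "kadai": "private",
            "network": "restricted",
            "attestation": "telemetry and chain of custody",
        },
        "clean_room": {
            "kadai": "private",
            "network": "air-gapped",
            "attestation": "signed, hardware-rooted",
        },
    }

    # The contract itself is constant across all three profiles.
    PIPELINE = ["compile_request", "retrieve_governed_evidence",
                "check_claims", "boundary_gate"]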

Who it is for

Kenshiki is for teams operating where a fluent mistake can move money, shape care, expose intelligence, or create legal and regulatory risk. We build for operators who will be asked to show their work later, not just teams trying to ship a demo now.

  • Defense and intelligence workflows where sourcing must survive review
  • Government and public-sector systems that face oversight and disclosure pressure
  • Healthcare and life-sciences teams that need evidence-backed recommendations
  • Regulated enterprises that must explain outputs under audit, litigation, or policy review

Leadership

Kenshiki is led by two founders who have spent roughly 35 to 40 years each working in computer science, software systems, and data-intensive production environments. The company is being built by people who have spent their careers close to the places where technical claims either hold up under pressure or fail expensively.

  • One founder studied computer science and mathematics at MIT, worked at Bell Labs, served on Microsoft's Windows Base Team in the 2000s, later worked on connected-car systems, and most recently served as Chief Data Scientist at an ad-tech company.
  • The other founder earned a computer science degree from Ohio State and later served as VP of Software Engineering at CBC Innovis.
  • Together they bring deep experience across operating systems, enterprise software, data systems, and the practical realities of building technology that has to survive scrutiny.

How to evaluate us

The right way to assess Kenshiki is not by whether the prose sounds good. It is by whether the system can show what evidence was in scope, what claims were made, what held up, what failed, and why an answer was allowed to leave at all. That is why we publish our architecture, pricing model, and failure cases openly.

  • Read the platform architecture to see where authority actually lives
  • Review pricing to understand how governance is separated from raw inference
  • Use the AI Incident Archive to compare our design against real failure modes
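
As an illustration of what showing that work could look like for a single answer, here is a hypothetical audit record. The field names and the kura:// identifiers are assumptions made for the example, not the product's actual schema:

    # Hypothetical audit record for one governed answer (illustrative fields only).
    audit_record = {
        "request_id": "example-0001",
        "evidence_in_scope": ["kura://policy/retention-2023", "kura://contracts/msa-v4"],
        "claims": [
            {"text": "The retention period is seven years.",
             "evidence": ["kura://policy/retention-2023"], "verified": True},
            {"text": "The MSA permits subcontracting.",
             "evidence": [], "verified": False},
        ],
        "boundary_decision": "withheld",
        "reason": "one claim had no supporting evidence in scope",
    }

A record like this answers the questions above directly: what was in scope, what was claimed, what held up, and why the answer did or did not leave.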

Start here

See how the contract actually lands.

If you're evaluating fit, pricing is the fastest next read. If you want to inspect the runtime contract, go to the architecture page. If you want to talk through your environment, contact us directly.