Start governed synthesis without private infrastructure
Workshop
Shared Kadai or BYOK. Governed outputs.
Workshop is built for the moment a fluent answer sounds right and you hesitate. It runs the full Kenshiki governance pipeline on shared infrastructure. Generation can happen through Kenshiki-hosted Kadai models or a connected model API such as GPT, Claude, or OpenRouter. The model still generates the language; Kenshiki controls what it sees, evaluates what comes back, and decides what reaches a user. In the three-plane architecture, Compiler and Kura handle build, Kadai orchestrates, and Ledger and Gate enforce control, so cross-plane policy propagation has no seams. SIRE provides portable agent identity, so evidence scope follows each caller across runtimes.
Without this: the model answers directly, and the burden of deciding whether it's usable falls on whoever reads it. You discover problems after someone relies on them.
Today
Your team calls a model directly. It returns fluent text. Someone reads it, decides it sounds right, and passes it along. When that output is questioned — in a review, an audit, a legal proceeding — no one can show what it was based on.
With Workshop
The same model can still generate the language — shared Kadai or a connected API — but now inside a governed loop. The prompt is compiled, evidence is retrieved, claims are checked, and the response gets an explicit state before it reaches anyone.
How Workshop works
A user question enters the Kenshiki pipeline before the generation layer sees it. The prompt is compiled, evidence is retrieved, and bounded context is sent through Workshop's shared generation lane: Kenshiki-hosted Kadai or a connected model API. The response comes back as a proposal, is decomposed into claims, checked against evidence, and assigned a state before it reaches anyone.
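The stages above can be sketched as a single governed loop. This is a hypothetical illustration only: the function names, the `Claim` shape, and the state labels (`released`, `held`) are placeholders, not the real Kenshiki API or its actual output-state names.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    supported: bool  # did retrieved evidence back this claim?

@dataclass
class GovernedResponse:
    claims: list
    state: str  # placeholder label, not a real Kenshiki state name

def compile_prompt(question: str) -> str:
    # Stage 1: the question is compiled before the generation layer sees it.
    return f"[bounded] {question}"

def retrieve_evidence(prompt: str) -> list:
    # Stage 2: evidence is retrieved to bound the generation context.
    # Hard-coded here for illustration.
    return ["doc-1: the refund window is 30 days"]

def generate(prompt: str, evidence: list) -> str:
    # Stage 3: shared Kadai or a connected model API renders a proposal.
    return "Refunds are accepted within 30 days."

def decompose_and_check(proposal: str, evidence: list) -> list:
    # Stage 4: the proposal is decomposed into claims and checked.
    # A single claim and a substring check stand in for real verification.
    return [Claim(proposal, any("30 days" in e for e in evidence))]

def assign_state(claims: list) -> str:
    # Stage 5: an explicit state is assigned before anything is released.
    return "released" if all(c.supported for c in claims) else "held"

def governed_answer(question: str) -> GovernedResponse:
    prompt = compile_prompt(question)
    evidence = retrieve_evidence(prompt)
    proposal = generate(prompt, evidence)
    claims = decompose_and_check(proposal, evidence)
    return GovernedResponse(claims=claims, state=assign_state(claims))
```

The point of the shape, not the stubs: the model only ever appears inside `generate`, with every stage before and after it owned by the pipeline.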
What Workshop is
The full Kenshiki bounded-synthesis pipeline on shared infrastructure. Compiler, retrieval, Claim Ledger, and output-state assignment all run inside Kenshiki. Generation can happen through Kenshiki-hosted Kadai or an approved model API, both receiving only the constrained context Kenshiki provides.
- Use shared Kadai or a BYOK/public endpoint as the generation layer
- The full governance pipeline runs in front of and behind that model
- Answers are evaluated before they reach a user
The Kenshiki contract
Two APIs. One contract.
Put evidence into Kura. Ask Kadai for answers bounded by it. In Workshop, the generation layer can be shared Kadai or a connected model API — but the contract is the same. SIRE scopes what evidence is admissible. The generation model acts as a renderer inside the governance pipeline, not as the authority.
- Kura defines the evidence boundary
- SIRE scopes evidence admissibility per query identity
- Kadai returns answers bounded by that evidence
- The generation model renders — it does not decide
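A minimal sketch of what that contract looks like from the caller's side. Everything here is assumed, not the real Kenshiki SDK: the `Kura`/`Kadai` class shapes, the `put`/`scoped`/`ask` methods, the identity-scoping rule, and the state labels are all stand-ins.

```python
class Kura:
    """Evidence boundary: holds what answers may be bounded by."""
    def __init__(self):
        self._store = []

    def put(self, doc: str) -> None:
        self._store.append(doc)

    def scoped(self, identity: str) -> list:
        # SIRE-style scoping would filter by caller identity here;
        # this stand-in admits everything for any identity.
        return list(self._store)

class Kadai:
    """Answer API: returns responses bounded by Kura's evidence."""
    def __init__(self, kura: Kura):
        self._kura = kura

    def ask(self, question: str, identity: str) -> dict:
        evidence = self._kura.scoped(identity)
        # The generation model would render here; it does not decide.
        answer = evidence[0] if evidence else None
        return {
            "answer": answer,
            "evidence": evidence,
            "state": "released" if evidence else "held",
        }
```

Usage follows the two-step contract directly: put evidence into Kura, then ask Kadai, with the same shape whether the generation layer is shared Kadai or a connected API.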
Who this is for
The Team Shipping AI Into Real Work
Already using public model APIs, or looking for a shared Kadai starting point, but unable to justify what those systems produce under review, audit, or challenge.
The Decision-Maker
Receives an answer only after Kenshiki has checked what supports it, what is missing, and whether it is allowed to leave the system.
Go deeper
See Workshop in action
Ask a question and watch the governance pipeline return a response with claims checked, gaps surfaced, and states assigned.
AI Incident Archive
Real cases where public models produced fluent, confident answers that turned out to be wrong.
Claim Ledger
The verification engine inside every Kenshiki response. Breaks answers into claims and records what held up.
Platform Architecture
How the full Kenshiki pipeline is structured — Kura, Kadai, Compiler, Ledger, and Boundary Gate.
Integrations
How Kenshiki plugs into AI factories, enterprise SSO, evidence systems, and GRC workflows.
Pricing
Workshop starts under $100/month. Usage-based pricing for Kura and Kadai, with BYOK/public-model support.