Private deployment. Stronger proof.
Refinery
Same governance contract. No public-model boundary.
Refinery is Kenshiki's private deployment tier. It keeps the same bounded-synthesis contract as Workshop, but the generation layer moves off the public internet. The entire answer path stays inside a private runtime: prompt compilation, retrieval, generation, claim evaluation, and output gating all happen under controlled infrastructure. In the three-plane architecture, Refinery unifies the build and orchestration planes with the control plane inside private infrastructure: no public-model boundary, no seams.
Without this: you can move AI into a private environment and still get answers no one can defend. Data residency solves where the model runs. It doesn't solve whether the output holds up.
Today
Your team moved AI into a private environment for data residency or confidentiality. The model is no longer public — but the output is still fluent prose that humans have to interpret and defend. When challenged, you can say where it ran. You still can't show why a specific claim should be trusted.
With Refinery
The request runs through Kenshiki inside a private deployment. The prompt is compiled, evidence is retrieved from governed sources, a private inference engine generates a proposal, and the Claim Ledger checks it against evidence and local telemetry before assigning an output state.
How Refinery works
Refinery runs the same bounded-synthesis pipeline as Workshop, with generation moved into the private deployment. The prompt is compiled, governed evidence is retrieved, and a private inference engine produces a proposal; the Claim Ledger then combines source checks with local model telemetry to assign the output state that decides what is allowed to leave.
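The pipeline above can be sketched as a single gated path. This is a minimal illustration only: the function names, the `Claim` shape, and the state labels are assumptions for the sketch, not Kenshiki's actual API or state names.

```python
from dataclasses import dataclass

# Hypothetical state labels; the page does not enumerate the actual output states.
VERIFIED, PARTIAL, BLOCKED = "verified", "partially_supported", "blocked"

@dataclass
class Claim:
    text: str
    supported: bool  # did this claim hold up against the retrieved evidence?

def refinery_answer(request, compile_prompt, retrieve, generate, evaluate_claims):
    """Bounded-synthesis path: every stage runs inside the private runtime,
    and nothing is released before the Claim Ledger evaluation."""
    prompt = compile_prompt(request)               # Prompt Compiler
    evidence = retrieve(prompt)                    # governed sources only
    proposal = generate(prompt, evidence)          # private inference engine
    claims = evaluate_claims(proposal, evidence)   # Claim Ledger
    if claims and all(c.supported for c in claims):
        return VERIFIED, proposal
    if any(c.supported for c in claims):
        return PARTIAL, proposal
    return BLOCKED, None                           # output gating: nothing ships
```

Each stage is passed in as a callable here only to keep the sketch self-contained; the point is the ordering, and that the gate sits between generation and release.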
What Refinery is
A private deployment of the full Kenshiki stack. Same Prompt Compiler, retrieval, Claim Ledger, and output-state contract as Workshop — but generation happens on a private inference engine instead of a public endpoint.
- Private deployment tier for production workflows
- No public model API in the critical path
- Same bounded-synthesis contract as Workshop
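The properties in the bullets above can be read as a deployment invariant: same contract, but no public model API in the critical path. The configuration keys, values, and check below are illustrative assumptions, not Kenshiki's actual schema.

```python
# Illustrative configuration only; every key name here is an assumption.
REFINERY_DEPLOYMENT = {
    "tier": "refinery",
    "contract": "bounded-synthesis",        # same contract as Workshop
    "inference_endpoint": "https://inference.internal.example",  # private runtime
    "public_model_apis": [],                # nothing public in the critical path
    "release_requires": ["claim_ledger_evaluation"],
}

def critical_path_is_private(cfg: dict) -> bool:
    """A deployment matches the Refinery shape only if no public model API
    appears anywhere in the critical path."""
    return cfg["tier"] == "refinery" and not cfg["public_model_apis"]
```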
The Kenshiki contract
Same contract. Private runtime.
Same Kura/Kadai contract as Workshop. SIRE provides portable agent identity so evidence scope travels with the deployment. The difference is that the backing inference runtime is private — no public model API in the critical path, and no release without Claim Ledger evaluation.
- Same Kura/Kadai contract as Workshop
- SIRE provides portable agent identity across deployment modes
- Generation moves into a private runtime
- Claim Ledger evaluates both source support and local model telemetry
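The last bullet above, the Claim Ledger weighing both source support and local model telemetry, can be sketched as a per-claim gate. The threshold, signal names, and return values are illustrative assumptions; the actual evaluation rule is not specified on this page.

```python
def gate_claim(source_support: float, telemetry_ok: bool,
               support_threshold: float = 0.8) -> str:
    """Hypothetical per-claim gate: a claim is released only when the evidence
    check and the local model telemetry both agree. Threshold and labels are
    illustrative, not Kenshiki's actual rule."""
    if not telemetry_ok:
        return "held"  # the private runtime flagged this generation
    return "released" if source_support >= support_threshold else "held"
```

The design point the sketch makes is that either signal alone can hold a claim back: strong source support does not override a telemetry flag, and clean telemetry does not excuse weak evidence.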
Who this is for
The Platform Team
deploying AI into private infrastructure while keeping data under control and producing repeatable proof of what the system relied on.
The Reviewer
receives an answer already classified, traceable, and fit for production — not raw model prose that still has to be defended by hand.
Go deeper
Claim Ledger
The verification engine inside every Refinery response. Breaks answers into claims and records what held up.
Platform Architecture
How the same contract runs across Workshop, Refinery, and Clean Room — and how the proof boundary changes at each tier.
Integrations
How Kenshiki plugs into AI factories, enterprise SSO, evidence systems, and GRC workflows.
Kura
The governed source layer — ingestion, provenance, chunking, and retrieval-ready evidence inside a private deployment.
Pricing
Platform fee plus usage. Managed by Kenshiki.