Kenshiki Labs

AI Control Plane

What is a governance control plane?

A governance control plane is the unified system that defines, enforces, and audits AI policy across the entire inference pipeline — separating what you want to happen (policy intent) from how it gets executed (application logic). It makes governance architectural, not aspirational.

Why this matters

Most organizations treat AI governance as an afterthought. They build the RAG system, deploy the model, and then bolt on compliance monitoring. By then, the damage is done — data has leaked, policies have been violated, and the only question left is “How much did we miss?”

A control plane means governance is structural, not bolt-on. It enforces policy at every stage: evidence access, retrieval, inference, and output emission. No workarounds. No exceptions. No “we’ll audit it later.”

How it works

The governance control plane operates across four layers:

  1. Data plane: Evidence ingestion, provenance tagging, tenant isolation (RLS).
  2. Retrieval plane: REBAC-enforced retrieval, evidence scoping, coverage tracking.
  3. Inference plane: Prompt compilation, model invocation, claim verification.
  4. Output plane: Gate decisions, output state assignment, Claim Ledger recording.

Policy flows through all four layers. A single policy (“data scientists can’t see cardholder data”) automatically applies to what evidence they can retrieve, what the model sees, and what output states they’re allowed to emit.
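To make the "one policy, many layers" idea concrete, here is a minimal sketch. All names (`Policy`, `retrieval_filter`, `output_gate`) are illustrative assumptions, not the Kenshiki Labs API: a single declared policy is consulted once at the retrieval plane and again at the output plane, instead of being re-implemented in each component.

```python
from dataclasses import dataclass

# Illustrative sketch, not the real API: one policy object,
# consulted by two different planes.
@dataclass(frozen=True)
class Policy:
    role: str
    denied_tags: frozenset  # evidence categories this role may never see

PCI = Policy(role="data_scientist", denied_tags=frozenset({"cardholder_data"}))

def retrieval_filter(policy, documents):
    """Retrieval plane: drop denied evidence before the model ever sees it."""
    return [d for d in documents if not (set(d["tags"]) & policy.denied_tags)]

def output_gate(policy, evidence_tags_used):
    """Output plane: block emission if the answer relied on denied evidence."""
    return "BLOCKED" if set(evidence_tags_used) & policy.denied_tags else "PASS"

docs = [{"id": "d1", "tags": {"cardholder_data"}},
        {"id": "d2", "tags": {"public"}}]
visible = retrieval_filter(PCI, docs)              # only d2 survives
decision = output_gate(PCI, {"cardholder_data"})   # BLOCKED
```

The point of the sketch is the shape: the policy is declared once and every plane asks it the same question, so there is no seam where a layer can drift out of sync with the rule.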

How Kenshiki Labs implements this

Kenshiki Labs is a governance control plane. It provides:

  1. SIRE (evidence identity): Cryptographically tagged sources of truth.
  2. Kura (retrieval): REBAC-enforced access to governed evidence.
  3. Prompt Compiler: Converts policy into prompt structure.
  4. Boundary Gate: Verifies output against policy before emission.
  5. Claim Ledger: Audit trail of every decision.

Deploy any model (GPT, Claude, Llama, custom). Kenshiki Labs sits in front of it, enforcing policy at every stage.
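The five components above can be pictured as stages of one pipeline with the model as a swappable callable. This is a hypothetical sketch under that assumption; the function names (`sire_tag`, `kura_retrieve`, etc.) are illustrative, not the actual product API.

```python
# Illustrative pipeline: SIRE -> Kura -> Prompt Compiler -> model ->
# Boundary Gate -> Claim Ledger. The model is just a callable, so it
# can be swapped without touching the governance stages.
def sire_tag(evidence):
    """SIRE: attach a provenance tag to each evidence item."""
    return [{"text": e, "source_id": f"sire:{i}"} for i, e in enumerate(evidence)]

def kura_retrieve(tagged, allowed_sources):
    """Kura: return only evidence the caller's access grants allow."""
    return [e for e in tagged if e["source_id"] in allowed_sources]

def compile_prompt(query, evidence):
    """Prompt Compiler: bind the model to governed evidence only."""
    cites = "\n".join(f'[{e["source_id"]}] {e["text"]}' for e in evidence)
    return f"Answer using ONLY the evidence below.\n{cites}\n\nQ: {query}"

def boundary_gate(output, evidence):
    """Boundary Gate: admit the output only if it cites governed evidence."""
    return "PASS" if any(e["source_id"] in output for e in evidence) else "BLOCKED"

def run(query, model, evidence, allowed_sources, ledger):
    scoped = kura_retrieve(sire_tag(evidence), allowed_sources)
    output = model(compile_prompt(query, scoped))
    state = boundary_gate(output, scoped)
    ledger.append({"query": query, "state": state})  # Claim Ledger entry
    return output if state == "PASS" else None

# Any model is a callable; here, a stub that cites its one allowed source.
stub_model = lambda prompt: "Per [sire:1], the policy applies."
ledger = []
result = run("Does the policy apply?", stub_model,
             ["secret doc", "public doc"], {"sire:1"}, ledger)
```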

Frequently asked questions

How is a governance control plane different from a policy engine?

A policy engine evaluates policy rules at specific decision points. A governance control plane is the entire system: policy definition, evidence scoping, retrieval gating, inference bounding, output admission, and audit trail. It's not one component — it's the architecture that makes policy operational.

Can I add a control plane to my existing AI system?

Yes, but it requires integration. The control plane needs to sit in your inference orchestration pipeline: between request and retrieval, between retrieval and prompting, between inference and emission. You can wrap an existing stack, but the earlier you integrate, the more effectively governance works.
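One way to picture retrofitting is a wrapper that inserts a check at each of the three seams just named. This is a sketch under assumed names (`ControlPlaneWrapper`, `policy_check`), not a description of the real integration surface.

```python
# Illustrative sketch: wrap an existing retrieve/infer stack with
# control-plane checks at the three seams in the pipeline.
class ControlPlaneWrapper:
    def __init__(self, retrieve, infer, policy_check):
        self.retrieve = retrieve          # existing retrieval function
        self.infer = infer                # existing model call
        self.policy_check = policy_check  # True if the content is allowed

    def __call__(self, request):
        # Seam 1: request -> retrieval (may this request proceed at all?)
        if not self.policy_check(request):
            return {"state": "BLOCKED", "output": None}
        docs = self.retrieve(request)
        # Seam 2: retrieval -> prompting (drop denied evidence)
        docs = [d for d in docs if self.policy_check(d)]
        output = self.infer(request, docs)
        # Seam 3: inference -> emission (gate the final output)
        state = "PASS" if self.policy_check(output) else "BLOCKED"
        return {"state": state, "output": output if state == "PASS" else None}

# Wrapping a toy stack: anything containing "secret" is denied.
wrapped = ControlPlaneWrapper(
    retrieve=lambda q: ["public fact", "secret fact"],
    infer=lambda q, docs: f"answer from {len(docs)} doc(s)",
    policy_check=lambda text: "secret" not in text,
)
result = wrapped("what is the fact?")
```

Note why earlier integration works better: a wrapper bolted on only at seam 3 can block leaks, but only seams 1 and 2 keep denied evidence out of the prompt in the first place.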

Does a control plane slow down AI inference?

Minimally. Policy evaluation and boundary checking add single-digit milliseconds. For most applications, that's negligible compared to model inference (100+ ms). The exchange is worthwhile: a few milliseconds of latency buys enforceable governance.

What happens if the control plane and the model disagree?

The control plane wins. If the model generates a claim that contradicts policy or evidence, the output gate blocks it or downgrades the output state to PARTIAL or REQUIRES_SPEC. The model has no veto — policy is structural, not advisory.
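The decision rule can be sketched as a pure function over verified claims. The `PARTIAL` and `REQUIRES_SPEC` states come from the text above; the thresholds and the `PASS`/`BLOCKED` names here are assumptions for illustration.

```python
# Illustrative sketch of "the control plane wins": the gate inspects
# each claim's verification result and assigns the output state;
# the model gets no say in the decision.
def gate_decision(claims):
    """claims: list of (claim_text, supported_by_evidence: bool) pairs."""
    if not claims:
        return "REQUIRES_SPEC"        # nothing verifiable to admit
    supported = sum(1 for _, ok in claims if ok)
    if supported == len(claims):
        return "PASS"                 # every claim verified against evidence
    if supported > 0:
        return "PARTIAL"              # downgrade rather than trust the model
    return "BLOCKED"                  # output contradicts evidence entirely
```

Because the state is computed from verification results alone, a fluent but unsupported answer is downgraded exactly like a clumsy one.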

Can I use a governance control plane with multiple models?

Yes. The control plane is model-agnostic. Evidence scoping, policy gates, and audit trails work the same whether you're using GPT-4, Claude, Llama, or custom models. You can even swap models without changing the governance layer.

Who defines the policies in a control plane?

Policies are typically defined by compliance, legal, or product teams — whoever owns the governance requirements. The control plane provides the tooling to declare policies (via schemas, configuration, or code) and enforces them at runtime. Policies change, the control plane doesn't.
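"Policies as data, enforcement as runtime" might look like the following sketch. The schema fields (`roles`, `deny_tags`) are assumptions, not the product's actual configuration format.

```python
# Illustrative sketch: a compliance team declares policy as data;
# the control plane validates it once and enforces it on every request.
POLICY_CONFIG = {
    "name": "pci-scope",
    "roles": {
        "data_scientist": {"deny_tags": ["cardholder_data"]},
        "auditor": {"deny_tags": []},
    },
}

def validate(config):
    """Reject malformed policy before it ever reaches runtime."""
    assert isinstance(config.get("name"), str) and config["name"]
    for role, rules in config["roles"].items():
        assert isinstance(rules.get("deny_tags"), list), f"bad rules for {role}"
    return config

def is_allowed(config, role, tag):
    """Runtime check: may this role see evidence carrying this tag?"""
    return tag not in config["roles"][role]["deny_tags"]

policy = validate(POLICY_CONFIG)
```

The separation is the point: changing who may see what edits `POLICY_CONFIG`, while `is_allowed` and the enforcement code never change.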

How do I know the control plane is actually enforcing policy?

The Claim Ledger proves it. Every inference produces a ledger entry that records: what policy gates fired, what they checked, whether they passed, and what the final output state was. You don't have to trust — you can verify.
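A ledger entry shaped after the fields listed above might look like this sketch; the record and field names are illustrative, not the actual Claim Ledger schema.

```python
from dataclasses import dataclass

# Illustrative sketch of a ledger entry: which gates fired, what they
# checked, whether they passed, and the final output state.
@dataclass
class GateRecord:
    gate: str        # e.g. "evidence_scope", "claim_verification"
    checked: str     # what the gate evaluated
    passed: bool

@dataclass
class LedgerEntry:
    request_id: str
    gates: list
    output_state: str

def verify_entry(entry):
    """Independent audit: the recorded state must follow from the gate results."""
    all_passed = all(g.passed for g in entry.gates)
    return (entry.output_state == "PASS") == all_passed

entry = LedgerEntry(
    request_id="req-001",
    gates=[GateRecord("evidence_scope", "tenant isolation", True),
           GateRecord("claim_verification", "all claims cited", True)],
    output_state="PASS",
)
```

`verify_entry` is the "don't trust, verify" step: an auditor can recompute the output state from the recorded gate results and flag any entry where they disagree.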

Can a governance control plane prevent all AI harms?

No. It can prevent certain categories of harm: data leakage (via evidence scoping), policy violations (via gates), and unaudited decisions (via the ledger). But it won't catch hallucinations that still appear consistent with the permitted evidence, or emergent model behaviors. It's a necessary layer, not a complete solution.

Related concepts