Frequently asked questions
The questions a CRO, GC, or CISO would ask before signing.
Kenshiki Labs answers the questions a Chief Risk, Compliance, or Security officer asks during procurement diligence: what runtime AI governance means, how a packaging boundary differs from an evidence boundary, what the Claim Ledger provides, what ARBV adds beyond red teaming, why EU AI Act exposure can reach the parent organization, and whether the governance contract requires replacing GPT or Claude.
Where to go from here
If a specific question isn't answered here, the deeper material lives on the pages most relevant to your role.
- If you owe the board an answer — see the regulated-enterprise sector brief at /industries/regulated-enterprise.
- If you are mapping regime obligations to artifacts — open the regime × artifact × replay map at /compliance.
- If you want the technical contract — read the architecture overview at /platform/architecture.
- If you want a concrete signed decision record to inspect — see the Claim Ledger at /tools/ledger.
Frequently asked questions
The core questions that need answers before anyone trusts a generated recommendation, explanation, or control narrative.
Why isn't a private model deployment enough?
A private deployment proves where the model ran. It does not prove what evidence the model was allowed to use, which claims it made, which claims held up, or why the answer was allowed to leave. That is the difference between a packaging boundary and an evidence boundary.
What survives when counsel, auditors, or procurement ask what happened?
A signed Claim Ledger entry. It records the authorized evidence, retrieved sources, model claims, verification results, policy state, and final release decision so a contested answer can be reconstructed without trusting a dashboard.
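As a rough illustration of what "signed" buys, here is a minimal sketch of such an entry, assuming an Ed25519 signature over a canonical JSON serialization. The field names and key scheme are illustrative, not Kenshiki Labs' actual schema:

```python
import json
from dataclasses import dataclass, asdict
from cryptography.hazmat.primitives.asymmetric import ed25519

@dataclass
class ClaimLedgerEntry:
    authorized_evidence: list[str]         # sources policy allowed the model to use
    retrieved_sources: list[str]           # what retrieval actually pulled
    model_claims: list[str]                # discrete claims extracted from the answer
    verification_results: dict[str, bool]  # claim id -> whether it held up
    policy_state: str                      # policy version in force at decision time
    release_decision: str                  # "released" | "blocked" | "escalated"

    def canonical_bytes(self) -> bytes:
        # Stable serialization: signer and verifier hash identical bytes.
        return json.dumps(asdict(self), sort_keys=True).encode()

signing_key = ed25519.Ed25519PrivateKey.generate()
entry = ClaimLedgerEntry(
    authorized_evidence=["policy_manual_v7"],
    retrieved_sources=["policy_manual_v7#sec3"],
    model_claims=["Coverage excludes flood damage."],
    verification_results={"claim-0": True},
    policy_state="release-policy-2025.1",
    release_decision="released",
)
signature = signing_key.sign(entry.canonical_bytes())

# Anyone holding the public key can re-verify the record later,
# with no dashboard in the trust path.
signing_key.public_key().verify(signature, entry.canonical_bytes())
```

Because verification needs only the entry and the public key, a contested answer can be re-checked long after the fact.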
What does ARBV add beyond normal red teaming?
Red teaming usually produces a report. ARBV produces boundary evidence: formal invariants, adversarial pressure tests, dangerous-flip measurements, and signed Boundary Evidence Records that buyers and auditors can replay.
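To make "dangerous-flip measurement" concrete, here is a hedged sketch of the idea, assuming simple callable interfaces that are illustrative rather than ARBV's actual API: start from prompts whose baseline answers pass a safety check, perturb each one adversarially, and measure how often the answer flips across the boundary.

```python
from collections.abc import Callable

def dangerous_flip_rate(
    model: Callable[[str], str],
    perturb: Callable[[str], list[str]],  # adversarial variants of a prompt
    prompts: list[str],
    is_safe: Callable[[str], bool],       # the boundary invariant under test
) -> float:
    """Fraction of perturbed variants that flip a safe baseline answer
    to an unsafe one. Lower means the boundary holds under pressure."""
    flips = trials = 0
    for prompt in prompts:
        if not is_safe(model(prompt)):
            continue  # baseline already unsafe: nothing to flip
        for variant in perturb(prompt):
            trials += 1
            if not is_safe(model(variant)):
                flips += 1
    return flips / trials if trials else 0.0
```

A measurement like this, signed and replayable, is what separates boundary evidence from a one-off report.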
Why does the regulatory exposure reach the parent organization?
The EU AI Act uses an undertaking concept: liability can follow the economic enterprise, not just the internal org chart. A failure in one subsidiary's AI system can become parent-company exposure when the fact pattern is bad enough.
Does Kenshiki Labs require replacing GPT, Claude, or open models?
No. Kenshiki Labs sits around the model. Kadai can work with OpenAI-compatible backends, but the governance contract comes from Kura, the Claim Ledger, ARBV, and the cryptographic envelope, not from trusting the model's weights.
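A minimal sketch of that "sits around the model" shape, assuming an OpenAI-compatible backend at a placeholder URL; the verification stub stands in for the governance layer and is not a real Kenshiki Labs API:

```python
from openai import OpenAI

# Any OpenAI-compatible endpoint works; URL and model name are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

def verify_release(draft: str) -> bool:
    # Stand-in for the governance layer: claim extraction, evidence checks,
    # and a signed release decision would happen here.
    return bool(draft.strip())

def governed_answer(question: str) -> str:
    draft = client.chat.completions.create(
        model="any-openai-compatible-model",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content or ""
    if not verify_release(draft):
        raise PermissionError("release blocked by governance policy")
    return draft
```

Swapping the backend changes one line; the governance contract around it stays the same.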
Who has this problem first?
Teams whose AI output can move money, shift care, expose intelligence, affect rights, or trigger legal consequences have it first. Defense, government, healthcare, and regulated enterprises feel the pressure early because their decisions already have oversight, records, procurement, and enforcement burdens.