Kenshiki Labs
For General Counsel

Your AI is now a discoverable system.

Regulators are slow. Courts are not. The first major AI reckonings will arrive case-by-case, on the records the defendant can produce. This page is about the records you'll need before the docket is filed.

The shift

Regulators are slow. Courts are not.

The historical innovation-to-regulation cycle assumed a lag between commercial deployment and the rules written to close the resulting harm gap. AI broke that assumption. Capability now moves at semiconductor speed; democratic rulemaking moves at democratic speed. The EU AI Act took four years to draft and another two before its high-risk obligations bind in 2026. By the time those rules apply, capability will be eight to twelve generations beyond what was being regulated.

The implication for counsel: the harm-gap closer in the next decade will not be regulators. It will be courts. Tort moves case-by-case, on facts the parties can produce, with no four-year drafting cycle. The opioids reckoning was a tort story before it was a regulatory story. Asbestos was tort-first too. AI's first major reckonings will follow the same pattern, and the defendants who win will be the ones who already have the records to produce.

The canary

Specific cases. Specific timelines. Already moving.

HR-tech / discrimination

Mobley v. Workday

Class action testing whether the vendor of an AI screening tool can be held liable for employment discrimination as the AI provider, not just the employer. If class certification or a material ruling lands, every HR-tech vendor's liability theory is on the docket.

Banking / Fair Lending

OCC exam letters

OCC examiners have started requesting AI-decision data lineage in Fair Lending exam letters. The first published enforcement action that includes data-lineage findings becomes the exam-prep template every other regulated bank works back from.

Already binding

CFPB Circular 2023-03

Already establishes that AI-driven credit denials are subject to Fair Lending law. "The model decided" is not a defense for adverse-action notice failures. The circular interprets law already in force (ECOA and Regulation B), not future regulation: the operator's exposure is current.

The records

What survives a discovery request, and what doesn't.

Compliance audits accept process documentation. Discovery does not. The records that hold up under hostile cross-examination meet a meaningfully higher bar, and that bar is set by existing case law and federal rules, not by AI-specific statutes that haven't been written yet.

  • Authentication. The record's identity has to be provable without trusting the defendant's word. FRE 901 and 902(13)–(14) already define what authenticated and self-authenticating electronically stored evidence looks like.

  • Contemporaneity. The record has to be created at decision time, not retrofitted when a question lands. After-the-fact reconstruction is the failure mode courts already treat as suspect under spoliation doctrine in regulated sectors.

  • Chain of custody. The record has to be traceable from generation through retrieval, with no unaccounted-for state changes in between. ISO/IEC 27037 sets the international standard for digital-evidence handling.

  • Sector retention duty. Regulated industries — banking, healthcare, broker-dealer, telecom — already carry record-retention duties that AI-decision logging extends naturally. Adverse inference for missing AI logs in those sectors is a small step from existing spoliation case law.
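Taken together, the first three properties describe a contemporaneous, tamper-evident log. As an illustration only, here is a minimal Python sketch of that idea: every name is hypothetical, and a production system would add cryptographic signing, WORM storage, and retention controls on top.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    """Illustrative append-only, hash-chained log of AI decisions.

    Each entry is written at decision time (contemporaneity), carries the
    hash of the previous entry (chain of custody), and can be re-verified
    end to end without trusting the operator's word (authentication).
    """

    GENESIS = "0" * 64  # fixed anchor hash for the first entry

    def __init__(self):
        self.entries = []       # list of (entry_dict, digest) pairs
        self.head = self.GENESIS

    def record(self, decision: dict) -> str:
        """Append one decision, timestamped now, linked to the chain head."""
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),  # decision-time stamp
            "decision": decision,
            "prev": self.head,  # links this entry to everything before it
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        self.head = hashlib.sha256(payload).hexdigest()
        self.entries.append((entry, self.head))
        return self.head

    def verify(self) -> bool:
        """Recompute the whole chain; any retrofit or deletion breaks it."""
        prev = self.GENESIS
        for entry, digest in self.entries:
            if entry["prev"] != prev:
                return False  # an entry was removed or reordered
            payload = json.dumps(entry, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != digest:
                return False  # an entry was edited after the fact
            prev = digest
        return True
```

The point of the sketch is the shape, not the code: because each record embeds the hash of its predecessor, rewriting any single decision after the fact invalidates every record that follows it, which is exactly the property that makes after-the-fact reconstruction detectable.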

The trap

"We can't tell" stops being a strategy when courts treat absence as recklessness.

The current AI-deployment landscape rewards opacity in the short term. If you can't tell what your AI did, you can't be held liable for it, or so the calculation goes. The rational deployer keeps reconstruction expensive and uncertain, because the record that enables reconstruction is the same artifact that creates accountability.

That's a Nash trap, not a competitive advantage. It works until it stops working catastrophically, and then the entire industry's standards reset reactively and retroactively. The asbestos, tobacco, and opioids precedents are consistent: deployers benefit from opacity for decades, then every company gets measured against records it didn't keep. The companies that documented their handling had a defense. The ones that didn't were pulverized, not by the harm itself, but by the absence of evidence that they had acted responsibly while the harm unfolded.

The right thing is the only thing that survives the audit nobody told you was coming. That audit, for AI, is discovery — and the records that survive it are the only ones the operator controls in advance.