As trust in AI outputs erodes, courts and regulators begin treating undocumented training history as a fatal defect, shifting accountability from what a model says to how it became what it is.
Early AI governance focused on bad outputs; over time, institutions discover that results alone reveal too little. The decisive question becomes genealogical: which data, interventions, audits, and corrections shaped this model before it produced a recommendation or statement? A new legal doctrine spreads from courts into procurement, healthcare, and finance. It improves accountability and makes hidden tampering harder, yet it also favors organizations wealthy enough to preserve immaculate training records and maintain auditable model families.
At 2:15 p.m. in a civil court in Toronto, a junior attorney scrolls through a model genealogy file instead of a witness statement, trying to show that a lending system's recommendation came after an unlogged fine-tune performed during a quarter-end sales push.
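To make the stakes concrete, here is a minimal sketch, in Python, of what such a genealogy file might look like under one plausible design: an append-only, hash-chained log of training events, where an unlogged or after-the-fact-edited fine-tune shows up as a broken chain. Everything here is hypothetical, including the `LineageEvent` and `LineageLog` names; the scenario itself does not specify any format.

```python
# Illustrative sketch of an append-only model lineage log.
# Each event hash-chains to its predecessor, so an unlogged or
# retroactively edited intervention leaves a verifiable gap.
# All names and fields here are hypothetical.
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class LineageEvent:
    kind: str        # e.g. "data_ingest", "fine_tune", "audit", "correction"
    detail: str      # human-readable description of the intervention
    timestamp: str
    prev_hash: str   # hash of the preceding event; "" for the root event
    event_hash: str = ""

    def compute_hash(self) -> str:
        payload = json.dumps([self.kind, self.detail, self.timestamp, self.prev_hash])
        return hashlib.sha256(payload.encode()).hexdigest()


class LineageLog:
    def __init__(self) -> None:
        self.events: list[LineageEvent] = []

    def append(self, kind: str, detail: str) -> LineageEvent:
        prev = self.events[-1].event_hash if self.events else ""
        event = LineageEvent(
            kind=kind,
            detail=detail,
            timestamp=datetime.now(timezone.utc).isoformat(),
            prev_hash=prev,
        )
        event.event_hash = event.compute_hash()
        self.events.append(event)
        return event

    def verify(self) -> bool:
        """True if every event hashes correctly and points at its true predecessor."""
        prev = ""
        for event in self.events:
            if event.prev_hash != prev or event.event_hash != event.compute_hash():
                return False
            prev = event.event_hash
        return True


log = LineageLog()
log.append("data_ingest", "Q1 loan-application corpus, v3")
log.append("fine_tune", "quarterly recalibration, reviewed")
assert log.verify()

# Simulate a record altered after the fact to hide an intervention.
log.events[1].detail = "quarter-end sales-push tune (never filed)"
assert not log.verify()  # the broken chain is exactly what a court would flag
```

Hash chaining is the design choice doing the work in this sketch: because each entry commits to its predecessor, a record that is edited or omitted after the fact cannot be reconciled with the rest of the log.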
Proponents say the doctrine finally aligns AI accountability with the realities of machine learning, where silent tuning decisions matter more than polished demos. Critics warn that lineage compliance can become a paperwork moat, one that protects large incumbents and turns legal recourse into a documentation arms race.