As courts repeatedly restore excluded AI vendors to critical infrastructure contracts, public trust in AI safety shifts from technical certification to legal contest.
The market for trustworthy AI no longer revolves around model cards, audits, and lab tests alone. Instead, major vendors build permanent legal-technical teams that can challenge every disqualification, reinterpret every safety clause, and secure provisional access to public systems while disputes unfold. Governments still speak the language of risk, but in practice they buy from the firms best able to survive courtroom combat. AI companies become quasi-public actors not because they were elected, but because the state can no longer remove them cleanly.
At 7:40 a.m. in a Seoul municipal control room, a civil engineer watches the flood-response dashboard flicker back online under a vendor that had been banned only six weeks earlier; the memo beside her explains not a model change, but the injunction that forced the vendor's return.
The same legal scrutiny that empowers dominant vendors can also expose arbitrary state action and force agencies to document their risk judgments more rigorously. In some sectors, that pressure produces better records, clearer standards, and fewer politically convenient blacklists.