
The Audit Drought

Society becomes highly confident in AI safety just as independent capacity to verify that safety evaporates.

Turning Point: National research agencies redirect most safety funding into company-hosted compute partnerships, leaving universities and public labs unable to test frontier systems without corporate approval.

Why It Starts

The rhetoric of responsible development sounds reassuring: the biggest firms say they alone have the scale, talent, and discipline to build safely. Over time, that claim becomes a self-fulfilling institutional design. Public watchdogs lose access to compute, benchmark data, and model internals, while independent researchers drift into corporate fellowships that limit what they can disclose. Major failures are still discovered, but usually after deployment, by users encountering them in schools, hospitals, courts, and call centers. Safety does not disappear; it becomes private, delayed, and unverifiable from the outside.

How It Branches

  1. Large firms persuade governments that concentrating safety research around frontier developers is the fastest route to responsible progress.
  2. Public grants increasingly require access to proprietary compute environments and datasets controlled by the same companies under study.
  3. Independent labs shrink or pivot to consultancy work, reducing the number of actors able to perform adversarial testing without permission.
  4. Serious alignment failures surface mainly through downstream incidents, prompting reactive fixes instead of credible pre-release evaluation.

What People Feel

On a rainy Tuesday in Boston, a doctoral student sits in a nearly empty university lab, refreshing a rejected grant portal while reading a hospital incident report that reveals the kind of model behavior her team can no longer afford to test.

The Other Side

Centralized safety programs can standardize methods, speed incident response, and prevent the reckless open release of dangerous capabilities. Joint labs between firms and public institutions may also produce genuine breakthroughs. The problem is structural dependence: when every meaningful test requires corporate access, public assurance turns into a ceremony of trust rather than a practice of verification.