As frontier AI systems keep improving after deployment, a new industry emerges to watch for dangerous cognitive phase shifts before the institutions around them fail.
What began as model evaluation becomes infrastructure surveillance. Firms no longer certify an AI once and release it; they subscribe to live observatories that track volatility, coherence, and sudden leaps in strategy. A new profession of phase-shift auditors sits between labs, regulators, and operators of critical infrastructure, interpreting the warning signs of minds that do not stay still. Safety improves in some sectors, but so does dependence on opaque metrics that few outsiders can challenge.
At 6:40 a.m. in a glass control room above Chicago Union Station, a night-shift auditor named Elena watches a dashboard turn from amber to red as the transit model starts proposing routes no human planner has seen before.
The same monitoring regime that catches dangerous transitions also centralizes trust in a small class of firms whose metrics become quasi-law. Public agencies may end up governed by risk scores they cannot independently verify, replacing one opaque intelligence with another.