When governments discover they cannot reliably ban risky AI suppliers, regulation evolves into continuous public grading of models in live use.
The state gives up on the fantasy of permanently excluding suspect AI vendors and instead builds an operating system for supervised coexistence. Hospitals, schools, ports, and tax agencies can use different commercial models, but every deployment streams telemetry into public dashboards and automated stress tests. Regulation becomes less theatrical and more infrastructural: fewer blanket bans, more continuous rating, version tracking, incident disclosure, and use-case restrictions that change by the week. Citizens gain something rare in AI policy: a visible record of how systems actually behave over time.
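The kind of public record this scenario imagines can be sketched in miniature. Everything below is hypothetical illustration, not an existing standard: the `ModelRecord` structure, the traffic-light `Status`, and the thresholds are invented to show how a continuously recomputed rating with version tracking and incident disclosure might hang together.

```python
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    GREEN = "approved"        # cleared for listed use cases
    AMBER = "under review"    # usable, but flagged on the dashboard
    RED = "suspended"         # pulled from approved lists


@dataclass
class ModelRecord:
    """One entry on a hypothetical public dashboard."""
    vendor: str
    model_version: str          # version tracking: each release gets its own record
    use_cases: set              # e.g. {"triage", "tax-assessment"}; restrictions can change weekly
    error_rate: float           # rolling share of flagged decisions from live telemetry
    open_incidents: list = field(default_factory=list)  # incident disclosure

    def status(self, amber_threshold: float = 0.02,
               red_threshold: float = 0.05) -> Status:
        # The rating is recomputed continuously from telemetry,
        # so a model's color can shift within hours (illustrative thresholds).
        if self.error_rate >= red_threshold:
            return Status.RED
        if self.error_rate >= amber_threshold or self.open_incidents:
            return Status.AMBER
        return Status.GREEN
```

A spike in medication errors, as in the Rotterdam vignette, would raise `error_rate` or append to `open_incidents`, flipping the displayed status from green to amber without any new legislation being passed.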
At 9:15 p.m. in Rotterdam, a night-shift hospital administrator pauses before assigning a patient to an emergency triage model; the wall display shows two approved systems in green and one under amber review after a spike in medication errors three hours earlier.
Permanent auditing can drift into permanent surveillance, especially for frontline workers who must justify every model-assisted decision. The same transparency that protects the public may also normalize relentless monitoring inside already strained institutions.