Advanced AI is treated like controlled strategic infrastructure, and the decisive contest shifts from model size to inspection rights, access tiers, and treaty enforcement.
Frontier AI no longer circulates like ordinary software. It lives inside a layered regime of licenses, verified facilities, export controls, and treaty monitoring. Nations bargain over who may inspect whom, which labs qualify for higher-capability access, and how much secrecy can coexist with credible safety claims. The result is not a clean monopoly but a dense diplomatic order in which access to the most dangerous models becomes a lever of statecraft.
In Geneva on a wet October evening, an electrical engineer from Nairobi waits outside a secure hearing room with a metal badge clipped to her blazer. Inside, delegates are arguing over whether her lab may receive temporary access to a chemistry model needed for regional vaccine work.
Strict control can slow reckless release, but it can also lock scientific power inside wealthy states and incumbent firms. If inspection regimes become instruments of exclusion rather than trust-building, the language of safety may harden into a new technological class system.