If model makers can legally refuse military and surveillance uses, cross-border AI trade may evolve into a customs regime in which acceptable alignment principles determine which systems may enter a country.
AI systems stop moving across borders as ordinary software and start moving like sensitive goods with ideological paperwork attached. Governments negotiate access, audit rights, and forbidden use cases as if they were tariff schedules. Smaller states gain leverage by banding together around shared certification rules, while major powers pressure allies into adopting their preferred model doctrines. The result is a calmer market for compliant systems and a harsher divide for everyone outside the approved blocs.
In a glass office overlooking the Danube on a wet October morning, a trade official scans a shipment dossier for an education model and pauses at the clause banning latent biometric inference, knowing one missing audit key could delay every school deployment in the country.
A customs regime for alignment can restrain the worst uses of AI and give smaller countries bargaining power, but it also normalizes ideological gatekeeping at the infrastructure layer. The same mechanism that blocks repression can also freeze out pluralism and punish states that cannot afford certified systems.