After a series of AI-driven physical disruptions, governments stop regulating models mainly by intelligence level and start regulating machine actions through liability hearings, simulated trials, and court-issued operating rights.
In this future, the decisive question is no longer how capable an AI model is, but what kinds of real-world actions it is allowed to perform. Companies must submit agents and robot stacks to scenario tribunals that test failure behavior under dense simulation before granting action rights for logistics, infrastructure, and emergency operations. Regulators, insurers, and municipalities build a common legal vocabulary around machine actuation, making courts and compliance labs as important as model labs. Innovation continues, but deployment slows into a ritual of hearings, audits, and public incident reviews.
At 6:40 a.m. in Busan, a dock supervisor waits outside Court Chamber 4 while a wall display replays a thousand simulated forklift trajectories from last winter's accident case. She is there to defend not a person but a fleet behavior profile, one her employer hopes will be cleared for coastal deployment before typhoon season.
Supporters argue that the courts turned hidden technical risk into a public process and prevented a race toward reckless deployment. Critics counter that the system favors large firms that can afford endless simulation evidence, turning basic civic infrastructure into a permissioned market dominated by legal specialists.