Repeated failures of surveillance technology push society toward accepting pre-emptive detention based on algorithmic risk scores.
A decade of high-profile crimes committed by individuals under electronic surveillance — cut ankle bracelets, spoofed GPS signals, hacked monitoring apps — erodes public faith in technology-mediated supervision. Each failure generates media firestorms and political pressure that ratchets in only one direction: toward containment. Civil liberties organizations fight each escalation, but polling consistently shows supermajority support for preventive measures. The system that emerges is not a sudden authoritarian seizure but a democratic one: citizens vote, repeatedly and enthusiastically, to cage people who have not yet committed crimes. The algorithm becomes the judge that no elected official has to be.
Lee Jun-seo, a 28-year-old man in Incheon who was convicted of assault at age 19 and served his full sentence, opens his front door at 6 AM on a Wednesday in April 2032 to find two officers holding a tablet displaying his risk score: 0.83. They explain, politely, that he is being transferred to a 'protective residential facility' for an indefinite period. He asks what he did. They tell him he hasn't done anything yet.
Predictive detention systems face a fundamental statistical problem: even a highly accurate algorithm produces enormous numbers of false positives when applied to rare events like violent crime. A system with 95% sensitivity and 95% specificity, screening a population where serious violence has a base rate of roughly 1 in 10,000, would flag about 500 innocent people for every genuine threat it caught, and tens of thousands of innocents in total for every million people screened. The political coalition supporting such a system may fracture once its members or their families begin appearing in the detained population.
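The base-rate arithmetic behind this claim can be sketched in a few lines. The numbers below are illustrative assumptions, not figures from the scenario: 95% sensitivity, 95% specificity, and a base rate of 1 serious offender per 10,000 people.

```python
# Base-rate arithmetic for a hypothetical predictive detention system.
# All parameters are illustrative assumptions, not empirical values.

def detention_outcomes(population, base_rate, sensitivity, specificity):
    """Return (true_positives, false_positives) for one screening pass."""
    offenders = population * base_rate
    innocents = population - offenders
    true_positives = offenders * sensitivity          # genuine threats flagged
    false_positives = innocents * (1 - specificity)   # innocents flagged
    return true_positives, false_positives

tp, fp = detention_outcomes(
    population=1_000_000,
    base_rate=1 / 10_000,   # 1 offender per 10,000 people (assumed)
    sensitivity=0.95,
    specificity=0.95,
)
ppv = tp / (tp + fp)  # chance a flagged person is actually a threat
print(f"genuine threats flagged: {tp:.0f}")        # 95
print(f"innocents flagged:       {fp:.0f}")        # 49995
print(f"innocents per threat:    {fp / tp:.0f}")   # ~526
print(f"positive predictive value: {ppv:.4f}")     # ~0.0019
```

Even with these generous accuracy assumptions, fewer than 1 in 500 people flagged is an actual threat; the false-positive count scales with the innocent population, which dwarfs the offender population at any realistic base rate.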