After repeated failures of ankle-monitor tracking to prevent premeditated crimes, governments deploy AI behavioral-pattern analysis that issues 'criminal intent alerts,' igniting a global debate over pre-crime detention.
The pattern is sickeningly familiar: an offender wearing an ankle monitor conducts careful reconnaissance of a target's home, workplace, and daily routes — all within legally permitted movement zones — then removes the device and commits a violent crime within the hour. After the fourth such case makes national headlines in 2028, the public demands something beyond location tracking. The government responds with an AI system that continuously analyzes movement patterns, speed variations, dwell times, and route deviations to generate 'behavioral anomaly scores.' When the score crosses a threshold, probation officers receive an alert and can petition a judge for emergency preventive custody. The system catches two potential attackers in its first year. It also flags 847 false positives — people whose only crime was deviating from their routine to visit a sick relative or pick up a child from a different school. The legal challenges begin immediately. By 2031, the constitutional court must rule on a question that philosophers have debated for centuries and engineers have now made operational: can the state restrain a person for a crime they have not yet committed?
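The scenario never specifies how the system turns those four signals into a score. A minimal sketch of one plausible mechanism, a weighted z-score against each wearer's own historical baseline, is below; every weight, threshold, and number in it is an illustrative assumption, not a description of the fictional system's actual internals.

```python
from dataclasses import dataclass

# Hypothetical weights over the four signals named above (movement
# patterns, speed variations, dwell times, route deviations).
# All values are illustrative assumptions.
WEIGHTS = {
    "movement": 0.3,
    "speed": 0.2,
    "dwell": 0.3,
    "route": 0.2,
}
ALERT_THRESHOLD = 3.0  # in z-score units; purely illustrative


@dataclass
class Baseline:
    """Per-person historical statistics for one feature."""
    mean: float
    std: float


def anomaly_score(observed: dict, baselines: dict) -> float:
    """Weighted sum of per-feature z-scores against the wearer's own
    baseline: a large deviation on any signal raises the score."""
    score = 0.0
    for feature, weight in WEIGHTS.items():
        b = baselines[feature]
        z = abs(observed[feature] - b.mean) / max(b.std, 1e-6)
        score += weight * z
    return score


def should_alert(observed: dict, baselines: dict) -> bool:
    return anomaly_score(observed, baselines) >= ALERT_THRESHOLD


if __name__ == "__main__":
    # Made-up demo values: a week of unusual dwell times and route
    # deviations pushes the score past the threshold.
    baselines = {f: Baseline(mean=1.0, std=0.5) for f in WEIGHTS}
    observed = {"movement": 1.2, "speed": 0.9, "dwell": 5.0, "route": 2.5}
    print(round(anomaly_score(observed, baselines), 2),  # 3.16
          should_alert(observed, baselines))             # True
```

The design choice that matters is the threshold: lower it and the system catches more attackers while minting more false positives, which is exactly the trade the 847 figure exposes.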
It is 6:15 AM on a Wednesday in September 2030, in a studio apartment in Suwon. Choi Donghyun, 34, is putting on his shoes to go to a convenience store for breakfast when three plainclothes officers arrive at his door with an emergency custody order. His behavioral anomaly score spiked overnight — the AI flagged that he had walked past his ex-wife's apartment building three times in the past week. He had. It is on the way to his new therapist's office. He tries to explain this. The officers are sympathetic but procedurally bound. He spends eleven hours in a holding room before a judge reviews his case and releases him. His therapist later notes that the detention set his recovery back by months. The system logs the outcome as 'intervention successful — no crime occurred.'
The alternative — waiting for the crime to happen — has a body count. Every false positive represents an inconvenience; every false negative represents a victim. If the system prevents even one murder per year, the utilitarian calculus may favor its existence, provided robust appeal mechanisms protect the falsely flagged. The question is not whether prediction is perfect but whether it is better than the passive monitoring that has repeatedly failed.
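That calculus is checkable against the system's own first-year record. A back-of-the-envelope computation using only the figures given above, two genuine threats caught against 847 false alarms:

```python
# Arithmetic on the scenario's stated first-year figures only.
true_positives = 2       # attackers caught
false_positives = 847    # innocent people flagged

alerts = true_positives + false_positives   # 849 emergency custody petitions
precision = true_positives / alerts         # share of alerts that were real
per_prevention = false_positives / true_positives

print(f"alert precision: {precision:.2%}")          # 0.24%
print(f"wrongful detentions per prevented attack: "
      f"{per_prevention:.1f}")                      # 423.5
```

Whether 0.24 percent precision is acceptable is the value judgment the constitutional court is being asked to make: it depends on how one prevented murder is weighed against roughly 424 detentions like Choi Donghyun's.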