
The Intent Alarm

After repeated failures of ankle-monitor tracking to prevent premeditated crimes, governments deploy AI behavioral-pattern analysis that issues 'criminal intent alerts,' igniting a global debate over pre-crime detention.

Turning Point: South Korea's Ministry of Justice authorizes the 'Predictive Threat Assessment System' pilot in 2029, granting probation officers the authority to request emergency custody based on AI-generated behavioral anomaly scores — before any crime has been committed.

Why It Starts

The pattern is sickeningly familiar: an offender wearing an ankle monitor conducts careful reconnaissance of a target's home, workplace, and daily routes — all within legally permitted movement zones — then removes the device and commits a violent crime within the hour. After the fourth such case makes national headlines in 2028, the public demands something beyond location tracking. The government responds with an AI system that continuously analyzes movement patterns, speed variations, dwell times, and route deviations to generate 'behavioral anomaly scores.' When the score crosses a threshold, probation officers receive an alert and can petition a judge for emergency preventive custody. The system catches two potential attackers in its first year. It also flags 847 false positives — people whose only crime was deviating from their routine to visit a sick relative or pick up a child from a different school. The legal challenges begin immediately. By 2031, the constitutional court must rule on a question that philosophers have debated for centuries and engineers have now made operational: can the state restrain a person for a crime they have not yet committed?
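The scoring pipeline described above — normalized movement features combined into a single score that trips an alert at a threshold — can be sketched in a few lines. Everything here is invented for illustration: the feature names, the weights, and the threshold are assumptions, not details from the scenario. The sketch also shows the system's core weakness: one heavily weighted feature (repeated passes near a flagged address) can push an otherwise routine week over the line.

```python
from dataclasses import dataclass

@dataclass
class MovementFeatures:
    """Weekly movement summary for one monitored person (illustrative fields)."""
    route_deviation_km: float  # distance off habitual routes
    dwell_time_anomaly: float  # z-score of dwell times at unusual locations
    speed_variation: float     # z-score of walking/driving speed changes
    repeat_pass_count: int     # passes near a flagged address this week

def anomaly_score(f: MovementFeatures,
                  weights=(0.2, 0.2, 0.1, 0.5)) -> float:
    """Weighted sum of features clipped to [0, 1]; weights are hypothetical."""
    components = (
        min(f.route_deviation_km / 5.0, 1.0),
        min(abs(f.dwell_time_anomaly) / 3.0, 1.0),
        min(abs(f.speed_variation) / 3.0, 1.0),
        min(f.repeat_pass_count / 3.0, 1.0),
    )
    return sum(w * c for w, c in zip(weights, components))

ALERT_THRESHOLD = 0.5  # hypothetical cutoff for a probation-officer alert

# A routine week: small deviations, no flagged-address passes.
routine = MovementFeatures(0.2, 0.1, 0.1, 0)
# A week with three passes near a flagged address, otherwise ordinary.
flagged = MovementFeatures(0.4, 0.2, 0.1, 3)

print(anomaly_score(routine) >= ALERT_THRESHOLD)  # False
print(anomaly_score(flagged) >= ALERT_THRESHOLD)  # True
```

Note that the second profile crosses the threshold on the strength of a single feature, with no way for the score to encode an innocent explanation — exactly the failure mode the false positives expose.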

How It Branches

  1. Multiple high-profile cases of ankle-monitored offenders committing premeditated crimes after conducting surveillance within their permitted zones expose the fundamental inadequacy of location-only tracking.
  2. Public outrage creates political pressure for a technological solution, and the government funds an AI system that analyzes behavioral patterns — not just location — to predict criminal intent from movement data.
  3. The system's early successes in preventing two attacks generate political support, but its roughly 400:1 false positive ratio triggers a wave of wrongful preventive detentions and civil liberties lawsuits.
  4. The constitutional court accepts a landmark case challenging the legal basis for 'pre-behavioral custody,' forcing a ruling that will define the boundary between prediction and punishment for decades.
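The branch points above turn on base-rate arithmetic. Using the scenario's own first-year figures — 2 prevented attacks against 847 false flags — the alert precision works out as follows (a straightforward calculation, not new data):

```python
# First-year figures from the scenario:
true_positives = 2     # attacks the system helped prevent
false_positives = 847  # wrongful emergency-custody flags

precision = true_positives / (true_positives + false_positives)
fp_per_tp = false_positives / true_positives

print(f"precision: {precision:.4f}")              # 0.0024
print(f"false flags per real catch: {fp_per_tp}")  # 423.5, the ~400:1 ratio
```

In other words, roughly 1 alert in 425 points at a genuine threat — the number the civil liberties lawsuits hinge on.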

What People Feel

It is 6:15 AM on a Wednesday in September 2030, in a studio apartment in Suwon. Choi Donghyun, 34, is putting on his shoes to go to a convenience store for breakfast when three plainclothes officers arrive at his door with an emergency custody order. His behavioral anomaly score spiked overnight — the AI flagged that he had walked past his ex-wife's apartment building three times in the past week. He had. It is on the way to his new therapist's office. He tries to explain this. The officers are sympathetic but procedurally bound. He spends eleven hours in a holding room before a judge reviews his case and releases him. His therapist later notes that the detention set his recovery back by months. The system logged the outcome as 'intervention successful — no crime occurred.'

The Other Side

The alternative — waiting for the crime to happen — has a body count. Every false positive represents an inconvenience; every false negative represents a victim. If the system prevents even one murder per year, the utilitarian calculus may favor its existence, provided robust appeal mechanisms protect the falsely flagged. The question is not whether prediction is perfect but whether it is better than the current system of passive monitoring that repeatedly fails.