
The Algorithm Will See You in Court

After decades of politically motivated prosecutorial reform, an AI-driven indictment review system becomes the compromise solution that nobody fully trusts but everyone reluctantly adopts.

Turning Point: In 2031, South Korea's National Assembly passes the Prosecutorial Neutrality Act after a constitutional crisis in which three successive administrations each dismantled their predecessor's prosecution reforms, creating a legal vacuum that left thousands of cases in procedural limbo.

Why It Starts

Every new government reshuffles prosecutorial authority to punish the last administration's allies and protect its own. The boundaries between investigation, indictment, and trial blur until public trust in criminal justice reaches a historic low. Into this vacuum steps a proposal that seems absurd until the alternatives seem worse: an AI-based indictment screening system trained on case law, designed to evaluate whether charges meet evidentiary thresholds without political bias. The system is marketed as a neutral referee. But neutrality encoded by humans inherits their blind spots, and the question of who trains the model becomes the new political battleground.
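
The scenario never specifies how such a screener would work. As a thought experiment, a minimal sketch of threshold-based indictment screening might look like the following; every field name, weight, and the 0.65 cutoff are illustrative assumptions, not features of any real or proposed system.

```python
from dataclasses import dataclass

@dataclass
class CaseFile:
    charge: str
    evidence_strength: float    # 0.0-1.0, aggregated from evidence review
    precedent_support: float    # 0.0-1.0, similarity to cases that held up at trial
    procedural_validity: bool   # did the investigating body have legal standing?

INDICTMENT_THRESHOLD = 0.65     # illustrative cutoff, not a legal standard

def screen_indictment(case: CaseFile) -> tuple[bool, float]:
    """Return (recommend_indictment, confidence), both machine-auditable."""
    if not case.procedural_validity:
        return False, 1.0       # standing defects are treated as dispositive
    confidence = 0.6 * case.evidence_strength + 0.4 * case.precedent_support
    return confidence >= INDICTMENT_THRESHOLD, round(confidence, 2)
```

Even a toy version makes the political stakes visible: whoever sets the weights and the threshold decides who gets charged.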

How It Branches

  1. Three consecutive administrations each restructure prosecution and investigation authority within their first year, creating contradictory legal frameworks that paralyze ongoing cases
  2. A landmark corruption trial collapses when the defendant argues that the investigating body lacked legal standing under the current reform, and the Supreme Court agrees in a split decision
  3. The Ministry of Justice commissions an AI indictment screening pilot to depoliticize the charge-filing process, initially limited to financial crimes
  4. Within two years the system expands to cover all felonies, but opposition parties discover that the training data overrepresented cases from conservative-era prosecutions, triggering accusations that the algorithm itself carries political bias (see the audit sketch after this list)
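
How would opposition researchers even demonstrate that kind of skew? One plausible, and entirely hypothetical, approach is a simple distributional audit comparing each era's share of the training corpus against its share of the historical case record:

```python
from collections import Counter

def era_shares(training_cases: list[dict]) -> dict[str, float]:
    """Fraction of training cases drawn from each political era."""
    counts = Counter(case["era"] for case in training_cases)
    total = sum(counts.values())
    return {era: n / total for era, n in counts.items()}

def overrepresentation(train: dict[str, float],
                       historical: dict[str, float]) -> dict[str, float]:
    """Ratio of training share to historical share; 1.0 means balanced."""
    return {era: train.get(era, 0.0) / historical[era] for era in historical}
```

With a training share of 0.7 against a historical share of 0.5, the conservative-era ratio comes out at 1.4: exactly the sort of number that turns a technical footnote into a political scandal.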

What People Feel

Attorney Yoon Seokmin stands in Courtroom 402 of the Seoul Central District Court on a humid Thursday in June 2032, arguing a motion to dismiss. His client, a former deputy minister, was indicted by the AI screening system with a confidence score of 0.73. Yoon holds up a tablet showing the system's reasoning chain — seven nodes of precedent analysis, each hyperlinked to case law. He tells the three-judge panel that a machine calculated his client's fate in eleven seconds. The presiding judge, who spent thirty years evaluating probable cause by instinct and experience, removes her glasses and asks counsel a question she has never asked before: what margin of error is acceptable for justice?
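
What does Yoon actually hold in his hand? The scenario gives only the outline: seven precedent-analysis nodes, hyperlinks to case law, a 0.73 confidence score. A hypothetical data structure consistent with that description, sketched purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class ReasoningNode:
    precedent_id: str    # the cited case
    case_law_url: str    # the hyperlink shown on the tablet
    finding: str         # what the node concluded about this charge
    weight: float        # contribution to the overall confidence

@dataclass
class ReasoningChain:
    nodes: list[ReasoningNode]    # seven of them in Yoon's motion
    confidence: float             # 0.73 for the former deputy minister

    def audit_trail(self) -> list[str]:
        """A trace a judge or defense counsel can read line by line."""
        return [f"{n.precedent_id}: {n.finding} (weight {n.weight:.2f})"
                for n in self.nodes]
```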

The Other Side

Human prosecutors have never been neutral. Political appointment, career incentives, and personal ambition have always shaped who gets charged and who walks free. An AI system, for all its flaws, at least produces auditable reasoning chains — something no human prosecutor has ever been required to provide. The question is not whether the algorithm is biased, but whether it is less biased than the system it replaces. If the training data problem can be solved through adversarial review and diverse datasets, machine-assisted screening could represent the most significant improvement in prosecutorial accountability in a century.
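
What might adversarial review mean in practice? The scenario names the idea without a mechanism, so the following is one speculative sketch: two screeners trained on deliberately different slices of case law, with any disagreement escalated to human review rather than resolved by the machine.

```python
def adversarial_screen(case, screener_a, screener_b, tolerance=0.1):
    """Route a case only when two independently trained screeners agree."""
    indict_a, conf_a = screener_a(case)
    indict_b, conf_b = screener_b(case)
    if indict_a != indict_b or abs(conf_a - conf_b) > tolerance:
        return "human_review"      # disagreement escalates to people
    return "indict" if indict_a else "decline"
```

Under that design the algorithm never gets the last word on a contested case, which is precisely the form of accountability the human system never offered.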