
The Algorithm in the Courtroom

Prosecution reform opens the door to AI-driven charging decisions, gradually displacing human prosecutorial discretion with algorithmic case assessment.

Turning Point: In 2030, South Korea's newly established Public Prosecution Service adopts an AI case assessment system that flags charging recommendations, and within two years, prosecutors who override the algorithm's recommendations face mandatory review boards.

Why It Starts

The path is paved with good intentions. After years of controversy over selective prosecution and political bias, reformers argue that algorithmic charging decisions would be more consistent, transparent, and immune to political pressure. The new Public Prosecution Service, created from the ashes of the old prosecutorial system, adopts an AI tool initially marketed as 'advisory.' But advisory quickly becomes default. Prosecutors who override the algorithm must file written justifications reviewed by oversight committees. The system's conviction rate is higher than human prosecutors'. Politicians celebrate the 'objectivity.'

Defense attorneys notice something else: the algorithm has no concept of mercy, no capacity for the gut feeling that a technically guilty person deserves a second chance, no ability to read the room of a community that needs healing more than punishment. The system is fair in the way that a scale is fair — perfectly, inhumanly, and without appeal to anything beyond its training data.

How It Branches

  1. Prosecutorial reform legislation dismantles the existing prosecutor general's office and creates a new independent Public Prosecution Service with explicit mandates for consistency and political neutrality
  2. The new agency adopts an AI case assessment system trained on ten years of case outcomes, initially as an advisory tool to flag inconsistencies in charging patterns across districts
  3. Internal metrics show the AI's recommended charges result in 23% higher conviction rates, leading management to make algorithm consultation mandatory before any charging decision
  4. Prosecutors who deviate from AI recommendations face mandatory justification reviews, creating institutional pressure to defer to the system rather than exercise independent judgment

What People Feel

Assistant Prosecutor Yoon Jihye sits in her office in Suwon at 11 PM, staring at the red flag on her screen. The AI system recommends full charges against a nineteen-year-old university student in a protest-related property damage case — technically correct under the statute, conviction probability 94%. She knows the kid. Not personally, but she knows the type: scared, first offense, the kind of person who would never offend again after a warning. She opens the override form. It asks for her employee number, a written justification, her supervisor's name. The last three prosecutors in her district who filed overrides were called before the consistency review board. She stares at the form. She closes it. She clicks 'Accept Recommendation.'

The Other Side

Human prosecutorial discretion has historically been anything but just — it has been shaped by racial bias, political pressure, class prejudice, and personal vendettas. An imperfect algorithm that applies rules consistently might produce less total injustice than a system that relies on the variable conscience of individual prosecutors, many of whom exercise their discretion in ways that reinforce existing inequalities. On this view, the real question is not whether the algorithm is flawed, but whether its flaws are smaller and more correctable than the ones it replaces.