Repeated failures of prosecutorial independence lead South Korea to pilot an AI-driven investigation and indictment system.
Decades of tug-of-war among prosecutors, the new investigative agency, and special inspectors erode public trust in human-led criminal justice. Each restructuring opens new avenues for political capture. A coalition of technocratic legislators and legal scholars proposes what once seemed absurd: let machine learning systems handle evidence evaluation, charge determination, and indictment recommendations, removing the human discretion that enables political weaponization. The pilot begins with financial crimes, where evidence is most quantifiable. Early results show faster case resolution and more consistent sentencing recommendations. But the system's training data encodes the biases of past prosecutions, and its opacity makes appeals nearly impossible. Justice becomes faster but less contestable.
Attorney Choi Eunji sits in a Seoul Central District Court hallway at 6 AM, reading her client's indictment for the fourth time. The document is flawless — every statute cited correctly, every evidence chain linked with a timestamp precision no human prosecutor achieves. Her client, a mid-level finance manager accused of embezzlement, asks who is prosecuting him. She pauses. There is no prosecutor. There is no one to cross-examine about investigative motive. There is only a system whose reasoning is rendered as a twenty-page technical appendix she is not qualified to challenge.
AI systems trained on historical case data do not eliminate bias — they fossilize it. The system may produce consistent outcomes, but consistency is not justice. Without a human prosecutor whose motives can be questioned, whose judgment can be appealed to, the adversarial system loses its most fundamental check. The cure for politicized prosecution may be worse than the disease: an unchallengeable black box wearing the robe of neutrality.