As office work turns out to be mostly document production and exception handling, elite white-collar status migrates to people who legally absorb the consequences of AI-made decisions.
The prestigious career is no longer analyst, manager, or strategist. It is bearer. Hospitals, banks, logistics firms, and schools run on automated recommendations, but the social system still demands a human neck to place beneath the blade. A new professional class emerges to supervise exception queues, sign liability attestations, and decide when to override the machine. They are paid well because they are the last scarce component in a highly automated office: someone society can punish. The result is stability of a grim kind, where responsibility survives mainly as an address for blame.
At 11:30 p.m. in a small apartment in Incheon, a thirty-eight-year-old former consultant named Ji-eun studies a hospital escalation file before tapping her biometric signature onto an oncology treatment override. She earns more than she did in strategy, but she keeps a separate phone for families who may someday call to ask why a machine was allowed to decide.
Advocates say named liability at least prevents institutions from hiding behind opaque systems and dissolving accountability into process. Opponents argue that the model preserves the appearance of human responsibility while concentrating impossible moral burdens on a narrow class of insured signers.