As AI agents let one worker produce the output of a small team, firms stop hiring for production volume and start hiring for who can legally absorb the risk.
The modern office grows smaller at the center and denser at the edges. A few high-leverage orchestrators command fleets of agents, while a widening layer of warrantors, reviewers, and liability clerks signs off on what machines have done. Salaries polarize accordingly: those who can direct systems capture enormous upside, and those who cannot become human shock absorbers for algorithmic mistakes. Work does not disappear; it hardens into command at the top and accountability at the bottom.
At 11:15 p.m. in a glass office tower in Singapore, Daniel scrolls through eighty-seven overnight approvals from systems he did not build, adding his signature to each because the shipment leaves at dawn and the insurance policy names him, not the software, as the accountable party.
Liability layers may slow reckless automation, and some argue that keeping humans answerable is the only way to preserve trust. But when accountability is concentrated in people who lack the power to refuse, responsibility becomes less a safeguard than a dumping ground.