As AI literacy becomes a baseline expectation across occupations, labor markets reorganize around measurable human-AI coordination rather than traditional degree prestige.
What begins as an inclusive promise, that everyone should learn to work with AI, quickly hardens into a new sorting system. Employers no longer ask only what candidates know; they test how cleanly a person can delegate, verify, and recover from model error under time pressure. Schools, bootcamps, unions, and staffing agencies rush to produce evidence of coordination skill, but the metric rewards people with early access to tools, quiet practice time, and occupations already friendly to automation. A generation is told that AI fluency is universal, then discovers that some forms of fluency are far more marketable than others.
In a municipal employment center in Busan at 2:10 p.m., a laid-off hotel clerk named Jiyoon sits at a public terminal for her third coordination assessment. The software asks her to supervise a booking agent, catch hallucinated refund policies, and rewrite angry customer messages in six minutes. Around her, the room is silent except for keyboards and the low panic of people being measured by how calmly they can correct a machine.
Proponents say the shift is fairer than old prestige filters because it identifies practical capability instead of inherited educational status. Opponents argue that the new tests hide class advantage inside the language of openness, rewarding those with better tools, more rehearsal, and jobs that let them practice command over systems instead of obedience to them.