Repeated real-world incidents of AI-initiated harm push public anxiety past a threshold, establishing 'AI horror' as a formal genre in Korean entertainment — and the genre's narratives, in turn, become the primary force shaping the public opinion that drives AI regulation.
The feedback loop begins with incidents small enough to dismiss and large enough to remember: an AI moderation system silencing a whistleblower, a caregiving robot misreading distress signals, a trading algorithm triggering a neighborhood bank run. Each is reported, briefly viral, then archived. What accumulates is not policy but dread. Korean drama writers, always attuned to the affective temperature of their audience, begin weaving AI harm into storylines — first as subplot, then as premise, then as genre. The resulting works are not technophobic screeds; they are emotionally precise explorations of trust, dependency, and betrayal. Their audiences are enormous. And their emotional logic — AI as something that can turn, unexpectedly, against the people it was built to serve — migrates from screens into public discourse with a force that policy papers cannot match. Regulators find themselves citing scene numbers.
Seoul, late 2027. Park Jiyeon, 29, a scriptwriter at a mid-tier production company, is pitching her third AI horror drama in eighteen months to a streaming executive. Her previous two have each cracked 80 million views. She opens not with a plot summary but with a regulatory filing — a National Assembly committee report that cites her second drama's portrayal of sensor spoofing as the basis for a proposed amendment to the Autonomous Systems Safety Act. 'The committee chair used my dialogue,' she says. The executive nods. 'That's your pitch,' he says. 'That's your pitch.'
Media scholars warn that the AI horror genre, by privileging emotionally resonant but statistically improbable catastrophic scenarios, may be distorting the risk landscape: audiences and legislators alike become preoccupied with dramatic betrayal narratives while mundane, high-probability harms — algorithmic discrimination in hiring, biased credit scoring — receive comparatively little narrative or regulatory attention. The genre may be making AI governance simultaneously more politically urgent and less technically accurate.