When dedicated AI agent hardware replaces the smartphone as the primary computing device, the screen-centric digital era gives way to ambient computing, fundamentally restructuring human attention.
A new category of personal hardware — small, screenless, voice-and-haptic devices carrying always-on AI agents — pulls computing away from the glass rectangle that has dominated human attention for two decades. Without feeds to scroll, notifications to swipe, or interfaces to navigate, users report recovering a sense of presence they had forgotten was missing. Urban design responds: cities begin removing digital billboards, cafes ban screens, and a generation of children grows up interacting with AI through speech and gesture rather than taps and swipes. But the transition is uneven, and screen-dependent industries fight back hard.
It is a Sunday morning in Jeju, 2032. Hana, a nine-year-old, wakes up and says good morning to the small device clipped to her backpack strap. It tells her the weather, reminds her about her grandmother's birthday, and plays her favorite song through bone-conduction audio as she walks to the kitchen. She has never owned a smartphone. Her father, Seojin, watches her from the doorway, remembering how he used to lose three hours every evening to his phone screen. The family's home has no television. There is a bookshelf where the monitor used to be. Seojin's hands are steady in a way they haven't been for years.
Screenless computing may simply shift attention capture to audio and haptic channels rather than eliminating it; voice-based AI interactions can be just as addictive and manipulative as feeds. Moreover, many essential tasks — data visualization, creative work, medical imaging, complex document review — inherently require visual interfaces and cannot be reduced to voice or haptics.