When only a handful of organizations can afford to retrain frontier models, every downstream business that fine-tunes them ends up paying a hidden tax for continued access to the source model.
Fine-tuning is cheap enough for thousands of firms, but full retraining remains concentrated in a few labs that command the capital, compute, and rare data it requires. That asymmetry creates a new industrial dependency: derivative-model operators can customize endlessly, yet their products still rely on upstream compatibility, updates, and legal permission. Over time, the real margin shifts away from local AI services and toward the gatekeepers that control whether yesterday's fine-tune survives tomorrow's base-model change.
Near midnight in a refrigerated warehouse outside Rotterdam, an operations director stares at a dashboard of delayed shipments: the routing model her company has fine-tuned for years no longer meets compliance rules, because its upstream provider deprecated a critical architecture branch.
Defenders of this arrangement argue that concentrated retraining is economically rational and safer than letting every firm build its own unstable foundation model. Opponents counter that society has recreated a utility without utility obligations, leaving whole industries exposed to private upstream decisions.