As medical AI grows reliable enough for routine care, the fiercest political struggle shifts from accuracy to the question of who pays when its mistakes are unevenly distributed.
The public argument about medical AI begins with performance charts and ends in distributional politics. Once systems work well enough overall, people stop asking whether they are impressive and start asking whose false negatives are tolerated, whose appeals move fastest, and whose premiums rise after an automated risk flag. New oversight boards emerge to negotiate these tradeoffs among regulators, hospitals, insurers, and patient groups. They are meant to settle safety as a social question rather than a purely technical one, but in practice they become arenas where statistical harm is translated into class, geography, and bargaining power.
On a rainy Thursday in Manchester, a school secretary waits outside a tribunal room with a folder of printouts showing how her cancer alert was downgraded by an approved model. Inside, lawyers are not debating whether the system works in general; they are debating who owes her for being on the wrong side of an allowed error band.
One view holds that explicit harm allocation is progress, because hidden tradeoffs become visible and contestable. Another holds that society normalizes preventable injustice the moment it writes acceptable suffering into administrative categories and learns to call the result fairness.