
The Harm Allocation Boards

As medical AI grows reliable enough for routine care, the fiercest political struggle moves from accuracy to deciding who pays when its mistakes are unevenly distributed.

Turning Point: Following several high-profile disputes over missed diagnoses across different demographic groups, governments create national boards that assign acceptable error thresholds, liability shares, and insurance treatment for clinical AI systems.

Why It Starts

The public argument about medical AI begins with performance charts and ends in distributional politics. Once systems work well enough overall, people stop asking whether they are impressive and start asking whose false negatives are tolerated, whose appeals move fastest, and whose premiums rise after automated risk flags. New boards emerge to negotiate these tradeoffs across regulators, hospitals, insurers, and patient groups. They are meant to align safety socially rather than technically, but in practice they become arenas where statistical harm is translated into class, geography, and bargaining power.

How It Branches

  1. Clinical AI demonstrates stable average performance across large populations, encouraging fast expansion into screening, triage, and reimbursement workflows.
  2. Patient advocates expose that even small error gaps create concentrated harm for specific regions, age groups, and minority populations.
  3. Governments formalize risk allocation rules, turning model oversight into a permanent political process over compensation, approval speed, and insurable liability.

What People Feel

On a rainy Thursday in Manchester, a school secretary waits outside a tribunal room with a folder of printouts showing how her cancer alert was downgraded by an approved model. Inside, lawyers are not debating whether the system works in general; they are debating who owes her for being on the wrong side of an allowed error band.

The Other Side

One view is that explicit harm allocation is progress because hidden tradeoffs become visible and contestable. Another view is that society normalizes preventable injustice once it writes acceptable suffering into administrative categories and learns to call that fairness.