AI as a tool for physicians in clinical practice is one thing. AI as a replacement is quite another. Yet some healthcare executives such as NYC Health + Hospitals CEO Mitchell Katz, who recently stated that “we could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge”, see replacement as the cure.
Over half of U.S. states impose caps on medical malpractice damages. Those caps exist because, without them, the system doesn't work: physicians can't get insurance, risk can't be pooled, and the profession can't function.
Now ask yourself a straightforward question: Do you believe any legislature is going to extend liability caps to an AI system? Do you believe any jury is going to feel the same sympathy for an algorithm that it might for a physician who made a judgment call under pressure?
The answer, almost certainly, is no.
When a physician makes a pattern of serious errors, there’s a corrective pathway. Peer review. Credentialing action. Reporting to the National Practitioner Data Bank. Loss of licensure. The system removes that individual while the rest of the profession continues functioning. The risk is isolated.
An AI system isn’t an individual. It is one platform deployed across many patients, across many sites. When it fails, it doesn’t fail as Dr. Smith at Memorial Hospital. It fails everywhere at once. And the only honest equivalent of losing your license is pulling the system entirely.
Physicians are allowed to be imperfect. Products are held to a different standard. For institutions deploying AI in a clinical context, and potentially for the executives making that decision, the result is unlimited liability, concentrated rather than distributed.
This is the accountability gap that the current enthusiasm for AI in medical practice, notably in radiology, is attempting to step over. But in reality, that gap is more like a chasm, and someone’s going to fall into it.