AI Governance | 9 min read | March 28, 2026

Who Bears the Risk When AI Gets It Wrong? The Case for Shared Clinical Liability


Dr. Jennifer Obi, MD

Pulmonary & Critical Care Physician | Founder, The Clinical AI Institute

They tell us AI is here to "support" clinical decisions.

But what happens when that "support" leads to a medical error? When an algorithm identifies a patient at low risk for stroke and that patient subsequently has one? When it misses a critical diagnosis that a fatigued physician, trusting the system, also failed to catch?

Currently, the law has a very simple, very old answer: the responsibility rests entirely with the last human to sign the chart. The physician.

But that formula is no longer fair. And it is no longer safe.


The Accountability Gap in AI-Assisted Medicine

We are at a pivotal moment in the history of clinical practice. Artificial intelligence tools are embedded in electronic health records, radiology platforms, sepsis prediction systems, medication dosing algorithms, and diagnostic support software. These tools are not passive. They actively shape the information a physician sees, the alerts they receive, the risk scores they act on, and the treatment pathways they follow.

Yet the legal and ethical framework governing who is responsible when those tools fail has not kept pace. The doctrine of physician accountability — rooted in centuries of medical jurisprudence — was designed for a world where clinical decisions flowed entirely from human judgment. It was not designed for a world where an algorithm trained on millions of data points quietly steers the ship while the physician holds the wheel.

The result is a structural accountability gap that places physicians in an untenable position: we are asked to trust these systems, act on their outputs, and ultimately own the consequences when they fail, even though we did not build them, cannot realistically audit them, and do not control them.


"Support" or Risk Transfer? Understanding the Distinction

The language used to introduce AI into clinical practice matters enormously. Vendors, health systems, and regulators consistently describe AI tools as "decision support" — a framing that implies the physician remains fully in command, with AI playing a subordinate advisory role.

But this framing obscures what is actually happening in practice.

When a sepsis prediction algorithm flags a patient as low risk and the physician, informed by that score, does not escalate care — only for that patient to deteriorate hours later — the algorithm did not merely "support" the decision. It shaped it. It influenced the physician's cognitive framing, their prioritization of that patient relative to others, and the clinical path that followed.

If I cannot understand the "why" behind an AI's recommendation — if the model is a black box that produces a score without a traceable clinical rationale — then I am not being supported. I am being insulated from information I would otherwise have generated myself, and I am being handed a conclusion I am expected to ratify.

That is not clinical support. That is risk transfer.

The technology company earns the revenue. The health system earns the efficiency gains. And the physician absorbs 100% of the consequences when the system is wrong.


The Legal Landscape: Where Liability Currently Sits

Under current U.S. medical malpractice law, liability for AI-assisted clinical errors is almost universally assigned to the treating physician or the healthcare institution — not to the AI vendor. Several legal doctrines reinforce this outcome.

The Captain of the Ship Doctrine holds that the physician, as the senior decision-maker in a clinical encounter, bears ultimate responsibility for all care rendered — regardless of who or what influenced that care. Applied to AI, this doctrine means that a physician who acts on an erroneous AI recommendation is treated no differently than one who made the same error without AI involvement.

Software as a Medical Device (SaMD) Regulation under the FDA creates a regulatory pathway for AI tools but does not establish civil liability for vendors when those tools contribute to patient harm. FDA clearance or approval does not confer immunity, but it also does not create a clear cause of action against the developer.

Learned Intermediary Doctrine, borrowed from pharmaceutical law, has been applied in some AI contexts to argue that once a vendor discloses the limitations of its tool to the prescribing physician (the "learned intermediary"), the vendor's duty to the patient is discharged. The physician, having been warned, assumes the risk.

The combined effect of these doctrines is a legal environment in which AI companies can scale their influence across millions of clinical decisions while remaining largely insulated from the consequences of errors those decisions produce.


The Physician's Impossible Position

Consider what we are actually asking of physicians when we deploy AI in clinical settings without a corresponding update to the accountability framework.

We are asking them to evaluate the output of systems trained on datasets they have never seen, using architectures they were never trained to understand, validated on populations that may not reflect their patient panel, and updated on schedules they are not informed of. We are asking them to do this in real time, under cognitive load, while managing multiple patients simultaneously.

And then, when the system is wrong, we are asking them to bear the full professional, legal, and moral weight of that error.

This is not a theoretical concern. The Epic Sepsis Model, one of the most widely deployed AI tools in American hospitals, was found in a 2021 study published in JAMA Internal Medicine to have a positive predictive value of just 12%, meaning that for every 100 patients it flagged as high-risk, roughly 88 were false positives. The same study found that the model also failed to identify about two-thirds of the patients who actually developed sepsis. Physicians who over-relied on the model's negative predictions, trusting that a low-risk score meant low risk, were exposed to liability for outcomes the model failed to predict.
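For readers who want the arithmetic behind those figures, here is a back-of-the-envelope sketch; only the 12% positive predictive value is taken from the study, and the counts are rounded for illustration:

\[
\text{PPV} = \frac{\text{true positives}}{\text{all patients flagged}} = 0.12
\;\Longrightarrow\;
\text{per 100 alerts: } \approx 12 \text{ real cases, } \approx 88 \text{ false alarms}
\]

\[
\text{Sensitivity} = \frac{\text{true positives}}{\text{all patients who develop sepsis}}
\;\Longrightarrow\;
\text{a low value means most septic patients receive no alert at all}
\]

The two numbers describe different failure modes: a low PPV produces alert fatigue, while low sensitivity is what makes a reassuring "low risk" score dangerous for the physician who trusts it.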

The physicians did not build the Epic Sepsis Model. They did not validate it. They did not choose to deploy it. They were handed it as part of their EHR environment and expected to use it responsibly — without any formal training on its limitations, its failure modes, or the populations on which it underperformed.


The Case for Shared Accountability

If an AI system materially influences the path of a patient's treatment, that system — and the company behind it — must carry a proportionate share of the clinical accountability. This is not a radical proposition. It is the logical extension of principles we already apply in other domains.

When a pharmaceutical company's drug causes harm, the company bears liability — even when a physician prescribed it. When a medical device manufacturer's product fails, the manufacturer bears liability — even when a surgeon implanted it. The physician's role as a learned intermediary does not extinguish the upstream party's responsibility; it distributes it.

The same logic should apply to AI. A company that markets a diagnostic algorithm to hospitals, trains it on proprietary data, deploys it at scale, and earns revenue from its use is not a passive tool provider. It is an active participant in the clinical decision-making process. When that participation leads to patient harm, the company should share in the consequence.

Shared accountability is not about punishing innovation. It is about creating the incentive structures that make innovation responsible. If AI companies know they will bear a portion of the liability when their tools fail, they will invest more rigorously in validation, transparency, bias testing, and post-market surveillance. They will build tools that are explainable, not just accurate. They will design for clinical integration, not just regulatory clearance.


What Responsible AI Accountability Looks Like in Practice

Reforming the accountability framework for clinical AI does not require dismantling existing medical malpractice law. It requires extending it — thoughtfully and deliberately — to reflect the new reality of how clinical decisions are made.

Proportionality. Liability should be distributed in proportion to the degree of influence each party exercised over the clinical decision. A physician who overrode an AI recommendation bears more responsibility than one who followed it. A vendor whose tool produced a high-confidence erroneous output bears more responsibility than one whose tool was used outside its intended clinical scope.

Transparency as a precondition for deployment. AI tools deployed in clinical settings should be required to provide clinicians with interpretable explanations for their outputs — not just scores or classifications, but the clinical features that drove them. A physician cannot meaningfully evaluate a recommendation they cannot understand.

Mandatory post-market surveillance. AI vendors should be required to monitor the real-world performance of their tools continuously, report performance degradation to deploying institutions, and notify clinicians when a tool's validated performance characteristics no longer hold in their patient population.

Institutional accountability. Health systems that deploy AI tools bear a duty to train the clinicians who use them — not just on how to operate the interface, but on the tool's validation data, known limitations, failure modes, and the populations on which it was and was not tested.

Regulatory evolution. The FDA's SaMD framework should be extended to include civil liability provisions that allow patients harmed by AI-influenced care to bring claims against vendors when those vendors' tools materially contributed to the harm.


The Conversation We Are Not Having

We have quietly moved from clinical judgment toward algorithm-dependent care. We have done so without updating our framework for responsibility, without training our physicians to critically evaluate AI outputs, and without establishing the legal infrastructure to hold all relevant parties accountable when patients are harmed.

The consequences of this silence are already visible. Physicians are burning out under the weight of documentation burdens that AI was supposed to reduce. Patients are being harmed by tools that were validated on populations that do not reflect them. And health systems are deploying AI at scale while their clinicians receive no formal education on how to use it responsibly.

This is not a technology problem. It is a governance problem. And governance problems require deliberate, structured, multi-stakeholder solutions — not market forces alone.

The Clinical AI Institute exists precisely to address this gap. Through the PIVOT Framework™ — our proprietary clinical standard for responsible AI governance — we work with health systems, physician groups, and healthcare organizations to build the structures, competencies, and accountability mechanisms that responsible AI adoption requires.

Because the question is not whether AI will shape clinical care. It already does.

The question is whether we will build the governance infrastructure to ensure that when AI is wrong, the consequences are shared fairly — and that the incentives exist to make AI right.


My Question to You

If AI is influencing care, should the AI company carry some of that clinical liability? Or does it all belong on the "captain of the ship"?

Physicians, attorneys, healthcare leaders, patient advocates — I want to hear your perspective. Leave a comment below or reach out directly. This is a conversation that cannot wait.


Dr. Jennifer Obi, MD, is a triple board-certified Pulmonary and Critical Care physician and the Founder of The Clinical AI Institute. She advises health systems, physician groups, and conference organizers on responsible AI implementation, clinical governance, and the ethical deployment of AI in medicine.

The Clinical AI Institute works with health systems, physician groups, and conference organizers to build the governance structures and clinical competencies that responsible AI adoption requires.

