Dr. Jennifer Obi, MD
Pulmonary & Critical Care Physician | Founder, The Clinical AI Institute
A patient is harmed. The recommendation came from AI. The physician signed the order.
What happens next?
This is not a hypothetical. It is a scenario playing out in hospitals right now — quietly, without resolution, and without a clear legal or ethical framework to guide what comes after. Artificial intelligence is already shaping clinical decisions at scale: triaging patients in emergency departments, flagging sepsis risk in the ICU, recommending medication dosages, interpreting imaging studies. And in nearly every one of those contexts, the physician is still the one who signs.
But the physician did not build the algorithm. The physician cannot audit it in real time. The physician may not even know what data it was trained on, what populations it was validated in, or where its confidence intervals break down.
Yet when something goes wrong, the physician is the one standing in front of the medical board.
There is a structural problem at the center of AI in healthcare that almost no one is talking about openly: influence has been outsourced, but responsibility has not.
AI systems in clinical settings are designed to influence decisions — to nudge, recommend, flag, and prioritize. They are marketed to health systems on the promise that they will improve outcomes, reduce errors, and save time. And many of them do. But when they do not — when the sepsis model misses the patient who deteriorates overnight, when the imaging algorithm fails to flag the early-stage malignancy, when the dosing recommendation interacts with a contraindication the model was not trained to recognize — the question of who is accountable becomes deeply uncomfortable.
The developer will point to the terms of service. The health system will point to the physician. The physician will point to the tool they were told to trust. And the patient, or the patient's family, will be left navigating a liability landscape that the law has not yet fully mapped.
This is not just a legal problem. It is a patient safety problem. Because when accountability is unclear, the systems that should catch errors — peer review, incident reporting, root cause analysis — lose their ability to function. You cannot fix what you cannot assign.
Medical malpractice law in the United States has historically been built around a single standard: what would a reasonable physician have done in the same circumstances? That standard was designed for a world where clinical decisions flowed from physician judgment, informed by training, experience, and the direct assessment of a patient.
AI does not fit cleanly into that framework.
When a physician relies on an AI recommendation and an adverse outcome results, courts are beginning to grapple with questions that existing doctrine was not designed to answer. Was the physician's reliance on the AI tool reasonable? Should the physician have overridden the recommendation? Did the health system have an obligation to validate the tool before deploying it? Does the AI developer bear any duty of care to the patient?
To date, there is no settled legal standard. A small but growing body of case law and regulatory guidance is beginning to emerge — the FDA's evolving framework for Software as a Medical Device (SaMD), the FTC's scrutiny of algorithmic accountability, and proposed state-level legislation in several jurisdictions — but the gap between where the law is and where clinical AI deployment is moving remains substantial.
In practice, this means that physicians are absorbing liability for decisions they did not fully make, using tools they did not fully choose, in systems they cannot fully audit.
The question of where accountability should sit in AI-assisted clinical decisions is not simple, and honest engagement with it requires acknowledging that no single answer is sufficient. There are three plausible frameworks, each with real implications.
Liability stays with the physician. This is the current default. The physician is the licensed professional. The physician has the legal and ethical duty of care. Under this framework, using an AI tool is no different from consulting a reference database or a specialist colleague — the physician is responsible for integrating that input with clinical judgment and making the final decision. The argument for this position is that it preserves physician accountability and prevents the erosion of clinical responsibility. The argument against it is that it places an impossible burden on physicians who are being asked to trust tools they cannot meaningfully evaluate, in health systems that are deploying those tools without adequate physician input.
Liability shifts to the health system. Health systems are the entities that select, procure, configure, and deploy AI tools. They have the institutional capacity to conduct pre-deployment validation, monitor for performance drift, and establish governance structures for AI oversight. Under this framework, when a health system deploys an AI tool that contributes to patient harm, the institution bears primary responsibility — not the individual clinician. This position has growing support in health law scholarship and aligns with how liability is allocated in other high-stakes institutional contexts. The challenge is that it requires health systems to invest seriously in AI governance infrastructure, which many have not yet done.
Liability is shared with the technology. This is the most legally novel position, and in some respects the most honest one. AI developers are not passive vendors. They make design choices, training data choices, and validation choices that directly affect clinical outcomes. Under product liability doctrine, there is a reasonable argument that developers of AI tools used in clinical settings should bear some portion of responsibility when those tools cause harm. The FDA's SaMD framework is beginning to move in this direction, but the legal and regulatory infrastructure to support shared liability at scale does not yet exist.
Waiting for the law to catch up is not a strategy. Physicians who are using AI tools in clinical practice need to take concrete steps to protect their patients and themselves.
Know what you are using. Before relying on any AI tool in clinical decision-making, understand what it was trained on, what populations it was validated in, and what its known failure modes are. If that information is not available from the vendor or the health system, that is itself a significant red flag.
Document your reasoning. When you override an AI recommendation, document why. When you follow one, document that you exercised independent clinical judgment in doing so. The medical record is your primary protection in any subsequent liability proceeding, and it should reflect that you were thinking — not just clicking.
Engage in governance. Physicians need to be at the table when health systems are making AI procurement and deployment decisions. This is not optional. If physicians are not involved in evaluating the tools that will shape their clinical decisions, they are accepting liability for choices they had no voice in making.
Advocate for transparency. Push for health systems and vendors to provide meaningful transparency about AI tool performance — not just aggregate accuracy metrics, but performance stratified by patient population, clinical setting, and edge cases. Responsible AI in healthcare requires that the people using these tools can actually evaluate them.
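To make that last point concrete, here is a minimal sketch, in Python with pandas, of the kind of stratified check a physician group or governance committee might request from a vendor's validation data. The file name, column names, and subgroups are hypothetical; the point is simply that a single aggregate number can hide strata where a tool performs far worse.

```python
# Minimal sketch: what "stratified performance" looks like in practice.
# Assumes a hypothetical validation export with one row per patient;
# the file name and column names are illustrative, not from any real vendor.
import pandas as pd

df = pd.read_csv("sepsis_model_validation.csv")  # hypothetical validation data

# Sensitivity = fraction of patients who actually deteriorated that the model flagged.
positives = df[df["deteriorated"] == 1]
overall_sensitivity = positives["model_flagged"].mean()

# The same metric, broken out by care setting and age band. An aggregate number
# can look reassuring while individual strata perform far worse.
by_subgroup = positives.groupby(["care_setting", "age_band"])["model_flagged"].mean()

print(f"Overall sensitivity: {overall_sensitivity:.2f}")
print(by_subgroup.sort_values())  # the weakest strata are the ones to question the vendor about
```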
AI will not stop shaping clinical decisions. The tools will become more capable, more embedded, and more consequential. The question is not whether to use them — it is how to use them responsibly, and how to ensure that when they fail, the system responds in a way that protects patients and preserves the integrity of clinical accountability.
The line in the sand is this: influence and responsibility must travel together. Any framework — legal, regulatory, or institutional — that allows AI to shape clinical decisions while insulating the entities that build and deploy those tools from accountability is not just legally incoherent. It is dangerous.
Physicians, attorneys, healthcare leaders, and policymakers need to be having this conversation now — before the case law is written by the worst possible outcomes, and before patients pay the price for a governance gap that everyone saw coming and no one chose to close.
Where should liability land? The conversation is open. It should not wait.
The Clinical AI Institute works with health systems, physician groups, and conference organizers to build the governance structures and clinical competencies that responsible AI adoption requires.
Physicians, attorneys, healthcare leaders — this is a conversation that matters. What is your line in the sand?