
The Edge Where Science Meets Ethics and Where AI Now Stands

  • David Beyda, MD
  • Jul 7
  • 4 min read

Updated: Aug 26

"When science and ethics work together, medicine becomes more than a transaction. It becomes a covenant, a sacred trust between doctor and patient. And now, AI is entering that sacred space."
"When science and ethics work together, medicine becomes more than a transaction. It becomes a covenant, a sacred trust between doctor and patient. And now, AI is entering that sacred space."

Article by David H. Beyda, MD

Chair and Professor, Department of Bioethics and Medical Humanism

College of Medicine-Phoenix, University of Arizona


There’s always been a tension in medicine that doesn’t get talked about enough. It’s the tug-of-war between what we can do and what we should do. Between the precision of science and the uncertainty of ethics. Between evidence and values. And now, a third player is stepping into that space: artificial intelligence.

I’ve spent my career walking that narrow edge. As a pediatric intensivist, I learned early on that the correct dose of epinephrine could bring a child back from cardiac arrest. I also learned there were moments when sitting beside a mother and saying, “There’s nothing more we can do,” was the most human thing I could offer. Now, as a professor of bioethics, I believe medicine isn’t just about outcomes. It’s about presence. About asking better questions, not just giving faster answers.

But AI doesn’t pause to ask. It predicts. It calculates. It offers solutions in milliseconds. That’s its strength and its risk.

Science, Ethics, and the Promise of AI

Science in medicine speaks in measurements, thresholds, and probabilities. It gives us tools that save lives. But science alone is agnostic. It doesn’t care whether the life it prolongs is one of suffering or dignity. That’s where ethics steps in. It asks, “Should we use this knowledge?” It listens to culture, values, fear, and hope.

When science and ethics work together, medicine becomes more than a transaction. It becomes a covenant, a sacred trust between doctor and patient. And now, AI is entering that sacred space.

A recent study from Microsoft grabbed headlines. Their new AI system, the Microsoft AI Diagnostic Orchestrator (MAI-DxO), correctly diagnosed nearly 85% of complex cases drawn from the New England Journal of Medicine, compared to just over 20% for generalist physicians attempting the same challenge. Even more striking, it did so at about 20% lower cost.

On its face, this is a breakthrough: science at its finest.

Risks Behind the Headlines

But we need balance. That accuracy came in a controlled setting, with cases selected for testing. Real-world conditions bring messiness: patients are unique, data may be incomplete, and emotional stakes are high. Experts stress that MAI‑DxO still needs rigorous real-world clinical trials and regulatory approval before it’s ready for prime time.

Bias is another concern. If AI is trained primarily on data from one demographic, it can misdiagnose patients from other demographics. That violates the core principles of justice and non-maleficence in medical ethics.

There’s also the risk of overreliance. If clinicians begin deferring to AI’s recommendations without reflection, the human touch of medicine can fade. Algorithms don’t sit with families. They don’t hold hands. That’s not a failure of AI; it’s a warning to us.

Holding AI to Ethical Standards

So, how do we move forward without losing what matters?

First, we treat AI as a tool, not a replacement. MAI‑DxO is a guide, not a governor. It should offer insights, not mandates. And every recommendation it makes should be open to human scrutiny and discussion.

Second, we need transparency. Tools like MAI‑DxO must explain their reasoning. Microsoft designed the system to show its logic path, a step toward explainable AI. That builds trust and accountability.

Third, we need oversight, regulation, and education. Hospitals, universities, and regulators all must set guardrails to ensure accuracy, fairness, privacy, and autonomy. We should teach medical students not only how to use AI but also how to recognize when to question it.

Holding Both Certainty and Humility

Let me give an example. Imagine an ICU where AI predicts a patient has a 95% chance of dying within three days. Clinicians might shift focus, recommending comfort care. But what if the family believes in miracles, or the data behind that prediction is skewed for their ethnicity or age group? Now we’re back at ethics. The choice belongs to the patient and their loved ones, not an algorithmic probability.

Or think about the roughly 15% of cases MAI‑DxO missed. When it’s wrong, who takes responsibility? The doctor who followed it? The team that developed it? The company that deployed it? We need clarity there.

A Balanced Path Forward

And yet, I’m excited by AI. It’s already helping detect conditions earlier, uncover patterns in data we can’t see, and deliver expert-level diagnostics in remote places. But we don’t erase science with it. We amplify it. We don’t negate ethics. We embed it deeper.

Medical education must evolve. Students need a new kind of literacy that enables them to read lab results, appraise model reasoning, and still listen to the human story behind them.

That means teaching them: here’s what the model says, but here’s how to ask if it fits your patient’s life story. It means making room for uncertainty. For questions.

The Future We Choose

AI won’t replace doctors. It may occasionally outperform us in pattern recognition and diagnostics, as Microsoft’s cases demonstrate. That’s a gift. However, if we let it replace our listening, judgment, and humility, medicine loses something essential.

The real risk isn’t that AI gets better at diagnosing. It’s that we stop doing the hard, quiet, relational work that heals.

So let’s build a future that balances science’s speed with ethics’ depth. That treats AI as a partner, not a pilot. That values accuracy and humanity.

Because in the end, healing isn’t just about what we can do. It’s about who we choose to be.

 

References:

  • Microsoft’s AI is Better Than Doctors at Diagnosing Disease
  • The Age of Intelligent Machines, Chapter Eight: The Search for Knowledge
1 Comment


kennas
Jul 08

Thank you for articulating the limitations of machines and for your courage in sharing compelling vignettes from your practice. The Art of medicine is of equal importance to the Science of medicine; a diagnostic tool with rapid data analysis may be beneficial at times, yet the machine cannot replace the physician, and we must demonstrate this truth. During the history taking we are attuned to the patient's voice, speech, vocabulary, expressions, their complexion, hair texture and patterns, nail appearance, attire, body language, smell, mood, and demeanor. During the physical exam, we observe, palpate, auscultate, and manipulate. We create a differential diagnosis based on our index of suspicion within the patient's unique bio-psycho-social-spiritual context. The patient may …

