Healthcare AI Ethics: Learning by Doing

“I always tried to live according to Kant’s moral law.” (a paraphrase of Adolf Eichmann’s testimony in Jerusalem in 1961 about why he obeyed Hitler’s orders without question)
Perhaps this is the most extreme case of self-justification using (bastardized) philosophical underpinnings, but it does drive home a point about teaching ethics: how do we make the next generation of healthcare professionals better at using ethics in practice and not just a generation of televangelists, able to recite but not follow ethical principles?
Teaching ethics around artificial intelligence poses a particularly thorny problem, especially since the incentives of the companies building AI are out of whack with the ethical use of their products. AI companies want to get you hooked. AI companies want to ship these models now. Is it really a surprise that OpenAI has recently slashed the time and resources it gives to safety testers? It seems that the ethical and responsible use of these technologies will fall to us, the people deploying them in the classroom.
So how do you teach ethics so that it sticks? At ReelDx we’ve created a “Learning Lab” — essentially a semester-long class for ReelDx interns on how to use AI. As part of the curriculum, we’re teaching hands-on exercises to explore the ethical dilemmas associated with the use of AI in healthcare.
A meta-analysis of ethics instruction showed that instruction is more effective when it is
a) interactive (classroom debates, individual journaling, roleplay, etc.),
b) field-specific (in our case, grounded in healthcare), and
c) varied (a series of different exercises, not just one).
And of course, we’ve tried to have a little fun with the exercises below so that the training isn’t a chore.
As with all things AI, these exercises are a work in progress, so we’d love to hear from you, dear reader, if you use these exercises. Just reply to this email and Rob will write back.

1. AI Ethics Mapping & Discussion
Using AI in healthcare education without ethical guidelines is like giving my 16-year-old the credit card for “emergencies” — technically valuable in the right context, but we all know where this is heading without proper boundaries.
This ethics mapping exercise helps students visually chart the ethical risks and educational value of various AI use cases in healthcare education.
How it works:
Present students with the quadrant chart below. Each axis asks a guiding question:
- How much educational value does this AI tool provide?
- How much ethical risk does it carry?
Then, in small groups, have students place different AI use cases into the chart: from plagiarism detection to AI-simulated patient conversations. (You can use ours, or have students build their own.)
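For instructors who want a quick digital version of the chart, here's a minimal sketch in Python. The use cases and their value/risk scores are placeholders we invented for illustration; arguing over the real numbers is the exercise.

```python
# Minimal quadrant-mapping sketch. Scores run 0-10 and are
# invented placeholders; students should debate the real values.
use_cases = {
    "Plagiarism detection": (4, 6),
    "AI-simulated patient conversations": (8, 5),
    "Automated grading of clinical notes": (5, 7),
    "Study-guide generation": (6, 2),
}

def quadrant(value, risk, midpoint=5):
    """Classify a use case by educational value vs. ethical risk."""
    if value >= midpoint:
        return "high value / high risk" if risk >= midpoint else "high value / low risk"
    return "low value / high risk" if risk >= midpoint else "low value / low risk"

for name, (value, risk) in use_cases.items():
    print(f"{name}: {quadrant(value, risk)}")
```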
Assignment:
Download the Google Slide here.
What students gain:
This exercise encourages systems-level thinking. Students begin to see AI not just as a utility, but as a tool whose use involves tradeoffs and risks.

2. Build an AI Model Card
Before your students start trusting AI tools, they should understand how these models are made. Model cards are a simple but powerful way to surface the assumptions, limitations, and intended use of any AI system.
How it works:
Introduce students to the concept of a model card — a structured summary of what an AI system is, what it does well, and where it can go wrong. Then have them fill out a card for a healthcare-specific use of AI (e.g. a triage bot, diagnostic tool, or admission screener).
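Model card formats vary, but a simple structured template is enough for the exercise. Here's a minimal sketch in Python for a hypothetical triage bot; every field value is invented for illustration, not drawn from any real product.

```python
# A minimal model card for a hypothetical ED triage bot.
# All values are invented examples; students replace them with
# whatever they can actually verify about the system they chose.
model_card = {
    "name": "ED Triage Assistant (hypothetical)",
    "intended_use": "Suggest a triage priority to emergency-department intake staff.",
    "out_of_scope": "Final triage decisions; pediatric patients; non-English intake notes.",
    "training_data": "Historical intake records, demographics unknown (assumed).",
    "known_limitations": [
        "May under-triage conditions that were under-documented historically.",
        "Accuracy unverified on rural and pediatric populations.",
    ],
    "ethical_considerations": "Risk of encoding historical bias in who got treated as urgent.",
    "human_oversight": "A clinician must be able to override every recommendation.",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```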
Assignment:
You can use the ReelDx template as a starting point.
What students gain:
They begin to see AI not as magic, but as a designed system with constraints — and someone, somewhere, is responsible for those choices. This helps frame future AI tools they’ll encounter with healthy skepticism and informed curiosity.

3. Generate and Debate an AI Dilemma
How it works:
Using this AI Ethics Tutor GPT, students generate a fictional healthcare scenario where AI creates an ethical dilemma. It could involve misdiagnosis, algorithmic bias, privacy concerns, or power dynamics between clinicians and systems.
Want a pre-made scenario? Try ours here.
Once their scenario is written, students take sides:
- One group defends the AI use
- One group critiques it
- A third group proposes a middle path
You then facilitate a structured debate, followed by a guided debrief.
Debrief Questions:
- What assumptions did each side make?
- What ethical values were in tension?
- How did it feel to argue a position you don't personally agree with?
What students gain:
Students begin to see that in AI ethics, there are rarely clear answers — only better questions. They learn to hold technical innovation and human experience in the same frame.
This exercise is inspired by classroom-tested debate methods from recent research on AI ethics education.

4. Roleplay: AI Ethics in Action
Sometimes the fastest way to teach ethical complexity is to stop talking about ethics and start acting it out.

Scenario: “The Algorithm Will See You Now”
An AI triage tool has flagged a Black woman’s chest pain as “non-urgent.” A nurse is pushing for a manual re-check. A physician is balancing limited staff and beds. The AI developer believes the tool is working as designed. The administrator is worried about liability, metrics, and cost.
Each student takes one of these roles and debates how to handle the fallout.
How it works:
Assign students one of the following roles:
- A patient
- A clinician
- An AI developer
- A hospital administrator
Each student receives a short character prompt (linked [here] in this Google Doc) describing their goals, perspective, and pressure points. Students act out the conversation and attempt to reach a resolution.
Afterward, hold a debrief with guiding questions like:
- What did your character care about most?
- Did you agree with your role’s actions?
- What would have changed if another character had more power?
Optional twist: Have students switch roles after the debrief and replay the scene. The perspective flip often unlocks deeper empathy and ethical insight.
What students gain:
Roleplay moves students from ethical theory into lived tension. They explore systems thinking, power dynamics, and the human side of decision-making in AI-integrated healthcare.
This activity is adapted from a peer-reviewed ethics education framework. Full prompt and roleplay structure available [here].

5. Design an AI Triage System
This exercise drops students into the middle of a crisis scenario and asks them to design an AI system that decides who gets care… and who doesn’t.
Inside the worksheet:
- Scenario setup
- Criteria options for AI decision rules (see the code sketch below)
- Role-play character profiles
- Reflection and debrief questions
Download the AI Triage System Worksheet here.
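The criteria options are where the worksheet gets uncomfortable. To make that concrete, here's a minimal sketch (ours, not part of the worksheet) of a rule-based triage scorer; every criterion and weight is an invented value judgment that someone has to defend.

```python
# A deliberately simple triage scorer. Criteria scores run 0-1 and
# the weights are invented for discussion; each weight is a value
# judgment, not a technical fact.
WEIGHTS = {
    "survival_likelihood": 0.5,   # favors patients most likely to benefit
    "urgency": 0.3,               # favors patients who deteriorate fastest
    "years_of_life_saved": 0.2,   # favors younger patients (is that fair?)
}

def triage_score(patient: dict) -> float:
    """Combine per-criterion scores into a single priority score."""
    return sum(WEIGHTS[criterion] * patient[criterion] for criterion in WEIGHTS)

patients = {
    "Patient A": {"survival_likelihood": 0.9, "urgency": 0.4, "years_of_life_saved": 0.3},
    "Patient B": {"survival_likelihood": 0.6, "urgency": 0.9, "years_of_life_saved": 0.8},
}

for name, criteria in sorted(patients.items(), key=lambda p: -triage_score(p[1])):
    print(f"{name}: priority {triage_score(criteria):.2f}")
```

Change a single weight and watch the queue reorder; that reshuffling is the debrief conversation.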
What students gain:
They’ll wrestle with real moral trade-offs and see how abstraction becomes personal. This is where AI ethics becomes unforgettable.
The Human Part
Using AI in healthcare education — to say nothing of healthcare itself — involves ethical risk.
These tools are already here, and our students will enter their healthcare careers needing to manage the risks those tools pose. If we don’t teach them about these risks now, they’ll have to learn on the job, where the stakes are higher and mistakes endanger actual people.
Don’t we want them to work with AI tools in the classroom? Don’t we, as educators, want to understand those tools ourselves so that we can lead the way?
When we built InSitu at ReelDx, we faced the same ethical question many educators now ask: is it right to use real patient data to train future clinicians?
The answer wasn’t obvious.
There were real questions. About privacy, about consent, and about what “consent” means when the technology keeps changing.
What convinced us wasn’t just anonymization protocols. It was the reminder that most patients who say yes to educational use aren’t agreeing to a platform or a product; they’re agreeing to a purpose.
They want future doctors to learn.
If their rare condition, misdiagnosis, or unexpected outcome can help someone else someday, that’s the “why.”
The delivery method — textbook, video, or AI — doesn’t change the intention.
A well-designed simulation can carry that intention forward with more privacy and precision than a traditional case.
It doesn’t replace the human part.
It protects it.
Curious where to start — or where to go next? If you’re thinking about how to bring AI ethics into your classroom, we’d love to hear from you. Whether you want to swap ideas, share what’s working, or get support along the way, our team is here for you.