Medical schools should be teaching students about artificial intelligence (AI) ethics, the authors of a recently published paper argued.1
“The motivation for the paper came from thinking about how AI was pervading medical care, and worrying that medical students were not equipped to think through the ethical implications of emerging AI applications,” says Sara Gerke, co-author and research fellow at Harvard’s Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics.
Numerous real-life examples of AI in healthcare already pose difficult ethical questions, including informed consent, patient privacy, transparency, allocation, and safety. “We hope that all medical schools will dedicate time within their curricula to discuss AI through an ethical lens by discussing these issues,” Gerke says.
In medical schools, AI ethics should be “embedded into broader ethics training. Currently, I don’t think AI ethics is covered in medical school at all,” says Satish Gattadahalli, director of digital health and health informatics, public sector, at Grant Thornton.
Gattadahalli says a good place to start is areas where AI is becoming prevalent: radiology, care planning recommendations for chronic diseases, and predictions of ICU mortality. Some ethical concerns surrounding health AI center on data, including who owns it, how it is used, and how it is kept private. Data quality also is a concern, since decisions could be based on questionable information. “We want to make sure that the inherent data biases are not going to exacerbate health inequities,” Gattadahalli says.
There is a need to ensure that the data captured are heterogeneous, representing various demographics and disease cohorts. “Whatever we do, we need to make sure our underlying principle is ‘Do No Harm,’ with patient safety issues always front and center,” Gattadahalli stresses.
Medical professionals — not just clinicians, but also support staff and nurses — “need to become AI-literate over the coming years,” says Jason Corso, PhD, director of the Stevens Institute for Artificial Intelligence in Hoboken, NJ.
This includes both ethical and technical aspects of AI. “AI will not replace clinicians. But they will need to understand AI technology more deeply than they perhaps believe will be necessary in order to properly integrate and benefit from the information provided by AI technologies,” Corso offers.
Some hospitals have not designated a specific person to be responsible for implementing data management principles throughout the organization. Typically, hospital chief information officers are focused mainly on computer networks and bringing systems back online after outages. “There needs to be some leadership around AI. Someone needs to own this problem,” Gattadahalli says.
AI systems should promote equity and fairness, be safe, preserve patient autonomy, and be “traceable, explainable, with transparent data and models,” Gattadahalli says.
How to explain it all to patients is yet another ethical challenge. “Patients are not AI experts. Neither are doctors,” Gattadahalli notes. Physicians must explain in simple terms that AI is a factor in clinical decision-making, but it is not the only factor. Otherwise, patients could misperceive that doctors are relying solely on AI to make a diagnosis. On the positive side, patients would benefit if AI took away physicians’ documentation burdens, allowing doctors to spend more time listening to patients. “The key thing is to make sure it does not get in the way of the patient/provider relationship,” Gattadahalli cautions.
Long-standing bioethics principles (including autonomy, beneficence, and justice) can be applied to AI in the field of pathology and laboratory medicine, the authors of another paper argued.2 Brian R. Jackson, MD, MS, lead author, says bioethicists in clinical settings can give input on specific issues — for example, transparency of data-sharing agreements between the hospital and outside entities.
“Just because a data-sharing agreement may be legal doesn’t automatically mean that it’s ethical,” says Jackson, vice president and chief medical informatics officer at University of Utah School of Medicine.
Another example is the patient consent process — specifically, whether patients are asked to separately consent to the use of their health information for industry research, development, and other nonclinical purposes (regardless of whether the data are de-identified).
Also, bioethicists can weigh in on how clinicians can validate AI systems before use in patient care, with ongoing monitoring for bias and other problems.
Protecting patient information is a key ethical concern, says Michael McCarthy, PhD, an associate professor in the Neiswanger Institute for Bioethics at Loyola University Chicago.
Beyond that, as AI becomes a tool used in providing accurate, rapid diagnoses (e.g., radiology or dermatology), it will be important that physicians can explain how the diagnosis was obtained.
“In addition to protecting the privacy of the patient information being used in developing AI technology, one has to consider the health inequities that already exist and the ways in which existing injustice can introduce further bias,” McCarthy says.
Another question to consider is one of justice. Who will benefit as advances in AI are made? “Will those who already have access to care and treatment receive better care?” McCarthy asks. “Or is there a way to use the information to enhance the care of those who bear disproportionately the burden of disease?”
1. Katznelson G, Gerke S. The need for health AI ethics in medical school education. Adv Health Sci Educ Theory Pract 2021 Mar 3. doi:10.1007/s10459-021-10040-3. [Online ahead of print].
2. Jackson BR, Ye Y, Crawford JM, et al. The ethics of artificial intelligence in pathology and laboratory medicine: Principles and practice. Acad Pathol 2021;8:2374289521990784. doi:10.1177/2374289521990784.