Focus on AI: Perspective: Assessing the Lack of ‘Explainability’ in AI-based Clinical Decision Support Systems


By: Randolph Fillmore

Categories: AAMI News, Clinical, Health Technology Management, Information Technology

The wave of clinical decision support systems (CDSS) aimed at helping clinicians make disease diagnoses and treatment decisions promises to standardize and revolutionize the practice of medicine. Yet there is debate over the legality and ethics of making important medical decisions using artificial intelligence that (unlike a fourth-year medical student) cannot say exactly “why” a particular medical decision should be made.

A study published in November 2020 in the journal BMC Medical Informatics and Decision Making, titled “Explainability for artificial intelligence in healthcare: a multidisciplinary perspective,” concluded that “…omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.”

The authors back up their opinion by looking at technological, legal, medical, patient, and ethical perspectives.


The Technological Perspective. The authors argue that there is a “trade-off” between explainability and performance, and that this trade-off is a major challenge for developers of CDSSs. They cite an example in which an X-ray system designed to detect patient risk worked well in the hospital where it was developed but not elsewhere; differences in the data and hardware used were ultimately to blame.

The Legal Perspective. The authors ask, “To what extent is explainability in AI legally required?” In their view, important issues regarding informed consent from the patient and liability have so far received little attention. In the case of AI-based decision support, do the underlying processes and algorithms have to be explained to the patient, especially in terms of risk?

The authors maintain that approval and certification bodies have been “slow to introduce requirements for explainable AI” and agree there should be more “transparency and accountability,” as suggested by an FDA discussion paper, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD).

In the future, explainability may be important to avoid the threat of malpractice lawsuits, the authors write.

The Medical Perspective. A 2020 Deloitte survey of 680 US primary care physicians noted that physicians of the future will need new ways to apply quantitative thinking, as well as the ability to “look under the hood” and understand the algorithms behind clinical decision support systems so that they can critically assess weaknesses in the software. However, “looking under the hood” may not provide explainability, nor may it help prevent medical errors. Validation and explainability are “instrumental” in the clinical setting so that disagreements between the AI system and human experts can be resolved.

The Patient Perspective. According to the authors, if there is no explainability, physicians may not be able to explain to patients how a recommendation was derived, potentially creating “black box medicine” that conflicts with patient-centered medicine.

“Explainability can address this issue by providing clinicians and patients with a personalized conversation aid based on the patient’s individual characteristics and risk factors … and provides a visual representation or natural language explanation of how different factors contributed to the final risk assessment,” they maintain.
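The kind of “conversation aid” the authors describe is easiest to picture with a simple, intrinsically interpretable model. The sketch below is not from the cited study; the risk factors, weights, and patient values are illustrative assumptions. It shows a toy logistic risk score that reports each factor’s contribution to the final estimate, which could then be rendered as a chart or a plain-language explanation for the patient.

```python
# Minimal sketch (illustrative only): a toy linear risk model whose per-factor
# contributions can be reported alongside the overall score. Factor names,
# weights, and the example patient below are assumptions, not clinical values.
import math

# Hypothetical log-odds weights for a simple risk score.
WEIGHTS = {
    "age_over_60": 0.9,
    "systolic_bp_high": 0.7,
    "smoker": 1.1,
    "ldl_high": 0.5,
}
INTERCEPT = -3.0  # baseline log-odds with no risk factors present


def explain_risk(patient: dict) -> None:
    """Print an overall risk estimate and each factor's contribution to the log-odds."""
    contributions = {
        name: WEIGHTS[name] * float(patient.get(name, 0)) for name in WEIGHTS
    }
    log_odds = INTERCEPT + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-log_odds))  # logistic link

    print(f"Estimated risk: {risk:.0%}")
    # List the factors in order of how strongly they moved the estimate.
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        if value != 0:
            print(f"  {name}: {value:+.2f} to the log-odds")


# Example patient: a smoker over 60 with high systolic blood pressure.
explain_risk({"age_over_60": 1, "smoker": 1, "systolic_bp_high": 1})
```

A linear model like this exposes its reasoning directly; the deep-learning CDSSs the authors are concerned with generally do not, which is where post-hoc explanation techniques, and the explainability-versus-performance trade-off described above, come into play.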