FDA, Stakeholders Continue Quest to Build Trust in the Use of AI in Medical Devices


By: Gabrielle Hirneise

November 12, 2020


Artificial intelligence (AI) and machine learning (ML) sit at the intersection of a multitude of fields, yet too few stakeholders are taken into account when these technologies are developed and promoted. The Food and Drug Administration’s (FDA’s) Center for Devices and Radiological Health (CDRH), which is at the forefront of advancing these technologies for the healthcare industry, is attempting to bridge the gap between the interests of different stakeholders by assembling the Patient Engagement Advisory Committee (PEAC), which consists of patients, healthcare experts, caregivers, and others.

The PEAC’s most recent meeting, held Oct. 22, addressed AI and ML in medical devices. The meeting featured keynote speakers and hosted breakout sessions with individuals on both the provider and patient sides of healthcare. These breakout sessions posed scenarios and questions that were discussed and later relayed to the larger group.

“The PEAC is one of the most direct ways that the public can provide comments or key recommendations to the FDA,” said Paul T. Conway, chairperson of the PEAC. “The standup of the PEAC was a bold move; it sent a very strong signal to all stakeholders that are involved in medical device development that patients and caregivers—their lives and their experiences, what they’ve seen, and the burdens they’ve managed—have value and are valued by decision makers.”

According to the FDA, the meeting “discussed the composition of the datasets on which [AI and ML] software learns, components of the device information shared with patients, and factors that impact patient trust in technology.”

These medical devices encompass a wide range of complex technologies, including virtual doctor apps, wearable sensors, drugstore chatbots, and algorithms that detect cancer in mammography. To improve regulatory frameworks and foster greater trust in these systems, there needs to be a rigorous means of “teaching” both the software and the individuals who use it.

“Despite rapid advancement, AI and ML systems may have algorithmic biases, limited general visibility, and lack transparency in their assumptions based on potential limitations of training datasets,” Conway said. “The recommendations by the committee will address the importance of including various demographic groups in AI/ML algorithm development and will also address the impact of user interface transparency, including what information to use and how that information can be communicated to foster patient trust in AI/ML devices.”

Leveraging Big Data

One of the most pervasive problems of the modern digital age is organizing and evaluating all of the available data. In the healthcare industry, leveraging big data is especially important, and that cannot be accomplished without the use of AI.

“Data are becoming the new water, and AI is helping organizations, health professionals, and patients gain more insights into how they can translate what we already knew in different silos into something useful,” said Bakul Patel, director of the FDA’s Digital Health Center of Excellence.

This is no easy feat, however. Patel suggested that for AI to be an effective mode of evaluating big data, we need to enhance patient access and trust in the systems and devise a way for manufacturers to quickly and effectively keep medical devices updated with the most recent iteration of software.

This challenge is mitigated by the fact that AI systems fall on a spectrum based on their ability to evolve. Medical devices can utilize locked or continuously adaptive AI systems, or any version between the two. This means there is flexibility in how much the algorithm can change and in when, and under what circumstances, the software is updated.
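As a rough illustration (not an FDA-defined scheme), a manufacturer’s update policy can be thought of as a gate between a locked algorithm and a continuously learning one. The sketch below is hypothetical; all class and parameter names are invented for illustration:

```python
from enum import Enum

class UpdatePolicy(Enum):
    LOCKED = "locked"          # algorithm never changes after release
    SCHEDULED = "scheduled"    # retrained only under approved circumstances
    CONTINUOUS = "continuous"  # adapts as new data arrive

class DeviceModel:
    """Hypothetical wrapper showing where an update policy gates learning."""

    def __init__(self, model, policy):
        self.model = model    # any estimator with a fit() method
        self.policy = policy

    def update(self, X_new, y_new, approved_window=False):
        """Apply new training data only if the policy allows it."""
        if self.policy is UpdatePolicy.LOCKED:
            return  # a locked algorithm ignores new data entirely
        if self.policy is UpdatePolicy.SCHEDULED and not approved_window:
            return  # scheduled updates happen only at approved times
        self.model.fit(X_new, y_new)  # adaptive: learn from the new data
```

The point of such a gate is that the same device software can sit anywhere on the locked-to-adaptive spectrum simply by changing its configured policy.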

‘No Data Set Is Perfect’

With a physician shortage on the horizon, the healthcare industry needs all the help it can get. But although AI may be considered an entity that learns and evolves on its own, many would say it is only as good as the datasets with which it is taught. This means that humans are ultimately responsible for what it learns and which populations it can serve.

“Unfortunately, data without context, without knowledge of how the data was collected, and under what circumstances, is not very useful and can lead to incorrect conclusions,” wrote Pat Baird, head of global software standards at Philips and chair of the BI&T Editorial Board, in his presentation. “There is a lot that can be learned from both homogenous and diverse datasets, but we need to understand the context of that data.”

This means that the datasets used to teach these AI systems need to be carefully curated for the population they are intended to help.

“An algorithm trained for one subset of the patient population might not be relevant for a different subset. For example, data from a pediatric hospital might not be valuable to a geriatric hospital, and vice versa. Similarly, an algorithm trained for the entire patient population might not be relevant for a subset,” Baird wrote.
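A minimal sketch of how that concern might be audited, assuming a scikit-learn-style classifier, NumPy arrays, and a hypothetical array identifying each patient’s subgroup:

```python
import numpy as np
from sklearn.metrics import accuracy_score

def subgroup_performance(model, X, y, groups):
    """Report accuracy separately for each patient subgroup.

    A model that looks strong overall may still underperform on a
    particular subset, e.g., pediatric vs. geriatric patients.
    """
    preds = model.predict(X)
    return {
        g: accuracy_score(y[groups == g], preds[groups == g])
        for g in np.unique(groups)
    }

# An (invented) result such as {'pediatric': 0.94, 'geriatric': 0.71}
# would expose a gap that a single overall accuracy number hides.
```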

Another question is whether these systems can be fed data over time until they naturally adapt to a particular hospital’s patient population. However, this entails problems of its own, such as variability between hospitals’ diagnoses and potential biases in treatment.

These considerations should help inform stakeholders on how to make AI systems broadly applicable or specific to a group.

Humans Are Built Differently

As more knowledge is gained about the human body at the molecular, cellular, and macro-organism scales, there is more to consider in how patient data are evaluated. A particular issue is that various demographic groups are underrepresented in the datasets used to train AI systems.

“AI and ML algorithms may not represent you if the data do not include you,” said Terri Cornelison, director of the Health of Women Program at CDRH.

Although AI systems can “mimic human cognitive function to help improve diagnostic accuracy, assess response to interventions, and reliably predict prognosis,” she said, they tend to ignore sex, gender, age, race, and ethnicity.

“Science increasingly reveals that for each of us, these attributes are clinical variables and as scientifically valid as any other clinical characteristic, such as handicap status or comorbidity,” Cornelison added. “Failure in accounting for these distinctions in training datasets, as well as the lack of a representative sample of the population in training data sets creates bias, generates suboptimal results, and produces mistakes.”
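One way a developer might screen for this kind of underrepresentation is to compare the demographic mix of a training set against the population the device is intended to serve. The sketch below is illustrative only; the function name, tolerance, and example figures are invented:

```python
from collections import Counter

def representation_gaps(train_attrs, target_dist, tolerance=0.05):
    """Flag groups whose share of the training data falls short of
    their share of the intended patient population."""
    n = len(train_attrs)
    train_dist = {k: v / n for k, v in Counter(train_attrs).items()}
    gaps = {}
    for group, target_share in target_dist.items():
        actual = train_dist.get(group, 0.0)
        if target_share - actual > tolerance:
            gaps[group] = (actual, target_share)
    return gaps

# Illustrative use with made-up numbers:
# representation_gaps(["F", "M", "F", "M", "M"], {"F": 0.51, "M": 0.49})
# -> {"F": (0.4, 0.51)}, i.e., female patients are underrepresented
```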

A Precarious Relationship

Another consideration in developing AI systems for healthcare is the study of human factors, or the interaction between humans and a system. Having an understanding of these factors will minimize misinterpretations of the information provided by AI systems in medical devices.

The perception-cognition-action model, which applies to both the user and the device, can help improve this understanding: both the user and the device must perceive information, process it, and act on it. The challenge is ensuring that the output from the device is easily understood by the user.

“This [output] information can be brought to the user’s attention through physical means, such as alarm sounds or vibrations or flashes of light, or it may just be displayed on a screen, but how this information is displayed may impact how we may perceive it,” said Kimberly Kontson, biomedical engineer at the CDRH. “Many studies have shown that attributes and objects are recognized more quickly when they are embedded in a consistent context. For example, when important keywords are embedded in meaningful sentences, they are more easily identified.”

Balancing the user interface against the user’s expectations leads to more intuitive design, thereby helping to ensure that important information doesn’t get lost in translation.

The Consensus

The breakout sessions yielded a consensus on various topics, with most participants agreeing that these technologies and the information they produce need to be accessible and transparent at the patient level.

Additionally, most participants agreed that patients should have the same information as physicians and that it should be presented in a way a layperson can understand. For this to be effective, these devices and their output also need to be inclusive of individuals with a range of attributes, such as different learning levels and disabilities.

“Patients should know what their data is being used for, how much data they are giving up, and whether or not they are even asked if that’s okay,” Conway said.