Focus on AI: FDA Developing Regulatory Framework for AI Devices

By: Gabrielle Hirneise

Categories: AAMI News, Government, Information Technology, Medical Device Manufacturing

Although meeting all the specifications can be a grueling process, the FDA has developed a regulatory framework with the aim of making AI-enabled medical devices as safe and effective as possible.

One vital element in the adoption of new AI-enabled medical devices is the International Medical Device Regulators Forum (IMDRF) Risk Framework for Software as a Medical Device (SaMD), said attorney Bradley Merrill Thompson, who specializes in FDA law and discussed FDA regulations in the webinar “FDA and Medical AI: What you need to know” during Virtual Engineering Week in December. This framework gauges risk and provides insight into whether AI systems are safe to use in a healthcare setting.

There are two dimensions to determining the risk of software: the criticality of the context, based on how serious and how fast-acting an ailment is, and the significance of the information the software provides, or how important it is to the healthcare provider’s decision making. Something that falls as low-risk within this framework, and could be a safe way to integrate AI into the healthcare system, is clinical decision support software (CDS).
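Those two dimensions combine into four risk categories, I (lowest) through IV (highest), in the IMDRF’s published SaMD risk categorization table. A minimal sketch of that table follows; the string labels here are paraphrases of the IMDRF wording, not official identifiers:

```python
# Sketch of the IMDRF SaMD risk categorization as a lookup table.
# Keys pair the state of the healthcare situation (how serious and
# fast-acting the condition is) with the significance of the
# information the software provides to the clinician's decision.
RISK_CATEGORY = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive_management"):  "III",
    ("critical",    "inform_management"): "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive_management"):  "II",
    ("serious",     "inform_management"): "I",
    ("non_serious", "treat_or_diagnose"): "II",
    ("non_serious", "drive_management"):  "I",
    ("non_serious", "inform_management"): "I",
}

def samd_risk_category(state: str, significance: str) -> str:
    """Return the IMDRF SaMD risk category (I-IV) for a given context."""
    return RISK_CATEGORY[(state, significance)]

# CDS that merely informs a clinician's decision about a non-serious
# condition lands in the lowest-risk category:
print(samd_risk_category("non_serious", "inform_management"))  # I
```

The table makes the article’s point concrete: software that only informs a decision in a non-critical context sits at the bottom of the risk scale, which is why CDS is a comparatively safe entry point for AI.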

CDS is “intended for the purpose of analyzing patient medical information or other information” and “supporting or providing recommendations to a healthcare professional about prevention, diagnosis or treatment of a disease,” Thompson said.

What makes such software low-risk is that it is used in scenarios where a healthcare provider doesn’t rely solely on the software’s output. Additionally, it is transparent in describing underlying data, includes plain language, describes rationale, and cites supporting information. 

But what else is exempt from strict FDA regulation and a stringent vetting process? The 21st Century Cures Act established that software used for administrative or wellness purposes, certified EHRs, and medical device data storage (without AI or analytics) are exempt from FDA regulation.

For many AI applications, though, the regulatory process isn’t so easy. Novel AI applications go through the de novo regulatory process, where only 41% receive a favorable decision; the rest are either declined or pressured into withdrawing. Applications with a valid predicate device can instead use the 510(k) submission, a relatively shorter process with a higher success rate.

“An awful lot of companies that come from a non-medical background use that non-medical approach to make it, get it out there on the market, see how it breaks, then fix it. That’s not appropriate for healthcare,” Thompson said.  

The regulations are intended to “level the playing field as to how you go about training, testing, validating, and assuring the transparency specifically of ML-based programs.” 

With more problems comes a demand for solutions. The FDA is developing these solutions via regulation while also weighing the need for AI-enabled systems. As detailed in the agency’s Artificial Intelligence and Machine Learning (AI/ML) Software as a Medical Device Action Plan, the FDA’s Center for Devices and Radiological Health “is considering a total product lifecycle-based regulatory framework for these technologies that would allow for modifications to be made from real-world learning and adaptation, while ensuring that the safety and effectiveness of the software as a medical device are maintained.”