ISC21: Responding to Regulatory Challenges of Cloud Computing, Artificial Intelligence


By: Martha Vockley

November 5, 2021

Categories: AAMI News, Government, Medical Device Manufacturers, Medical Device Manufacturing

The acceleration of cloud computing and artificial intelligence (AI) has created new challenges that affect all kinds of cutting-edge medical devices. The burning question is, how can quality and performance be evaluated, managed, and regulated when platforms and software change frequently, rapidly, and unpredictably?

At the AAMI/FDA/BSI Virtual 2021 International Conference on Medical Device Standards and Regulation in October, experts in health technology, standards, and regulations provided updates on national and international efforts to respond to uncertainty in the cyber age.

“Many third-party providers of cloud services are constantly updating their products, doing bug fixes, adding features, and streamlining some things, which is all good,” said Pat Baird, regulatory head of global software standards at Philips and cochair of AAMI’s AI Committee. “But they might not, or cannot, tell us whether or not something has changed. That’s not how we’re used to doing business and [working with] medical device software.” Cloud providers also sometimes roll out a new feature and then, if it doesn’t work, revert to a previous version without disclosing this change, he added.

This reality is so unlike usual quality assurance practices for the medical device industry that some companies have considered creating their own proprietary clouds, Baird said. “But honestly, the quality and security of some of the big-name, third-party vendors is probably going to be better than a homegrown cloud from a 30-person company” that is focused on a medical product.

Essentially, it doesn’t make sense to throw out the baby with the bathwater. Cloud computing in medical devices and quality management systems can have huge value for patients, providers, manufacturers, and regulators in terms of cost, reliability, security, agility, and functionality, according to AAMI CR510:2021, Appropriate Use of Public Cloud Computing for Quality Systems and Medical Devices. Crafted by a task force that includes members of the AAMI AI Committee and the AAMI Application of Quality Systems to Medical Devices Working Group, this consensus report is a new kind of guidance aimed at helping companies manage the risks posed by the ‘virtual shape-shifting’ of cloud-based technologies.

“Under these circumstances, the best that can be achieved is an intermittently validated state,” said Bernhard Kappe, CEO of Orthogonal, a medical software development company specializing in cloud-based solutions. “In other words, since we don’t have full visibility to the changes in underlying services, we can only monitor and periodically validate the system. It sounds like this could be a deal breaker, but it’s really no more of a deal breaker than changes in any other system,” such as broken hardware components, internet outages, or cybersecurity vulnerabilities.

“For that reason, the task force team members believe that in many cases, achieving an intermittently validated state with a high benefit–risk ratio is acceptable—provided that medical device manufacturers understand and plan for the changes that may occur in real time on commercial cloud platforms, and that they respond in an educated, thoughtful, and responsible manner to address the corresponding risk and ensure effectiveness and safety,” Kappe added. “You have to assume that the unexpected will happen again and again and again. Don’t be surprised and unprepared when it does. Instead, you need to get good at anticipating and responding to it.”
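
Kappe’s ‘monitor and periodically validate’ approach can be partly automated. The sketch below is purely illustrative and assumes a hypothetical cloud inference endpoint, request payloads, and expected results (none of them come from CR510 or any vendor’s API); it simply re-runs a small suite of known input/expected-output checks on a schedule and logs each outcome as evidence:

    # Minimal sketch of periodic "intermittent validation" of a cloud-hosted service.
    # The endpoint URL, request payloads, and expected results are hypothetical.
    import json
    import urllib.request
    from datetime import datetime, timezone

    VALIDATION_CASES = [
        # (payload sent to the service, expected response) -- illustrative values only
        ({"signal": [0.1, 0.4, 0.9]}, {"classification": "normal"}),
        ({"signal": [2.5, 3.1, 3.0]}, {"classification": "abnormal"}),
    ]

    def run_validation_suite(endpoint: str) -> bool:
        """Re-run the known cases against the live service and return True if all pass."""
        passed = True
        for payload, expected in VALIDATION_CASES:
            request = urllib.request.Request(
                endpoint,
                data=json.dumps(payload).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(request, timeout=10) as response:
                if json.load(response) != expected:
                    passed = False
        # Log every run so the quality system retains evidence of each validation cycle.
        print(f"{datetime.now(timezone.utc).isoformat()} intermittent validation passed={passed}")
        return passed

    if __name__ == "__main__":
        # A scheduler (cron, a CI pipeline, etc.) would invoke this on a fixed cadence.
        run_validation_suite("https://cloud-vendor.example/api/v1/classify")

A failing run would then trigger the response plan the manufacturer has already established, rather than an improvised reaction.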


AAMI CR510:2021 offers six key recommendations for responsibly embracing the cloud for medical devices:

  1. Identify the intended function of the cloud computing resources.
  2. Apply a risk-based approach to evaluating resources for your project or process.
  3. Identify the typical frequency of updates.
  4. Assess the vendor and its processes with an appropriate level of scrutiny.
  5. Establish a plan in case an update adversely affects the software.
  6. Develop a supplier monitoring process (a rough sketch follows this list).
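
To make recommendations 3, 5, and 6 concrete, the sketch below shows one possible shape for a supplier monitoring check; the status endpoint, metadata fields, and baseline file are hypothetical assumptions, not an excerpt from CR510 or any real provider’s API. It compares what the provider reports today against the configuration that was last validated and flags any drift for review:

    # Minimal sketch of supplier monitoring: detect drift from the last validated state.
    # The status endpoint, metadata fields, and baseline file are hypothetical.
    import json
    import urllib.request
    from pathlib import Path

    BASELINE_FILE = Path("validated_baseline.json")  # written after each successful validation

    def fetch_provider_metadata(endpoint: str) -> dict:
        """Ask the provider's status endpoint what versions and features it is running."""
        with urllib.request.urlopen(endpoint, timeout=10) as response:
            return json.load(response)

    def detect_unannounced_changes(endpoint: str) -> list:
        """Return the metadata fields that differ from the validated baseline."""
        baseline = json.loads(BASELINE_FILE.read_text())
        current = fetch_provider_metadata(endpoint)
        return [field for field in baseline if current.get(field) != baseline[field]]

    if __name__ == "__main__":
        drift = detect_unannounced_changes("https://cloud-vendor.example/api/status")
        if drift:
            # Escalate per the predefined plan: investigate, revalidate, or roll back.
            print("Provider change detected in: " + ", ".join(drift) + " -- revalidation required")
        else:
            print("No supplier changes detected since the last validated state")

How often such a check runs would follow from the update frequency identified under recommendation 3, and any flagged drift would feed the contingency plan called for in recommendation 5.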

The AAMI task force, consisting of experts from industry, medical device software development, and regulatory consulting, is now working to develop a technical information report that expands on the consensus report, with published insights planned along the way.

 Interested in joining the AAMI AI Committee or cloud computing task force? Email standards@aami.org.

 

Global Efforts to Mitigate Risks and Regulate AI

Like cloud computing, the benefits of AI—such as augmented clinician decision-making, improved population health, and improved patient outcomes—are expected to outweigh the challenges and risks. But there are similar concerns about evaluating, managing, and regulating medical devices that are designed to learn and adapt.


Here is a roundup of some of the regulatory efforts to address AI issues and algorithm change protocols in the U.S. and abroad:

  • AAMI and the British Standards Institution (BSI) will soon publish TR 34971, Guidance on the Application of ISO 14971 to Artificial Intelligence and Machine Learning. The risk management process in this technical report is the same as in the global standard for risk management. “But there are new ways to fail,” Baird said. “There are different things to look for, such as bias, and there are different risk controls to consider.”

  • The Food and Drug Administration (FDA) is updating its 2019 discussion paper, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device, including through guidance on a predetermined change control plan. “Considering the feedback on this paper resulted in developing an AI/ML action plan,” said Shawn Forrest, a biomedical engineer and digital health specialist at the FDA’s Digital Health Center of Excellence. “We also aim to strengthen our role in harmonizing Good Machine Learning Practices, support the development of regulatory science methods related to algorithm bias and robustness, and advance real-world performance pilots in coordination with stakeholders.” The agency also is fostering a patient-centered approach to AI through transparency.

  • In the wake of the United Kingdom’s departure from the European Union, the Medicines and Healthcare products Regulatory Agency (MHRA) is engaged in “probably a once-in-a-lifetime opportunity to create a new regulatory landscape for medical devices,” said Rob Turpin, head of sector (healthcare) at BSI. “We want to be a leader in medical device innovation, and therefore we have to put a system in place that is both leading and compatible with regulations around the world. In the artificial intelligence space, this means more flexible, responsive, and proportionate regulation.” MHRA is working on eight issues to reform its Software as a Medical Device regulations and three issues related to AI safety and effectiveness, testing and validation, and adaptive AI.

  • Xavier Health AI Working Teams are collaborating to maximize the advantages of AI in advancing patient health. This includes identifying a reasonable level of confidence in the performance of continuously learning systems in a way that minimizes risks to product quality and patient safety, Baird said. The group has published papers on good machine learning practices, trustworthiness, and data quality, and is working on a paper on bias management.

  • The Pistoia Alliance, a global collaboration of more than 100 member organizations, is developing a best practices toolkit for machine learning opportunities in life sciences, among other active projects.

  • The International Medical Device Regulators Forum is accepting comments until Nov. 29, 2021, on its proposed document, Machine Learning-enabled Medical Devices, which defines key terms and discusses key concepts such as supervised, unsupervised, and semi-supervised learning.

  • The International Organization for Standardization, International Electrotechnical Commission, IEEE, and Consumer Technology Association also are focusing on AI issues, including quality management of datasets for medical AI, use cases and applications, governance implications, trustworthiness, and security and privacy considerations.

  • South Korea, China, and Singapore are further along in developing AI guidance, while the European Union is still considering whether additional regulation is needed for AI.