Preparing for an AI Revolution in Healthcare Technology

Posted October 15, 2018

Healthcare organizations are generating and collecting enormous amounts of data. Coupled with advances in wearable technology and sensors, that wealth of data makes the conditions ripe for artificial intelligence (AI) and related technologies to forever change the healthcare technology field in the coming years.

Currently, AI is being used to drive healthcare research (e.g., an NYU and Facebook collaboration to speed up MRI scans), to help manage automation (e.g., patching large numbers of devices to save time), and to sort through vast amounts of data (e.g., IBM’s AI technology guiding oncology treatment for veterans by examining tumor genes). While the use of AI to guide healthcare decisions is gaining traction, the field is still in its infancy, which is why now is the perfect time for standards leaders to prepare for it.

In two recent workshops co-hosted by AAMI and BSI, one in London and one in Arlington, VA, regulators, standards leaders, industry representatives, and clinicians collaborated to lay the groundwork for future terminology, concepts, and collaboration on the safe use of AI in healthcare technology. The workshop series is intended to culminate in the publication of a white paper and recommendations that could form the basis for future AI-related standards.

“For any new area, discipline, technology, or process, there are issues that need to be clarified from the very beginning, and this is where standards are essential. They can help ensure a level playing field for market players and entrants to trade with the healthcare system and encourage innovation and knowledge sharing: for example, consensus codes and guides can help avoid duplication of effort and clear the way for real creativity,” said Sara Walton, market development manager at BSI. “Development of AI within healthcare is a global issue and needs a global response. With this project, we’re trying to encompass all perspectives from relevant stakeholders from the start.”

During the workshops, participants worked to define high-level principles for AI and its related technologies. Most important at this stage is reaching consensus on common terminology, starting with what it would even mean for a device to utilize AI.

“Some groups think that the definition of ‘AI’ means that it totally replaces a human. But for others, the term means that some AI applications are job aids, helping with preprocessing, for example, but not replacing a human,” said Pat Baird, regulatory head for global software standards for Philips North America and chair of the BI&T Editorial Board, who participated in the meeting.

Another major sticking point is how to define continuous learning systems. Should a system that learns change its behavior from patient to patient? Or should such a system instead receive periodic updates?
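In code, the gap between those two readings is stark. Here is a minimal sketch (the class names and update logic are invented for illustration, not taken from any standard or product) contrasting a system that adapts with every patient against one that stays frozen between validated releases:

```python
# Two interpretations of a "continuous learning" system.
# All names and logic here are illustrative assumptions, not a standard.

class PerPatientLearner:
    """Interpretation 1: the system adapts after every patient it sees."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def predict_and_learn(self, reading: float) -> float:
        prediction = self.mean  # behavior depends on all prior patients
        self.count += 1
        # Incrementally update the running mean with the new reading
        self.mean += (reading - self.mean) / self.count
        return prediction


class PeriodicallyUpdatedModel:
    """Interpretation 2: the system is frozen between validated releases."""
    def __init__(self, mean: float, version: str):
        self.mean = mean
        self.version = version

    def predict(self, reading: float) -> float:
        return self.mean  # identical behavior until the next release
```

Under the first interpretation, each prediction comes from a slightly different system than the one before it, which is part of what makes concepts such as verification and validation harder to pin down for AI.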

“Differences in what we mean by these terms are a real challenge,” Baird said. “At the end of the day, good software development is good software development. But with AI, there can be a lot that hasn’t been thought about.”

Major topics for continued discussion include software life cycles, verification and validation, determination of risk, the safety and effectiveness of algorithms, defining the levels of complexity in AI decision making, and understanding the quality of the data that’s used to train the AI system. A data set that wasn’t intended to be used for AI purposes, for example, may not be a good choice for training an AI system.

When it comes to AI, one of the most potent challenges is understanding the fundamental role of data. Poor or unexpected results are not necessarily caused by a problem in the software itself. An unexpected output may instead be the result of an AI system working perfectly well but trying to learn from data that isn’t appropriate.
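To make that failure mode concrete, here is a minimal sketch (the clinical values, populations, and threshold are invented for illustration, not drawn from the workshops): a rule “learned” from one patient population runs exactly as coded, yet flags a large share of healthy readings from a population its training data never represented.

```python
import numpy as np

# Illustrative sketch only: the values, populations, and the 3-sigma rule
# are assumptions made up for this example, not from the article.
rng = np.random.default_rng(0)

# Training data: readings from one clinic's adult patients.
train = rng.normal(loc=120.0, scale=10.0, size=1000)
mean, std = train.mean(), train.std()

def flag_abnormal(reading, z_threshold=3.0):
    """Flag a reading as unusual relative to the TRAINING population."""
    return abs(reading - mean) / std > z_threshold

# Deployment data: pediatric patients, whose normal readings differ.
pediatric = rng.normal(loc=95.0, scale=8.0, size=1000)
flagged = sum(flag_abnormal(r) for r in pediatric)

# The software behaves exactly as designed, yet flags roughly a quarter
# of healthy pediatric readings: an unexpected output caused not by a
# bug, but by training data that never represented this population.
print(f"Flagged {flagged} of {len(pediatric)} healthy pediatric readings")
```

Nothing in the code is defective; the failure point lies in the mismatch between the training data and the population the system sees in use, which is exactly what makes such problems hard to track down.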

“Being able to track this down as a failure point will be important. AI can very much look like a black box where it’s difficult to know how outputs are being generated. We want to trust it but also want to be skeptical,” Baird said. “We can’t see and understand everything, but it’s important for us—regulators and users—to gain confidence in what the software says.”