Yesterday, the Niskanen Center submitted comments to the FDA in response to a request for input on the “benefits and risks to health associated with the software functions excluded from the device definition by the [21st Century] Cures Act.” We argue that “the risks to patient health and safety from these regulatory exemptions are likely to be negligible,” given the narrow limits on which software functions qualify under the law. Accordingly, we devote the bulk of our analysis to a glaring omission in the exemption language and accompanying guidance documents: artificial intelligence (AI) and machine learning (ML).
Although the FDA has recently approved a number of diagnostic medical devices incorporating AI, considerable uncertainty remains. No guidance document makes clear and unambiguous reference to either AI or ML technologies, leaving innovators, investors, and researchers largely in the dark as to the agency’s thinking on these matters. This lack of clarity is worsened by the statutory exemption’s problematic focus on “rule-based tools” that can provide a “rationale” for diagnostic recommendations. Although such requirements apply only to software functions exempt from the definition of a medical device, the absence of any explicit reference to AI/ML in the FDA’s corpus of guidance documents could bode poorly for the technology’s future in the medical marketplace.
While we applaud the agency’s reasonable interpretation of the software function exemptions, much more must be done before the benefits of AI/ML health care technologies can be realized. The FDA will need to offer greater clarity on regulatory approval pathways for medical diagnostic devices using AI/ML, promote flexible standards that can keep pace with the rapid rate of technological change in this field, and jettison imprudent attempts to micromanage the design and development process in an elusive quest for “algorithmic transparency,” especially given the availability of more effective regulatory frameworks for governing AI.
To promote market certainty for AI researchers and investors working on diagnostic devices powered by machine learning systems, we argue the FDA should:
- Embrace a technologically neutral approach to regulating software in medical devices;
- Emphasize flexibility and adaptability in development standards;
- Prioritize new guidance describing and clarifying FDA’s thinking on AI diagnostic devices; and
- Expand the Software Precertification Program to include AI medical software developers.
As we note in our conclusion:
> The ongoing fusion of big data and AI will continue transforming the global economy. The health care industry has the potential to witness the most exciting and immediate gains from the application of AI — from improved individual patient health outcomes to the social benefits of reduced financial strains on our domestic health care system. Before AI can begin having this significant impact, however, the FDA needs to set the stage for the 21st century of personalized precision medicine. When deciding how to regulate this broad category of emerging devices, the agency must take a technologically neutral approach, focusing its scrutiny on device outcomes, not on a quixotic quest to attempt to peer into the black box.
Read the full comments here.