Developers of artificial intelligence-enabled medical devices disclose very little to the public about how their products work, but the FDA can promote greater transparency by requiring more and better information on AI-enabled tools in the agency's public database of approvals, Pew Research reported Feb. 18.
AI can save lives and reduce healthcare costs, but providers and patients need to know more about these products to use them safely and effectively.
The FDA oversees software intended to treat, diagnose, cure, mitigate or prevent disease or other conditions before it can be sold commercially, and in recent years the agency has examined how to improve communication from developers in four key areas:
1. Intended use
For a product to be applied properly in a medical setting, developers must clearly communicate how it should be used.
2. Development
Clinicians should know what kind of data an AI device was developed and trained on. That information helps determine whether a tool is appropriate for a specific patient: if the training data comes from a limited population, for example, the product may incorrectly detect, or miss, disease in people who are underrepresented in that data.
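As a rough illustration (not from the Pew report), the sketch below uses synthetic data and scikit-learn to show how a model trained mostly on one population can look accurate for that group while missing disease in an underrepresented one. The cohorts, features and numbers are all invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

def make_cohort(n, shift):
    """Synthetic patients: 3 features centered at `shift`; disease occurs
    when the feature sum clears a group-specific threshold."""
    X = rng.normal(shift, 1.0, size=(n, 3))
    y = (X.sum(axis=1) + rng.normal(0, 1.0, n) > 3 * shift).astype(int)
    return X, y

X_a, y_a = make_cohort(2000, 0.0)    # well-represented group
X_b, y_b = make_cohort(200, -1.5)    # underrepresented group

# Train on a skewed development dataset: almost entirely group A.
model = LogisticRegression().fit(
    np.vstack([X_a, X_b[:20]]), np.concatenate([y_a, y_b[:20]]))

# Sensitivity (recall) per subgroup: a large gap here is exactly the kind
# of information clinicians would want disclosed before relying on a tool.
for name, X, y in [("group A", X_a, y_a), ("group B", X_b[20:], y_b[20:])]:
    print(name, "sensitivity:", round(recall_score(y, model.predict(X)), 2))
```

Because the model learned a decision boundary tuned to group A, it flags disease reliably there but misses most true cases in group B, even though its aggregate accuracy would look respectable.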
3. Performance
Prescribers and patients need to know whether AI tools have been independently validated and, if so, how they were evaluated and how well they performed.
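To make that concrete, here is a minimal, hypothetical sketch of the kind of validation summary the report argues should be public: headline sensitivity and specificity with confidence intervals from an independent test set. The counts are invented for illustration.

```python
# Wilson-score confidence intervals for sensitivity and specificity,
# computed from illustrative (fabricated) validation-set counts.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 172, 28   # diseased patients: correctly flagged / missed
tn, fp = 750, 50   # healthy patients: correctly cleared / false alarms

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```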
4. Logic
Some AI tools built with machine learning techniques reach recommendations and results without explanation. If clinicians can't understand the logic a tool uses to arrive at a conclusion, they may not trust its recommendations or be able to identify flaws.
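One common way to probe an otherwise opaque model's logic, sketched below on synthetic data, is permutation importance: scramble each input in turn and measure how much performance drops. The feature names here are hypothetical, and this is one auditing technique among many, not a method prescribed by the FDA or the report.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["age", "blood_pressure", "lab_marker", "zip_code"]  # hypothetical
X = rng.normal(size=(500, 4))
y = (X[:, 2] > 0).astype(int)  # outcome truly depends only on lab_marker

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# A clinician reviewing this output can sanity-check the model's logic:
# if an irrelevant input like zip_code ranked high, that would be a red flag.
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
```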