Hospitals and health systems are increasingly adopting artificial intelligence, but researchers fear the technology faces a reproducibility crisis, according to a Jan. 9 article in Nature.
A surge in digital data and advances in computing power have boosted machine learning's potential to accelerate diagnoses, guide treatment strategies, support pandemic surveillance and address other health challenges, but the resulting AI models often lack reproducibility, the researchers said.
Researchers say that for AI models to be reproducible, their code and data must be available and error-free, yet privacy issues, ethical concerns and regulatory hurdles have made this difficult in healthcare AI.
For example, a review of 62 studies that used AI to diagnose COVID-19 from medical scans found that none of the models was ready for clinical use in diagnosing the disease or predicting its prognosis, citing biased data, methodological problems and reproducibility failures.
Because data to train AI models is scarce, biases and flaws in the data that is available can become entrenched, according to Nature.
Researchers note that despite these concerns, AI systems are already being used routinely in clinical practice.