A new study looks at what it will take to reduce bias in healthcare artificial intelligence.
Researchers from Massachusetts General Hospital analyzed four AI models used during the pandemic to predict which patients would develop the most severe cases of COVID-19. They found biases against certain groups of patients, and those biases often differed from one model to the next, making the predictions less reliable.
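The study's models and patient-level data are not public, but the kind of cross-model check it describes can be sketched in a few lines. Everything below is a hypothetical stand-in: random labels and predictions, a single binary group indicator, and false negative rate as the fairness metric.

```python
import numpy as np

def false_negative_rate(y_true, y_pred, mask):
    """FNR within a subgroup: the share of true severe cases the model missed."""
    positives = (y_true == 1) & mask
    if positives.sum() == 0:
        return float("nan")
    return ((y_pred == 0) & positives).sum() / positives.sum()

# Hypothetical severity labels, a binary group indicator (e.g., a
# demographic flag), and outputs from two models on the same patients.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000).astype(bool)
models = {
    "model_a": rng.integers(0, 2, size=1000),
    "model_b": rng.integers(0, 2, size=1000),
}

for name, y_pred in models.items():
    gap = (false_negative_rate(y_true, y_pred, group)
           - false_negative_rate(y_true, y_pred, ~group))
    print(f"{name}: FNR gap between groups = {gap:+.3f}")
```

If the gaps differ in size or even in sign from one model to the next, each model is unfair in its own way; that is the sort of inconsistency the study reports.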
"Given that we face systemic bias in our country's core institutions, we need technologies that will reduce these disparities and not exacerbate them," the authors wrote in the study published May 2 in the Journal of the American Medical Informatics Association.
The researchers noted that some existing software tools for identifying and correcting bias in healthcare AI tend to be ad hoc, detecting only the specific biases they were designed to look for.
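As a caricature of that ad hoc failure mode, consider an audit routine that only reports error-rate gaps for attributes it was configured to check. The data, attribute names, and audit function here are all hypothetical; the point is simply that a disparity along an unaudited axis is never surfaced.

```python
# Toy records with two attributes. The audit below is configured to check
# only "sex", so the equally large disparity along "age_band" goes unreported.
records = [
    {"sex": "F", "age_band": "65+"}, {"sex": "M", "age_band": "65+"},
    {"sex": "F", "age_band": "<65"}, {"sex": "M", "age_band": "<65"},
] * 25
y_true = [1, 1, 0, 0] * 25
y_pred = [0, 1, 0, 0] * 25   # errors fall only on one subgroup

def error_rate(indices):
    return sum(y_true[i] != y_pred[i] for i in indices) / len(indices)

def audit(attribute):
    """Report the error rate within each level of one pre-specified attribute."""
    for level in sorted({r[attribute] for r in records}):
        idx = [i for i, r in enumerate(records) if r[attribute] == level]
        print(f"{attribute}={level}: error rate {error_rate(idx):.2f}")

audit("sex")  # the configured check; flags a gap between F and M
# audit("age_band") is never run, so the age disparity stays invisible.
```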
"Only a holistic evaluation, a diligent search for unrecognized bias, can provide enough information for an unbiased judgment of AI bias that can invigorate follow-up investigations on identifying the underlying roots of bias and ultimately make a change," the authors wrote.