AI could replicate, compound existing disparities in COVID-19 care, study finds

The COVID-19 pandemic is known to disproportionately affect disadvantaged communities. As artificial intelligence is deployed to turn large datasets into statistical predictions, it too can be susceptible to algorithmic biases that compound inequality in COVID-19 treatment, a March 16 report published in The BMJ found.


Four things to know about the report’s findings:

  1. AI models may negatively affect disadvantaged communities because those communities are underrepresented in training databases and are already subject to health inequality.
  2. Pre-existing biases and discrimination are embedded in data distributions, skewing results.
  3. Biases are compounded in artificial intelligence design and deployment practices through power imbalances in agenda-setting, exclusionary design and testing practices, and biased system monitoring preferences, among other factors.
  4. Application of artificial intelligence can worsen treatment gaps, ignore digital divides and allow for hazardous discriminatory repurposing of biased artificial intelligence systems.

“Despite their promise, AI systems are uniquely positioned to exacerbate health inequalities during the [COVID-19] pandemic if not responsibly designed and deployed,” the report said. “To mitigate these effects, we call for inclusive and responsible practices that ensure fair use of medical and public AI systems in times of crisis and normalcy alike.”

The full report on how artificial intelligence can exacerbate inequalities is available in The BMJ.

