Four things to know about the report’s findings:
- The use of AI models may harm disadvantaged communities, which are underrepresented in training datasets and already subject to health inequality.
- Pre-existing biases and discrimination are embedded in data distributions, skewing results.
- Biases are compounded during artificial intelligence design and deployment because of power imbalances in agenda-setting, exclusionary design and testing practices, and biases in system monitoring preferences, among other factors.
- Applying artificial intelligence can widen treatment gaps, overlook digital divides and enable hazardous discriminatory repurposing of biased artificial intelligence systems.
“Despite their promise, AI systems are uniquely positioned to exacerbate health inequalities during the [COVID-19] pandemic if not responsibly designed and deployed,” the report said. “To mitigate these effects, we call for inclusive and responsible practices that ensure fair use of medical and public AI systems in times of crisis and normalcy alike.”
To read the full report on how artificial intelligence can exacerbate inequalities, click here.