Although artificial intelligence is being promoted as a way to diagnose conditions and predict patients' risk of admission to an intensive care unit, the use of AI in medicine could worsen health disparities, a physician writes in The New York Times.
Dhruv Khullar, MD, a physician at NewYork-Presbyterian Hospital in New York City, voiced three concerns:
1. Imperfect technology. AI learns to diagnose disease from large data sets, and if those data sets do not include enough patients from a particular background, the resulting tools will be less reliable for those patients, Dr. Khullar said. For example, a recent study found some facial recognition programs misclassified less than 1 percent of light-skinned men but more than one-third of dark-skinned women. "What happens when we rely on such algorithms to diagnose melanoma on light versus dark skin?" Dr. Khullar wrote.
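To make that mechanism concrete, the toy sketch below trains a classifier on synthetic data in which one group is heavily underrepresented, then measures the error rate separately for each group. Everything here is an assumption for illustration: the features, group sizes, and use of scikit-learn are stand-ins, and the numbers have no connection to the study Dr. Khullar cites.

```python
# Illustrative sketch: a classifier trained on data where one group is
# underrepresented tends to show a higher error rate for that group.
# All numbers here are synthetic assumptions, not results from any study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features per "patient"; the true decision boundary
    # differs slightly by group, standing in for real covariate shift.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > shift).astype(int)
    return X, y

# Group A dominates the training set; group B is scarce (the imbalance).
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
X_train = np.vstack([Xa, Xb])
y_train = np.concatenate([ya, yb])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("group A (well represented)", 0.0),
                    ("group B (underrepresented)", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    err = 1.0 - model.score(X_test, y_test)
    print(f"{name}: error rate = {err:.1%}")
```

The exact figures vary with the random seed; the point is that the model's error concentrates in the group that was scarce in training, for no reason other than who was in the data.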
2. Risk of perpetuating biases. Because AI is trained on real-world data, it risks absorbing and perpetuating the economic and social biases that already contribute to health disparities, Dr. Khullar said.
"In medicine, unchecked AI could create self-fulfilling prophecies that confirm our pre-existing biases, especially when used for conditions with complex trade-offs and high degrees of uncertainty."
For example, if poorer patients fare worse after organ transplantation or after receiving chemotherapy for end-stage cancer, machine learning algorithms may conclude these patients are less likely to benefit from further treatment and recommend against it, Dr. Khullar said.
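That self-fulfilling loop is easy to reproduce in miniature. In the hypothetical sketch below, the true medical benefit of treatment is identical across income groups, but the observed historical outcomes are worse for low-income patients because of social factors, so a model trained on those outcomes recommends treatment less often for exactly those patients. The feature names, effect sizes, and 0.5 decision threshold are all assumptions for illustration.

```python
# Illustrative sketch: if historical outcomes are worse for low-income
# patients for social (not biological) reasons, a model trained on those
# outcomes learns to recommend against treating them. Synthetic data;
# feature names and effect sizes are assumptions, not clinical facts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

severity = rng.normal(size=n)              # clinical need, same everywhere
low_income = rng.integers(0, 2, size=n)    # 1 = low-income patient

# The *observed* outcome is dragged down for low-income patients by
# factors like missed follow-up care, even though the treatment itself
# helps both groups equally in this synthetic world.
p_good_outcome = 1 / (1 + np.exp(-(0.8 - 0.6 * severity - 0.9 * low_income)))
good_outcome = rng.random(n) < p_good_outcome

X = np.column_stack([severity, low_income])
model = LogisticRegression().fit(X, good_outcome)

# A naive decision rule: recommend treatment only when the predicted
# chance of a good outcome clears some threshold.
recommend = model.predict_proba(X)[:, 1] > 0.5
for grp, label in [(0, "higher-income"), (1, "low-income")]:
    rate = recommend[low_income == grp].mean()
    print(f"{label}: recommended for treatment {rate:.1%} of the time")
```

The model is doing exactly what it was asked to do, which is the danger: the disparity in its recommendations looks like a statistical finding rather than an inherited bias.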
3. Disproportionate effects on certain groups. AI may also worsen disparities if it is deployed in ways that affect certain groups disproportionately, according to Dr. Khullar. For example, if an algorithm treats a patient's residence in a low-income neighborhood as a marker for poor social support, it may steer minority patients toward treatment at a nursing facility, which carries higher costs and a higher risk of hospital readmission, rather than toward home-based physical therapy.
"Worse yet, a program designed to maximize efficiency or lower medical costs might discourage operating on those patients altogether," Dr. Khullar said.
"American healthcare has always struggled with income- and race-based inequities rooted in various forms of bias," Dr. Khullar wrote. "The risk with AI is that these biases become automated and invisible — that we begin to accept the wisdom of machines over the wisdom of our own clinical and moral intuition. It is our duty to ensure that we're using AI as another tool at our disposal — not the other way around."