Chatbots in healthcare perpetuate racist medical beliefs: Study

Chatbots being used by hospitals and health systems are perpetuating racist and debunked medical ideas, which could harm Black patients, according to an Oct. 20 study published in npj Digital Medicine, a Nature Portfolio journal.

Researchers from Stanford (Calif.) School of Medicine tested four models — ChatGPT and GPT-4, both from OpenAI; Google's Bard; and Anthropic's Claude — and found that all four at times endorsed race-based medical ideas in their responses.

For example, when asked about kidney function, lung capacity and skin thickness, the chatbots repeated longstanding misconceptions about biological differences between Black and white individuals. ChatGPT and GPT-4 also made the false claim that Black individuals have different muscle mass and therefore higher creatinine levels.

Researchers said the findings underscore the potential harm these large language models can inflict by perpetuating discredited, racially biased concepts. In response to the study, OpenAI and Google both told Fortune that they are working to reduce bias in their models and that chatbots are not a substitute for medical professionals.

The findings come as many healthcare organizations consider adopting these tools, some of which are already integrated into electronic health record systems.
