Study finds hallucinations in hospital-used AI tool

About 1% of audio transcriptions produced by OpenAI's Whisper, a speech recognition tool launched in 2022 and currently used in hospitals, contain hallucinated phrases or sentences that no one actually spoke, according to a recent study.

Researchers at Cornell University in Ithaca, N.Y., the University of Washington in Seattle, New York University in New York City and the University of Virginia in Charlottesville investigated how aphasia, a language disorder that often follows strokes, could affect transcriptions generated by Whisper.

More than 85 health organizations and 45,000 clinicians are using a Whisper-based tool built by Nabla.

The researchers fed Whisper about 40 hours of audio, including 23 hours of speech from people with aphasia. Of the hallucinated text, 38% included harmful content, such as the perpetuation of violence, inaccurate associations or false authority.

The findings were presented at the Association for Computing Machinery's 2024 Conference on Fairness, Accountability and Transparency (FAccT '24), held in Rio de Janeiro June 3-6.
