Researchers report that OpenAI's Whisper transcription tool experiences hallucination issues.

According to an Associated Press report, software engineers, developers, and academic researchers are raising concerns about transcriptions produced by OpenAI's Whisper.

While there has been no shortage of debate about generative AI's tendency to hallucinate, that is, to simply make things up, it is striking that this problem appears in transcription, where the output is expected to closely match the audio being transcribed.
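For context, Whisper is also distributed as an open-source Python package, and a typical transcription call looks like the minimal sketch below; the model size and audio file name here are purely illustrative, not tied to any of the systems the researchers studied.

    import whisper  # open-source openai-whisper package

    # Load a pretrained Whisper checkpoint ("base" is illustrative;
    # larger checkpoints trade speed for accuracy).
    model = whisper.load_model("base")

    # Transcribe an audio file; the result is a dict whose "text" field
    # holds the full transcript and whose "segments" field holds
    # timestamped chunks.
    result = model.transcribe("meeting_audio.mp3")
    print(result["text"])

The key point is that the model generates the transcript token by token rather than looking words up, which is why fabricated content can slip into output that looks like a faithful transcription.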

Instead, researchers told the AP that Whisper has introduced everything from racial commentary to imagined medical treatments into transcripts. And that could be particularly disastrous as Whisper is adopted in hospitals and other medical contexts.

A University of Michigan researcher who analyzed public meetings found hallucinations in eight out of every 10 audio transcriptions. A machine learning engineer reviewed more than 100 hours of Whisper transcriptions and found hallucinations in over half of them. And a developer reported finding hallucinations in nearly all of the 26,000 transcriptions he created with Whisper.

In response, an OpenAI spokesperson said the company is "continually working to improve the accuracy of our models, including reducing hallucinations" and noted that its usage policies prohibit "using Whisper in certain high-stakes decision-making contexts."

"We thank researchers for sharing their findings," they said.
