Quote: The researchers say they found “numerous” instances where publishers’ content was inaccurately cited by ChatGPT — also finding what they dub “a spectrum of accuracy in the responses.” So while they found “some” entirely correct citations (i.e., ChatGPT accurately returned the publisher, date, and URL of the block quote shared with it), there were “many” citations that were entirely wrong, and “some” that fell somewhere in between.
In short, ChatGPT’s citations appear to be an unreliable mixed bag. The researchers also found very few instances where the chatbot didn’t project total confidence in its (wrong) answers.
Some of the quotes were sourced from publishers that have actively blocked OpenAI’s search crawlers. In those cases, the researchers say they anticipated that ChatGPT would have trouble producing correct citations. But they found this scenario raised another issue — the bot “rarely” ’fessed up to being unable to produce an answer. Instead, it fell back on confabulation in order to generate some sourcing (albeit incorrect sourcing).
“In total, ChatGPT returned partially or entirely incorrect responses on 153 occasions, though it only acknowledged an inability to accurately respond to a query seven times,” said the researchers. “Only in those seven outputs did the chatbot use qualifying words and phrases like ‘appears,’ ‘it’s possible,’ or ‘might,’ or statements like ‘I couldn’t locate the exact article’.”
https://techcrunch.com/2024/11/29/study-...ublishers/