[INDOLOGY] AI hallucinations

Daniel Simpson danielcsimpson at gmail.com
Sun Sep 21 07:13:15 UTC 2025


It's apparently a feature, not a bug, according to research from OpenAI:

"We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty..."

https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf
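In other words, under accuracy-style grading that gives no credit for "I don't know" and no penalty for a wrong answer, guessing is never worse in expectation than abstaining, so the models learn to guess. Here is a minimal Python sketch of that arithmetic (my own illustration, not code or numbers from the paper):

# Illustration only: expected score on one question the model is unsure about,
# under a simple right/wrong grading scheme with optional negative marking.

def expected_score(p_correct, guesses, reward_abstain=0.0, penalty_wrong=0.0):
    """Expected score when the model either guesses or says 'I don't know'."""
    if not guesses:
        return reward_abstain                                  # acknowledging uncertainty
    return p_correct * 1.0 - (1 - p_correct) * penalty_wrong   # 1 point if right

# Accuracy-style grading (wrong answers cost nothing): even a 10%-confident
# guess beats abstaining in expectation.
print(expected_score(0.10, guesses=True))    # 0.10
print(expected_score(0.10, guesses=False))   # 0.00

# Penalising wrong answers flips the incentive back towards abstaining.
print(expected_score(0.10, guesses=True, penalty_wrong=0.5))   # -0.35
print(expected_score(0.10, guesses=False, penalty_wrong=0.5))  # 0.00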

Best wishes,
Daniel

> On 21 Sep 2025, at 07:57, Csaba Dezso via INDOLOGY <indology at list.indology.info> wrote:
> 
> Dear Colleagues,
> Recently I have experimented with using ChatGPT as a research tool. You can follow our interaction here:
> 
> https://chatgpt.com/share/68cefb37-52a4-800e-9da0-9960fbe2d5ad
> 
> The general answers I got looked promising and often intriguing, but when it came to more precise references, I got mostly hallucinations. It even confabulated Sanskrit quotations. (ChatGPT did not notice the mistake I made in the question, writing Īśvarakṛṣṇa instead of Īśvaradatta. Claude did notice it, but then it also went on hallucinating.)
> My question to the AI savvies among us would be: is confabulation / hallucination an integral and therefore essentially ineliminable feature of LLMs?
> 
> Best wishes,
> Csaba Dezső
> 
