It's apparently a feature, not a bug, according to research from OpenAI:

"We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty..."


https://cdn.openai.com/pdf/d04913be-3f6f-4d2b-b283-ff432ef4aaa5/why-language-models-hallucinate.pdf
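To put the paper's incentive argument in concrete terms, here is a toy illustration (my own sketch, not taken from the paper): under a benchmark that awards 1 point for a correct answer and 0 points both for a wrong answer and for "I don't know", guessing always has an expected score at least as high as abstaining, so a model optimised for the score learns to guess rather than admit uncertainty.

def expected_score(p_correct, abstain):
    # Binary grading: 1 point for a correct answer, 0 for a wrong answer
    # or for declining to answer ("I don't know").
    if abstain:
        return 0.0
    # Expected score of guessing = probability the guess happens to be right.
    return p_correct

# Even a 10% chance of guessing correctly beats honest abstention under this metric.
print(expected_score(0.1, abstain=False))  # 0.1
print(expected_score(0.1, abstain=True))   # 0.0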


Best wishes,
Daniel

On 21 Sep 2025, at 07:57, Csaba Dezso via INDOLOGY <indology@list.indology.info> wrote:

Dear Colleagues,
Recently I have experimented with using ChatGPT as a research tool. You can follow our interaction here:


The general answers I got looked promising and often intriguing, but when it came to more precise references, I got mostly hallucinations. It even confabulated Sanskrit quotations. (ChatGPT did not notice the mistake I made in the question, writing Īśvarakṛṣṇa instead of Īśvaradatta. Claude did notice it, but then it also went on hallucinating.)
My question to the AI savvies among us would be: is confabulation / hallucination an integral and therefore essentially ineliminable feature of LLMs?

Best wishes,
Csaba Dezső

_______________________________________________
INDOLOGY mailing list
INDOLOGY@list.indology.info
https://list.indology.info/mailman/listinfo/indology