[INDOLOGY] AI hallucinations

Csaba Dezso csaba_dezso at yahoo.co.uk
Sun Sep 21 06:57:40 UTC 2025


Dear Colleagues,
I have recently been experimenting with ChatGPT as a research tool. You can follow our interaction here:

https://chatgpt.com/share/68cefb37-52a4-800e-9da0-9960fbe2d5ad

The general answers I got looked promising and often intriguing, but when it came to more precise references, I got mostly hallucinations. It even confabulated Sanskrit quotations. (ChatGPT did not notice the mistake I made in the question, writing Īśvarakṛṣṇa instead of Īśvaradatta. Claude did notice it, but then it also went on hallucinating.)
My question to the AI-savvy among us would be: is confabulation / hallucination an integral and therefore essentially ineliminable feature of LLMs?

Best wishes,
Csaba Dezső