[INDOLOGY] AI hallucinations

Harry Spier vasishtha.spier at gmail.com
Sun Sep 21 10:46:02 UTC 2025


Csaba Dezso wrote:

> My question to the AI savvies among us would be: is confabulation /
> hallucination an integral and therefore essentially ineliminable feature of
> LLMs?

I have extremely limited knowledge and experience of AI, but my
understanding of LLMs is that they work by choosing the next most
statistically likely word in their answer (again, I'm not exactly clear how
they determine that), so their answers aren't based on any kind of
reasoning.
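
For what it's worth, here is a minimal sketch in Python of that "next most
likely word" mechanism. The bigram probability table and the generate()
function are purely illustrative inventions of mine; a real LLM derives such
probabilities from a neural network conditioned on the entire preceding
text, not from a lookup table.

# Toy illustration of greedy next-word selection. The probability
# table is made up for demonstration; a real LLM computes these
# probabilities with a neural network over the whole preceding context.
bigram_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "text": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"on": 0.9, "down": 0.1},
    "on":  {"the": 0.8, "a": 0.2},
}

def generate(word, max_tokens=5):
    """Repeatedly append the most probable next word (greedy decoding)."""
    words = [word]
    for _ in range(max_tokens):
        choices = bigram_probs.get(words[-1])
        if not choices:
            break
        # "Most statistically likely next word": argmax over probabilities.
        words.append(max(choices, key=choices.get))
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat"

The point of the sketch is that each step is a purely statistical choice:
nothing in the procedure checks whether the resulting sentence is true,
which is one intuition for why confabulation is hard to eliminate.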
Harry Spier