On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY <indology@list.indology.info> wrote:
Csaba Dezso wrote:

My question to the AI savvies among us would be: is confabulation / hallucination an integral and therefore essentially ineliminable feature of LLMs?

I have an extremely limited knowledge and experience of AI, but my understanding of LLMs is that they work by choosing the next most statistically likely word in their answer (again, I'm not exactly clear how they determine that), so their answers aren't based on any kind of reasoning.

Harry Spier
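P.S. For anyone curious what "choosing the next most statistically likely word" looks like in practice, here is a minimal, purely illustrative sketch in Python. The vocabulary and probabilities are invented for the example; a real LLM computes a probability distribution over tens of thousands of tokens with a neural network conditioned on the whole preceding text.

    import random

    # Toy next-token distribution. In a real LLM these probabilities come
    # from a neural network, not a fixed table like this one.
    next_token_probs = {
        "the Rigveda": 0.40,
        "the Mahabharata": 0.30,
        "a later commentary": 0.20,
        "an invented title": 0.10,  # fluent but false -> a "hallucination"
    }

    def pick_next_token(probs):
        """Sample one token in proportion to its probability."""
        tokens = list(probs.keys())
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    # The model simply appends whichever token it samples; nothing in the
    # procedure checks the output against facts, which is one way fluent
    # but wrong text can emerge.
    print("This verse is quoted from", pick_next_token(next_token_probs))

The point of the sketch is only that generation is sampling from a learned probability table, step by step, with no separate fact-checking stage.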
_______________________________________________
INDOLOGY mailing list
INDOLOGY@list.indology.info
https://list.indology.info/mailman/listinfo/indology