[INDOLOGY] AI hallucinations

Mauricio Najarro mauricio.jose.najarro at gmail.com
Sun Sep 21 11:18:22 UTC 2025


Just in case people find it useful, here’s an important and well-known critique of LLMs, “On the Dangers of Stochastic Parrots” (Bender et al., FAccT 2021), from people working and thinking carefully about these systems: https://dl.acm.org/doi/10.1145/3442188.3445922

Mauricio 


> On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY <indology at list.indology.info> wrote:
> 
> 
> Csaba Dezso wrote:
> 
>> My question to the AI savvies among us would be: is confabulation / hallucination an integral and therefore essentially ineliminable feature of LLMs? 
> 
> I have extremely limited knowledge and experience of AI, but my understanding of LLMs is that they work by choosing the next most statistically likely word in their answer (again, I'm not exactly clear how they determine that), so their answers aren't based on any kind of reasoning (see the toy sketch below the quoted message).
> Harry Spier
> 
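To make the "next most statistically likely word" point above concrete, here is a minimal toy sketch in Python. The word-probability table is invented purely for demonstration; real LLMs operate on subword tokens with distributions learned from enormous corpora, but the selection step is the same in spirit, and nothing in it consults facts about the world.

    # Toy illustration of next-word selection. The probability table below is
    # invented for demonstration only; a real LLM learns its distributions
    # from huge corpora and works over subword tokens, not whole words.
    import random

    # For each context word, a made-up distribution over possible next words.
    NEXT_WORD_PROBS = {
        "the": {"manuscript": 0.5, "text": 0.3, "moon": 0.2},
        "manuscript": {"was": 0.6, "contains": 0.4},
        "was": {"copied": 0.5, "lost": 0.3, "forged": 0.2},
    }

    def pick_next(word, greedy=True):
        """Choose the next word given the previous one.

        Greedy selection always takes the single most probable word; sampling
        draws from the distribution, so different runs can give different
        continuations. Neither consults any facts about the world.
        """
        probs = NEXT_WORD_PROBS[word]
        if greedy:
            return max(probs, key=probs.get)              # most likely next word
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights)[0]  # sampled next word

    # Generate a short continuation starting from "the".
    word, sentence = "the", ["the"]
    while word in NEXT_WORD_PROBS:
        word = pick_next(word)
        sentence.append(word)
    print(" ".join(sentence))   # e.g. "the manuscript was copied"

Whether "the manuscript was copied" is actually true plays no role in the selection; only the probabilities do. That is the sense in which confabulation is built into plain next-word prediction, even if other techniques can reduce how often it surfaces.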