[INDOLOGY] AI hallucinations
Eli Franco
franco at uni-leipzig.de
Sun Sep 21 18:45:03 UTC 2025
Also of interest is James Gleick's "The Parrot in the Machine", New
York Review of Books, July 24, 2025.
Incidentally, and unrelatedly, the same issue contains a heartbreaking
report by David Shulman on the current situation in the West Bank.
Shana Tova!
Eli
Quoting Mauricio Najarro via INDOLOGY <indology at list.indology.info>:
> Just in case people find it useful, here’s an important and
> well-known critique of LLMs from people currently working and
> thinking carefully about all this:
> https://dl.acm.org/doi/10.1145/3442188.3445922
>
> Mauricio
>
>> On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY
>> <indology at list.indology.info> wrote:
>>
>>
>> Csaba Dezso wrote:
>>
>>> My question to the AI savvies among us would be: is confabulation
>>> / hallucination an integral and therefore essentially ineliminable
>>> feature of LLMs?
>>
>> I have extremely limited knowledge and experience of AI, but my
>> understanding of LLMs is that they work by choosing the next most
>> statistically likely word in their answer (again, I'm not exactly
>> clear how they determine that), so their answers aren't based on
>> any kind of reasoning. [A toy sketch of this next-word step follows
>> the quoted thread below.]
>> Harry Spier
>>
>> _______________________________________________
>> INDOLOGY mailing list
>> INDOLOGY at list.indology.info
>> https://list.indology.info/mailman/listinfo/indology
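
To make Harry's point concrete, here is a toy sketch, in Python, of the
next-word mechanism he describes. It is not how any real LLM is
implemented -- a real model computes the probabilities with a large
neural network over a vocabulary of tens of thousands of tokens,
conditioned on the whole preceding context -- but the generation loop is
structurally the same: draw a next word at random, weighted by estimated
probability, append it, repeat. The bigram table NEXT_WORD_PROBS and the
helper names sample_next and generate are invented purely for
illustration.

import random

# Toy bigram "model": for each word, the estimated probabilities of the
# word that follows it. A real LLM computes such a distribution with a
# neural network; this hand-made table merely stands in for it.
NEXT_WORD_PROBS = {
    "<start>":   {"the": 0.6, "a": 0.4},
    "the":       {"parrot": 0.5, "machine": 0.3, "text": 0.2},
    "a":         {"parrot": 0.7, "machine": 0.3},
    "parrot":    {"repeats": 0.8, "speaks": 0.2},
    "machine":   {"repeats": 0.6, "speaks": 0.4},
    "text":      {"repeats": 1.0},
    "repeats":   {"plausibly": 0.5, "<end>": 0.5},
    "speaks":    {"plausibly": 0.4, "<end>": 0.6},
    "plausibly": {"<end>": 1.0},
}

def sample_next(word):
    """Pick the next word at random, weighted by estimated probability."""
    options = NEXT_WORD_PROBS[word]
    return random.choices(list(options), weights=list(options.values()), k=1)[0]

def generate():
    """Emit words one at a time until "<end>" is drawn. Nothing here
    consults facts or checks truth; fluency comes entirely from the
    probabilities."""
    word, out = "<start>", []
    while True:
        word = sample_next(word)
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate())   # e.g. "the parrot repeats plausibly"

The loop never asks whether its output is true, only whether it is
probable, which is one reason many researchers regard confabulation as a
natural failure mode of the approach rather than a removable bug.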
--
Prof. Dr. Eli Franco
Hegergasse 8/15
Wien 1030
Austria