[INDOLOGY] AI hallucinations
Madhav Deshpande
mmdesh at umich.edu
Sun Sep 21 14:27:32 UTC 2025
Several times when I asked ChatGPT and other AI chatbots something about
Panini, they gave me rules that were irrelevant and carried wrong numbers.
One cannot trust these chatbots for specifics.
Madhav M. Deshpande
Professor Emeritus, Sanskrit and Linguistics
University of Michigan, Ann Arbor, Michigan, USA
Senior Fellow, Oxford Center for Hindu Studies
Adjunct Professor, National Institute of Advanced Studies, Bangalore, India
[Residence: Campbell, California, USA]
On Sun, Sep 21, 2025 at 7:03 AM Harry Spier via INDOLOGY <
indology at list.indology.info> wrote:
> Thank you Claudius,
> I've wondered whether, in addition to this statistical generation of the
> text, there is some kind of "algorithmic monitoring" to eliminate
> undesirable answers (undesirable for perhaps good reasons, or not-so-good
> ones).
>
> For example, a few months ago, when AI was coming up on the list, I typed
> "what are the advantages of AI" into Google and got an AI-generated
> paragraph or two. But when I then typed "What are the disadvantages of
> AI" into Google, I did not get any AI-generated answer. A few weeks later
> I repeated the experiment and the situation had changed: I got AI-generated
> answers in Google for both "What are the advantages of AI" and "What are
> the disadvantages of AI?".
>
> Harry Spier
>
>
>
> On Sun, Sep 21, 2025 at 8:03 AM Claudius Teodorescu <
> claudius.teodorescu at gmail.com> wrote:
>
>> Dear Harry,
>>
>> You gave an excellent description of how the text is generated. The
>> probabilities for which word comes next are extracted from the input texts
>> (so no syntactic or semantic rules, just statistics).
>>
>> Besides these probabilities, there are also random number generators,
>> which are used to introduce variation into the generated text.
>>
>> So, nothing new or creative can appear, only what was entered, and most
>> of the time in a distorted form.
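>>
>> As a rough illustration of the sampling step described above (the words,
>> probabilities, and temperature value below are invented for illustration,
>> not taken from any real model), here is a minimal Python sketch: each
>> candidate next word gets a probability, and a random draw, reshaped by a
>> "temperature" setting, picks one, which is why the same prompt can yield
>> different, and sometimes distorted, answers.
>>
>> import math
>> import random
>>
>> # Toy next-word distribution for some prefix; the words and numbers
>> # are invented for illustration only.
>> next_word_probs = {"rule": 0.6, "sutra": 0.3, "chapter": 0.1}
>>
>> def sample_next_word(probs, temperature=1.0):
>>     """Draw one word from a next-word probability distribution.
>>
>>     temperature < 1 sharpens the distribution (more predictable output);
>>     temperature > 1 flattens it (more varied, more error-prone output).
>>     """
>>     words = list(probs)
>>     # Rescale log-probabilities by the temperature, then renormalise.
>>     scaled = [math.exp(math.log(probs[w]) / temperature) for w in words]
>>     total = sum(scaled)
>>     weights = [s / total for s in scaled]
>>     # This random draw is the "random number generator" mentioned above.
>>     return random.choices(words, weights=weights, k=1)[0]
>>
>> print(sample_next_word(next_word_probs, temperature=0.7))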
>>
>> Claudius Teodorescu
>>
>> On Sun, 21 Sept 2025 at 14:19, Mauricio Najarro via INDOLOGY <
>> indology at list.indology.info> wrote:
>>
>>> Just in case people find it useful, here’s an important and well-known
>>> critique of LLMs from people currently working and thinking carefully about
>>> all this: https://dl.acm.org/doi/10.1145/3442188.3445922
>>>
>>> Mauricio
>>>
>>> Sent from my iPhone
>>>
>>> On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY <
>>> indology at list.indology.info> wrote:
>>>
>>>
>>> Csaba Dezso wrote:
>>>
>>> My question to the AI savvies among us would be: is confabulation /
>>>> hallucination an integral and therefore essentially ineliminable feature of
>>>> LLM?
>>>>
>>>
>>> I have only very limited knowledge and experience of AI, but my
>>> understanding of LLMs is that they work by choosing the next most
>>> statistically likely word in their answer (again, I'm not exactly clear
>>> how they determine that), so their answers aren't based on any kind of
>>> reasoning.
>>> Harry Spier
>>>
>>> _______________________________________________
>>> INDOLOGY mailing list
>>> INDOLOGY at list.indology.info
>>> https://list.indology.info/mailman/listinfo/indology
>>>
>>
>>
>> --
>> Cu stimă,
>> Claudius Teodorescu
>>
>
> _______________________________________________
> INDOLOGY mailing list
> INDOLOGY at list.indology.info
> https://list.indology.info/mailman/listinfo/indology
>