[INDOLOGY] AI hallucinations
Lyne Bansat-Boudon
Lyne.Bansat-Boudon at ephe.psl.eu
Sun Sep 21 17:30:29 UTC 2025
Another example found at random during a series of linked searches. I quote:
Aperçu IA [i.e. Google's "AI Overview"]
"Dhvanyaloka" se traduit en français par
"monde des significations implicites" ou "lumière de la suggestion"
‘Dhvanyaloka’ translates into English as ‘world of implied meanings’ or ‘light of suggestion’.
Automatism as an intellectual principle.
Best wishes,
Lyne
Lyne Bansat-Boudon
Director of Studies for the Religions of India
École pratique des hautes études, Section des sciences religieuses
Honorary Senior Member of the Institut universitaire de France
________________________________
From: INDOLOGY <indology-bounces at list.indology.info> on behalf of Antonia Ruppel via INDOLOGY <indology at list.indology.info>
Sent: Sunday, 21 September 2025 16:42
To: Madhav Deshpande <mmdesh at umich.edu>
Cc: Indology List <indology at list.indology.info>
Subject: Re: [INDOLOGY] AI hallucinations
I think the simple rule for using AI for knowledge purposes is: use it for grunt work in cases where it is easier for you to proofread the result than to do the work yourself. I've been using DeepSeek to generate running vocabulary commentaries (which still take a fair while to get from 75-80% correct to actually correct); friends of mine who write code say they find doing the work themselves a lot easier than asking AI to do it and then checking the result for the inevitable bugs.
AI is made to sound convincing; when you ask it about something where you don't know the answer, you have no way of knowing whether what it tells you is right or merely sounds right. It *is* good for brainstorming, provided you're looking for ideas and then intend to follow up on its answers, checking whether the references it gives (to articles, legal precedents, historical events or Pāṇinian rules) refer to things that actually exist.
And of course, the constant use of AI that its creators are pushing us towards consumes huge amounts of natural resources (such as drinking-quality water to cool the machinery) and requires more energy than can safely be generated if we are serious about preventing further climate change.
Antonia
On Sun, 21 Sept 2025 at 16:28, Madhav Deshpande via INDOLOGY <indology at list.indology.info> wrote:
Several times when I asked ChatGPT and other AI chatbots something about Pāṇini, they gave me rules that were irrelevant and had the wrong numbers. One cannot trust these chatbots for specifics.
Madhav M. Deshpande
Professor Emeritus, Sanskrit and Linguistics
University of Michigan, Ann Arbor, Michigan, USA
Senior Fellow, Oxford Center for Hindu Studies
Adjunct Professor, National Institute of Advanced Studies, Bangalore, India
[Residence: Campbell, California, USA]
On Sun, Sep 21, 2025 at 7:03 AM Harry Spier via INDOLOGY <indology at list.indology.info> wrote:
Thank you Claudius,
I've wondered whether, in addition to this statistical generation of the text, there is some kind of "algorithmic monitoring" to eliminate undesirable answers (undesirable for perhaps good reasons, or not-so-good ones).
For example, a few months ago, when AI was coming up on the list, I typed "what are the advantages of AI" into Google and got an AI-generated paragraph or two. But when I then typed in "What are the disadvantages of AI", I did not get any AI-generated answer. A few weeks later I repeated the experiment and the situation had changed: I got AI-generated answers in Google for both "What are the advantages of AI" and "What are the disadvantages of AI?".
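(Purely to make the speculation above concrete: a minimal sketch, in Python, of what such a post-generation filter could look like. Everything below is invented for illustration and says nothing about how Google actually decides when to show an AI overview.)

    # Hypothetical sketch of "algorithmic monitoring": a layer that decides
    # whether a generated answer may be shown at all. The suppression list
    # and the matching rule are invented for illustration.
    SUPPRESSED_QUERIES = {"what are the disadvantages of ai"}

    def moderate(query: str, generated_answer: str) -> str | None:
        """Return the answer, or None to suppress the AI overview entirely."""
        if query.strip().lower().rstrip("?") in SUPPRESSED_QUERIES:
            return None
        return generated_answer

    # The asymmetry described above then reduces to a list-membership test:
    print(moderate("What are the advantages of AI", "It automates grunt work."))
    print(moderate("What are the disadvantages of AI?", "It hallucinates."))  # None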
Harry Spier
On Sun, Sep 21, 2025 at 8:03 AM Claudius Teodorescu <claudius.teodorescu at gmail.com> wrote:
Dear Harry,
You gave an excellent description of how the text is generated. The probabilities for which word comes next are extracted from the input texts (so no syntactic or semantic rules, just statistics).
Besides these probabilities, there are also random number generators, which are used to introduce variation into the generated text.
So nothing new or creative can appear, only what was entered, and most of the time in a distorted form.
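(A minimal sketch, in Python, of the two ingredients just described: a table of next-word probabilities plus a random number generator to vary the output. The toy table stands in for the statistics a real model extracts from its input texts; all words and numbers below are invented.)

    import random

    # Toy next-word probabilities, standing in for the statistics an LLM
    # extracts from its input texts; the entries are invented.
    NEXT_WORD = {
        "dhvani": {"theory": 0.5, "means": 0.3, "suggests": 0.2},
        "theory": {"of": 0.7, "holds": 0.3},
    }

    def sample_next(word: str, rng: random.Random) -> str:
        """Pick the next word at random, weighted by its probability."""
        candidates = NEXT_WORD[word]
        return rng.choices(list(candidates), weights=list(candidates.values()))[0]

    rng = random.Random()              # the random number generator mentioned above
    print(sample_next("dhvani", rng))  # may differ from run to run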
Claudius Teodorescu
On Sun, 21 Sept 2025 at 14:19, Mauricio Najarro via INDOLOGY <indology at list.indology.info> wrote:
Just in case people find it useful, here’s an important and well-known critique of LLMs from people currently working and thinking carefully about all this: https://dl.acm.org/doi/10.1145/3442188.3445922
Mauricio
Sent from my iPhone
On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY <indology at list.indology.info> wrote:
Csaba Dezso wrote:
My question to the AI-savvy among us would be: is confabulation/hallucination an integral and therefore essentially ineliminable feature of LLMs?
I have extremely limited knowledge and experience of AI, but my understanding of LLMs is that they work by choosing the next most statistically likely word in their answer (again, I'm not exactly clear how they determine that), so their answers aren't based on any kind of reasoning.
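(In code, "choosing the next most statistically likely word" is simply taking the maximum over a probability table. A minimal sketch in Python; the table is invented, and a real LLM computes such a distribution with a neural network rather than a lookup.)

    # Toy sketch of greedy next-word selection: always take the single
    # most probable continuation. The probability table is invented.
    NEXT_WORD_PROBS = {"is": 0.40, "was": 0.35, "means": 0.25}

    def most_likely_next_word(probs: dict[str, float]) -> str:
        """Return the highest-probability continuation (greedy decoding)."""
        return max(probs, key=probs.get)

    print(most_likely_next_word(NEXT_WORD_PROBS))  # -> "is"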
Harry Spier
--
With regards,
Claudius Teodorescu