[INDOLOGY] AI hallucinations

Claudius Teodorescu claudius.teodorescu at gmail.com
Mon Sep 22 04:38:55 UTC 2025


@Steven: I found an article about the mathematical limitations of large
language models (LLMs), which are exactly as you described. If I find it
again, I will add a link here.

There is another, interdisciplinary way of representing knowledge that is
far more productive than LLMs, namely ontologies as they are understood in
information science (see [1]). The input data are encoded with RDF (see [2])
as statements, each consisting of a subject, a predicate, and an object.
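
As a minimal sketch (assuming the Python rdflib library; the example.org
namespace, the class Treatise, and the property attributedTo are invented
here, not an existing vocabulary), such statements look like this:

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/indology/")

    g = Graph()
    # One statement = one (subject, predicate, object) triple.
    g.add((EX.Dhvanyaloka, RDF.type, EX.Treatise))
    g.add((EX.Dhvanyaloka, EX.attributedTo, EX.Anandavardhana))
    g.add((EX.Dhvanyaloka, RDFS.label, Literal("Dhvanyāloka", lang="sa")))

    # Serialise the statements in Turtle syntax.
    print(g.serialize(format="turtle"))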

Encoding knowledge like this is harder than simply pushing texts into an
LLM, but what we obtain is a knowledge graph (see [3] for an example), with
structured knowledge and a limited but real form of reasoning (see [4]).
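
To make the reasoning point concrete, here is a small sketch assuming the
owlrl Python library, which materialises RDFS/OWL-RL entailments over an
rdflib graph (the class hierarchy is again invented for illustration):

    from rdflib import Graph, Namespace, RDF, RDFS
    from owlrl import DeductiveClosure, RDFS_Semantics

    EX = Namespace("http://example.org/indology/")

    g = Graph()
    g.add((EX.Dhvanyaloka, RDF.type, EX.Treatise))
    g.add((EX.Treatise, RDFS.subClassOf, EX.Work))

    # Materialise the RDFS entailments; the reasoner adds the inferred
    # triple (EX.Dhvanyaloka, RDF.type, EX.Work) to the graph.
    DeductiveClosure(RDFS_Semantics).expand(g)

    print((EX.Dhvanyaloka, RDF.type, EX.Work) in g)  # True

An OWL ontology [1] allows richer modelling than plain RDFS (class
disjointness, property characteristics, and so on), at the cost of more
careful design.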

I would really like to see, for instance, Ayurvedic knowledge encoded in
this way, with all its concepts (dosha, dushya, etc.) explained, referenced,
and linked to one another, or a Sanskrit lexicon covering all attested
simple and compound Sanskrit words (with references, grammatical
information, etc., for each of them).
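
For the Ayurvedic example, a purely hypothetical sketch (the ayurveda
namespace and concept names are my own invention, and SKOS is only one
possible linking vocabulary) of how the three dosha concepts could be
recorded and then queried with SPARQL:

    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import SKOS

    AYU = Namespace("http://example.org/ayurveda/")

    g = Graph()
    for name in ("Vata", "Pitta", "Kapha"):
        concept = AYU[name]
        g.add((concept, RDF.type, AYU.Dosha))
        g.add((concept, SKOS.prefLabel, Literal(name.lower(), lang="sa")))
        g.add((concept, SKOS.broader, AYU.Tridosha))

    # List every concept recorded as a dosha.
    query = """
        PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        PREFIX ayu: <http://example.org/ayurveda/>
        SELECT ?concept WHERE { ?concept rdf:type ayu:Dosha }
    """
    for row in g.query(query):
        print(row.concept)

A Sanskrit lexicon could be modelled along the same lines, with one resource
per headword and its grammatical information and references added as further
statements.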

Claudius Teodorescu

[1] https://www.w3.org/OWL/
[2] https://www.w3.org/2001/sw/wiki/RDF
[3] https://www.nfdi4objects.net/en/portal/services/nfdi4objects-graph/
[4] https://info.stardog.com/hubfs/Stardog%20Academy%20-%20Stage%203%20Fundamentals_Slide%20Decks/Video%206_%20Reasoning.pdf


On Mon, 22 Sept 2025 at 03:23, Lindquist, Steven via INDOLOGY <
indology at list.indology.info> wrote:

> Here is a tidbit from a friend of mine who works in AI from a somewhat
> recent conversation when I asked him about why hallucination hasn’t been
> removed/corrected (my answer is refracted through my own understanding of
> his more technical answer, so take with a grain of salt).
>
>
>
> It can’t. Not fully. Calling bad outcomes “hallucinations” is
> misleading, because it suggests such results are an aberration or an
> internal mistake (I would extend this and say any form of
> anthropomorphizing masks what actually occurs). They are not mistakes. They
> are an undesirable outcome, but one that is an unavoidable product of the
> “generative” algorithms. The capacity that allows such algorithms to
> “generate” also allows them to generate misleading or incorrect
> information; if you strangle that, “AI” is no longer “intelligence” (by
> their terms; I wouldn’t call it intelligence at all). AI can’t “judge” or
> “evaluate” or “think;” it can only execute algorithms based on larger and
> larger data sets.  To put it another way: “hallucinations” are not a
> feature or a bug; they are part of the structure. Techs can add layers of
> correctives and more data helps (it also creates its own problem), but my
> friend said they’ll never eliminate them without wholesale rebuilding from
> the ground up and rethinking what they take to be “intelligence.”  Of
> course, the other—rather terrifying—problem that he mentioned was that AI
> models have become so complex, “no one really knows what is happening
> internally” so correctives are hit or miss and these problems are likely to
> only get worse. In this way, we’re the guinea pigs for both building them
> up with our data and surfacing the problems (with sometimes tragic
> outcomes, if you’ve followed recent news, particularly with ChatGPT adding
> “personality traits”).
>
>
>
> --
>
> STEVEN E. LINDQUIST, PH.D.
> ALTSHULER DISTINGUISHED TEACHING PROFESSOR
>
> ASSOCIATE PROFESSOR, RELIGIOUS STUDIES
>
> DIRECTOR, ASIAN STUDIES
>
> https://sunypress.edu/Books/T/The-Literary-Life-of-Yajnavalkya
>
>
>
> ____________________
>
>
>
> Dedman College of Humanities and Sciences, SMU
> PO Box 750202 | Dallas | TX | 75275-0202
> Email: slindqui at smu.edu
> Web: http://people.smu.edu/slindqui <http://faculty.smu.edu/slindqui>
>
>
>
>
>
>
>
> From: INDOLOGY <indology-bounces at list.indology.info> on behalf of Lyne
> Bansat-Boudon via INDOLOGY <indology at list.indology.info>
> Date: Sunday, September 21, 2025 at 12:30 PM
> To: Madhav Deshpande <mmdesh at umich.edu>, Antonia Ruppel <
> rhododaktylos at gmail.com>
> Cc: Indology List <indology at list.indology.info>
> Subject: Re: [INDOLOGY] AI hallucinations
>
> Another example found at random during a series of linked searches. I
> quote:
>
>
>
> Aperçu IA [AI Overview]
>
>
> "Dhvanyaloka" se traduit en français par
>
> "monde des significations implicites" ou "lumière de la suggestion"
>
>
>
> ‘Dhvanyaloka’ translates into English as ‘world of implied meanings’ or
> ‘light of suggestion’.
>
>
>
> Automatism as an intellectual principle.
>
>
>
> Best wishes,
>
>
>
> Lyne
>
>
>
> Lyne Bansat-Boudon
>
> Director of Studies for the Religions of India
>
> Ecole pratique des hautes études, section des sciences religieuses
>
> Honorary Senior Member of the Institut universitaire de France
> ------------------------------
>
> From: INDOLOGY <indology-bounces at list.indology.info> on behalf of
> Antonia Ruppel via INDOLOGY <indology at list.indology.info>
> Sent: Sunday, 21 September 2025 16:42
> To: Madhav Deshpande <mmdesh at umich.edu>
> Cc: Indology List <indology at list.indology.info>
> Subject: Re: [INDOLOGY] AI hallucinations
>
>
>
> I think the simple rule for using AI for knowledge purposes is: use it to
> do grunt work in cases where it is easier for you to proof the result than
> to do the work yourself. I've been using DeepSeek to generate running vocab
> commentaries (which then still take a fair while to get from being 75-80% correct to
> being actually correct); friends of mine who write code say that they find
> doing this themselves a lot easier than asking AI to do it and then
> checking the result for the inevitable bugs.
>
>
>
> AI is made to sound convincing; when you ask it about something where you
> don't know the answer, you have no way of knowing whether what it tells you
> is right or just sounds right. It *is* good for brainstorming if you're
> looking for ideas and then intend to follow up on the answers it gives you
> to check whether any of the references (to articles, legal precedents,
> historical events or Pāṇinian rules) refer to things that actually exist.
>
>
>
> And of course, the constant use of AI that its creators are trying to push
> us towards uses up huge amounts of natural resources (such as
> drinking-quality water to cool the machinery) and requires the generation
> of larger amounts of energy than can be safely generated if we are serious
> about wanting to prevent further climate change.
>
>
>
> Antonia
>
>
>
> On Sun, 21 Sept 2025 at 16:28, Madhav Deshpande via INDOLOGY <
> indology at list.indology.info> wrote:
>
> Several times when I asked ChatGPT and other AI chatbots something about
> Panini, they gave me rules that were irrelevant or had the wrong numbers.
> These chatbots cannot be trusted for specifics.
>
>
>
> Madhav M. Deshpande
>
> Professor Emeritus, Sanskrit and Linguistics
>
> University of Michigan, Ann Arbor, Michigan, USA
>
> Senior Fellow, Oxford Center for Hindu Studies
>
> Adjunct Professor, National Institute of Advanced Studies, Bangalore, India
>
>
>
> [Residence: Campbell, California, USA]
>
>
>
>
>
> On Sun, Sep 21, 2025 at 7:03 AM Harry Spier via INDOLOGY <
> indology at list.indology.info> wrote:
>
> Thank you Claudius,
>
> I've wondered if, in addition to this statistical generation of the text,
> there was some kind of "algorithmic monitoring" to eliminate undesirable
> answers (undesirable for perhaps good reasons or not so good ones).
>
>
>
> For example, a few months ago, when AI was coming up on the list, I typed
> "What are the advantages of AI?" into Google and got an AI-generated
> paragraph or two. But when I then typed "What are the disadvantages of
> AI?" into Google, I did not get any AI-generated answer. A few weeks later
> I repeated the experiment and the situation had changed: I got AI-generated
> answers in Google for both "What are the advantages of AI?" and "What are
> the disadvantages of AI?".
>
>
>
> Harry Spier
>
>
>
>
>
>
>
> On Sun, Sep 21, 2025 at 8:03 AM Claudius Teodorescu <
> claudius.teodorescu at gmail.com> wrote:
>
> Dear Harry,
>
>
>
> You gave an excellent definition of how the text is generated. The
> probabilities for what word comes next are extracted from the input texts
> (so no syntactic or semantic rules, just statistics).
>
>
>
> Besides these probabilities, there are also random number generators,
> which are used to introduce variation into the generated text.
>
>
>
> So nothing new or creative can appear, only what was entered, and most of
> the time in a distorted form.
>
>
>
> Claudius Teodorescu
>
>
>
> On Sun, 21 Sept 2025 at 14:19, Mauricio Najarro via INDOLOGY <
> indology at list.indology.info> wrote:
>
> Just in case people find it useful, here’s an important and well-known
> critique of LLMs from people currently working and thinking carefully about
> all this: https://dl.acm.org/doi/10.1145/3442188.3445922
>
>
>
> Mauricio
>
>
>
> Sent from my iPhone
>
>
>
> On Sep 21, 2025, at 11:47 AM, Harry Spier via INDOLOGY <
> indology at list.indology.info> wrote:
>
>
>
> Csaba Dezso wrote:
>
>
>
> My question to the AI savvies among us would be: is confabulation /
> hallucination an integral and therefore essentially ineliminable feature of
> LLMs?
>
>
>
> I have extremely limited knowledge and experience of AI, but my
> understanding of LLMs is that they work by choosing the next most
> statistically likely word in their answer (again, I'm not exactly clear
> how they determine that), so their answers aren't based on any kind of
> reasoning.
>
> Harry Spier
>
>
> _______________________________________________
> INDOLOGY mailing list
> INDOLOGY at list.indology.info
> https://list.indology.info/mailman/listinfo/indology
>
>
>
>
>
>
> --
>
> Kind regards,
>
> Claudius Teodorescu
>
>
>
>
>
>
>


-- 
Kind regards,
Claudius Teodorescu

