On 22 Sep 2025, at 6:38 AM, Claudius Teodorescu via INDOLOGY <indology@list.indology.info> wrote:
@Steven: I found an article about the mathematical limitations of large language models (LLMs), which are exactly as you described. If I find it again, I will add the link here.
There is another, interdisciplinary way of representing knowledge that is far more productive than LLMs, namely ontologies as they are understood in information science (see [1]). The input data are encoded in RDF (see [2]) as statements, each consisting of a subject, a predicate, and an object.
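To make the statement structure concrete, here is a minimal sketch in Python using the rdflib library (the namespace, class names, and labels below are placeholders I invented for illustration, not an established ontology):

    # Sketch only: encode a few (subject, predicate, object) statements
    # about Ayurvedic concepts as RDF triples with rdflib.
    # The ayur: namespace and its terms are made-up placeholders.
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS

    AYUR = Namespace("http://example.org/ayurveda/")  # placeholder namespace

    g = Graph()
    g.bind("ayur", AYUR)

    # Each add() is one statement: (subject, predicate, object).
    g.add((AYUR.Vata, RDF.type, AYUR.Dosha))            # "Vata is a Dosha"
    g.add((AYUR.Dosha, RDFS.subClassOf, AYUR.Concept))  # "Dosha is a kind of Concept"
    g.add((AYUR.Vata, RDFS.label, Literal("vāta", lang="sa")))

    print(g.serialize(format="turtle"))

The same pattern scales to any domain; the real work is designing the vocabulary of classes and properties, not writing the triples.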
Encoding knowledge like this is harder than simply pushing texts into an LLM, but what we obtain is a knowledge graph (see [3] for an example), with structured knowledge and limited real reasoning (see [4] on reasoning).
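Staying with the sketch above, here is one small example of what I mean by limited reasoning: a SPARQL property-path query over the same graph finds that Vata falls under ayur:Concept even though no single triple states that directly (a full ontology would use an RDFS/OWL reasoner instead; the terms are still my invented placeholders):

    # Follow rdf:type and then any chain of rdfs:subClassOf links.
    results = g.query("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX ayur: <http://example.org/ayurveda/>
        SELECT ?thing WHERE {
            ?thing a/rdfs:subClassOf* ayur:Concept .
        }
    """)
    for row in results:
        print(row.thing)  # prints http://example.org/ayurveda/Vata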
I would really like to see, for instance, Ayurvedic knowledge encoded in this way, with all its concepts (dosha, dushya, etc.) explained, referenced, and linked to one another, or a Sanskrit lexicon with all attested simple and compound Sanskrit words (with references, grammatical information, etc., for each of them).
Claudius Teodorescu
On Mon, 22 Sept 2025 at 03:23, Lindquist, Steven via INDOLOGY <indology@list.indology.info> wrote:
Here is a tidbit from a friend of mine who works in AI, from a somewhat recent conversation in which I asked him why hallucination hasn't been removed or corrected (my answer is refracted through my own understanding of his more technical answer, so take it with a grain of salt).
It can’t. Not fully. Calling bad outcomes “hallucinations” is misleading, because it suggests such results are an aberration or an internal mistake (I would extend this and say any form of anthropomorphizing masks what actually occurs). They are not mistakes. They are an undesirable outcome, but one that is an unavoidable product of the “generative” algorithms. The capacity that allows such algorithms to “generate” also allows them to generate misleading or incorrect information; if you strangle that, “AI” is no longer “intelligence” (by their terms; I wouldn’t call it intelligence at all). AI can’t “judge” or “evaluate” or “think”; it can only execute algorithms over larger and larger data sets. To put it another way: “hallucinations” are neither a feature nor a bug; they are part of the structure.

Techs can add layers of correctives, and more data helps (though it also creates its own problems), but my friend said they’ll never eliminate hallucinations without rebuilding from the ground up and rethinking what they take to be “intelligence.” The other, rather terrifying, problem he mentioned is that AI models have become so complex that “no one really knows what is happening internally,” so correctives are hit or miss and these problems are likely only to get worse. In this way, we’re the guinea pigs both for building these models up with our data and for surfacing the problems (with sometimes tragic outcomes, if you’ve followed recent news, particularly around ChatGPT adding “personality traits”).
--
STEVEN E. LINDQUIST, PH.D.
ALTSHULER DISTINGUISHED TEACHING PROFESSOR
ASSOCIATE PROFESSOR, RELIGIOUS STUDIES
DIRECTOR, ASIAN STUDIES
https://sunypress.edu/Books/T/The-Literary-Life-of-Yajnavalkya
____________________
Dedman College of Humanities and Sciences, SMU
PO Box 750202 | Dallas | TX | 75275-0202
Email: slindqui@smu.edu
Web: http://people.smu.edu/slindqui
--
With regards,
Claudius Teodorescu
_______________________________________________
INDOLOGY mailing list
INDOLOGY@list.indology.info
https://list.indology.info/mailman/listinfo/indology