Dear Colleagues,

Recently I have experimented with using ChatGPT as a research tool. You can follow our interaction here:
The general answers I got looked promising and often intriguing, but when it came to more precise references, I got mostly hallucinations. It even confabulated Sanskrit quotations. (ChatGPT did not notice the mistake I made in the question, writing Īśvarakṛṣṇa instead of Īśvaradatta. Claude did notice it, but then it also went on hallucinating.)
My question to the AI-savvy among us would be: is confabulation / hallucination an integral and therefore essentially ineliminable feature of LLMs?
Best wishes,
Csaba Dezső