Researchers are proposing a range of mitigation techniques to address "hallucination," the phenomenon in which artificial intelligence presents false or nonexistent information as fact.
A 2024 paper titled "A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models," co-authored by researcher S. M. Towhidul Islam Tonmoy of the Islamic University of Technology (IUT), surveys more than 32 techniques and emphasizes the importance of effective "prompting skills" for using AI wisely.

Approaches to reducing hallucinations fall into three main categories: refining the question itself, verifying output during and after generation, and improving how the model is trained.
First, user prompts must be clearer. This means providing AI with external materials or reference data first and including up-to-date facts in questions. Rather than throwing complex requests all at once, breaking them into stages proves effective. Instead of demanding lengthy reports or comprehensive analyses, dividing tasks into smaller units—such as "concept definition → case summary → comparative analysis"—reduces logical leaps.
Using specific terminology instead of vague expressions also helps. Defining parameters clearly, such as "papers published in international academic journals between 2023-2024" rather than "recent research," narrows the AI's response scope.
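The two prompt-refinement ideas above, staged decomposition and explicitly scoped parameters, can be sketched as a small prompt builder. This is an illustrative sketch, not from the paper; the topic, the `build_staged_prompts` helper, and the wording of the sub-prompts are all assumptions made for the example.

```python
# Sketch: break one broad analysis request into the staged sub-prompts
# described above (concept definition -> case summary -> comparative
# analysis), each carrying an explicit, narrow scope. Hypothetical helper.
def build_staged_prompts(topic: str, scope: str) -> list[str]:
    """Return one narrow prompt per stage instead of a single broad request."""
    return [
        f"Define the core concepts of {topic}. {scope}",
        f"Summarize two or three documented cases of {topic}. {scope}",
        f"Compare the cases above and note where they agree or conflict. {scope}",
    ]

# A concrete constraint replaces vague phrases like "recent research".
scope = "Limit sources to papers published in international journals in 2023-2024."
prompts = build_staged_prompts("LLM hallucination mitigation", scope)
for p in prompts:
    print(p)
```

Each sub-prompt would be sent to the model in turn, with earlier answers optionally pasted into later stages as context.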
Users can also impose honesty conditions on models directly. For example, stating explicitly: "Present only content with solid evidence and do not speculate." After responses are generated, users can add a self-fact-checking step by requesting: "Review whether your content is accurate."
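The honesty condition and the self-fact-checking step can be combined into a two-pass wrapper around any text-generation function. A minimal sketch follows; the `answer_with_self_check` wrapper and the stub model are assumptions made so the example runs without a real API, and a production version would call an actual LLM in place of the stub.

```python
# Two-pass honesty check: prepend an explicit evidence rule to the question,
# then send the draft answer back for self-review.
HONESTY_RULE = "Present only content with solid evidence and do not speculate."

def answer_with_self_check(model, question: str) -> dict:
    """model is any callable mapping a prompt string to a text response."""
    draft = model(f"{HONESTY_RULE}\n\nQuestion: {question}")
    review = model(
        "Review whether the following answer is accurate and flag any "
        f"unsupported claims:\n\n{draft}"
    )
    return {"draft": draft, "review": review}

# Stub model so the sketch is self-contained; a real LLM call goes here.
def stub_model(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"

result = answer_with_self_check(stub_model, "What causes LLM hallucinations?")
```

The design point is that the review pass sees only the draft answer, not the original question, which encourages checking the text as written rather than re-answering.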
Furthermore, instructing the model to check for internal contradictions proves helpful: asking it to confirm whether its claims conflict with one another or whether the figures it presents are consistent. Retrieval-Augmented Generation (RAG), which retrieves external documents and supplies them to the model alongside the question, guides the AI to ground its answers in actual sources rather than its own memory.
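The RAG idea can be illustrated with a deliberately tiny sketch: retrieve the most relevant document and prepend it to the prompt so the model is told to answer from the text rather than from memory. The retriever here uses plain word overlap purely to keep the example self-contained; real systems use embedding similarity, and the documents and prompt wording are assumptions.

```python
import string

def tokens(text: str) -> set[str]:
    """Lowercase word set with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def retrieve(query: str, documents: list[str]) -> str:
    """Naive retriever: pick the document sharing the most words with
    the query (stand-in for embedding-based similarity search)."""
    q = tokens(query)
    return max(documents, key=lambda d: len(q & tokens(d)))

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved source and instruct the model to stay inside it."""
    source = retrieve(query, documents)
    return (
        "Answer using only the source below; say so if it is insufficient.\n"
        f"Source: {source}\n"
        f"Question: {query}"
    )

docs = [
    "The 2024 survey catalogs mitigation techniques for LLM hallucination.",
    "Transformers use attention layers to process token sequences.",
]
prompt = build_grounded_prompt("Which techniques mitigate hallucination?", docs)
```

The instruction "say so if it is insufficient" matters: it gives the model a sanctioned way to decline instead of filling gaps from memory.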
At the model design level, responses can be restricted when uncertainty is high. Models are trained to say they don't know when they don't know: to decline to answer or to respond that additional information is needed, rather than fabricating plausible-sounding but nonexistent papers or facts.
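The abstention behavior described above reduces, at serving time, to a threshold rule. The sketch below is an assumption-laden stand-in: the confidence values are illustrative placeholders for real signals such as token log-probabilities or self-consistency votes, and the threshold of 0.7 is arbitrary.

```python
# Sketch of uncertainty-based abstention: return the answer only when a
# confidence score clears a threshold; otherwise decline. The score here
# is a placeholder for a real signal (e.g. log-probs, consistency votes).
REFUSAL = "I don't know; additional information is needed."

def answer_or_abstain(answer: str, confidence: float, threshold: float = 0.7) -> str:
    """Decline rather than guess when confidence is below the threshold."""
    return answer if confidence >= threshold else REFUSAL

print(answer_or_abstain("Paris is the capital of France.", 0.95))
print(answer_or_abstain("The paper was published in 1987.", 0.30))
```

In practice the hard part is calibrating the confidence signal so the threshold is meaningful; the thresholding itself is trivial.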
