A new study from Stanford University has found that generative AI systems, including ChatGPT, still "hallucinate," or create false information, even after years of progress. Researchers say this problem continues because these systems are built to guess rather than admit when they do not know an answer.
The issue of hallucination remains one of the biggest challenges in generative AI. Experts warn that the tendency to invent facts could have serious consequences, especially as AI tools are increasingly used in sensitive areas such as healthcare, law, and education. Despite improvements in model accuracy, these systems can still produce misleading information with great confidence.
Why Do AI Models Hallucinate?
According to researchers at OpenAI, the company behind ChatGPT, the main problem lies in how AI systems are trained. Most models are trained and evaluated to maximize the number of correct answers, which rewards educated guessing over admitting uncertainty. The process is similar to a student attempting every question on a test instead of leaving some blank: a guess might earn points, while a blank answer never does.
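A minimal sketch of that test-taking incentive, using hypothetical numbers (the exam size and the chance that a blind guess is right are assumptions for illustration):

```python
# Toy illustration: under accuracy-only scoring, guessing never does worse than abstaining.
# num_questions and p_correct are assumed values, not figures from the study.
import random

random.seed(0)

def run_exam(num_questions=1_000, p_correct=0.25, strategy="guess"):
    """Score an exam where only correct answers earn points.

    strategy="guess":   answer every question, right with probability p_correct.
    strategy="abstain": say "I don't know" on every uncertain question.
    """
    score = 0
    for _ in range(num_questions):
        if strategy == "guess" and random.random() < p_correct:
            score += 1  # a lucky guess earns full credit
        # abstaining (or guessing wrong) earns nothing, but is never penalized
    return score

print("always guess  :", run_exam(strategy="guess"))    # roughly 250 points
print("always abstain:", run_exam(strategy="abstain"))  # 0 points
```

Because wrong answers cost nothing under this kind of scoring, the guessing strategy always looks at least as good as honest abstention.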
AI Training Can Make It Worse
Generative AI models learn by predicting the next word across massive text datasets. While some parts of these datasets follow predictable patterns, others are sparse, random, or incomplete. Hallucinations often occur when the model faces unclear or ambiguous questions, or questions about information that barely appears in its training data. Rather than saying it does not know, it fills the gap with a confident but sometimes false guess.
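A toy sketch of next-word prediction makes the mechanism concrete. The tiny corpus and the fallback behavior below are assumptions for illustration, not how ChatGPT is built, but they show how a predictor trained only to continue text will still emit something when the context is unfamiliar:

```python
# Toy next-word predictor built by counting bigrams in a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`; guess blindly if unseen."""
    if word in bigrams:
        return bigrams[word].most_common(1)[0][0]
    # The context never appeared in training: the model still emits something
    # plausible-looking instead of saying "I don't know".
    return max((c.most_common(1)[0] for c in bigrams.values()), key=lambda t: t[1])[0]

print(predict_next("the"))    # seen context: a reasonable continuation
print(predict_next("zebra"))  # unseen context: still answers, with unearned confidence
```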
Flaws in Evaluation Methods
The Stanford study also noted that current evaluation methods reward accuracy rather than honesty. Benchmarks typically score models on how many answers they get right, with no penalty for wrong answers and no credit for admitting uncertainty. This encourages AI systems to produce an answer, right or wrong, rather than say they do not know.
How It Could Be Fixed
Researchers suggest updating training and scoring systems to penalize confident mistakes and reward honest uncertainty. This approach, similar to grading systems that deduct marks for wrong answers, could help reduce hallucinations.
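A hedged sketch of what such a scoring rule could look like, using hypothetical penalty and credit values (the exact numbers are assumptions, not figures from the study):

```python
# Illustrative scoring rule: confident mistakes cost points, honest abstention earns a little.
def expected_score(p_correct, wrong_penalty=1.0, abstain_credit=0.25):
    """Expected score of answering with confidence p_correct vs. abstaining."""
    guess = p_correct * 1.0 + (1 - p_correct) * -wrong_penalty
    return guess, abstain_credit

for p in (0.9, 0.5, 0.2):
    guess, abstain = expected_score(p)
    better = "guess" if guess > abstain else "abstain"
    print(f"confidence {p:.0%}: guessing scores {guess:+.2f}, "
          f"abstaining scores {abstain:+.2f} -> better to {better}")
```

Under accuracy-only grading, answering is always the best move; once wrong answers carry a cost, the model is only rewarded for answering when it is reasonably confident, which is the behavior the researchers want to encourage.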
The study concludes that improving evaluation and training techniques will make generative AI more trustworthy and reliable as it continues to evolve.