Upstox Originals

4 min read | Updated on November 06, 2025, 15:51 IST
SUMMARY
AI hallucinations are instances in which AI tools generate false information. It’s a growing problem, with companies like Deloitte and Google facing the fallout. These errors stem from a lack of proper fact-checking, misread prompts and other issues. Even as we rush to adopt AI, we need to guard against the reputational risks caused by AI falsehoods.

Even the most powerful GenAI models might ‘lie’ or hallucinate | Image courtesy: Shutterstock
Powerful GenAI tools like ChatGPT, Perplexity and others have flooded the market. They can write your emails and answer complex questions, among other tasks. But what if they started producing false information? Imagine the chaos that would follow.
These AI-generated falsehoods, also known as AI hallucinations, are a real problem today. No company or individual is immune, and recent events have put the spotlight on AI blunders.
Consider this. Global consulting giant Deloitte stirred up a controversy for allegedly using GenAI tools in a report for the Australian government. The report, released in July 2025, contained several errors, including references to individuals who did not exist and fabricated quotes. The report caused public embarrassment for Deloitte, and the firm has agreed to refund part of its AUD 440,000 (USD 290,000) fee to the Australian government.
Here's another shocker. The Chicago Sun-Times and the Philadelphia Inquirer took a reputational hit when they published a summer reading list in May 2025. The problem: some of the books on the list did not exist. Marco Buscaglia, the author of the special section, said he used AI to put the reading list together and failed to fact-check the output. The list featured real authors but attributed fake books to them.
Tech giants like Google have not been spared either. When Google launched its AI chatbot Bard (now Gemini) in 2023, the promotional video at the launch was itself a disaster. The chatbot incorrectly claimed that the James Webb Space Telescope took the first picture of an exoplanet. Astronomers pointed out that the first image of an exoplanet was taken in 2004 by the European Southern Observatory's Very Large Telescope in Chile. Alphabet's stock dropped by 8-9% following the blunder.
Even the legal world is not safe. Judges around the world are seeing legal briefs riddled with AI-generated errors. False quotes and fabricated cases are finding their way into the public domain because of AI hallucinations, and high-profile companies are also encountering problematic documents.
AI hallucinations are simply instances in which large language models (LLMs) like ChatGPT and Gemini generate false, misleading or inaccurate information.
Worryingly, hallucination rates are rising even as technology paves the way for newer and more advanced AI models.
Why does this happen? LLMs are trained on large amounts of human-generated and curated data. GenAI models like ChatGPT work by predicting the most probable next word, so they prioritise smooth, human-like text over factual accuracy.
AI models cannot think like humans. Their outputs are based on statistical patterns, not real-world understanding, which is why they sometimes produce incorrect or absurd results.
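To see why a purely statistical generator can sound confident and still be wrong, here is a deliberately tiny, made-up sketch in Python. The vocabulary and probabilities are invented for illustration and are not from any real model: the “model” simply picks whichever word most often followed similar text in its training data, with no step that checks whether the resulting sentence is true.

```python
# Toy, illustrative sketch (not a real LLM): the "model" only knows which
# word tends to follow the prompt, not whether the resulting claim is true.
import random

# Hypothetical next-word probabilities learned from text patterns (invented numbers)
next_word_probs = {
    "The first exoplanet image was taken by the": {
        "James": 0.6,      # fluent and plausible, but the wrong continuation
        "European": 0.3,   # the factually correct continuation
        "Hubble": 0.1,
    }
}

def generate_next_word(prompt: str) -> str:
    """Pick the next word purely from learned probabilities, with no fact-checking step."""
    probs = next_word_probs[prompt]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(generate_next_word("The first exoplanet image was taken by the"))
# Most runs print "James" (as in James Webb): a smooth but false continuation.
```

Real LLMs do this over enormous vocabularies and billions of parameters, but the core limitation is the same: fluency is rewarded directly, while truth is not.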
These misinterpretations occur for various reasons. For one, AI models struggle with cultural and emotional context. They often mistake local customs for universal norms because they lack lived experience. Here's an example. An AI chatbot is asked to interpret a dining scene in Spain where the diners leave a tip of 4% of the bill. Trained mostly on US data, the bot might read the tip as stingy and assume the diners are badly off, when the cultural reality is that tipping in much of Europe is smaller or less common. Similarly, a bot trained mainly on American English may misunderstand an Indian customer whose English usage differs.
This inability to grasp nuance leads to irrelevant results. When AI pulls data from external sources, it has no built-in way to check the facts. Slang and idioms are sometimes taken literally, producing incorrect results. And if the training data carried inherent biases, the AI tool will simply perpetuate them.
Caution is the name of the game. AI hallucinations pose risks to companies and individuals alike. Several technology giants have launched frameworks and official policies to safeguard against them, folding these efforts into their Responsible AI standards. Microsoft, Google and Salesforce are tackling the problem head-on, using techniques like Retrieval-Augmented Generation (RAG), which grounds a model's answers in retrieved documents, to improve accuracy. Google also fights hallucinations by curating high-quality training data, among other strategies.
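To make the RAG idea concrete, here is a minimal, hypothetical sketch in Python. The documents, function names and ranking logic are invented for illustration and do not represent Microsoft's, Google's or Salesforce's actual systems: the program first retrieves trusted passages relevant to the question, then builds a prompt that asks the model to answer only from those passages, so the answer can be traced back to a source.

```python
# Minimal sketch of the RAG idea: retrieve trusted passages first, then ask
# the model to answer only from them, so the output stays grounded in sources.

# A tiny, made-up "knowledge base" of trusted snippets
documents = [
    "The first image of an exoplanet was captured in 2004 by the European "
    "Southern Observatory's Very Large Telescope in Chile.",
    "The James Webb Space Telescope launched in December 2021.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query (stand-in for a real retriever)."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that instructs the model to answer only from the retrieved context."""
    context = "\n".join(retrieve(question, documents))
    return (
        f"Answer using ONLY the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("Which telescope took the first picture of an exoplanet?"))
```

Production systems replace the keyword matching with vector search and send the grounded prompt to an LLM, but the principle is the same: the model is steered toward checkable sources instead of its own guesses.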
So it is clear that AI may be fast, but it is not perfect. Even the most powerful GenAI models can ‘lie’ or hallucinate, as the blunders at Google and Deloitte show. Remember, AI is not always right. The next time you use ChatGPT or any other AI tool, verify the facts as best you can. AI can assist you, but never underestimate your own judgment.