AI hallucinations are getting a lot of attention lately, and rightly so. They are the outliers in AI behaviour that do serious damage to customer experience. In fact, 70% of consumers say they would leave a brand after just one bad AI experience. One. AI is a high-stakes game.
First, what is an “AI hallucination”? IBM offers a good definition:
“AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.”
So, why do they happen? There are three main reasons:
Now, I didn’t say much about point #1. You didn’t think this blog was going to ignore data, did you? Of course not! And I chose my words carefully: “wrong” data isn’t just bad data. That distinction is the point, and also the problem. Plenty of posts and articles focus on improving data quality to improve AI and reduce hallucinations. That’s true as far as it goes, but it’s only part of the problem. AI needs the right data.
The right data for AI includes:
To make your data “right” for AI, some important data management functions need to improve drastically.
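What does “right” look like in practice? As a rough illustration only (not a QSG implementation), here is a minimal Python sketch of the kind of gate a data management process might apply before records ever reach an AI knowledge base. The record fields, the 90-day freshness window, and the is_ai_ready helper are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class KnowledgeRecord:
    """Hypothetical record headed for an AI knowledge base."""
    source: str              # system of record the fact came from
    content: str             # text the model will ground its answers on
    last_verified: datetime  # when the fact was last confirmed accurate

def is_ai_ready(record: KnowledgeRecord, max_age_days: int = 90) -> bool:
    """Reject records that are empty, unattributed, or stale before ingestion."""
    if not record.content.strip():
        return False  # nothing for the model to ground on
    if not record.source:
        return False  # unattributable "facts" invite hallucination
    if datetime.now() - record.last_verified > timedelta(days=max_age_days):
        return False  # stale data is "wrong" data, even if it was once accurate
    return True

# Only complete, attributed, recently verified records make it through.
records = [
    KnowledgeRecord("product_catalog", "Model X ships in 3-5 business days.",
                    datetime.now() - timedelta(days=10)),
    KnowledgeRecord("", "Model X ships overnight.",
                    datetime.now() - timedelta(days=400)),
]
clean = [r for r in records if is_ai_ready(r)]
print(len(clean))  # 1
```

The specific checks matter less than the principle: attribution, freshness, and completeness get enforced upstream of the model, not discovered downstream in a customer conversation.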
I think data is the most important factor in AI hallucinations. Hallucinations have to be caught in QA and prevented from ever reaching production; the negative impact on customer experience is simply too high. And I believe every single one of them can be stamped out with a modern approach to data management.
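To make the QA point concrete, here is a deliberately naive sketch (my illustration, not a production technique) of a grounding check that flags answer sentences poorly supported by the source material the AI was given. The unsupported_sentences function, the word-overlap heuristic, and the 0.5 threshold are assumptions for illustration only.

```python
def unsupported_sentences(answer: str, source_text: str,
                          overlap_threshold: float = 0.5) -> list[str]:
    """Flag answer sentences whose content words barely appear in the source."""
    source_words = set(source_text.lower().split())
    flagged = []
    for sentence in answer.split("."):
        words = [w for w in sentence.lower().split() if len(w) > 3]
        if not words:
            continue
        # Fraction of this sentence's content words that appear in the source.
        overlap = sum(w in source_words for w in words) / len(words)
        if overlap < overlap_threshold:
            flagged.append(sentence.strip())
    return flagged

# The refund claim never appears in the source, so QA flags it.
source = "Orders ship within 5 business days. Returns are accepted for 30 days."
answer = ("Orders ship within 5 business days. "
          "We also offer lifetime refunds on every purchase.")
print(unsupported_sentences(answer, source))
```

A real QA pipeline would use far more robust methods than word overlap, but the principle stands: compare what the AI says against the data it was supposed to say it from, and block anything that isn’t grounded before a customer ever sees it.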