Understanding AI Hallucinations and How to Detect Them
Explains what AI hallucinations are, why they happen, and how to detect them by asking the same question twice.
Hi, a quick video today in a series of videos covering terminology for artificial intelligence. When an AI chatbot gives us an answer to our question, it's important to understand that the answer is based on a few key things: the data the AI was trained on, its limitations when attempting to generate new information, assumptions made by the model, and in many cases a lack of common-sense reasoning. That last one's important because many new models are attempting to resolve this reasoning issue right now. When an AI gives us back a made-up response, we call that a hallucination.

Let me give you an example. What's the world record for crossing the English Channel on foot? In this case, I'm using a model from last year with limited training data, and the result is that this is extremely challenging due to the cold water and strong currents. Regardless, someone apparently did it in 1994 and it only took them 13 and a half hours. Of course, this is clearly incorrect. If I ask the same question of a more current reasoning model, though, you can see that some common sense has been applied. In this case, it's saying that you can't cross the English Channel on foot, and here are the ways that people typically get across the channel.

If you're chatting with an AI and you suspect that the answer might not be accurate, one of the best ways to check is just to ask the same question again. If the model is answering from reliable knowledge, you'll often get much the same result, but if it's hallucinating, you'll likely get a completely new answer.
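If you wanted to automate that "ask twice and compare" check, here's a minimal sketch of what it might look like in Python. It assumes access to an OpenAI-compatible chat API via the openai client; the model name (gpt-4o-mini), the use of difflib's SequenceMatcher as a crude similarity measure, and the 0.6 threshold are all illustrative assumptions, not anything shown in the video.

```python
# Minimal sketch of the "ask the same question twice" consistency check.
# Assumes the openai Python client and an OPENAI_API_KEY in the environment.
# The model name and similarity threshold are illustrative assumptions.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What's the world record for crossing the English Channel on foot?"


def ask(question: str) -> str:
    """Send the question to the model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


def consistency_check(question: str, threshold: float = 0.6) -> None:
    """Ask the same question twice and compare the two answers.

    Wildly different answers are a hint (not proof) of hallucination.
    """
    first = ask(question)
    second = ask(question)
    similarity = SequenceMatcher(None, first, second).ratio()
    print(f"Answer 1: {first}\n")
    print(f"Answer 2: {second}\n")
    print(f"Similarity: {similarity:.2f}")
    if similarity < threshold:
        print("Answers differ substantially -- treat the response with suspicion.")
    else:
        print("Answers are broadly consistent -- more likely, though not guaranteed, to be reliable.")


if __name__ == "__main__":
    consistency_check(QUESTION)
```

A crude string similarity like this only flags answers that differ a lot in wording; in practice you'd still want to read both responses yourself, since a model can hallucinate the same wrong fact twice.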