When people say AI “hallucinates,” that’s not really true.
LLMs don’t see or imagine. They predict text.
When they answer with something that seems totally made up, they're not hallucinating. They're doing exactly what they're built to do: picking the most likely next word, then the next, and so on.
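To make the "most likely next word" idea concrete, here's a minimal toy sketch in Python. The probability table and helper functions are invented purely for illustration; a real LLM learns probabilities over a huge vocabulary and long contexts, but the generation loop is the same idea: predict, append, repeat.

```python
# Toy sketch of greedy next-word prediction (NOT a real language model).
# The probability table below is made up for illustration only.
TOY_PROBS = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"france": 0.6, "spain": 0.3, "mars": 0.1},
    ("of", "france"): {"is": 0.95, "was": 0.05},
    ("france", "is"): {"paris": 0.8, "lyon": 0.2},
}

def next_word(context):
    """Pick the most likely next word given the last two words."""
    probs = TOY_PROBS.get(tuple(context[-2:]), {})
    if not probs:
        return None
    return max(probs, key=probs.get)

def generate(prompt, max_words=10):
    """Repeatedly append the most likely next word: predict, append, repeat."""
    words = prompt.split()
    for _ in range(max_words):
        word = next_word(words)
        if word is None:
            break
        words.append(word)
    return " ".join(words)

print(generate("the capital"))  # -> "the capital of france is paris"
```

The point of the toy: nothing in that loop checks whether the output is true. It only checks what is likely to come next, which is why a confident-sounding wrong answer is the system working as designed, not malfunctioning.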