AI Hallucination – A thing or nonsense?

Recently I've heard a lot of people talking about this thing we call AI "hallucinating".

Is this actually a thing? I'd really like to hear opinions on this. Why? Because I think most (not all) of it comes down to prompt optimization processes, human error, and underspecified information.

I mean, if we honestly look at the complete picture, including the context-compressed prompts and all the other tactics that are usually applied to save tokens, is it really hallucinating?

Isn't it just:

  • Underspecified input
  • Lost constraints (context decay)
  • Conflicting instructions
  • Forced continuation bias

Especially with the tactics being used for prompt compression and context handling, like minifying code or stripping it down through the Python AST, that kind of thing.

Example:

I let an AI build a webserver. I then compress/context-strip the thing to save tokens, so we remove the function bodies from the code, for example (this is one of the common ways to compress code inputs). Then I send the code and ask the AI to extend it with some new functionality for the webserver. The AI is trained/forced to continue, but you did not specify the bodies of the functions, so the AI doesn't know what's in there unless we hint at it. But it must continue, so it invents a placeholder/stub/sane-default/logical body for each function and carries on. Afterwards, that was "not what you had in mind". But how could it have known what you wanted?

This is also why, in my current project that locally strips code by replacing function bodies, I put a one-liner comment in place of each body describing what it does, as in the sketch below.
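For the curious, here is a minimal sketch of that kind of stripper, using Python's built-in ast module. It is not my actual project code: the strip_function_bodies name and the idea of reusing the docstring as the one-liner are just assumptions for illustration.

```python
import ast


def strip_function_bodies(source: str) -> str:
    """Replace every function body with a one-line description.

    Here the description is just the function's docstring (or a generic
    placeholder), a hypothetical stand-in for whatever summary your own
    stripper would generate.
    """

    class BodyStripper(ast.NodeTransformer):
        def _strip(self, node):
            summary = ast.get_docstring(node) or "body omitted to save tokens"
            # Replace the body with a single string expression so the
            # stripped file still parses as valid Python.
            node.body = [ast.Expr(value=ast.Constant(value=f"... {summary} ..."))]
            return node

        def visit_FunctionDef(self, node):
            return self._strip(node)

        def visit_AsyncFunctionDef(self, node):
            return self._strip(node)

    tree = BodyStripper().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)  # requires Python 3.9+


if __name__ == "__main__":
    original = '''
def handle_request(path):
    """Route an incoming request to the right handler."""
    if path == "/":
        return "index"
    return "404"
'''
    print(strip_function_bodies(original))
    # Prints roughly:
    # def handle_request(path):
    #     '... Route an incoming request to the right handler. ...'
```

The stripped version keeps the signatures and the one-liner descriptions, so the AI at least has a hint of what each body was supposed to do instead of having to invent it from nothing.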

But it also happens with small, simple prompts:
Hey Gemini, create a simple Python web-server for me.

After all, you supplied almost no information, so it can do one of two things:

  1. Speculate: pick sane defaults, common-sense kind of things, and just build that webserver for you in the most basic, default-ish way (see the sketch further down)
  2. Stop and come up with a ton of questions for you, to "prevent hallucinating". Do you really want that?

Sooo, let's build this webserver, but I will need some information about it:

  • Where are you going to run this, you want it locally?
  • What port do you want to let it listen on?
  • Do you want to have SSL with it?
  • and so on…

Of course, the AI usually comes with suggestions afterwards anyway, but yeah, AI is built to continue.
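To make option 1 concrete, this is roughly the kind of "sane defaults" answer that prompt tends to produce: standard library only, localhost, port 8000, plain HTTP, no SSL. Every one of those choices is an assumption, because the prompt never specified any of it.

```python
# A minimal "sane defaults" web server, the kind an AI might produce
# for the prompt above. Host, port, and the lack of SSL are all
# assumed defaults, not anything the prompt asked for.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HOST, PORT = "127.0.0.1", 8000  # assumed defaults

if __name__ == "__main__":
    server = HTTPServer((HOST, PORT), SimpleHTTPRequestHandler)
    print(f"Serving on http://{HOST}:{PORT} (Ctrl+C to stop)")
    try:
        server.serve_forever()
    except KeyboardInterrupt:
        server.server_close()
```

Nothing in there is "wrong", but if you actually wanted a production server behind TLS on port 443, it's also completely "not what you had in mind".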

This made me wonder whether AI hallucination is actually a thing, or whether it's just one (or a combination) of the four factors above sending it off-course: we didn't specify detailed info, we stripped the inputs, and whatever other processes were optimizing for cost/token use did the rest.

I'm wondering what your opinions on this subject are.
