
First, the Full Metal Jacket drama. I was 100% sure there's a quote in the movie. One of the guys talks about jerking off "100 times a day" trying to get out of Vietnam, right before going in for a doctor's appointment. I thought it was Rafterman or maybe Cowboy, but it doesn't really matter. Anyway, I mentioned this to ChatGPT and it flat out denied the quote ever existed in Full Metal Jacket. It kept saying it's from Good Morning, Vietnam, that Robin Williams flick, which I've never even seen! I kept insisting it was FMJ, but no, ChatGPT doubled down. So I went on YouTube, dug up the actual scene (https://youtu.be/oBJXkMD72xA?si=fEHlJPuwPaXevDXe), and told ChatGPT: "Here it is, direct from the movie. Cowboy making the joke, clear as day." But then it said the video was fake, some fan-made short or whatever. WTF?!
Second thing: I asked ChatGPT about a watch I bought 15 years ago from one of the biggest jewelry stores in Germany's second-largest city, and I uploaded a picture of it. ChatGPT came back saying it "has to be fake, worthless, probably from China," just because it couldn't find any info on it online. This is a legit watch. It came with the price tag, certificate, everything, from a top shop. But the bot just kept insisting it had to be a fake.
So, how can AI hallucinate this hard? It's not even making stuff up, it's just stubbornly, flat-out wrong and refuses to accept clear proof or personal experience. Anyone else had AI just gaslight you like this? Or is ChatGPT getting worse at admitting when it's wrong?
