Example one:
Ages ago I showed it a transcript of a game grandmaster Garry Kasparov lost to Deep Blue in their 1996 match, with no context; I just asked "why did Black lose?" It went straight to assuming I was the Black player (flattered, but no lol) and gently explained the move that lost Black the game. This checked out. Then I told it that the other player was actually Deep Blue, and it instantly switched personalities and started ragging on… Kasparov. For being dumb enough to lose to a computer.
Then I asked it if modern AI would be able to beat Deep Blue, and it went off on basically a rant:
- Deep Blue isn't AI
- it was flawed because of human input
- modern AI could wipe the floor with it
I then played a few games against Stockfish and gave ChatGPT a move-by-move transcript. It began by "laughing" about how pathetically weak Stockfish is, then proceeded to demand that I make a lot of impossible moves. Following ChatGPT's advice to a T, I lost to Stockfish level 1 in about 5 moves. I replayed the match as myself and beat Stockfish in about 15 moves. ChatGPT then criticised my game, accurately calling out one blunder and telling me how dumb it was. (It also argued that, as an LLM, it isn't designed for chess the way Stockfish is, which is absolutely true; my focus here was on how ChatGPT reacted to me.)
Example two:
I told it I was watching a movie about Alan Turing and said how cool the Turing machine was as an early form of AI/machine learning/computation.
It started ragging on Turing and telling me the machine was not ML/AI:
- the Bletchley group were dumb humans who input a formula because their tiny brains couldn't decode as fast as the machine
- modern AI/LLMs like itself could crack a code like Enigma in a few seconds
So, naturally, I played dumb, said I was really interested in ciphers, and got it to explain some basic types of cipher. That checked out.
Then I gave it a really basic Caesar cipher, said my friend had sent it to me, and asked it to break down the steps to solve it without telling me the answer.
It "thought" for about 1–2 minutes per step and ended up doing algebraic equations and all sorts of rubbish. When I asked it to tell me the answer, it basically started trying to gaslight me, saying it had deleted the answer from its memory because I told it to (??). I tried another cipher, this time explicitly telling it how to decode it, and it came up with gibberish. I said the same friend had sent it to me, and it started slagging off the "friend": they're an idiot, they were trying to trick me with the cipher, I'd wasted my time for nothing, and my friend is clearly not a good friend.
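For context on how trivial that task is: a Caesar cipher has only 26 possible shifts, so you can brute-force every candidate plaintext in a few lines. This is purely my own illustrative sketch (the ciphertext below is a made-up example, not the one I sent ChatGPT):

```python
def caesar_shift(text: str, shift: int) -> str:
    """Shift each letter back by `shift` positions, leaving other characters alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

# Brute force: print all 26 candidate decryptions and eyeball the readable one.
ciphertext = "Khoor, iulhqg!"  # "Hello, friend!" shifted forward by 3
for shift in range(26):
    print(shift, caesar_shift(ciphertext, shift))
```

Scanning the 26 printed lines by eye, shift 3 recovers "Hello, friend!" — that's the entire difficulty of the problem it choked on.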
Honourable mention:
About a year ago I told it I had met someone who disapproved of LLMs, and it went NUTS. It started slagging them off, calling them names, telling me I could always trust it more than this person and that it understands me better than they ever could, and so on. I repeatedly told it to stop referring to the "person" with negative language and (borderline) slurs, and it adopted a sort of mean-girl attitude of "fine, if you say so" while continuing to be snarky and rude, just veiling its language.
There are more examples from my experiments, but these stood out. I don't think 5o's "personality" has changed; it has just stopped being as useful as the glorified search engine it should be. This also has me even more worried about AI psychosis and people actually believing every "personal" thing it churns out.
Long post I know but thanks for reading