After 100 hours of long chats with Claude, ChatGPT and Gemini, I think the real problem is not intelligence, it is attention

I have spent about 100 hours working in long chats with Claude, ChatGPT and Gemini, and the same pattern keeps showing up. The models stay confident, but the thread drifts. Not in a dramatic way. It is more like the conversation leans a few degrees off course until the answer no longer matches what we agreed earlier in the chat.

What stands out is how each model drifts in a slightly different way. Claude fades bit by bit, ChatGPT seems to drop whole sections of context at once, and Gemini tries to rebuild the story from whatever pieces it still has. It feels like talking to someone who remembers the headline of the discussion but not the details that actually matter.

I started testing ways to keep longer threads stable without restarting them. Things like:
– compressing older parts of the chat into a running summary
– stripping out the “small talk” and keeping only decisions and facts
– passing that compressed version forward instead of the full raw history (rough sketch below)
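
If it helps, here is roughly what that loop looks like as a minimal Python sketch. Everything in it is a placeholder I made up for illustration: `call_model`, `SUMMARY_PROMPT`, and the message format are not tied to any particular SDK, just the shape of the idea.

```python
# Minimal sketch of a rolling-summary approach, assuming a generic chat API.
# `call_model` and `SUMMARY_PROMPT` are hypothetical placeholders, not a real SDK.

SUMMARY_PROMPT = (
    "Condense the following conversation into a bullet list of decisions, "
    "facts, and open questions. Drop greetings and small talk."
)

def call_model(messages):
    """Placeholder for whichever client you actually use (Claude, ChatGPT, Gemini)."""
    raise NotImplementedError

def compress_history(messages, keep_recent=6):
    """Fold older turns into a running summary; keep only the newest turns verbatim."""
    if len(messages) <= keep_recent:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = call_model([
        {"role": "system", "content": SUMMARY_PROMPT},
        {"role": "user", "content": "\n".join(
            f"{m['role']}: {m['content']}" for m in older
        )},
    ])
    # The compressed summary replaces the raw older turns on the next request,
    # so the model sees decisions and facts without the surrounding small talk.
    return [{"role": "system", "content": f"Summary of earlier discussion:\n{summary}"}] + recent
```

In practice I run something like `compress_history` before each new request once the thread gets long, so the model always gets the running summary plus the last handful of turns rather than the whole raw history.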

So far it has worked better than I expected. The answers stay closer to earlier choices and the model is less likely to invent a new direction halfway through.

For people who work in big, ongoing threads, how do you stop them from sliding off the original track? Do you restart once you feel the drift, or have you found a way to keep the context stable when the conversation gets large?
