Follow-up: fixing AI forgetfulness was more powerful than any prompt tweak I’ve tried

A week ago, I posted about how most of AI’s problems don’t come from bad reasoning but from forgetfulness.
You spend hours building context, only to have the thread reset and all that progress vanish.

Since then, I’ve been experimenting with ways to actually carry reasoning forward instead of constantly rebuilding it.

The result’s been surprisingly effective. I built a small tool called thredly that turns full chat sessions into structured summaries you can reload into any model (Claude, GPT, Gemini, etc.) to restart seamlessly, tone and logic intact.
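To make the idea concrete, here’s a minimal sketch of what a “structured summary” might look like as JSON you could paste into a fresh session. This is purely illustrative, not thredly’s actual format; the field names (`tone`, `decisions`, `open_questions`) are assumptions about what’s worth carrying forward.

```python
import json

# Hypothetical shape for a structured session summary --
# an illustration of the idea, not thredly's real schema.
summary = {
    "project": "long-running research thread",
    "tone": "concise, technical",
    "decisions": [
        "use structured summaries instead of raw transcripts",
        "reload the summary as the first message of a new session",
    ],
    "open_questions": [
        "how to keep the summary small as the project grows?",
    ],
}

# Serialize it so it can be pasted into any model's fresh session.
payload = json.dumps(summary, indent=2)

# Wrap it in a short reload prompt so the model picks up where you left off.
prompt = (
    "Resume this project. Here is the structured summary of our "
    "previous sessions:\n" + payload
)
print(prompt)
```

The point is just that a summary with explicit fields survives a thread reset far better than hoping the model infers everything from a raw transcript dump.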

It’s wild how much smoother long projects feel when the AI just remembers.
Feels like unlocking a different kind of intelligence, one built on continuity, not just cleverness.

Curious how others here handle this:
– Do you use memory-like workflows (notes, JSONs, RAG, etc.)?
– Or do you just start fresh every session and rebuild the thread manually?

Would love to hear how people are experimenting with continuity lately, especially from those juggling long-running research or creative work.
