but slowly lose accuracy after a few runs.
Here’s the simple version of what I noticed:
Most people write prompts like sentences.
But the prompts that stay stable are designed like systems.
A prompt isn’t just “words.”
It’s the structure behind them — roles, rules, steps.
And when the structure is weak, the output drifts even if the wording looks fine.
Once I stopped thinking
“how do I phrase this?”
and switched to
“how do I design this so it can’t decay?”
the drift basically disappeared.
What fixed it:
・Layered structure (context → logic → output)
・Reusable rule blocks (not one long paragraph)
・No filler, no hidden assumptions
Same model. Same task. No drift.
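To make "rule blocks" concrete, here's a minimal sketch of the idea in Python. The context, rules, output format, and the build_prompt helper are all made up for illustration — it's the shape that matters, not these exact words or any particular library.

```python
# Minimal sketch: a prompt assembled from reusable rule blocks
# instead of one long paragraph. All names/content are illustrative.

CONTEXT = "You are a release-notes editor for a small dev team."

RULES = [
    "Only use facts present in the input; never invent details.",
    "If information is missing, say 'unknown' instead of guessing.",
    "Keep each bullet under 20 words.",
]

OUTPUT_FORMAT = "Return a markdown list with one bullet per change."

def build_prompt(task_input: str) -> str:
    """Compose the layers in a fixed order: context -> logic -> output."""
    rules = "\n".join(f"- {r}" for r in RULES)
    return (
        f"{CONTEXT}\n\n"
        f"Rules:\n{rules}\n\n"
        f"{OUTPUT_FORMAT}\n\n"
        f"Input:\n{task_input}"
    )

print(build_prompt("Fixed login timeout; added dark mode."))
```

The point isn't the code. It's that every run gets the exact same structure, so editing one rule can't quietly reshuffle the rest of the prompt.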
So I’m curious:
When you make prompts,
do you write them like text?
Or design them like systems?