been messing with prompt compliance a lot and found one thing that weirdly fixes like half of my obedience issues: making the model echo the task back to me before it executes anything. not a full summary, just a one-liner like "here is what i understand you want me to do." i feel like it forces the model into a verification mindset instead of a creativity mindset, so it stops drifting, over-helping, or jumping ahead.
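for concreteness, the line i add is something like this (exact wording is just my version, nothing magic about it):

```
Before you do anything else, restate my request in one sentence,
starting with "TASK:". Do not start on the task until I confirm.
```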
idk why it works so well, but pairing that with a small "ask before assuming" line (like the ones in god of prompt sanity modules) keeps the output way more literal and clean. if you want to script the check instead of eyeballing it in chat, there's a rough sketch below. anyone else doing this, or got other micro-checks that tighten up compliance without turning the prompt into a novel?
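here's the sketch of the two-turn version. this is just how i'd wire it up, not anything official: it assumes the OpenAI python SDK (v1+), and the model name, prompt wording, and `run_with_echo` helper are all placeholders i made up.

```python
# Two-turn echo-back check: turn 1 gets only a one-line restatement,
# a human confirms it, turn 2 lets the model actually do the task.
# Assumes the OpenAI Python SDK >= 1.0 and OPENAI_API_KEY in the env;
# model name and prompt wording are placeholders, tweak to taste.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Before you do anything else, restate the user's request in one "
    "sentence starting with 'TASK:'. If anything is ambiguous, ask "
    "instead of assuming. Do not start until the user confirms."
)

def run_with_echo(task: str, model: str = "gpt-4o-mini") -> str:
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": task},
    ]

    # turn 1: the model should reply with only the restatement (or a question)
    echo = client.chat.completions.create(model=model, messages=messages)
    restatement = echo.choices[0].message.content
    print(restatement)

    # the manual confirm is the whole point of the technique
    if input("proceed? [y/N] ").strip().lower() != "y":
        return "aborted: restatement did not match intent"

    # turn 2: confirm, then let the model execute as restated
    messages += [
        {"role": "assistant", "content": restatement},
        {"role": "user", "content": "Confirmed, proceed exactly as restated."},
    ]
    result = client.chat.completions.create(model=model, messages=messages)
    return result.choices[0].message.content
```

the confirm gate is deliberately manual here. the restatement turn is what snaps the model into verification mode before it generates anything, so skipping the pause kind of defeats the purpose.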