Not talking about agents or long scripts. More like treating a framework the way you’d treat a system blueprint. Layers. Logic. Ethics. Modularity. Stuff that lets you build consistency over time.
I’ve been experimenting with:
• separating reasoning from ethics
• having multiple “cycles” or sections each doing a different job
• letting prompts “govern” each other so outputs stay stable
• borrowing ideas from engineering, policy, and philosophy to shape behavior
• testing the same structure across different models to see where it breaks
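To make the "cycles" and "governing" ideas concrete, here's a minimal sketch of what I mean. Everything here is hypothetical and illustrative (the `Stage` / `run_pipeline` names are made up, not from any library): each stage transforms a working draft, and a stage's "governor" check can veto that stage's output so the draft stays stable.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch only. A "framework" as a pipeline of stages:
# each stage rewrites the draft, and its governor check can reject
# the rewrite, in which case the previous draft is kept.

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]                      # transforms the draft
    govern: Callable[[str], bool] = field(
        default=lambda _: True                     # accept by default
    )

def run_pipeline(stages: list[Stage], draft: str) -> str:
    for stage in stages:
        candidate = stage.run(draft)
        # "governing": only accept the stage's output if its check
        # passes; otherwise keep the prior draft so outputs stay stable
        if stage.govern(candidate):
            draft = candidate
    return draft

# Toy stages: reasoning is kept separate from the ethics filter.
reasoning = Stage("reasoning", run=lambda d: d + " -> reasoned answer")
ethics = Stage(
    "ethics",
    run=lambda d: d,                               # pass-through
    govern=lambda d: "forbidden" not in d,         # veto flagged drafts
)

print(run_pipeline([reasoning, ethics], "question"))
```

The point isn't this specific code, just the shape: separating the transform from the check is what lets you swap in a different ethics layer, or run the same stage list against different models to see where it breaks.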
Curious what everyone else thinks:
What’s the most overlooked part of designing a prompt framework?
Is it the logic flow? The ethics layer? The testing? The modularity? Or something else entirely?
Not sharing anything proprietary here; just keen to hear how others think about the architecture side of prompting.