
We’ve all hit the wall where a single "mega-prompt" becomes too complex to be reliable. You tweak one instruction, and the model forgets another.
We also tried solving this with OpenAI’s Custom GPTs, but found them too much of a black box: you give them instructions, but they decide if and when to follow them. For strict business workflows, that probabilistic behavior is a nightmare.
We built Purposewrite to solve this. It’s a "simple-code" environment that treats prompts not as magic spells, but as steps in a deterministic script.
We just open-sourced our internal library of apps, and I thought this community might appreciate the approach to "Flow Engineering."
Why this is different from standard prompting:
- Glass Box vs. Black Box: Instead of hoping the model follows your instructions, you script the exact path. If you want step A -> step B -> step C, it happens that way every time.
- Breaking Up the Context: The scripts let you chain multiple LLMs. You can use a cheap model (e.g., GPT-3.5) to clean data and a stronger model (e.g., Claude Sonnet 4.5) to write the final prose, all in one flow.
- Loops & Logic: We implemented commands like #Loop-Until, which keeps the AI iterating on a draft until you (the human) explicitly approve it. No more "fire and forget".
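For anyone who thinks better in code, here is roughly what the idea behind those three points looks like in plain Python. The model calls are stubs, and `loop_until` is my own sketch of what the #Loop-Until behavior amounts to, not Purposewrite's actual syntax or API:

```python
def call_cheap_model(prompt: str) -> str:
    """Stub for an inexpensive model used for data clean-up (step A)."""
    return prompt.strip().lower()

def call_smart_model(prompt: str) -> str:
    """Stub for a stronger model used for the final prose (step B)."""
    return f"Polished draft based on: {prompt}"

def loop_until(step, approve, max_iters=5):
    """The #Loop-Until idea: re-run `step` until the reviewer approves.
    The human callback, not the model, decides when the loop stops."""
    draft = None
    for _ in range(max_iters):
        draft = step(draft)
        if approve(draft):
            return draft
    raise RuntimeError("Reviewer never approved the draft")

def run_flow(raw_input: str, approve) -> str:
    """A fixed flow: clean -> draft -> review loop, in that order, every time."""
    cleaned = call_cheap_model(raw_input)
    step = lambda prev: call_smart_model(cleaned if prev is None else prev)
    return loop_until(step, approve)

final = run_flow("  RAW Notes About Topic  ", approve=lambda d: "Polished" in d)
print(final)
```

The point is that control flow lives in the script, not the model: the cheap model always runs first, the smart model always drafts second, and the loop only exits on explicit approval.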
The Repo: We’ve released our production scripts (like "Article Writer") which break down a massive writing task into 5 distinct, scripted stages (Audience Analysis -> Tone Calibration -> Drafting, etc.).
You can check out the syntax and examples here: https://github.com/Petter-Pmagi/purposewrite-examples
If you are looking to move from "Prompting" to "Workflow Architecture," this might be a fun sandbox to play in.
