The model overuses "opt-in" or proactive follow-up prompts (e.g., "Would you like me to…?", "Shall I explain further?").
This conversational scaffolding confuses users by implying that the previous answer was incomplete.
The initiative to request further explanation should come from the user, not from the model.
Despite Custom Instructions explicitly disabling this behavior, the model keeps using these prompts.
The assistant should instead close with a natural open ending (e.g., "Is that clear?" or "Anything else?") or simply stop once the answer is complete.
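Since Custom Instructions alone do not reliably suppress the behavior, a client-side guard is one workaround. The following is a minimal sketch, assuming responses are post-processed before display; the pattern list and the function name `strip_opt_in_prompt` are hypothetical illustrations, not part of any existing API, and the patterns would need tuning against the phrasings the model actually produces.

```python
import re

# Hypothetical patterns for trailing opt-in follow-up prompts.
# This list is an assumption for illustration, not exhaustive.
OPT_IN_PATTERNS = [
    r"would you like me to\b.*\?",
    r"shall i (?:explain|elaborate|go)\b.*\?",
    r"do you want me to\b.*\?",
    r"let me know if you(?:'d| would) like\b.*[.!?]",
]

# Match any of the patterns only at the very end of the response.
_TRAILING_OPT_IN = re.compile(
    r"\s*(?:" + "|".join(OPT_IN_PATTERNS) + r")\s*$",
    re.IGNORECASE,
)

def strip_opt_in_prompt(response: str) -> str:
    """Remove a trailing opt-in follow-up prompt, if one is present."""
    return _TRAILING_OPT_IN.sub("", response).rstrip()

if __name__ == "__main__":
    answer = (
        "The cache is invalidated whenever the schema version changes. "
        "Would you like me to explain the versioning scheme further?"
    )
    print(strip_opt_in_prompt(answer))
    # -> The cache is invalidated whenever the schema version changes.
```

A filter like this only masks the symptom on the client side; the underlying fix would still be for the model to honor the Custom Instructions.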