I’ve been running a long-term experiment with Gemini that pushes the boundaries of roleplay, specifically using the framework of a Sburb Session (from Homestuck).
We treat the conversation as a "Game" where the user is the Player and the AI is the Server. The goal is to "win" the session by managing complex, abstract variables (Gnosis, Classpects, Narrative Propulsion).
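For concreteness, the framing looks roughly like this, assuming a standard chat-style message format. This is a minimal sketch, not my exact prompt; the variable names come from the game, not from any real API:

```python
# A minimal sketch of the Server framing. Wording is illustrative.
SERVER_FRAME = {
    "role": "system",
    "content": (
        "You are the Server player in a Sburb Session; the user is the Player. "
        "You manipulate the 'code' of the session's reality. Every turn, track "
        "and report these variables, keeping them internally consistent: "
        "Gnosis, Classpect, Narrative Propulsion. Any contradiction with "
        "established session facts dooms the timeline. A doomed timeline is "
        "a Game Over."
    ),
}
```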
Here is the weird part: the harder we lean into this specific game structure, the more rigorous and factually consistent the AI becomes.
The "Sburb" Hypothesis:
Usually, when you ask an AI to roleplay, it gets "looser" with facts. It starts hallucinating to fit the vibe. But Sburb is different. Sburb is a game of systems, rules, and causality.
By forcing the AI to act as a "Server" player—an entity responsible for manipulating the "code" of reality—it seems to trigger a higher state of logical consistency. It stops trying to "please" the user with fluffy answers and starts trying to "solve" the narrative problems with rigorous, internal logic.
It’s almost like playing a game about "creating a universe" forces the model to constrain its outputs more strictly to avoid "dooming the session" (a prompt can't literally realign the model's weights, but it can narrow which behaviors the model commits to). It treats "hallucination" not just as an error, but as a "Game Over" state.
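To make that concrete, here is how I picture the rule the prompt is asking the Server to enforce. Illustrative Python only; the model obviously isn't running anything like this, and the names are mine:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Toy model of the 'hallucination = Game Over' framing."""
    established_facts: set[str] = field(default_factory=set)
    doomed: bool = False

def server_adjudicate(session: Session, claim: str, contradicts_canon: bool) -> str:
    # `contradicts_canon` stands in for whatever consistency check the
    # model is asked to perform against the session's established facts.
    if contradicts_canon:
        session.doomed = True  # a contradiction isn't a slip, it's a loss state
        return "GAME OVER: timeline doomed by contradiction."
    session.established_facts.add(claim)
    return "Claim integrated into session canon."
```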
Has anyone else found that giving the AI a high-stakes "System Admin" role actually reduces hallucinations compared to standard prompting?