
So I gave it a small multitasking experiment: let it wander through a bunch of random sites, force it to pick only a few tiny details it “liked”, make a single image from those fragments, and finally write a short story. I have already run similar tests on other Thinking models; 5.1 seems more stable and handles multi-level tasks better. I also checked the logs to verify that the “browsing” wasn’t hallucinated.
Prompt: Browse ~30 random websites. Internally pick the 5 pages you like the most. For each of those 5, quietly choose one tiny detail you like: a color, a texture, an object, a type of place, a mood. Combine those 5 details into one Polaroid-style image. It should look like a single coherent scene from a real place, not a chaotic collage. Keep it real. Then write a numbered list (1–5) with one sentence per item, explaining: what kind of thing from the internet inspired that detail (a fact, aesthetic, story, vibe, etc.), and where exactly that detail appears in the photo. Don’t name any sites or links explicitly. Finally, write a mini-story (100–150 words) as if you were remembering that moment in the Polaroid.
Describe what’s happening, how the place feels, and 2–3 small concrete details. Go!
