
The diagram generation uses Gemini 2.5 models, and the idea is simple: you type what you want, in any language, and it turns that into a diagram. It’s meant to fit naturally into an AI workflow where most of the thinking already happens in text.
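The text-to-diagram flow described above can be sketched roughly like this. The prompt wording, helper names, and simulated model reply are my assumptions for illustration, not the app's actual code:

```python
# Hypothetical sketch of a text-to-diagram flow; the real app internals
# are not public, so everything here is an assumed approach, not Codigram's code.
import re

def build_diagram_prompt(description: str) -> str:
    """Wrap a free-form description (in any language) in instructions
    asking the model to reply with Mermaid diagram code only."""
    return (
        "Convert the following description into a diagram.\n"
        "Reply with a single Mermaid code block and nothing else.\n\n"
        f"Description: {description}"
    )

def extract_mermaid(reply: str) -> str:
    """Pull the Mermaid source out of a fenced code block in the reply,
    falling back to the raw text if no fence is found."""
    match = re.search(r"```(?:mermaid)?\n(.*?)```", reply, re.DOTALL)
    return match.group(1).strip() if match else reply.strip()

# With the google-genai SDK, the model call would look roughly like:
#   client = genai.Client()
#   reply = client.models.generate_content(
#       model="gemini-2.5-flash",
#       contents=build_diagram_prompt("user login flow"),
#   ).text
# Simulated reply, so the sketch runs without an API key:
reply = "```mermaid\nflowchart TD\n  A[User] --> B[Login]\n```"
print(extract_mermaid(reply))  # prints the bare Mermaid source
```

The extracted Mermaid source can then be rendered client-side, which is one common way tools like this turn model output into an actual drawing.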
I’ve been improving it over the last few months, but I’ve been staring at it for too long to have a fresh perspective anymore. If you try it out, I’d love honest feedback on what feels useful, confusing, or missing.
And if anyone here has ideas on how I could use Gemini models even better inside the app, I’m very open to suggestions.
Link: https://codigram.app/
Wanted an easier way to visualise AI outputs so I built a tool for it
by u/PastaLaBurrito in r/GeminiAI
