OpenAI, if you want to actually compete with Gemini 3:

Make your model read uploaded files without hallucinating. Say GPT-5.1 writes some code, and you give it back a zip of that code (node_modules removed before zipping), or you turn the repo into a single .md file with clearly defined file-header decoration and separation for easy parsing. Even though the model wrote every one of those files earlier, in a new chat it will always claim the upload is truncated, no matter how many custom instructions and memories you give it telling it to request more lines of code from the orchestration layer (the layer that prints the "..." in the output the LLM actually reads). The result is hallucinations, or the model writing brand-new code instead of working with yours.
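For anyone who wants to try the single-markdown approach, here is a minimal sketch of the kind of repo flattener I mean. The `repo_to_markdown` name and the `===== FILE: ... =====` header decoration are my own choices, not any official tooling; the point is just that every file gets an unambiguous start/end marker the model can parse:

```python
from pathlib import Path

def repo_to_markdown(repo_root, out_path, skip_dirs=frozenset({"node_modules", ".git"})):
    """Flatten a repo into one markdown file, one clearly delimited
    section per file, skipping node_modules and other noise dirs."""
    root = Path(repo_root)
    parts = []
    for path in sorted(root.rglob("*")):
        if path.is_dir():
            continue
        rel = path.relative_to(root)
        if skip_dirs & set(rel.parts):
            continue  # e.g. anything under node_modules/
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries / unreadable files
        # Distinct decoration marks where each file begins and ends,
        # so the model can't confuse file boundaries.
        parts.append(f"===== FILE: {rel} =====\n{text}\n===== END FILE: {rel} =====\n")
    Path(out_path).write_text("\n".join(parts), encoding="utf-8")
    return len(parts)  # number of files included
```

Run it on the project root before uploading, and check the resulting file stays under the model's context budget.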

Gemini does not support zip uploads, so a single .md file (under 500,000 tokens) is the only way to go. But Gemini also properly requests more information from the orchestration layer, resulting in far fewer hallucinations. Gemini's only issue is less randomness and less overachieving, which is ultimately great, but it also means you get exactly what you asked for, with little extra polish or optimization unless you prompt for it.

GPT does excel at the instruct function: turning simple, terrible prompts into something actually useful for an LLM.

But I would rather use Gemini, which can actually handle code in chat, adding one feature at a time in a different chat each time, and doesn't need an API plus an agent plus $20 (for one app) to build something correctly. Gemini can just build it in the chat interface and not hallucinate in the follow-up chats when you ask for updates.
