... in the output for the LLM to read, resulting in hallucinations or the model writing new code.
Gemini does not support zip uploads, so a single Markdown file (below 500,000 tokens) is the only way to go.
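A minimal sketch of that workflow: concatenating a repo's source files into one Markdown file and checking it stays under the token budget. The extension list, skipped directories, and the ~4-characters-per-token heuristic are all assumptions for illustration, not Gemini's actual tokenizer.

```python
import os

# Assumption: ~4 characters per token as a rough estimate (not Gemini's tokenizer)
CHARS_PER_TOKEN = 4
TOKEN_LIMIT = 500_000

# Assumption: which extensions count as source; adjust for your project
EXTENSIONS = {".py", ".js", ".ts", ".md", ".json", ".html", ".css"}

def bundle_repo(root: str, out_path: str = "repo.md") -> int:
    """Concatenate source files under `root` into one Markdown file,
    one fenced block per file, and return the estimated token count."""
    chunks = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip common noise directories so they don't inflate the bundle
        dirnames[:] = [d for d in dirnames
                       if d not in {".git", "node_modules", "__pycache__"}]
        for name in sorted(filenames):
            if os.path.splitext(name)[1] in EXTENSIONS:
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    rel = os.path.relpath(path, root)
                    chunks.append(f"## {rel}\n\n```\n{f.read()}\n```\n")
    text = "\n".join(chunks)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(text)
    est_tokens = len(text) // CHARS_PER_TOKEN
    if est_tokens > TOKEN_LIMIT:
        print(f"Warning: ~{est_tokens} tokens exceeds the {TOKEN_LIMIT} limit")
    return est_tokens
```

Pasting the resulting file into the chat gives the model the whole codebase as context in one shot, which is what makes the single-file route workable.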
But it also properly requests more information from the orchestration layer, resulting in far fewer hallucinations. Gemini's only issue is less randomness and less overachieving, which is ultimately great, but it also means you get exactly what you asked for, with few extras or optimizations unless you prompt further.
GPT does excel at the instruct function of turning simple, terrible prompts into something actually useful for an LLM.
But I would rather use Gemini, which can actually handle code in chat, adding one feature at a time in a different chat each time, and doesn't need an API plus an agent plus $20 (for one app) to make something correctly. Gemini can just build it in the chat interface and doesn't hallucinate in follow-up chats for updates.