I was getting SUPER tired of ChatGPT giving horrible answers, so I gave it clear, precise instructions on how to address a coding problem I've been facing, because it kept hallucinating code and referencing code that doesn't exist in the files I attached.

But then I noticed, after turning thinking mode on, that in its chain of thought the LLM had to repeatedly "adjust" because it kept "seeing" ellipses in the file content.

Once I instructed ChatGPT to expand the ellipses, it was able to help me better (it still sucked, but at least it sucked with the right code). Has anyone else noticed this? Is this some shortcut OpenAI has taken with ChatGPT? It makes it far less useful for coding assistance.
