Custom GPTs are dumber than every model in regular chat — can I fix this somehow?


I made some custom GPTs for my students. They are instructed to help in a proper, structured manner and to respect academic integrity rules. The problem is, they commonly get easy questions wrong that any model in regular ChatGPT answers correctly with no issues. Because of this, I can't recommend that my students use these assistants: they'd get better answers from the regular models, but they'd also miss the built-in support mechanisms.

For example, the custom GPTs get this prompt wrong almost every time: "I am trying to calculate the average of these numbers but my answer is wrong? Read the image carefully and find my mistake."

The custom GPT will give a generic answer like "the most common mistakes in calculating averages are…", or it will outright misread the numbers in the image. Any model in regular ChatGPT gets the answer right. Interestingly, this happens with Gemini Gems too.

I wonder if the instructions are somehow causing this. I tried changing the recommended model for the custom GPT, but most of my students are free users, so they don't get a choice of model.

The weird thing is that these used to work just fine until a few months ago. I even have old lecture videos where I demonstrate this and similar examples being answered properly. Something has changed. Any suggestions on what I can do about it?
