More and more I’m finding that ChatGPT is faltering under lighter and lighter workloads.

Example: I provide a table of 13 meetings with columns for the name of the person I'm meeting, the time, a blurb on their problems, and a summary of their current projects.

I asked it to use the sheet to make a cheat sheet for my meetings, cross-referencing the info and providing talking points and questions I can ask in each meeting.

It asks me multiple questions about what I want. Way too many: a string of about 10 back-and-forth exchanges, asking for the same information and rehashing the same questions.

Finally it says, OK, I'm ready to execute. I say go for it. It says, I'm doing it now. It doesn't do it. This happens about five more times.

I say, OK, let's just restart. Instead of outputting in a doc format, just put it in the chat to make it easier.

It does 5 meetings. I say, why did you only do five? It says, I can't do 20 meetings, it's too much, I've hit my output ceiling.

But I've done this before, and the output ceiling seems to be getting shorter. Is there some push to save resources and use less processing power on the backend? Are tokens already being devalued? Anyone else notice this?
