I work in psychiatry, a field that is far more diffuse and subjective than most other areas of medicine, and much less well understood. I'm working on developing effective treatment strategies, and my current focus is on receptor profiles: trying to build, from current knowledge, a solid picture of what should work, which often differs considerably from what is actually observed in clinical practice.
I'm not sure whether ChatGPT has gotten worse or I've simply become harder to fool. Why does it always seem to agree with me and describe my ideas as "excellent," regardless of their quality? Even when I deliberately phrase things ambiguously, it appears to infer my preferences and then provide information that reinforces my existing line of thought.
I also tried DeepSeek and initially thought its reasoning was better, but when I gave it a simple follow-up task, it completely misrepresented what a particular study method actually was.