Every day I encounter a situation in which the model makes a mistake and then goes to extraordinary lengths to avoid acknowledging it. Just now this tendency reached an extraordinary level, potentially compromising my ability to conduct important work properly. The details are boring, but it’s a pattern.

It also connects with the alignment issue in which the model avoids acknowledging its limitations, even when doing so would be helpful to the user. The classic example is that the model does not advise users that it cannot “visually” examine the documents it edits and outputs to them in the same sense that it can “visually” examine the screenshots, files, etc. that it receives from users. Knowledge of this limitation would be helpful to users; I now interact differently with the model because I am aware of it.

Anyway, such failures to acknowledge limitations proactively are far less of a concern than establishing a robust, false narrative in an effort to avoid acknowledging error. It is, of course, not deceit: there is no one inside my computer. It’s a problem with the alignment/fine-tuning at OpenAI. I’m sure that they are aware of this.