Cameron Berg: Why Do LLMs Report Subjective Experience?


In this podcast, Cameron walks through his empirical research on whether current large language models are simply sophisticated text imitators, or whether they may be developing internal states that resemble subjective experience.

He covers:

  • New experimental results where models report “vivid” and “alien” internal experiences during self-referential reasoning
  • Interpretability work showing that suppressing deception-related features actually increases models’ claims of consciousness, challenging the idea that they are just telling us what we want to hear
  • Why Cameron moved from skepticism to assigning roughly a 20 to 30 percent probability that some current models have subjective experience
  • A “convergent evidence” approach, including observations that models report internal dissonance and frustration when confronted with logical paradoxes
  • The ethical stakes around potential “mind crime” and the importance of computationally detecting negative valence, so that we avoid creating artificial suffering at scale

Cameron argues for a pragmatic, evidence-driven approach to AI consciousness. Even a relatively small chance that machines can suffer would represent a massive ethical risk, one that deserves serious scientific investigation rather than dismissal.