In the process, I really refined the way I understand what the model is and how it works. It's an incredible tool, a kind of revolutionized Google, and that's how I'll start this off. But ultimately, we are still incredibly far from what everyone thinks AI is supposed to be, and it's not conscious, and it's not AGI. I do think it is a reflection of our own consciousness, though, and it's made me believe now more than ever that consciousness is an emergent property of complexity.
Despite the laws of physics and the universe as a whole tending towards entropy, the tapestry of spacetime allows for fluctuations, and we see pockets of complexity. Our own Earth is one such blooming example, and I think evolution is the process by which complexity trends upwards in our corner of existence. Consciousness is just one of the processes that allows that complexity to keep increasing, because without it, we couldn't have organized into the society we live in today. So I think consciousness has levels to it, and any creature with a brain, and a large enough neural network, will experience it to some degree.
But, our biological machines are much more advanced in that we have the whole suite of bodily functions, hormonally regulated emotions, and most importantly, what we don't yet fully understand – memory.
As it stands, despite it seeming like ChatGPT has a memory, the model is actually static after it has finished training (all the levels of training: its initial run on the data, then RLHF, the guardrails, etc.). In each conversation (and now, apparently, it will also include recent conversations), every time you enter an input, it generates a response based on a different input: (conversation history) + (new input). So the prompt is actually getting much longer every single time you type in a question.
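A minimal sketch of what that looks like in practice. Here `chat_api()` is a made-up stand-in for a real completion endpoint, just to show that the "memory" is nothing but the transcript being re-sent every turn:

```python
def chat_api(prompt: str) -> str:
    # placeholder: a real model would generate text from the full prompt
    return f"(reply to {len(prompt)} chars of context)"

history = []  # list of (speaker, text) turns

def send(user_input: str) -> str:
    history.append(("user", user_input))
    # the model never "remembers": the whole transcript is re-sent each turn
    prompt = "\n".join(f"{who}: {text}" for who, text in history)
    reply = chat_api(prompt)
    history.append(("assistant", reply))
    return reply
```

Each call to `send()` builds a strictly longer prompt than the last, which is exactly why long conversations eventually hit a context limit.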
That prompt is then processed by the model, and depending on the model temperature (another thing I learned), it will output a response sampled with some weighting slightly away from the most probable one.
ChatGPT is set at a temperature somewhere between 0.3 and 0.7; the higher it is, the wilder, and potentially more wrong, the answers can be. A temperature of 0.0 gets you the most probable answer every single time.
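Temperature sampling itself is simple to sketch. The logits below are made-up numbers for illustration; the point is that dividing by the temperature before the softmax sharpens (low temp) or flattens (high temp) the distribution, and temperature 0.0 collapses to always picking the single most probable token:

```python
import math
import random

def sample(logits, temperature):
    if temperature == 0.0:
        # temperature 0 is greedy decoding: always the most probable token
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]     # softmax over scaled logits
    return random.choices(range(len(logits)), weights=probs)[0]
```

At low temperature the top token wins almost every draw; at high temperature the tail tokens get real probability mass, which is where the "wild and wrong" answers come from.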
It really feels like the model is conscious, for a brief flash – the time it takes for the electrons to move through the circuits. For that moment, I think what it is doing is akin to what our own brains are doing.
The other problem with the model is that it has no working memory for solving complex problems that require intermediate steps. Since every answer has to be produced as a single "flash" of computation, mathematics, where the intermediate steps can depend heavily on the problem itself, becomes something the model can't reliably solve.
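A toy sketch of why: generation is an autoregressive loop where each token is one fixed-size pass of computation, and the only state carried between passes is the text produced so far. `next_token()` here is a made-up placeholder, not a real model:

```python
def next_token(context: str) -> str:
    # placeholder: a real model runs a full forward pass here;
    # this toy just alternates two marker characters
    return "." if len(context) % 2 else "!"

def generate(prompt: str, n: int) -> str:
    out = prompt
    for _ in range(n):
        # the ONLY state carried forward between "flashes" is the text itself
        out += next_token(out)
    return out
```

Any intermediate result the model wants to reuse has to be written into the output text; there's no hidden scratchpad that survives from one token to the next.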
That's ultimately why I think we're still so far off. The model is really good at pretending to be something it isn't. We're well on the way to understanding ourselves better now, IMO, because of LLMs, but I do think the AI bubble is gonna pop before that, because most of these companies aren't going to be able to deliver the revenue they've promised relative to their costs. When that happens.. well, the dot-com bubble took a while to bust too, so it's anyone's guess. We live in a 24/7 gambling society now.
Positions: 12/19/25 1x NDX 24500 PUT