What it can say: "Based on patterns I've detected in my training data, the word 'fire' is followed by the word 'hot' 99% of the time, so I will predict 'hot'."
* Fire is not 99% hot. It is hot 100% of the time, as any 3-year-old intuitively understands (humans perceive truth through empirical evidence)
* Transformer-based AI systems, fundamentally, will never be able to hold something as 100% true (AI models perceive truth through probabilistic likelihood, as the toy sketch below illustrates)
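To make that concrete, here's a toy sketch in Python of what next-token prediction actually is under the hood. The numbers and tokens are completely made up for illustration; the point is that nothing in the process ever checks whether "hot" is TRUE. There is only relative likelihood:

```python
import random

# Toy model of next-token prediction (all probabilities made up).
# A transformer never stores "fire is hot" as a FACT; all it has is a
# probability distribution over next tokens, learned from co-occurrence
# statistics in its training data.
next_token_probs = {
    "hot": 0.99,      # overwhelmingly common after "fire is" in training text
    "cold": 0.005,    # rare, but never driven to exactly zero
    "purple": 0.005,  # nonsense stays "possible" forever
}

def predict(probs: dict) -> str:
    """Sample the next token. Note: no truth check anywhere, only likelihood."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights)[0]

print("fire is", predict(next_token_probs))  # usually "hot"... usually
```

Run it enough times and it WILL eventually say fire is purple, because a nonzero probability is never "false" to the model, just unlikely.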
So if you are asking a question that DOES have an actual correct answer, one you yourself do NOT know, there is inherently no guarantee the response you get from an AI will be useful at all. The model makes no distinction between truth and garbage; it does not, and never will, have that ability.
It worries me how often I'm seeing people rely on AI for "hard" answers, even ostensibly intelligent people you would think would know better.
I feel like this is going to burn us badly, and probably already has.
I'm not suggesting this means AI is useless or inherently dangerous, far from it. I'm just saying everybody involved with its public messaging is being grossly negligent and is putting society at undue risk.
———–
OK, so here's a perfect example of how it can be valuable: sometimes I'll use AI to help me write, and it helps because my process is iterative. I'm always reductively carving away or additively building up, stepping back and looking at it, smoothing, adding detail. Keeping a chatbot around to bounce stuff off of is genuinely useful because it gives me more raw options to consider and shape. The end product gets significantly more INPUT than it would have otherwise (the model will always be able to make more associations than I can), but the OUTPUT is still all mine, an important distinction.
So when I read it back, I can't really pinpoint its influence; it doesn't feel like AI wrote any of it, because it didn't. In fact, most of what it suggested was downright stupid lol. But if you go in with that EXPECTATION, you can still find some valuable seeds, separate the wheat from the chaff if you will.
——————
This has REAL WORTH in the right hands. If you're already an expert in a given field and can already identify what a correct answer kind of LOOKS like, you can use AI as a powerful divining rod for truth. Then even falsehoods can work to your benefit, since wrong answers often carry hidden hints about the correct ones. But only to the discerning eye.
So yes, VALUABLE, but I will reiterate that its utility should be explicitly reserved for the beginning of a given pipeline. Never, ever cede it control of the end (or otherwise accept any of its output as gospel).
People need to be educated on this stuff yesterday. It's not magic; it's a clever application of probability mathematics combined with the brute force of large datasets, and that's it. The perception that it KNOWS anything is an illusion. A quite convincing one (hence the danger), but an illusion all the same.
AND it has limitations, and not theoretical or temporary "we can fix it" limitations, but inherent, fundamental ones. People need to be SHOWN that quadratic scaling means compute cost grows with the square of context length, an unavoidable bottleneck (the back-of-the-envelope sketch below shows the arithmetic), and that nothing even APPROACHING the complexity of our reality could EVER be quantized, tokenized, or sufficiently synthesized. In any aspect. It's always going to be a cheap approximation or simulacrum. So no, no long-form video; the cohesion doesn't scale. A real camera remembers ALL your context parameters for a minute, an hour, a month, at a cost of zero compute cycles. AI can't replace photography, and won't. Yes, it has applications, but they're SO MUCH MORE LIMITED IN SCOPE than we've been led to believe.
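If you want to see that bottleneck for yourself, here's the rough arithmetic. This is simplified on purpose (it ignores heads, layers, and constant factors, and the token counts are just examples), but the quadratic shape is the real thing:

```python
# Back-of-the-envelope: vanilla self-attention compares every token with
# every other token, so n tokens of context means roughly n*n pairwise
# scores per layer. Simplified: ignores heads, layers, constant factors.
for n in (1_000, 10_000, 100_000, 1_000_000):
    pairs = n * n
    print(f"context: {n:>9,} tokens -> {pairs:>19,} attention pairs")

# 10x the context costs ~100x the compute at the attention step.
# A camera's "memory" of the scene costs nothing extra per second;
# a transformer pays quadratically for every token it keeps in view.
```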
I welcome discussion on this; it both fascinates and horrifies me. It would be cool to discover I'm wrong about all this… also my apologies if these observations are terribly stale or pedestrian haha
edit: typos