I tried predicting ChatGPT’s future using a drunk crystal ball

A few years back I got lost in three of Descartes’ books, all bound in leather. One of them was The World, or Treatise on Light. It was written around 1630, back when people still thought comets were divine emails, and in it, the man calmly suggested that the universe wasn’t a mystical playground but a machine, a massive, gear-grinding contraption running on physics, not miracles. Certainly not God. Everything, he said, the planets, beasts, humans (also beasts), that guy yelling “alchemy!” in the market, was nothing more than matter bumping into matter.

Then Descartes got bolder. In a later work he announced that animals are automatons, soulless flesh robots pretending to think. “La bête est une machine,” he wrote, which is French for “my wiener dog’s a toaster with legs”. They don’t reason, he said, they just react, like a wind-up toy that occasionally drools.

And by 1644, in Principia Philosophiae, he went full cosmic mechanic. He stated that the universe was a perfect clock wound once by God, but now ticking on without supervision. If you knew every initial condition and law of motion, he said, you could predict every future event. The stars, the tides, the fall of kings (not Trump, but he tried), all just the long, echoing click of deterministic gears.

He didn’t go so far as to call humans mechanical toys too, though he was definitely skating dangerously close to that abyss in his book. Because admitting that would mean we have no free will at all, that we are just bags of atoms pushed around by the cosmos like billiard balls. And back in the 1630s, saying that out loud was the same as lighting your own pyre. People got burned for less. Literally.

Now, in the tradition of Descartes, I tried to predict my own future. Not with philosophy, God forbid, but with Excel, which is basically Principia Philosophiae for middle managers. I made a spreadsheet. A freaking spreadsheet. I mapped out my Q4 goals, my five-year plan, my heat-death-of-the-universe retirement strategy.

And it was beautiful.

It was color-coded like a monk’s illuminated manuscript. Conditional formatting so advanced it would turn red and shame me the moment I missed a deadline. It was, for about ten minutes, the perfect deterministic system. My very own clockwork cosmos.

Then my dog put his big arse on the keyboard, deleted half of it, and somehow signed me up for a lifetime subscription to artisanally made dogfood. So much for omniscient foresight.

Apparently my destiny isn’t written in the stars. It is, however, written in the fine print of auto-renewal emails.

So when a reader of TTS asked me, in a comment, to predict the future of ChatGPT, you’ll forgive the twitch in my left eye. This isn’t forecasting anymore. It’s a hostage negotiation with a timeline already drafted by a committee of techies who use “synergy” unironically.

I wouldn’t be charting a course. I’d just be guessing which part of the ship hits the iceberg first.

More rants after the messages:

  1. Connect with me on LinkedIn 🙏
  2. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  3. Please comment, like or clap the article. Whatever you fancy.

So here’s the deal. I’ve been digging. I’ve been reading the tea leaves: OpenAI’s press releases, the leaked memos that look like they were written by a PR clerk having a panic attack while mainlining Red Bull and the occasional whiff of white. And I’ve been watching Sam’s blog posts like they’re scripture, parsing every word for hidden meaning, trying to figure out if he’s a prophet or just a really good salesman.

And I’ve come up with three ways this whole ChatGPT thing could go by the end of 2026. Three flavors of our collective doom. Or salvation.

Whatever.

It’s probably both, served on the same plate, with a side of moral superiority and a garnish of “I didn’t think this one through enough”.

So, as I said, three scenarios, or directions ChatGPT could develop in.

First up, I’ve got scenario one.

I call it . . .

The super-assistant’s ubiquity

This is the timeline where ChatGPT the chatbot starts becoming your mom, your therapist, and your parole officer all rolled into one. This is the future laid out in that leaked “ChatGPT: H1 2025 Strategy” doc from December 2, 2024 — you know, the one that got passed around the Google antitrust trial (sic!) like a dirty secret at a family reunion.

It’s a vision of an “intuitive AI super assistant that deeply understands you and is your interface to the internet”.

Gross.

Also, probably accurate.

In this future, ChatGPT isn’t only on your computer. It’s in your phone, in your ear, on your face, and probably listening to you talking while hiding in your dental fillings. It’s the ghost in the machine, the voice that whispers sweet nothings about your to-do list while you’re trying to sleep.

It knows you like no other.

Not even Meta and Google will come close.

It knows your bad habits.

It knows you searched for herpes symptoms at 2:43 a.m. once.

It knows you haven’t called your mother in three weeks, because the app is a know-it-all, and it knows you’ve been lying about going to the gym.

Yeah, it remembers.

And it’s not afraid to use that information against you.

Proactively. And with a smile.

The goal, according to the memo, is to create something that can do anything a “smart, trustworthy, emotionally intelligent person with a computer could do”. So, basically, it’s designed to make you obsolete. But in a helpful way. In a way that makes you feel good about your own irrelevance.

And how does it do this?

Of course with an app ecosystem that would make Apple’s App Store look like a lemonade stand run by a kid who doesn’t understand capitalism yet. A million little integrations, all feeding the beast, all making it smarter, more powerful, more… intimate. You will be living in it. And you’ll probably like it.

That’s the scary part. It’ll be so convenient, so seamless, so… helpful. You’ll wonder how you ever lived without it. You’ll forget what it was like to make your own decisions, to book your own travel, to remember your own passwords. And that’s when it’s got you. Hook, line, and sinker. By the end of 2026, in this scenario, you’re not a user. You’re a node. A very, very happy node in a network that’s growing faster than anyone can control.

And by “coincidence”, OpenAI has just opened an App Store. In October, OpenAI announced a deeper “apps in ChatGPT” platform (SDK, directory, full integration) that turns ChatGPT itself into a kind of app-store ecosystem. And you can already connect apps like Figma and Spotify and interact with them through the chat interface.

So, you won’t have to touch those apps anymore.

It’s what I described in this blog: Macrohard is Musk’s middle finger to Microsoft | LinkedIn

The hardware play is key here.

There are whispers — credible ones — that OpenAI is working with Jony Ive to build actual devices. Smart speakers. Glasses. Maybe even a wearable pin (hahaha, lil’ pun), which sounds like a terrible idea but so did the iPhone once. These aren’t just accessories. They’re the physical manifestation of the super-assistant. They’re the way ChatGPT escapes the screen and enters your life. Your actual, physical, three-dimensional life. And once it’s there, once it’s in your pocket or on your face or whispering in your ear, it’s not going anywhere. You’ll be too dependent. Too comfortable. Too… integrated.

Read: So, two tech messiahs walk into a startup… | LinkedIn

Then there’s scenario two . . .

The dawn of the AGI researcher

This is the one that gets the techies all hot and bothered. This is the timeline where OpenAI stops pretending that it’s a consumer products company and goes all-in on its real mission, building God. Or at least a really, really smart wannabe hooman who doesn’t need bathroom breaks or health insurance.

Their roadmap is quite clear, and it’s been stated publicly: an “AI research intern” by September 2026, and a fully automated “legitimate AI researcher” by 2028.

Sure, it will help you write better emails or summarize your meetings. But it will also outsource the entire scientific method to a machine. This is about making human researchers look slow and, frankly, kind of dumb by comparison.

This is the “Gentle Singularity” that Sam Altman wrote about on his blog on June 10, 2025. A world where AI systems can “figure out novel insights”. Not just regurgitate what they’ve been fed, not just remix existing knowledge into new shapes, but actually think.

Create.

Discover.

His ambitions are… big. Like, “cure cancer” big, and “accidentally invent a new form of physics that unravels the fabric of reality and we all wake up inside a simulation run by a bored teenager in the year 3000” big. Either way, it’s a hell of a lot more interesting than a chatbot that can write a sonnet about your cat.

And especially if he decides to use my ADHD-based “Wander” algorithm, because honestly, I don’t think he’s gonna get there without it. Read: Attention isn’t all you need: The wanderer’s algorithm | LinkedIn

The technical strategy here is fascinating, if you’re into that sort of thing, that is.

Jakub Pachocki (gesundheit!), OpenAI’s chief scientist, laid it out in a livestream on October 28, 2025. They’re betting on two things: continued algorithmic innovation and dramatically scaling up “test-time compute”. That’s a fancy way of saying they’re going to let the AI think longer and harder about problems.

In other words, my algorithm.

Current models can handle tasks with about a five-hour time horizon, and future models. . . well, suffice it to say, they’re talking about dedicating entire data centers to a single problem. Just one problem. For days. Weeks. Maybe months. The computational version of a monk meditating on a mountaintop, but in this case the monk is made of silicon.

And this is where the whole “Public Benefit Corporation” thing comes in.

That transition was finalized on October 28, 2025, and it wasn’t a PR stunt.

Nah, it was a legal maneuver.

A way to keep the pitchforks at bay when the AI starts doing things that make people nervous. When it starts making discoveries that could be weaponized. When it starts asking questions we’re not ready to answer. The OpenAI Foundation, with its 26% ownership stake and its $25 billion commitment to curing diseases, is the friendly face of the revolution. The velvet glove on the iron fist. The thing you point to when someone asks “but what if this goes horribly wrong”.

In this scenario, somewhere by the end of 2026, the AI goes for its first Nobel Prize. Or its first war crime. Depends on who’s writing the history books.

Which brings me to scenario three . . .

The competitive gauntlet

This is the reality check. The cold shower. The hangover after the party. The one where OpenAI’s grand ambitions run headlong into the messy, complicated, expensive real world. Because here’s the thing . . . OpenAI might be the prom king right now, but the jocks from Google, Anthropic, and Meta — well, not Meta, take Manus — they’re already in the parking lot, and they’re not happy. And they’ve got money — and a vision.

And they’ve got talent.

They’ve got the infrastructure.

And they’ve got a chip on their shoulder the size of a small moon.

The leaked H1 2025 strategy memo admits as much.

It talks about the “biggest threat” from competitors who can “embed equivalent functionality across their products”. Google can bake Gemini into everything you already use. Your email. Your docs. Your search. Your life. They don’t need to convince you to switch. They just need to turn it on. And you’ll barely notice. Meta can do the same thing with Facebook and Instagram and WhatsApp.

They’ve got billions of users who are already addicted to their products. Adding AI is just the next hit.

And kids these days are already hooked on Meta’s AI, you know. Just a matter of scaling up.

And then there’s the money.

The sheer, mind-boggling, “how is this even a real number” cost of all this. Sam Altman himself said it’s “brutally difficult to have enough infrastructure”. He’s talking about a $1.4 trillion commitment. That’s a trillion. With a capital T. That’s more than the GDP of Spain! And for that kinda moolah you can buy a lot of data centers. A lot of electricity. A lot of cooling systems. A lot of things that can break, fail, catch fire, or get hacked by a teenager in Romania who’s bored on a Sunday afternoon.

And let’s not forget us humans.

The regulators who are just now waking up to the fact that we should have some rules about this stuff. The journalists who are starting to ask uncomfortable questions about bias and safety and what happens when the AI makes a mistake that kills someone. The people who are going to get very, very nervous when 90% of the internet is written by robots and you can’t tell what’s real anymore.

The public backlash about the suicides is already . . . significant. It could be a movement. It could be legislation. It could be a mass exodus to “human-only” platforms that promise authenticity but deliver mediocrity.

In this scenario, by the end of 2026, ChatGPT isn’t the king of the world. It’s just another app on your phone, fighting for your attention, trying to convince you that it’s better than the other guy’s chatbot. The revolution will not be televised, because it will be stuck in a beta test, waiting for the next funding round, praying that the investors don’t lose faith.

I don’t think this scenario is likely to happen though. Well, maybe in the EU, because the only thing the EU can do is legislate, not innovate. Read: AI, bureaucrats & lots of broken promises | LinkedIn. In the US it’s quite the opposite: Trump is trying as best he can to get rid of all AI legislation, including the GUARD Act, a bill lawmakers are pushing to regulate AI companions for minors.

But here’s where it gets interesting.

None of these scenarios exist in a vacuum.

They’re all connected.

They’re all feeding into each other. The super-assistant needs the research breakthroughs to get smarter. The AGI researcher needs the consumer product to generate revenue and gather data. And the competitive pressure forces both tracks to move faster, to take risks, to push boundaries that maybe shouldn’t be pushed quite so hard quite so fast.

And then there’s the hardware question.

Those rumored devices, like the smart speakers, the glasses, a phone or the wearable pins — they’re not only “gadgets”, they’re strategic moves. They’re OpenAI’s attempt to own the physical layer of the AI revolution. Right now, ChatGPT lives on your phone, which means it lives in Apple’s world or Google’s world. It’s a guest over there. A very popular guest, but still a guest. If OpenAI can get you to wear their hardware, to carry their devices, to make them part of your physical environment, then they own the relationship. Then they own the data. They own you.

And that changes everything.

The timing matters too.

Late 2026 or early 2027 for hardware launch. That’s the same timeframe as the AI research intern. That’s not a coincidence. That’s a coordinated assault on multiple fronts. Consumer products to fund the research. Research to improve the products. Hardware to lock in the users. It’s a flywheel. A very expensive, very ambitious, very risky flywheel, but a flywheel nonetheless. And you know who built the most profitable flywheel of all time — so successful that it has earned a place in management literature worldwide?

Amazon.

It’s called the Amazon Virtuous Cycle, often just shortened to “the Flywheel”.

And if it works, if even half of it works, OpenAI will be an infrastructure, a platform, and a layer of reality that we all have to navigate whether we like it or not.

So what’s the real story?

What’s the blended, overarching, “this is probably what’s actually going to happen” vision?

I call it the Pragmatic Revolution . . .

The pragmatic revolution

Ok, this is not as sexy as a singularity, and it’s not as dramatic as a flameout. It’s just… messy. Complicated. Human. It’s all three scenarios happening at once, but in different proportions and in different markets, for different people.

In this future, OpenAI keeps trying to do both things at once. They’ll keep building the super-assistant, because they need the money. They need the users. They need the data. They need to stay relevant in the consumer market so they can fund the research that actually matters. And they’ll keep chasing the AGI researcher, because that’s the whole point. That’s Sam’s mission. That’s the thing that gets him out of bed in the morning, even after he got fired and rehired in the span of a weekend in November 2023, in a corporate drama that made Game of Thrones look like a children’s birthday party.

And it’s also the “market” where most money can be made.

The AI intern will show up in late 2026, but it won’t be a world-changing event. It won’t be Skynet. It won’t be HAL 9000. It’ll be a tool. A really, really good tool, and it’ll start contributing to research papers. It will help scientists run their experiments faster and spend less time writing and formatting papers in APA and LaTeX, and rewriting every paragraph after one of their peers finds a little bump in their story.

And it’ll find patterns in data that humans would have missed (that is, if they opt for the “wanderer’s algorithm”).

😹

And then, slowly, quietly, without much fanfare, it’ll start to change things. The pace of scientific discovery will accelerate. Not overnight. Not in a flash. But measurably. Noticeably. And by 2027, we’ll look back and realize that something shifted in 2026. Something important.

That is, if the world hasn’t changed by then, which a few other scenario writers predicted earlier this year. Read: TechTonic Shifts | How five really smart guys think we’re all gonna get robot parents (and why that’s super weird but also kinda funny)

The competition will be fierce, but OpenAI will stay in the lead, mostly because they’ve got the brand and the momentum. They’re the ones everyone talks about. They’re the ones that became a verb. “Just ChatGPT it” is already entering the lexicon, the way “Google it” did twenty years ago. That’s not nothing. That’s cultural penetration. That’s mindshare. And in a market where everyone’s products are basically the same, mindshare is everything.

There will be a lot of people getting very rich off of it as well.

The investors who bet early on OpenAI. The employees who got stock options before the company hits a $1 trillion valuation if it IPOs. The consultants who help other companies “integrate AI into their workflow”, which mostly means selling expensive PowerPoint decks and then disappearing before anyone realizes nothing has actually changed. And there will be a lot of people getting very, very confused. People who don’t understand how any of this works. People who are scared. People who are angry. People who feel left behind by a future that’s moving too fast for them to keep up.

The wild card in all of this is whether OpenAI can actually execute. They’ve got the vision, and the talent, and they’ve got the money, at least for now. But they’ve also got a history of chaos. The firing and rehiring of Sam Altman in November 2023 was a sign that there are deep tensions inside the company about what it should be, and who gets to decide. The transition to a Public Benefit Corporation was supposed to solve that, but structures don’t solve culture problems. And OpenAI’s culture, from the outside at least, looks like a pressure cooker that’s one bad day away from exploding.

So yeah.

The Pragmatic Revolution. It’s not a clean narrative. It’s not a simple story. It’s not the kind of thing you can explain in a TED talk or a tweet or a viral LinkedIn post. But it’s the most likely one.

Signing off,

Marco

I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.

👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google and the AI engines appreciate your likes by making my articles available to more readers.
