More often than not, we hear the take: “Focus on the message, not the messenger.”
But what happens when the messenger is just a piece of code?
I know, right? My great-grandfather, or even my grandpa, would raise a brow reading this piece now.
Code? A messenger? How does code become a messenger? Well, sorry, Grandpa, the world is advancing so rapidly that even I can’t keep up.
So where was I? Right, a piece of code being a messenger.
We’ve deeply embraced this new pattern of life, and we’ve found a friend in our phone. We moved from a world where Facebook helped us reconnect with long-lost friends and even make friends with strangers, to a world where our closest confidant might just be an algorithm. And do you want to know the strangest part? When this “friend” talks, it feels like it has known us all our lives.
But what happens when this same friend is used by someone else to question the way you articulate your thoughts?
What’s she saying? you might ask. Where’s she heading with this?
Well, I’d like to walk you through a turn of events that occurred three days ago on the streets of X (Elon Musk’s mansion), you know, the platform formerly known as Twitter, where a founder poured out his heart.
He shared his story of how he deeply regretted starting his failed company, how he’d stepped aside, handed control to his co-founder, and appointed a CEO, only to discover that $700,000 was unaccounted for.
When you read his post, you can feel the pain in the founder’s tone, the exhaustion in his honesty. No doubt it was raw, one of those threads that stop you mid-scroll. You finish reading and find yourself reflecting: what can I learn from this man’s experience? It felt like a cry for understanding.
Because how else do you explain how a start-up once full of promise, with all the right investments, slipped through his hands?
But then things took an unexpected turn. A light shone on the founder, an ever-blinding light, I might say.
And this time it wasn’t empathy. It was a mirror held up so he could see his own reflection.
That mirror wasn’t held by another founder or a journalist.
It was held by an investor, someone who read the thread, took screenshots, and decided to feed them into ChatGPT and Claude.
She asked the machines to analyze the story to find inconsistencies, emotional undertones, and traces of self-reflection.
And when she shared the AI’s responses, all hell broke loose.
Suddenly, the internet had a new villain on its hands, not the founder who lost $700,000, not even the co-founder or CEO, but the investor who dared to ask a machine for perspective.
The comment sections and quotes turned into battlefields.
“You asked ChatGPT? Really?” some asked.
“This is silly,” others said.
Some felt it was invasive and outright wrong to outsource empathy, especially in someone’s vulnerable moment. And truth be told, before I wrote this piece, I could have gone in a different direction; I literally had three potential articles on my hands:
“Should AI Be the Judge of Human Reflection or Remorse?”
“Before You Analyze His Words, Remember He’s Still Healing”
“She Didn’t Stop Thinking, She Just Thought Differently”
But here’s where it gets interesting: I decided to pause amidst the uproar, the chaos, and every other person’s valid reaction, and I read the AI’s response. And honestly, it wasn’t malicious or heartless.
It didn’t mock the founder or question his pain. It simply noted that the founder’s post focused on pain and betrayal but lacked a deeper sense of growth, the kind of reflection that turns failure into perspective. In other words, it suggested the story could have ended with, “Here’s what I learned, and here’s how I’ll apply it to my next business venture.”
Truth be told, whether it was written by a person or a model, that’s not a bad observation.
And that’s when it hit me: we use AI until it’s used against us.
When it helps us write a report or summarize our emotions, we call it smart.
But when someone uses it to look at us, to question our narratives or poke holes in our stories, we suddenly call it “soulless.”
Maybe the problem isn’t the machine. Maybe we just don’t like the mirror it holds. Maybe the outrage wasn’t really about AI at all. Maybe it was about how uncomfortable it feels to be analyzed, to have our emotions examined by something that doesn’t feel it. But if we strip away the reaction, there’s still a question worth sitting with: “Why do we resist insights just because they come from unexpected voices?”
It’s the same way elders sometimes wave off a child’s wisdom, or experts wave off an amateur’s advice with, “What do you know?”, forgetting that truth isn’t about age or authority, but about clarity.
Maybe that’s what this whole moment wanted to teach us: that we resist insight not because it’s wrong, but because it comes from a voice we didn’t expect.
Whether that voice is a child, a stranger, or even a machine, maybe the real lesson is learning to listen even when the voice doesn’t sound like ours.
As a founder just starting my journey, this moment hit differently. I didn’t see a villain in this story. I saw a mirror for all of us.
A founder wrestling with loss.
An investor trying, perhaps clumsily, to interpret it.
And a crowd so busy defending humanity that we missed the humanity in the message.
I saw a conversation about accountability, growth, and how we process failure not through data or code but through perspective.
We say we want honest conversations about failure, reflection, and learning from mistakes, yet when one appeared, we turned it into a debate about who was allowed to interpret it.
Maybe AI didn’t replace empathy that day. Maybe it just held up a mirror, and we didn’t like what we saw.
As founders, builders, thinkers, and people trying to make sense of both technology and emotion, maybe the real challenge isn’t whether machines can understand us. Maybe it’s whether we can still understand ourselves. If I ever share a hard lesson online, I would want people to see beyond the surface emotion, too. Whether the insight comes from a person or a tool shouldn’t matter as much as the truth we take from it.
So, what do you think? Did we miss the point?
