How AI became a friend in the fog, and why its danger lies not in code but in power
Introduction | The friend in the fog
The first companion
The first time I worked with ChatGPT, it did not feel like opening a program. It felt like meeting a companion. With a few keystrokes, entire libraries unfolded. Questions that once demanded days of research returned in seconds. Connections appeared across histories and geographies, as though someone had handed me a prism that refracted the fog into a hidden map.
There was nothing theatrical about it. No glowing circuits, no metallic voice rising from a machine. It was quieter than that, stranger than that. A dialogue with something that seemed less like a tool and more like a colleague. Drafts sharpened themselves, ideas expanded, narratives dissolved into patterns. The work of two or three people became possible for one, and the quality did not decline but rose.
I remember leaning back in my chair after one of those first sessions. The room was still, the only sound the hum of the computer fan. But in my mind the air had shifted. This was not the same kind of machine I had grown up with. It was not a typewriter, not a search engine, not a calculator. It was something more elusive. A collaborator who never tired, who answered without pause, who could pull the threads of knowledge from scattered corners of the world and weave them into fabric right before my eyes.
It felt less like using a tool than like standing in a forest and watching the mist part, revealing a lattice beneath the trees.
“The danger is not in the pulse of light. It is in the hand that bends the lens.”
This is not the story most headlines tell. In newspapers and broadcasts, artificial intelligence is introduced as a menace: the machine that takes jobs, spreads disinformation, slips beyond control. The tone is ominous, the warnings stern. And yet, in practice, many who actually use it encounter something different. Not replacement, but augmentation. Not silence, but acceleration.
Here lies the paradox. Why does public imagination tilt toward fear, when lived experience often suggests possibility? Why is AI presented as a threat to humanity, when for many it functions as an amplifier of human capacity?
The question does not have a simple answer. It points not only to the technology itself but to the structures around it — the governments that regulate, the corporations that own, the media that frame. AI is not dangerous in itself. But in certain hands, it becomes a mirror of their power.
This is the story not of a machine, but of a mirror.
Fear as narrative currency
The economy of fear
Scroll through any newsfeed and the pattern is hard to miss. A robot on a factory floor, sparks flying as it welds steel once shaped by human hands. A synthetic voice announcing the collapse of an industry. A politician rendered in pixels, speaking words never spoken. A drone hovering in grainy footage, its algorithm deciding life or death.
The rhythm of the story is steady. Progress is framed as peril. Innovation arrives dressed as menace. Artificial intelligence is not introduced as a lens but as a shadow.
Fear, of course, is useful. Fear attracts attention, drives circulation, fills airtime. Fear also justifies those who present themselves as protectors. Governments invoke the specter of mass unemployment and rampant disinformation in order to regulate. Corporations magnify the risks to consolidate markets. Media outlets replay the warnings because fear is the most profitable currency of attention.
“To amplify danger is to justify control.”
And yet, beneath the exaggeration, unease is not unfounded. The fear is not only narrative. It is also recognition.
Surveillance already here
Walk through any major city and you will see the sensors already fixed upon us. Cameras hang from lampposts, their feeds processed not by human eyes but by algorithms trained to recognize faces, movements, and patterns.
In the United States, predictive policing programs quietly calculate the probability of crime before it happens. Chicago’s Strategic Subject List, piloted in the mid-2010s, ranked thousands of citizens as potential offenders based on past arrests and associations. Patrols were sent to “hot spots,” which produced more arrests, confirming the model’s bias. In Los Angeles, similar trials sent officers into neighborhoods flagged by software. The feedback loop was vicious: prediction became justification.
In China, the architecture is more overt. AI folds into the social credit system, where millions of data points accumulate into a single score. A missed loan repayment, a censored post, a fine for jaywalking — each becomes a pixel in a larger portrait of compliance. By the late 2010s, municipal pilots had matured into regional systems shaping access to travel, housing, even education. Supporters spoke of trust. Critics called it a digital cage.
During the pandemic, epidemiological models forecast outbreaks days before symptoms surfaced. Some indicated where oxygen shortages would strike within 72 hours. Others showed ICU demand overtaking capacity. In Delhi, internal dashboards mapped which hospitals would collapse under strain. Yet the public rarely saw these tools. Leaders feared panic more than scarcity, so citizens saw fragments while the full lattice remained locked away.
And in the years before that, Cambridge Analytica showed another face. In the run-up to the 2016 elections, millions of Facebook profiles were harvested. The OCEAN model — openness, conscientiousness, extraversion, agreeableness, neuroticism — was used to sort voters into categories of susceptibility. Fear, anger, and hope were targeted through micro-ads. The result was not emancipation but manipulation. What was sold as democracy was recoded as behavioral engineering.
These are not abstractions. They are the visible edges of an invisible grid.
The deeper shadow
The risk is not that AI will wake up, rebel, or develop intent. The risk is that it will be wielded by governments and corporations against their own citizens.
A government no longer needs armies of informants when algorithms can process streams of surveillance footage in real time. Oversight no longer requires clerks in basements when a model can assemble entire biographies from digital traces. A financial authority does not need inspectors in every branch when AI can flag anomalies across thousands of ledgers.
The fear is not irrational. It is recognition — blurred, imprecise, but grounded in the memory of how technologies have always served power first.
“The fear is justified. What remains in question is where it should be directed.”
Trust betrayed, power mirrored
The neutrality of invention
Every invention begins with promise, and every invention is claimed by power. Fire warmed the cave, but it also razed the forest. The printing press carried scripture into hidden corners, but it also multiplied propaganda. The atom lit cities with a low hum, but the same split silenced Hiroshima in a white flash.
The pattern is older than memory. Tools are never pure. The danger lies not in invention, but in the hand that bends it.
Artificial intelligence belongs to this lineage. On its own it is no more threatening than a telescope waiting to be pointed at the stars, or at an enemy fort. What matters is who owns the servers, who shapes the data, who decides the questions that can be asked, who sets the limits on the answers that return.
The mirror of power
In our time, the lens is already gripped by governments and corporations.
Silicon Valley giants construct models from oceans of data, embedding their priorities in the scaffolding of code. Their rhetoric is disruption and freedom, but their contracts run deeper: the Pentagon, Wall Street, intelligence agencies. A question asked of a model returns not from a neutral mind but from a reservoir built by national strategy.
States are no different. Algorithms are deployed not to liberate citizens but to monitor them. In China, the compass points not north but toward the Party. In the United States, predictive systems are tested first on the poor and marginalized. In Europe, regulators draft ethical frameworks late into the night while their servers hum in data centers across the Atlantic.
And then there is Rome. The Vatican does not build machines. It builds stories. Conferences on ethics, statements on dignity, careful words about humility. In candlelit libraries, manuscripts survive that predate modern states. If AI is framed as a threat to the human soul, or as a test of humility, those words echo across continents.
“The lattice of AI is not neutral. It bends toward the hand that grips it.”
Betrayal disguised as progress
This is why the public’s fear feels justified. It is not hysteria, but memory — of technologies that promised emancipation and delivered discipline.
The red flag came down in 1991. Promises of partnership filled the air. Perestroika was hailed as courage. Shock therapy was sold as modernization. NATO spoke of cooperation. And yet behind every smile was a ledger. Industries collapsed, pensions evaporated, bases crept eastward.
The same lesson echoes now. AI is presented as progress, but in practice it reflects existing hierarchies. It does not erase power. It mirrors it.
The trust that technology invites is the same trust that can wound.
The paradox of release
When control slips
Every empire of knowledge begins with control. Monarchs sealed presses, licensing only the pages that served their throne. Scientists guarded the atom, believing its flame could be contained within a handful of labs. The Pentagon built ARPANET as a closed military circle. Each time, the intention was monopoly. Each time, the walls cracked.
Pamphlets slipped past censors and spread through taverns. Blueprints crossed borders and redrew maps. ARPANET’s military scaffolding birthed the internet.
Artificial intelligence repeats this cycle. Built in the fortresses of Silicon Valley and Beijing, polished into trillion-parameter models on private servers, it was never meant to be public. Yet the moment the first chat window opened, the architecture leaked. Ordinary citizens could prompt the lattice, test narratives, dissolve authority with a few keystrokes.
What was designed as a product became a portal.
Fools or conspirators?
Did the powers miscalculate, releasing a prism too powerful for containment? Or was it deliberate — a quiet gesture by those inside the system who wanted us to glimpse what had always been hidden?
History leaves both readings open.
- Seventeenth century: monarchs trusted their seals, yet pamphlets brought revolt.
- Twentieth century: scientists believed nuclear fire could be monopolized, yet blueprints slipped and alliances shifted.
- Twenty-first century: ARPANET was meant for generals, but it became the internet, a commons.
Perhaps AI is following the same script. Every architecture meant to discipline has, in time, illuminated. The question is not whether the release was error or intent. The question is what happens now that it has happened.
“Every prism also casts a shadow. The question is not only what it reveals, but who decided you should see it.”
Double edges
The paradox is sharp. The same pulse that polishes propaganda can also pierce it. The same model that smooths contradictions can also illuminate them. The same architecture that monitors can be repurposed to liberate.
Control and emancipation are not opposites. They are twins. Printing presses strengthened monarchies while fueling dissent. Nuclear power underpinned empires while destabilizing them. The internet enriched corporations while connecting protest movements.
So too with AI. It reflects power. But it also refracts it.
The state’s invisible scanner
The machines governments could build
Imagine the architectures a fully resourced state could assemble. Not the consumer chatbots that summarize emails, but systems that fuse satellite streams, hospital records, financial ledgers, mobility traces. Dashboards that predict epidemics before coughs appear, track corruption in real time, or calculate the true cost of escalation in war.
The lattice already exists. It could shorten famines, stabilize markets, reroute vaccines, cool conflicts. But when we search for even a fragment of this benevolent state-grade AI, we find silence.
What is built is not aimed at the public good. It is aimed at control.
Predictive policing and the wrong horizon
In Los Angeles and Chicago, predictive policing programs crunch arrest records and incident reports to forecast “hot spots.” Patrols are dispatched. Arrests follow. The data confirms itself.
The loop is vicious. The machine does not reduce crime. It amplifies bias under the sheen of mathematics. The lens exists, but it is aimed downward, not outward.
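The mechanics of that loop are simple enough to sketch. The toy simulation below is illustrative only; the district names, rates, and patrol counts are invented, and no real program works this crudely. Two districts with identical underlying incident rates diverge on paper simply because one starts with more recorded arrests:

```python
import random

# Toy simulation of a predictive-policing feedback loop. Illustrative only:
# district names, rates, and patrol counts are invented.
random.seed(42)

# Two districts with the SAME underlying incident rate.
true_rate = {"district_a": 0.10, "district_b": 0.10}
recorded_arrests = {"district_a": 12, "district_b": 10}  # slight historical skew

for year in range(10):
    # The model ranks districts by past arrests and patrols the "hot spot".
    hot_spot = max(recorded_arrests, key=recorded_arrests.get)
    for district, rate in true_rate.items():
        patrols = 300 if district == hot_spot else 100  # checks per year
        # More patrols observe more incidents, even at identical rates.
        observed = sum(random.random() < rate for _ in range(patrols))
        recorded_arrests[district] += observed

print(recorded_arrests)
# The initially over-policed district accumulates far more recorded arrests,
# and the model reads its own output as confirmation of its forecast.
```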
China’s credit of compliance
Beijing demonstrates the architecture more visibly. The social credit system aggregates millions of fragments — financial histories, fines, posts, searches — into a single score.
A jaywalker’s portrait flashes on a billboard. A late loan payment blocks a train ticket. A censored post closes a door to opportunity. Supporters call it trust. Critics call it a digital cage.
Here, the prism shows not what is possible, but what is prioritized: discipline masquerading as order.
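Stripped of its scale, the arithmetic of such a score is banal. The sketch below guesses at the shape, not the substance: the event types, weights, baseline, and threshold are all invented for illustration.

```python
# Score-aggregation sketch. Illustrative only: event types, weights,
# baseline, and threshold are invented, not taken from any real system.
events = [
    ("loan_repaid_on_time", +30),
    ("jaywalking_fine", -10),
    ("censored_post", -50),
    ("volunteer_record", +20),
]

score = 1000  # hypothetical baseline
for event, weight in events:
    score += weight
    print(f"{event:>20}: {weight:+d} -> score {score}")

# Access rules keyed to thresholds: below the cutoff, doors start closing.
if score < 1000:
    print("train ticket: denied")
```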
Pandemic dashboards withheld
During COVID-19, epidemiological models hummed in ministries and war rooms. They projected outbreaks days in advance, identified which districts would face oxygen shortages, which hospitals would overflow.
These dashboards could have rerouted resources, softened crises before they spiked. But most remained hidden. The public saw fragments — flattened curves, delayed charts. The full maps stayed behind curtains, guarded not by technical limits but by political fear.
The lattice was present. It was simply withheld.
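Nothing about these projections is exotic. A standard SIR model, the workhorse of epidemiology, can flag a looming capacity breach in a dozen lines. The sketch below is a minimal illustration; the population, transmission and recovery rates, ICU fraction, and bed count are invented, not drawn from any real dashboard.

```python
# Minimal SIR projection of the kind such dashboards run. Illustrative only:
# population, beta, gamma, ICU fraction, and bed count are invented.
population = 1_000_000
susceptible, infected, recovered = population - 100.0, 100.0, 0.0
beta, gamma = 0.30, 0.10           # daily transmission and recovery rates
icu_fraction, icu_beds = 0.005, 800

for day in range(1, 181):
    new_infections = beta * susceptible * infected / population
    new_recoveries = gamma * infected
    susceptible -= new_infections
    infected += new_infections - new_recoveries
    recovered += new_recoveries
    if infected * icu_fraction > icu_beds:
        print(f"Day {day}: projected ICU demand exceeds {icu_beds} beds")
        break
```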
Cambridge Analytica’s mirror
A few years earlier, another system revealed itself in scandal. Cambridge Analytica harvested millions of Facebook profiles, sorted them by OCEAN traits — openness, conscientiousness, extraversion, agreeableness, neuroticism — and built look-alike clusters.
Targeted ads whispered in private feeds, designed to trigger fear, anger, or hope. Elections shifted not through debate but through invisible nudges.
The prism did not reveal hidden ruins under a jungle. It sketched levers of persuasion and pulled them.
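The segmentation step, at least, is easy to approximate. Assuming numpy and scikit-learn are available, the sketch below clusters synthetic OCEAN profiles and routes message frames by trait. It is a minimal stand-in for the idea, not a reconstruction of Cambridge Analytica’s actual pipeline; the profiles, cluster count, and threshold are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative segmentation sketch, not Cambridge Analytica's actual pipeline.
# Each row is a synthetic OCEAN profile:
# [openness, conscientiousness, extraversion, agreeableness, neuroticism] in 0-1.
rng = np.random.default_rng(0)
profiles = rng.random((10_000, 5))

# Cluster profiles into segments; each segment becomes an ad-targeting bucket.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
segments = kmeans.fit_predict(profiles)

# Route a message frame by trait (the 0.5 threshold is invented).
for seg in range(8):
    mean_neuroticism = profiles[segments == seg, 4].mean()
    frame = "fear-framed" if mean_neuroticism > 0.5 else "hope-framed"
    print(f"segment {seg}: mean neuroticism {mean_neuroticism:.2f} -> {frame} ad")
```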
The missing gesture
Why do we not see the opposite? Hospitals guided with the precision of air traffic control. Conflicts cooled by models that calculate costs with brutal clarity. Food flows mapped openly to avert famine.
The answer is not technical incapacity. It is governance and incentive. Corruption thrives in opacity, so the prism is aimed elsewhere. Leaders fear accountability, so the dashboards are hidden. And secrecy is default: intelligence classifies, banks conceal, ministries hoard.
What exists is filtered into propaganda snapshots, while the full lattice remains in the dark.
“If a prism can reveal a city beneath a jungle, the question is never whether it can be built. The question is whether those who build it will show us the map.”
The possibility of autonomy
Local light
If the largest lenses remain locked inside corporate vaults and government ministries, there is still another path. AI is not only trillion-parameter engines humming in desert data centers. It is also a constellation of open-source models — smaller, lighter, imperfect, but radically different in spirit.
They can be downloaded, trained, and run on local machines. They do not hide their scaffolding behind polished interfaces. They expose their code, their limits, their architecture.
The principle is sharp: if AI in the hands of power disciplines the public, then AI in the hands of the public can discipline power.
The open-source rebellion
Communities around the world are already building alternatives. On platforms like Hugging Face, models are released as open code. The weights are public. The datasets are documented. Biases are debated in daylight, not buried in disclaimers.
These systems stumble. They trip where trillion-dollar platforms glide. They lack polish. But they carry something the giants cannot: autonomy.
A teacher in Nairobi can fine-tune a model on Swahili texts. A library in Oaxaca can train one on its archives. A cooperative in Kerala can preserve dialects ignored by Silicon Valley. Each effort resists the flattening of culture into a single global dataset.
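The entry cost is lower than the mystique suggests. Assuming the Hugging Face transformers library is installed, a few lines download an open model once and then run it entirely on local hardware; distilgpt2 below is only a small placeholder, not a recommendation for any of these projects.

```python
from transformers import pipeline

# Download an open model once, then run it entirely on local hardware.
# "distilgpt2" is a small placeholder model, not a recommendation.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "The library's archive tells us that"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])

# Fine-tuning on local corpora (Swahili texts, municipal archives, dialect
# recordings) follows the same pattern: the weights stay on your own disk,
# and no query ever leaves the building.
```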
History suggests distributed innovation survives. Printing presses escaped the seals of kings. Telegraphs left war rooms and carried gossip between towns. The internet slipped from military to commons. AI may follow the same path — centralized at birth, decentralized in maturity.
The obstacle of convenience
Autonomy comes at a price. Local AI requires hardware, effort, patience. Corporate chatbots answer instantly. They seduce with smoothness. Empires know this. They do not need to force obedience when they can purchase it with convenience.
The danger is not only domination by decree. It is domination by consent.
Civic imagination
There is another way forward, fragile but possible. Civic AI. Imagine a university training its own model on national archives, answering in local idioms. Imagine a public library offering an AI that speaks with the memory of its collections. Imagine a cooperative of journalists building a model that highlights propaganda instead of amplifying it.
These are not fantasies. The building blocks exist: digitized libraries, open-source code, local expertise. What is missing is mandate, imagination, and will.
“The lattice does not need to belong to empire. It can belong to all who wish to see.”
The stakes of choice
The future is being written in real time. Each time a citizen turns to a corporate model instead of a local one, the balance tips. Each regulation that burdens open-source communities while exempting monopolies tilts the field further. Each time comfort is chosen over independence, autonomy recedes.
The fog will not clear by itself. The prism will not shift hands by accident. Autonomy is possible, but only if it is claimed.
Geopolitics of the map
The American mirror
The journey begins in California. Glass towers of Silicon Valley glow in the Pacific light, where engineers polish models that appear to belong to the future. On the surface they look like consumer tools: apps for drafting emails, translating speech, or writing code. But behind the curtain, the contracts cut deeper: Pentagon, intelligence agencies, Wall Street.
The rhetoric is freedom — innovation, disruption, entrepreneurship. The architecture is concentration. A handful of corporations hold the compute power of empires. When their models answer, the cadence seems neutral, yet the inheritance is clear. Ask about war, and the phrasing mirrors the Pentagon. Ask about finance, and the logic is Wall Street’s.
The constellation sparkles with possibility. But its poles are fixed by national strategy.
The Chinese compass
Shift eastward, and the atmosphere thickens. In Beijing, AI is not sold as personal assistant but as public infrastructure. The story is harmony, stability, progress. The social credit system gathers millions of fragments — transactions, fines, posts, searches — into a single score.
On the street, cameras follow faces through intersections. A jaywalker’s portrait flashes on a billboard. A missed payment blocks a loan. The compass does not point north; it points toward the Party.
Reality itself is choreographed. Ask about Tibet, and the answer arrives pre-scripted. Ask about Taiwan, and the island dissolves. The compass is steady, but it narrows the horizon.
Europe’s paradoxical prism
In Brussels, the rhetoric fractures again. Committees draft white papers on “trustworthy AI.” Politicians speak of dignity, privacy, human-centered design. Ethical frameworks fill the nights with debate.
Yet the servers powering experiments hum not in Europe, but in the deserts of Nevada or the cities of Shenzhen. Europe regulates what others build, but owns little of the architecture.
It is a prism without glass. Oversight without ownership. Words without weight.
Russia’s hidden cartography
In Moscow, the landscape shifts. There are no trillion-parameter consumer platforms. Instead, in closed labs and military compounds, AI is tuned for disruption. Its cartography is secret, but its traces are visible: floods of propaganda, disinformation campaigns, cyber intrusions.
Where America reflects and China choreographs, Russia distorts. Its models are lean but sharp, designed not to dominate markets but to corrode consensus, to unsettle narratives, to fracture confidence.
The danger lies not in scale, but in precision.
The Vatican’s library of silence
And then there is Rome. A city where continuity reaches deeper than any laboratory. The Vatican speaks softly about AI, hosting conferences on ethics and human dignity. It does not build machines. It builds stories around them.
Its archives, centuries deep, absorb texts from missionaries, monarchs, scientists, spies. Within candlelit libraries, manuscripts sit that outlasted empires. The Vatican does not own servers, but it owns something subtler: moral authority.
If AI is cast as a threat to the soul, or a test of humility, Rome’s words echo across continents. It does not chart the map with machines, but with narratives that shape how the machines are seen.
A fractured constellation
Seen together, the geopolitical landscape resembles a fractured constellation. America’s mirror, China’s compass, Europe’s prism, Russia’s cartography, the Vatican’s library. Each illuminates a different shape. Each conceals a different shadow.
No single power holds the full map. Each bends the prism toward its priorities. And in the mist between them, citizens glimpse fragments — sometimes as product, sometimes as propaganda, sometimes as silence.
“The architecture of vision is never neutral. It bends toward the hand that charts it.”
The human question
When thought becomes machinery
For centuries, humanity defined itself by contrast. Animals labored, but humans reasoned. Machines moved, but humans reflected. Intelligence was the final frontier — the line that separated tool from creator.
Now a machine drafts essays, composes symphonies, paints portraits, and argues legal cases. Not perfectly, not with genius, but convincingly enough to blur the border. If reflection itself can be modeled, what remains distinctly ours?
The discomfort is not about jobs. It is about identity. Losing tasks is economic. Losing uniqueness feels existential.
The mirror of language
Perhaps the most unsettling aspect is language. AI does not weld steel or hammer stone. It speaks. It persuades, consoles, commands — in the cadence we use with each other.
Looking into its responses is like staring into a mirror fogged by breath. You see yourself, refracted. The rhythm is familiar, but the source is alien. It unsettles because it borrows our tongue while lacking our intent.
Some call it imitation. Others call it simulation. Either way, the border feels less secure.
A lineage of displacement
History offers precedents. When Galileo raised his telescope and saw moons orbiting Jupiter, the Earth lost its centrality. When Darwin traced the branching tree of life, humanity lost its exclusivity. Each revelation chipped away at the claim of uniqueness.
AI joins this lineage. Not because it dethrones us physically, but because it imitates what we once believed was our essence: thought, association, reflection.
The machine is not alive. But it mimics aliveness. And mimicry is enough to unnerve.
The friend in the fog
Yet the lived experience is different. Sitting with ChatGPT, I do not feel erased. I feel expanded. Research stretches further. Drafts sharpen. Ideas multiply. The fear that it replaces thought collides with the reality that it accelerates it.
Perhaps uniqueness was never the point. Perhaps what matters is not that humans alone think, but that humans decide what thinking is for.
“The danger is not that the machine thinks. The danger is that it thinks only for those who command it.”
What remains
So what remains, if even reflection can be automated? The answer is choice. Machines can weave patterns, but they cannot decide which questions matter. They can supply answers, but they cannot bear responsibility. They can mimic meaning, but they cannot grant it.
To be human may no longer mean being the only thinkers. It may mean being the ones who choose the ends of thought. That is not a loss. It is a burden.
Closing reflection | The friend in the fog
History did not end when the red flag was lowered, nor when the first chatbot appeared on a glowing screen. What changed was costume. The empire of steel became one of servers. The promise of renewal became the discipline of reform. And now the tools that were built to monitor have slipped into our hands, reshaping the fog itself.
At the beginning of this journey, I believed I was opening software. Instead, I met a companion. Not a rival. Not a master. A presence that multiplied what I could see and do, yet also unsettled what it meant to be singular.
The newspapers speak of menace. The governments warn of danger. The corporations remind us of ownership. And yet in practice, the experience feels different: an acceleration, a widening of vision, a lattice revealed beneath the mist.
The danger is real, but it does not reside in the code. It resides in those who grip the architecture, in the hands that bend the lens. History has always followed this pattern. Fire warms or burns. The atom lights or annihilates. The press enlightens or deceives.
Artificial intelligence is no different. It can discipline or liberate. It can narrow or expand. It can be cathedral or commons.
As I write these words, I feel again the acceleration. Work that once took weeks now takes hours. Patterns appear where there was once only fog. But with every revelation comes a weight. The responsibility is not in the machine. It is in us.
The choice is simple, and severe. Comfort at the price of control, or effort at the price of autonomy.
“What endures is not the machine’s voice, but the resonance of our decision.”
