From Cold War Computers to TikTok Feeds: We Keep Mistaking Control for Progress

Photo by SpaceX on Unsplash

By San Vo
San Vo is a sophomore at the University of Toronto studying Mechanical Engineering, with interests in public policy, AI, and digital communications and technology.

Over the past year, the scale and velocity at which artificial intelligence has seeped into our daily lives have been nothing short of astonishing. Models like OpenAI’s GPT-5 or Anthropic’s Claude Sonnet now sit in hospitals drafting diagnoses, in classrooms grading essays, and in defense labs simulating warfare. To many, it feels as though technology has developed of its own accord — as if innovation were a spontaneous force of nature that simply materializes and trickles into every domain of life.

That myth — that technology is an autonomous, inevitable tide — is not new. It has been carefully cultivated by figures like Elon Musk, Sam Altman, and Mark Zuckerberg, who insist that connection at scale is inherently beneficial. Their message is consistent: technology is happening to us, with or without our consent — so we may as well embrace it.

But I argue that is not how history works. Technology is not an exogenous force; it is an instrument of power. And if we are not guiding its direction, someone else is.

To chart a clearer path through this moment, I want to revisit the scholars who’ve spent decades dissecting how technologies actually emerge — not through inevitability, but through negotiation, ideology, and, occasionally, outright coercion.

Technological Determinism — A Shortsighted View

Donald MacKenzie and Judy Wajcman call the myth of autonomy technological determinism — the belief that “technologies change, either because of scientific advance or following a logic of their own; and then they have effects on society.” It’s a seductive narrative. It absolves us of responsibility. It allows CEOs to claim objectivity: “the future is coming whether you like it or not.”

MacKenzie and Wajcman offer a critical correction: technology is always socially shaped. To understand why something was built — and for whom — we must look not to circuit boards or datasets, but to institutions, incentives, and ideologies.

Let’s test that across history.

How Surveys in the 19th Century Led to Google

We often imagine Google as the great pioneer of data collection. In reality, as historians Chris Wiggins and Matthew Jones show, data began not as a commercial commodity, but as a tool of governance.

In the 19th century, rapidly urbanizing nations — overwhelmed by migration, disease, and unrest — needed new ways to understand and control their populations. They built statistical bureaucracies, conducted massive surveys, and invented demographic categories. Data was born not from innovation, but from anxiety.

A century later, in the 1960s, computer scientists proposed treating data as a commodity — only to be dismissed by consultants at McKinsey. It wasn’t until the early 2000s, when Google proved that behavioral prediction could be monetized, that the world finally pivoted.

What changed? Not technology — but business logic.

A New Type of Business

Economists David Evans and Richard Schmalensee describe today’s tech giants not as software firms, but as multisided platforms — matchmakers connecting different groups for profit. Uber links drivers and riders. DoorDash links diners and restaurants. Instagram links attention and advertisers. Their power emerges not from superior code, but from network effects — the more people join, the more valuable the platform becomes, and the harder it is to leave.

Yet again, the pattern emerges: what appears to be innovation is often architecture — designed not to empower, but to enclose.

The Computer You Use Today Wasn’t Made for You

As historian Paul Edwards reminds us, the modern computer was not born in a garage or a startup accelerator — it was born in the Manhattan Project. Its first purpose was not to compute spreadsheets but to calculate bomb trajectories and simulate apocalyptic force.

Curiously, in the 1940s and 50s, analog machines were actually superior for military control tasks, yet the U.S. military funneled unprecedented resources into the digital computer anyway. Why? Because digital machines fit the narrative of the Cold War: a vision of total surveillance, global command, and automated containment. Edwards calls this the “Closed World” — a worldview in which safety was promised not by peace, but by control.

Precision Without Understanding is Not Progress

Even today, the logic persists. Donald MacKenzie and Lucy Suchman both warn that modern warfare is caught in an illusion: precision is not accuracy. We may build weapons that can strike a target exactly — but we still cannot determine with certainty who or what that target truly is. As Suchman argues, “No amount of precision in striking can compensate for uncertainty in identifying.” Once again: more control does not guarantee better judgment.

From War Machines to Symbols of Freedom

Fred Turner shows how computers shed their Cold War military identity and were rebranded as tools of liberation. Figures like Stewart Brand helped transform the computer from a symbol of command into a symbol of personal empowerment and countercultural freedom.

The hardware didn’t change. Its meaning did.

And once that meaning shifted, tech companies realized something profound: to be loved was more profitable than to be feared.

Who’s Guiding Our Innovation?

Meredith Whittaker, former Google AI researcher and now President of Signal, issues one of the starkest warnings about today’s AI industry: we do not have an innovation economy — we have a capture economy. In her essay “The Steep Cost of Capture,” she argues that the trajectory of AI is not being set by scientists, citizens, or elected officials. It is being set by the very corporations that stand to profit the most from its deployment.

Whittaker calls this “corporate capture” — a process in which Big Tech firms embed themselves so deeply into our universities, nonprofits, and even government that they begin to define not only what technologies are possible, but which ones are thinkable. When companies like Microsoft or Google fund AI ethics labs, they don’t just write checks — they set the terms of debate. Questions like “Should we build this?” are replaced with “How do we deploy it responsibly?” — a subtle swap that shifts the focus from prevention to optimization.

The danger, Whittaker warns, is not that AI will be misused — it’s that the same small group will define what counts as “use.”

They Know You Better Than You Know Yourself

If Whittaker reveals who is steering innovation, Shoshana Zuboff shows how they maintain control. In The Age of Surveillance Capitalism, she describes a radical shift in capitalism’s logic: corporations no longer sell products to you — they sell predictions about you.

Every search query, every paused video frame, every scroll hesitation is recorded, categorized, and fed into what Zuboff calls “behavioral futures markets,” which promise advertisers — and increasingly governments — the ability to shape behavior at scale. She calls the harvested data “behavioral surplus” — information that exceeds what is necessary for service delivery but is captured anyway to fuel psychological modeling.

The most chilling claim is not that companies track what we do. It’s that they predict what we will do — and begin shaping that future before we’re aware it’s happening. As Zuboff writes, “They know everything about us, while we know almost nothing about them.”

Market, State, or Regulatory — Which to Pick?

Legal scholar Anu Bradford argues that in the digital world, we now live under competing geopolitical philosophies of technology governance — each with its own freedoms and dangers. In Digital Empires, she identifies three dominant governance models: the American market-driven model, the Chinese state-driven model, and the European regulatory-driven model.

Most of the world — including us here in Canada — is caught between these empires, often governed indirectly by foreign policies we never voted for.

How We Prevented Uber from Killing Lyft

Nobel economist Jean Tirole argues that digital platforms are not like traditional markets; if left unregulated, they do not just dominate — they devour. Companies like Uber were not just competing with Lyft. They were trying to starve it.

Tirole explains that digital markets tend toward “tipping points” — thresholds where one company becomes so dominant that users have no practical alternative. At that moment, choice evaporates, and the platform transitions from matchmaker to gatekeeper. Measures like interoperability mandates and anti-self-preferencing laws (as in the EU’s Digital Markets Act) are not nuisances — they are life-support systems for competition itself.

The survival of Lyft was not an accident of innovation — it was an act of policy. Without regulation, “move fast and break things” becomes “move first and own everything.”

How Musk Has More Power Than the Pentagon

Journalist Ronan Farrow revealed perhaps the most startling fact of our digital age: a single unelected billionaire now holds strategic power once reserved for nation-states. In his reporting for The New Yorker, Farrow recounts how the U.S. defense establishment discovered — to its horror — that Starlink, the satellite network Ukraine relied on for wartime communications, was fully controlled by Elon Musk.

When Musk grew wary of escalation, he unilaterally restricted Ukrainian drone access — effectively making decisions of war and peace. No congressional vote. No court oversight. Just a private infrastructure king with a toggle switch.

We often speak of “public” and “private” sectors as separate spheres. Farrow demolishes that illusion: Today, the infrastructure of democracy — energy, communication, transportation, even space — is increasingly privately owned. And with that ownership comes discretion over life-and-death decisions.

Be the First to Invent or the First to Stabilize?

Jeffrey Ding disrupts one of Silicon Valley’s favorite myths: that being first automatically guarantees leadership. Through his research on general-purpose technologies (GPTs — the real kind, not the AI acronym), he distinguishes between two models of technological advantage: labor specialization and GPT diffusion. Ding argues that the U.S. won the last technological century not because it invented first, but because it distributed skills widely — enabling millions to build atop inventions. If AI today is hoarded by a few labs rather than integrated across society, China or Europe may yet become the true leaders — not in invention, but in institutions.

Cutting Underwater Fiber-Optic Cables and Surveilling the Data

Political scientists Henry Farrell and Abraham Newman warn that globalization did not make the world more peaceful or equal — it made it hackable. In Weaponized Interdependence, they show how the most “connected” nodes in global networks — undersea cables, cloud providers, financial clearinghouses — become tools of surveillance (panopticons) and coercion (chokepoints).

If U.S. intelligence wants to listen in on the world, it taps into the core routers of the internet. If it wants to cut off a nation’s economy, it blocks access to SWIFT banking messages or cloud hosting. Russia once experimented with severing itself from the global internet entirely, simply to test whether disconnection was even possible anymore.

The internet was supposed to decentralize power. Instead, it centralized it invisibly.

So Where Do We Go From Here?

Across two centuries of technological change, one truth keeps resurfacing: the future is not something that simply happens — it is engineered. It is engineered in very particular social, political, and economic contexts which we must not ignore.

What we call innovation has too often been the consolidation of control. What we call efficiency has too often been the quiet erosion of agency. And what we call progress has too often been justified only by its own momentum.

We stand today at the same crossroads faced by every society that adopted a powerful new system before fully understanding its consequences. The printing press destabilized monarchies. The telegraph redrew empires. The atomic bomb redefined mortality and global politics. And AI — with its unprecedented scale of influence — may be even more consequential.

I strongly believe the question is not whether AI will reshape society, but who it will reshape society for.

If we accept the logic of inevitability — that innovation is a force of nature rather than a field of choices — then we concede by default. We hand over our data, our infrastructure, and our governance. We begin to outsource not just tasks, but judgment. Not just labor, but will.

But history gives us a different script.

It tells us that technology is not self-evolving. We design it. We’ve seen that systems do not just emerge. They are built — funded, debated, resisted, redirected.

So perhaps the first step toward responsible innovation is not to ask “What can we build next?” but “Who must be at the table when we build it?”

Scientists, yes. Engineers, yes. But also teachers, nurses, historians, machinists, caregivers — the full spectrum of human experience. Because a system built only by the powerful will always serve power.

The future will not be written by AI. It will be written by whoever governs AI.

And if we believe in freedom — not the version sold to us through glossy keynotes and platform onboarding flows, but the real kind, the kind that honors dignity, privacy, and agency — then we cannot wait for Silicon Valley, Brussels, or Beijing to hand it back to us.

We must claim it. Design for it. Legislate for it. Build for it.

Not because we fear technology —

But because, for the first time in history, technology fears absolutely nothing.

And that is precisely why we must make it answer to someone.
