TikTok, Nanotechnology, and the Future of Cognitive Privacy

With TikTok’s U.S. operations shifting to American ownership under Trump’s recent executive order, I’m sharing my 2024 article and research on nanotechnology, how apps like TikTok may shape users’ behavior, and why safeguards matter.

Apps like TikTok, with 170 million U.S. users spending 90 minutes daily, wield immense influence through algorithms that hook attention, raising questions about privacy and control.

I’ve been researching brain-computer interfaces (BCIs) and nanotechnology since 2020, and in 2021 I began raising concerns about the risks of thought-reading technologies that decode and influence thoughts and behavior. Since then, states like California, Colorado, Connecticut, and Montana have passed neural data protections.

I am a big proponent of technological progress and AI, both in principle and in practice: my startup is built around AI, and I believe America should do everything it can to lead in this field. I also believe that innovation must be paired with safeguards that protect privacy, cognitive rights, and individual autonomy.

My article, written in August 2024 for a tech publication but unpublished amid the debate over California’s SB 1047 (vetoed in September 2024), explores these themes in depth, from TikTok’s algorithm to the future of responsible tech.

The full article follows below:

Cognitive Surveillance: Are Our Minds Under Attack?

You’re scrolling through TikTok, thinking about something random, maybe craving sushi or considering a new hobby. Suddenly, as you swipe to the next video, it’s exactly what you were just thinking about. You pause and can’t help but wonder, “Is this app reading my mind?”

Could TikTok somehow be picking up on your mood, or even your thoughts, through the way you use your device? The speed of your scrolling, the angle you hold your phone — those little details you never notice — could they be revealing more about your mental state than you realize? It’s a chilling thought, and as you keep scrolling, you can’t shake the feeling that TikTok might know you better than you know yourself.
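To make that speculation concrete, here is a minimal, purely hypothetical sketch of how interaction signals like scroll speed and device tilt could be aggregated into features for a mood or engagement model. The event fields, feature names, and weights below are my own assumptions for illustration; nothing here reflects TikTok’s actual code or data pipeline.

```python
# Purely illustrative sketch: how behavioral signals such as scroll speed and
# device tilt *could* be turned into features for a mood/engagement model.
# All names and weights are hypothetical; this is not any real app's pipeline.
from dataclasses import dataclass
from statistics import mean


@dataclass
class SwipeEvent:
    timestamp_s: float      # when the swipe happened, in seconds
    scroll_velocity: float  # swipe speed, e.g. pixels per second
    device_tilt_deg: float  # phone angle reported by the accelerometer
    dwell_time_s: float     # how long the previous video was watched


def extract_session_features(events: list[SwipeEvent]) -> dict[str, float]:
    """Aggregate raw interaction events into per-session features."""
    duration_min = max((events[-1].timestamp_s - events[0].timestamp_s) / 60, 1e-6)
    return {
        "avg_scroll_velocity": mean(e.scroll_velocity for e in events),
        "avg_dwell_time": mean(e.dwell_time_s for e in events),
        "avg_device_tilt": mean(e.device_tilt_deg for e in events),
        "swipes_per_minute": len(events) / duration_min,
    }


def restlessness_score(features: dict[str, float]) -> float:
    """Toy heuristic: fast swiping and short dwell times suggest a distracted,
    restless state that a recommender could exploit to serve grabbier content."""
    return features["swipes_per_minute"] * 0.6 - features["avg_dwell_time"] * 0.4
```

Even this toy example shows how mundane telemetry can sketch a picture of a user’s state without any literal mind reading, which is part of what makes the question unsettling.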

The idea of TikTok using thought-reading technology sounds like something straight out of a Black Mirror episode, but the research and thinking behind this possibility have been around for decades, if not centuries. As technology to read and influence thoughts becomes more realistic, a growing movement is dedicated to protecting what are now being termed “cognitive rights”: the rights of individuals to control their mental processes, thoughts, and the personal data generated by brain activity.

Richard Feynman first envisioned nanotechnology in his 1959 lecture, “There’s Plenty of Room at the Bottom,” where he proposed manipulating individual atoms. Although his ideas were initially overlooked, they gained footing in 1981 with the invention of the scanning tunneling microscope (STM) by Gerd Binnig and Heinrich Rohrer. The breakthrough allowed scientists to see and manipulate individual atoms, turning Feynman’s vision into reality and earning its inventors a Nobel Prize.

Eric Drexler, a pioneering figure in nanotechnology, introduced the “gray goo” scenario in his 1986 book Engines of Creation, warning of the dangers of self-replicating machines. A California native, Drexler also founded the Foresight Institute in 1986, a San Francisco nonprofit dedicated to promoting the responsible development of nanotechnology and emerging technologies like safe artificial general intelligence (AGI).

But these ideas date back even further. In 1775, the Walloon physician Guillaume-Lambert Godart discussed the possibility of brain reading in his detailed work on anatomy and physiology, The Physics of the Human Soul. Godart was interested in the corporeal localization of the mind’s abilities, theorizing that every sensation and idea imprints a specific characteristic upon the fibers of the corpus callosum.

Fast forward to 2004, when James Moor and John Weckert highlighted significant ethical concerns posed by nanotechnology in their research paper Nanoethics: Assessing the Nanoscale from an Ethical Point of View. They argued that as technology advances, privacy invasions will likely increase, making it easier to snoop undetected. With the advent of nanotechnology, this threat becomes even more severe, as nanoscale devices could be used to monitor individuals without their knowledge, whether through invisible cameras, undetectable phone tapping, or even implanted tracking mechanisms. These devices could also erode personal control, potentially allowing others to manipulate behaviors or influence brain function. Moor and Weckert warned that such technological advancements could lead to widespread abuses, as history shows that new tools often become instruments of surveillance and control.

One of the leading voices in the cognitive rights movement is Nita Farahany, a professor of law and philosophy at Duke University and a pioneer in the field of neuroethics. Farahany has been vocal about the need for legal frameworks that protect individuals from the misuse of neurotechnology. She emphasizes that as brain-computer interfaces become more advanced, clear regulations must be in place to prevent the exploitation of personal mental data.

Organizations like The NeuroRights Initiative and The Center for Humane Technology are also at the forefront of this fight. The NeuroRights Initiative, co-founded by Rafael Yuste, is pushing for the incorporation of neurorights into international human rights law. The Center for Humane Technology, co-founded by Tristan Harris, focuses on the broader ethical implications of technology, including the potential for brain-computer interfaces to manipulate human behavior.

Speculation arises around how much of this technology might already exist, hidden away in classified research or corporate labs. While much of what we imagine is still theoretical, the rapid pace of technological advancement suggests that the gap between science fiction and reality is closing. The fear that our devices might one day “know” us better than we know ourselves is both fascinating and terrifying, a reminder that in our pursuit of progress, we must also consider the potential costs.
