The Great (Brain) Heist: How TikTok Hijacks Your Brain And Why That Should Be Illegal (Part II)

Disclaimer: I have nothing against social media networks in general — quite the opposite, actually. I use some of them very regularly (read: too much), and I think they’re, for the most part, fantastic tools that help us stay connected in a globalised world. TikTok is an exception.

Writing about difficult topics is… well… difficult. Who knew, right? Almost as difficult as having an opinion, especially one that doesn’t flow with the warm, numbing current of “everything’s fine as long as I’m entertained”. But alas, I was raised with the apparently radical idea that if you have a thought, you should probably back it up and then say it out loud.

So, in Part I of this series, that’s exactly what I did: offered my personal, yet still annoyingly evidence-based, opinion on why TikTok’s algorithmic practices don’t just flirt with ethical boundaries but rather ghost those boundaries completely and go straight for manipulation. This part, on the other hand, is a little less “here’s why I’m disturbed” and a little more “here’s why this might actually be illegal.”

We’re shifting gears — from opinions to rules. From moral discomfort to legal scrutiny. Specifically, here, we’re getting into whether TikTok’s fun little brain-hacking tricks might not just be creepy, but also incompatible with the EU’s shiny new legal framework (aka the AI Act).

As of early February 2025, the first provisions of the EU AI Act officially became applicable — which is legal-lingo for “you can now actually get in trouble for ignoring this.” These initial rules cover the usual groundwork: the scope (because you can’t enforce the law against things it doesn’t cover), definitions (because lawyers love nothing more than arguing over the meaning of “is”), AI literacy (because maybe, just maybe, users should know that they can be manipulated by math, and what that actually means for them), and — finally — prohibitions. And here’s where it gets interesting.

The real meat of the AI Act lies in Article 5 — the official “don’t even think about it” list. If an AI system engages in any of these prohibited practices, it’s not just problematic — it’s legally considered an illegal product being placed on the market. That’s a very bad look in the EU, where consumer protection laws don’t play around. And especially considering the AI Act treats AI systems like products, violating Article 5 is the regulatory equivalent of selling exploding toasters to teenagers — and then acting surprised when someone gets burned.

Among the no-gos? AI systems that use subliminal or manipulative techniques capable of causing harm, and systems designed to exploit people’s vulnerabilities, whether those vulnerabilities are age, disability, or just good old neurobiology.

Which brings us back to Part I. Because if designing an algorithm that exploits the human dopaminergic system to shove users down bottomless rabbit holes — or that leverages the not-yet-fully-wired-up prefrontal cortex of teens to nudge them into compulsive scrolling — doesn’t count as manipulative and exploitative, I honestly don’t know what does.

But before we dive headfirst into the juicy “is this actually illegal?” stuff, we need to answer a few (annoying but unfortunately unavoidable) legal questions. Because — surprise! — nothing in law is ever that simple.

So, onto the buzzkill checklist:

  1. Is the TikTok algorithm even considered an AI system? (Because if it’s not, then none of this applies and we all go home early.)
  2. Does the AI Act even apply here, given that we’re dealing with a Chinese company? (Spoiler: jurisdiction is fun. It really is, I promise.)
  3. Can the authorities actually enforce anything yet — and if so, how? (Translation: is the AI Act still just a strongly worded PDF or an actual regulatory slap?)

Once we’ve made our way through that, we can finally move on to the more interesting existential stuff, like: Is this really illegal? And more importantly, where exactly is the line between cleverly induced engagement and full-blown digital exploitation?


Legal Scope Check, or: Does the AI Act (Already) Catch TikTok?

1. Is the TikTok algorithm an AI system under the AI Act?

Short answer: Yes. Long answer: also yes — with citations.

Under Article 3(1) of the EU AI Act, an “AI system” is defined as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Translation: if it’s a machine that takes in data, figures out what to do with it (on its own), and then spits out decisions or recommendations that shape its environment (including the user experience) — ding ding, we have an AI.

Let’s break that down in TikTok terms:

  • Machine-based system generating output from input? Yup. TikTok’s algorithm gathers user data and uses it to spit out eerily (sometimes even disturbingly) on-point content recommendations. Check.
  • Adaptive post-deployment? Absolutely. It learns. It evolves. It morphs in real time based on every scroll, like, and linger. (Frighteningly fast, might I add.) Check.
  • Influences physical or virtual environments? Oh, the algorithm defines your virtual environment. The algorithm decides what you see, in what order, and how often, which (spoiler alert) can shape everything from your mood to your worldview. Big check.

So yes — TikTok’s algorithm is not just some innocent code matching videos to users. It is, by the AI Act’s definition, an AI system, which means that the EU AI Act rules apply.
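To make that definition a little less abstract, here’s a deliberately minimal sketch in Python. It is emphatically not TikTok’s code (the class, the signals, and the weighting are all invented for illustration); it just shows how even a toy recommender ticks every box of Article 3(1): it takes input, adapts after deployment, and infers recommendations that shape the user’s virtual environment.

```python
# A deliberately toy sketch (not TikTok's actual code) of why even a trivial
# recommender meets the Article 3(1) definition: it takes input (watch
# behaviour), adapts after deployment (updates its weights), and infers
# recommendations that shape the user's virtual environment (the feed).
from collections import defaultdict


class ToyRecommender:
    def __init__(self):
        # Learned preference per topic; starts empty and evolves with use.
        self.weights = defaultdict(float)

    def observe(self, topic: str, watch_seconds: float) -> None:
        # "Adaptiveness after deployment": every scroll, like, and linger
        # nudges the model, with no human re-shipping the software.
        self.weights[topic] += watch_seconds

    def recommend(self, candidates: list[tuple[str, str]], k: int = 3) -> list[str]:
        # "Infers, from the input it receives, how to generate outputs such as
        # recommendations": rank candidate videos by learned topic preference.
        ranked = sorted(candidates, key=lambda c: self.weights[c[1]], reverse=True)
        return [video_id for video_id, _topic in ranked[:k]]


feed = ToyRecommender()
feed.observe("dance", 4.0)
feed.observe("true_crime", 38.0)  # a long linger is a loud signal
print(feed.recommend([("v1", "dance"), ("v2", "true_crime"), ("v3", "cooking")]))
# ['v2', 'v1', 'v3'] -- the feed (i.e. the virtual environment) is now shaped by past behaviour
```

Scale that loop up by a few orders of magnitude, train it on billions of interactions instead of hand-tuned weights, and you have, functionally, a personalised feed.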

2. Does the issue fall under the AI Act — even though TikTok is a Chinese company?

Answer: I’m afraid so.

Under the AI Act, TikTok qualifies as a “provider” — that is, a legal entity that develops an AI system and places it on the market or puts it into service under its own name or trademark. Which TikTok very much does. It built its recommendation algorithm, it branded it, and it serves it up to millions of users daily. The fact that users aren’t paying for it doesn’t matter. So that box? Ticked.

Now, for the “but it’s a Chinese company” argument. The AI Act anticipated this line of defence and promptly shut it down in Article 2(1)(a): the regulation applies to any provider placing or using AI systems in the Union, regardless of where the company is based. So yes, TikTok can technically operate from the other side of the planet — but the second it offers services to users inside the EU, it’s inside the AI Act’s jurisdictional net.

And just in case that wasn’t clear enough, Article 2(1)(c) doubles down, applying the regulation to non-EU providers “where the output produced by the AI system is used in the Union.” So, unless TikTok’s algorithm shuts off at the EU border (spoiler: it doesn’t), the law applies.

Check and check. Easy. Moving on.

3. Can we already enforce against the algorithm — and how, exactly?

Ah yes, the million-euro question. The answer? A hopeful-but-legally-nuanced “maybe, fingers crossed.”

As with most things involving novel regulations and tech giants, it’s complicated — and ultimately up to the courts to interpret how early is early enough.

The optimistic argument relies on Article 113(3)(a), which clearly states that Chapters I and II — including scope, definitions, and the all-important Article 5 on prohibited AI practices — became applicable on 2 February 2025. That means, in theory, you should already be able to base a claim on one of those unacceptable-risk violations, even before the full regulatory machine kicks in come August 2026.

Then there’s Article 110, which adds the AI Act to the annex of the Collective Redress Directive (formally, the Representative Actions Directive). This, in principle, opens the door to collective legal action over violations of the EU AI Act. So, if TikTok’s algorithm is found to violate Article 5, that could be the basis for a class action.

But — and it’s a big but — Article 110 doesn’t appear in Article 113(3)(a). And unfortunately, Article 110 lives in Chapter XIII, which doesn’t become applicable until 2 August 2026. Translation: while the redress mechanism exists on paper, it’s not quite ready for prime time.

[Insert audible sigh.]

So where does that leave us?

Well, if you’re feeling resourceful (and a little litigious), you could try invoking Article 47 of the EU Charter of Fundamental Rights — the right to an effective remedy — because what good is having a right if you can’t actually enforce it?

Then there’s also room to stretch arguments under consumer protection and product safety law, especially considering that deploying a system violating Article 5 could be framed as placing an illegal product on the market. Not a slam dunk, but definitely not nothing.

And if you’re looking for moral support, there’s always Recital 179, which, in its own legalese way, basically says: “Hey, we know enforcement mechanisms won’t be fully baked for a while, but the prohibitions are really important — so they should influence things like civil law proceedings.” Helpful? Slightly. Legally binding? Not quite.

So… was this a legislative oversight? A compromise? An intentional delay tactic? Who knows. For now, all we can do is wait, strategise, and see if the very strongly worded letters from Spirit Legal (aka the lawsuits) were enough to affect these civil proceedings, while we watch enforcement structures slowly come online.

Finally. The interesting stuff.

Violation of Article 5(1): Prohibited Practices

Warning: To follow the discussion here, you will need to have read Part I of this series. So, if you still haven’t, this is your cue to go do your homework; don’t come to class unprepared.

Now, let’s talk about why TikTok’s algorithm may very well be strolling straight into Article 5(1)(a) and (b) territory of the EU AI Act — the “absolutely don’t do this” list.

Article 5(1)(a): Subliminal & Manipulative Techniques

“The use of subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques… materially distorting the behavior of a person or group… in a way that causes or is reasonably likely to cause significant harm.”

Let’s unpack that with TikTok in mind:

  • Subliminal Techniques? Check. TikTok’s autoplay, infinite scroll, and conveniently missing time indicators aren’t UI quirks — they’re behaviour-modification tools designed to alter user engagement below the threshold of conscious awareness. Even if some users are aware of these tricks, that’s largely irrelevant: the point is that they work on most people, and they work covertly.
  • Manipulative Effects? Absolutely. These features don’t just “suggest” content — they nudge, loop, and hook users into prolonged, often compulsive behaviour. The goal isn’t user satisfaction — it’s retention and engagement, at any cost.
  • Resulting Harm? Yes, and it’s far from trivial. We’re talking about real, measurable harm: reduced attention spans, sleep deprivation, tanked academic and work performance, and, in extreme cases, disorders and life-threatening behaviours. Importantly, the law doesn’t require every user to be harmed — only that the practice is reasonably likely to cause significant harm. Which, here, it very clearly is.

Article 5(1)(b): Exploiting Vulnerabilities

“The exploitation of vulnerabilities of a person or a specific group (e.g., age, disability, or socioeconomic status)… with the objective or effect of materially distorting behavior… in a way that causes or is reasonably likely to cause significant harm.”

  • Vulnerable Group? Children and teenagers. TikTok’s entire design — from its peer-reinforced content loops to its reward-driven engagement system — is practically engineered to exploit the underdeveloped self-regulation and heightened susceptibility of younger users. It’s not just potentially addictive; it’s strategically addictive to the most impressionable demographic.
  • Harmful Impact? Again, undeniably present. Excessive screen time. Exposure to harmful or age-inappropriate content. Increased rates of anxiety, eating disorders, attention issues, and — in the worst cases — self-harming or fatal behaviours. Unfortunately, every one of those boxes is ticked. These aren’t edge cases. They’re patterns.

But What Do You Have Against TikTok?

I started both articles in this little series with the same disclaimer: I generally like social media.

As an expat living in a country I don’t exactly consider home — and as someone who’s been on Instagram since the tender age of 13 — these platforms have helped me stay connected and, frankly, somewhat sane. So, where’s the magic line? Why am I taking TikTok personally enough to start drafting legal arguments against its algorithm?

Simple. Because in this case, it’s not just “oops, this can actually be addictive” — it’s intentional exploitation, paired with an absolutely staggering failure to implement safeguards that were very much within reach.

First things first: intent.

No, not an intent to cause direct harm — let’s not overdramatise. The intent was to monetise the human (and child) dopaminergic system as far as legally possible… and then push it just a little bit further for good measure and a few extra zeros on those bank accounts.

How can we possibly dare to assume such a thing? Easy: internal documents.

Thanks to disclosures in a lawsuit filed in Utah, some previously redacted TikTok internal communications — internally known as the Project Meramec[1] documents — have come to light. And they paint a picture that’s about as subtle as a neon billboard:

  • Explicit design of addictive features.
  • Prioritising engagement metrics over user well-being.
  • Focusing more on “public trust optics” than on fixing actual harms.

And because no descent into dystopia would be complete without it: all this happened while TikTok LIVE allowed adults to send minors virtual gifts like “diamonds” and “coins” — with real-world cash-out options — in exchange for posing, dancing provocatively, or worse… much worse. (No, I will not be going down the TikTok LIVE rabbit hole in this article, because I value your remaining faith in humanity. But if you’re unfamiliar with it — you really should look it up. 🙃)

But could TikTok actually have done something to prevent all this? — one might ask.

Yes. Yes, they could have.

In fact, they already did — just not where it would have mattered for EU or US users. (Or, you know, for basic decency.)

Despite being fully aware of the harms caused by their platform, TikTok failed — spectacularly — to take meaningful action:

  • Delay tactics: It took years to introduce basic safeguards like an opt-in screen-time limit feature. Legal investigations in several U.S. states confirmed that TikTok dragged its feet especially hard when it came to addressing child safety concerns.
  • Content moderation failure: TikTok’s moderation system — both automated and human — has been consistently bad at catching harmful content. According to TikTok’s own internal studies (also unredacted in recent legal filings), suicide and self-harm content routinely slipped through the cracks. Some videos racked up over 75,000 views before anyone at TikTok noticed. Yes, you read that right.[2]
  • Fake fixes: Internal experiments revealed that even when TikTok did introduce “safety features,” their impact was so minimal it bordered on parody. One internal study showed that screen-time reminders lowered teens’ daily usage from 108.5 minutes to 107 minutes, a reduction of roughly 1.4 per cent. Barely a rounding error. But of course, TikTok still promoted the feature as a major “safeguard.”[3]

And just to really underline the gist of the problem at hand, let’s talk about Douyin — TikTok’s sister app for the Chinese market, run by the same parent company, ByteDance. There, effective safety features aren’t just a suggestion; they’re standard:

  • Minors are limited to 40 minutes of daily use.
  • App access is restricted to daytime hours (because maybe kids should sleep? Wild idea).
  • Overuse triggers five-second pauses between videos to curb binge behaviour.

So yes, there are plenty of alternatives. And yes, TikTok knows exactly how to implement them when it wants to. All of this demonstrates one thing beyond any reasonable doubt: TikTok has the capacity to protect its users. It simply chose not to. At least not when it comes to protecting minors in the EU and the US, and not without being forced to.
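To underline just how technically unexciting those safeguards are, here’s a minimal, hypothetical sketch in Python of what enforcing them could look like. The function names, the curfew window, and the binge threshold are my own illustrative assumptions (only the 40-minute daily cap comes from the reporting above); none of this is Douyin’s or TikTok’s actual code.

```python
# A minimal, hypothetical sketch of Douyin-style safeguards, to illustrate that
# none of them requires exotic technology. All names and thresholds below are
# illustrative assumptions, not Douyin's or TikTok's real implementation.
from datetime import datetime, time

DAILY_LIMIT_MINUTES = 40                              # reported daily cap for minors
CURFEW_START, CURFEW_END = time(22, 0), time(6, 0)    # assumed nighttime window
BINGE_PAUSE_SECONDS = 5                               # forced pause between videos


def can_serve_next_video(is_minor: bool, minutes_watched_today: float,
                         now: datetime) -> tuple[bool, str]:
    """Return (allowed, reason) for serving one more video in this session."""
    if not is_minor:
        return True, "adult account"
    if minutes_watched_today >= DAILY_LIMIT_MINUTES:
        return False, "daily limit reached"
    if now.time() >= CURFEW_START or now.time() < CURFEW_END:
        return False, "outside permitted daytime hours"
    return True, "ok"


def pause_before_next_video(minutes_watched_today: float) -> int:
    # Overuse triggers a short cooldown between videos to break the binge loop
    # (the 30-minute trigger is an assumed value, purely for illustration).
    return BINGE_PAUSE_SECONDS if minutes_watched_today > 30 else 0


print(can_serve_next_video(True, 42, datetime(2025, 2, 2, 15, 0)))
# (False, 'daily limit reached')
```

None of this is a research problem; it’s a handful of if-statements and a product decision.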

Main Takeaways

  • TikTok’s safety features are too little, too late.
    It took them ages to introduce even the most basic features such as opt-in screen-time limits, and even now, those limits are barely effective — shaving off a whopping 1.5 minutes from teen usage per day as per their internal research.
  • Harmful content moderation? Still failing.
    Internal studies show that videos promoting self-harm and suicide often rack up tens of thousands of views before anyone at TikTok catches them — if they ever do.
  • Public appearance over user well-being.
    TikTok’s own data shows that their “safeguards” don’t meaningfully change user behaviour. But they still promote them as protective measures anyway.
  • Alternatives exist.
    Over in China, the Douyin app limits minors to 40 minutes of use per day, restricts nighttime access, and even adds delays between videos to combat overuse. So yes, they can build safer systems — when they feel like it.
  • This is not about technological limitation.
    It’s about calculated, strategic inaction — knowing what the risks are, knowing how to mitigate them, and deciding not to.

This isn’t about tech that can’t. It’s about a company that won’t.


[1] Office of the Utah Attorney General, “Utah DCP and AG’s Office Announce Release of Previously Redacted Information: TikTok Execs Knew They Were Profiting Off the Sexual Exploitation of Minors”, 3 January 2025, https://attorneygeneral.utah.gov/2025/01/03/utah-dcp-and-ags-office-announce-release-of-previously-redacted-information-tiktok-execs-knew-they-were-profiting-off-the-sexual-exploitation-of-minors/ [accessed 27 January 2025].

[2] Bobby Allyn, Sylvia Goodman and Dara Kerr, “TikTok knows its app is harming kids, new internal documents show”, NPR, 11 October 2024, https://www.npr.org/2024/10/11/g-s1-27676/tiktok-redacted-documents-in-teen-safety-lawsuit-revealed [accessed 27 January 2025].

[3] Cristiano Lima-Strong, Drew Harwell and Julian Mark, “TikTok lawsuit claims app is addictive, harmful to children’s mental health”, The Washington Post, 11 October 2024, https://www.washingtonpost.com/technology/2024/10/11/tiktok-lawsuit-children-addiction-mental-health/ [accessed 27 January 2025].
