Introduction: Does Clicking “Agree” Truly Represent User Choice?
Do you remember the pop-up the last time you opened TikTok? Do you recall what you clicked? The screen stated clearly: “By agreeing and continuing, you accept our Terms of Service and confirm you have read our Privacy Policy, understanding how we collect, use, and share your data.” Faced with a privacy policy stretching thousands of words, most people, like me, probably just tap “Agree” and rush into the dazzling world of short videos. The problem is that there is no “Disagree” button here, nor any option to “come back later.” Your only choice is to tap “Agree and Continue.” At first glance, this looks like a standard step found in every app. On closer inspection, it is forced consent: refusal means you simply cannot access TikTok, let alone start scrolling through videos.
This is not merely an issue of pop-up windows, but rather a microcosm of the operational logic of digital platforms: in the digital world, so-called choices are often nothing more than superficial illusions. This illusion is precisely the focus of my subsequent analysis. It not only shapes our relationship with data but also profoundly impacts users’ trust in platforms.
Illusion of Freedom: Choice or Constraint?
So-called illusory freedom refers to a situation where users appear to possess the right to opt out, yet refusal effectively means abandoning the platform. Clicking “Agree” is merely a perfunctory action. Platforms often use the “Agree” button to create an illusion of choice, misleading users into believing they are exercising autonomy. As Zuboff notes in Surveillance Capitalism and the Challenge of Collective Action, platforms rely on “meaningless mechanisms of notice and consent (privacy policies, end-user agreements, etc.)” to legitimize their practices.
Ironically, pop-ups assume users have read the privacy policy. In reality, most users never do. Even those who take the time to review the terms find it difficult to grasp the data-processing logic behind them. According to the Pew Research Center, only 9% of American adults always read privacy policies in full, while 36% never read them (Auxier et al., 2019). Under this information asymmetry, the so-called “Agree” is almost always a forced click rather than a genuinely informed choice.
By clicking “Agree and Continue,” you effectively hand your data and privacy over to the platform, allowing it to freely collect, store, and use them. This data encompasses not only explicit inputs (username, comments, posts) but also implicit behavioral traces: how many seconds a user spends on certain content, whether they like it, and how fast they scroll. All these seemingly insignificant actions are recorded by the platform, piecing together an ever more precise user profile. The deeper question is: even if users truly had the option to click “Disagree,” what difference would it make? The answer is: almost none. The data-driven, platform-based business model dictates that this illusion is not merely a result of interface design but an inevitable outcome of how the entire system operates.
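To make this concrete, here is a minimal, purely illustrative sketch of how implicit traces like dwell time and likes could be folded into an interest profile. All names and the weighting scheme are my own assumptions for illustration, not TikTok’s actual code:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ViewEvent:
    """One implicit trace: a single video impression (hypothetical schema)."""
    video_id: str
    topic: str            # e.g. "cooking", "news"
    dwell_seconds: float  # how long the user lingered before swiping away
    liked: bool

@dataclass
class UserProfile:
    """Interest scores pieced together from traces the user never typed in."""
    interest: defaultdict = field(default_factory=lambda: defaultdict(float))

    def ingest(self, event: ViewEvent) -> None:
        # Dwell time is read as interest intensity; a like amplifies it.
        weight = 2.0 if event.liked else 1.0
        self.interest[event.topic] += event.dwell_seconds * weight

profile = UserProfile()
profile.ingest(ViewEvent("v1", "cooking", dwell_seconds=12.5, liked=True))
profile.ingest(ViewEvent("v2", "news", dwell_seconds=1.2, liked=False))
print(dict(profile.interest))  # {'cooking': 25.0, 'news': 1.2}
```

The point of the sketch is that the user deliberately supplies none of this; the profile accrues from behavior alone.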
Data-Driven Pipeline: A Process That Cannot Be Stopped or Rejected
Datafication is not merely the collection of information, but a staged process that translates everyday life into computable symbols. As van Dijck, Poell, and de Waal explain in The Platform Society, platforms operate through three ecosystem mechanisms: datafication, commodification, and selection. Building on this framework, the process in TikTok’s case can be further understood through four dimensions.
· Capture
As mentioned earlier, the platform continuously collects explicit inputs and implicit behaviors while also recording device information, network environment, geographic location, and more. Even if users actively provide nothing, these traces are silently captured.
· Commensuration
Captured content undergoes recoding: dwell time translates into interest intensity, swipe frequency into preference shifts, and location into consumption potential. Complex behaviors are compressed into algorithmically computable metrics.
· Circulation
These standardized data points are not only repeatedly accessed within TikTok to train its recommendation system, but may also flow to advertisers and third-party partners. Even if you opt out of personalized ads, your data will still circulate within a broader commercial network.
· Commodification
Finally, data becomes a commodity that can predict and intervene in behavior. Targeted advertising is precisely the product of this logic: platforms use behavioral patterns to deliver ads precisely to your screen, thereby converting data into capital.
In other words, even if there were a “Disagree” button, your privacy would not be spared. These four steps do not stop because you refuse once; they are the fundamental logic on which the platform operates. Users may believe they have evaded surveillance, but their data continues to be processed and circulated in the background, eventually becoming a commodity. Superficial refusals cannot prevent privacy from being quietly stripped away along this assembly line.
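A minimal sketch of how the four stages might chain into a single pipeline that an individual “Disagree” never reaches. As above, every function name, field, and number here is a hypothetical illustration of the logic, not a description of any real system:

```python
# Hypothetical pipeline mirroring the four stages described above.

def capture(session):
    """Capture: pull implicit traces out of a raw session log."""
    return [e for e in session if e["type"] in ("view", "scroll", "location")]

def commensurate(events):
    """Commensuration: compress behavior into computable metrics."""
    dwell = sum(e.get("seconds", 0) for e in events if e["type"] == "view")
    return {"interest_intensity": dwell, "events_seen": len(events)}

def circulate(metrics):
    """Circulation: the same metrics feed multiple consumers."""
    return {"recommender": metrics, "ad_partner": metrics}  # both get a copy

def commodify(feeds):
    """Commodification: turn circulated metrics into an ad-pricing signal."""
    return feeds["ad_partner"]["interest_intensity"] * 0.01  # $ per impression

session = [
    {"type": "view", "seconds": 12.5},
    {"type": "scroll"},
    {"type": "view", "seconds": 3.0},
]
price = commodify(circulate(commensurate(capture(session))))
print(f"estimated ad value: ${price:.3f}")  # estimated ad value: $0.155
```

Notice that no step in the chain consults the user: once a session exists, the pipeline runs end to end.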
Techlash: When Skepticism Turns into Collective Boycott
As more and more data scandals come to light, users are finally realizing that the “Agree” they clicked was never a real choice. This growing awareness has sparked global skepticism and resistance, dubbed the “techlash.” The term rapidly entered mainstream discourse after The Economist reported in 2018 that major tech companies like Google, Facebook, and Amazon were facing intense scrutiny from both the public and governments over data misuse, monopolistic competition, and inadequate regulation (The Economist, 2018). This backlash initially targeted Silicon Valley giants, but as TikTok surged globally, it too was caught up in the wave.
Regulators in the United States and Europe have repeatedly questioned TikTok’s data-processing practices, focusing in particular on cross-border data transfers and the protection of minors. In 2025, Ireland’s Data Protection Commission (DPC) ruled that TikTok had failed to provide sufficient transparency and safeguards when transferring European user data to China, imposing a €530 million fine under the GDPR (European Data Protection Board, 2025). In the United States, Congress passed legislation in 2022 prohibiting the use of TikTok on federal government devices (Nava, 2022), and multiple state governments subsequently ordered the app uninstalled from their official devices. As early as 2020, the Indian government had already banned TikTok outright, citing national security and data privacy concerns. For ordinary users, these regulatory incidents have reinforced a sense of distrust: when even national-level institutions question the legitimacy of a platform’s data processing, individual consent becomes virtually meaningless.
At the public level, this skepticism manifests as a widespread ambivalence. On one hand, users are hooked on the immersive short-video experience TikTok offers; on the other, they frequently voice concerns about privacy violations on social media. So-called “privacy settings” or “personalized ad switches” have done little to alleviate this anxiety. Ads persist, merely disguised in another form; recommendation algorithms never cease, as they continuously gather and interpret traces of user behavior. These settings merely create the illusion that “I made a choice,” yet they cannot alter the fact that data is being captured and used.
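To illustrate why such switches feel hollow, consider this hypothetical sketch (again my own illustration, not TikTok’s actual settings code): the toggle gates only which ads are served, while the event logging underneath runs regardless of the setting:

```python
# Hypothetical illustration: the opt-out gates ad *targeting*,
# not data *collection*.

collected_events = []

def log_event(event):
    """Runs on every interaction, regardless of any privacy setting."""
    collected_events.append(event)

def serve_ad(personalized_ads_enabled, profile):
    if personalized_ads_enabled and profile:
        return f"targeted ad for {max(profile, key=profile.get)}"
    return "generic ad"  # ads persist, merely in another form

# The user flips the switch off, then keeps scrolling.
log_event({"type": "view", "topic": "cooking", "seconds": 12.5})
log_event({"type": "view", "topic": "travel", "seconds": 2.0})
print(serve_ad(personalized_ads_enabled=False, profile={"cooking": 25.0}))
print(len(collected_events), "events still collected")  # 2 events still collected
```

Under this (assumed) design, the opt-out changes what the user sees, not what the platform knows.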
In other words, Techlash is not an isolated backlash but the culmination of accumulated grievances. It exposes a fundamental contradiction: within the data-driven platform economy, individual choices cannot halt the march of datafication and commodification. The issue with TikTok thus extends beyond whether privacy settings are effective — it raises a more fundamental question: Are we truly in control of our choices?
Criticizing TikTok’s Response: Are Users Really Protected?
Faced with mounting scrutiny, TikTok has not remained entirely silent. Like other tech platforms, it frequently offers justifications meant to reassure users and persuade regulators: “We do this for the user experience,” “We prioritize the protection of minors,” “We strictly comply with the law.” These claims sound reasonable at first glance, but on closer examination they rarely hold up.
1. “We collect data for a better experience.”
TikTok’s most common justification is that the algorithm needs data to improve recommendations so that users see better, more interest-aligned videos. On the surface, this does make things easier for users; who doesn’t want accurate recommendations and less boring content? The problem is that this logic of “improving the experience” does not negate the fact that platforms rely on data to make money. The more accurate the user profile, the more effective the advertising, and the more profitable the platform. The so-called “better experience” is more often than not a side effect; what is really essential is advertising revenue. In other words, TikTok’s experience optimization and commodification are tied together, and the former is routinely used to rationalize the latter.
2. “Users don’t care about privacy, they’re willing to trade.”
Another common argument is that users would not download and use TikTok if they really minded, and that since people willingly spend hours swiping through videos knowing the platform collects data, they have traded privacy for entertainment. This sounds like a “voluntary transaction,” but the argument does not hold. First, the user never really has the option of refusing: an exchange you cannot decline without losing access altogether is not a fair exchange. Second, the so-called “user disinterest” is often a form of helplessness. Research has termed this the privacy paradox: people are concerned about privacy yet cannot leave the platform, because leaving means losing access to social interaction, entertainment, and information. To say that users “don’t care” is to ignore the situation they are forced to accept.
3. “We have legal and regulatory constraints.”
TikTok also often emphasizes that it must comply with local laws, such as the EU’s GDPR and US privacy laws. It points out that it has updated its privacy policy, increased transparency, and even given users more settings options. It may sound like things are changing for the better, but the reality is that these laws and policies often lag behind technological developments. By the time a regulator makes a ruling, much of the data may already have been collected and used. Even when a platform is fined, the penalty is rarely enough to change its business model, measured against its massive global revenues. Users may read “we offer control options” in the privacy policy, click through, and realize there is still no way to stop the flow of data in the background. The mere existence of the law does not mean the problem is solved.
4. “We protect teenagers and vulnerable groups.”
TikTok also often emphasizes that it offers special protections for teens, such as defaulting accounts to private for users aged 13 to 15, restricting ad targeting, and introducing parental monitoring tools. These may seem like responsible initiatives, but in practice the problems are obvious: age verification relies heavily on users entering their own birthdays, which is easily bypassed, while parental monitoring tools are buried deep in menus, complicated to activate, and rarely used. Even if targeted advertising is reduced, the risks do not disappear. Recommendation algorithms still collect and analyze behavioral data in the background, potentially exposing teenagers to unhealthy or even dangerous content. These so-called protective measures amount to an “illusion of protection”: they create a superficial sense of security but do not stop platforms from continuing to collect, analyze, and exploit data on adolescents.
Conclusion: Are We Really in Control of Our Choices?
In the end, the TikTok case illustrates a common digital dilemma: users seemingly have choice, but in reality are stuck in a pipeline where data is collected, circulated, and commodified. The pop-up at the entrance creates an illusion of choice, and the data-driven mechanism renders refusal ineffective. Global resistance and regulation show that the illusion is collapsing, while the platform’s various excuses strive to maintain it. These four threads, layered on top of one another, outline a clear fact: the so-called “choice” is constrained from the very beginning.
What’s more, this is not a problem for TikTok alone, but the inevitable logic of an entire data-driven society. What we see on TikTok today could appear on any platform tomorrow. Individual clicks cannot stop data from being captured, and laws and regulations, while necessary, often lag behind technological developments. The loss of privacy is thus no longer an accidental event but a normalized, institutionalized routine.
Against this background, we need to rethink the meaning of choice itself. For the individual, choice seems to be a symbol of freedom, but when freedom is reduced to choosing between “Agree” and “Disagree,” it has long since lost its original meaning. For society, the forces of regulation and resistance, while present, have struggled to shake the platforms’ core business model. For the future, with the spread of AI and predictive algorithms, this illusion may even deepen: platforms will no longer just wait for our “Agree,” but actively predict and even shape what we will choose.
As a result, we are not really in control of our choices but surrounded by the illusion of choice. TikTok is just a microcosm of the paradox of the data society: with every seemingly free click, we perpetuate our own passivity. The real question is not whether users have read privacy policies carefully, but whether we can push for structural changes that put choice back in the hands of users.
The last time you clicked “Agree and Continue”, you probably didn’t think twice about it. But it’s in this seemingly simple action that we give our data and freedom to the platform. And this “Agree” is never a choice we really control.
References
Auxier, B., Rainie, L., Anderson, M., Perrin, A., Kumar, M., & Turner, E. (2019, November 15). Americans’ attitudes and experiences with privacy policies and laws. Pew Research Center. https://www.pewresearch.org/internet/2019/11/15/americans-attitudes-and-experiences-with-privacy-policies-and-laws/
Australian Competition and Consumer Commission. (2019, July 26). Digital platforms inquiry: Final report. https://www.accc.gov.au/publications/digital-platforms-inquiry-final-report
Dao, L. (2016, July 7). Can you mix and match your way to a platform business model? Medium. https://medium.com/enabled-innovation/can-you-mix-and-match-your-way-to-a-platform-business-model-b0a829e3f79e
European Data Protection Board. (2025, January 4). Irish supervisory authority fines TikTok €530 million and orders corrective measures following inquiry into transfers of EEA user data to China. https://www.edpb.europa.eu/news/news/2025/irish-supervisory-authority-fines-tiktok-eu530-million-and-orders-corrective_en
Jones, R. P., & Cheng, M. (2025, September 23). TikTok collected sensitive data on Canadian children, investigation finds. Reuters. https://www.reuters.com/business/media-telecom/tiktok-improve-steps-keep-children-off-app-canadian-officials-say-2025-09-23/
Nava, V. (2022, December 15). Senate passes bill that bans TikTok on government devices. New York Post. https://nypost.com/2022/12/15/senate-passes-bill-that-bans-tiktok-on-government-devices/
Press Information Bureau, Government of India. (2020, June 29). Government bans 59 mobile apps which are prejudicial to sovereignty and integrity of India, defence of India, security of state and public order. https://pib.gov.in/PressReleasePage.aspx?PRID=1635206
The Economist. (2018, January 20). The techlash against Amazon, Facebook and Google — and what they can do. The Economist. https://www.economist.com/briefing/2018/01/20/the-techlash-against-amazon-facebook-and-google-and-what-they-can-do
van Dijck, J. (2018). Platform mechanisms. In J. van Dijck, T. Poell, & M. de Waal, The platform society: Public values in a connective world (pp. 31–48). Oxford University Press. https://doi.org/10.1093/oso/9780190889760.003.0003
Zuboff, S. (2019). Surveillance capitalism and the challenge of collective action. New Labor Forum, 28(1), 10–29. https://doi.org/10.1177/1095796018819461