Originally published as “A Psycholinguistic Assessment from 2098” in Conlangs Monthly (Vol. 7, 2015)
Back in 2015, before I ever wrote a line of code, I published this speculative essay about the future of translation. I left it exactly as it was.
A Psycholinguistic Assessment From 2098
DEGREE IN PSYCHOLINGUISTICS, DEPARTMENT OF COGNITIVE SCIENCE
FINAL ASSESSMENT, MODULE 2, YEAR 2098
After carefully reading the excerpt below, and briefly outlining the way automated translators were originally designed, you will explain the impact of the Très Belle Neurowear translator on its users.
“Once upon a time, an American engineer was surfing the web with a lot of flair, when he met an incredibly beautiful woman from Romania – it was love at first sight. However, he could not speak Romanian, and she could not speak English. Solving problems being the kind of thing that engineers do, the super smart guy created a translating device for his paramour. Their love story was better than any advertising campaign, and after a few months everybody was using his translating device. Investors got interested, and the beta version evolved through all the updates of the Greek alphabet. At first people only used it to talk to foreigners, but the designers gave it such a sexy look and intoxicating touch that it soon became essential to everyday life. Bye bye, Babel Tower language divisions; bonjour, Très Belle Neurowear translation! That is how, in less than a generation, all languages were wiped off the planet, as everybody started communicating solely with the device. That is how we lost empathy. That is how we lost poetry too.”
From Lost and Found Linguistics, 2095
In the first decade of the 21st century, automated translation software became accessible to everyone via the internet. Cheap and user-friendly, these tools allowed people who were otherwise divided by language barriers to communicate instantaneously in the same language. While most commercial machine translation systems were first developed using a rule-based approach, which often produced cryptic translations, the shift to statistical learning techniques for building translation models brought considerable improvement. As Google research scientist Franz Och explained:
“Languages are complicated and, as any language learner can tell you, there are exceptions to almost any rule. When you try to capture all of these exceptions, and exceptions to the exceptions, in a computer program, the translation quality begins to break down. Google Translate takes a different approach. Once the computer finds a pattern, it can use this pattern to translate similar texts in the future. When you repeat this process billions of times you end up with billions of patterns and one very smart computer program.”
While one may admire the fantastic resourcefulness of the research scientists who designed this statistical translating system, one cannot help but notice the limitations that resulted in our current conundrums: the damage done to language as a communicational and cultural tool, and the damage done to the human mind.
First, the documents used to teach computers how to translate were mostly multilingual human-translated United Nations and European Union official documents, i.e. administrative and highly formal language. This choice entailed an immediate redefinition of Jakobson’s functions of language. Of the six functions, only one was left: the referential function, carried by definite descriptions and deictic words.
What got lost then? The internal state of the speaker conveyed by the expressive function, the vocatives of the conative function, the Parnassian beauty of the poetic function, the thirst for contact and the checking for openness and engagement served by the phatic function, and the so useful capacity for distance-taking afforded by the metalinguistic function.
Most were convinced at the time that these were a small loss compared to the gains: limiting language to its mere referential function would ensure clarity of information transmission. No more garden path sentences, ambiguities, and vague utterances to confuse readers and listeners; language would finally become a dramatically efficient communicational tool thanks to the Très Belle Neurowear translator, a de facto cognitive prosthesis. Vocabulary and connotations were rationalised, and soon ambiguity was obliterated from human speech. However, Piantadosi and his colleagues argued that:
“Contrary to the Chomskyan view, ambiguity is in fact a desirable property of communication systems, precisely because it allows for a communication system which is ‘short and simple’. We argue for two beneficial properties of ambiguity: first, where context is informative about meaning, unambiguous language is partly redundant with the context and therefore inefficient; and second, ambiguity allows the re-use of words and sounds which are more easily produced or understood.”
Indeed, researchers at MIT’s Brain and Cognitive Sciences department demonstrated that when every word is different, the information rate is very high; consequently, the speech channel’s bandwidth may reach its saturation limit. However, when many words sound the same but are made distinguishable by context, the communication system is more efficient. Therefore, stripping language of all its functions but the referential one, with the rationale of increasing its efficiency, actually resulted in a loss of comprehension capacity and damaged language as a communicational tool.
When people stopped learning languages and resorted only to automatic translations, the software ended up being fed only with documents that had themselves been produced by automated translation. It entered into a loop – there was absolutely no human feedback any more. After a while, not only did the target language stop evolving, but so did the native language of the human users. The more they used the automated translator to communicate, the more they modelled their own human language after that of the machine. As we explained earlier, this rationalisation of language, achieved through extremely precise words mined from institutional text data, was counter-productive: instead of fostering efficiency, it achieved saturation. But it also had a more deleterious impact: it damaged language as a cultural tool.
As Sapir stated, human beings construct their understanding of the world through language, by means of expressing and describing. Quine insisted that language is a social art, and Everett, with fantastic intellectual iconoclasm for his times, made clear that language is instrumentum linguae, the tool through which all cultural constructs are made possible.
As Korzybski stated, “the map is not the territory”. Humans are five-sense beings evolving in a three-dimensional world: their nervous system allows them to sense objects and events, but they leave out many characteristics as they proceed, and they continue to use those inaccurate descriptions to make further inferences about the world. In his new corollaries for general semantics, Anton proposed that “there is no territory”, because the territory (reality) consists of many maps.
Human beings are not merely homo loquax; they are in constant tension between homo deducto and homo narrans, building their understanding of the world on inferences and telling stories about them, stories that then lay the basis for new inferences and more stories.
But human beings are also, above all, homo metaphorica. And this is where the designers of the initial automated translators went wrong: they fed the smart software very little metaphorical language. Yet everyday abstract concepts such as time, change, and causation are metaphorical! As Anton explained, “humans navigate towards the future thanks to language”. But as they do, they must also navigate across different forms of conceptual mappings, and this is achieved through a system of conventional conceptual metaphors, as demonstrated by Reddy in The Conduit Metaphor. It is not just about communication: cognitive metaphors shape the way humans think and act. And as Lakoff and Jacobs suggested in their ethics-driven approach to language, metaphors allow for social conditioning and pressure to form specific cognitive biases.
When the automated translators entered the feedback loop of recursive updating and the lingua franca froze into a highly technical, unambiguous, and literal medium for referential communication, it was the Theory of Mind of 21st-century humans that got lost. Except for a few cognitive résistants who kept learning languages and avoided the systematic use of the Très Belle Neurowear translator in their everyday communications, most people’s internal workings changed at a dramatic pace. As Astington and Baird recalled, research showed strong relations between children’s linguistic abilities and their theory of mind. Furthermore, Gopnik stated that:
“Hearing language is particularly important for understanding others, while other kinds of experience, such as the visual modality, are less important.”
This machine-rationalised language, heard through the translating neurodevice, had a deleterious impact on its users.
“You talk like who you talk with.”
Deprived of exposure to ambiguity, and lacking the practice of positing intentional contexts, they started to lose the capacity for empathy. And as their linguistic metaphors were threatened by the literalness of the new language and poetry disappeared, their conceptual metaphors petrified, and so did their understanding of the world.
I believe, as Everett did, that language diversity is “the cognitive fire of human life.” We should therefore protect and encourage those who still pursue language studies, despite the easy solution that is the Très Belle Neurowear translator.
🌐💬🧠
Ten Years Later: Self-Reflection
I wrote this piece over ten years ago. After giving birth to my daughter and moving abroad, I completed my PGCE (QTS) and my MA in Education with Merit. I was still a very classical-knowledge, literature-bookish person. While I admired technology and engineering from afar, coding was completely foreign to me. Apart from using Word and PowerPoint, I wasn’t really doing much with my computer.
And somehow, I ended up in a conlanging Facebook group and wrote this: a partially prescient, partially overcooked psycholinguistic essay about how translation devices could one day erase empathy from human speech.
Looking back now, I’m both amused and oddly impressed by how much of it aged well… and how much aged like milk.
What I Somehow Got Right
I warned that automated translation could trap itself in a loop, with models feeding on their own machine-generated output until nuance vanished. At the time, that sounded like speculative fiction. Today, we call it model collapse.
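Since “model collapse” can sound abstract, here is a tiny sketch of the loop I was gesturing at back then: a toy Gaussian “model” repeatedly refit on its own filtered output, losing its rare, nuanced tails a little more with every generation. It is only an illustration of the mechanism under made-up assumptions (the numbers and the 1.5-sigma cut-off are arbitrary), not a description of how any real translation system was trained.

```python
# Toy illustration of model collapse: each "generation" is fit only on the
# previous generation's synthetic output, and the rare tails quietly vanish.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: diverse "human-written" data.
data = rng.normal(loc=0.0, scale=1.0, size=50_000)

for generation in range(8):
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation}: std = {sigma:.3f}")

    # "Retrain" on the model's own output: sample from the fitted Gaussian,
    # then keep only the most typical samples (within 1.5 sigma), mimicking
    # the way approximation error drops rare events in each round.
    synthetic = rng.normal(loc=mu, scale=sigma, size=50_000)
    data = synthetic[np.abs(synthetic - mu) < 1.5 * sigma]
```

Run it and the standard deviation, my stand-in for nuance, shrinks generation after generation. That is exactly the loop my 2015 self was worried about.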
I also argued that ambiguity wasn’t a flaw, but a feature of human communication — essential to creativity and empathy. That point, it turns out, aged beautifully. Modern neuroscience agrees: ambiguity and metaphor engage the brain’s richest associative networks.
And speaking of metaphor, I insisted that without it, human cognition would freeze. Ironically, this was years before large language models began producing metaphors faster than poets on espresso.
So yes, 2015-me somehow guessed that machines would one day write poetry.
What Aged Like Milk 😅
Back then, I imagined humans and machines as total opposites: one soulful and expressive, the other rigid and cold.
Reality turned out to be way messier and also very hopeful.
We didn’t become linguistic zombies. We adapted. We collaborate with AI rather than surrender to it.
We refine our words, we don’t replace them.
Even my old fear that technology would be the end of poetry feels misplaced. Instead, we’ve seen an explosion of creative writing, multilingual experiments, and hybrid genres born from human–AI collaboration.
So no, Très Belle Neurowear didn’t destroy language. It just evolved into tools like ChatGPT, and instead of wiping out empathy, those tools are learning to mimic it (sometimes maybe a little too well).
Jakobson Revisited
In 2015, I worried that technology would flatten Jakobson’s six functions of language into one dry referential stream.
Ten years later, I see something else.
LLMs can inform (referential), express emotion (emotive), instruct (conative), maintain contact (phatic), play with rhythm and imagery (poetic), and even explain their own workings (metalingual).
In other words — they do all six. Sometimes all at once.
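If you want to test that claim yourself, here is a minimal sketch: one prompt per Jakobson function, sent to a chat model. I use the OpenAI Python client purely for illustration; the model name is an arbitrary choice of mine, and any chat-completion API would work just as well.

```python
# One prompt per Jakobson function of language, sent to a chat model.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

jakobson_prompts = {
    "referential": "State three verifiable facts about the Danube.",
    "emotive": "In the first person, express how it feels to finish a marathon.",
    "conative": "Persuade me, directly and imperatively, to back up my files today.",
    "phatic": "Open a conversation with small talk whose only goal is to keep contact going.",
    "poetic": "Write two lines in which sound and rhythm matter more than information.",
    "metalingual": "Explain, in plain words, what the word 'metaphor' means.",
}

for function, prompt in jakobson_prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; substitute whichever you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {function} ---")
    print(reply.choices[0].message.content.strip())
```

Six little prompts, six functions. The replies won’t always be poetry, but they make the point.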
What Time Taught Me
If I could speak to my 2015 self, I’d tell her this: in just a few months, she’s going to embark on an extraordinary learning journey — one she can’t yet imagine.
Thanks to the generosity of Google, Meta, PyTorch, NVIDIA, Bertelsmann, Amazon AWS, and all the companies that joined forces with Udacity to create life-changing tech scholarships, she’ll discover the true nature of coding: creativity in another language.
She’ll build Android apps, write neural networks from scratch, and explore deep learning, reinforcement learning, and even self-driving car engineering. And still, in 2025, she’ll be happily learning more, this time in AI for healthcare and wearable devices.
She’ll also become a data science mentor, helping students prepare for mock exams, guiding them through doubts and breakthroughs, and showing that teaching and learning are just two sides of the same joy.
And that’s exactly why, looking back, I can smile at my old fears. Because now I see what happens inside these models. I understand their architectures, their biases, their limitations and also their beauty.
Back then, I imagined the machine as a sterile translator draining meaning from words.
Now I know that what really matters is the data we feed it and the humanity we keep in the loop.
So yes, the girl who was once told at school that “girls study literature, not maths or tech” did both, and she learned that poetry and code aren’t rivals. They’re just two dialects of the same desire to understand the world.
What began as a simple story about AI and language turned out to be something else entirely: a story about glimpsing the other side of your own prophecy, and discovering that technology turned out to be far more human than you ever thought.
Original story: A Psycholinguistic Assessment from 2098
First published in Conlangs Monthly, Vol. 7, June 1st, 2015.
If this piece resonated with you, please feel free to connect with me on LinkedIn.