Nassim Nicholas Taleb on technofeudalism + reframing “I have to” as “I get to” at work (Issue #397)
Back in July, we wrote about how AI is making peer review easier to exploit. Many authors plant hidden prompts to trick automated reviewers into overlooking flaws, and the same AI programs are often being used to write and review the articles, rendering the entire process less robust.
Compounding these new vulnerabilities is the widespread assumption that “peer-reviewed” automatically means “true,” which, as one astrophysicist and NASA columnist writes, was never what peer review was meant to guarantee. His recent essay explains that when a paper clears peer review, it means an editor and a couple of referees decided the work was worth circulating to the wider community. Plenty of flashy but shaky claims, such as hints of alien life on distant exoplanets or a theory that doubled the universe’s age to 27 billion years, have passed peer review only to collapse under further scrutiny. That doesn’t mean peer review failed: Its purpose is to get ideas, even wrong ones, onto the table so they can be tested, debated, and, if necessary, dismantled. But when journals issue splashy press releases or media outlets equate “peer reviewed” with “settled,” dubious findings get inflated into fact. And public trust in science takes the hit.
Because peer review is so often mistaken for proof, bad actors can take advantage — whether through AI tricks, as noted earlier, or people gaming the system outright. Professor and science reporter Kit Yates recently wrote for Live Science that in 2023, more than 10,000 papers were retracted, including cases of fake reviewer accounts, citation cartels that traded references, and ghostwritten studies passed off as legitimate research. Once published, these papers carried the same “peer-reviewed” label as rigorous work, at least until they were exposed. Yates argues that the academic reward system itself is to blame. Universities and journals judge success by how many papers a researcher produces and how often they’re cited, creating incentives for speed and volume over substance — conditions where flimsy work and outright fraud can flourish.
—
Recommended reading:
- It’s an unspoken rule to keep pregnancy news quiet until the three-month mark. One writer ignored it, shared her pregnancy widely, and then faced the devastation of a miscarriage in public. Her essay captures how breaking the rule made grief less isolating, turning loss into connection rather than silence.
- Global networks don’t just spread ideas; they concentrate power. One essayist argues that connectivity has created a “technofeudalist” culture, where dominant players become almost impossible to dislodge. From Google’s chokehold on search to music industries dominated by a handful of global stars, he shows how hyper-connection locks power in place.
- A bag of sour candy sparks a parenting debate. One writer recalls clashing with her partner over whether to take the bag of sweets away from their daughter or leave it within reach. She argues that children only learn self-control when they’re trusted to face temptation.
- A tiny shift in language can change how the brain processes stress. One psychiatrist, Brewer, recalls working with a startup employee who kept saying he “had to” meet deadlines, until Brewer suggested reframing it as “I get to.” That swap, he explains, activated the brain’s reward system and helped the patient approach the same tasks with less dread and more motivation.
Deepen your understanding every day with the Medium Newsletter. Sign up here.
Like what you see in this newsletter but not already a Medium member? Read without limits or ads, fund great writers, and join a community that believes in human storytelling.
Learn more about what “peer review” actually means.