There’s a curious tension unfolding in the workplace right now.
Generative AI use has doubled since 2023. More companies than ever are running with fully AI-led processes. But despite the surge in tools and adoption, a recent MIT Media Lab report found that 95% of organizations aren’t seeing a measurable return on their AI investments (as cited in HBR, 2025).
So much activity. So much polish. So little payoff.
Maybe part of the problem is that we’ve started confusing what looks impressive with what’s actually impactful.
In a world where beautiful decks, clean copy, and smooth designs can be generated in seconds, polish has never been easier to produce, or more misleading.
Especially in knowledge work, high-impact results rarely come from polish alone. They come from strong judgment, thoughtful iteration, and meaningful decisions. This is the invisible labor that’s easy to miss and even easier to skip.
This one’s a bit of a departure from my normal Psykobabble posts where I usually share some juicy psych research. But this time, humor me while I meditate on something that’s been bothering me:
How do we redefine value in the age of AI?
SO WHAT DO WE KNOW?
→ We judge quality by how much effort we think someone put in.
This is called the effort heuristic: we like, value, and trust things more when we think they took longer, were harder, or required more work.
- It’s why the IKEA effect exists: people overvalue things they’ve built themselves, even if they’re lopsided and wobbly.
- It’s why we trust reviews typed on mobile more than those typed on desktop: we assume they took more effort to write.
- It’s why people rate content as less creative and less favorably when it’s labeled as AI-generated, even when it’s identical to the human version. In another study, recipients of “workslop” (low-quality AI-generated work) rated the colleagues who sent it as less creative (54%) and less capable (50%).
In short, effort is a proxy for value. We use it (consciously or not) to decide:
- What’s trustworthy
- Who’s credible
- Whether something is actually good
→ AI breaks this signal.
With AI, you can now create something that looks high-effort in seconds. The visual, verbal, and strategic outputs that used to require expertise, time, and iteration are now accessible with a well-formed prompt.
So here’s the tension:
- The quality of output is going up
- But our ability to infer the effort behind it is going down
That means:
- We trust polished content less (because it could be fake)
- We start valuing visible effort more (because it’s harder to fake)
In other words, we’re shifting from only considering “how good does it look?” to “how much deliberation went into getting there?”
→ Effort becomes the new luxury.
Just as luxury fashion brands highlight their artisanal process (not just the end product) as a signal of value, knowledge work is heading in the same direction.
This creates a new kind of signal: process transparency.
People will want to know:
- What constraints were you working with?
- What iterations and dead-ends did you encounter?
- Where did AI fall short, and where did you step in?
- What judgments did you make?
Effort isn’t disappearing. It’s just moving upstream. Intellectual friction (judgment, critical thinking, and iteration) becomes the new luxury good.
And unlike a shiny output, it’s much harder to fake.
→ Reflection helps you value your own work.
Here’s something weird: while AI helps us work faster, it might also make us feel less attached to what we produce.
The IKEA effect tells us that effort leads to attachment; in other words, we value what we toil over. But when AI does the heavy lifting, we lose some of that connection. The danger? We might start undervaluing our own contributions.
But here’s the fix: process transparency isn’t just for others, it’s for you too. Reflecting on the story of your work (the problem you solved, the pivots you made, the prompts you tried and rewrote) re-injects meaning. It turns output into personal narrative.
There’s research that supports this too: people rate tasks as more meaningful and satisfying when they engage in reflection and mentally reconstruct their journey. Simply put: reviewing the steps we took helps us value what we did.
In short: reviewing the process helps you appreciate the product.
THE NEW SIGNALS OF QUALITY
Old markers like polish, grammar, or design precision? Easy for AI to fake.
Which means people (consciously or not) are reaching for new cues that reveal not just what was made, but how and why.
You don’t need to show all of these all the time (quick executive update? Keep it high-level.) But when trust matters, like when the stakes are high, when you’re being evaluated, or when you’re evaluating someone else, choose one or two to spotlight.
These aren’t just quality checks. They’re credibility builders.
Process transparency: These probe how the work was created.
- What was the thinking behind this?
- Can I see the iterations or decision points?
- What constraints or tradeoffs did you work with?
- How did you improve on the initial idea/output?
- Where did you disagree with the AI or override it?
- What dead ends did you explore and why didn’t they work?
Critical thinking: These try to assess invisible effort and intellectual discernment.
- How did you choose between competing options?
- What risks or edge cases did you anticipate?
- How do you know this is the right solution and not just a plausible one?
- What feedback changed your mind?
Human signals: These are gut-checks to see if there’s a real person behind the polish.
- What makes this yours and not just something AI (or anyone else) could’ve done?
- What experience informed this decision?
- Does this match your usual tone or values?
Effort-as-luxury: These reflect the new premium on discernment and intentionality.
- Where’s the evidence of care?
- How was this tailored for this audience or moment?
- Would this have been different if you rushed it?
WHAT COULD THE FUTURE LOOK LIKE?
Let’s dream a little. Here’s what might change:
First-Order (Immediate) Changes
- Evaluation shift: Immediate pressure on educators and HR to stop judging the final artifact alone. Tools for plagiarism detection are replaced by tools or rubrics for process documentation and reflection.
- Process overhead: The immediate need for documentation of effort increases. Students keep process journals; employees maintain detailed decision logs. This adds a layer of administrative effort that must be justified by its evaluative value.
- Skill revaluation: Prompt literacy, critical evaluation of AI outputs, and metacognitive reflection are explicitly added to required skill sets for students and job candidates.
Second-Order (Cultural) Shifts
- Assessment redesign: Traditional high-stakes assessments (e.g., standardized final exams, static resumes) get replaced by portfolio-based assessments, oral defenses (vivas), and real-time problem-solving challenges.
- Organizational culture change: Companies shift their performance metrics to reward learning, agility, and failure analysis (the process of adjustment) rather than just success (the outcome). Learning from failure becomes a quantifiable, highly valued form of effort.
- Process becomes a trust signal: Since a great process is harder to fake than a great outcome, a well-documented process becomes a powerful and highly valued trust signal.
Third-Order (Societal) Reframes
- Wisdom > Knowledge: Focus shifts from domain knowledge (which AI can provide instantly) to domain wisdom (the human capacity to critique, synthesize, and apply that knowledge in a novel, ethically sound way). Judgment becomes the most valuable commodity.
- Effort redefined as discernment: While AI democratizes quality output (everyone can produce a “better” essay), true distinction lies in how you thought, not just what you made.
HOW DO YOU PREPARE FOR THIS FUTURE?
What does this mean for how we work, evaluate, and connect with others?
Whether you’re in UX, marketing, hiring, or education, here are a few shifts to keep an eye on:
- Don’t just show outcomes, show origin stories. The best portfolios won’t just show the hero image or polished deck. They’ll show the decision log. What did the early drafts look like? What did the AI suggest? What got thrown out? And why?
- Make critical thinking visible. Teach and reward people for how they think, not just what they create. In feedback sessions or case studies, surface the invisible work: the tradeoffs, critical questions, the moments where your judgment mattered.
- Build process signals into your workflow. Prompt chains, comment threads, tradeoff memos, error audits. Think of these not just as internal tools but as evidence of work.
- Reward iteration, not just polish. If something was done in one shot, assume it was AI. If it shows layers of refinement, hard decisions, and principled pivots? That’s probably a human at work. Reward that.
BOTTOM LINE
Whether you’re a leader, creator, marketer, or developer, your audience is starting to sense the difference even if they can’t articulate it. In an era of infinite output, it’s the path, not the polish, that builds trust.
In short: AI didn’t kill effort. It changed what effort looks like.
WANT TO LEARN MORE?
- Di Stefano, G., Gino, F., Pisano, G. P., & Staats, B. (2016). Making experience count: The role of reflection in individual learning. Harvard Business School Working Paper No. 14-093.
- Grewal, L., & Stephen, A. T. (2019). In mobile we trust: The effects of mobile versus nonmobile reviews on consumer purchase intentions. Journal of Marketing Research, 56(5), 791–808.
- Kruger, J., Wirtz, D., Van Boven, L., & Altermatt, T. W. (2004). The effort heuristic. Journal of Experimental Social Psychology, 40(1), 91–98.
- Kuzmanovic, B., Golubovic, J., & Zivadinovik, J. (2023). The impact of AI authorship on creative evaluation. Scientific Reports, 13(1), 5487.
- Mochon, D., Norton, M. I., & Ariely, D. (2012). Bolstering and restoring feelings of competence via the IKEA effect. International Journal of Research in Marketing, 29(4), 363–369.
- Niederhoffer, K., Kellerman, G. R., Lee, A., Liebscher, A., Rapuano, K., & Hancock, J. T. (2025). AI-Generated “Workslop” Is Destroying Productivity. Harvard Business Review.
WHO IS PSY.KO?
PsyKoBabble is a curation of some of my favorite psych concepts and also the latest and greatest in the realms of social psychology. Why? My background is in cultural and digital psychology — this newsletter helps me stay on top of a field I love so much, share what I’ve found, and constantly push psychology’s application to life and work in meaningful ways. Anything you want me to dig deeper in? Email me at [email protected].
