Tame the Bot: My ChatGPT Classroom Fix

I stopped fighting ChatGPT and started guiding students to use it wisely. Here’s my three-level approach to foster authentic learning in an AI world — transforming passive copy-pasters into critical thinkers.

The essay was flawless. Too flawless.

My third-year student had submitted a reflection paper on cultural identity that read like it came from a professional journal. Sophisticated vocabulary, complex sentence structures, perfectly balanced arguments. The kind of writing that takes years to develop.

When I asked her about a specific concept she’d referenced — one that hadn’t appeared in our assigned reading — she looked confused. “I didn’t notice that part,” she admitted quietly. She’d given ChatGPT her rough notes and the reading title, then submitted what it generated without reading it carefully. She trusted the AI’s output more than her own understanding.

That’s when I knew I needed a different approach.

The Reality We’re Ignoring

In the English and global studies courses I teach, students have integrated AI into every aspect of their academic lives in a remarkably short time. The numbers are stark: 70–80% of my students openly acknowledge using ChatGPT for translation. Many also use it for homework, test prep, brainstorming, and idea generation.

But here’s what the statistics don’t capture: the spectrum of use.

Some students engage in what I call “dialogic translation” — they write in Japanese, translate to English, evaluate the result, revise their thinking, and translate again. This iterative process actually supports learning. They’re using AI as a thinking partner.

Others copy and paste. One student told me: “Sometimes I don’t even read what ChatGPT writes in English. If it looks long enough and has no red underlines in Word, I submit it.”

The difference is everything. And most university policies don’t distinguish between them.

Meanwhile, faculty are stuck. We can tell when students use AI — the writing is suddenly too polished, the voice too generic. But confronting them leads nowhere. They deny it, or worse, they’re genuinely confused about what’s acceptable. Traditional plagiarism detection doesn’t work on AI-generated text because it’s technically original, even if the student didn’t write it.

We’re fighting a battle we’ve already lost. Students will use these tools whether we permit them or not. The question isn’t whether to allow AI in education — it’s whether we’ll teach students to use it thoughtfully.

What Actually Works: The Three-Level Approach

After experimenting with hundreds of students across a dozen classes, speaking with teachers at every level in many different countries, and asking students directly about their experiences, I’ve developed a three-level framework for addressing AI literacy. You can implement the first level tomorrow. The others take longer but create lasting change.

Level 1: In Your Classroom Tomorrow

Establish clear expectations from day one. I frame the student-AI relationship explicitly in my syllabus and opening class:

“You will learn to work alongside AI as a helper, as a private tutor. You will accept the role of manager of your own growth and learning. You will remain committed to refining your thinking, improving your communication, and increasing your proficiency in AI literacy.

“This translates to pursuing learning with the idea that the student is responsible for the inspiration, the opinions, the driving force. The AI is responsible for support, clarification, organization, and refinement of ideas and language.”

This clear division of labor changes everything. Students understand they can’t outsource the thinking — the inspiration, the opinions, the driving force must come from them. But they can get help with expression, organization, and clarity.

Teach a three-stage framework for every assignment. I’ve moved from vaguely telling students to “use AI to help you” to a structured approach that makes cognitive growth visible:

Stage 1: Independent Thinking

Students do the cognitive work first:

  • Original creation of ideas
  • Brainstorming
  • Stating their position
  • Outlining their argument

No AI at this stage. This is where learning happens — in the struggle to formulate thoughts, wrestle with complexity, and find their own voice.

Stage 2: AI Consultation

Now students can engage AI as a thinking partner. I teach them to structure their prompts with four components:

  1. Describe the context: “I’m writing an essay about cultural identity for my Global Studies course.”
  2. Describe the task: “I need to write about how social media affects Japanese culture.”
  3. Describe your original ideas: “I think social media makes Japanese people copy American trends. Like everyone uses Instagram and TikTok now. But some Japanese traditions are still popular too.”
  4. Specify the assistance you want: “Can you help me organize these ideas better?” or “What examples could I use to explain this?”

The AI expands on what they’ve already created, offering possibilities they might not have considered.

Stage 3: Critical Integration

Students evaluate the AI’s responses, choose what resonates with their message, and integrate their ideas with AI’s suggestions — often with AI assistance — to create a polished final product.

This is where they develop judgment. Sometimes the AI suggestion is better. Sometimes their original phrasing is stronger. They learn to be critical consumers, selecting what serves their purpose and rejecting what doesn’t.

One student told me: “Before, I would just let ChatGPT write everything. Now I write first, then use it to see different options. I’m actually choosing instead of just accepting.”

But I’d be dishonest if I said this works perfectly every time. Dependence creep is real. Even with clear expectations, some students gradually shift more cognitive work to Stage 2, starting with bullet points instead of full ideas. I combat this by requiring evidence of Stage 1 work for major assignments — their initial brainstorm, rough outline, first attempt at articulating their argument. It’s not foolproof, but it makes the cognitive work visible and harder to skip.

Be specific about which activities should and shouldn’t involve AI. For each assignment, I now explicitly state:

  • Which activities should involve AI (and how): “Use ChatGPT to generate five possible research angles on climate policy. Then choose one and research it yourself.”
  • Which activities shouldn’t: “Do not use AI to write your analysis. The thinking must be yours.”
  • Specific prompts they can use: “After writing your initial paragraph, try asking: ‘I wrote this analysis of [topic]. Can you suggest three different ways to express this main idea more clearly in English?’”

This specificity helps students develop judgment. They start to see that brainstorming and final analysis are fundamentally different cognitive tasks.

Make the purpose of struggle visible. When I explain why the hard parts matter, students come to understand that their educational journey is their responsibility, and that the growth of their knowledge deserves care and consideration.

I’ve had students tell me this framing changed everything. One said: “I never thought about whether I was actually learning. I just thought about finishing assignments. Now I ask myself: is this making me smarter or just making my work look better?”

That’s the metacognitive awareness we’re after. Students understanding that difficulty isn’t a bug in the education system — it’s a feature. The hard work of formulating your own thoughts, wrestling with complex ideas, finding your own voice — that’s where learning happens. AI can support that process, but it can’t replace it.

Students still struggle with critical evaluation. When ChatGPT uses terminology they don’t fully understand or makes authoritative-sounding claims, many simply trust it. I now include “AI fact-checking” exercises where students verify AI-generated claims, and “voice recognition” activities where they identify which parts of a hybrid text sound like them and which don’t. It’s an ongoing learning process, not a one-time lesson.

Level 2: With Your Department

Individual solutions help, but inconsistency creates problems. When one professor prohibits AI entirely while another encourages it, students get contradictory signals that undermine both approaches.

Start a once-a-term faculty meeting. Not a lecture, not a policy presentation — a structured conversation where faculty meet as equals.

Here’s the format I use:

I begin with a brief framing of the issue (10 minutes), then divide faculty into small groups of 3–4. Each group gets thoughtful prompts designed to surface real classroom experiences:

  • “Describe a moment when you suspected or confirmed AI use in student work. What did you do?”
  • “What’s one assignment or teaching practice that seems to work well in the AI era?”
  • “What’s your biggest concern or challenge right now regarding AI in your courses?”
  • “What guidance or support would actually help you address AI in your teaching?”

The groups discuss for 20–30 minutes while I circulate. Then we reconvene for brief sharing — not to reach consensus, but to make visible the range of approaches and challenges across our department.

This format works because it puts everyone on equal footing. Senior faculty aren’t lecturing junior faculty. Those comfortable with technology aren’t condescending to those who aren’t. Everyone has classroom experience to contribute. The technophobe teaching literature has insights the digital native teaching data science needs, and vice versa.

Follow up with documentation. After each session, I send a brief questionnaire asking faculty to reflect on:

  • One thing they learned from the discussion
  • One thing they plan to try in their teaching
  • One ongoing challenge they’re still grappling with
  • Any resources or support they need

I tally the responses, analyze patterns, and log the data. This serves three purposes:

First, it shows me what’s actually happening in classrooms across our department — the ground truth, not assumptions.

Second, it reveals where faculty need support. If eight people mention struggling with redesigning essay assignments, that tells me what the next session should address.

Third, it creates institutional memory. When we meet next term, I can share anonymized patterns: “Last term, we discussed concerns about academic integrity. Since then, six faculty tried process-based assignments. Here’s what they found…”

This documentation transforms isolated experiments into collective knowledge-building.

The result? I held my first faculty meeting last term, and the response exceeded my expectations. Faculty now know what colleagues are trying. New approaches are beginning to spread organically. When a student moves from Professor A’s class (where AI use for translation is explicitly permitted) to Professor B’s class (where it’s prohibited for final essays), the inconsistency can now be explained and defended rather than left arbitrary and confusing.

More importantly, faculty felt less alone. The overwhelmed professor who thought everyone else had it figured out discovered that her colleagues were equally uncertain. The early adopter experimenting with AI-integrated assignments found allies. The skeptic worried about academic integrity found his concerns validated while also hearing approaches he hadn’t considered.

The barriers dissolved. Senior and junior faculty, the tech-comfortable and the tech-wary, talked as equals, each with classroom experience to contribute.

That’s why I’m planning to make this a regular, once-a-term practice. One meeting opened doors and created connections. Regular meetings can build the shared understanding and collective knowledge our department needs to navigate this challenge together.

Level 3: Institution-Wide Support

Individual faculty and departments can accomplish a lot, but some things require institutional investment.

Faculty development that actually helps. Most faculty workshops on AI focus on technology demonstrations. We need pedagogical training.

Effective faculty development addresses:

  • How to design assignments where AI supports rather than replaces learning
  • How to teach students the three-stage approach to AI use
  • How to have productive conversations with students about cognitive growth
  • How to assess learning when students have AI access
  • How to structure prompts that make AI a thinking partner rather than a replacement

The best approach? Learning communities, not lectures. Small groups of faculty from different disciplines experimenting together, meeting regularly to share results.

Policies that enable rather than constrain. The worst institutional response is blanket prohibition that everyone ignores. The best provides clear principles while allowing flexibility.

A model policy includes:

  • Recognition that AI is part of the contemporary information landscape
  • General principles distinguishing appropriate from inappropriate use (students responsible for inspiration and ideas, AI responsible for support and refinement)
  • Framework for individual instructors to set specific expectations for their courses
  • Resources and support for both students and faculty
  • Regular review and revision as technology evolves

Public perception can be challenging. Parents and the broader public may view AI integration as lowering standards. Clear communication about how the institution’s pedagogical mission is evolving helps counter that impression. We’re not lowering standards; we’re raising different ones. Instead of measuring only final products, we’re teaching judgment, critical thinking, and metacognitive awareness. When universities explain that developing autonomous thinking and authentic voice in an AI-saturated world is harder, not easier, than traditional education, stakeholders can understand the pedagogical reasoning.

The Japanese Advantage

Here’s something that might surprise you: Japanese universities are actually well-positioned to lead in AI literacy education, despite being slower to develop formal policies than some Western institutions.

Why? Three cultural strengths:

Collective responsibility over individual policing. Instead of framing AI ethics as individual moral choices, Japanese approaches can emphasize how individual choices affect the broader community. “Your AI use impacts your classmates’ learning environment and our collective trust” resonates more deeply than “Don’t cheat because it’s wrong.”

This aligns perfectly with the three-stage approach. It’s not about policing individual behavior but about everyone accepting responsibility for their own cognitive growth and learning journey.

Process emphasis over product obsession. Japanese education already values the learning journey. This aligns perfectly with making Stage 1 (Independent Thinking) non-negotiable. When we make visible the stages of intellectual work — formulating questions, gathering information, analyzing evidence, constructing arguments — students can see where AI fits and where it doesn’t.

The three-stage framework isn’t just about achieving a polished final product. It’s about honoring the process of growth, which Japanese educational culture already respects.

Long-term development over quick fixes. While American universities raced to implement policies within weeks of ChatGPT’s release, Japanese institutions took time to study the issue. That deliberation, though frustrating to some, may produce more thoughtful, sustainable approaches than reactive prohibition.

Teaching students to be “managers of their own growth and learning” is inherently a long-term developmental goal, not a quick fix. Japanese universities can embrace this timeframe as a strength rather than apologizing for it.

The challenge is balancing these strengths with the need for timely action. Students are using AI now, not waiting for perfect policies. But Japanese universities can develop responses that are both culturally grounded and globally relevant.

From Gatekeeper to Guide

The fundamental shift required is in how we see our role as educators.

For decades, we’ve been gatekeepers — controlling access to knowledge, detecting cheating, and maintaining academic standards through enforcement. AI makes that position untenable. We can’t effectively police AI use, and trying makes us adversaries rather than mentors.

Instead, we need to become guides. This means teaching judgment, not just rules.

The three-stage framework does exactly this. We’re not telling students “never use AI” or “use AI however you want.” We’re teaching them to distinguish between cognitive work that builds their capacity and assistance that supports their expression. We’re helping them develop the metacognitive awareness to ask: “Is this making me smarter or just making my work look better?”

That’s a question they’ll need to answer throughout their lives, in contexts we can’t predict. Our job isn’t to control their choices but to equip them to make good ones.

This requires vulnerability from us as educators. It means admitting we don’t have all the answers. It means experimenting alongside students, making our thinking visible, and showing them our own process of learning to work with AI.

One of my most powerful classroom moments came when I demonstrated my own use of AI for a teaching task. I showed students my initial brainstorm, my prompt to ChatGPT, my evaluation of its suggestions, and my final synthesis. Several students told me afterward that seeing my process — including what I rejected from the AI and why — helped them understand the critical judgment they should be developing.

I won’t pretend this approach is easier than prohibition. Teaching students to use AI thoughtfully requires more one-on-one conversations, more modeling, more process check-ins. It’s more labor-intensive. But the time investment pays off. Students who learn this framework become more independent, not less. By mid-semester, they’re asking better questions and making better choices. The upfront investment reduces the back-end problems: fewer academic-dishonesty confrontations and less shallow learning.

We model the behavior we want to see. If we hide our AI use or pretend we don’t need these tools, students learn that AI use is shameful. If we demonstrate thoughtful, strategic use, they see a path forward.

The Choice Ahead

My students will graduate into a world where AI is everywhere — in their workplaces, their research, their daily communications. They’ll use AI to write reports, analyze data, translate languages, and make decisions.

But more fundamentally, they’ll graduate into an uncertain future where the ability to think autonomously matters more than ever. Where voicing true opinions — not algorithmically generated ones — creates genuine connection. Where sharing unique thoughts and feelings, not polished but empty prose, allows us to learn about ourselves, our cultural place in the world, and humanity at large.

That’s what I try to give my students: English, communication, environmental awareness, and AI literacy — skills for navigating uncertainty. But underlying all of it is the goal of beefing up their brains, so to speak. Getting them to do their own original thinking. To practice voicing what they truly believe. To practice telling the world about their unique perspectives.

In my opinion, that should be a primary quest during the university years. Through sharing and connecting authentically, students discover who they are and who they can become.

The question isn’t whether they’ll use AI. It’s whether they’ll have developed the critical judgment, ethical awareness, and metacognitive skills to use it responsibly — and more importantly, whether they’ll have strengthened their capacity for original thought, authentic voice, and genuine human connection.

We can prepare them for that reality, or we can pretend it doesn’t exist.

I chose preparation. That student who submitted the essay she hadn’t read? She’s now one of my most thoughtful AI users. After we talked, she started following the three-stage approach. She does her thinking first, then consults AI for specific assistance, then critically evaluates what to keep and what to discard. She uses AI strategically — for brainstorming, for language polishing, for exploring angles she might not have considered — but the intellectual work is unmistakably hers.

More importantly, she’s found her voice. Her essays now sound like her — not like a professional journal, but like an intelligent young woman wrestling with complex ideas and finding her own answers. That authenticity, that willingness to share her unique perspective even when it’s still developing, is worth more than any polished AI-generated prose.

That transformation didn’t happen through prohibition. It happened through clear expectations, structured guidance, and explicit teaching about the difference between AI as support and AI as replacement.

You can start tomorrow. Introduce the three-stage framework. Frame students as managers of their own learning. Be specific about which activities should involve AI and which shouldn’t. Give them concrete prompts they can use.

The students are already using these tools. The only question is whether we’ll teach them to use them well — and whether we’ll help them strengthen the autonomous thinking, authentic voice, and genuine connection that make them irreplaceably human.

Anthony Lavigne prepares students for an uncertain future by teaching what AI can’t replace: autonomous thinking, authentic voice, and genuine human connection. He is based in Kyoto, Japan.
