Gen Z Gets It: They’re Wary of AI in School, and That’s Smart


According to Fortune, new research from the College Board shows a fascinating tension among Gen Z students. Between January and May of this year, the percentage of high schoolers using generative AI for schoolwork jumped from 79% to 84%. But here’s the kicker: two-thirds of them agree that overusing AI could make them overly dependent on the tech or even less intelligent. The vast majority of school administrators (93%) see value in students learning AI, yet nearly one in five districts still have no formal AI policy, leaving teachers scrambling. High school teacher Gonzalo R. Laverde argues that students want guardrails because they don’t want to cheat—they want to be proud of their own work.


The policy void is a real problem

Look, an 84% adoption rate among teenagers for *anything* is a tidal wave. And schools are famously bad at reacting to tidal waves. The finding that roughly a quarter of districts leave AI decisions to individual teachers is telling. It outsources a massive, systemic challenge to already overburdened educators. The result? A chaotic patchwork where what’s “cheating” in one classroom is “creative tool use” in the one next door. That’s not fair to anyone—teachers or students. And when over 90% of principals worry about teacher preparedness, you’ve got a system playing catch-up in a race it’s already losing.

What’s the point of school now?

This is the existential question lurking behind all the survey data. If an AI can brainstorm, research, and polish writing, what are we actually teaching? The article hints at the answer: discernment. That’s the skill that separates using a tool from being used by it. It’s encouraging that students seem to intuitively grasp this risk. They’re not mindless tech zombies; they’re cautious participants. But instinct isn’t enough. Without structured guidance, that caution can easily give way to convenience. “Why struggle with this essay intro when ChatGPT can give me five good ones in two seconds?” That’s the siren song we’re asking teenagers to resist on their own.

This is bigger than essays

Think about it. We’re preparing kids for a world where this tech will be embedded in everything. The workplace equivalent isn’t just writing reports; it’s analyzing data, diagnosing problems, and managing complex systems where AI is a co-pilot. The core challenge is the same: critical thinking and judgment can’t be outsourced. The College Board research and other surveys, like one covered by Inside Higher Ed on college students, show this awareness is there. The question is whether the education system can build curricula around that principle before the “gray area” of use solidifies into bad habits.

The path is clear, but rocky

So what’s the move? The article’s call for “guardrails” is right, but vague. It means clear, age-appropriate policies on what constitutes ethical AI assistance versus academic dishonesty. It means professional development for teachers that goes beyond just fear-mongering. And honestly, it might mean radically rethinking some assignments. If an AI can do it trivially, maybe it wasn’t assessing a valuable skill in the first place. The good news is the students seem ready for this conversation. They’re waiting for the adults to catch up and provide the framework. The bad news? In education, catching up can take a decade. AI isn’t going to wait.
