Students Are Using AI More Than Ever — And They Know It’s a Problem
The Numbers Don’t Lie — But They Do Contradict
Here’s a number that should stop every educator and parent mid-scroll: 62 percent. That’s the share of U.S. middle school, high school, and college students who used AI to help with homework as of December 2025 — up from 48 percent just seven months earlier. A 14-point jump in seven months.
But here’s the part that changes everything: 67 percent of those same students believe AI is actively harming their ability to think critically.
That’s not a contradiction. It’s a diagnosis. Students aren’t oblivious to the risks — they’re caught in a system that hasn’t given them a better option. And that gap between what students do with AI and what they understand about it is the single biggest literacy crisis in education right now.
These findings come from RAND’s American Youth Panel, a nationally representative survey of 1,214 youth between ages 12 and 29. Published March 17, 2026, it’s the most comprehensive snapshot we have of how young Americans are navigating AI in their academic lives. The picture it paints isn’t one of reckless kids gaming the system. It’s one of students sprinting ahead of every institution meant to guide them.
The Adoption Curve Is Steeper Than Anyone Predicted
The RAND data confirms what many educators suspected but couldn’t prove: AI adoption among students isn’t leveling off. It’s accelerating. Middle and high schoolers — not college students — are driving the surge. That matters, because these are the students with the least formal training and the fewest institutional guardrails.
Chatbots dominate the landscape. Sixty percent of students report using AI chatbots for school, with ChatGPT commanding 53 percent of usage. Google Gemini more than doubled its share, climbing to 28 percent between May and December 2025. Students aren’t just experimenting — they’re embedding these tools into their daily workflows.
The most common uses tell their own story: 38 percent use AI to get better explanations of assignments, 35 percent for brainstorming ideas, and 33 percent for looking up facts or drafting and revising writing. Now, you’re probably thinking — aren’t those legitimate learning activities? Some of them are. But here’s the critical distinction the RAND researchers make: there’s a world of difference between cognitive augmentation (using AI to deepen your thinking) and cognitive offloading (using AI to skip the thinking entirely).
And students know the difference. They just don’t know which one they’re doing.
The Critical Thinking Paradox
This is where the data gets genuinely fascinating — and genuinely concerning. Two-thirds of students believe AI use harms critical thinking skills. That’s up more than 10 percentage points from earlier in 2025. Female students feel this even more acutely: 75 percent of young women surveyed agreed that AI erodes thinking skills, compared to 59 percent of young men.
Yet the same students who express this concern continue using AI at increasing rates. This isn’t hypocrisy. It’s a rational response to an irrational environment. When your school doesn’t have a clear AI policy — and RAND found that only about one-third of students say theirs does — you default to whatever gets the assignment done. When rules vary from teacher to teacher, you learn to navigate ambiguity rather than principle.
67%
of students say AI use harms critical thinking — yet 62% use it for homework anyway. The gap between awareness and action is where literacy breaks down.
Students in higher grade levels were more likely to report that AI rules depended on the specific teacher, that teachers were checking for AI use, and that they worried about being accused of cheating. In other words, the older and more experienced the student, the more they feel the absence of clear, consistent guidance.
The Cheating Question Is More Nuanced Than You Think
Here’s where the conversation usually derails. “Students are using AI to cheat” is the headline that writes itself — but the data tells a more complicated story.
Nearly 80 percent of students said using AI to understand an assignment is not cheating. Seventy-two percent said the same about brainstorming ideas. Sixty-seven percent said looking up facts with AI isn’t cheating either. The one clear line? Getting direct answers to homework questions — 45 percent of students acknowledged that crosses into cheating territory.
Students have already built their own ethical framework for AI use. It’s not perfect, and it’s not consistent, but it exists. The mistake most schools make is ignoring it entirely — either banning AI without nuance or permitting it without boundaries. Neither approach respects what students already understand about the tool.
A separate Jobs for the Future (JFF) survey of 3,020 learners, released the same week, adds another dimension: 70 percent of students now use AI daily or weekly for education, and institutions are responding — 69 percent of learners received AI training from their school. Yet when it comes to learning how to use AI, 48 percent of students turn to social media first. They’re bypassing formal channels because those channels haven’t earned their trust yet.
What the Data Actually Demands
The RAND researchers don’t just present the problem — they offer a framework that aligns perfectly with what Harvest Kernel has been arguing about AI and assessment. Their core recommendation: schools need to distinguish between cognitive offloading and cognitive augmentation, then build policies around that distinction.
In practice, this means something specific. The flipped classroom model — where students encounter new content at home (AI-assisted or not) and then practice applying it during teacher-led, AI-free class time — emerges as one of the strongest structural responses to the homework AI problem.
But structure alone isn’t enough. The RAND team specifically recommends that educators have direct conversations with students about their own perceptions of AI use and its effects on their thinking. Not lectures. Not warnings. Conversations — the kind where students’ existing ethical frameworks are acknowledged and built upon rather than dismissed.
🌱 The Seed
Students don’t need to be told AI is risky. They already know. What they need is a framework that helps them use AI to think harder, not less. That’s not a policy problem — it’s a literacy problem. And literacy is built through practice, not prohibition.
Why This Matters Beyond the Classroom
The students filling out the RAND survey today are the employees, entrepreneurs, and citizens of 2030. The habits they build now around AI — whether those habits are thoughtful engagement or mindless offloading — will define their professional capability for decades.
When 62 percent of students use a tool they believe damages their thinking, and the institutions around them can’t agree on whether that’s acceptable, we have a systemic failure of preparation. Not a technology problem. Not a cheating problem. A literacy problem.
This is exactly the gap that SeedStacking was designed to close. Not by banning tools or blindly adopting them, but by building the layered competency — understanding, then applying, then creating, then evaluating — that turns AI from a crutch into a capability. The students in this RAND study aren’t broken. They’re ahead of the curve, navigating complexity that most adults haven’t confronted yet.
They just need the literacy framework to match their adoption speed.
Ready to build that framework? Start with the free Harvest Kernel community →
Sources
- Schwartz, H.L. and Diliberti, M.K. “More Students Use AI for Homework, and More Believe It Harms Critical Thinking: Selected Findings from the American Youth Panel.” RAND Corporation, March 17, 2026. rand.org
- “Student Use of AI for Homework Rises as Concerns Grow About Critical Thinking Skills.” RAND Corporation Press Release, March 17, 2026. rand.org
- Palmer, K. “For AI Help, More College Students Ask Social Media First.” Inside Higher Ed, March 20, 2026. insidehighered.com
