
Future Teachers Are Already Split on AI. That’s the Problem.

Two future teachers. Same generation. Two completely different answers when you ask them about artificial intelligence in the classroom.

That’s the core finding of a new National Education Association report on aspiring educators and AI, published April 17, 2026. The NEA surfaced real voices from pre-service teachers. One, a Venezuelan-American future educator named Pacheco, sees AI as a potential equity tool for multilingual learners. Another, named Rapp, sees it as a hallucination machine that once fed him a list of ten court cases, half of which were fabricated.

They are both right. And the fact that they are both right is exactly the problem nobody is naming.

The split is not the issue. The silence is.

There is a comfortable narrative in education right now that says Gen Z teachers are going to solve the AI problem because they grew up digital. The thinking goes something like this: they know their way around a screen, so they will figure out a screen that talks back. They are the first cohort of full digital natives entering the profession. Hand them AI and they will take to it like fish to water.

This narrative is wrong, and it’s a convenient kind of wrong. It lets teacher prep programs off the hook.

Being comfortable with technology is not the same thing as being fluent with it. A student who can scroll TikTok at three posts per second is not automatically a student who can tell you when an AI output is wrong. Those are different skills, and the second one is what actually matters in a classroom.

80%+
of middle and high school teachers report using AI at school or personally this year, up from roughly half last year (Center for Democracy and Technology)

Here is what the adoption curve looks like in real numbers. One year ago, roughly half of middle and high school teachers said they had used AI at school or personally. This year, according to the Center for Democracy and Technology, that figure is over 80 percent. Ninety-three percent of those same teachers say their students are using it too.

Pre-service programs are still deciding whether to add an AI unit.

What Pacheco sees that most policy papers miss

The most interesting moment in the NEA piece is not the debate over whether AI belongs in the classroom. It’s a small line from Pacheco about her childhood.

Growing up as a Spanish-speaking child of Venezuelan immigrants, she was regularly pulled out of her own learning to translate for other students, parents, and teachers. She did not sign up for that role. The school system did not resource it. It was just expected of her because she was the kid who could do it.

AI, in her view, could close that gap. Real-time translation between teachers, students, and families, at scale, without turning a twelve-year-old into a part-time interpreter.

This is not a pros-and-cons debate. This is an equity argument with a specific use case and a specific harm it prevents. And it’s coming from someone who has not graduated yet.

Pre-service teachers like Pacheco are bringing a level of lived experience to this conversation that most tech rollouts never consult. If your AI policy does not include her point of view, your AI policy is incomplete.

What Rapp saw that should scare every future teacher

Rapp’s warning is the other side of the same coin, and it’s equally concrete. He asked ChatGPT for ten court cases on Pennsylvania homeschooling law. Some of the cases were federal. Some were not related to homeschooling at all. Some were entirely fabricated.

That is not an edge case. That is how language models work when they don’t have a grounded source. They will produce an answer-shaped object that satisfies the prompt and fails the fact check.

A future teacher who does not know that is going to put fabricated content in front of students. Not because they are careless. Because nobody told them this was a failure mode built into the tool.


The fix here is not a warning sticker on ChatGPT. The fix is treating AI output the way we already teach pre-service educators to treat student work and source material: verify, cross-reference, and never take it at face value. We already know how to teach that skill. We teach it every time we run a research methods course.

The problem is that nobody has connected that muscle to AI yet. The frame changed, but the curriculum did not.
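For programs that want to make the verify-and-cross-reference habit concrete, the workflow can be sketched in a few lines of code. This is a minimal illustration, not a real legal-research tool: the case names and the vetted reference list below are entirely hypothetical placeholders, and in practice "verified" would mean checked against an authoritative database or a librarian, not a hardcoded set.

```python
# Sketch of the "never take AI output at face value" habit:
# sort AI-cited items into confirmed vs. needs-manual-check
# before anything reaches a lesson plan.

# Assumption: a vetted reference list exists. All names here are
# hypothetical placeholders, not real court cases.
VERIFIED_CASES = {
    "Case Alpha v. District Beta",
    "Case Gamma v. State Delta",
}

# What a model might return: a mix of real and fabricated citations.
ai_output = [
    "Case Alpha v. District Beta",   # matches the vetted list
    "Case Epsilon v. Board Zeta",    # does not -- flag for review
]

def triage(citations, verified):
    """Split AI-cited items into confirmed vs. needs-manual-check."""
    confirmed = [c for c in citations if c in verified]
    unverified = [c for c in citations if c not in verified]
    return confirmed, unverified

confirmed, unverified = triage(ai_output, VERIFIED_CASES)
print("Confirmed:", confirmed)
print("Verify by hand:", unverified)
```

The point of the exercise is not the code itself but the default it models: every AI-generated citation starts in the "verify by hand" pile until a trusted source moves it out.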

The real divide is not pro-AI versus anti-AI

Read the NEA piece carefully and the split inside it is not what it looks like on the surface. It is not Team AI versus Team No AI. Pacheco and Rapp both use it. Pacheco has simply seen its equity upside up close. Rapp has simply seen its failure mode up close.

The real divide is between future teachers who have been taught a framework for using AI and future teachers who are figuring it out alone on a Tuesday night. That divide is not generational. It’s institutional. It reflects which programs took AI literacy seriously and which ones kept it optional.

If we let that divide harden, we will produce two classes of new teachers. The ones who had an AI-fluent methods professor, and the ones who didn’t. The first group will arrive ready to make practical calls: when to use it, when not to, how to check its work, how to explain it to students. The second group will arrive guessing.

The kids in their classrooms will not know which group got the prepared teacher and which got the improvising one. But the outcomes will show up anyway, slowly, in the quality of instruction and in which students get an adult who can model thinking with AI instead of flinching from it.

What pre-service programs actually need to teach

You don’t fix this with a workshop. A two-hour session during orientation week is the educational equivalent of showing a new driver a steering wheel and calling it done.

The work is smaller and more honest than that. Aspiring educators need daily, low-stakes practice with AI tools in the context of the actual teaching tasks they will face. Lesson design. Feedback on student drafts. Differentiated questions for different reading levels. Parent communication across languages, which is Pacheco’s use case. Source verification, which is Rapp’s warning.

That is where SeedStacking comes in. At Harvest Kernel, we build AI fluency by stacking small wins, one deliberate practice session at a time, until the skill is muscle memory instead of a special occasion. For aspiring educators, that means a semester where they use AI on real assignments, get real feedback on their outputs, and compare results with classmates. Not a demonstration. A repetition.

Once that base is there, the harder conversations become possible. Student data privacy, which is the IEP concern Pacheco named. Academic integrity. Bias in outputs. Hallucinations. These topics land differently when the aspiring teacher has already used the tool enough to know what it does and does not do well.

The Harvest Kernel take

If you are running a pre-service program, the question is not whether to add AI. Your students are already using it. The question is whether you are going to add it as a structured part of the training or as a gap they fill in by themselves, after hours, with no one to correct course.

Every Gen Z teacher you graduate is carrying AI into their first classroom either way. The only variable you control is whether they walk in prepared or improvising.

Pacheco and Rapp are both right. AI is an equity tool and AI is a hallucination machine. Both things are true at the same time, and a well-trained future teacher needs to hold both thoughts at once. That’s not a character trait. That’s a curriculum outcome.

Get the curriculum right, and the split in the NEA report stops looking like a problem. It starts looking like a class discussion.

Ready to go beyond reading and start building AI fluency?

Join the free Harvest Kernel community for practical guidance, fresh ideas, and tools that help you make AI useful in real life.

Join the Free Community


Dean Le Blanc

Founder, Harvest Kernel

AI literacy educator and creator of the SeedStacking methodology. Dean teaches educators, professionals, and lifelong learners how to build genuine AI fluency through small daily wins that compound into real capability. Join the Learning Community →
