What If AI Made Students Think More, Not Less?
Most conversations about AI in education follow the same tired script. One side says ban it. The other side says embrace it. Both sides are having the wrong conversation.
A small group of professors decided to skip the debate entirely. They stopped arguing about what students were doing with AI and started building AI tools that change what AI does to students. The result is a new category of educational technology that flips the entire premise of generative AI on its head.
Instead of tools that answer questions, they built tools that ask them.
The Moment Everything Changed
In the fall of 2022, Dan Wang noticed something shift in his Technology Strategy class at Columbia Business School. Students who used to arrive with carefully reasoned arguments about business decisions were suddenly showing up with summaries generated by ChatGPT. The preparation was faster. The thinking was thinner.
Wang could have done what most professors did: ban the tool, add honor code language to the syllabus, and try to outrun the technology. He chose a different path. He started building.
The result is CAiSEY (Classroom Artificial Intelligence Studio for Engaging You), a voice-powered AI discussion partner that does something commercial AI tools were never designed to do. It argues with students. It challenges their assumptions. It pushes back on weak reasoning. And critically, it refuses to give them answers.
What began as a prototype in one course has grown into a platform used at Columbia, UC Berkeley, the University of Pennsylvania, the University of Virginia, and over a dozen other institutions globally. And Wang’s research is backing up what he suspected from the beginning: the students who argue with AI before class arrive more prepared, more confident, and more willing to engage with perspectives they disagree with.
Why Voice Changes Everything
Here is the detail that most people miss. CAiSEY is not a chatbot. It is a voice-to-voice conversation partner. Students do not type prompts. They talk. And that distinction turns out to be the difference between thinking and performing.
Wang’s team found that students who engaged in voice-based AI conversations covered a significantly wider range of topics than those who used text-based interactions. The voice sessions averaged 22 minutes each, with some students debating for up to 45 minutes. Over 93% of students rated the experience “great” and wanted more.
The classroom impact was even more striking. When students arrived having already debated their positions out loud with an AI adversary, the in-person discussions became richer, lasted longer, and were more civil. Students were better prepared to accommodate opposing views because they had already been forced to defend their own.
“A lot of AI tools in education are designed to make things more efficient. CAiSEY capitalizes on precisely the opposite: the capacity to slow students down.”
Dan Wang, Columbia Business School
That quote captures something profound about the current moment in AI education. The dominant narrative says AI makes everything faster. These professors are proving that the real power of AI is making certain things deliberately slower.
The Cognitive Divide Nobody Is Talking About
This story matters because the alternative is already happening at scale. A 2026 survey by the UK National Education Union found that 66% of secondary teachers have observed a decline in critical thinking skills among their students as a result of AI use. Two-thirds. That is not a distant warning. It is evidence that the default pattern of AI adoption in education is actively degrading the very skill it should be developing.
Vivienne Ming, chief scientist at the Possibility Institute, describes this as a growing cognitive divide. A small minority of people use AI to enhance their thinking. A much larger majority use it to replace their thinking entirely. The tool is the same. The outcome depends entirely on the intention behind its use.
The OECD Digital Education Outlook 2026 reinforces this finding. Hybrid systems that combine AI with explicit pedagogical models, such as structured debate or evidence-centered assessment, show far more promise than general-purpose chatbots used without instructional design. The tool alone does nothing. The design behind the tool does everything.
That is exactly what Wang and his colleagues figured out early. They did not adopt AI. They designed AI. There is a canyon between those two approaches.
It Is Not Just Columbia
At Georgia Tech, professors in the School of Electrical and Computer Engineering built a custom AI tutor designed to help students work through challenging homework problems without simply handing them solutions. The tool asks guiding questions, identifies where students’ reasoning breaks down, and redirects their thinking rather than replacing it.
Students reported that this faculty-designed AI tutor was more helpful than commercial language models precisely because it was constrained. When a general-purpose chatbot gives you the answer, you move on. When a purpose-built tutor asks you why you think the answer is what it is, you learn.
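The design choice behind such a tutor is simpler than it sounds: the constraint against giving answers lives in the instructions sent to the model, not in the model itself. Here is a minimal sketch of that pattern. To be clear, Georgia Tech's actual implementation is not public; every name, rule, and example here is invented for illustration, and the message format follows the generic chat-style convention most LLM APIs share.

```python
# Hypothetical sketch of a "constrained tutor" -- not the Georgia Tech tool's
# real code. The key idea: the refusal to hand over answers is encoded in the
# system instructions, so any general-purpose chat model becomes purpose-built.

SOCRATIC_RULES = (
    "You are a homework tutor. Never state the final answer. "
    "Instead: (1) ask one guiding question at a time, "
    "(2) point to the step where the student's reasoning breaks down, "
    "(3) redirect the student back to their own work."
)

def build_tutor_messages(problem: str, student_attempt: str) -> list[dict]:
    """Assemble a chat-style request: constraints first, student work second."""
    return [
        {"role": "system", "content": SOCRATIC_RULES},
        {
            "role": "user",
            "content": f"Problem: {problem}\nMy attempt: {student_attempt}",
        },
    ]

# Example: the tutor sees both the problem and the student's reasoning,
# so it can locate the flawed step instead of just emitting the solution.
msgs = build_tutor_messages(
    "Find the equivalent resistance of the circuit.",
    "I added the two resistors in series and got 12 ohms.",
)
```

The point of the sketch is the ordering: because the rules arrive as a system message before the student's work, the same commercial model that would normally hand over the answer is redirected into asking questions instead.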
Rahul Bhandari, distinguished senior lecturer at UVA’s Darden School of Business, has adapted CAiSEY for his own courses and seen similar results. The tool is not a substitute for classroom discussion, he notes, but it prepares students to show up with more articulate, well-structured arguments. The AI becomes a rehearsal partner for human conversation, not a replacement for it.
The Accessibility Breakthrough Nobody Expected
One of the most powerful findings from CAiSEY’s deployment has nothing to do with debate preparation. Around 20% of US adults are dyslexic or dysgraphic, conditions that make reading and writing significantly more difficult. The traditional education system is built almost entirely around those two skills.
Wang received feedback from students who told him that CAiSEY was the first educational tool in 20 years of schooling that accommodated their learning style. Voice-based AI conversation gave them a way to engage deeply with course material without the barrier of text. For these students, the tool did not just improve preparation. It changed their relationship with learning itself.
The Takeaway
The professors who built these tools understood something that the broader education system is still catching up to: AI that gives answers creates dependency. AI that asks questions creates capability. The design choice between those two outcomes is the most consequential decision in AI education right now.
What This Means for Your AI Practice
You do not need to build your own AI platform to apply this principle. The insight transfers directly to how anyone uses AI tools right now.
When you use AI to generate an answer, you get efficiency. When you use AI to stress-test your own thinking, challenge your assumptions, or argue against your first instinct, you get growth. Same technology. Completely different outcomes. The tool did not change. Your approach to it did.
This is the core of what we call SeedStacking at Harvest Kernel. Small, daily practices that compound into real AI fluency. Not adopting AI. Not avoiding AI. Building with AI in a way that makes your thinking sharper, not thinner.
The professors at Columbia, Georgia Tech, and UVA figured this out first because they had to. Their students were already using AI in the worst possible way. Instead of fighting it, they built something better. The question for the rest of us is whether we will wait for someone to build the right tool for us, or whether we will start designing our own AI practices with the same intentionality.
The answer you choose will determine which side of the cognitive divide you end up on.
Ready to go beyond reading and start building AI fluency?
Join a community of educators, professionals, and lifelong learners who are building real AI skills through daily practice. Free courses, guided exercises, and a conversation that goes deeper than any article.
