
The One AI Skill That Could Save a Generation of Readers

Here’s a number that should stop every parent, educator, and policymaker in their tracks: forty percent of America’s fourth graders cannot read at a basic level.[1] Not below proficient. Below basic. That means two out of every five nine-year-olds in this country struggle to identify the main idea in a simple passage. And the numbers are getting worse, not better.

Now here’s the part that should genuinely alarm you. Into this crisis, a massive wave of AI-powered tools is flooding classrooms at an unprecedented pace — and almost none of them have been proven to actually help children learn.

A cognitive neuroscientist at Boston University just laid this out in a way that demands attention. But the takeaway isn’t about reading scores or AI technology. It’s about a skill that every single person — educator, parent, professional — needs right now and doesn’t realize they’re missing.

The Reading Crisis Is Worse Than You Think


The 2024 National Assessment of Educational Progress — often called the Nation’s Report Card — revealed that only 31 percent of fourth graders scored at or above the proficient level in reading.[1] That’s the lowest percentage in over two decades. Average reading scores fell five points from pre-pandemic levels and continued declining even from the already dismal 2022 results.[2]

69% of America’s fourth graders are reading below the proficient level — and the lowest-performing students posted their worst scores in more than 30 years.[1]

Now, you might be thinking: Isn’t this just pandemic learning loss? Won’t it recover? That’s what most people assumed. Except the data tells a different story. Reading scores were already declining before COVID-19 hit. The pandemic accelerated a crisis that was already underway — one driven not by a single event, but by a systemic failure to apply what decades of research have proven about how children actually learn to read.[3]

In Massachusetts — a state widely considered among the best in education — nearly 60 percent of third- through eighth-graders are not proficient readers. Among Black and Latino students, the numbers are even more devastating.[3] This isn’t a fringe problem. It’s a foundational failure that affects a child’s entire trajectory: their confidence, their economic future, and their ability to participate in a world increasingly shaped by complex information.

The question that comes up here is obvious: If we know how to teach reading, why aren’t we doing it? That question — and its answer — is exactly where AI enters the picture.

The Pattern That Should Terrify You

Dr. Ola Ozernov-Palchik, a cognitive neuroscientist at Boston University’s Wheelock College of Education, studies how the brain learns to read and how AI intersects with that process.[4] Her argument is straightforward and unsettling: the same dynamic that created America’s reading crisis is about to repeat itself with artificial intelligence.

Here’s the pattern. For decades, America’s “reading wars” weren’t actually a debate between two equal approaches. They were a clash between scientific evidence and deeply held assumptions. Approaches that sounded right to decision-makers were adopted at scale without being subjected to rigorous testing. The result was a generation of teachers trained in methods that neuroscience has since shown to be ineffective for many learners.[3]

That pattern is now repeating — but faster and with higher stakes.

42 educational technology tools are used by the average American student each year — and fewer than 10% have undergone rigorous research evaluation.[3]

Educational technology is now a multi-hundred-billion-dollar industry expanding rapidly as AI becomes embedded in the tools children use every day.[3] During the school day, grade school students spend an average of 98 minutes on school-issued digital devices. The tools are multiplying, the investment is enormous — and the evidence trail is almost nonexistent.


The Evidence Gap Nobody’s Talking About

This is where it gets concrete. Instructure’s 2026 Evidence Report, released days ago at SXSW EDU, analyzed 150 of the most widely used classroom technologies against federally recognized research standards. The findings are stark: among purpose-built edtech tools, only 2 percent meet the strongest evidence standard under the Every Student Succeeds Act. Another 5 percent meet Level II, and 14 percent meet Level III.[5]

Put differently: 79 percent of these purpose-built tools meet none of ESSA’s three evidence tiers. The overwhelming majority of digital tools students use every day have never been independently verified to improve learning outcomes. Schools are spending billions on tools that might work, might not work, and might actually be making things worse — and there’s no systematic way to know which is which.

The widespread adoption of ideas that sound compelling but are rarely tested — that’s what helped produce the literacy crisis in the first place. AI risks amplifying that exact dynamic.

— Based on analysis by Dr. Ola Ozernov-Palchik, Boston University

Dr. Ozernov-Palchik’s team at BU launched the Evidence-Based AI in Learning Industry Collaborative specifically to address this gap. They invited companies to compete for an independent research study — with a twist. The research would be fully independent, transparent, and published regardless of the outcome. Thirty-two companies applied.[3] That number itself tells you something important: even the industry recognizes that market forces alone can’t substitute for evidence.

The OECD’s 2026 Digital Education Outlook echoes this concern from an international perspective. While their analysis found that AI can support learning when guided by clear teaching principles, they also found that without pedagogical guidance, outsourcing tasks to AI simply enhances performance with no real learning gains.[6]

The Skill That Changes Everything

Most conversations about AI literacy focus on prompt engineering, understanding how large language models work, or learning which tools do what. Those skills matter. But they’re not the most urgent one.

The most urgent AI literacy skill in 2026 is evidence literacy — the ability and willingness to ask one question before adopting any AI tool: “Where’s the proof this works?”

I call this the Evidence-First Reflex. And it applies to everyone:

If you’re an educator: Before your district adopts the next AI reading tutor, ask the vendor for independently published research — not testimonials, not pilot data from their own team, not projected outcomes. Published, peer-reviewed, or ESSA-aligned research conducted by someone other than the company selling the tool.

If you’re a parent: When your child’s school announces a new AI-powered learning platform, ask a single question at the next parent-teacher conference: “Has this tool been independently tested with real students, and what were the results?” You don’t need a PhD. You need that one question.

If you’re a professional: Apply the same standard to the AI tools your organization is adopting for training and development. The evidence gap isn’t limited to K-12 classrooms — it exists everywhere AI is being sold as a solution.

The Seed to Plant Today

The next time anyone recommends an AI tool for learning — in your classroom, your child’s school, or your workplace — ask this: “Has an independent researcher tested this with real learners, and where can I read the results?” That single question is the most powerful AI literacy move you can make this year.

Why This Matters More Than Prompt Engineering

Here’s what most AI education conversations miss. Technology is not neutral. Every AI tool carries embedded assumptions about how learning works — assumptions that may or may not be supported by evidence. When the reading wars were raging, the damage wasn’t from any single curriculum. It was from an entire system that treated beliefs about learning as facts, and then scaled those beliefs into millions of classrooms without testing them.

AI tools have the potential to repeat this mistake at a speed and scale that makes the reading wars look like a minor disagreement.

The optimistic case for AI in education is real and worth fighting for. AI could fundamentally transform how we collect and analyze learning data, deliver personalized instruction at scale, and free teachers from administrative burdens that consume their time. Dr. Ozernov-Palchik herself holds both the optimistic and the pessimistic views — she’s not anti-AI. She’s pro-evidence.[3]

The pessimistic case is equally real: classrooms filled with sophisticated systems that appear promising but have never been shown to improve learning. Tools that look impressive in a demo but crumble under the weight of rigorous evaluation. And the students who pay the price are the ones who were already struggling — the same students the technology was supposed to help.

This is exactly why the SeedStacking™ approach starts where it does: with a small, specific, evidence-grounded first step. Not with the flashiest tool. Not with the biggest promise. With the simplest question that creates the biggest impact.

Your Evidence-First Action Plan

This week: Pick one AI tool currently being used in your classroom, your child’s school, or your workplace. Search for independent research on it. Not the company’s website — look for published studies, ESSA evidence badges, or third-party evaluations. If you can’t find any, that’s your answer.

This month: Share the Evidence-First Reflex with one colleague or fellow parent. The more people who start asking “where’s the proof?” the faster the market will respond with tools that actually have it.

This quarter: Advocate for evidence requirements in your school or organization’s technology procurement process. Instructure’s 2026 Evidence Report framework provides a practical starting point for evaluating what “evidence” actually means.[5]

This year: Follow the BU Evidence-Based AI in Learning Collaborative’s research as it publishes results. Organizations like this are building the infrastructure that the entire AI-in-education field desperately needs.[3]

Want to go deeper on AI literacy? Explore the SeedStacking™ methodology →

Ready to go beyond reading and start building AI fluency?

Join the free Harvest Kernel community for practical guidance, fresh ideas, and tools that help you make AI useful in real life.

Join the Free Community

Sources

  1. National Center for Education Statistics. “2024 NAEP Reading Assessment.” nationsreportcard.gov
  2. K-12 Dive. “Reading, math continue slide in 2024 NAEP results.” Jan. 2025. k12dive.com
  3. Ozernov-Palchik, O. “Will AI solve America’s reading crisis, or make it worse?” The Boston Globe, Mar. 13, 2026. bostonglobe.com
  4. Boston University Wheelock College. “Ola Ozernov-Palchik, Ph.D.” bu.edu
  5. Instructure & InnovateEDU. “2026 Evidence Report.” Mar. 10, 2026. prnewswire.com
  6. OECD. “Digital Education Outlook 2026.” Jan. 2026. oecd.org

Dean Le Blanc

Founder, Harvest Kernel

AI literacy educator and creator of the SeedStacking™ methodology. Dean teaches educators, professionals, and lifelong learners how to build genuine AI fluency through small daily wins that compound into real capability. Join the Learning Community →
