If AI Can Pass Your Assignment, the Assignment Is Broken
Colleges across the country are doing something remarkable right now. They are responding to the most powerful learning technology in a generation by pulling out blue books and No. 2 pencils.
Proctored handwritten exams. Timed in-class essays. Lockdown browsers. The message to students is unmistakable: we do not trust you, and we do not know what else to do.
Brad Fuster, provost at San Francisco Bay University, calls this what it is. Retreating to analog assessment does not defend academic standards. It self-incriminates. It tells students that the education they are paying for was built for a world that no longer exists.
And he is not wrong.
The Assignment Was Already Broken
Here is the question nobody wants to ask: if a chatbot can earn a passing grade on your final exam, what exactly was that exam measuring?
Not critical thinking. Not original analysis. Not the ability to connect ideas across disciplines or apply concepts to unfamiliar problems. AI passes those assignments because they were never designed to test understanding in the first place. They tested recall. They tested formulaic structure. They tested a student’s ability to perform learning rather than demonstrate it.
This is what I call the Performance Trap. For decades, higher education has optimized for outputs that look like learning without verifying that learning actually happened. Write a five-paragraph essay. Answer the discussion prompt with at least 200 words. Cite three sources. These are instructions a machine can follow because they describe format, not thought.
Research from institutions redesigning their assessment models supports this. Schools that shifted to process-based evaluation, including draft submissions, revision history, and personalized topics, saw 40% fewer integrity issues compared to institutions relying solely on detection tools. The assignments themselves became the safeguard.
AI did not create this problem. It exposed it. And the exposure is a gift, if educators are willing to accept it.
Detection Is a Dead End
The instinct to catch cheaters is understandable. But the data on AI detection tells a story that should concern every educator leaning on those tools.
Faculty rate AI-specific plagiarism policies as only 28% effective. Traditional plagiarism policies fare slightly better at 49%. Meanwhile, 94% of students report using generative AI to help with assessed work, according to a 2026 survey by the Higher Education Policy Institute and Kortext. That is not a fringe behavior. That is the new baseline.
Princeton and MIT have both advised against relying solely on AI detection tools, citing reliability and bias concerns. A report from the Center for Democracy and Technology warns that over-reliance on detection erodes the trust between teachers and students, which is the very foundation learning depends on.
The arms race between AI generation and AI detection has a predictable winner. And it is not the detectors.
What Redesigned Assessment Actually Looks Like
The provost at San Francisco Bay University did not just critique the system. He rebuilt it. Every general education course now includes structured AI assignments tied to concrete learning outcomes. Students are not told to avoid AI. They are taught to use it as a tool for thinking, with assessments designed to prove they can still think without it.
Xinyao Yi, an assistant professor at the University of Virginia, took a similar approach. After noticing that students could submit AI-generated code that compiled and ran correctly but could not explain why it worked, Yi stopped grading output and started grading understanding. Students now explain why their solutions work, predict outcomes before running code, and evaluate trade-offs between multiple solutions, including AI-generated ones.
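To make that shift concrete, here is a hypothetical predict-and-explain prompt in the spirit of that approach. The snippet and questions are illustrative assumptions, not taken from Yi's actual course materials; they simply show what grading understanding rather than output can look like.

```python
# Hypothetical predict-and-explain exercise (illustrative only).
# Before running the code, students write down:
#   (1) what the two print statements output,
#   (2) why Python behaves that way, and
#   (3) how they would rewrite the function to avoid the surprise.

def append_item(item, items=[]):   # mutable default argument: the same list is reused across calls
    items.append(item)
    return items

print(append_item("a"))   # most students correctly predict ['a']
print(append_item("b"))   # many predict ['b']; the actual output is ['a', 'b']
```

An AI assistant can produce a clean version of this function in seconds. The point of the exercise is that the grade attaches to the student's prediction and explanation, not to the code itself.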
The University of Sydney went further. They separated all coursework into two lanes: secure assessments where students demonstrate knowledge independently (oral defenses, live questioning, hands-on demos), and open assessments where AI use is permitted and students document how they used it. The split left no ambiguity for students or faculty about when AI belongs in the work.
None of these approaches ban AI. All of them require students to prove they can think. That is the difference between policing tools and building competence.
The SeedStacking Lens: Small Shifts, Compounding Impact
You do not need to overhaul your entire curriculum overnight. The SeedStacking approach applies here: start with one assignment in one course. Replace a recall-based question with one that requires explanation. Swap a written submission for an oral defense. Add a reflection component where students document their thinking process, including where they used AI and where they chose not to.
These small changes stack. One redesigned assignment becomes a redesigned module. One module becomes a redesigned course. One course becomes a department-wide shift in how you define what it means to learn.
The institutions that figure this out first will produce graduates who can work alongside AI because they were trained to think with it, not hide from it. The institutions that retreat to blue books will produce graduates who learned to perform on command in controlled environments. Employers already know which graduates they want.
Three Questions Every Educator Should Ask This Week
If you teach, advise, or design curriculum, run your current assessments through this filter:
1. Could AI complete this assignment to a passing standard? If yes, the assignment is testing output, not understanding. Redesign it to require explanation, prediction, or evaluation.
2. Does this assessment reward process or product? If students can submit a polished final product without showing how they got there, you are grading performance, not learning.
3. Would a student who truly understands this material produce something AI cannot? If the answer is no, the assessment is not measuring what you think it is.
These three questions cost nothing to ask. But they will change how you think about every assignment you create from this point forward.
THE TAKEAWAY
AI did not break education. It revealed which parts of education were never working. The question is not how to stop students from using AI. It is whether your assignments were ever designed to prove that students actually learned. Every educator has the power to answer that question this semester.
Ready to go beyond reading and start building AI fluency?
The Harvest Kernel Learning Community is where educators, professionals, and lifelong learners build real AI skills together. Research-backed content five times a week. The SeedStacking methodology that turns daily practice into genuine fluency. Free courses. Live office hours. A community that actually responds.
The article gives you the what. The community gives you the how.
