When the Signal Breaks, the System Must Change
Many institutions of higher education are reluctant to embrace AI use, but its infiltration into higher ed opens an opportunity to redesign assessments so they measure what institutions actually need them to measure.
Institutions and faculty around the world are trying to determine how AI affects the reliability of student work as evidence of learning. Faculty are asking, “How do I know this work reflects what the student actually understands?”
Some institutions have not moved past the question of AI use itself and are focusing on detection tools and policies governing how students, faculty and staff use it. I have seen policies ranging from no AI use allowed, to allowing students to use AI as long as they cite it, to assignments where AI is built in but students must document their thought processes and prompts while engaging with it.
However, the issue isn’t AI use. Students are going to use AI. The issue is that the work itself, the assignment, no longer functions as reliable evidence of learning. When faculty look at a paper or assignment and cannot tell whether it was completed by a student or by AI, the problem is the assignment, not the AI.
This is the weakness AI is exposing in institutions today. When faculty can no longer trust the work as evidence, the structure built around that signal starts to fail. Institutions can make adjustments at the margins, but as long as the work itself stays the same, the structure remains the problem, not the work. The shift is difficult to see because everything around the structural issue continues to operate. Grades are still assigned. Courses still run. Students still submit work. On the surface, everything appears stable, but underneath, the reliability of the signal has weakened substantially.
If an assignment can no longer demonstrate what a student understands, then the decisions built on that assignment begin to lose their foundation. Grades become less meaningful. Progress becomes harder to interpret. Feedback becomes less connected to actual learning. The system continues to function, but confidence in what it produces begins to erode. The question today is no longer how to regulate AI in assessment. It’s whether our current forms of assessment still serve their original purpose.
What This Means for Institutions
When the signal breaks and the structure shakes, institutions face a choice. An institution that takes the path of resistance, shoring up the existing structure, might institute more policies. It might require every assignment to be run through an AI detector. It might impose severe penalties on students caught using AI. It can continue to layer on controls to verify authorship, but these only stabilize the appearance of the system without restoring its function.
This response is understandable and expected. Institutions are designed to be bedrocks of stability, and when something feels at risk, the natural instinct is to reinforce it. But reinforcing a structure that no longer produces reliable outcomes does not solve the problem; it simply delays its recognition. Each additional layer of control adds complexity without restoring clarity, creating more work for faculty, more confusion for students and less confidence in the results. Or the institution can take the other path and begin to redesign the work itself.
Such a redesign does not mean faculty abandon assessment, but it does mean they abandon assessment as they have traditionally practiced it. It means rethinking with the outcome in mind, and the outcome should be evidence of learning. That is why we give assignments: to see whether the student has learned. If faculty look at an assignment and cannot tell whether a student has learned, we need to find a new way to measure that learning.
Some institutions are already moving in this direction:
- Shifting towards work that requires real-time interaction, dialogue, conversations, questions and explanations
- Emphasizing the process of learning instead of the final, completed product
- Designing assignments where the value lies not in word count, format or length but in interpretation, judgment and context
These shifts are structural, not merely reactive. They are about restoring the original purpose of assessment: measuring learning. The challenge is not to control or detect AI use. It is to ensure the structures around students and their work produce meaningful, useful evidence.
For institutional leaders, this approach raises a different kind of decision. The question is not whether AI should be allowed or restricted but whether the work assigned aligns with what the institution wants to measure. It requires looking past individual courses and considering how assessment functions across programs, departments and the entire institution. Where is evidence of learning being generated? Where is it assumed? Where is it no longer reliable? These are structural questions, not technical ones.
If this shift goes unaddressed, the consequences will not appear all at once. Institutional leaders will see faculty spending more time questioning student work. Students will become less certain about what is expected of them. Grades will continue to be assigned, but their meaning will fade. Over time, these effects create a gap between what institutions say they are measuring and what they can confidently claim to be measuring. Credentials will still be awarded, but the evidence behind them will be harder to interpret.
This is not a failure of faculty effort or student work. It is a misalignment between the work students are doing and the signals institutions rely on to make decisions. Recognizing that misalignment is the first step. Redesigning the work is the next. This is now a design problem, not a policy problem.