Completion Is Not Competence: Why Accountability Requires a Definition of Learning
The Institutional Paradox
Higher education measures what it can count with extraordinary accuracy. Institutions track enrollment, credit hours, retention rates, progression, completion and time to degree down to the decimal. These metrics are reportable, auditable and often tied directly to funding and accreditation. Few disputes arise over what qualifies as a completed course or a conferred degree.
Most institutions can define admission criteria in detail. They can specify graduation requirements with legal clarity and document compliance with accreditation standards through extensive reporting. Yet the answer to a more foundational question is often far less concrete: what demonstrable capabilities does a graduate now possess that were not present at entry? Student learning is the central purpose those metrics are meant to serve, but it rarely receives the same operational definition.
This is not a matter of commitment. Colleges and universities are deeply invested in academic quality and student success. The issue is structural. Institutional systems are optimized to track progression through programs. They are far less consistently designed to define and verify what learning must consist of in practice.
The result is a persistent asymmetry. Participation is quantified. Completion is recorded. Compliance is documented. Learning, by contrast, is frequently inferred from grades, credit accumulation or aggregated reports whose criteria vary across programs and disciplines. We count what moves through the system. We struggle to specify what changes because of it.
As scrutiny over the value of higher education increases, this asymmetry carries significant consequences. If learning is the core promise of the enterprise, its meaning cannot remain implicit. Without a shared operational anchor, accountability risks becoming procedural rather than substantive. The question is not whether learning occurs. It is whether institutions can clearly state what must be present when it does.
The Proxy Trap: How Process Replaced Performance
The absence of a shared definition of learning did not arise from indifference. It emerged gradually as higher education expanded and accountability systems had to scale. Enrollment, credit hours, completion rates and aggregated grades were measurable across programs and institutions. They traveled easily through reports and accreditation documents. They provided comparability and administrative order.
Learning, by contrast, is contextual and discipline specific. Rather than establish common performance criteria across contexts, institutions relied increasingly on proxies that were easier to aggregate. Grades stood in for capability. Credit accumulation stood in for progress. Completion stood in for mastery.
Assessment systems developed within this logic. Programs drafted learning statements, collected artifacts and produced reports. Committees reviewed findings. Over time, documenting that assessment occurred became a substitute for defining what demonstrated competency must look like. No policy declared proxies sufficient. No institution abandoned its commitment to learning. Administrative logic simply favored what could be standardized and reported efficiently. Process became visible. Capability became uneven.
For years, this ambiguity remained largely invisible. As long as students completed assignments and advanced toward degrees, the system appeared to function smoothly. The difference between finishing work and demonstrating durable capability rarely drew sustained attention.
That environment has changed. Generative AI did not create the definitional problem. It exposed it. When polished work can be produced with minimal effort, reliance on performance as indirect evidence becomes unstable. If institutions cannot clearly articulate what competencies a graduate must demonstrably possess, the line between assisted production and independent mastery blurs.
At the same time, external stakeholders are asking harder questions about value. Employers seek graduates who can perform. Policymakers tie funding to outcomes. Students and families weigh cost against return. In this climate, a credential signals more than time served. It signals capability. When that signal weakens, confidence follows. What once functioned as a manageable ambiguity now carries reputational and strategic risk. When learning remains undefined, accountability defaults to what is countable rather than what is demonstrable. What cannot be clearly defined cannot be convincingly defended.
The Cost of Definitional Ambiguity
When learning lacks a stable institutional definition, the consequences are uneven but systemic. For students, the signal of progress becomes unreliable. Grades, credit accumulation and positive feedback suggest advancement, yet those indicators do not always correspond to durable capability. A student may graduate with strong marks and remain uncertain about what competencies they have secured. Without shared criteria for demonstrated mastery, completion is easily mistaken for competence.
For faculty, ambiguity increases evaluative strain. Instructors design assignments, apply rubrics and provide feedback in good faith. However, without calibrated expectations across courses and programs, standards remain locally interpreted. The criteria for proficiency in one section may differ significantly from those in another. Assessment becomes episodic rather than cumulative. The issue is not effort but fragmentation.
At the institutional level, the effects compound. Accreditation requires documentation of assessment activity, and institutions respond with reports, action plans and review cycles. But when the meaning of learning varies across units, reporting outpaces alignment. Process is verified. Capability is unevenly demonstrated.
Over time, a credibility gap emerges. Degrees function as public signals. Employers, policymakers and students interpret them as indicators of preparedness. When institutions cannot state clearly what competencies a credential represents, that signal weakens. The issue is not that learning fails to occur. It is that institutions cannot consistently show what must be present when it has occurred. A system that cannot articulate what it develops cannot convincingly defend what it awards.
A Usable Definition and What It Requires
If learning is to anchor accountability, it must be defined in terms institutions can apply consistently across programs and contexts. A workable definition is direct: Learning is a durable change in what a student can demonstrably do as a result of instruction. This definition does not attempt to capture internal experience, motivation or intellectual growth. It focuses on capability. What can the learner now perform, apply, construct, analyze or produce that was not reliably present before? The emphasis shifts from exposure and participation to sustained, transferable performance.
Framing learning in this way does not diminish its complexity. It clarifies its boundary. Students may feel more confident or engaged, and those experiences matter, but institutional claims about learning must rest on demonstrated competency, not inferred internal change. Observable does not mean simplistic. It means verifiable. Complex reasoning and judgment remain central. They are made visible through disciplined performance in context. The shift is not from depth to surface. It is from assumption to evidence.
Once learning is defined as durable, demonstrable capability, institutional alignment becomes possible. Alignment begins with clarity about what a degree implies, expressed in performance terms. What must a graduate reliably be able to do? Analyze data using defensible methods. Construct arguments supported by credible evidence. Design and evaluate solutions under defined constraints. Conduct professional assessments to established standards. These expectations must be explicit, public and stable across sections of the same program.
Clarity alone is insufficient without calibration. When instructors apply rubrics independently, expectations drift. Moderated review of student work across instructors and program levels establishes shared standards for competent performance. The goal is not uniform pedagogy. It is coherence in judgment.
Evidence must also rest on artifacts that demonstrate capability. Aggregated grades and survey data describe experience. They do not show performance. Capstones, clinical evaluations, portfolios, research presentations and applied demonstrations can serve as durable records of competency when evaluated against shared criteria. Claims about learning become visible rather than assumed.
Reporting structures must follow that same logic. Instead of documenting that assessment cycles occurred, institutions can document which competencies they verified, where gaps emerged and how instruction responded. Accountability shifts from procedural compliance to demonstrated capability.
Enrollment metrics and completion rates still matter, but they must rest on a clearly articulated definition of learning that links curriculum, instruction, evaluation and credentials. Without that anchor, coherence remains aspirational. With it, institutional claims regain substance.
From Compliance to Credibility
Higher education does not lack commitment to student learning. It lacks a stable institutional definition capable of anchoring accountability across programs and contexts. When learning remains implicit, accountability defaults to what is easiest to count. Completion and credit accumulation stand in for capability. When learning is defined in demonstrable terms, institutional systems can align around evidence rather than assumption.
The choice facing institutions is not whether to measure more. It is whether to define more clearly. A system built on activity can always generate additional reports. A system built on capability must generate proof. Degrees are not transcripts of time spent. They are claims about what a graduate can do. If those claims are to withstand scrutiny, they must rest on shared criteria and visible evidence. Compliance maintains operations. Definition sustains credibility.