Assessment is Not a Four-Letter Word

Assessment models today fail to acknowledge the key components of a learner’s experience and outcomes. Incorporating those components into the learning process will help both students and employers see a learner’s full potential.

We regularly encounter curriculum that is engaging, creative and effective, followed by…a multiple-choice quiz. What is it about assessment that seems to be off-limits to reimagination? One possibility is that assessment is seen as the vegetable, not the entree, and certainly not the dessert, in the meal of education. It is supposed to be good for you, a required part of a nutritious meal but not necessarily the feature that anyone looks forward to. But just as vegetables have been rehabilitated (full disclosure: we are huge fans of broccoli), it is time to rebrand assessment.

The problem with quizzes and similar approaches is not only that they are unimaginative but also that, especially in the hands of the assessment-phobic, they tend to emphasize the least useful aspects of the learning experience, e.g., the identification of facts rather than the application of concepts. Multiple-choice tests can be effective, but by their very nature they shortchange students of the difficult and essential work of making sense of what they are learning for themselves. And most seriously, they treat assessment as separate from learning.

The risk of separating assessment from learning 

Decoupling assessment from learning treats it as non-learning. It implies a linear and unintegrated process: first we teach you, then you show how much you have actually learned. This keeps agency in the hands of the instructor, but it also guarantees a lack of alignment between what is taught, what is learned, and what is demonstrated.  For this, among other reasons, backward design represents an essential shift. It begins, rather than ends, with what students are expected to know and be able to do by the end of the learning experience. The curriculum and assessments are then aligned with those expectations.  But wait, some will say, “Doesn’t that process encourage the dreaded ‘teaching to the test’?”  

“Teaching to the test” has become shorthand for everything that is considered lacking in contemporary education, especially at the K-12 level: the overemphasis on standardized testing and the consequent lack of attention to, or outright banishment of, subjects that are not included in those tests, such as art and music, even social studies and science. But the real problem with teaching to the test is that so many tests represent the lowest rung on the learning ladder. However, if the “test” requires learners not to remember and regurgitate what they have been taught but rather to integrate and apply what they have learned, especially in the service of realistic problem-solving, then preparing students for such a test is a good thing, not a bad one. And of course, assessment does not have to take the form of a test.

CBE, done right, can change the assessment paradigm 

To be truly meaningful, even the best assessment needs to be integrated within the learning process. Competency-based models that require the demonstration of competencies can make this seamless. For example, the project-based model we developed at SNHU’s College for America provided the learner with multiple opportunities to try, get feedback, and try again; each instance of assessment continued, rather than concluded, the learning. This is in stark contrast to the end-of-semester exam whose only use, from the student’s perspective, is to receive a grade. But if feedback is genuinely useful and the learner has reason to apply it, then assessment not only continues but also fosters learning. 

While much has been made of the distinction between assessment that is formative (“for learning”) and assessment that is summative (“of learning”), in the College for America model the clear-cut distinction collapses intentionally. The assessment is formative up until the moment that all the competencies have been demonstrated, at which point it becomes summative. This model of assessment also embodies the growth mindset developed by Carol Dweck. Although her concept has often been misunderstood, when applied appropriately it not only helps students learn the material but also helps them develop metacognitive skills (the capacity to think critically about one’s own learning). In other words, it shifts the focus from “how did I do?” (translation: “how good am I?”) to “what do I need to do?” (translation: “what strategies or approaches might I try next?”). The assessment offers useful and usable information, not a judgment of one’s personal worth.

The ideal competency-based model has another secret weapon: no grades. In the face of grades, inflated or not, it is nearly impossible for students to focus on substantive feedback. Sadly, the well-intended focus on student success has exacerbated the fear of failure, treating it as something to be dreaded and avoided at all costs. But without mistakes, there is no learning. Error is an essential feature of the learning process, not a bug in the system.

Assessment as it is commonly practiced has intensified this problem. Divorced from learning, it simply carries the message “you’re good!” or “you’re bad!” (or, even less usefully, “you’re better or worse than the people ranked above or below you”). But when assessment is intentionally integrated within the learning process, it ceases to be a separate judgment and instead becomes another, essential source of information for both learners and instructors. Sadly, even formative assessment is now commonly graded, turning it into yet another test rather than a vital opportunity within the learning context for both students and teachers to make changes.

Aren’t tests necessary?

So, if instructors do not give tests or quizzes, how do they know what, or whether, the student has actually learned? The premise underlying this question is that typical multiple-choice tests and quizzes do show in a meaningful way what, or whether, students have learned. The better question is why learning should be segregated from feedback. The inevitable final exam is a relic of an era that saw the professor as the sole imparter of information and the student as its passive recipient. But we now know that imparting information is the least valuable function of teaching. We also know that student engagement is essential to student learning that lasts long after the final grade has been received. And we know that student agency is a precondition for engagement. Once students are rightly seen as partners in their learning, it follows that they must be partners in the assessment of their learning. Unfortunately, the failure to see students as partners in their learning has intensified with the panicked rush to remote education in the face of the pandemic.

It doesn’t need to be this way. And scale does not demand it. Technology can be harnessed to create and deliver engaging, contextualized, and personalized assessment that promotes, rather than concludes, learning.

Project-based learning and assessment

Much of our work, both at College for America and for Volta Learning Group clients, has involved the design and development of realistic, competency-based projects that serve as integrated learning and assessment opportunities. This type of integrated approach offers numerous advantages.  It creates alignment between learning and assessment; it is active rather than passive; it is engaging, fostering learners’ capacity for self-direction; and it develops the creative problem-solving and ability to apply knowledge and skills that students need to succeed in the workforce. 

Not incidentally, surveys of business executives and hiring managers consistently tell us that while a vast majority regard applicants’ ability to apply knowledge and skills to real-world settings as “very important,” only a minority see recent college graduates as “well prepared” to do so. This disconnect between what employers want and what colleges do permeates higher education. Even institutions that value applied and experiential learning opportunities often default to knowledge-based tests rather than performance assessment. Sequestering assessment from learning is not the only reason that so many college graduates are unprepared for the world of work, but it is a telling symptom of the problem. Fortunately, there are better alternatives, and we know that they work.

Five Principles for Re-imagining Assessment

We recommend grounding assessment in five core principles.  These are especially critical for adults but apply to all learners:

Learning and assessment are part of the same process

People learn through trying, making mistakes, getting feedback (ideally immediate and targeted), then trying again. Think of the most addictive video or computer game; it is the perfect embodiment of this process.

Good assessment develops critical metacognitive and learning-to-learn skills

Whether we are focused on “assessment of learning” or “assessment for learning” (a distinction without much difference, from our perspective), assessment represents learning about learning. Ideally, it provides students with actionable information that they can use to sharpen both their knowledge and skills.

Good assessment requires and develops student agency as well as self-direction  

In contrast to the “pour-in-and-spit-back” model of testing, assessment that recognizes student agency provides opportunities for students to integrate and apply what they have learned. This gives students the responsibility for making learning their own.

Realistic, problem-centered assessment promotes transferable skills

The great photographer Dorothea Lange once said: “The camera is an instrument that teaches people how to see without a camera.” Similarly, the best academic learning teaches students how to learn without the classroom, i.e., to develop transferable skills and competencies that are useful far beyond academia. Realistic projects combining learning and assessment enable learners not only to develop but also to demonstrate their competencies.

Useful feedback is prompt, targeted, and actionable  

Much has been written about the characteristics of good feedback in the context of grading. Less attention has been devoted to forms of feedback that are not provided by faculty — for example, in video games or in well-designed interactive learning environments. In both cases, it is the consequence of the learner’s action that provides the feedback and the learning, not a judgment from an instructor.  While prompt, targeted, and actionable human feedback is, of course, invaluable, well-designed educational technology can provide an exemplar of feedback that teaches — which is to say, good assessment. 
