Consolidated Administration: The Key to Delivering a 60-Year Curriculum
Shift the status quo to achieve long-term success and viability for your university.
Quality assurance is “the maintenance of a desired level of quality in a service or product, especially by means of attention to every stage of the process of delivery or production.” In higher education, the challenge is to determine how we identify whether that desired level of quality is being maintained at every stage of the education process. Paraphrasing Brennan and Shah, David Dill adapts the notion of quality assurance to higher education.
The term quality assurance in higher education is increasingly used to denote the practices whereby academic standards, i.e., the level of academic achievement attained by higher education graduates, are maintained and improved. This definition of academic quality as equivalent to academic standards is consistent with the emerging focus in higher education policies on student learning outcomes — the specific levels of knowledge, skills, and abilities that students achieve as a consequence of their engagement in a particular education program.
Given that the only way we know if a student has learned a skill or knowledge set is through some kind of assessment, it’s fair to say quality assurance happens at the assessment level of the learning process. In other words, it’s through assessment of student learning that we discover the extent to which students have learned whatever it is they’re required to know to meet their learning objectives.
Quality assurance doesn’t happen at the level of instruction. After all, instructors have no idea if their students are actually learning, regardless of how good their lectures or other instructional materials might be, until they test students’ understanding, mastery or knowledge. Regardless of the process of assessment — objective exams, written assignments, projects, clinical demonstrations, etc. — if assessments are structured properly to reflect what a student has learned, it’s the assessments that constitute evidence of student learning.
In traditional classes, assessment and instruction are highly integrated parts of the classroom experience; students are examined only on the materials directly covered in a class. Given that faculty tailor classes to their particular interests and interpretations of a subject, the extent to which most exams reflect subject matter mastery (as opposed to mastery of the faculty member’s view of the subject) varies considerably by instructor. For example, if professors A and B both teach American literature and both cover Faulkner’s “Light in August,” what professor A considers a thorough understanding of the text might differ considerably from what professor B considers the same, and their assessments will directly reflect that bias.
Separating instruction from assessment can help overcome this kind of bias and a lot more.
A thought experiment might be helpful here. Imagine there are two universities, X and Y, and that each institution has an English department with 10 faculty members. University X faculty teach but do not assess, and University Y faculty assess but do not teach. The faculty members at the two schools are not in contact with one another, but students who learn at X must pass assessments at Y to graduate. The faculty at both institutions are aware of this requirement, and they want students to be successful in their studies, to learn what they need to learn and to graduate. One of the classes taught at X and assessed at Y is a class on Faulkner. How will faculty at University X structure lectures and learning materials on “Light in August” to ensure they help students learn the book, and how will faculty at Y structure assessments to determine if students understand “Light in August”?
Given this “veil of ignorance,” to adapt John Rawls’ term, a rational process would be for faculty at X to approach the development of their instructional materials as a team. That will help reduce bias, highlight common understandings and focus on what they together consider the most important parts of the book. Although they don’t know how faculty at Y will assess students, they know highly individualized interpretations of the text are more likely to be missed by faculty at Y, and that focusing on common understandings of the text is likely to better mesh with commonly developed assessments by faculty at Y.
Put differently, although there’s no guarantee that faculty at X will focus on the issues faculty at Y decide to assess, it’s far more likely to work if done collectively than if Professor B at X teaches only by himself and Professor A at Y assesses only by herself. The likelihood of objectivity and focus on the most important aspects of the material is far greater when groups of content experts develop the instructional materials and the assessment materials and when the two groups are not one and the same.
Leaving this thought experiment and applying the principles to the real world of instruction and assessment, there’s practical value in separating instruction from assessment for the purpose of objectivity and to focus on what’s most central to a given discipline. In addition, the separation of where and how students learn from where and how they’re assessed enables students to take advantage of the increasingly ubiquitous array of learning opportunities available across platforms and venues.
The process of assessment, however, must be more structured and formal than the learning process if its purpose is to assure that students have mastered a given discipline. In other words, robust assessments that test student mastery of commonly accepted disciplinary knowledge are essential to verify students have learned what they need to know to sufficiently understand a discipline and, presumably, to function effectively in related work. Assessments are critical as indicators of quality because, if structured well, students who pass the disciplinary assessments demonstrate subject matter mastery of that discipline.
For colleges and universities, divorcing teaching from assessment has both challenges and benefits. The primary challenges are cultural. Faculty are expected to include teaching and assessing as part of their classroom duties, and they like to control both processes. Students are used to this as well, and separating instruction from assessment requires considerable re-thinking of the educational process and experience. The opportunities, however, are significant because separating learning from assessment can expedite time to degree by allowing students to move through curricula at their own paces, taking assessments in areas where they have prior knowledge and engaging in formal instruction only when they need it. This, in turn, frees faculty to spend time with students who really need their help, rather than spending most of their time teaching multiple sections of the same classes. Institutions can scale this process by combining technology and faculty to provide instruction at larger scale, focusing faculty on helping students learn and developing robust assessments that lead to better student outcomes.
Whether an institution pursues this process — and whether its faculty allow it — depends on the institution. However, it’s difficult to see how higher education can be truly scaled unless it begins to seriously reconsider its historical models, and one of those is packaging instruction and assessment within the format of term-based classes.
– – – –
Oxford Dictionaries, accessed Aug. 13, 2014, at http://www.oxforddictionaries.com/us/definition/american_english/quality-assurance
Brennan, J. and Shah, T. (2000). Managing Quality in Higher Education: An International Perspective on Institutional Assessment and Change. Buckingham, UK: OECD, SRHE & Open University Press.
Dill, D. D. “Quality Assurance in Higher Education: Practices and Issues.” In P. Peterson, E. Baker and B. McGaw (eds.), International Encyclopedia of Education, Third Edition, pp. 377-383.
Author Perspective: Administrator
Fascinating thought experiment on the separation of instruction and assessment. It’s true that instructors, when left to develop their own curricula, often focus on an area of interest or comfort rather than broad themes or knowledge. However, I would ask: isn’t this the point of higher education? If we look back at the Socrates-Plato relationship, higher education was always about learning under the tutelage of one instructor, who would introduce the student to a distinct world view, not about gaining general knowledge. The best way to develop the critical thinking that is so valued in liberal arts education is by engaging closely with materials. That means shutting certain ideas out. That means having the same person responsible for both instruction and assessment.
I think concerns over assessment of learning outcomes are valid. In fact, I would add that part of the criticism is that assessment isn’t transparent enough. There’s a growing push for greater accountability in higher ed, and institutions that look at how to improve assessment will be able to make a stronger case for why their programming is better than others’.
The teaching and assessment functions should be completely separate from one another. Moreover, the design function should also be separate. Pedagogy is not course design, nor is it quality control. The faculty model, in its current form, is unsustainable.
I wonder if, by going down this path, we will fall into the trap of standardized testing that has hamstrung our K-12 system. Educators in the classroom can understand the context of the lesson and pair learning outcomes with individual student progress. By shipping a set of tests, or an assignment, or a portfolio to some third party, that personal connection is lost, as is the empathy critical to education.
For my undergraduate degree I participated in the External Examination program at Swarthmore College: it was a lot like the thought experiment in the article, in that we were assessed by faculty from other institutions who were independent of those who had taught us. Contrary to fears about losing the personal connection or being exposed to only a single world view, we participated in seminars in which we explored the topics in great depth. We all knew we were going to be responsible for understanding the subject area both broadly and deeply, and it was assumed that we’d have to read enough to understand the general trends in the field. It was a fantastic learning experience. So if done right, there is no reason that separating assessment from instruction has to produce a least-common-denominator dumbing down, as some commenters have hinted at. It can be very liberating.