Going Beyond the Quick and Dirty: Crafting a Truly Representative Ratings System
Thoughtful evaluations of educational institutions are few and far between. The reasons are clear, first and foremost that one type of evaluation does not fit all. Small private liberal arts colleges differ significantly from large urban state universities, and even within a category like urban institutions, focus and mission often vary widely. Even among groups of institutions that share characteristics, differences between departments such as English and Psychology are often stark. The faculty of a college or school is not necessarily homogeneous, and it is ultimately the faculty that defines an institution.
If one is willing to sort through a number of criteria, across multiple dimensions and within different categories of schools, evaluations and even rankings are indeed possible. Many urban schools, for example, share a commitment to community service and economic development, and many focus on experiential learning or community building. Those schools have to be compared against one another, not against an arts school that trains only painters.
From here, though, it becomes even more complicated. Much of a student's college experience is shaped by the student affairs side of an institution. Are students in dorms well matched? Are learning communities in residence halls well coordinated and complementary to the classroom experience? Urban-serving universities with many commuter students may have significantly different students, faculty and outcomes than residential urban universities.
Well-financed students do better than students who hold two jobs. Students with support from their families do better. Students who come from backgrounds with less-than-stellar preparation do worse regardless of their IQ and abilities. Yet all of these students are thrown into the "average," and this makes comparisons of schools wildly difficult. It becomes especially relevant when we consider post-graduation success: while we know that educational attainment is correlated with overall lifetime earnings, we know relatively little about how a specific education affects first-job salaries, second-job salaries or career paths, and these are relevant outcomes to consider.
Reams of data show that residential students who are well situated do better than commuters, even at the same school in the same major. Students who take part in learning communities do better. And students with good, vigilant advising do better. Add to the mix the variability among advisors and faculty, and comparisons again become especially complicated. Then there is the campus climate with respect to diversity: Is the campus a place where all people and ideas are truly welcomed and embraced, or is there a well-hidden undercurrent of disdain for and dismissiveness of people who are not "like me"?
In the end, there are clearly some schools that are better than others. There are schools that have national reputations in science, the humanities, or in writing programs. But even these well-regarded schools gain national reputations more often from their graduate programs than their undergraduate programs.
A key variable prospective students must weigh when looking at institutions is the school's actual, adopted commitment to excellent outcomes.
So, are ratings and evaluations doomed? Yes, if you are looking for a quick and dirty heuristic ("this school is number 7 in the nation"). The truth is that ratings and evaluations can be done, but they must be nuanced, multidimensional and comparative within peer groups, not a single quick and dirty number across all national universities.
We have to collect data—lots of it—and sift through the variables to make good predictions about both outcomes and quality. We have the ability to do it. We just need the will and the honesty to be evaluated.
Author Perspective: Administrator