No Institutional Change Without Incentive: Reinventing College Rankings

If institutions continue to be measured and ranked on their adherence to the traditional factors that helped institutions stand out a century ago, there is minimal incentive to invest in change.

Higher education is often accused of being a slow-moving industry. But with the drastic changes in today’s student demographics and expectations, not only must the priorities of institutions evolve but so too must the factors we use to define institutional success. After all, it’s hard to drive change if that change is not incentivized. In this interview, MJ Bishop shares her thoughts on how outmoded approaches to rankings impact institutional will to serve traditionally underserved populations and reflects on the factors that would characterize a truly student-centric ranking system.

The EvoLLLution (Evo): What are some of the central issues with the current approach to institutional rankings?

MJ Bishop (MB): Coming up with an institutional ranking system that supplies the kind of information that students need to make a decision about college is no small feat.

The way we’re seeing a lot of organizations create their rankings is by finding metrics that are easy to collect and compare across institutions. In fact, that is the exact approach the U.S. Department of Education is taking to develop its College Scorecard. But when we use this approach, we end up with measures that are difficult to attribute to the work institutions are doing in the service of their students.

Metrics like graduation rate might be easy to quantify, but it’s also easy to show that these are just indicators of the type of students an institution attracts. This approach fails to explore things that I think are important, like student learning outcomes, non-cognitive skills, career readiness upon graduation, and students’ feelings of fit and satisfaction with the institution. While factors like these are harder to measure and compare across institutions, these are the kinds of things that would be of great interest to those deciding which higher education institution they want to attend.

As long as ranking system metrics remain focused on broad quantitative institutional measures and aren’t aligned with more precise measures of student outcomes, there’s really no way for prospective students to determine what the return on their higher education investment will be.

Evo: How does the institution-centric focus on prestige and exclusivity impact colleges and universities that serve mainly non-traditional and underserved audiences?

MB: Metrics like reputation, student social activity and all those things that are currently prioritized by ranking systems conflict directly with the access mission that many public institutions have, like ours here across the University System of Maryland.

Given that these students’ economic and social prosperity depends so much on completing a postsecondary education, we have a moral obligation to be more proactive in improving access and increasing opportunities to attend college. That’s not to say that higher education is right for everybody, but those who do want to pursue a postsecondary education should have the opportunity to do so. At the University System of Maryland, we’re working to increase access to higher education by improving pathways to college. We’re also trying to keep costs down and learning outcomes high, which gets back to the old iron triangle problem. Traditionally it has been difficult to increase access, affordability and quality simultaneously, but we need to find ways to do that by leveraging emerging instructional technologies. This is a large part of what the William E. Kirwan Center for Academic Innovation at the University System of Maryland is doing.

The rankings, and the ways we look at what institutions are doing more generally, ought to be focused on their efforts to solve the iron triangle problem and increase student success and access.

Evo: What do you think the long-term impact of this type of homogenous ranking system will be on the rich and diverse American postsecondary environment?

MB: I worry about the long-term impact the rankings could have. The public at large, our state legislatures, higher education funders and students, both prospective and current, really do need a decision-making tool. I’m not arguing against that. But given no alternative at the moment, they’re tending to take those institution-centric rankings to heart, even though they are an imprecise measure of institutional quality.

Unfortunately, those rankings negatively influence student enrollment decisions over time, which will continue to lead to an increasing budgetary disparity between the more prestigious, elite institutions and those whose mission is to support the success of non-traditional and underserved populations. The rich will get richer, and the poor will get poorer. If this trend continues, institutions with access missions could eventually disappear, leaving even fewer opportunities for those populations to pursue college.

We need to develop a ranking system that speaks to the unique needs of different groups of students rather than relying on the more traditional factors that are typically used.

Evo: What factors do you think should be highlighted in a student-centric ranking system?

MB: A student-centric ranking system should focus on the factors that truly make a difference for students: persistence, completion, preparation for life after college, things of that nature.

The metrics used to judge our higher education institutions should answer the question, “What specifically is the institution doing to support the success of its students, and how is it going about doing that?”

We should be measuring the extent to which our higher education institutions are creating an environment through which students gain the necessary skills for lifelong learning. We should also look at things like information and quantitative literacy, problem solving, creative thinking and lifelong civic engagement. We might also measure things like the availability of services and tools for students who require additional help to be prepared for college-level coursework, and the extent to which all students feel like they belong and are included in the institution. Student-centric rankings could also include student and alumni assessments of the relevance of the curriculum and the flexibility of the offerings, as well as whether the institution engages in ongoing assessment of learning and is willing and able to adjust instruction as needed based on those measures. Student access to tools and mentoring toward career and life after college is also critical.

We really do need to be measuring student outcomes in terms of learning as well as the development of other non-cognitive skills, and institutions should be ranked on that.

Evo: How would such a student-centric ranking system impact the higher education environment?

MB: As the saying goes, “What gets measured, gets done.” If we hold higher education institutions accountable for the activities and programs that we already know from research lead to student success, institutions would be incentivized to incorporate them into their practice.

This could be a great shot in the arm for efforts to transform higher education, which is a big part of the focus of the Kirwan Center for Academic Innovation. If the metrics are better aligned with student outcomes, we may see increases in things like retention and graduation rates, but more importantly we’ll have a better sense of the extent to which our institutions are actually responsible for those outcomes, rather than those outcomes being an indirect result of other factors.
