Rankings Oversimplify the Complex World of Higher Education
Higher education rankings are growing in influence, but what impact are they having on the postsecondary industry itself? After all, if increasing numbers of students and external stakeholders are paying attention to these rankings, it’s important to understand their influence on institutional management. Even more so, it’s critical to understand the factors that play into the development of the rankings themselves. In this interview, Ellen Hazelkorn sheds some light on the misconceptions that plague the rankings space and shares her thoughts on what needs to change in order for more representative information to become available to prospective students and industry observers.
The EvoLLLution (Evo): What are the benefits of higher education ranking systems like the QS Global Academic Survey, the Times Higher Education rankings or the US News and World Report rankings?
Ellen Hazelkorn (EH): In terms of benefits, you could say that rankings provide a shorthand list of institutions that might provide prospective students some information on what they’re looking for from a university. Increasingly, many of these ranking systems allow for some variability in terms of the type of institution, so whether you’re looking for a medical school or a liberal arts college you have an opportunity to get an answer to the question, “Which one is the best?”
It’s a shorthand way of identifying a handful of institutions that prospective students might have a look at and research in more depth.
Evo: On the flip side, what are some of the negative impacts these systems have on postsecondary institutions?
EH: The biggest negative impact is the assumption that the top-ranked institutions are the best. However, because of the inadequacy of the indicators, these rankings can’t provide the more granular or focused information that today’s students want and need. Defining “the best institution” depends entirely on who’s asking and why they’re asking. Each college is different and each student’s requirements and interests are different, yet these rankings apply a single, common set of criteria.
When you look at institutions, there are differences in terms of their missions, their roles in the community, the type of education or curriculum focus they offer, and the availability of internship programs. These are all factors that set different institutions apart and make them distinctive. But none of that information is available in the rankings. The rankings are based purely on basic criteria and, though people look at them, they provide insufficient and often misleading information.
One of the greatest difficulties is the assumption that what the rankings measure is quality, and that’s not the case.
Evo: What impact do these rankings have on the availability of diverse postsecondary options for students who have different goals or different experiences?
EH: That’s a really huge difficulty with these rankings; despite all their problems, we know that lots of students with varying goals, expectations and requirements are using them. Let’s dig into the types of students who tend to use these rankings. Students who go into professional programs tend to be more likely to use them. They are also important for international students, who don’t have access to local intelligence on the availability and quality of postsecondary options. Graduate students will also use them, as will students who are high achievers and students for whom money is less of a problem.
There has been an increasing amount of research done on these rankings, so we do know that they are widely used and, what’s more, they are widely used to inform decisions. While they’re not necessarily the only tool prospective students use, they can be incredibly influential. It’s important, however, to determine whether they’re skewing people’s views about what constitutes quality.
These rankings can have a huge negative effect on more specialized institutions, schools pursuing a particular mission, because students don’t get a clear view of what those institutions are about. The Obama administration’s College Scorecard is trying to gather a different but equally relevant set of information: it aims to give students a sense of their chances of finding work after investing time and money in their education at different institutions. The issues around salary, which receive a lot of attention, are hugely problematic. On the one hand, students want to know that they will get a job and that it’s going to pay. On the other hand, salary is conditional on a range of factors, so these kinds of indicators can be very problematic.
One of the other problems with rankings is how they’re read. Since they tend to be ordinal rankings, there’s an assumption that being ranked fourth is better than being ranked fifth, which is better than being ranked 20th. In truth, however, the statistical difference between these institutions is insignificant even though it appears very significant. It’s a false assumption about what these rankings really present. Obviously, because the rankings have a huge impact on students’ choices and decision making, they also have a huge impact on public perceptions of institutions. The institutions themselves seek to enhance their position in the rankings, and so the rankings have become a huge driver influencing institutional and academic behavior.
Evo: You mentioned in an earlier interview that some of the central problems with the rankings are not the rankings themselves, but the influence they bear on both prospective students and policymakers. What can be done to lessen the impact of these rankings?
EH: Part of lessening the impact of rankings comes from making more information available about the pluses and minuses of the rankings themselves. There is a rational explanation as to why people have responded so immediately to these ranking systems: They’re very simple, they look very simple and they provide very simple information.
So how do we get beyond this?
Frankly, some people believe the more rankings we have, the better. There are lots of different sources of information, but there are probably just two or three ranking systems that dominate. In the US, US News and World Report dominates. Once you go internationally and look more widely, there are the QS rankings, the Times Higher Education rankings and the Shanghai Academic Ranking of World Universities, which are more focused on research. That’s probably it. Those are the most popular and influential ranking systems.
Secondly, universities and colleges must provide a lot more information to and for the public. There is a large responsibility on higher education institutions to provide more genuine information about the learning environment and experience for students, including student learning outcomes.
Thirdly, there is a need to identify a more appropriate way to compare higher education institutions internationally. The European Union has tried to develop an alternative ranking system called U-Multirank. It has a much wider set of indicators and operates interactively so each person can select the indicators which are most important and meaningful to their own experience.
Getting past that desire for simplicity is a major difficulty.
Evo: Do you see any trends that would indicate a change in the reliance on these rankings for prospective students?
EH: Unfortunately, rankings are becoming more influential. Over the last 10 years, the number of people using the rankings has increased and they have become more and more influential. Will they taper off or plateau? That’s unclear, but what is obvious is that there is a growing demand for more information and transparency about the performance and contribution of higher education.
In the US, US News and World Report has remained the most influential source of information. Recently, they launched a global edition. To me, they’ve done so because the market of American students looking abroad and international students coming to the US is growing, and they want to be a source for that demographic. I don’t see them competing with the Shanghai ranking, the QS or the Times Higher Education ranking, but it shows the opening of the US market in a way that may not have been so obvious before.
Evo: Is there anything you’d like to add about the impact that these rankings have on how students think and how institutions behave?
EH: Some of the big issues also have to do with the rankings’ long-standing support of, and influence on, higher education through benchmarking. What the rankings have succeeded in doing is acknowledging that higher education operates in an international environment and that international comparisons are part of that global market. There’s no way we can simply self-reference or continue to operate as universities and colleges have traditionally done, in a very closed circle.
Rankings have exposed us to the world in a way that wasn’t obvious before. It really shows the acceleration of globalization and internationalization in the higher education marketplace. Governments are as concerned about the competitiveness of their institutions as the colleges and universities themselves, because higher education plays a huge role as a beacon for economic growth and investment. It’s a bellwether for the global competitiveness of a country. Lots of governments are very concerned about the status of their institutions and are using rankings to measure domestic higher education performance and to compare it against other countries. Most of that is geared towards research because those indicators are more readily available.
At this point, there are no meaningful global indicators for comparing teaching or measuring learning. This means that developing a comprehensive, meaningful way to compare institutional performance and quality at the international level will be problematic over the short term. However, this is where the most attention is being placed at the national and international levels: what we really want and need to know about is the quality of the education and the educational experience. We’re not there yet.
This interview has been edited for length and clarity.
Author Perspective: Analyst