College Rankings: Just When You Thought It Was Safe To Go Back In The Water

—Co-Written with Chris Mohr | Associate, University Ventures

The College Scorecard released by the Department of Education is a good start on the path to a student-centric institutional ranking system, but it must be further developed to truly meet the needs of today’s prospective learners.

It has become fashionable in recent years to slam the college rankings racket. U.S. News and the 14 other rankings currently active in the U.S. have historically ranked higher education institutions based almost entirely on easy-to-measure inputs like student selectivity, faculty resources, spending per student and library holdings. Most rankings have also included a research component; many are so research-heavy they’re in danger of toppling over.

We’ve enjoyed pointing to studies showing that year-to-year rankings variability among institutions is greater than could plausibly be attributed to actual changes in institutional quality. We’ve poked fun at Phil Baty of Times Higher Education, who manages an entire section of that publication devoted to the rankings racket and tweets newfangled rankings on a daily basis. We’ve admired rankings disrupters like Jess Brondo Davidoff of Admittedly, who has surveyed students and found that rankings rank quite low (20th out of 27 factors) in students’ college selection process. And we’re fond of making the point that ranking based on easy-to-measure metrics rather than the most important ones is akin to the patron stumbling out of the bar late at night, looking for his lost car keys under the parking lot light because that’s the only place he can see.

But just when you thought the college admissions waters were safe (i.e., refreshingly free of rankings flotsam and jetsam), last September the U.S. Department of Education (ED) released an updated version of its College Scorecard that provided data on the income of former students, compiled by matching federal student loan data to the IRS database. The College Scorecard now provides two really useful pieces of data: the percentage of former students earning more than $25k per year six years after entering college; and the median salary of former students a decade after starting college. In other words, for the first time, we have real and valuable outcome data at the institutional level. Or, as the Center for American Progress noted, the new Scorecard “contains important indicators that have never previously been available for all institutions of higher education.” (It should be noted that prior to September, some rankings had attempted to use income data from Payscale to do the same thing. However, Payscale data is self-reported and covers only about 1,300 colleges, with much smaller sample sizes.)

As with every major development in higher education, the new Scorecard has not arrived without controversy. First, the income data is limited to students who partake of federal financial aid, leaving out students from the most well-off families. Second, the Scorecard doesn’t distinguish between graduates and dropouts; all students who utilized federal grants or loans are included. Third, it doesn’t distinguish by program (and we know there’s often more variation within an institution than across institutions). Finally, there’s the blatant monetization of the college experience. Andrew Delbanco, a Columbia professor and author of College: What It Was, Is, And Should Be, was quoted in the New York Times as saying, “Holding colleges accountable for how well they prepare students for post-college life is a good thing in principle. But measuring that preparation in purely monetary terms raises many dangers. Should colleges be encouraged first and foremost to maximize the net worth of their graduates? I don’t think so.”

To these objections, we say, first, anyone who thinks we shouldn’t judge an investment in higher education based on graduates’ income is probably someone who’s never had to worry about money. Second, and more generally, don’t let the best be the enemy of the good. Higher education is moving inexorably from an isomorphic faculty-centric model offering a one-size-fits-all product (i.e., degrees) to a diverse, student-centric model offering a wide range of shorter and less expensive credentials that will be valued by employers (and that will produce a stronger return on investment for students). Any outcome metric that allows students to evaluate where universities are positioned on this road is valuable. As enrollment begins to shift, colleges and universities will have no choice but to adapt.

The best news is that by focusing on making this data available, ED seems to have galvanized a new generation of rankings that are a departure from the racket we love to hate. In October, The Economist released its first rankings, built by running ED’s income data through a multiple regression analysis to determine whether schools over- or under-achieved on predicted income given inputs such as SAT scores, % Pell, STEM focus and geography. From this methodology, we learned that Washington and Lee graduates exceed their predicted income by about $22k per annum.

Washington and Lee is clearly doing something right.

At the same time, the Brookings Institution used the College Scorecard data to update its college rankings, first released earlier that year using data from Payscale. Brookings’ “value-added” rankings take an approach similar to The Economist’s: students shouldn’t evaluate colleges based solely on inputs or outputs, but rather on how much additional value the educational experience delivers.
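To make the value-added idea concrete, here is a minimal sketch of the regression-residual approach both outfits describe. It is our illustration, not The Economist’s or Brookings’ actual methodology; the school names, input columns and earnings figures are all hypothetical.

```python
# A minimal sketch of "value-added" rankings: regress earnings on
# observable inputs, then read each school's residual as over- or
# under-achievement. All data below is hypothetical, not real
# Scorecard records.
import numpy as np
from sklearn.linear_model import LinearRegression

schools = ["A", "B", "C", "D", "E", "F", "G", "H"]

# Hypothetical inputs per school: median SAT, share of Pell recipients,
# share of STEM degrees -- stand-ins for the kinds of predictors the
# article mentions (SAT scores, % Pell, STEM focus, geography).
X = np.array([
    [1420, 0.12, 0.45],
    [1050, 0.55, 0.08],
    [1230, 0.33, 0.22],
    [1310, 0.25, 0.30],
    [1150, 0.48, 0.15],
    [1380, 0.18, 0.40],
    [1090, 0.52, 0.12],
    [1270, 0.28, 0.26],
])

# Hypothetical median earnings ten years after entry, in dollars.
y = np.array([74000, 39000, 51000, 63000, 44000, 69000, 42000, 55000])

# Predict earnings from inputs alone ...
model = LinearRegression().fit(X, y)
predicted = model.predict(X)

# ... then treat the residual (actual minus predicted) as value added:
# a positive number means students out-earn what their inputs predict.
for name, actual, pred in zip(schools, y, predicted):
    print(f"School {name}: value added = ${actual - pred:+,.0f}")
```

The published rankings apply the same logic across thousands of institutions with many more predictors, but the residual is still the headline number.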

You may recall that the new College Scorecard was preceded by a 2013 announcement from the President that the federal government would be developing its own ratings system. It took two years for ED to get to the right answer, which was not to disrupt the rankings racket head-on, but rather to provide better data so organizations like The Economist and Brookings could incorporate it into their next-generation rankings and get the word out more effectively.

As long as ED ensures that the current iteration of the College Scorecard is the beginning rather than the end of the data improvement process, these next-generation rankings powered by Scorecard data have the potential to redirect attention from the brightest and shiniest institutions to the most productive and useful ones. And while that may be detrimental to institutions that refuse to budge from their isomorphic faculty-centric model and one-size-fits-all product, for the first time we’ll be able to say with conviction that rankings are actually helpful to students.
