Reimagining College Rankings: What Information Do Students Truly Need?
Martha Ellis | Dean of Faculty at the Roueche Graduate Center, National American University
In the United States we appear to be obsessed with rankings. Everything from sports teams to refrigerators is ranked. Rankings of higher education institutions are nothing new; college rating services like U.S. News and World Report have long published them. What is different is the idea of a single national institutional ranking system and how that system would be used. The primary purpose espoused by ranking systems is to help prospective students and their families make informed decisions about selecting a college. The more ominous side of rankings is publicly shaming low-rated schools.
In 2013, President Obama proposed that the government create a system ranking all higher education institutions. The proposal was eventually abandoned. Why the uproar? There are two primary areas of concern: data limitations and oversimplification.
First, ranking systems rely on IPEDS data for graduation rates, which leave out significant numbers of students at many institutions and do not track students who successfully transfer before receiving a degree. Most agree, however, that IPEDS is the only federal data source currently available for comparing institutions. (The National Student Clearinghouse is a promising light in this data darkness, but it will be a few more years before we know whether it can fill the gap.)
Second, data on salary after attending are generally calculated as the median salary of all former students after a certain time period, rather than by major. If this number is a critical component of a nationally published institutional ranking, then institutions are discouraged from offering programs that prepare students for lower-paying careers such as teaching and social work.
Third, tuition is not separated by field of study, even though many institutions have differentiated tuition across their majors and colleges.
The fourth issue relates specifically to two-year colleges. For community colleges, the Community College Survey of Student Engagement (CCSSE) is used in a ranking system. CCSSE data were never meant to rank institutions but rather to let institutions benchmark against a national average and against peer groups, such as institutions of similar size or in the same geographic area.
Finally, learning outcomes and labor market outcomes are generally not included because there is no consensus on standard metrics. Interestingly, these outcomes are the primary reasons students pursue a higher education credential in the first place.
The second concern is the oversimplification of assigning one number to an entire institution. This process does not take major variables into account. The ranking is the same whether a student studies humanities, computer science or mechanical engineering, and the single number ignores significant differences in mission, student socioeconomic status, institution type, admissions criteria and financial resources. Even if the number is a good starting point, it cannot be taken at face value without further research by the student and family. Some critics claim the oversimplification may actually mislead rather than help. Additionally, the spotlight on community colleges has led to rankings of community colleges, which raises a new concern about the benefit of ranking systems when students are geographically bound.
Valuable information for data-informed college choice is still needed. The U.S. Department of Education released the College Scorecard to let students compare schools on average annual cost, graduation rate and median salary after attending, with all three benchmarked to a national average. The site can be searched by type of institution, location, specialization and size, with additional information on student demographics, retention, typical student debt and programs of study. While a better approach than ranking, the Scorecard still offers no way to know the salary earned by graduates of specific undergraduate programs. No doubt some prospective students and parents are savvier consumers than in the past. However, the Scorecard can mislead first-generation students who do not know the questions to ask or the difference between a public, non-profit, for-profit, two-year or four-year institution. For example, the Scorecard profiles for Bellin College and the University of Maryland at Baltimore give no way to tell the difference in admissions requirements or degrees earned. The national averages for salary after attending and annual cost are not helpful benchmarks for regional universities and community colleges. Finally, the data issue arises again: the only students included in the Scorecard are those who received federal financial aid.
A More Student-Centric Approach to Rankings
A more student-centric approach to data-informed college choice is the sharing of student success dashboards. These dashboards would be developed for metamajors at community colleges and for colleges within universities (e.g., engineering, science and mathematics, business, education). Tuition, admissions selectivity rate, retention rates, successful transfer numbers, graduation numbers by credential type and salary after attending are all metrics that individual institutions already track. A set of standard definitions seems much easier to establish than an agreed-upon national data set. Benchmarking against peer institutions by geographic area, size and/or type of institution, rather than against a national average, will provide more valuable information for the vast majority of people attending higher education.
Institutional, field-of-study dashboards will benefit higher education leaders striving to improve timely student completion and lower student debt. Most institutions actively assess student success and review these metrics regularly to gauge improvement over previous years. Benchmarks with similar institutions give institutional leaders an additional reference point for improvement, assessment of progress and areas needing attention.
These dashboards will also benefit students, particularly the large numbers of first-generation, non-traditional and part-time students who need specific information not only on an institution but, more importantly, on specific fields of study within that institution. For many of these students relocation is not an option, so the choice between institutions matters less unless there are multiple institutions in the area. The more important information concerns the potential return on a student's time and financial investment in the local labor market from choosing a particular field of study at a particular college or university.
Author Perspective: Administrator