
Courage Needed to Reinvent College Ranking Factors

Though institutions currently sitting at the top of the rankings list have little incentive to push for rankings reinvention, this process is critical to ensuring students have access to the information they need to make informed decisions.

How do we know if a college is a “good” college? The most visible measure of the “quality” of academic institutions is the college rankings. We can learn a lot about how we define quality from the metrics collected. In the US News and World Report (USNWR) rankings, for example, roughly 69.5 percent of the 2015 metrics for undergraduate programs were based on inputs—what goes into the institutions (money from alumni, admissions selectivity, faculty characteristics, reputation from peers, etc.)—and about 30 percent were outputs (graduation and retention rates).

Many have written about the implications of measuring quality this way (e.g., Pike, 2003; Barr and Tagg, 1995). Multiple studies have documented institutions manipulating data to improve their rankings (Hossler, 2000; Pollock, 1992; Stecklow, 1995). More importantly, Clarke (2002) observed that the measures in any assessment of quality are determined by those designing the ranking, and are therefore subject to their values. The measures about faculty, for example, reflect a paradigm that values teaching rather than learning, a distinction Barr and Tagg (1995) articulated 20 years ago.

The metrics used by groups like USNWR are important because they influence some institutions’ behavior (Hossler, 2000), and certainly some prospective students’ choices (Morse, 2013). Most worrisome, the USNWR measures, with their emphasis on resources and reputation, do not reflect the research on the elements that contribute to student learning and development—most notably student engagement. Pike’s 2003 study found that “educational quality seems to have little to do with resources and reputation” (p. 16). The measures used by USNWR have only a distant relationship to the ultimate outcome, learning, or to the unique contributions of individual campuses serving the diversity of our students.

What if we recalibrated our definition of quality to focus on what the research says really matters in student learning and development? We would consider metrics such as:

1. Quality of instruction (Astin, 1993; Chickering and Gamson, 1987; Henard and Roseveare, 2012):

How are faculty applying principles of effective pedagogy, and to what degree is differentiated instruction being employed effectively? To what extent is continuous improvement of instruction part of the faculty conversation? Where do students start in their knowledge, what have they gained by the end of a course or a major, and to what extent do faculty attend to those data and modify their program design or instruction? Are there clear learning goals for each major and each course, and how is progress toward those goals assessed?

2. Engagement with learning (Astin, 1977 and 1993; Pascarella and Terenzini, 1991):

What kind of structured, facilitated, formal and informal leadership development opportunities are there? To what extent is there evidence that the institution and students value the co-curricular experience and integrate it with the formal academic experience? How is the application of theory to real-time problem solving and skill development integrated in each discipline?

3. Accessibility (Long, 2010):

Financial, physical and temporal: What dollar amount do students themselves end up paying for their education? How accessible are the facilities and opportunities? How are courses adapting to where the learner is starting and how quickly the learner is learning? To what extent are the goals noted under “quality of instruction” clearly articulated with students’ career goals?

4. Exposure to new ideas and networks of people (Astin, 1993; Chickering and Gamson, 1987):

How diverse are the course offerings in terms of disciplines and perspectives? How diverse are the student, faculty and staff populations? How effective are opportunities for making connections and expanding networks for students? What are the mechanisms for helping students engage new ideas, inside and outside the classroom?

The Likelihood of a Transformation

Of course, it is much easier to list the kinds of factors we should be attending to than to find effective measures of them. The National Survey of Student Engagement (NSSE) has, for the past 15 years, been gathering data about some of the engagement indicators noted above. These data come from individual students, rather than institutions, making them less likely to be manipulated, yet they are subject to the vagaries of self-reporting (Pike, 2003).

I think, though, that the challenge of changing the metrics is less a conceptual or practical one than a motivational and political one. We do know what to measure, and thanks to so many new technologies, we have the capability. But do we have the will?

I see little incentive for those presently sitting in the top tier to change their metrics. They are caught in a type of arms race with one another, vying for the top 25 slots. The rest of us, however, could risk embracing an alternative, especially one that would benefit the majority of students who, for a variety of reasons, may never consider attending the institutions at the top of the USNWR ranking.

That is where I hope we will see a more informative, student-centric definition of academic quality emerge.

– – – –

References

Astin, A. (1977). Four critical years: Effects of college on beliefs, attitudes, and knowledge. San Francisco: Jossey-Bass.

Astin, A. (1993). What matters in college? Liberal Education, 79(4), 4-16.

Barr, R., and Tagg, J. (1995). From teaching to learning: A new paradigm for undergraduate education. Change, 27(6), 12-25.

Chickering, A., and Gamson, Z. (1987, March). Seven principles for good practice in undergraduate education. AAHE Bulletin, pp. 3-7.

Clarke, M. (2002). Some guidelines for academic quality rankings. Higher Education in Europe, 27(4), 443-459.

Henard, F., and Roseveare, D. (2012). Fostering quality teaching in higher education: Policies and practices. OECD Institutional Management in Higher Education, retrieved from http://www.oecd.org/edu/imhe/QT%20policies%20and%20practices.pdf

Hossler, D. (2000, March). The problem with college rankings. About Campus, 20-24.

Long, B. (2010). Making college affordable by improving aid policy. Issues in Science and Technology, 26(4), retrieved from http://issues.org/26-4/long-2/

Morse, R. (2013, October 31). Rankings play increasing role in college application choices. US News and World Report, retrieved January 21, 2016 from http://www.usnews.com/education/blogs/college-rankings-blog/2013/10/31/rankings-play-increasing-role-in-college-application-choices

Pascarella, E., and Terenzini, P. (1991). How college affects students. San Francisco: Jossey-Bass.

Pike, G. (2003). Measuring quality: A comparison of U.S. News rankings and NSSE benchmarks. Paper presented at the annual meeting of the Association for Institutional Research, Tampa, FL. Retrieved January 18, 2016 from http://www.nsse.indiana.edu/pdf/research_papers/Pike_Measuring_Quality.pdf

Pollock, C. (1992). College guidebooks—users beware. Journal of College Admissions, 135, 21-28.

Stecklow, S. (1995, April 5). Colleges inflate SATs and graduation rates in popular guidebooks. The Wall Street Journal, p. A1.
