Policymakers and Leaders Need More Timely Measures of Student Progress

Without timely data or incentives to respond to current institutional conditions, it is immensely challenging for higher education leaders to make the changes they need to improve student outcomes and broaden access.

Imagine showing up in your employer’s office for an annual performance review and hearing something like this: “Congratulations, Nate. Results are in from 2008 and it turns out your predecessor did really outstanding work. So we are doubling your bonus from last year.”

Absurd, right? But that is exactly what trustees, states and the federal government do when they evaluate institutions and their leaders using graduation rate data like those released in January by the National Center for Education Statistics (NCES). These data often feed into other high-stakes measures, like those embedded in college rankings and state accountability systems or funding formulas.

The federal graduation rate calculation was created to help inform students choosing where to attend college. And indeed, if the purpose is to provide students with information, even outdated graduation rates are much better than none at all.

But for the purpose of motivating and tracking institutional change, or improving state and federal policy and budget choices, the current method of calculating graduation rates makes them useless or even counterproductive as management tools.

Consider that for four-year institutions, the graduation rate data just reported are for students who started in fall 2008. Much of the attrition from college happens in students’ first two or three years, which is when institutions have the best chance to improve their graduation rate. Yet the average tenure of a college president, according to the Association of Governing Boards, is seven years; for provosts, it is even shorter. That means most college leaders currently in office were probably not there when the newly reported cohort enrolled, and they are even less likely to have been in place long enough to shape the policies, practices and institutional culture that moved the rate up or down.

And even when the same people have been in place for a long time, publicly reported graduation rates have little diagnostic or incentive value for an institution. They are archaeological artifacts: important for the historical record, but hard to connect to current management issues. To rely on them to drive institutional accountability, or to attach financial stakes to them as many outcomes-funding and accountability systems do, is to violate principles of sound management, behavioral economics and sheer common sense. Faced with the near-impossibility of moving the graduation rate, leaders are likely to focus instead on things they can affect and claim credit for: fundraising, faculty recruitment, research, physical infrastructure and so on.

There are at least two ways that institutions, policymakers, boards of trustees, and news media could make measures of student progress more relevant and give them the urgency they deserve.

First, we should focus on actual recent college graduates: students whose hands current college leaders may even have shaken personally as they crossed the stage. The math couldn’t be easier, and no one is left out. Every summer after graduation, state leaders (and education reporters) should be asking: How many students graduated in our state last year? Was that up or down from the prior year? Every state should be able to answer that question within a few weeks of the end of spring term. It’s a more important story than the fall enrollment trend pieces that run during the first week of classes each year, yet it receives far less coverage.

Those questions will lead to others that may be harder but are the right ones to ask. Why did the number go up or down? How long did graduates take to finish? What about key subgroups of interest: graduates in particular majors, minority and low-income students, graduates from underrepresented parts of the state or the region? And, of course, what did the current crop of graduates actually learn?

Second, institutions should be monitoring and reporting current rates of progress and completion, not those from six or seven years ago. It is easy for an institution to determine, for example, that 90 percent of freshmen make it to sophomore status, 90 percent of sophomores to junior, 90 percent of juniors to senior, and 90 percent of seniors to actual graduation. That institution’s current predicted graduation rate would be 90% × 90% × 90% × 90%, or about 66 percent. Many institutions use a similar approach to create timely forecasts of enrollment and tuition revenue. An associate degree program could be broken into analogous progress benchmarks.
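To make that calculation concrete, here is a minimal sketch in Python; the persistence rates are the hypothetical 90 percent figures from the example above, not real institutional data.

```python
# Minimal sketch: a current predicted graduation rate as the product of
# this year's stage-to-stage persistence rates. All rates are hypothetical.
from math import prod

# Share of students advancing from each stage to the next this year.
persistence = {
    "freshman -> sophomore": 0.90,
    "sophomore -> junior": 0.90,
    "junior -> senior": 0.90,
    "senior -> graduation": 0.90,
}

predicted_rate = prod(persistence.values())
print(f"Predicted graduation rate: {predicted_rate:.1%}")  # about 66%
```

Because each rate reflects this year’s students, the prediction moves as soon as persistence changes, years before the federal cohort figure would register it.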

There is more than one way to shift the focus to more recent and relevant trends. New Zealand’s higher education agency uses recent progress and completion data to hold its public and private institutions accountable. And in the United States, at least one regional accrediting agency has developed a measure of graduation efficiency that uses the most recent available graduation and enrollment data.

State and federal policymakers who want real institutional accountability (as well as timely feedback about the impact of their own policy and funding choices) should also be interested in measures that don’t have a seven-year time lag. College completion rates are too low, sometimes scandalously so, and we should be demanding that our higher education institutions and leaders improve. But for that demand to be meaningful, we need measures of current performance.

And that annual review with your boss? Many employers are, in fact, moving away from annual performance reviews in favor of more frequent and immediate feedback that is more likely to produce results. Higher education needs to move in that direction, too.
