Debating the Graduate Rate Metric: Understanding Higher Education from Multiple Perspectives (Part 1)

—Co-Written with Christine Hansen | Hawaii Administrator, Victory University and Mary-Ann Swendsen | Regional Engagement Team Leader, United States Coast Guard

The undue focus on graduation rates is pushing government bodies to continue encouraging students to enroll in programs that do not necessarily lead to success.

Recently, we conducted a mini-research study to determine whether a relationship existed between the levels of cognitive skill students displayed in discussion board posts in online classes and their universities' graduation rates. We sought to explore whether graduation rates were indicative of student abilities.

Because we were committed to encountering the data directly, rather than relying on second-hand analyses, we decided to investigate pedagogical factors outside the context of institutional research. Despite the crucial nature of institutional research reporting, and the further need to release some of this information to prospective students as consumer data, we suspected a key part of the picture was missing from this data. Perhaps the easiest way to express our misgivings is to say we were concerned that educational institutions are treated as homogeneous entities for data purposes when there are a multitude of roles within any single institution, including administrators, instructors and students, among others.

Every educational institution comprises various roles and layers of roles, some of them in conflict, some mutually reinforcing and none without its contribution to the graduation rate. Graduation rates (along with other institutional research frequently reported as consumer data) are assessed with too great a weighting toward the administrative role. It is true that data from faculty and student roles is sometimes used in reporting, but this data is almost always filtered through the administrative layer and is seldom available in raw form. As an experiment, we decided to separate administrative, instructional and student perspectives on the same data. In this way, we hoped to go beyond mere administrative reporting.

Our work brings us into daily contact with the lives of military Soldiers, Airmen, Marines, Sailors and Coast Guardsmen (Coasties). We have been surprised to notice that many of the schools most popular with military members and veterans do not have high graduation rates. Although some schools with the lowest graduation rates are repeatedly contracted to provide education services exclusively to military personnel stationed abroad, military personnel stationed within the United States do have a choice. This is a paradox. Selingo argues persuasively that:

“Students who start college but do not finish are typically no better off than those who never even started, and in some cases might be worse off, if they took on debt. Given the subsidies they give to colleges, federal and state governments (to include the military) have a stake in making sure that students finish what they started.” (2012)

Though our team does not necessarily agree that students who start but do not finish college gain no advantage over those who never started, we are confident that military personnel transition much more successfully to civilian life if they have earned a higher education credential.

Why keep throwing good money after bad? Our rationale for this phenomenon is three-pronged:

  1. The importance of graduation rates is minimized by leadership because there is an assumption the high rate of military student transfers from one school to another blurs the actual graduation rate numbers, rendering them useless;
  2. The importance of graduation rates is minimized by students because this information is overshadowed by the school’s packaged convenience and other selling points;
  3. The importance of graduation rates is minimized by educators because a low rate offers no direct indication that learning is not occurring.

This last point arises mainly from a widespread belief that it is just plain difficult to assess learning (especially in distance education). Later we will offer several other challenges in assessing learning.

One often hears that educational traditionalists — those with a distrust of online programs and newer schools, especially for-profit schools — are “protecting” military members. What are the facts? The 1990 Student Right-to-Know Act and the resulting reporting of Integrated Postsecondary Education Data System (IPEDS) data provide a starting point for objective assessment of institutional success, which is why we join others in applauding their use. There are various well-known concerns about shortcomings in this data, such as the failure to account properly for transfer students and the inability to track students who take more than six years to graduate. These have often been discussed (Glenn, 2010), and we will not consider them in depth here. The concern prompting our mini-research study was different.

Graduation rates are a kind of informational “black box,” providing insight only from an administrative perspective on what should be reported to IPEDS. We feel something more is needed that encompasses a greater range of roles in the institution and that recognizes the non-homogeneous nature of an institutional identity. We are also concerned that while institutional data can be an excellent starting point for the student, such data does not tell the whole story of whether the student is likely to succeed at a particular school.

We believe an examination of the school through the lens of various roles can reveal more about the school than a simple examination of data released from an office of institutional research. For the purposes of our study, we chose three roles — administrator, instructor and student — but it would be possible to examine other roles as well. While concerns about low graduation rates have been broadly addressed through assessment, self-study and other administratively driven processes, we wanted to get down to the bedrock of everyday student and faculty experiences, as yet unfiltered through an administrative perspective.

We wish to unpack the black box and gain more specific information about the relationship between institutional data and various roles — and, in particular, about non-administrative roles. We believe the relationship between institutional data and the administrator role predominates in characterizations of supposedly “homogeneous” institutions. The institution is not considered homogeneous in other respects when data is reported, yet somehow roles and their differential power are not considered a factor in analyzing institutional research data.

Donna Duellberg, Mary-Ann Swendsen and Christine Hansen will be discussing this topic in more detail this September at the annual NUTN Network conference in Albuquerque. To learn more about the NUTN Network conference, please click here.

This was the first installment of a two-part series. To read the conclusion, please click here.
