Ignoring Non-Traditional Students Invalidates Most College Ratings
There’s no shortage of publications and systems available for students to track institutional performance and student success. Unfortunately, many of these systems overlook the vast majority of learners enrolled in higher education today. In this interview, Cathy Sandeen shares her thoughts on the validity of the College Scorecard, released by the federal government, and reflects on its value as a mechanism to measure institutional success.
The EvoLLLution (Evo): What are a few of the most significant flaws you see with the College Ratings system the Department of Education released in September 2015?
Cathy Sandeen (CS): The most significant problem is the data it is based on, because the data is imperfect. It only includes students who are participating in Title IV financial aid programs, so it covers a large share of students but not all college and university students. It’s also based on first-time, full-time students only, and we know that at least 75 percent of students in the U.S. are not in that category. Most students today are non-traditional students who are older, working, may have started and stopped and are coming back, or have transferred.
There’s a huge number of students who aren’t included in the data the scorecard is based on and that is a major problem.
Evo: Does the scorecard count non-credit students or just students pursuing degrees?
CS: It only measures students who are pursuing degrees because Title IV funds are only available for approved programs. While there are some credit-bearing certificate programs where students are eligible to receive Title IV financial aid, the majority of non-degree programs are not eligible; therefore, those students would not be included in the data behind the scorecard.
The scorecard does tell us some information, and I think the impulse behind it is very positive. If you look at the scorecard, it gives three averages for each institution:
1. Average annual cost for that institution, showing whether it costs more or less than the national average;
2. Institutional graduation rate compared to the national average. Again, this only counts first-time, full-time students, not transfer students or students who transfer and graduate somewhere else;
3. Salary after graduation, which tracks the average salary of graduated students and shows whether that’s above or below the national average.
A student can see how what they’re paying compares to the national average and can get a general sense of what kind of salary they can expect after leaving. If they will be paying less than the national average and making more after graduation, they can identify that a given school is probably a pretty good deal. The problem is that it’s providing averages. If an institution awards a lot of health care or STEM degrees, those students are likely to earn more money. The average salary after attending will look high, but it depends on your major. The scorecard is an abstraction, and it doesn’t tell the full story for each and every circumstance.
Evo: In a recent article, you mentioned one of the significant benefits of the College Scorecard was that it took affordability into account. How common is that among popular institutional rating and ranking systems?
CS: I think it’s fairly uncommon for ranking systems to take affordability into account. Ranking systems will factor in a number of different things: they might include average cost per student per year, or they might factor in tuition and fees. However, the Scorecard focuses on affordability metrics: the average cost of attendance, the graduation rate and the salary after attending. It’s pretty rare that you would be looking at only those things in a ranking system, though I should point out the scorecard is not a ranking system.
On that note, it was interesting how institutions that came out looking good on the scorecard started touting it as though it were a ranking. There were a number of institutions that said, “We do not want the federal government to impose a ranking or rating system because it wouldn’t be fair as different institutions are serving different student segments and you can’t compare apples to oranges.” But the minute the scorecard came out, I got an email from one of my alma maters, UCLA, touting how well they did on the Scorecard.
Evo: What impact does the decision not to include the experiences of non-traditional students in the federal ratings system have on its effectiveness?
CS: If you’re eliminating a big chunk of the population that you say you’re serving with the scorecard, it’s going to skew the results. It has a huge impact on the scorecard’s effectiveness.
There is another scorecard system, developed by the Association of Public and Land-grant Universities (APLU), called the Student Achievement Measure (SAM). It takes into account that there are mobile students who enroll in multiple institutions or who transfer and then graduate. The traditional way of counting only the graduation rates of first-time, full-time students really doesn’t show the true picture, and it can seriously misrepresent some institutions. The SAM tracks student movement across various postsecondary institutions. A link between the scorecard and SAM would be good because that would give us a more complete picture of how an institution is doing in terms of student success and completion.
Evo: How does the rating system in its current form impact the institutions that are currently under your purview and what impact would a movement to a system like SAM have on the capacity for institutions like yours to showcase the work they do?
CS: We are an odd duck, especially the UW Colleges. UW Colleges is composed of 13 two-year transfer institutions, and that’s all we do. We encourage our students to earn their associate’s degree before they transfer—in fact, there’s a huge incentive because the associate’s degree is fully transferable to any other University of Wisconsin institution—but not all students complete that degree. Often, they transfer when they’re a few courses away from it, which impacts our graduation rate. However, we’re fulfilling our mission in helping students transfer to another institution.
I don’t think students who come to us will consult the scorecard before making their decision. I think they make their decision based on the affordability of our tuition, which is the lowest in Wisconsin, and the convenience we can provide. They want to attend an institution that’s close to their home, and they want to attend a smaller institution where they can get personal attention and a good start on their bachelor’s degree. Obviously those other factors would not be reflected in the scorecard—just the tuition would be.
Evo: How important is it for the federal government to provide a system of ratings that supports decision making for today’s discerning students?
CS: I’ve been reflecting on that question and thinking about the process I went through in choosing my first undergraduate institution and also reflecting on the process that my daughters went through choosing their own institutions.
I don’t think I would have turned to a federal database or scorecard to help me make the decision. I lived in California where there was a very robust, high-quality public higher education system and we knew about it from when we were little children. We knew about the different institutions, where they were and, in my case, I don’t think the scorecard would have swung me one way or another. I can see that if a student is first-generation or if a student is considering a private, out-of-state institution, then it might be somewhat useful.
However, I worry that people don’t understand the limitations of these data, or that the scorecard shows only averages. They might not understand that it’s not predictive of how a particular student is going to do. I think it could be useful as one piece of data in making decisions, but I really would be curious to know how many prospective students are actually using the scorecard. The first weekend that it was out, I’m convinced that the majority of individuals using it were from higher education institutions looking at their own scores, along with various associations and think tanks digging into the data to try to provide information and context. I’m not sure how much it’s used by students, families or high school counsellors.
Evo: How must the federal government’s rating system evolve in order for it to paint a truly representative picture of the opportunities available for students across the postsecondary space?
CS: I doubt that the rating system will ever be truly representative. When you have a dashboard with three variables on it, it’s presenting an abstraction. I would feel a lot better about it if it included non-traditional students—learners outside the strict confines of first-time, full-time students. It would also be good if it could expand to students who were not participating in Title IV financial aid programs so that we included the performance, experience and outcomes of all students in these important databases.
Evo: Is there anything you’d like to add about what it will take to create a more student-centric rating system for colleges and universities?
CS: I am not in favor of a rating system in general. I do think it’s important, though, to have a variety of credible, curated sources of information that individuals can look at.
The notion that we can go to one place to get all of our information and press a button and make this important decision isn’t useful. We need to try to figure out how we can address the diversity of different institutions in this country, institutions with different missions and institutions that serve different student segments.
Measuring and being accountable for students’ success and completion is important. If you are going to spend your time and your money on this very important investment, knowing that you have a good chance of completing is probably one of the key pieces of information that students, families and counsellors should pay attention to. However, that measure of student success and completion has to be accurate and fair and I don’t think we’re there yet.
Author Perspective: Administrator