Four Key Variables Institutions Must Track to Measure Online Student Success
In today’s rapidly evolving economy, higher education is under increasing pressure to demonstrate its return on investment by quantifying academic quality within the context of student success. This is especially true for the growing numbers of non-traditional students studying either wholly or partly online.
Yet even with the technological capacity to collect and analyze enormous amounts of data, it is often difficult to arrive at a consistent approach for using that data to measure both quality and success. After years of experience in the online education space, I have settled on four interrelated variables that form an effective framework for real-time assessment as well as continuous improvement.
Readiness

Given that most students still equate higher education with desks in a classroom, the online learning environment can be intimidating, particularly when it is the sole mode of academic delivery. Still, studies indicate that the better prepared students are to navigate the virtual classroom, the more successful they will be once classes begin, which is why it is important to benchmark readiness against performance and persistence.
At Drexel University Online, we have developed a week-long virtual “course” that allows prospective students to “test-drive” the Blackboard learning environment, at no cost, before they enroll. This course not only closely mirrors the online learning experience at our university, it also provides pertinent information about virtual support services and resources, along with a unique opportunity to connect with current online students and faculty.
But equally important, it yields a wealth of essential student readiness data: prior experience with online learning; motivation for pursuing further education; reasons for choosing the virtual option; and perceived comfort with the learning management system before, during, and after Test Drive participation. Now, two years after launching this tool, we are beginning to look at how we might use the data it generates to compare academic success between students who took part in this event and those who did not.
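A first pass at the comparison described above could be a simple cohort summary of an outcome metric for participants versus non-participants. This is a minimal sketch only; the field names, the choice of first-term GPA as the metric, and the sample records are all hypothetical.

```python
# Compare a hypothetical success metric (first-term GPA) between students
# who completed the readiness course and those who did not.

def cohort_summary(students):
    """Group students by readiness-course participation and average the metric."""
    groups = {True: [], False: []}
    for s in students:
        groups[s["took_readiness_course"]].append(s["first_term_gpa"])
    return {
        participated: round(sum(gpas) / len(gpas), 2) if gpas else None
        for participated, gpas in groups.items()
    }

# Illustrative records only.
students = [
    {"took_readiness_course": True, "first_term_gpa": 3.6},
    {"took_readiness_course": True, "first_term_gpa": 3.2},
    {"took_readiness_course": False, "first_term_gpa": 2.9},
    {"took_readiness_course": False, "first_term_gpa": 3.1},
]

print(cohort_summary(students))  # {True: 3.4, False: 3.0}
```

In practice the averages alone prove little; a real analysis would also control for student background and test whether any difference is statistically meaningful.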
Engagement

The rate at which our online students engage with their coursework, their professors, and their classmates enables us not only to predict student performance but also to pinpoint areas for course quality improvement.
Fortunately, today’s learning management systems track a multitude of student engagement data points that are critical for analyzing and enhancing student outcomes—from time spent and materials accessed, to assignments submitted and active participation in the virtual classroom. In fact, by having a robust system for monitoring and reporting these metrics, instructors can quickly flag students for targeted assistance, while also reworking course design and learning activities to boost student performance.
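The flagging step described above can be reduced to a simple rule over weekly engagement metrics. The metric names and thresholds below are hypothetical illustrations, not actual Blackboard fields; a real implementation would pull these values from the LMS's reporting tools.

```python
# Flag students whose weekly LMS engagement falls below minimum thresholds.
# Metric names and cutoffs are hypothetical, chosen for illustration only.

THRESHOLDS = {
    "hours_logged": 2.0,       # hours in the course site this week
    "materials_accessed": 3,   # course materials opened this week
    "posts_made": 1,           # discussion posts this week
}

def flag_for_outreach(weekly_metrics):
    """Return the metrics on which a student fell short, if any."""
    return [
        metric for metric, minimum in THRESHOLDS.items()
        if weekly_metrics.get(metric, 0) < minimum
    ]

student_week = {"hours_logged": 0.5, "materials_accessed": 4, "posts_made": 0}
print(flag_for_outreach(student_week))  # ['hours_logged', 'posts_made']
```

An empty list means no intervention is needed; a non-empty list tells the instructor exactly which engagement behaviors dropped off, which is more actionable than a single at-risk score.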
Likewise, we must have systems in place for tracking and assessing support service engagement to ensure that the online resources we provide meet the unique needs of non-traditional students. For example, students who connect regularly with their academic advisors and effectively use support services like the virtual writing center and digital library are generally more successful. At the same time, decreasing engagement may point to specific institutional shortcomings in service quality and efficiency that can then be addressed to improve student outcomes.
Persistence

Because non-traditional students are often balancing the demands of school with the responsibilities of work and family, they are more likely to stop out, or even drop out altogether, when the going gets tough. That is why it is important to ensure that the support systems we have in place are immediately responsive and highly effective. At the same time, low persistence rates can also signal poor-quality curricula, course design, or instruction.
Consequently, by analyzing enrollment patterns over time, we can identify significant barriers to retention and completion. For example, given research findings that online students who successfully complete two courses are more likely to keep going, it is essential to track re-enrollment data, by academic program, aligned with this milestone. Consistently decreasing rates usually signal poor-quality coursework, inadequate support services, or, in some cases, both.
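Tracking re-enrollment at the two-course milestone, by program, might look like the following sketch. The record structure and program names are hypothetical; only students who have completed two courses enter the denominator.

```python
# Compute re-enrollment rates by academic program at the two-course
# milestone. All records and program names are hypothetical.

from collections import defaultdict

def reenrollment_by_program(records):
    """For each program, the share of two-course completers who re-enrolled."""
    counts = defaultdict(lambda: [0, 0])  # program -> [re_enrolled, completers]
    for r in records:
        if r["completed_two_courses"]:
            counts[r["program"]][1] += 1
            if r["re_enrolled"]:
                counts[r["program"]][0] += 1
    return {prog: round(re / total, 2) for prog, (re, total) in counts.items()}

records = [
    {"program": "MSN", "completed_two_courses": True,  "re_enrolled": True},
    {"program": "MSN", "completed_two_courses": True,  "re_enrolled": False},
    {"program": "MBA", "completed_two_courses": True,  "re_enrolled": True},
    {"program": "MBA", "completed_two_courses": False, "re_enrolled": False},
]

print(reenrollment_by_program(records))  # {'MSN': 0.5, 'MBA': 1.0}
```

Running this per term and watching the trend line by program is what surfaces the "consistently decreasing rates" the text warns about.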
Satisfaction

There is abundant evidence to suggest that satisfied students tend to experience greater academic and professional success. We typically gauge student satisfaction within the context of personal attainment factors such as academic achievement, student experience, university affinity, and post-graduate career outcomes.
What’s more, we have developed a variety of tools and processes for measuring it (including surveys, focus groups, and individual student interviews), and can then use the data they generate for continuous improvement in any number of areas—from academic quality, to student engagement, to service support.
Like most institutions, Drexel has implemented an assortment of assessment instruments designed to appraise online student satisfaction, from end-of-course evaluations to pre- and post-graduation surveys. In addition, we interview current online students and recent graduates to elicit testimonials about their student experience and related career outcomes, while also tracking other relevant statistics, such as student pass rates on professional licensure exams, another key indicator of student success.
Author Perspective: Administrator