A New Lens on Credentials: How Harvard Extension School Is Rethinking Non-Degree Program Evaluation
At Harvard Extension School’s Office of Certificates and Non-Degree Offerings, we manage a large and diverse portfolio, including over 40 graduate certificates, four undergraduate certificates and a growing set of microcredentials. Our students, often midcareer professionals, can enroll in these programs without any admissions process. They simply indicate their interest in a credential at registration and, if they complete all the required courses within the corresponding timeframe with a grade of B or above, they earn their credential. Our team supports students throughout their journey, providing advising and tools to help them track course requirements and request their credential upon completion. Notably, our certificates are designed to stack to our master’s degrees, offering a flexible pathway for continued learning.
Despite managing innovative, interdisciplinary offerings, the Certificates and Non-Degree Offerings team did not have a formal review process for evaluating the effectiveness of its credit-bearing microcredentials. Traditional degree program evaluations didn’t fit our shorter, more flexible credentials. As a result, we developed a new evaluation process built on a weighted scoring system to regularly assess the health of our programs and support data-informed decisions to retire, revise or continue offerings. Here, “evaluation” refers to determining the ongoing value and structure of a credential, while “assessment” refers to the data-gathering process that informs those decisions.
Why a Weighted Approach?
Over time, we learned that certain data points have a stronger impact than others on a credential’s success. For example, if a key course within a credential consistently receives poor reviews without administrative intervention, that credential is unlikely to succeed. Similarly, if a topic becomes outdated in the industry, enrollments for the corresponding credential are likely to decline over time. We therefore determined that a weighted approach would work best for deciding which data points to include in our evaluation process.
Understanding your institution and collaborating with program leaders and directors is essential in developing these weightings. Some of our departments rely heavily on advisory boards, while others benefit from program directors who are currently active as industry experts. The Office of Certificates and Non-Degree Offerings navigates this landscape to develop and support non-degree academic credentials.
To begin, we identified the data points we were already using to measure a credential’s success and labeled them as high weight. Rather than immediately assigning numerical values to every indicator, we initially categorized each data point as high, moderate or low weight. This weighting step is crucial and should be customized by each institution adopting this approach, particularly for non-degree programs.
Selecting and Weighting KPIs: Process and Rationale
We conducted multiple brainstorming sessions to identify which data points, or key performance indicators (KPIs), should be included in our evaluation system. Each KPI was then categorized as high, moderate or low weight (see Appendix A). For instance, new student interest numbers and curriculum alignment with credential learning objectives were considered high-weight KPIs. Moderate-weight KPIs included elements such as competitor analysis and results from the certificate earner survey that students complete upon requesting their credentials; these served as potential indicators of suboptimal performance within a credential. Low-weight KPIs, such as student career fields and job titles, were considered supplementary data that can inform decision-making but are less central to evaluating program success. Because of the interdisciplinary nature of our programs and the diverse backgrounds of our students, we determined that alignment between job titles and specific programs was not a reliable success indicator for us.
After compiling a comprehensive list of KPIs, we organized them into thematic categories to form sub-assessments in four key areas: market demand, curriculum accuracy, program design and overall outcomes. Each area receives a distinct score, and these scores collectively contribute to an overall health score for each evaluated credential. The information is presented within a single Excel workbook template (see Appendix B), with each sub-assessment documented on a separate sheet. For scoring, point values are assigned to each KPI based on its weighting category, and we established specific criteria to determine whether a KPI receives full, partial or no points during the evaluation process.
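To make the scoring mechanics concrete, the sketch below shows one way such a weighted rubric could be computed. The point values, KPI names and sub-assessment groupings are illustrative assumptions rather than the actual values in our Excel template; the logic simply mirrors the idea of awarding full, partial or no points per KPI according to its weight category, then rolling those points up into sub-assessment scores and an overall health score.

```python
# Illustrative sketch of a weighted credential-evaluation rubric.
# Point values, KPI names and groupings below are hypothetical,
# not the actual weights used in the HES Excel template.

WEIGHT_POINTS = {"high": 10, "moderate": 5, "low": 2}  # assumed point scale per weight category
CREDIT = {"full": 1.0, "partial": 0.5, "none": 0.0}    # full, partial or no points

# Each KPI maps to (sub-assessment area, weight category).
KPIS = {
    "new_student_interest":      ("market_demand",       "high"),
    "competitor_analysis":       ("market_demand",       "moderate"),
    "curriculum_alignment":      ("curriculum_accuracy",  "high"),
    "certificate_earner_survey": ("overall_outcomes",     "moderate"),
    "student_career_fields":     ("overall_outcomes",     "low"),
    "stackability_to_degree":    ("program_design",       "moderate"),
}

def score_credential(ratings):
    """Roll KPI ratings ('full', 'partial', 'none') up into
    sub-assessment scores and an overall health score (percentages)."""
    earned, possible = {}, {}
    for kpi, rating in ratings.items():
        area, weight = KPIS[kpi]
        points = WEIGHT_POINTS[weight]
        earned[area] = earned.get(area, 0) + points * CREDIT[rating]
        possible[area] = possible.get(area, 0) + points
    sub_scores = {area: round(100 * earned[area] / possible[area], 1) for area in possible}
    overall = round(100 * sum(earned.values()) / sum(possible.values()), 1)
    return {"sub_scores": sub_scores, "overall_health": overall}

# Example: a credential with strong market demand but an outdated curriculum.
print(score_credential({
    "new_student_interest": "full",
    "competitor_analysis": "partial",
    "curriculum_alignment": "none",
    "certificate_earner_survey": "full",
    "student_career_fields": "partial",
    "stackability_to_degree": "full",
}))
```

In practice, this rollup lives in the Excel workbook described above; expressing it in code is simply a compact way to show how the weighting categories and the full, partial or no-point criteria interact to produce the health score.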
Lessons Learned and Recommendations for Adoption
Developing and implementing a new assessment system required both flexibility and agility. After analyzing a few certificates using our initial system, we realized that some adjustments were necessary. These changes included moving certain KPIs to different categories, reassigning weights to specific KPIs and creating a new sub-assessment. We also discovered that some KPIs fit multiple categories; to avoid duplication, we clearly defined what each KPI measured and how it fit within its assigned category. This process is ongoing and has taken time, but it has been valuable in providing evidence-based data for our credential evaluations.
Conclusion and Future Direction
Understanding your institution’s values and direction is essential when developing an evaluation system for non-degree credentials. At HES, the non-degree credentials we offer, which include graduate certificates, undergraduate certificates and microcredentials, do not fit within the typical degree assessment cycle. Likewise, how we measure credential success and the type of data we collect differ from traditional program assessments.
In response, we developed our own evaluation process using a weighted system that provides an overall health score for each credential, along with sub-scores in key areas such as market demand, curriculum accuracy, program design and outcomes. This process has taken over a year to develop, and we continue to refine it as we determine the best way to present these evaluations to stakeholders. Our goal is to eventually run every non-degree credential through this evaluation system to gain a comprehensive understanding of performance across the portfolio. For now, we plan to evaluate our graduate certificates every five years and our microcredentials every three years. This cycle allows time for enough students to complete the programs, ensuring we have sufficient data to derive meaningful insights.
Ultimately, while non-degree credentials are often assessed using the same criteria as degree programs in academic continuing education, certificates and microcredentials are fundamentally different and require a distinct, individualized approach to evaluation. By developing a tailored, evidence-based system, we can make informed decisions regarding the future direction of our non-degree offerings, ensuring they remain relevant, rigorous and responsive to our diverse student population’s needs.

