Microcredentials: Questions About Quality, Transparency, Learner Rights and Issuer Concerns
Microcredentials promise a greater level of transparency into educational achievement than our current credit-hour-based postsecondary degree system. To make good on that promise, we need a national infrastructure that supports a broad understanding, acceptance and use of these credentials. Many individuals and organizations are working toward this goal. They are too numerous to mention individually, but groups like the Lumina Foundation are coordinating the national conversation to create such a system, and organizations like IMS Global are working on the technological requirements for the credentials themselves. The need for quality assurance for credentials has been raised as part of this work. One way to approach it is to identify where, and how much, transparency is needed to empower credential earners, consumers and other stakeholders to evaluate for themselves the quality of a credential. For Quality Matters (QM), it's a very timely question.
In the midst of all the good work happening around microcredentials, QM is evaluating its own credentialing processes and moving to a competency-based digital credentialing system. In the process, we have made some observations about the tension between the transparency needed to evaluate quality and considerations of learner privacy and issuer rights. Ensuring the quality of microcredentials offered by providers is a goal that is not necessarily consistent with the self-interest of credential issuers, students, employers and other stakeholders.
Some Credentialing “Street Cred” for QM
QM has long had a digital credential that reflects the application of measurable, research-supported and well-articulated criteria. QM's certification of an online course is represented by a QM mark that is hyperlinked to the data about the review of the course. However, not all of the evidence that supports the award is available through the credential link. The final report, containing detailed feedback about how a course meets, or doesn't meet, each particular standard, is provided directly to the course developers for their information and use. The decision about sharing that essential piece of evidence rests with the course developer and their institution, because the evidence is sensitive and proprietary to the institution. Although the rationale for this was clear to us, we didn't initially connect it with the privacy concerns of individual credential earners.
Because we had been following the national initiatives, we became interested in providing digital credentials, or badges, to the faculty, staff and administrators who take QM professional development. We had been offering only a certificate of completion for courses, which did not identify what participants had learned or could do. It could not easily be shared on platforms. And it didn’t reveal the evidence of proficiency. It was time to offer meaningful microcredentials for our own learners—to meet their needs, and ours, with a more effective way to communicate and validate their proficiencies. Or so we thought.
Taking the Plunge into Microcredentials
As a quality assurance organization with a strong reputation for rigor in the way we support our standards with research from the field, we could not very well "wing it" in our own approach to microcredentials. Although there is not yet a body of research that can guide the practice of microcredentialing, we did have the work of those pioneering in this area. What we found was a set of criteria for digital credentials that supports learners and fits well with our needs as a credential issuer.
The Connecting Credentials framework (Lumina Foundation) identified four benefits of a national credentialing infrastructure: equity, credential transparency, comparability and portability. Although all were meaningful for us, we were especially interested in the use of digital credentials to address our requirements for transparency in the validation of our award. We know that our individual role certifications, such as QM Peer Reviewer and QM Master Reviewer, carry value in the field beyond their intended use. They are used in job descriptions, in institutional service criteria and, in some cases, in promotion and tenure systems. The ability to expire certain credentials and to make transparent the evidence and evaluation required for the award were important considerations. In addition to explaining what was done to earn the credential, we wanted the evidence supporting the competency to be transparent as well. That's where we encountered the tension, the set of competing needs, that will have to be addressed as we consider what's required for quality assurance of microcredentials.
It’s Never as Straightforward as it Seems
Our transition from issuing a certificate of completion to awarding a set of microcredentials has raised questions about what needs to be shared to be transparent about the validity of the award. As we begin to roll out our system more broadly, we are engaged in a discussion about the balance between the learner's self-determination and right to privacy and our desire for full transparency.
What forms the evidence for the award, what is appropriate to connect to the digital credential, and who gets to decide? Sharing the core piece of student work, perhaps a culminating activity, can clearly illustrate what was done to earn the competency credential, but not the context in which it was done or how it was evaluated. Stakeholders can look at the evidence behind the badge, as well as the descriptions in the metadata, to see what was done to achieve it, but how much information do they need to make judgments about the validity of the credential? How much is practical to share? And how much is private or proprietary information?
Do credential earners have a say in what work gets connected to their credential? Will they view it as a quasi-portfolio to share publicly or with specific stakeholders, or will they see it as private information to be shared only on an "as needed" basis? Should we care about that? Issuers have the ability to expire a credential for a competency award that is perishable. As credential earners increase their skills beyond the level presented by the existing evidence, should they have a way to expire or improve the evidence but keep the credential?
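To make the mechanics concrete, the kind of badge metadata in question can be sketched as an Open Badges 2.0 assertion, the format maintained by IMS Global. The following is a minimal illustrative sketch, not QM's actual implementation: the `qm.example.org` URLs and the assertion identifier are placeholders, and a real system would add issuer, criteria and verification detail. It does show how the spec's `evidence` and `expires` fields map onto the questions above: evidence links can point to work the earner agrees to share, and an expiry date supports perishable competencies.

```python
from datetime import datetime, timezone

def build_assertion(recipient_hash, badge_url, evidence_url,
                    narrative, expires=None):
    """Assemble an Open Badges 2.0 Assertion as a plain dict.

    Field names follow the Open Badges 2.0 specification; the URLs
    are hypothetical placeholders, not real endpoints.
    """
    assertion = {
        "@context": "https://w3id.org/openbadges/v2",
        "type": "Assertion",
        "id": "https://qm.example.org/assertions/42",  # placeholder id
        "recipient": {
            "type": "email",
            "hashed": True,            # hashing keeps the earner's email private
            "identity": recipient_hash,
        },
        "badge": badge_url,            # points at the BadgeClass (criteria, issuer)
        "issuedOn": datetime.now(timezone.utc).isoformat(),
        "verification": {"type": "HostedBadge"},
        "evidence": [{
            "id": evidence_url,        # link to the earner's work; sharing it is the earner's call
            "narrative": narrative,    # description of how the work was evaluated
        }],
    }
    if expires:                        # perishable competencies can carry an expiry date
        assertion["expires"] = expires
    return assertion
```

One design point worth noting: because `evidence` is a list of links rather than embedded content, an issuer could let the earner replace or retract the linked work (for example, pointing it at an e-portfolio entry they control) without invalidating the assertion itself.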
We would be grateful to hear how others have handled these questions. For example, we are considering the use of an e-portfolio system as a logical adjunct to a microcredentialing system to provide learners more self-determination around the evidence of their learning and the sharing of their work. How has this worked for you? How are you handling the trade-off between transparency and privacy concerns?