Constructing Quality Digital Learning Experiences

In looking at how to improve the quality of their digital learning experiences, postsecondary institutions should consider field benchmarks, context-specific needs and user experience.

The increasing importance of digital learning experiences in education has, of course, disrupted many of the conventional ways of doing things. One of the major questions this change has sparked is: “How can I be sure that my students are getting everything they need to be successful?” In essence, this question speaks to quality: Are students engaging with a learning experience that provides them with relevant content, meaningful assignments, clear direction and the holistic support they need to be successful? While the word “quality” itself is one that many may find distasteful, it is a concept that must be addressed in today’s educational landscape.

Institutions, more and more, will continue to offer varying forms of digital learning experiences, such as traditionally paced online classes, competency-based courses and even modular microlearning and credentialing units. As such, it is critical that institutions use a guiding framework or set of standards to define their approach to quality across these offerings. For an optimal approach, this definition of quality should combine field benchmarks, institution-specific needs and attention to user experience.

As many reading this probably know, there are a number of tools already used in the field to measure some of these so-called quality benchmarks. In the early 2000s, Quality Matters (QM) was formed when a small group of concerned educators wanted to help define a consistent and easy learning experience for students. [1] Now, QM is a fully fledged organization offering systemized processes for measuring course quality, programs that train educational professionals to measure quality and a host of other helpful resources. Similarly, the Online Learning Consortium (OLC) offers a suite of scorecards that measure quality in areas beyond course design, including program administration, instructional technology use and student support. OLC has even partnered with the State University of New York (SUNY) to create a free and open course review tool, OSCQR. And, like QM, OLC offers comprehensive support for its scorecards, with professional development courses for instructional designers and faculty, official review services and other consulting offerings. [2]

While the tools QM and OLC (and others) offer each have their own benefits and specialties, together they reveal the benchmarks of quality in learning experience design. Why exactly are they benchmarks? Because certain aspects of learning experience design are measured across all of these tools, denoting field-wide agreement on their significance. The implication is that these points should be included in any learning experience delivered to students. These agreed-upon aspects (benchmarks) are ideas with which most educational professionals are already familiar:

  • Any learning experience should have clearly defined and communicated goals (i.e. learning outcomes) that students aim to reach by the end of their experience.
  • Students should be able to achieve these goals through active learning assessments and content directly aligned to them.
  • Content, assignments and the overall learning environment should adhere to accessibility guidelines to ensure students of all abilities have an equitable experience.

While this small list of benchmarks is not comprehensive, it is representative of some of the core points that people attribute to a high-quality learning experience.

Though these benchmarks are important to defining quality, they can miss a critical component that needs to be considered: institutional context. Many institutions find it important to imbue their learning experiences with design elements that represent their identity and mission. For example, a college might use a curriculum framework to ensure that 100-level courses meet these benchmarks in level-appropriate ways. At this level, students might only be introduced to content (with no assumption of prior knowledge), be required to perform tasks at the application and analysis levels of Bloom’s Taxonomy and have constant and ready support. 400-level courses, conversely, might assume far more prior knowledge, require performances at the evaluation and creation levels of Bloom’s Taxonomy and offer less holistic support. Similarly, a college might require that some course assignments serve as key programmatic assessments for learning measurement purposes and that these assessments meet special guidelines. These two examples are the types of things that need to be considered as part of quality, because they are part of an institutional identity. They will likely not, however, be measured in any meaningful way by the tools that evaluate benchmarks alone.

For institutions concerned about quality, then, it is necessary to start thinking about these context-specific needs. Luckily, the process, though potentially time-consuming, is relatively easy. If a specific tool is already in use (especially QM or one of OLC’s scorecards), it is safe to assume that tool covers the benchmarks. Then, it is only a matter of identifying the context-specific aspects of learning that need to be accounted for. Are there special requirements for courses based on their level or for programmatic assessments, as previously mentioned? If so, make note of those. Should course outcomes align with in-demand workplace skills? Make note of that, too.

Essentially, this process is similar to performing a gap analysis. What is covered by any quality tool currently in use, and what are the missing pieces that need to be addressed? The benchmarks provide a solid foundation for what to consider quality, but the list cannot be considered complete until context-specific needs have been identified and noted. Once they have, this list of context-specific items can be combined with the benchmarks, creating a fuller picture of an institution’s quality standards.
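
For those who like to see the idea concretely, the comparison boils down to a simple set difference. The sketch below is a minimal illustration in Python; every criterion name in it is a hypothetical placeholder, not an item from QM, OLC or any real scorecard.

```python
# Minimal gap-analysis sketch: compare the criteria an existing quality
# tool covers against an institution's full list of quality standards.
# Every criterion name below is a hypothetical placeholder.

tool_covers = {
    "learning outcomes stated and measurable",
    "assessments aligned to outcomes",
    "accessibility guidelines met",
}

institutional_standards = tool_covers | {
    "100-level scaffolding requirements met",
    "key programmatic assessments follow special guidelines",
    "course outcomes mapped to in-demand workplace skills",
}

# The "gap" is simply whatever the institution requires that the tool
# does not already measure.
gaps = sorted(institutional_standards - tool_covers)
print("Context-specific items not covered by the current tool:")
for item in gaps:
    print(f"  - {item}")
```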

The last piece that should be considered essential for any digital learning experience in today’s world is user experience. User experience (commonly referred to as UX or UX/UI) is a mainstay in the world of software development and web design; it represents the way users feel when interacting with a digital product. Vermeeren, Roto, and Väänänen (2016) assert that this concept refers specifically to the “experience(s) derived from encountering systems, where ‘encountering’ can be interpreted as using, interacting with, or being confronted passively with, and where ‘system’ is used to denote products, services, and artifacts… that a person can interact with through a user interface.” [3] This means that UX professionals are concerned with things such as meeting users’ needs readily, making navigation intuitive enough to complete tasks and building trust in the consistency of the experience. The overall idea is that the experience is designed with the user in mind.

The logical leap, then, to why UX is critical for educators who want to provide meaningful, high-quality learning experiences should be clear. The world of digital learning can be daunting enough for students; they should not have to spend mental energy wrestling with a cluttered, confusing, inefficient or unintuitive experience. This is especially true for vulnerable student populations, such as those just returning to college after a long break or new students who may lack confidence. If students are forced to expend cognitive effort just to navigate the learning environment, they may be less engaged in the course and therefore less likely to meet their educational goals. Thus, any list of standards or guiding framework that defines a quality learning experience needs to incorporate at least some UX best practices.

It should be noted that measuring UX is not like ensuring aligned resources, clear learning outcomes or valid and reliable assessments; the tools for doing so may look foreign to educational professionals. However, while UX professionals have a host of metrics for assessing user experience, there are a few easy ways educational professionals can measure their UX-centered guidelines. The most obvious tool is the end-of-course survey that students are often asked to complete. While many of these surveys already incorporate questions about usability, it is worth verifying that they adequately cover the UX standards in question. The answers (with perhaps a little prompting needed to ensure a solid completion rate) can then be analyzed for overarching themes about the student experience.
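
As one illustration of that analysis, open-ended survey comments can be bucketed into rough themes with nothing more than keyword matching. The Python sketch below is a deliberately crude stand-in for real thematic coding; all comments, theme names and keywords are invented for the example.

```python
from collections import Counter

# Hypothetical open-ended survey comments about the course site's usability.
comments = [
    "Navigation was confusing; I couldn't find the syllabus.",
    "Clear layout, easy to find assignments.",
    "Too many clicks to reach the discussion board.",
    "Finding instructions for assignments was hard.",
]

# Crude keyword buckets standing in for genuine thematic coding.
themes = {
    "navigation": ["navigation", "find", "clicks", "locate"],
    "clarity": ["clear", "confusing", "instructions", "layout"],
}

# Count how many comments touch each theme at least once.
counts = Counter()
for comment in comments:
    text = comment.lower()
    for theme, keywords in themes.items():
        if any(word in text for word in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(comments)} comments")
```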

For a more involved approach, a group of student volunteers could be gathered in the same physical location, all with access to the same digital learning experience. There, they would be asked to accomplish simple tasks, such as finding the syllabus, locating instructions for a certain assignment or contacting their faculty member. They would also be asked to speak aloud about their impressions as they work, and these data would be recorded and then analyzed for key themes and takeaways. Such sessions can yield valuable insights, stripping away assumptions about the student experience and pointing to concrete actions that can be taken to improve the learning environment.
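
Even the quantitative side of such a session can be summarized very simply. The sketch below tallies hypothetical task-completion records; the task names mirror the examples above, but every observation is invented for illustration.

```python
# Hypothetical think-aloud session results: one record per student per
# task (task name, completed?, seconds spent).
observations = [
    ("find syllabus", True, 35),
    ("find syllabus", False, 120),
    ("locate assignment instructions", True, 60),
    ("locate assignment instructions", True, 48),
    ("contact faculty member", False, 95),
]

# Aggregate attempts, successes and time per task.
tasks = {}
for task, completed, seconds in observations:
    stats = tasks.setdefault(task, {"attempts": 0, "successes": 0, "seconds": []})
    stats["attempts"] += 1
    stats["successes"] += completed  # True counts as 1, False as 0
    stats["seconds"].append(seconds)

for task, s in tasks.items():
    rate = s["successes"] / s["attempts"]
    avg = sum(s["seconds"]) / len(s["seconds"])
    print(f"{task}: {rate:.0%} completion, avg {avg:.0f}s")
```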

Whatever the exact tools selected to measure quality, it is important for institutions to first settle on their specific definition of quality for digital learning experiences. When that definition is captured as a set of standards or a framework incorporating field benchmarks, institution-specific needs and attention to UX, the result is a comprehensive and flexible guide that can drive all efforts around quality. It can be used to structure training programs, formulate checklists and peer reviews and inform course and program reviews. Better yet, the end result will be a more meaningful and enjoyable learning experience for students, driving their engagement and, hopefully, their success.
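
To make that combination concrete, such a framework can be represented as nothing more complicated than a few named lists that render into a review checklist. The Python sketch below uses illustrative item wording throughout; an institution’s real standards would take its place.

```python
# Sketch of a combined quality framework: field benchmarks,
# institution-specific needs and UX guidelines merged into one
# checklist for course reviews. All item wording is illustrative.

framework = {
    "field benchmarks": [
        "Learning outcomes are clearly stated and measurable.",
        "Assessments and content align to the outcomes.",
        "Materials meet accessibility guidelines.",
    ],
    "institutional context": [
        "Course meets level-specific design requirements.",
        "Key programmatic assessments follow special guidelines.",
    ],
    "user experience": [
        "Students can locate the syllabus within a few clicks.",
        "Navigation is consistent across modules.",
    ],
}

def as_checklist(framework: dict) -> str:
    """Render the framework as a plain-text peer-review checklist."""
    lines = []
    for category, items in framework.items():
        lines.append(category.upper())
        lines.extend(f"  [ ] {item}" for item in items)
    return "\n".join(lines)

print(as_checklist(framework))
```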