Closing the “Soft” Skills Gap: It Begins with Assessment
Year after year, organizations like AAC&U (American Association of Colleges and Universities) and NACE (the National Association of Colleges and Employers) survey employers to learn what skills they seek in college graduates. Year after year, the results are nearly identical. Regardless of a graduate’s major, employers want skills that cut across disciplines. These include written and oral communication, teamwork, creativity, problem solving, critical thinking, and the ability to apply knowledge in real-world settings—the capabilities that are sometimes dismissively called “soft” skills. The term downplays their importance: they are fundamental competencies required for both academic and workplace success, which is why they are increasingly called “power skills.”
As critical as power skills are to employers, however, most college graduates leave school without them. In the most recent AAC&U survey, for example, employers described only 37 percent of recent college graduates as well prepared to work in teams, 30 percent as prepared to exercise ethical decision making, and 23 percent as prepared to apply knowledge to real-world problems.
Results like these raise the question: If power skills are so important in the workplace, why don’t colleges and universities do a better job of developing students who have them?
A cynical answer is that schools don’t know—or worse, don’t care—what employers actually want. There is certainly a vast disconnect between the two perspectives. In the 2017 Inside Higher Ed survey of Chief Academic Officers, virtually all respondents said that their institutions were “very effective” or “somewhat effective” at preparing students for the world of work. Yet that confidence is belied by the views of the employers actually doing the hiring.
Perhaps there is a different reason why so many students leave college without the power skills required for workforce readiness. These competencies are not assessed, despite the fact that virtually every school includes them in its institutional outcomes and aims to develop them through General Education courses.
Why aren’t they assessed? One reason is that no one owns them, precisely because they are institutional rather than departmental. And at most schools the power to assess remains chiefly within departments. But faculty are inherently content- rather than skill-driven. They are, understandably, focused on their discipline, not crosscutting competencies, regardless of whether their discipline is Art History or Business. Their view of assessment tends to be that first you teach the content, then you determine how much of the content students actually learned. This approach necessarily leads to a worldview in which students’ ability to apply the content is always secondary.
This content-centricity leads in turn to a limited view of assessment. Even if you have students work in teams, for example, it is difficult to assess teamwork when you think of assessment as testing of content. Even communication skills, surely a bedrock of liberal arts education, are often seen as “not my job” by professors outside English and rhetoric departments.
Furthermore, many academics consider the needs of industry to be irrelevant, even antithetical, to the aims of higher education. I recently heard an argument between an employer and the provost of a liberal arts college that exemplifies this perceived dichotomy between workforce relevance and higher education. The employer said he was desperate to hire people who could think critically and write effectively. The provost replied that higher education was not vocational school. When I looked up his college on the web, however, I found that its institutional outcomes included the claims that graduates would be able to communicate orally and in writing as well as demonstrate critical thinking. Of course, we don’t know whether these claims are true. Like many institutions, the college does not assess its institutional outcomes.
Why aren’t these outcomes assessed? One problem is that many academics simply do not believe that “soft skills”—however defined—can be assessed.
Fortunately, this belief is—to use a technical term—untrue. ETS, for example, has a cadre of world-class scientists who specialize in researching and developing assessments of what they (perhaps unwisely) call “noncognitive skills.” The military has been training and assessing crosscutting competencies, such as leadership and adaptability, for years; survival literally depends on them. Within academia, people like MacArthur winner Angela Duckworth have shown that one “soft skill”—grit—is key to success in a wide variety of activities, from boot camp to spelling bees. Her Grit Scale has a whopping ten items on it.
The good news is that colleges and universities that truly want to assess power skills do not need a certified MacArthur genius like Duckworth or an educational measurement organization like ETS. There is an established methodology for designing assessments that can tell you whether your graduates have demonstrated the competencies you wanted them to develop, whatever those competencies are. Whether the methodology is called Evidence-Centered Design or its cousin, Backward Design, it begins with the end in mind: the claims you wish to make about what your graduates know and can do (and can do with what they know). But developing appropriate assessments requires being very clear from the outset about what you are looking for—and how you will know it if you see it. Establishing employer advisory councils and really listening to what employers say can be invaluable, even revelatory.
At College for America (CfA) at Southern New Hampshire University (SNHU), where I served as founding Chief Academic Officer, we adapted evidence-centered design to develop CfA’s competency-based, project-centered curriculum and assessment model. We made a key decision from the outset not to distinguish between “soft” and “hard” skills, weaving foundational competencies such as communication, teamwork, and critical thinking into the curriculum alongside more typical “academic” competencies in areas like Art History and Business. We also decided that, instead of taking tests, students would demonstrate competency by applying their content knowledge and skills in real-world settings, for example, by curating and presenting a virtual art exhibit or developing a marketing plan for a non-profit organization.
Projects like these do not ignore academic content; rather, they take it out of the strictly content-focused realm and into the world of application. And asking students to perform the very tasks you want to assess just makes sense.
Performance-based assessment is well established in fields that depend on performance, and for good reason: who would trust a phlebotomist who had never taken blood, only multiple-choice tests? The question is why we are satisfied with producing college graduates who cannot write a memo, deliver a presentation, or think critically about the results of their own web search. Most Gen Ed courses are tagged with institutional outcomes like “critical thinking,” but tagging alone does not produce graduates who can think critically.
So what will?
If we are genuinely willing to meet the challenge of producing college graduates who are prepared for the world of work, we need to take seriously what employers have been telling us for years: that soft skills matter as much as, if not more than, majors.
This may be a bitter pill for some faculty members to swallow. But it should be equally painful that so many college graduates are not yet ready for the world of work. This will change only when we become willing to assess all of our outcomes, soft and hard. The first step is admitting they can be assessed.
This article is part of a monthly series by Kazin exploring different facets of the evolving postsecondary landscape.