Limitations, Visualization and What Employers Are Looking For: How the Job Market Is Fueling Decision-Making

By understanding what the job market is looking for, colleges and universities are better prepared when developing the programming that employers will find appealing down the line.

College programs and the labor market are in a symbiotic relationship, feeding off one another.

Grads enter the workforce, and employers need the skills they learned in their programs. One cannot exist without the other. To make sure both parties benefit, institutions and employers must work together so that students get the full value of the program and the best possible job out of school.

EvoLLLution (Evo): What are some limitations of the more traditional approaches to mining information from the labor market and job posting data?

Michael Bettersworth (MB): Our college system started in 1965 with a focus on job placement, and it’s still our primary mission today. So, understanding the labor market remains essential. The best labor market data do not come from a report or browser—they come from highly engaged expert faculty and industry partners who know their field of study first-hand and remain closely connected to their craft. These are the labor market data we rely on most.

Of course, we also use a variety of secondary data to help manage our portfolio of 115 technical training programs. These include data generated by state workforce agencies and federal government data collection programs from the Commerce Department, Census Bureau, and the Bureau of Labor Statistics. These programs rely on generic taxonomies such as O*NET-SOC occupational codes and North American Industry Classification System (NAICS) industry codes.

These data points are designed for myriad purposes but generally to track macro employment trends across the country. Tracking long-term employment structure and trends applicable to all states requires considerable data set stability and uniformity. So, there’s frustration at times with the limitations of SOC occupational codes. They’re not designed for detailed job analysis or competency development but rather intended to convey general occupational supply/demand trends and related labor market dynamics.

On the educator side, we’re trying to stay as up to date as possible with changes in the labor market to align our programs. And many of these changes have less to do with occupational titles than with mastery of new skills and changing job roles. The cycle time we require to keep up with these changes must be much faster than the decennial SOC classification system updates allow. If you want higher resolution, more granular skill requirements and regionality in skill hiring preferences, you have to look beyond SOC code-based approaches. You have to go much deeper.

This is why SkillsEngine was created. The SOC code-based macroeconomic data point us in the general direction of labor market change, but SkillsEngine is designed to understand the work activities performed and the enabling knowledge and employability skills associated with more granular job roles. These are the critical elements to drive market-aligned curricula. As important, students can learn the critical skills for their chosen career path that have been validated by the employer community.

Evo: I’m curious about how you see that data being used on both the supply and demand sides. How does this kind of data support student decision-making?

MB: Traditional labor market data provides a high-level macro understanding of occupational supply/demand trends and job characteristics such as wages. However, there’s a lack of resolution in some of these areas. For example, if a student wants to be a helicopter pilot, there is little occupational data to inform that choice. Helicopter pilots are included within the commercial pilot SOC code. Similarly, if you’re considering starting a helicopter training program, the lack of an SOC code to convey regional job demand can limit training opportunities, especially for emerging or significantly evolving job roles. There’s not enough resolution for supply/demand modeling to make programmatic decisions. To facilitate informed market-based choice both for students and training providers, we need to look to alternate sources of data. SkillsEngine is actively engaged in understanding the knowledge and skill requirements of these more granular job roles.

The biggest push to assess current job market demand is to data-mine online job postings. This so-called real-time job data scraped from job postings on aggregator job boards and company websites was pioneered by The Conference Board some 20 years ago to supplant economic indicators that were previously based on newspaper help-wanted ads. Wanted Analytics was one of the first companies to create internet spidering algorithms to source job ads quickly moving online. As these techniques have become more mainstream, many familiar vendors now collect, format and market online job postings as a panacea to the lengthy time lags and the lack of granularity in many government data programs.

The good thing about job posting data is its timeliness. You can see today which companies are actively hiring, the job titles they post and in which geographies they're looking for workers. But there are some significant limitations to these data. A job advertisement differs from a job description, competency model or detailed job analysis. Job posting data analysis starts to hit some serious walls when we get into understanding skills.

Data mining a help wanted ad offers limited insight because only certain aspects of a job role are included in the advertisement. Even the job descriptions are often incomplete and designed at a high level. Many postings offer more marketing for the company than a description of job hiring requirements. These documents, when public, typically only include critical work functions and not a full detailed competency analysis that you would get through a more robust job analysis exercise. So, while adding another arrow to the market analysis quiver, where do we go beyond job postings to get a more in-depth understanding of what employers want?

SkillsEngine conducted tests using data based on job postings through a Jobs for the Future project many years ago. We put these tools in the hands of instructional leaders, instructional designers and career services staff around the state and quickly witnessed limitations. In terms of skills data, these job-ad-scraping processes output lists of keywords—again, providing incomplete and often out-of-context data. They also tend to resort to the familiar SOC codes which, again, may lack sufficient resolution. These tools are helpful when you want to identify who’s hiring today—perhaps to find a new industry partner—but they come up short in understanding the specific knowledge and skills a business is looking for. And knowledge and skills are what drive competency development and training curricula.

Again, the best way to understand job market supply and demand is fostering rich faculty and industry engagement. And the best real-time labor market data doesn’t come from a browser but from having conversations and developing relationships with employers. TSTC starts this process by engaging industry advisory groups. We have around 1,300 industry advisory members that constitute a primary source of input—not only about skills but hiring demand and feedback on the quality of hired graduates. That feedback loop between the hiring manager or supervisor and the instructional program is invaluable.

But we’re going a step further. SkillsEngine is having excellent success through our Calibrate tool by electronically marshaling employer input to capture direct validation of critical hiring requirements. We’re able to reach a greater number of businesses and other subject matter experts to authenticate the skills that matter most to them.

Evo: What role does the college play in supporting talent supply chains?

MB: Stoking the talent supply chain requires multiple commitments. TSTC has an economic development mission to offer occupationally oriented programs that emphasize new and evolving technical vocational areas. That unique, statewide focus puts TSTC on the front lines of preparing the technical workforce of tomorrow. 

Our responsibility to the talent supply chain means concentrating on both appropriate technical programs to address business talent needs and the competencies that align with hiring requirements. As important, it includes an unwavering commitment to student success.

One of the college's primary goals is to place Texans in good-paying jobs. More precisely, at SkillsEngine, one of our mantras is to teach the skills that matter, which means being intentional about the technical competencies we teach and recognizing the importance of employability skills in developing a qualified student.

Intentionality means more than just hoping our students get jobs; it means purposefully preparing them for in-demand jobs. It requires a deep cultural commitment to student outcomes and an openness to real accountability. How committed is the institution to its students' employment outcomes?

We have created a value funding model in which the college’s operational funding is 100% based on student post-exit earnings. How accountable are TSTC programs to their students’ employment outcomes? The college routinely reviews programs based on a series of internal and labor market metrics and sunsets programs that are not meeting the intended outcome standards. Is a program with low earnings and low placement rates but positive cash flow a program worth keeping? We would say no. In fact, we have closed 14 programs that lacked adequate starting wages and placement rates.

What’s most important for the institution is a real commitment to employability outcomes, which involves a lot of accountability, self-monitoring, tough decision-making and evolving programs in parallel to market changes.

The second component is providing our departments and faculty with sufficient resources. If we’re going to hold programs accountable for placement and earnings, we must give them the resources, autonomy and authority to make program changes and remain responsive. That includes investing in professional development, labs and equipment, instructional technologies, data and reporting systems, and hiring additional faculty with the right skills. At TSTC, we go even further by investing in these capabilities both for ourselves and for the broader community and regional economy.

Evo: You mentioned that this kind of data is more available to folks outside TSTC. What’s the reasoning behind making the data more accessible?

MB: At one point, TSTC employed a team of instructional designers who focused on doing DACUMs, which involves a very laborious process of getting subject matter experts in a room for multiple days and going back and forth to create this document. Similar exercises and outputs at many institutions are considered somewhat proprietary and rarely put out on people’s websites, which is understandable considering the time and investment that went into them. However, that manual process is very expensive. Once the document is created, it begins to fall out of date. And then how do you objectively connect the outputs of something like a DACUM to the curriculum? Not at just the course level but in the learning outcomes, course outcomes and competency levels?

Doing this activity in an insular fashion is illogical and inefficient, especially when there’s a massive amount of duplicated efforts. For example, how many accounting programs are there at different colleges and universities? How many advisory boards are informing those curricula? Doing that in a closed system behind closed doors is really inefficient and doesn’t make economic sense.

Instead, we can share our collective understanding of changes in the labor market for any sector and distribute that information to our fellow educators for dramatic gains in efficiency, cost reduction and responsiveness to labor market demand. So, how do you do that? How do you collaborate on job analysis across these various institutions? You must build an infrastructure. This is what we are building now and will launch later this year: a new public skills infrastructure for broad use along with purpose-built applications to solve particular user problems.

Additionally, several movements around the country are focusing on data standards and skills and how to make these repositories exist. We’re a member of many of those groups while also creating our own content and infrastructure to help support that effort. By creating and curating a rich skills database to underpin our understanding of the labor market as a broader community, we can share insights collectively and reduce the burden on any single institution.

Evo: Is there anything else you’d like to add about your teams’ efforts to create more accessible and better labor market data to inform decision-making across the college system and for learners?

MB: We're committed to the sustainability of these various efforts and promoting a larger programmatic and cultural change in the way we address the talent pipeline. Sustaining these various efforts is critical. Having various software, technical processes and systems subsidized by grants with an expiration date is risky if you want to create an enduring, sustainable infrastructure. You need to balance the up-front investment against the cost of maintaining and populating the system over time. And the most challenging component is keeping it updated. Our focus has been on the advantages of linked data that can create a community of curators for ongoing updates. It's a necessary component of ensuring long-term, systemic, enduring change. And if it's really successful, it becomes a public utility to some degree.

I'm very encouraged by the variety of efforts to ensure and promote market alignment of education and training programs. A lot of complementary work is happening today. The defining characteristic of an enduring system and process is perceived value to the intended user community. The tools and data sets must solve problems that people care enough about to contribute to ongoing data curation, so the data don't go stale. A second component is an enduring financial model. As I mentioned, a sustainability model is critical if we want to effect systemic change. Third, do these systems have a governance structure focused on the larger economic impact and goals of shared prosperity? Economic mobility is at the core of these public-based initiatives' missions and values. So, are you solving a problem people care about? Do you have a sustainable financial model? Do you have an aligned stakeholder community? And do you have a governance structure that ensures alignment with the public good? These are all questions we think about in moving our work forward.

Author Perspective: