The Totally Un-Sexy Side of Innovation: Building an Infrastructure for Big and Better Data
Our mission at Southern New Hampshire University is to transform students’ lives. Our leadership team, faculty and staff all live and breathe this mission by relentlessly challenging the status quo and building a culture of resilience and adaptability to change. These conditions alone, however laudable they may be, are not enough to enable innovation.
A lot of what we find ourselves doing at the Sandbox, the R&D concept development lab at SNHU, is helping to cultivate the conditions for creativity. Much of that, however, entails ensuring the foundation and infrastructure exist before we can start dreaming up new solutions and pathways or evaluating new-fangled technology solutions on the market. Transformation and innovation occur by creating the right environment and infrastructure to foster great ideas from good leaders.
Almost daily, our teams encounter a dizzying array of new edtech products. Many of them upend conventional pedagogical practices and cut across product verticals, each with a tangle of applications and potential use cases. We also face the enduring challenge of embracing innovation as a way to sustain our existing business models while looking ahead to more disruptive models for the future. These realities require new ways of thinking about how discrete products fit not only within our institutional strategy, which includes a continued focus on access, equity and student success, but also within our current delivery models.
Not surprisingly, Big Data is on our minds a lot these days. The technologies behind buzzwords like predictive analytics, data visualization and adaptive learning are driving significant academic and business model innovation in the marketplace today. Big Data brings transparency and makes things like academic program prioritization, activity-based costing (ABC), Integrated Planning and Advising for Student Success (IPASS) and curricular redesign all seem within reach.
Let’s face it: Big Data is sexy. Rich data on the back end of online learning experiences can alter fundamental aspects of the teaching and learning process. I’ve always loved the way that Frederick M. Hess, Bror Saxberg and Taryn Hochleitner beautifully describe the process of “‘[w]atching’ (with data).” With better analytics, we can “provide intense instruction, real-time customized assessment, and intensive, personalized practice.”[1] It is as though, armed with data, we can provide each student with a special tutor. What used to be reserved only for the wealthy because it was too expensive to provide at scale is now within the realm of possibility for all.
We must continue to aspire to those heights, but, more often than not, there is a lot of un-sexy work that goes into laying the foundation for innovation. Big Data also exposes redundancies and inefficiencies within an institution. It enables us to think holistically about our strategy, forcing us to do the hard and messy work of building strategies for the future. In other words, Big Data is both our friend and foe.
In order to leverage Big Data, we need to have all of our systems speaking to one another. In technical terms, we need more than just data; we need interoperability. And most of higher education is notoriously deprived of it: thousands of integrations, point-to-point solutions, legacy systems, silos and proprietary data warehouses. It’s all kind of a mess. For one, it’s no easy lift to extract or mine data from all of these different pools and have them serve as a single source of truth. But even if we could, we’re also not always nimble or fast-moving enough in higher education to execute on what these data tell us, at least not without this critical foundation in place.
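To make that concrete, here is a minimal sketch, in Python, of what stitching two campus systems into a single view actually involves at the record level. Everything in it is hypothetical: the system names, fields and identifiers are invented for illustration, and a real campus would need dozens of adapters like these, each encoding hard-won knowledge about one system’s quirks.

```python
# A minimal, hypothetical sketch of building "a single source of truth"
# from two campus systems. All system names, fields and identifiers are
# invented for illustration.

def normalize_sis_record(row: dict) -> dict:
    """Map a hypothetical student-information-system row to a shared schema."""
    return {
        "student_id": row["EMPLID"],            # SIS-specific identifier
        "email": row["CampusEmail"].lower(),
        "enrolled": row["EnrlStatus"] == "E",
    }

def normalize_lms_record(payload: dict) -> dict:
    """Map a hypothetical LMS API payload to the same shared schema."""
    return {
        "student_id": payload["sis_user_id"],   # must agree with the SIS key
        "email": payload["login_id"].lower(),
        "last_active": payload.get("last_activity_at"),
    }

def single_view(sis_rows, lms_payloads) -> dict:
    """Join both systems on the shared key to produce one record per student."""
    students = {r["student_id"]: r for r in map(normalize_sis_record, sis_rows)}
    for rec in map(normalize_lms_record, lms_payloads):
        students.setdefault(rec["student_id"], {}).update(rec)
    return students
```

Multiply that by every legacy system, point-to-point integration and proprietary warehouse on campus, and the scale of the un-sexy work comes into focus.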
Core to that foundation is what William Massy describes as “equilibrium” in his recent book, Reengineering the University. Equilibrium is a strategy of transparency and control over what goes into and what comes out of the institution. With equilibrium, an institution is neither overextended nor underinvested. It is aware of its every move. It has “no hidden liabilities” and can account for everything it does in every corner of the institution, in every circumstance and condition. Equilibrium serves as the basis for planning efforts for the future, but these models are inordinately difficult, time-consuming and costly to build. There is a reason why only a handful of universities can pull off something like activity-based costing. According to our research, UC Riverside, Johnson County Community College and Brenau University are among the first to have engaged in the not-so-sexy work of getting their data houses in order.
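For readers who haven’t lived through an ABC effort, the mechanics are simple to state: allocate shared costs to the activities that consume them, then divide by each activity’s cost driver to get a unit cost. Here is a toy version with invented numbers; the hard part is that a real implementation needs defensible versions of these mappings for every unit and cost pool across the institution.

```python
# Activity-based costing in miniature, with invented numbers. A real
# implementation repeats this for every unit, activity and cost pool.

dept_cost = 1_200_000.0  # annual cost of one hypothetical department

# Step 1: estimate how the department's effort splits across activities.
effort_share = {"instruction": 0.60, "advising": 0.25, "administration": 0.15}

# Step 2: choose a cost driver and its annual volume for each activity.
driver_volume = {
    "instruction": 9_000,   # credit hours taught
    "advising": 3_000,      # advising appointments held
    "administration": 1,    # treated as a single overhead pool
}

# Step 3: unit cost = allocated cost / driver volume.
unit_cost = {
    activity: dept_cost * share / driver_volume[activity]
    for activity, share in effort_share.items()
}
print(unit_cost)  # e.g. instruction works out to $80 per credit hour
```

The arithmetic takes three lines. Agreeing, campus-wide, on the effort shares and cost drivers that feed it is what takes years.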
The same goes for predictive analytics. Across the U.S., universities are doing more with predictive analytics tools. We see the tremendous gains in retention at places like Georgia State University, which became the predictive analytics powerhouse that it is today largely because it first focused on its broader data strategy.
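Under the hood, a predictive analytics tool is not magic: at its core sits a model trained on historical records to flag students at risk. The sketch below, with invented features, invented data and an off-the-shelf scikit-learn classifier, shows the basic shape; production systems like Georgia State’s draw on far more variables and years of institutional history.

```python
# The core of a hypothetical retention "early alert" model. Features,
# data and the alert threshold are all invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per student: [credits attempted, GPA, LMS logins/week]
X = np.array([[15, 3.2, 5], [12, 2.1, 1], [9, 1.8, 0], [15, 3.8, 7],
              [12, 2.9, 3], [6, 1.5, 0], [15, 3.5, 6], [9, 2.4, 2]])
y = np.array([1, 0, 0, 1, 1, 0, 1, 0])  # 1 = retained into the next term

model = LogisticRegression().fit(X, y)

# Score a current student and flag them for an adviser if risk is high.
p_retained = model.predict_proba([[12, 2.3, 1]])[0, 1]
if 1 - p_retained > 0.5:
    print(f"Flag for advising: attrition risk {1 - p_retained:.0%}")
```

Notice what the sketch presumes: clean, joined data from the registrar, the LMS and elsewhere. That is exactly the foundation most of us lack.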
So why aren’t we all doing this and more? It comes back to all of that un-sexy work. We need the right foundation with the right kinds of data talking to one another.
We recently learned about the work that DXtera Institute is doing to create a software solution based on an open-source digital exchange. Make no mistake: What DXtera does is important but hardly sexy. It facilitates secure, extensible, real-time information exchange among all of an institution’s data systems: academic, student support services, legacy systems and so on. Their team has worked with the likes of MIT, the Tennessee Board of Regents, Georgia State University, the University of Hawaii system and other partners, all on large-scale data strategy and infrastructure improvement efforts. A key differentiating factor is that this is an open-source enterprise, not a proprietary one. They’re a non-profit supported by Strada Education Network (formerly USA Funds) to help double college completion by enabling better access to data. It all sounds profoundly simple, enabling institutions to mine their own data, but the challenge is real.
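What an exchange buys you is easiest to see in the arithmetic (and, to be clear, what follows is a generic sketch of a hub-and-adapter pattern, not DXtera’s actual interfaces): connecting N systems point-to-point can require on the order of N(N-1)/2 bespoke integrations, while a shared exchange schema needs exactly one adapter per system.

```python
# A toy digital exchange: a generic hub-and-adapter pattern, illustrative
# only and not DXtera's actual software. Each campus system registers one
# adapter to a shared schema instead of bespoke connectors to every peer.
from typing import Callable, Dict

class Exchange:
    def __init__(self) -> None:
        self._adapters: Dict[str, Callable[[str], dict]] = {}

    def register(self, system: str, fetch: Callable[[str], dict]) -> None:
        # One adapter per system: N adapters rather than N*(N-1)/2 connectors.
        self._adapters[system] = fetch

    def student_view(self, student_id: str) -> dict:
        """Assemble one real-time view by querying every registered system."""
        view: dict = {"student_id": student_id}
        for system, fetch in self._adapters.items():
            view[system] = fetch(student_id)  # adapter returns shared-schema data
        return view

# Hypothetical usage: two systems, each contributing its slice of the record.
hub = Exchange()
hub.register("sis", lambda sid: {"enrolled": True, "gpa": 3.1})
hub.register("advising", lambda sid: {"last_appointment": "2017-03-02"})
print(hub.student_view("S001"))
```

Go from five systems to ten and the point-to-point wiring grows from 10 possible connectors to 45; the exchange grows from five adapters to ten.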
Our collective inability, as institutions of higher education, to leverage data is one of the reasons why so many of us suffer from business model inertia—why traditional institutions remain unchanged in the face of innovation. It’s not because we are somehow unaware of disruptive entrants. It is because we are fundamentally constrained in responding to systemic shifts in higher education.
Weighed down by constraints, most colleges and universities are structurally incapable of facilitating innovations that deviate from the way we currently deliver education. Time and precedent have normalized processes that were jury-rigged in the past as workable stopgaps. The result is what we call embedded inefficiencies.
Ask any IT team on a college campus, and you’ll hear them list the ways they have tried to cobble together legacy systems through APIs and other workarounds, arriving at results that are workable but mediocre. It is remarkable how these add-ons and patches have accumulated to inhibit our response to the market and to the needs of our students.
Big Data is becoming increasingly important to the management and viability of post-secondary institutions. At the same time, it’s foundational to driving transformative change, even at a place like SNHU. Innovation and transformation can occur only if we resolve this simple but monumental pain point: How do we stitch together all of our data to render better business intelligence? When we can create the right environment for all of our systems to talk to one another, then we can start “bringing sexy back.”
– – – –
References
[1] Bror Saxberg, Frederick M. Hess and Taryn Hochleitner, “E-Rate, education technology, and school reform,” American Enterprise Institute, October 22, 2013. Accessed at http://www.aei.org/publication/e-rate-education-technology-and-school-reform/print/
Author Perspective: Administrator