Balancing Risk and Innovation in Higher Education
I recently read the British Government’s white paper outlining the future of higher education in the UK. Jo Johnson, the universities and science minister, really wants to shake things up, calling for more innovation, more diversity, and allowing new and private providers to enter the rather closed education system.
However, that agenda carries risks, so the same paper also introduces a matching risk-management strategy. This is where the classic “risk paradox” emerges, one that many of us have to deal with.
For example, as part of the new higher education strategy, the government wants to make it easier for specialist, small education providers to achieve university status and increase innovation and diversity in the sector. At the same time, public sector risk management will make it clear that the university title can be easily taken away when things go wrong.
As a result of this threat, young universities will be very cautious. This automatically implies that they will avoid taking risks, as this might jeopardize their new status—but that is precisely what will prevent further innovation.
In other words, risk management will undo the original aims of the policy.
The attitude towards risk is one of the big differences between the public and private sector. In the private sector, it is common knowledge that there will be no gain unless risks are taken.
I was once told by the CEO of a very large multinational that if a member of his senior executive team did not have to apologize at least once a year for an initiative going wrong, they would be removed for not trying hard enough.
By contrast, in the public sector, gain is primarily based on the avoidance of failures—so, by and large, careful risk avoidance is the most sensible career strategy.
However, don’t assume that risk taking is by definition better than risk avoidance. If taking risks is the equivalent of being reckless, then that certainly is not wise. Risks need to be considered carefully. What is required is balancing “fast” and “slow” thinking.
Fast thinking refers to gut instinct, in other words, whether it feels right or not. But that is not always a reliable indicator. Slow thinking refers to calculating the risks; for example, doing a proper analysis and then deciding whether a risk is worth taking. It also means considering the size of the risk and its possible impact, as well as the likelihood that things might go wrong.
For example, the impact of a nuclear power failure is massive, so the likelihood of it happening needs to be extremely low. The likelihood of a short circuit, by contrast, is quite high, but its impact is normally minimal.
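The slow-thinking calculus described above amounts to a simple expected-impact calculation. A minimal sketch follows; the risk names and figures are illustrative placeholders, not data from the white paper:

```python
def expected_impact(likelihood, impact):
    """Simple slow-thinking score: expected impact = likelihood x impact."""
    return likelihood * impact

# Illustrative figures only -- not taken from the white paper.
risks = {
    "nuclear power failure": {"likelihood": 1e-6, "impact": 10_000_000},
    "short circuit":         {"likelihood": 0.3,  "impact": 100},
}

for name, r in risks.items():
    score = expected_impact(r["likelihood"], r["impact"])
    print(f"{name}: expected impact = {score:.2f}")
```

The point of the sketch is that a high-impact, low-likelihood risk and a low-impact, high-likelihood risk can end up with comparable expected-impact scores, which is why both dimensions need to be weighed together.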
This type of risk assessment needs to underpin major decisions, yet outstanding entrepreneurs in particular are often willing to override the calculated risks in favor of their gut feeling. After all, dithering does not work in business, and there is not always enough time for careful analysis.
Moreover, no calculation will give you a truly reliable answer, and sometimes your fast thinking will produce a better judgement than your slow thinking. Besides, what happens if your fast and slow thinking point to different conclusions? Maybe you should trust your instinct; this can give you the edge over your competitors.
You also need to consider the bigger picture: What is your overall risk exposure? Risks are not free-standing. You make decisions that carry risks all the time, and the cumulative effect of several low risks could still expose you to major failure.
Many organizations can sustain one shock (the large negative impact of a risk going wrong), but few will survive two or three shocks in a short period. Even a sustained series of minor shocks can spell disaster, so cumulative risk exposure is something to bear in mind.
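The cumulative effect can be made concrete with a back-of-the-envelope calculation. The sketch below assumes the individual risks are independent, and the 5% figure is purely illustrative:

```python
def prob_at_least_one_failure(probabilities):
    """Chance that at least one of several independent risks goes wrong."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)  # chance that this risk, too, passes without incident
    return 1.0 - p_none

# Ten "low" risks of 5% each still add up to a sizeable overall exposure
# (roughly a 40% chance that at least one goes wrong).
print(prob_at_least_one_failure([0.05] * 10))
```

Each individual decision looks safe in isolation; it is only when the exposures are combined that the real scale of the risk becomes visible.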
Taking risks is imperative for achieving innovation, though not all innovation will lead to the desired result. In fact, the experimentation that underpins innovation can easily go both ways.
Successful innovation depends on what happens if an initiative or project fails to deliver, because innovation is a process. Progress needs to be monitored, and if it fails to reach targets within a set period of time, it is important to accept failure.
The most crucial part of taking risks for innovation is to analyze why the target was not reached. What went wrong? Innovation is above all else a learning process, and typically, one learns more from something that did not work than from something that did.
That brings me back to this lengthy white paper I was reading. How can we encourage innovation in a context where reputation needs to be protected and experimentation is seen as rather risky?
First of all, we could lower the impact level. Instead of an all-or-nothing position on the university title, we could develop a clear path whereby a university institute with degree-awarding powers has the option to progress into a university college and finally into a full-blown university.
In order to achieve sustainable innovation, maybe we could make it a condition that for institutions to progress, they have to prove themselves willing and able to create professionally managed innovation processes. This includes learning from the inevitable failures. After all, in order to create a dynamic, successful higher education sector, both risk avoidance and reckless, irresponsible behavior are real threats.
That conclusion applies to the lives and careers of individuals as well—but where human beings are concerned, a further factor is character. Some of us thrive on risks; others see the absence of stress as crucial to their happiness. Society as a whole, every organization and even every team, needs to include both attitudes, with the exception of the extreme gamblers and the extreme dodgers, as both types are simply too risky.
Author Perspective: Administrator