Risk is an integral part of our world and our lives, and so too of the workplace.
Risk management, an industry in its own right, is defined as “the identification, assessment, and prioritisation of risks (defined in ISO 31000 as the effect of uncertainty on objectives, whether positive or negative) followed by coordinated and economical application of resources to minimise, monitor, and control the probability and/or impact of unfortunate events or to maximize the realisation of opportunities”. Source: Wikipedia
Corporately, risk is often seen in the use of ‘risk logs’ – a project management tool used to list, assess and (even) manage risks – supporting larger streams of work. Then there are the more implicit approaches, in activities like SWOT analysis – strengths, weaknesses, opportunities and threats – where the threats are the potential risks. The opportunities, however, present a more positive take on risk: the risk of not doing positive things.
There are some really helpful pointers to getting to grips with risk from David Spiegelhalter, Professor of the Public Understanding of Risk at Cambridge University. He is the author of some really useful materials, not just studying public understanding of risk but enabling that understanding too. Having seen his talk (at the University’s Department of Continuing Education) on communicating risk and uncertainty – explaining “how people's perceptions of risk and uncertainty are influenced by the words, numbers and pictures used to communicate them” – there are some helpful observations and pointers for thinking practically about risk. Here are my key observations from that talk….
We have two approaches to risk. First there’s the feeling approach: the gut reaction, fuelled by intuition, emotion and culture. Second there’s the rational approach: weighing things up in a structured way, with careful analysis and measurement. We seem to be most comfortable with the more intuitive approach, especially for the more personal matters. When it comes to decisions taken on our behalf – for example by local and national government – we of course expect to see the more rational approach. Therein lies the world of business cases and cost-benefit analysis.
Framing this issue…
The purpose or context in which risks are presented is key. It might simply be to attract attention to the issue, perhaps to inform or educate, or even to change behaviour.
Numbers...
Risk is often reported as tables of numbers, but increasingly as graphics too. When risk is expressed as “1 in N”, it’s interesting that the smallest number actually represents the biggest risk: 1 in 100 (higher risk) vs 1 in 10,000 (lower risk). It’s well established that we react in some predisposed ways to how risks or probabilities are presented. For example, ratio bias is where we focus on the numerator more than the denominator. Take two black bags: bag ‘A’ with one white and two red balls, and bag ‘B’ with two white and four red balls. If the challenge is to pick out a white ball, we’ll opt for bag B – the chances are exactly the same, but the fact that we know there are more white balls in B is too much to resist. While that’s subtle, how about denominator neglect, where we ignore the denominator entirely? With those balls in the bag, we behave as if the red balls don’t exist at all.
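To keep the rational view honest, it helps to simply compute the probabilities. A minimal sketch in Python – purely illustrative, using the bag contents from the example above:

```python
from fractions import Fraction

def chance_of_white(white: int, red: int) -> Fraction:
    """Probability of drawing a white ball from a bag of white and red balls."""
    return Fraction(white, white + red)

bag_a = chance_of_white(white=1, red=2)   # bag A: 1 white, 2 red
bag_b = chance_of_white(white=2, red=4)   # bag B: 2 white, 4 red

print(f"Bag A: {bag_a} ({float(bag_a):.0%})")   # Bag A: 1/3 (33%)
print(f"Bag B: {bag_b} ({float(bag_b):.0%})")   # Bag B: 1/3 (33%)
# The probabilities are identical, yet ratio bias nudges us towards bag B
# simply because it holds more white balls.
```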
Evidence and consensus…
Certainly for some of the big areas of risk, with lots of focus from lots of experts, we can look to a “meta” assessment of the risk to get an overall impression of how well it is understood. The two dimensions here are (a) the robustness of the evidence and (b) the degree of consensus. Robustness covers the type, amount, quality and consistency of the evidence; consensus covers the extent of agreement amongst experts. Here’s how the quality of evidence might be assessed, based on confidence in estimates of medical treatment effects (a small sketch combining the two dimensions follows the list below):
- High quality – further research is very unlikely to change our confidence in the estimate of the effect.
- Moderate quality – further research is likely to have an important impact on our confidence in the estimate and may change the estimate.
- Low quality – further research is very likely to have an important impact on our confidence in the estimate and is likely to change the estimate.
- Very low quality – any estimate of the effect is uncertain.
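As a purely illustrative sketch in Python – the labels and scoring here are my own assumptions, not anything from the talk or the scale above – the two dimensions might be combined like this:

```python
# Illustrative only: combine robustness of evidence and degree of expert
# consensus into a rough overall label. The labels and scoring are assumptions
# made for this sketch, not a published scale.
def overall_confidence(evidence: str, consensus: str) -> str:
    """Both arguments take 'low', 'moderate' or 'high'."""
    score = {"low": 0, "moderate": 1, "high": 2}
    combined = score[evidence] + score[consensus]
    if combined >= 3:
        return "strong consensus on robust evidence"
    if combined == 2:
        return "mixed - treat the estimate with caution"
    return "weak evidence and/or little agreement"

print(overall_confidence("high", "high"))      # strong consensus on robust evidence
print(overall_confidence("high", "low"))       # mixed - treat the estimate with caution
print(overall_confidence("low", "moderate"))   # weak evidence and/or little agreement
```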
Of course the same evidence can be interpreted in different ways. A favourite example of mine is the measurement of “staff turnover”, which was for several years a national performance indicator for all police forces in England. A low turnover could mean either “staff are very happy here and are not drawn elsewhere” or “staff are not good enough to gain employment elsewhere”. It’s probably the former that holds sway by default.
So arguably what we seek is a strong expert consensus based on robust evidence, rather than disagreement based on weak evidence. That said, the world of consensus has had plenty of wake-up calls, or “paradigm shifts”, which can turn current thinking on its head…. the world is now officially round.
Tools...
There are interesting ways to present risk that ease understanding. One is the idea of the “micromort” – a one-in-a-million chance of dying – used to compare probabilities more easily. Examples of experiencing one micromort are travelling 6,000 miles by train, driving 230 miles in a car, or riding six miles on a motorbike.
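The arithmetic behind the micromort is simple enough to sketch. A minimal example in Python, treating the miles-per-micromort figures above as rough averages:

```python
# Rough miles travelled per micromort (a one-in-a-million chance of dying),
# using the figures quoted above as approximate averages.
MILES_PER_MICROMORT = {"train": 6000, "car": 230, "motorbike": 6}

def micromorts(mode: str, miles: float) -> float:
    """Approximate micromorts accumulated travelling `miles` by `mode`."""
    return miles / MILES_PER_MICROMORT[mode]

for mode in MILES_PER_MICROMORT:
    print(f"A 100-mile journey by {mode}: {micromorts(mode, 100):.2f} micromorts")
# A 100-mile journey by train: 0.02 micromorts
# A 100-mile journey by car: 0.43 micromorts
# A 100-mile journey by motorbike: 16.67 micromorts
```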
Then there are the visual tools. For example, a risk ladder shows levels of risk on a log scale to make comparing bigger and smaller risks easier. Icon arrays – grids of icons – help us see a risk relative to a larger group. Blocked arrays (where the affected icons are clustered together) are good for comparing risks, while scattered arrays (where the affected icons are dotted at random) are good for getting a feel for personal risk.
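Icon arrays are easy enough to mock up. A minimal text-based sketch in Python – the 3-in-100 figure is just a made-up example – showing both a blocked and a scattered layout:

```python
import random

def icon_array(affected: int, total: int = 100, per_row: int = 10,
               scattered: bool = False) -> str:
    """Render a text icon array: '#' marks those affected, '.' those unaffected."""
    icons = ["#"] * affected + ["."] * (total - affected)
    if scattered:
        random.shuffle(icons)   # scattered layout, for a feel of personal risk
    rows = [" ".join(icons[i:i + per_row]) for i in range(0, total, per_row)]
    return "\n".join(rows)

print(icon_array(3))                    # blocked: the 3 affected icons sit together
print()
print(icon_array(3, scattered=True))    # scattered: the same 3 spread at random
```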