Risk quantification - other factors (UPDATED)
- Quality of information and analysis: risks that are commonplace and conventional are generally better understood than those which are novel or rare (such as AI risks, right now);
- Volatility: if the threats, vulnerabilities and business are reasonably stable, the risks are more easily determined/predicted than if they are volatile, changing unpredictably;
- Complexity: ugly, horrendously complicated risks are more likely to involve unrecognised interactions;
- Discoverability: some incidents are inherently hard to spot - hackers and fraudsters, for example, usually fly under the radar for a considerable period;
- Speed of onset: some incidents are immediately obvious and undeniable, whereas others may fester for some while (weeks, months or years) before being recognised and addressed;
- Urgency of response: this largely relates to impact but is complicated by other dynamic factors such as precisely when incidents occur, other business priorities at those points, available resources and costs of remediation;
- Compounding effects: coincident or successive incidents or activities such as major business or IT changes stretch resources, increase stress levels and complicate matters;
- Incident management impacts: incidents that interrupt communications, for instance, make it harder for those involved to gather and share information about what is going on;
- Resolvability: different kinds of incident vary in terms of how easy and straightforward they are to resolve.
Aside from thinking and writing about these and further factors in my book, I'm not sure whether or how to deal with them explicitly in the analysis and quantification of risks - not in an objective, reproducible or scientific manner, at least. Encouraging those involved to 'take things into account', subjectively, is a pragmatic approach, perhaps using this little set of moderating factors as a checklist when reviewing risks.
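To make the checklist idea concrete, here is a minimal sketch of how a reviewer's subjective ratings against the factors above might be rolled up into a rough 'how uncertain is this risk' signal. The 1-to-5 scoring scheme, the default of 3 for unrated factors, and the simple averaging are all invented for illustration - this is not a standard or calibrated method.

```python
# Hypothetical sketch: the moderating factors as a review checklist.
# Factor names come from the list above; the scoring scheme is invented.

MODERATING_FACTORS = [
    "quality_of_information",
    "volatility",
    "complexity",
    "discoverability",
    "speed_of_onset",
    "urgency_of_response",
    "compounding_effects",
    "incident_management_impacts",
    "resolvability",
]

def uncertainty_score(ratings: dict) -> float:
    """Average the reviewer's subjective 1 (low concern) to 5 (high concern)
    ratings across all factors, rescaled to 0.0-1.0. Unrated factors
    default to a middling 3."""
    scores = [ratings.get(factor, 3) for factor in MODERATING_FACTORS]
    return (sum(scores) / len(scores) - 1) / 4

# A novel, volatile, complex risk (AI, say) reviewed against the checklist:
ai_risk = {"quality_of_information": 5, "volatility": 5, "complexity": 4}
print(round(uncertainty_score(ai_risk), 2))
```

The point of such a score would not be precision but prompting the review conversation: a high value flags a risk whose plotted position deserves far less trust than its neighbours'.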
I am toying with the idea of convincing risk analysts to 'explore the empty areas of their Probability-Impact Graphics', actively hunting for other incidents or situations as well as reconsidering the risks already shown in light of the additional factors. A visual approach might help, fuzzing the edges of risks with significant inherent uncertainties or using suitable icons on the graphic.
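One simple way to realise the 'fuzzed edges' visually is to widen each plotted point into a box (or halo) whose size grows with the risk's inherent uncertainty. The sketch below assumes probability and impact on 0-1 axes and a made-up maximum spread; both are arbitrary choices for illustration.

```python
# Illustrative only: 'fuzzing the edges' of a risk plotted on a
# Probability-Impact Graphic (PIG). The point (p, i) sits on 0-1 axes;
# the uncertainty value (0.0-1.0) widens it into a box.

def fuzz_box(p, i, uncertainty, max_spread=0.2):
    """Return (p_lo, p_hi, i_lo, i_hi), clipped to the 0-1 PIG axes."""
    d = uncertainty * max_spread
    clip = lambda x: min(1.0, max(0.0, x))
    return (clip(p - d), clip(p + d), clip(i - d), clip(i + d))

# A high-probability, high-impact risk with substantial uncertainty
# spills off the top corner of the graphic - itself a useful visual cue.
print(fuzz_box(0.9, 0.8, uncertainty=0.64))
```

A plotting library could render the box as a translucent rectangle or a blurred marker; the essential idea is just that well-understood risks stay as crisp dots while poorly-understood ones smear out.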
The point is not just to firm up the analysis but to extend it, persuading those involved to invest some brain-time in thinking more deeply about what's going on. True insight doesn't come cheap!
PS Visually 'fuzzing the edges' of risks plotted on a PIG makes me wonder whether and how such additional risk factors might be incorporated into 'objective' (mathematical) RA methods, if not already included. I guess it is feasible to define distributions rather than discrete values for each of the risk factors in FAIR, for instance - fine in theory but maybe unworkable/too costly in practice, except perhaps for the most heavily-weighted factors, the ones most likely to influence risk calculations. It's much less work and much easier to point out the inherent uncertainties in all information risk analysis methods, when presenting/reporting, interpreting and using the information to make risk-based decisions, formulate strategies, identify problem areas etc. I've said before that risk management is risky business. Managers ought to know.
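As a minimal sketch of the distributions-rather-than-discrete-values idea, the Monte Carlo simulation below samples a loss event frequency and a per-event loss magnitude from distributions (loosely in the spirit of FAIR, though this is not the FAIR method itself) and reports the spread of annualised loss rather than a single figure. The lognormal choices and all parameters are invented purely for illustration, not calibrated estimates.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_annual_loss(trials=10_000):
    """Sample annualised loss = frequency x magnitude, both uncertain.
    Distributions and parameters are made up for this sketch."""
    losses = []
    for _ in range(trials):
        # Loss event frequency (events/year): median ~1, uncertain.
        lef = random.lognormvariate(0.0, 0.5)
        # Loss magnitude per event: heavy-tailed, median ~ e^10 (about 22,000).
        lm = random.lognormvariate(10.0, 1.0)
        losses.append(lef * lm)
    return losses

losses = sorted(simulate_annual_loss())
p50 = losses[len(losses) // 2]
p90 = losses[int(len(losses) * 0.9)]
print(f"median annual loss ~ {p50:,.0f}; 90th percentile ~ {p90:,.0f}")
```

Even this toy version makes the practical point above tangible: every input now needs a defensible distribution, not just a number, which is exactly why it may be workable only for the most heavily-weighted factors. But the output - a range with percentiles rather than one deceptively precise value - carries the inherent uncertainty through to the decision-makers.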