Critically appraising security surveys
A thought-provoking paper by Alexis Guillot and Sue Kennedy of Edith Cowan University in Perth, Australia, examined typical information security surveys, concluding that, despite questions over their scientific validity, they are a useful source of management information.
From a scientific perspective, the following criticisms are commonly leveled:
- Survey design/method
- Sample selection, including demographics and self-selection
- Sample size
- Bias, particularly where sponsors have a vested/commercial interest in the topic
The paper's authors acknowledge these as valid concerns. They also note a subtler issue: surveys do not appear to account for the limited knowledge and expertise of individual respondents. The scope of the surveys can be quite broad (e.g. covering IT security, physical security, risk management, incident management and financial management), yet do respondents (normally IT security professionals, I guess) take the trouble to seek out answers from professional colleagues who are more familiar with each of these aspects, or do they simply make up answers on their behalf? This hints at a further concern: the tendency for busy respondents to complete survey questions carelessly or dismissively, probably tending towards risk-averse responses given the mindset typical of security professionals.
The authors acknowledge that survey-derived information - even though it is often biased in various ways and may be of dubious scientific value - may still prove useful for persuading management to support and invest in information security [in the absence of anything better]. I can almost hear Douglas Hubbard (author of How to Measure Anything) applauding from the back of the room. In a sense, the end justifies the means.
If we were to apply the PRAGMATIC method to these surveys, such criticisms would be reflected in depressed ratings for Genuineness and Independence, and perhaps also Accuracy. The Timeliness of surveys is also of concern, since they are usually annual or biennial snapshots, and take some months to produce. On the other hand, their Predictiveness, Relevance and Meaningfulness would be quite high, along with Cost-effectiveness (given that many security survey reports are provided free of charge, at least to those who responded if not to the general public) and Actionability (they are evidently being used as awareness vehicles to prompt management into responding).
The paper did not discuss whether the criticisms can or ought to be addressed, and if so how. Using PRAGMATIC, we see that improving the Accuracy, Genuineness and Independence - by, for example, commissioning a 'proper' scientific study of information security by a professional survey team - would in turn depress the Cost-effectiveness and Timeliness ratings; in other words, the net result may not be a markedly different PRAGMATIC score. That's not to say that improving survey methods is pointless, rather that there are clearly trade-offs to be made.
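To make that trade-off concrete, here is a minimal Python sketch of the kind of comparison involved. It assumes the method's usual approach of rating each of the nine PRAGMATIC criteria from 0 to 100 and taking the simple mean; every individual rating below is invented purely for illustration, not taken from the paper.

```python
# A minimal sketch of PRAGMATIC scoring: each of the nine criteria
# is rated 0-100 and the overall score is the simple mean.
# All ratings below are invented for illustration only.

CRITERIA = [
    "Predictiveness", "Relevance", "Actionability", "Genuineness",
    "Meaningfulness", "Accuracy", "Timeliness", "Independence",
    "Cost-effectiveness",
]

def pragmatic_score(ratings: dict) -> float:
    """Overall PRAGMATIC score: the mean of the nine criterion ratings."""
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# A typical published security survey: engaging and cheap to use,
# but weak on Genuineness, Accuracy, Timeliness and Independence.
published_survey = {
    "Predictiveness": 75, "Relevance": 80, "Actionability": 75,
    "Genuineness": 40, "Meaningfulness": 80, "Accuracy": 45,
    "Timeliness": 40, "Independence": 35, "Cost-effectiveness": 85,
}

# A commissioned 'proper' scientific study: Genuineness, Accuracy and
# Independence improve, but Cost-effectiveness and Timeliness suffer.
scientific_study = {
    "Predictiveness": 75, "Relevance": 80, "Actionability": 75,
    "Genuineness": 70, "Meaningfulness": 80, "Accuracy": 70,
    "Timeliness": 25, "Independence": 70, "Cost-effectiveness": 30,
}

print(f"Published survey:  {pragmatic_score(published_survey):.0f}%")  # ~62%
print(f"Scientific study:  {pragmatic_score(scientific_study):.0f}%")  # ~64%
```

With these made-up numbers, the 'proper' study scores only a couple of points higher overall: the gains on Genuineness, Independence and Accuracy are largely cancelled out by the hits to Cost-effectiveness and Timeliness.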
This gives us a very pragmatic bottom line: published security surveys are, on the whole, good enough to be worth using as security metrics. While many of us take them at face value, they are even more valuable if you have the knowledge and interest to consider and ideally compensate for the underlying issues and biases, thinking about them in PRAGMATIC terms. Whether you share your analysis with management (or, better still, undertake the analysis in conjunction with concerned managers) is a separate matter, but at least you will be well prepared to discuss their concerns if someone challenges the survey findings. That's got to beat having the wind knocked out of your sails by a dismissive comment from an exec, surely?
POSTSCRIPT: for a counter-view, check out this Microsoft academic research paper. The authors examined, for example, the basis for wildly inflated but widely circulated claims about the total value of cybercrime. Extreme extrapolation from relatively limited samples can result in one or two high-side outliers totally dominating and inflating the cost estimates. Definitely food for thought.
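To see how easily that happens, here's a toy Python example (all figures invented, not drawn from the paper): extrapolating the sample mean of self-reported losses up to a large population lets a single extreme response drive almost the entire headline total.

```python
# Toy illustration of outlier-dominated extrapolation (figures invented).
# Nine respondents report modest annual cybercrime losses; one reports
# a huge loss. Scaling the sample mean up to a notional population of
# firms lets that single outlier dominate the headline estimate.

losses = [0, 0, 200, 500, 1_000, 2_000, 3_000, 5_000, 8_000, 5_000_000]
population = 1_000_000  # notional number of firms being extrapolated to

mean_all = sum(losses) / len(losses)
without_outlier = sorted(losses)[:-1]  # drop the single largest response
mean_trimmed = sum(without_outlier) / len(without_outlier)

print(f"Extrapolated total, all responses:    ${mean_all * population:,.0f}")
print(f"Extrapolated total, outlier removed:  ${mean_trimmed * population:,.0f}")
# One respondent in ten accounts for over 99% of the first estimate.
```

The particular numbers don't matter; the mechanism does. With heavy-tailed losses, the sample mean is a fragile basis for population-wide totals.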