Hannover/Tripwire metrics part 3

"Number of incidents contained and breaches prevented" was evidently the third most popular metric with respondents to the Hannover Research/Tripwire CISO Pulse/Insight Survey.

The wording of this metric is an issue from the outset.  It implies that breaches are materially different to incidents, and refers to containing one and preventing the other.  Can't incidents be prevented too?  And surely breaches can be contained?  It all comes down to what is meant by these terms, and how they were interpreted by the survey respondents.

For the sake of argument, I'll assume for now that "breaches" are in fact privacy breaches, while "incidents" are any type of information security risk that materializes other than privacy breaches.  I'll leave aside for a moment the question of whether or not relatively minor or trivial events qualify as incidents or breaches - there's more on that point to come. 

Let's leap directly into the PRAGMATIC analysis ...
  • Predictiveness: in my experience, the total number of information security incidents and breaches tends to be fairly stable over time, primarily reflecting the organization's processes for identifying and notifying/reporting them.  Therefore, it is likely that the organization will experience approximately the same grand total number of incidents + breaches in the next reporting period as it did in the previous one.  In that sense, this metric is indeed predictive.  However, it's a different matter if we take into account the severity, impact or gravity of those incidents and breaches: some are far more serious than others, and (thankfully!) the most severe ones don't usually happen very often.  Provided the organization has been tracking this metric for a long time (a year or three at least) and/or is so huge that it has amassed sufficient data for valid statistical analysis, it may discover that serious incidents are also somewhat predictable over the longer term (there's a small forecasting sketch after this list).  This is a moot point, however, since the metric as worded in the survey says nothing about severity, only the count of incidents and breaches.
  • Relevance: incidents and privacy breaches are of course highly relevant to information security, but as with the second metric in this series, this is a rather crude measure of the overall effect of the organization's information security and privacy arrangements.  Given that both the second and third metrics ignore the severity of the incidents and breaches, neither gives a fair picture of the organization's security/privacy status.  Neither is particularly helpful for information security/privacy management, although arguably they have some value in persuading management that Clearly Something Must Be Done ...
  • Actionability: ... but what, exactly, should be done?  Without more specific data concerning the nature and, above all, the root causes of the incidents and breaches (which this simplistic metric lacks), we are none the wiser.  If the number is deemed "too high", should we supply more resources, i.e. increase the budget?  Or sack the CISO/Privacy Officer?  Or up-skill and re-motivate those at the sharp end?  Change our entire approach, maybe?  Or should we simply lower our expectations and live with the consequences?  And what is "too high" anyway?
  • Genuineness: as with the second metric, the person collecting and reporting the data on this metric is in a position to manipulate them if they so choose.  On the other hand, if the metric is carefully specified, and if the source data could be independently verified, it would have more credibility.
  • Meaningfulness: I've already brought up potential confusion over the terms used in the metric.  Even if they were formally clarified and defined, I suspect confusion would persist, leading to discussion, disagreement and genuine differences of opinion once the metric started being used in earnest - a surprising conclusion for what appears, at face value, such a straightforward and obvious measure.  In addition, it is uncertain how meaningful and useful the metric would be in relation to governing, managing, controlling and directing information security and privacy.
  • Accuracy: a centralized incident and breach reporting and management function (typically the IT Help Desk) would be the natural source of the numbers, although it may be appropriate to integrate other sources as well (e.g. privacy breaches disclosed in confidence to the Privacy Officer), and perhaps make estimates concerning others (e.g. minor incidents and breaches that are 'not worth reporting', or that are resolved before they ever get around to being reported) to confirm that the calculated numbers are 'about right'.  Estimation, also known as guesswork, normally implies a lack of accuracy, but paradoxically in this case it is being used as a technique to validate the numbers and so improve the metric's accuracy (there's a toy example of such a sanity check after this list).
  • Timeliness: the metric's simplicity means it ought to be possible to track and report the numbers more or less contemporaneously with the occurrence of the incidents and breaches, or at least with minimal delay, particularly where the raw numbers are available from automated systems such as the Help Desk's ticket management system.  The metric is primarily historical, with some forward-looking predictability.  Overall, timeliness is not an issue with this metric.
  • Independence: in my experience, the IT Help Desk function normally reports to the IT Operations Manager, IT Support Manager or CIO.  Any of those people could meddle with the metric by meddling with the way the IT Help Desk receives, categorizes and reports incidents and breaches.  Just as likely, the IT Help Deskers and the people who should be reporting incidents and breaches could both affect the numbers through their subjective bias, especially if the received wisdom among management is that there are 'too many'.  Independently checking the numbers would be tricky, although audit techniques such as Benford's law and other statistical tests could be applied if necessary (see the Benford's-law sketch after this list).  An even bigger potential issue may arise if the CISO, Information Security Manager and/or Privacy Officer have their bonuses linked to the metric, which is surely a temptation for such a high-level security-effectiveness measure.
  • Cost-effectiveness: being a simple count, the metric is once again quite cheap to collect, provided the raw data are readily available and credible.  As for the metric's effectiveness, it conceivably has some strategic value, but even at that high level the lack of information on the severity and business impacts of incidents and breaches leaves it wanting.
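To illustrate the predictiveness point, here's a minimal sketch in Python of the kind of naive forecast I have in mind: next period's count is simply the average of the last few periods.  The monthly figures are hypothetical, purely for illustration.

```python
# Hypothetical monthly incident + breach counts, illustrating a naive forecast
# on the assumption that the grand total is fairly stable period to period.
monthly_counts = [42, 38, 45, 41, 39, 44, 40, 43, 37, 46, 41, 42]

def naive_forecast(counts, window=3):
    """Predict the next period as the mean of the last `window` periods."""
    recent = counts[-window:]
    return sum(recent) / len(recent)

print(f"Forecast for next month: ~{naive_forecast(monthly_counts):.0f} incidents")
```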
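On the accuracy point, the validation-by-estimation idea amounts to something like the following toy check, again with made-up numbers: if the reported total falls outside an independently estimated range, dig deeper.

```python
# Toy sanity check: compare the reported total against a rough,
# independently derived estimate (all figures hypothetical).
reported_total = 128                        # e.g. from the Help Desk ticket system
estimated_low, estimated_high = 100, 160    # rough independent estimate

if estimated_low <= reported_total <= estimated_high:
    print("Reported total looks about right")
else:
    print("Reported total falls outside the estimated range - investigate")
```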
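And on the independence point, here's a rough sketch of a Benford's-law first-digit check, one of the audit techniques mentioned above.  The figures are hypothetical, and bear in mind that Benford's law only really suits data spanning several orders of magnitude.

```python
import math
from collections import Counter

def benford_check(figures):
    """Compare observed leading-digit frequencies against Benford's law."""
    digits = [int(str(abs(n))[0]) for n in figures if n != 0]
    total = len(digits)
    observed = Counter(digits)
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)   # Benford's expected proportion
        actual = observed.get(d, 0) / total
        print(f"digit {d}: expected {expected:.1%}, observed {actual:.1%}")

benford_check([123, 18, 205, 97, 1450, 33, 2100, 11, 164, 87])  # hypothetical data
```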
By my calculation, taking all that into account, this metric rates 51%, putting it a long way behind my top three metrics.  That's not to say there aren't situations in which it might score better, or indeed worse, than that, but there are definitely other metrics worth considering.  The PRAGMATIC method is a rational basis on which to compare them and select the few that work best for you.
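For anyone unfamiliar with the method, the overall PRAGMATIC score is, roughly speaking, just the mean of the nine criterion ratings.  The figures below are hypothetical, chosen purely to demonstrate the arithmetic - they are not my actual ratings for this metric.

```python
# Hypothetical criterion ratings (0-100%), for illustration only.
ratings = {
    "Predictiveness": 70, "Relevance": 50, "Actionability": 30,
    "Genuineness": 45, "Meaningfulness": 40, "Accuracy": 55,
    "Timeliness": 80, "Independence": 35, "Cost-effectiveness": 54,
}
score = sum(ratings.values()) / len(ratings)   # simple mean of the nine ratings
print(f"PRAGMATIC score: {score:.0f}%")        # -> 51%
```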

More to follow: if you missed them, see the introduction and parts one, two, four and five of this series.