SMotW #75: noncompliant controls
Security Metric of the Week #75: number of controls failing to meet defined control criteria/objectives
The premise for this metric is that information security controls are, or rather should be, designed to achieve something fairly specific, i.e. the control objective. Provided the objectives are well worded, it ought to be possible to assess the extent to which they are satisfied, and hence to determine whether the corresponding controls are adequate.
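To make the measurement concrete, here is a minimal sketch in Python. The control names and pass/fail assessments are entirely hypothetical, and a real assessment would grade degrees of compliance rather than a simple boolean, but the counting logic is the essence of the metric:

```python
# Hypothetical illustration: count controls whose latest assessment fails
# to satisfy the stated control objective. Names and results are made up.
assessments = {
    "backup restoration tested": True,
    "access rights reviewed quarterly": False,
    "patches applied within SLA": False,
    "security awareness training completed": True,
}

failing = [control for control, satisfied in assessments.items() if not satisfied]
print(f"{len(failing)} of {len(assessments)} controls fail their objectives")
print("Noncompliant:", ", ".join(failing))
```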
The PRAGMATIC ratings for this metric, according to ACME's managers, are:
P  | R  | A  | G  | M  | A  | T  | I  | C  | Score
88 | 86 | 88 | 65 | 78 | 60 | 26 | 90 | 70 | 72%
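For readers unfamiliar with the scoring scheme, the overall score is simply the arithmetic mean of the nine criterion ratings, as the short sketch below shows; the criterion names spelled out in the code are our expansion of the acronym, so treat them as indicative rather than definitive:

```python
# Minimal sketch: the overall PRAGMATIC score is the arithmetic mean of the
# nine criterion ratings. Criterion names are an assumed expansion of the acronym.
ratings = [
    ("Predictiveness", 88),
    ("Relevance", 86),
    ("Actionability", 88),
    ("Genuineness", 65),
    ("Meaningfulness", 78),
    ("Accuracy", 60),
    ("Timeliness", 26),
    ("Independence", 90),
    ("Cost-effectiveness", 70),
]

score = sum(r for _, r in ratings) / len(ratings)
print(f"PRAGMATIC score: {score:.0f}%")  # 651 / 9 = 72.3 -> 72%
```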
The metric's Timeliness rating depends on the testing-and-reporting period. The gaudy, traffic-light-colored example graph above shows a monthly reporting period, which would be good for Timeliness, but assessing and reassessing this metric every month would be beyond ACME's limited capabilities, at least without seriously degrading the metric's Cost-effectiveness and Independence: management had in mind asking Internal Audit to measure and report the metric annually, a rather different basis.
ACME management perceived this metric as a way to drive a long-term process of matching controls to control objectives, and control objectives to business outcomes; that was the mindset in which they scored it.
You, however, may score this metric quite differently, particularly if your understanding of (a) what the metric is actually measuring and (b) how it is to be measured differs materially from ACME's - and that strikes at the core of an important issue. At face value, it would make sense for organizations to share their best security metrics with each other. The trouble is that, just like 'best practices', 'best security metrics' is a heavily loaded term. What's best for you may not be best for me, and vice versa.
The PRAGMATIC approach does at least give us a mechanism to consider and discuss the pros and cons of our metrics, and even to compare them on a similar basis, but the organizational context in which the metrics are to be used is crucial to the choice.
We discuss at some length in the book the question of what metrics are for - the questions or issues that security metrics are meant to address. One might argue that, at a high level, most organizations have broadly similar goals (such as 'satisfy market demand' and 'make a profit'), but it is not obvious how to derive information security metrics from those. Deeper analysis will tease out information security objectives and suggest relevant metrics but, aside from the facile 'protect information', the details vary between organizations, and hence different metrics are probably needed.
What's more, the security metrics requirements within a single organization are likely to change over time for two distinct reasons:
1) The organizational context is constantly evolving. Management may, for instance, have a pressing need for compliance metrics in the run-up to major compliance deadlines or audits, but at other times compliance may be off the radar, replaced by the drive for continuous improvement or incident management or continuity planning or governance or ... whatever;
2) The information security measurement system, as a whole, evolves, as does the information security management system that it supports. The issues that get translated into metrics when an organization is first getting to grips with its security metrics may be long forgotten a few months or years down the track, particularly if the metrics were successful in helping management resolve those issues.
The upshot is that there is less to be gained from sharing metrics than some would have you believe. Sharing the PRAGMATIC approach, now that's a different matter entirely. 'Give a man a fish and you feed him for a day. Teach a man how to fish and you feed him for a lifetime.'