CIS cyber security metrics

The latest and greatest sixth version of the CIS (Center for Internet Security) Critical Security Controls (now dubbed the "CIS Controls For Effective Cyber Defense") is supported by a companion guide to the associated metrics. Something shiny in the introduction to the guide caught my beady eye:
"There are lots of things that can be measured, but it is very unclear which of them are in fact worth measuring (in terms of adding value to security decisions)."
Sounds familiar. In PRAGMATIC Security Metrics, we said:
"There is no shortage of ‘things that could be measured’ in relation to information security. Anything that changes can be measured both in terms of the amount and the rate of observable change, and possibly in other dimensions as well. Given the dynamic and complex nature of information security, there are a great number of things we could measure. It’s really not hard to come up with a long list of potential security metrics, all candidates for our information security measurement system. For our purposes, the trick will be to find those things that both (a) relate in a reasonably consistent manner to information security, preferably in a forward-looking manner, and (b) are relevant to someone in the course of doing their job, in other words they have purpose and utility for security management."
From there on, though, we part company. 

The CIS approach is highly prescriptive. The guide explicitly identifies and details very specific metrics for each of the recommended controls. For example, the metric associated with control 4.5:
"Deploy automated patch management tools and software update tools for operating system and software/applications on all systems for which such tools are available and safe. Patches should be applied to all systems, even systems that are properly air gapped."
asks 
"How long does it take, on average, to completely deploy application software updates to a business system (by business unit)?". 
To answer that particular question, three distinct values are suggested, viz 1,440, 10,080 or 43,200 minutes (that's a day, a week or a month in old money). It is implied that those are categories or rough guides for the response, so why on Earth they felt the need to specify such precise numbers is beyond me. Curiously, precisely the same three values are used in most if not all of the other suggested metrics relating to time periods ... which might be convenient but disregards the differing priorities and timescales likely in practice. I'd have thought some controls are rather more urgent than others. For instance, the time needed by the organization to restore normal IT services following a disaster is markedly different to that required by an intrusion detection system to respond to an identified intrusion attempt. These are not even in the same ballpark.
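To see how odd the precision is, here is a minimal sketch in Python of scoring a measured deployment time against those values. The nearest-value rule is entirely my own assumption; the guide never actually says how a measurement maps onto the three numbers:

```python
# Illustrative sketch only: the CIS guide offers three point values
# (1,440 / 10,080 / 43,200 minutes, i.e. a day, a week, a month) but
# never says how a measured time should map onto them, so the
# nearest-value rule below is my assumption.
CIS_TIME_THRESHOLDS_MIN = (1_440, 10_080, 43_200)

def nearest_threshold(measured_minutes: float) -> int:
    """Snap a measured deployment time to the nearest CIS point value."""
    return min(CIS_TIME_THRESHOLDS_MIN, key=lambda t: abs(t - measured_minutes))

print(nearest_threshold(2_000))    # -> 1440: closer to a day than a week
print(nearest_threshold(100_000))  # -> 43200: beyond a month, silently capped
```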

The same concern applies to the CIS' proportional metrics. The suggested three choices in all cases are "Less than 1%", "1% to 4%" or "5% to 10%".

Note that for both types of metric, answers above the maximum value (a month, or 10%) are simply unspecified.

Note also that the response categories cover different ranges for the two types of metric. The timescale values are roughly exponential (each step up is a factor of four to seven), whereas the proportions are more nearly linear ... but just as arbitrary.

Oh, and the timescales are point values, whereas the proportions are ranges.
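Coding the two schemes side by side makes the inconsistency plain. Again, the band edges and the half-open interpretation below are my assumptions, since the guide defines neither, nor what lies above 10%:

```python
# The proportional metrics are ranges rather than points, and they stop
# at 10%. The band edges (and half-open intervals) are my assumption;
# the guide never defines them, nor what lies above 10%.
CIS_PROPORTION_BANDS = [
    (0.00, 0.01, "Less than 1%"),
    (0.01, 0.05, "1% to 4%"),
    (0.05, 0.10, "5% to 10%"),
]

def proportion_band(fraction: float) -> str | None:
    for low, high, label in CIS_PROPORTION_BANDS:
        if low <= fraction < high:
            return label
    return None  # at or above 10%: the guide simply doesn't say

print(proportion_band(0.03))  # -> '1% to 4%'
print(proportion_band(0.25))  # -> None
```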

The only rationale presented in the paper for these values is this vague statement:
"For each Measure, we present Metrics, which consist of three “Risk Threshold” values. These values represent an opinion from experienced practitioners, and are not derived from any specific empirical data set or analytic model. These are offered as a way for adopters of the Controls to think about and choose Metrics in the context of their own security improvement programs."
Aside from the curious distinction between measures and metrics, what are we to understand by 'risk thresholds'? Who knows? The authors hint that readers should adapt or customize the values (if not the metrics themselves), but I rather suspect that those who most value the CIS advice will simply accept the suggestions as-is.

Later in the metrics paper, the style of metrics changes to this:
"CSC 1: Inventory of Authorized and Unauthorized Devices - Effectiveness Test. To evaluate the implementation of CSC 1 on a periodic basis, the evaluation team will connect hardened test systems to at least 10 locations on the network, including a selection of subnets associated with demilitarized zones (DMZs), workstations, and servers. Two of the systems must be included in the asset inventory database, while the other systems are not. The evaluation team must then verify that the systems generate an alert or email notice regarding the newly connected systems within 24 hours of the test machines being connected to the network. The evaluation team must verify that the system provides details of the location of all the test machines connected to the network. For those test machines included in the asset inventory, the team must also verify that the system provides information about the asset owner."
As I said, this is a highly prescriptive approach, very specific and detailed about the measurement method. It's the kind of thing that might be appropriate in formalized situations where some authority directs a set of subservient organizations, business units, sites or whatever to generate data in a standardized manner, allowing direct, valid comparisons between them all (assuming they follow the instructions precisely, which in turn implies the need for compliance activities).
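Indeed, the procedure is specific enough to script almost directly. Here's a rough harness; the Alert class, the function names and the data plumbing are all stand-ins of my own invention, in place of whatever tooling a real evaluation team would use:

```python
# Rough harness for the CSC 1 effectiveness test described above.
# All names here are hypothetical stand-ins, not any real CIS tooling.
from dataclasses import dataclass
from datetime import timedelta

ALERT_DEADLINE = timedelta(hours=24)  # CIS: alert or email within 24 hours

@dataclass
class Alert:
    system_id: str
    location: str | None  # where the device appeared on the network
    owner: str | None     # asset owner, if known from the inventory

def evaluate_csc1(alerts: dict[str, Alert], test_ids: list[str],
                  inventoried_ids: set[str]) -> list[tuple[str, bool]]:
    """Check each test system raised an alert giving its location, plus
    the asset owner for systems in the inventory database. Assumes the
    caller collected only alerts raised within ALERT_DEADLINE."""
    # CIS: at least 10 locations, two of the systems in the inventory
    assert len(test_ids) >= 10
    assert len(inventoried_ids & set(test_ids)) == 2

    results = []
    for sys_id in test_ids:
        alert = alerts.get(sys_id)
        ok = (alert is not None
              and alert.location is not None
              and (sys_id not in inventoried_ids or alert.owner is not None))
        results.append((sys_id, ok))
    return results
```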

Anyway, despite my criticisms, I recommend checking out the CIS Controls for Effective Cyber Defense. Well worth contemplating.