Security Metric of the Week #54: number of important operations with documented and tested procedures

At first glance, this week's example metric doesn't sound very promising.  The wording is ambiguous, its value unclear and its purpose uncertain.

If you were a senior executive sitting on "mahogany row", trying to select some information security metrics to use as part of your governance role, you might well be tempted to reject this metric without further ado, given obvious concerns such as:

  1. It implies a straightforward count, giving no indication of how many "important operations" remain to be procedurally documented and tested. 
  2. What are "important operations" anyway?  Is it referring to business processes, information security processes, IT processes, activities, tasks or something else?  Who decides which ones qualify as "important", and on what basis?
  3. "Documented and tested" sticks two distinct issues into the one metric.  Does it mean that the documentation is tested, or the "important operations" are tested, or both?

On the other hand, imagine there was a corporate policy requiring the organization's business-critical processes to be documented and the documentation quality tested: this could then be a worthwhile compliance metric, useful to drive through the documentation of key business operations.  The graph above shows how the metric might report the number of such processes that are both documented and tested (upper line) and documented but not yet tested (lower line), addressing point 3.  Furthermore, if the policy explicitly referred to "the top fifty business-critical processes", or the top ten or whatever, then concern 1 would also be addressed.
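
To make that concrete, here is a minimal Python sketch of how the two counts behind those lines might be derived.  It assumes a simple inventory of business-critical processes, each flagged for whether its procedure is documented and whether that documentation has been tested; the Process record and the sample entries are purely hypothetical, ours rather than ACME's:

```python
from dataclasses import dataclass

@dataclass
class Process:
    """One entry in a hypothetical inventory of business-critical processes."""
    name: str
    documented: bool  # is there a written procedure?
    doc_tested: bool  # has the documentation itself been tested?

# Illustrative sample data only
inventory = [
    Process("order fulfilment", documented=True, doc_tested=True),
    Process("payroll", documented=True, doc_tested=False),
    Process("backup and restore", documented=False, doc_tested=False),
]

# The two reported counts: documented-and-tested vs documented-but-untested
documented_and_tested = sum(p.documented and p.doc_tested for p in inventory)
documented_untested = sum(p.documented and not p.doc_tested for p in inventory)

print(f"Documented and tested: {documented_and_tested} of {len(inventory)}")
print(f"Documented, not yet tested: {documented_untested} of {len(inventory)}")
```

Measured month by month, those two numbers would trace out the upper and lower lines on the graph.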

It is clear from their analysis that ACME's management took a real shine to this metric, giving it an overall PRAGMATIC score of 84%.  The phrase "important operations" evidently means something specific in ACME's corporate lingo, and since they also rated the metric high on Predictability and Relevance, they must believe that the documentation and testing of those "important operations" is key to ACME's information security.
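
For readers unfamiliar with the method, the arithmetic behind an overall PRAGMATIC score is simply a matter of averaging the metric's ratings against the nine criteria.  The sketch below assumes a plain unweighted mean; the individual ratings shown are invented purely for illustration, chosen only so that they average out to ACME's 84%:

```python
# Hypothetical ratings (0-100%) against the nine PRAGMATIC criteria;
# only the 84% overall figure comes from ACME's analysis.
ratings = {
    "Predictability": 90,
    "Relevance": 90,
    "Actionability": 85,
    "Genuineness": 80,
    "Meaningfulness": 85,
    "Accuracy": 80,
    "Timeliness": 80,
    "Independence": 80,
    "Cost": 86,
}

overall = sum(ratings.values()) / len(ratings)  # plain unweighted mean
print(f"Overall PRAGMATIC score: {overall:.0f}%")  # prints 84%
```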

This is a classic illustration of the drawbacks of those generic lists or databases of 'recommended' or 'best practice' or 'top N' information security metrics.  The organizations and individuals behind them undoubtedly mean well but, as Anton Aylward quite rightly keeps reminding us, context is everything.  In your situation, this metric may be as good as ACME's managers believe, maybe even better.  For many organizations, however, it is mediocre at best and probably outshone by others.  The PRAGMATIC method gives us the means not just to say metric X is better than metric Y, but to explain why, and to develop and discuss our reasoning in some depth.

There may be particular reasons why this metric scores so well right now for ACME.  Perhaps there is a corporate initiative to improve the documentation of ACME's business-critical processes as part of a drive towards ISO/IEC 27001 certification.  A year or so into the future, when most if not all of the processes are documented and tested, the metric will probably have outlived its usefulness.  At their regular metrics review, ACME's managers can simply revisit this year's PRAGMATIC ratings and the associated notes to remind themselves why they favored the metric, then update their thinking and the score accordingly.  Retiring this metric will be no big deal.

Compare the enlightened, rational and consensual PRAGMATIC approach to those dark and dismal days when we used to sit around endlessly complaining about metrics and sniping at each other.  What started out with someone insisting that we needed to "sort out our security metrics" soon turned into a bun-fight, each of us becoming ever more entrenched in defending our pet metrics while dismissively criticizing our colleagues'.  The horribly divisive and unsatisfying process meant that, once the dust had settled, there was very little appetite to review the metrics ever again, except perhaps for those battle-scarred veterans who relished every opportunity to re-play the same old arguments, more stridently each time.  Without regular reviews, the metrics gradually decayed until eventually the whole vicious cycle kicked off again with someone insisting that we "sort out our security metrics" ...

We've been there, done that, soiled the bandages, but does this ring true to you, or are we barking up the wrong tree?  Is it all sweetness and light in your organization, or does your C-suite resemble Flanders whenever metrics are discussed?  Do let us know ...