Saturday 31 August 2013

Application security awareness module




In the dying days of August, just as we were busily finishing off September's awareness module on application security, what should pop onto my screen but a new survey from Ponemon Institute on that very topic.  With some trepidation, I opened the report to see how its findings compared to our own research ... and was relieved to see that we had picked up on all seven of Ponemon's key issues, plus a few more due to our slightly wider scope.  

Does your security awareness and training program cover the information security aspects of application development, acquisition, management and use?  Does it even mention mobile apps, BYOD and cloud computing?  Go ahead, dust it off and take a look.  Does it talk to business and project managers, IT pros and employees in general about relevant security aspects that matter to them, in terms that make sense and resonate?  Does it successfully prompt a productive dialogue between executives and practitioners concerning application security risks and controls?  Does it highlight topical issues, pull up the latest research and thinking, capture employees’ imagination, and most of all motivate them to behave more securely? 

Thursday 29 August 2013

Analog Risk Assessment method, ARA [UPDATED x2]


A multi-part blog piece by Brad Bemis, suggesting the use of a simplified/standardized risk analysis process for the purposes of PCI-DSS, led me to look into a quick, rough-cut information security risk assessment method that Brad recommends, based around just ten yes/no questions. The method's inventor, Ben Sapiro, calls it "Binary Risk Analysis" (BRA) to emphasize those yes/no choices, implying a degree of simplicity and objectivity (although some have questioned that: forcing users to choose in this manner merely makes them collapse or shoe-horn each of their subjective opinions into one of two boxes, which does not make the opinions any less subjective and, if anything, makes them more constrained). The ten questions in BRA lead the user through evaluating frequency and severity (albeit called "threat likelihood" and "threat impact"), key factors that are commonly used to evaluate risks.

It occurred to me that it would be even quicker and easier, and in fact no less accurate, for a competent person to assess and plot the probability and impact elements of relevant incident scenarios directly.   

Since Ben's method is "binary", I've cheekily called mine "Analog Risk Assessment" (ARA).

I'm not saying analog is inherently better or worse than binary, just different. It happens to suit my way of thinking and, judging by the popularity of this blog item, others are similarly inspired. I find the colorful graphic (a heatmap metric) an excellent way to prompt consideration and discussion around risks in workshops etc., to stimulate creative thought and summarize the findings for management decisions, naturally focusing attention on risks in the red zone.

Whereas I've labeled the axes "Likelihood" and "Severity", you may prefer similar terms such as "Probability" and "Consequences" or "Impacts".  

Note that the axes are scaled from 'Low' to 'High', relative terms. The lack of precision is not a problem - in fact, as I mostly use ARA for risk evaluation and security awareness purposes, the simplicity is actually an advantage, one less distraction from the main business of thinking about and discussing the risks. There's nothing to stop you putting values against the scales (specific numbers, ranges or categories) if you are so inclined ... but don't forget that we're talking about risk and inherent uncertainties. Personally, I'm much more comfortable to assert that risk A is 'much more likely' than risk B than I am to estimate actual probabilities. 
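To make the idea concrete, here is a minimal Python sketch of how a (likelihood, severity) pair might be mapped to the heatmap's red, amber or green zone. The thresholds and the simple product heuristic are my assumptions for illustration only; ARA itself deliberately leaves the judgement to the people in the room.

```python
def ara_zone(likelihood, severity):
    """Place a risk in the red/amber/green zone of an ARA-style heatmap.

    Inputs are relative, 0.0 ('Low') to 1.0 ('High').  The zone
    thresholds below are illustrative assumptions for this sketch,
    not part of the method - set your own in the workshop."""
    score = likelihood * severity
    if score >= 0.5 or max(likelihood, severity) > 0.9:
        return "red"      # natural focus of management attention
    if score >= 0.2:
        return "amber"    # worth discussing
    return "green"        # monitor

# Plot-ready summary of some example scenarios (hypothetical values):
risks = {"APT intrusion": (0.4, 0.9), "disgruntled insider": (0.8, 0.8)}
zones = {name: ara_zone(l, s) for name, (l, s) in risks.items()}
```

In a workshop the classification would of course be argued over and adjusted by eye on the graphic; the code merely shows that even a crude rule can summarize the discussion for management.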

A nice feature of ARA is that it can be applied to most if not all kinds of risk. I've developed ARA graphics for most of the 60-odd information risk and security topics in our awareness portfolio, for starters. I see no reason why it couldn't be used for financial risks, political risks, market risks, health-and-safety risks, strategic risks, compliance risks and so on. You could even compare and contrast different types of risk on the same basis on the same graphic, giving a sense of perspective across markedly different areas, with obvious value in governance, enterprise risk management and audit planning.   

In the same spirit as Ben, I'm very happy for anyone to use ARA or develop it further under a Creative Commons license:

Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.


Enjoy!

PS  The heatmap is also known as a Probability Impact Graphic - a PIG.

Wednesday 28 August 2013

SMotW #70: incident detection time lag

Security Metric of the Week #70: delay between incident occurrence and detection


While some information security incidents are immediately obvious, others take a while to come to light ... and an unknown number are never discovered. Compare, say, a major storm that knocks out the computer suite against an APT (Advanced Persistent Threat) incident.  During the initial period between occurrence and detection, and subsequently between detection and resolution, incidents are probably impacting the organization. Measuring how long it takes to identify incidents that have occurred therefore sounds like it might be a useful way of assessing and if necessary improving the efficiency of incident detection to reduce the time lag.  

When ACME's managers scratched beneath the surface of this candidate security metric, thinking more deeply about it as they worked methodically through the PRAGMATIC analysis, it turned out to be not quite so promising as some initially thought:

P | R | A | G | M | A | T | I | C | Score
80 | 70 | 72 | 30 | 75 | 50 | 50 | 65 | 65 | 62%
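For anyone reconstructing the arithmetic behind these tables: the overall Score appears to be simply the arithmetic mean of the nine criterion ratings, rounded to the nearest whole percent, which a short sketch can confirm:

```python
def pragmatic_score(ratings):
    """Overall PRAGMATIC score: the mean of the nine criterion
    ratings (P, R, A, G, M, A, T, I, C), rounded to a whole percent."""
    assert len(ratings) == 9, "one rating per PRAGMATIC criterion"
    return round(sum(ratings) / len(ratings))

# The incident-detection-lag metric as rated by ACME's managers:
score = pragmatic_score([80, 70, 72, 30, 75, 50, 50, 65, 65])  # -> 62
```

The same calculation reproduces the scores of the other example metrics discussed in these posts.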

Management was concerned that, in practice, while the time that an incident is detected can be ascertained from incident reports (assuming that incidents are being reliably and rapidly reported - a significant assumption), it is harder to determine, with any accuracy, exactly when an incident first occurred.  Root cause analysis often discovers a number of control failures that contributed or led to an incident, while in the early stages of many deliberate attacks the perpetrators are gathering information, passively at first then more actively but often covertly. Forensic investigation might establish more objectively the history leading up to the discovery and reporting of incidents, but at what cost?

For the purposes of the metric, one might arbitrarily state that an incident doesn't exist until the moment it creates an adverse impact on the organization, but that still leaves the question of degree.  Polling the corporate website for information to use in a hacking or phishing attack has a tiny - negligible, almost unmeasurable - impact on the organization, so a better definition of the start of an incident might involve 'material impact' above a specified dollar value: fine if the costs are known, otherwise not so good.
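The 'material impact' definition could be sketched in code along the following lines. The field names, the dollar threshold and the register entries are all illustrative assumptions, not a real incident-register schema:

```python
from datetime import datetime
from statistics import mean, median

def detection_lags(incidents, material_threshold=1000):
    """Detection-lag metric under the 'material impact' definition:
    an incident only 'starts' once its estimated impact exceeds a
    dollar threshold.  Occurrence timestamps are often guesses!"""
    lags = []
    for inc in incidents:
        if inc["estimated_impact_usd"] < material_threshold:
            continue  # below materiality - excluded from the metric
        lag = inc["detected"] - inc["occurred"]
        lags.append(lag.total_seconds() / 86400)  # lag in days
    if not lags:
        return None
    return {"mean_days": mean(lags), "median_days": median(lags)}

# Hypothetical incident register entries:
register = [
    {"occurred": datetime(2013, 8, 1), "detected": datetime(2013, 8, 11),
     "estimated_impact_usd": 5000},   # material - counted
    {"occurred": datetime(2013, 8, 5), "detected": datetime(2013, 8, 7),
     "estimated_impact_usd": 200},    # immaterial - excluded
]
lag_stats = detection_lags(register)
```

Reporting the median as well as the mean guards against one long-undetected incident dominating the number.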

The 30% rating for Genuineness highlights management's key concern with this metric.  The more they discussed it, the more issues, pitfalls and concerns came out of the woodwork, leaving an overriding impression that the numbers probably couldn't be trusted.  On the other hand, the 62% score means the metric has some potential: the CISO was asked to suggest other security incident-related metrics, perhaps variants of this one that would address management's reservations.  

[This is one of eight possible security incident metrics discussed in the book, two of which scored quite a bit higher on the PRAGMATIC scale.  There are many many more possibilities in this space: how would you and your colleagues choose between them?]

Sunday 18 August 2013

SMotW #69: incident root causes

Information Security Metric of the Week #69: proportion of information security incidents for which root causes have been diagnosed and addressed


'Learning the lessons' from information security incidents is the important final phase of the incident management lifecycle that also involves preventing, detecting, containing and resolving incidents.  Its importance is obvious when you think about it:
"Progress, far from consisting in change, depends on retentiveness. When change is absolute there remains no being to improve and no direction is set for possible improvement: and when experience is not retained, as among savages, infancy is perpetual. Those who cannot remember the past are condemned to repeat it."
George Santayana

This week's example metric picks up on three crucial aspects:
  1. Root causes must be determined.  Addressing the evident, immediate or proximal causes of incidents is generally a superficial and unsatisfactory approach since problems upstream (e.g. other threats and vulnerabilities) are likely to continue causing trouble if they remain unidentified and unresolved.
  2. Diagnosis of root causes implies sound, competent, thorough analysis in the same way that doctors diagnose illnesses.  Casual examinations are more likely to lead to misdiagnoses, increasing the probability of failing to identify the true causes and perhaps then making things even worse by treating the wrong ailments, or implementing the wrong treatments.
  3. Addressing root causes means treating them appropriately such that, ideally, they will never recur. The fixes need to be both effective and permanent.
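Once each incident record captures those two flags, the metric itself is trivial to compute. A hedged sketch with assumed field names:

```python
def root_cause_coverage(incidents):
    """Proportion (%) of incidents whose root causes have been BOTH
    diagnosed AND addressed - the metric as specified.  The record
    fields here are assumptions for illustration."""
    if not incidents:
        return 0.0
    done = sum(1 for i in incidents if i["diagnosed"] and i["addressed"])
    return 100.0 * done / len(incidents)

# Hypothetical incident records:
incidents = [
    {"id": 1, "diagnosed": True,  "addressed": True},
    {"id": 2, "diagnosed": True,  "addressed": False},  # fix pending
    {"id": 3, "diagnosed": False, "addressed": False},  # no analysis yet
    {"id": 4, "diagnosed": True,  "addressed": True},
]
coverage = root_cause_coverage(incidents)  # -> 50.0
```

The hard part, as the PRAGMATIC analysis below makes clear, is not the arithmetic but the trustworthiness of those two flags.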

Before you read ahead, think for a moment about what we've just said.  Given the positive nature of that analysis, you might be tempted to implement this metric immediately ... but systematically applying the PRAGMATIC criteria reveals a number of concerns:

P | R | A | G | M | A | T | I | C | Score
85 | 85 | 67 | 40 | 77 | 40 | 48 | 16 | 40 | 55%

Aside from the undeniable Costs of analysing in depth and fully addressing root causes, it seems there are issues with the metric's Genuineness, Accuracy and most of all its Independence ratings.  

One of the ACME managers who scored this metric expressed concern that the people most likely to be measuring and reporting the metric (meaning ACME's information security professionals) would have a vested interest in the outcome.  While hopefully such professionals could be trusted not to play political games with the numbers, the fact remained that they are actively involved in determining, diagnosing and addressing root causes, hence there is a distinct possibility that they might be mistaken, especially given the practical difficulties in this domain.  Information security incidents often have multiple causes, including contributory factors (such as the corporate culture) that are both hard to identify and difficult to resolve.  They may well believe that they have eliminated root causes whereas in fact even deeper issues remain unaddressed.

Given the promising introduction above, the metric's disappointing 55% score led ACME management to put this one on the watch list for now, preferring to implement higher-scoring metrics in this domain first.  The CISO was asked to think of ways to address the independence and trust issues that might put this metric back on the agenda for ACME's next security metrics review meeting.

Thursday 8 August 2013

Security lessons from a car park worm

A news item about an NZ car park card payment system being infected with the Conficker worm, and potentially compromising customers' credit cards, is a classic example of the potential fallout from an incident that probably would not have occurred if the company concerned had been proactively tracking the appropriate information security metrics.

According to Stuff.co.nz's article "Car park hack puts credit cards at risk":
"Hundreds of parking machines used by thousands of motorists a week may be infected with a virus allowing hackers to harvest credit card numbers.  A compromised machine in Wilson Parking's Alexandra St car park in Hamilton prompted security experts to warn motorists to check their credit card statements if they've recently used a machine at one of the company's 276 car parks across the country.  But Wilson Parking says there is no problem with the system.  The Hamilton machine was displaying an error message on Monday and Tuesday warning it was infected with the Conficker virus, the same virus which disabled Waikato District Health Board's 3000 computers in 2009."
The article makes a right muddle of virus, worm, hacking and identity theft/credit card fraud, but that aside, it seems clear that Wilson Parking has fallen seriously behind with their patching of public-access systems that are processing credit cards, meaning they handle personal and financial data.  In information security risk terms, this was an eminently predictable incident that should have been avoided. 

The incident might have been avoided, or at least ameliorated, using controls such as:
  • Patch management, keeping all their distributed and office systems reasonably up-to-date with patches and especially the critical security patches;
  • Periodic security reviews, audits and tests of the systems, which should have identified this issue long ago;
  • Antivirus software, again maintained up-to-date with virus signatures and periodically checked/tested; and
  • Strong incident identification, management and response policies and procedures, reinforced by security awareness and training (their management should probably have known about the incident before the journalist called, should already have been well on top of the response, and should have known to refer inquiries to someone trained to deal with the press).
All of those, I would have thought, would be a routine part of their information security arrangements, particularly if they are subject to the requirements of PCI-DSS and other compliance obligations (e.g. privacy laws), let alone good security practices such as ISO27k.

At a higher level, Wilson Parking's management should have known there was trouble brewing through a number of management-level controls and information flows (including suitable metrics, naturally!), hinting at a possible governance failure here.  A simple metric such as 'patch status' or even 'unpatched vulnerabilities' should have indicated in bright red that they were way behind, and the security and risk people should have been clamouring for attention and corrective action as a result.  In theory.
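A 'patch status' metric of the kind suggested is easy to sketch. Everything here - field names, the grace period, the traffic-light thresholds - is a hypothetical illustration, not anyone's actual policy:

```python
from datetime import date

def patch_status(systems, today, critical_grace_days=14):
    """Traffic-light 'patch status' metric: the share of systems with a
    critical patch still unapplied beyond a grace period.  Thresholds
    and record fields are assumptions for this sketch."""
    overdue = sum(
        1 for s in systems
        if not s["patched"]
        and (today - s["oldest_critical_patch"]).days > critical_grace_days
    )
    pct = 100.0 * overdue / len(systems)
    rag = "red" if pct > 10 else "amber" if pct > 2 else "green"
    return pct, rag

# Hypothetical fleet: Conficker's patch (MS08-067) dates from October 2008,
# so a 2009-era critical patch still missing in 2013 glows bright red.
fleet = [
    {"host": "carpark-01", "patched": False,
     "oldest_critical_patch": date(2009, 10, 13)},
    {"host": "office-01", "patched": True,
     "oldest_critical_patch": date(2013, 7, 9)},
]
pct, rag = patch_status(fleet, today=date(2013, 8, 8))  # -> (50.0, "red")
```

Even a crude indicator like this, reported regularly, should have been clamouring for management attention long before a journalist called.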

However, let me be clear: I am only guessing at what really went on in this case, based on the very limited and perhaps misleading and inaccurate information in the news article.  I have no further knowledge of Wilson Parking's security arrangements, metrics, controls or risks, nor should I - it's not my business.  It is conceivable that they have simply been caught out by a one-off fluke and a journalist prone to hyperbole.  As far as I can tell, Conficker is more likely to have been sending spam than stealing credit card numbers.  It is vaguely possible that management deliberately and consciously accepted this risk for genuine business reasons (such as the practical difficulties of managing, updating, testing and maintaining physically distributed and hardened IT systems) ... although that begs further awkward questions about their risk/security management and contingency planning!

The real point is to learn what we can from incidents like this, the better to avoid ending up in the same situation ourselves.

Would YOUR security controls have avoided something along these lines?

Would YOUR security metrics have made the accumulating risk obvious to management?

What do YOU need to do to check and perhaps update YOUR information security arrangements?

Think on.

Wednesday 7 August 2013

ISO/IEC 27004 back on track?

At long last: a glimmer of hope
on the ISO27k metrics front!  

ISO/IEC JTC1/SC27 respondents to a questionnaire circulated by the editors responsible for revising ISO/IEC 27004:2009 acknowledge that the current published standard is wordy, academic, perhaps even unworkable, which is probably why it has achieved such a low uptake, despite the obvious need for measurements as part of an Information Security Management System.  No surprise there. 
However, there are encouraging signs that the editors and project team are prepared to consider a markedly different approach, although there is some concern that the new version ought to be backward compatible with the old (one might ask “Why?” given that it is hardly being used!).  I hope publication of the current version of 27004 has not, in fact, set the field back, which was the fear expressed to SC27 in the formal comments accompanying NZ’s vote against publishing the standard.

Given that the editors feel “ISMS standards are practical standards, not university textbooks”, the rather academic and unhelpful measurement modelling content of the current version will hopefully be dropped like a stone, toned down, or at least relegated to a dark and dusty annex.

Other security measurement standards are being trawled for more pragmatic guidance in relation to ISO27k. NIST SP800-55 Revision 1 certainly merits a closer look, as does ISO/IEC 15939, BSI’s BIP 0074 and perhaps IT-Grundschutz. The idea of ‘categorizing’ metrics seems to have taken hold, although there is no agreement yet on the nature of those categories, while maturity metrics are also of interest (in the sense that an organization’s infosec metrics will change as its approach to and experience of infosec matures). Meanwhile, for those who simply can’t wait for the 27004 update, we recommend the PRAGMATIC approach which, we believe, addresses many of the shortcomings of 27004 - for example, how to select or design worthwhile security metrics, those being workable measures that support both business/strategic and information security management objectives.
I will be doing my level best to help the SC27 project team exploit the PRAGMATIC ideas and other concepts from the book, where appropriate.

Tuesday 6 August 2013

Your authors need you!


Have you read PRAGMATIC Security Metrics yet?  What did you make of it? Does it make good sense?  Is it understandable?  Are the tips and suggestions helpful?  Is it interesting, well written, approachable, stimulating?  Is it a worthwhile addition to your bookshelf, a valuable contribution to the field - something you are already using in earnest, or that you definitely intend to put to good use?  A book you are happy to recommend to your colleagues - your peers and managers - and to the likes of (ISC)2, ISACA and SANS perhaps?  

 - OR - 

Have you skimmed it in the bookshop or website and put it straight back on the (virtual) shelf?  Is it gibberish?  Did you buy it but wish you'd not wasted your money on it?  Is it a pathetic attempt, not a patch on the other excellent security metrics books and standards out there?  Does the casual writing style annoy you, and the footnotes distract you?  Is the PRAGMATIC approach completely misguided and misleading?  

We are very keen to hear back from you either way.  So far, apart from two five-star customer reviews on Amazon and some words of encouragement from Professor Kabay (who kindly wrote the preface for us), we are surprised and somewhat disheartened by the lack of reader feedback, whether positive or negative. Nice comments are welcome for obvious reasons, but even complaints have their uses!  Most helpful of all are your constructive criticisms and improvement suggestions, especially those that make us think and perhaps stimulate us to tackle new angles or new topics.

The thing is, to you this book represents an investment of 50-odd bucks, a few hours' reading and a few more contemplating, interpreting and then applying the PRAGMATIC method.  To us, it represents literally hundreds, maybe thousands of hours of intense focus, an enormous effort over the two years it took to write and publish.  Don't get me wrong, both Krag and I enjoy our writing.  The question is: do you?  Should we continue, or give it up as a bad job?

We are also very keen to add to our stock of 150+ example metrics that have been put through the mill, and we are looking for case study materials, anecdotes and feedback on the method to use in PRAGMATIC training courses.  While it might be interesting to know your organization's industry, size, maturity etc., we don't need to know its name and we are very happy to maintain your privacy if you would rather not be identified.

Please get in touch by email (Gary@isect.com or Kragby@gmail.com) or by commenting here on the blog.  Thank you in advance for your trouble.


PS  If you feel strongly about it, how about writing and publishing your own book review?

Monday 5 August 2013

Basing ISO27k standards on risks

While catching up with a backlog of ISO27k emails from SC27 to update www.ISO27001security.com, our ISO27k information site, I came across a new project that has me very confused. 

Given the new standard's working title "The Use and Application of ISO/IEC 27001 for Sector/Service-Specific Third-Party Accredited Certifications", what do you think ISO/IEC 27009 will cover? 
Few of the originally-envisaged “sector-specific” variants of ISO/IEC 27001 or 27002 have surfaced since those standards, like BS 7799 that spawned them, are deliberately broad in scope and very generally applicable. At present, we have sector-specific ISMS variants for the telecomms, finance and healthcare industries (ISO/IEC 27011, ISO/IEC TR 27015 and ISO 27799 respectively), with something for the energy industry under development (ISO/IEC TR 27019).
Rather than working on a standard for 'use and application' of sector-specific ISMS standards (whatever that means), and given that the core philosophy of ISO27k is to determine and then treat information security risks, I would argue that there is a far more pressing need for the standards to identify information security risks that are typical or common within various sectors and situations. This would give management a strong steer as to the information security risks that probably ought to feature highly in their risk registers, which in turn would drive the implementation of suitable risk treatments, including mitigating controls. Their actual risks will vary, of course, but do you agree there's value in giving them a starting point - guidance on the kinds of infosec-related risks that (as a minimum) ought to be analyzed and treated by users of the ISO27k standards under different circumstances?
Come to think of it, how about SC27 making all the ISO27k standards themselves explicitly risk-based, laying out and discussing the risks that they supposedly address?
  • ISO/IEC 27000 should address the concepts of information security risk assessment/analysis and treatment in general terms, and pick up on other relevant risks such as the risks of being at cross purposes when multiple parties are discussing and agreeing information security requirements and obligations, if their fundamental architectural models, vocabularies and understanding differ markedly;
  • ISO/IEC 27001 should address the risks of not managing or mis-managing information risk and security through a management system;
  • ISO/IEC 27002 should address generic information risks (meaning that section 4 of ISO/IEC 17799 and ISO/IEC 27002:2005 should not merely have been retained, but significantly expanded since risk reduction is the rationale for the control objectives and controls that follow);
  • ISO/IEC 27003 should address the risks associated with ISMS implementation projects;
  • ISO/IEC 27004 should address the risks associated with a lack of relevant, reliable and up-to-date information on the status of the information security risks and controls, plus the associated security management processes, information security resources, information security incidents, business continuity arrangements etc. and the extent to which they satisfy strategic objectives and goals for information security;
  • ISO/IEC 27005 should address the risks associated with incorrectly or incompletely identifying and treating information security risks (e.g. emphasizing the need for incident management, business continuity and contingency arrangements to handle information security risks that were not identified or appreciated as such, were inadequately treated, that changed or materialized unexpectedly, or that exceeded the capabilities of the mitigating controls);
  • ISO/IEC 27006 should address the risks relating to compliance assessments and certification (e.g. the possibility of incompetent auditors, collusion/coercion, concealment or fabrication of material facts, misinterpretation of the evidence, questions around scope and purpose of the ISMS, inherent limitations of compliance auditing real-world organizations against generic, formalized standards etc.);
  • ISO/IEC 27007 should address the audit risks associated with management systems auditing (e.g. incompetent, weak or misguided auditors, audit scope and time constraints, changes between audits, limited or misleading information provided by auditees, risks to the certification scheme as a whole if certified organizations prove insecure etc.);
  • ISO/IEC 27008 should address the myriad risks associated with IT auditing (e.g. incompetent, weak or misguided auditors, audit scope and time constraints, changes between audits, limited or misleading information provided by auditees etc.);
  • ISO/IEC 27009 should address the risks relating to this standard, whatever it turns out to be ...;
  • ISO/IEC 27010 should address the risks arising from inadequate communications between organizations on information security matters;
  • ISO/IEC 27011 should address the information risks that are pertinent to the telecomms industry;
  • ISO/IEC 27013 should address the risks associated with integrating, or not integrating, information security and IT service management;
  • ISO/IEC 27014 should address the risks to valuable information assets arising from weak or missing governance, and perhaps excessively strong governance;
  • ISO/IEC 27015 should address the information risks that are pertinent to the finance industry - I understand they have some!;
  • ISO/IEC TR 27016 should address the risks of failing to account properly for the costs and benefits of information security, leading to under- or over-investment;
  • ISO/IEC 27017 should address those information risks floating around in the clouds;
  • ISO/IEC 27018 should address the privacy risks relating to cloud computing;
  • ISO/IEC TR 27019 should address the information risks relating to process controls, SCADA, real-time safety-critical computing, critical infrastructure and all that;
  • ISO/IEC 27031 should address business continuity risks, or rather the risks associated with failing adequately to identify and treat other information risks;
  • ISO/IEC 27032 should address cybersecurity risks, whatever that means;
  • ISO/IEC 27033 should address IT network security risks;
  • ISO/IEC 27034 should address application security risks;
  • ISO/IEC 27035 should address the risks arising from the mismanagement of information security incidents;
  • ISO/IEC 27036 should address the information risks relating to business relationships;
  • ISO/IEC 27037 should address the information risks relating to forensic evidence and the forensics processes;
  • ISO/IEC 27038 should address the risks associated with the redaction process, such as redaction failures, inference, and accidentally releasing the unredacted content;
  • ISO/IEC 27039 should address the risks relating to network, system and site intrusions and the systems that are meant to detect and prevent them;
  • ISO/IEC 27040 should address the risks relating to information storage;
  • ISO/IEC 27041 should address the risks associated with inadequate assurance in forensic processes;
  • ISO/IEC 27042 should address the risks arising from failing adequately to analyse and interpret forensic evidence;
  • ISO/IEC 27043 should address the risks relating to forensic processes;
  • ISO/IEC 27044 should address the risks relating to SIEM;
  • ISO 27799 should address information, health and safety, and privacy risks that are pertinent to the healthcare industry.

Perhaps I should propose this change of approach to SC27, but I just know I would have a hard time getting it discussed, let alone adopted as a policy across all the ISO27k standards.

Being an international committee of experts that meets just twice a year, with no real opportunities for structured discussions or collaborative working between meetings, and with a conspicuous lack of strong leadership direction, SC27 is immensely conservative and slow to change. Such "radical" suggestions typically generate only brief if heated discussions at the committee meetings before being dismissed or ignored, and pretty soon we're back where we started.

To be fair, many of the ISO27k standards kind of mention risks as well as controls, but the linkages are generally tenuous and in most cases the risks are neither explicitly nor comprehensively described.

The issue came to a head for me last year when we came up with eleven information security risks typically associated with redaction: the ISO27k redaction standard ISO/IEC 27038 (due to be published soon) completely fails to address entire classes of risk that are relevant to redaction, having been narrowly scoped with particular redaction scenarios in mind (i.e. redaction of digital documents). The standard will hopefully be a roaring success when it is published but it's yet another golden opportunity lost as far as I'm concerned. Potential customers for the standard who are not myopically focused solely on digital document redaction are unlikely to find much of value in it. Worse still, customers who use the advice to formulate corporate redaction policies and procedures may naively believe they have covered all their bases simply by adopting the good practices recommended by an international standard, whereas they probably ought to be treating various other redaction-related risks as well.

To press my point home, consider this topical example. The classified information obtained and published by WikiLeaks and Snowden was unredacted and clearly not intended for publication, but such information commonly is redacted and published following freedom-of-information requests. ISO/IEC 27038 will barely touch on the risk that unredacted original documents might be disclosed, whether by accident or on purpose, as an unwelcome side-effect of the redaction process. A single, somewhat vacuous and unhelpful sentence in the final draft is all I can find on this point: "Original digital documents (e.g. the un-redacted document) shall be retained and be accessible only to those authorized." Note that the risk is merely implied, not stated, while the security advice is at such a high level as to be almost pointless.

Ho hum.

Sunday 4 August 2013

SMotW #68: continuity plan maintenance

Security Metric of the Week #68: business continuity plan maintenance status


Business continuity plans that are out of date may be a liability rather than an asset.  Whereas ostensibly it may appear that the organization is ready to cope with business interruption, in fact the plans may be unworkable in practice due to substantial changes in the business and/or the technology and/or the people since they were written or last updated.  

Furthermore, valid questions about the suitability of the continuity plans at the time they were originally prepared or updated are still more important if the organization is failing to maintain the plans. Did the inevitable assumptions and constraints involved in their preparation invalidate them?  Did they pass their tests with flying colors?  Were they ever adequately tested in fact?  Could they be trusted to work properly?  If they are not being properly maintained (which could be taken to imply their being systematically reviewed and improved), the quality of the organization's processes for managing the plans is seriously in doubt.

ACME's senior managers are quite rightly concerned that its business continuity arrangements should be sound and ready to keep things going when it all turns to custard, which raises the question: how should it measure its business continuity plans?

Possible business continuity metrics include:

  • Measuring the breadth of coverage of the plans, particularly of course those business processes (and the associated IT systems and relationships and people and other vital assets or components ...) deemed business-critical, but also miscellaneous supporting processes that could become critical if they failed irrecoverably;
  • Measuring the quality of the plans, perhaps by assessing compliance with ACME's business continuity plan quality standards, or against some external arbiter such as BS 25999, ISO 22301 or the Business Continuity Institute's recommendations;
  • Testing the plans to the appropriate level of assurance (corresponding to the criticality of the associated processes etc.), and measuring the test results (hopefully with something more useful than crude pass/fail!);
  • Counting the number of plans that have not been reviewed or tested when planned;
  • Counting the number of days overdue for the plan reviews - easier if all the plans have a "test before" date;
  • Proportion of plans that BOTH passed their last test AND are not overdue for the next planned test;
  • A maturity metric looking at the overall quality and suitability of ACME's business continuity planning;
  • Measure and rank the residual risks associated with the failure of business processes etc., taking into account their inherent risks and the risk treatments, including business continuity plans;
  • Measuring component parts of the business continuity arrangements e.g. resilience, recovery and contingency aspects;
  • Benchmarking e.g. comparing the business continuity arrangements made by various parts of ACME against each other, and/or against acknowledged good practices, and using the ranking to encourage the weakest to emulate the strongest.
[Some of these metrics have been or will be discussed and scored separately in this blog and the book, but feel free to apply the PRAGMATIC approach to them yourself, in the context of your organization, if they strike you as worth considering.  By all means score other business continuity metrics on the same basis, including any that you favor or are already using.  For bonus marks, tell us what you make of them and share your PRAGMATIC scores with us and our readers.  Seriously, we'd be fascinated.]
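Several of the metrics listed above reduce to simple queries over a register of plans. As a hedged sketch, the 'passed last test AND not overdue' proportion might look like this (plan names and fields are invented for illustration):

```python
from datetime import date

def bcp_plan_status(plans, today):
    """Proportion (%) of continuity plans that BOTH passed their last
    test AND are not past their 'test before' date - one of the
    candidate metrics listed above.  Fields are assumptions."""
    ok = sum(1 for p in plans
             if p["last_test_passed"] and p["test_before"] >= today)
    return 100.0 * ok / len(plans)

# Hypothetical plan register:
plans = [
    {"plan": "data centre", "last_test_passed": True,
     "test_before": date(2014, 2, 1)},    # tested and current
    {"plan": "call centre", "last_test_passed": True,
     "test_before": date(2013, 6, 1)},    # overdue for re-test
    {"plan": "head office", "last_test_passed": False,
     "test_before": date(2014, 1, 1)},    # failed its last test
    {"plan": "warehouse", "last_test_passed": True,
     "test_before": date(2013, 12, 31)},  # tested and current
]
green_pct = bcp_plan_status(plans, today=date(2013, 8, 4))  # -> 50.0
```

The same register would also support the 'days overdue' and coverage counts with equally little effort, which is partly why this family of metrics scores well on Cost-effectiveness.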

Anyway, faced with a proposal to implement a metric that reported the status of the business continuity plans across ACME using a red-amber-green map representation as shown above, ACME management rated the metric as follows:


P | R | A | G | M | A | T | I | C | Score
75 | 75 | 90 | 73 | 84 | 76 | 80 | 77 | 93 | 80%


80% is a very respectable score with no serious concerns, making this a strong candidate for incorporation into ACME's "Executive Management Metrics Dashboard" (well, OK, an intranet page and perhaps a simple display app to help justify those shiny new iPads!).  However, since there are four even higher-scoring business continuity metric examples in chapter 7 of the book, plus a further five metrics scoring over 70%, it's not an automatic decision to adopt this one.