Things an ISO27k SoA doesn't say

According to ISO/IEC 27001:2013, organisations are supposed to consider all the information security controls outlined in Annex A, confirming that they have done so by preparing a Statement of Applicability "that contains the necessary controls ... and justification for inclusions, [states] whether they are implemented or not, and [gives] the justification for exclusions of controls from Annex A".

That ineptly-worded requirement, in a poorly-constructed and in fact self-contradictory clause of the standard, is generally interpreted in practice as an SoA table with a row for every Annex A control* and columns recording the applicability, justification and implementation status of each control*.

Three mutually exclusive states are generally used.  Each control* is one of:
  1. Applicable and implemented;

  2. Applicable but not implemented; or

  3. Not applicable.
... implying a simple decision tree with just two binary questions (sketched in code below the list):
  • First, is the control* applicable (yes or no)?

  • If the control* is applicable, is it implemented (yes or no)?
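Reduced to code, that conventional interpretation is trivially small. Here's a minimal sketch (the function name and its boolean inputs are my own illustration, not anything mandated by the standard):

```python
def soa_status(applicable: bool, implemented: bool) -> str:
    """Collapse the conventional SoA decision tree into its three states."""
    if not applicable:
        return "Not applicable"                      # question 1: applicability
    if implemented:
        return "Applicable and implemented"          # question 2: implementation
    return "Applicable but not implemented"
```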
Hmmmm, that's all very well in theory, but here are some of the options I've heard as an auditor, or thought (if not always expressed) as an auditee:
  • Applicable under some circumstances – the control applies in specific situations only and is not generally applicable

  • Partially applicable – the control is not enough to mitigate the risk and needs to be modified and/or complemented by other controls; as described, it’s not really what we want to do

  • Applicable and partially implemented – we did this at least once

  • Applicable and allegedly implemented – someone claims to have done this at least once

  • Applicable and apparently implemented - someone genuinely but naively and perhaps inadvisedly believes they have truly nailed this one

  • Implemented but inapplicable – to pacify our auditors, we “just did it” ... even though, deep down, we regret doing it at all and suspect we should really have done something else anyway

  • Implemented for some obscure reason - someone evidently decided this would be a great idea and did it, but we’ve forgotten why or who … and now we’re afraid to turn it off

  • It’s not that simple – I challenge your right to demand such a crude response to such a complex issue

  • Go away - what gives you the nerve to meddle in my stuff?  Anyway, this is secret and you are not cleared

  • You wouldn’t understand – even if I say it in words of one syl-a-bub

  • Applicable, necessary and valid but … pull up a comfy chair, we have a litany of excuses to justify not making what a reasonable person would accept is actual progress on this

  • Implementation is intended – yes, we probably ought to do this, in a perfect world this would be jolly useful

  • Implementation is planned – someone has vaguely proposed some sort of timescale for doing this, although they are dreamin’

  • Implementation is planned and approved – management does not entirely disagree with the planned work, in principle at least, when last asked

  • Implementation is planned and approved and the resources are allocated – management is, allegedly, prioritising the work over all the other stuff that needs to be done

  • Implementation is planned and approved, appropriate resources are allocated, and they are actually available and ready to do this – now we’re getting somewhere, but it’s not actually “done”

  • Applicable and purchased – the technology is sitting in a cupboard somewhere, gathering dust (and no, it isn’t a dust filter)

  • Applicable but implementation is “on hold” for some reason – oh oh, we have a problem Houston

  • Applicable but implementation status is unknown – we neglected to track this, and we’ve forgotten who’s doing what, when and how

  • Applicable but implementation status is untrustworthy – we’re not entirely sure what’s going on, and anyway we simply don’t trust the reports we have received

  • Inapplicable if the control is interpreted literally as it is worded – our lawyers would love to argue about the punctuation

  • Applicable, implemented badly – we made a complete hash of this, so although the control is allegedly in place, it isn’t actually working

  • Applicable, implemented but unused – the control is there in theory but nobody uses it in practice, in fact they work around it

  • Applicable, implemented but disabled – someone quietly turned it off

  • Applicable, implemented but broken – something else we did has resulted in a “reduction of efficiency” of this control

  • Applicable, implemented but unreliable – it seems to work some of the time, we think

  • Applicable, implemented but unsupported – it used to work before stuff happened, and now nobody wants to touch it so it is slowly decaying

  • Applicable, implemented but out of date – the 2600 Hz tone filters on our acoustic couplers are still working fine though

  • Applicable, implemented and status unknown – who knows?  The pretty lights are blinking, the whirring noises suggest stuff is happening but we aren’t entirely convinced the risk is actually being mitigated effectively

  • Applicable, implemented and failed – we are still getting incidents despite the alleged presence of this control

  • Applicable, implemented and dubious – we don’t think we’ve had any incidents after implementing this control, but you can never be completely sure, can you? 

  • Applicable, implemented and pointless – the threats have changed and/or zero day vulnerabilities have come to light and are being actively exploited

  • Applicable, implemented and yet ineffective – the risk is inadequately mitigated but, hey, we have shown due diligence by complying with an International Standard

  • Applicable, implemented but too expensive to continue – as soon as the auditors leave, we will have to bin this one

  • Applicable, implemented, functional, effective, wonderful – but we are mistaken: something is wrong somewhere although we don’t know it

  • Applicable, implemented, functional, effective, perfect – but we plain lied to get our certificate

  • No idea – we don’t understand the risk and/or the control, or we simply haven’t considered this, yet (the default position)
This week I am busy compiling a suite of generic ISMS materials to help clients jump-start their ISO27k implementations, including an SoA spreadsheet. I fear the drop-down selector list for the cells in my SoA template may be a little tedious to use but, hey, it might make our clients smile wryly.
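If it helps, here's roughly how such a drop-down can be wired into a spreadsheet. This is a minimal sketch using Python and openpyxl; the file name, column layout and the drastically shortened option list are purely illustrative, not the actual template:

```python
from openpyxl import Workbook
from openpyxl.worksheet.datavalidation import DataValidation

# A handful of the statuses above - a real list would be far longer (and more tedious).
options = [
    "Applicable and implemented",
    "Applicable but not implemented",
    "Applicable and allegedly implemented",
    "Applicable but implementation status unknown",
    "Not applicable",
    "No idea (the default position)",
]

wb = Workbook()
ws = wb.active
ws.title = "SoA"
ws.append(["Annex A control", "Status", "Justification"])

# Constrain the Status column to the drop-down list.
dv = DataValidation(type="list", formula1='"' + ",".join(options) + '"', allow_blank=True)
ws.add_data_validation(dv)
dv.add("B2:B120")  # roughly one row per Annex A control

wb.save("soa_template.xlsx")
```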

* Joking aside, "control" is the word used in the standard for the sake of simplicity. In fact, most of the "controls" in Annex A are far from simple. Take this classic example, A.12.3.1 (Information backup): "Backup copies of information, software and system images shall be taken and tested regularly in accordance with an agreed backup policy."

There are numerous editorial, technical, philosophical and practical issues with that, too many to go into here, but for now I'll just point out that there are several aspects to the control as stated. Not only must backup copies be taken (regularly?) but they must also be tested regularly according to a policy which has been agreed. I count not one but four actual or atomic 'controls' there (backups taken, backups tested, policy agreed, policy complied with), with several further related controls either entirely unstated or merely alluded to, plus lots of unanswered questions, e.g.:
  • Backups need to be stored securely and safely, under the right environmental conditions, with access controls to prevent inappropriate access, disclosure, damage or substitution;

  • Testing backups is, again, more involved than it might appear. Someone needs to decide exactly what testing is necessary, perform it competently and diligently, and of course act appropriately on the findings (not just record a test failure and continue as normal!);

  • 'Regularly' is undefined in Annex A: how often is "regular"? Should there be a documented schedule, with evidence of backup tests being completed on time? Is hourly too often? Is once a decade sufficient? Is annual testing OK even though we know the technology, procedures, people and business are changing all the time? 

  • What should a "backup policy" say, exactly? Should it only cover backup testing? How should such a policy be formulated and "agreed", and by whom?  Is "agreed" the same as "approved" or "authorised" or "mandated"? 

  • If this control is deemed applicable, how should it be implemented in practice, and (how) should that be verified?

  • Even if this control is applicable and is implemented literally as described, is it sufficient to mitigate one or more unstated information risks completely?
Some of these issues are addressed in ISO/IEC 27002:2013 section 12.3.1 or elsewhere, and in myriad other standards, advisories etc.  My point is that there's a lot more complexity here than implied by that binary decision tree and the three states on the SoA.
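If an SoA were ever to reflect that complexity, one (purely hypothetical) approach would be to record each atomic control separately rather than one row per Annex A entry. A rough sketch, with made-up class names and statuses, just to illustrate the decomposition:

```python
from dataclasses import dataclass, field

@dataclass
class AtomicControl:
    description: str
    status: str = "No idea"  # the default position

@dataclass
class AnnexAControl:
    reference: str
    title: str
    atoms: list[AtomicControl] = field(default_factory=list)

    def summary(self) -> str:
        # A composite status is only meaningful if every atom agrees.
        statuses = {a.status for a in self.atoms}
        return statuses.pop() if len(statuses) == 1 else "Mixed - see atomic controls"

backup = AnnexAControl(
    reference="A.12.3.1",
    title="Information backup",
    atoms=[
        AtomicControl("Backup copies are taken", "Applicable and implemented"),
        AtomicControl("Backups are tested regularly", "Applicable but not implemented"),
        AtomicControl("A backup policy is agreed", "Applicable and allegedly implemented"),
        AtomicControl("The backup policy is complied with", "Applicable but implementation status unknown"),
    ],
)
print(backup.summary())  # -> "Mixed - see atomic controls"
```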

Oh and that is just one of the 100+ "controls" listed in Annex A, a relatively straightforward and supposedly well-understood one at that.

Bear this in mind when you are shown an ISO/IEC 27001 compliance certificate, or if you are given access to the associated ISMS scope and SoA - especially if the organisation's information security status matters. If you are, say, a CEO or owner, don't be fooled by the lengthy SoA and the fancy parchment that your infosec people are so proud of. Over-reliance on ineffective assurance is, itself, an information risk.

PS  Ignoring the bad grammar, perhaps SoA really means "Should of Asked" or even "Sod off Auditor"!

PPS  My pal Ed Hodgson suggested two more SoA options:

  • Applicable but managed by head office - it's delivered by an OLA that falls under A.15

  • Applicable but limited - we have a control but we only use it in one particular instance to manage a specific risk, and not in the broad way that you might otherwise expect.