Are our infosec controls sufficient?


^ Although it's tempting to dismiss such questions as rhetorical, trivial or too difficult, there are reasons for taking them seriously*. Today I'm digging a little deeper into the basis for posing such tricky questions, explaining how we typically go about answering them in practice, using that specific question as an example.

OK, here goes.

The accepted way of determining the sufficiency of controls is to evaluate them against the requirements. Adroitly sidestepping those requirements for now, I plan to blabber on about the evaluation aspect or, more accurately, assurance.

Reviewing, testing, auditing, monitoring etc. are assurance methods intended to increase our knowledge. We gather relevant data, facts, evidence or other information concerning a situation of concern, then consider and assess/evaluate it in order to (there's a little sketch in code after this list):

  • Demonstrate, prove or engender confidence that things are going to plan, working well, sufficiently and adequately in practice, as we hope; and
  • Identify and ideally quantify any issues i.e. aspects that are not, in reality, working quite so well, sufficiently and adequately. 
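
To make that evaluate-against-requirements idea a little more concrete, here's a minimal sketch in Python. The control names, assessed scores and the 0.7 threshold are purely illustrative assumptions on my part, not drawn from any standard or from real assessment data.

```python
# A minimal sketch of assurance as evaluation against requirements.
# The controls, scores and threshold below are invented for illustration only.

ASSESSMENTS = {
    # control            : assessed effectiveness, 0.0 (absent) to 1.0 (fully effective)
    "access control":      0.85,
    "backup and restore":  0.60,
    "security awareness":  0.40,
}

SUFFICIENCY_THRESHOLD = 0.7  # assumed minimum acceptable level

def evaluate(assessments, threshold=SUFFICIENCY_THRESHOLD):
    """Split controls into those that appear sufficient and those raising issues."""
    working, issues = [], []
    for control, score in assessments.items():
        (working if score >= threshold else issues).append((control, score))
    return working, issues

working, issues = evaluate(ASSESSMENTS)
print("Apparently sufficient:", working)
print("Issues to follow up:  ", issues)
```

Real assurance is messier, of course: the 'scores' are judgements backed by evidence, and the 'threshold' is whatever the requirements actually demand.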

Assurance activities qualify as controls in their own right, mitigating risks such as the information risks associated with information risk and security management itself e.g.:

  • Mistakes in our identification of other information risks (e.g. failing to appreciate critical information-related dependencies of various kinds);
  • Biases and errors in our assessment/evaluation of identified information risks (e.g. today’s obsessive focus on “cyber” implies down-playing, perhaps even ignoring other aspects of information security, including non-cyber threats such as physical disasters and human/cultural issues more generally – COVID for instance, just one of many people-related risks), leading to inappropriate risk treatment decisions, priorities, plans and resources;
  • Failures in our treatment of identified and unacceptable information risks (e.g. controls inadequately specified, designed, implemented, used, managed, monitored and maintained, that do not sufficiently mitigate the risks we intended to mitigate, in practice; inattention, incompetence, conflicting priorities and plain mistakes in the processes associated with using, managing and maintaining security controls);
  • Changes in the information risks such as: novel or more/less significant threats; previously unrecognized vulnerabilities; evolving business processes, systems, relationships and people; and myriad changes in the ‘the business environment’ or ‘the ecosystem’ within which our risks and controls exist and (hopefully!) operate;
  • Changes in the information security controls including those that, for various reasons, gradually decay and/or suddenly, unexpectedly and perhaps silently fail to operate as intended, plus those that are overtaken by events (such as the availability of even better, more cost-effective controls); 
  • Invalid or inappropriate assumptions (e.g. that an ISO27k ISMS is sufficient to manage our information risks, that management fully supports it, that it is well designed and sufficiently resourced etc., and that it represents the optimal approach for any given situation); it is unwise to assume too much, especially regarding particularly important matters ... raising questions about which infosec-related matters are particularly important, and how they stack up in relation to other business priorities, issues, pressures etc.;
  • Blind-spots and coverage gaps that leave potentially significant information risks partially or wholly unaddressed, either because nobody appreciates that they exist (a failure of risk identification) or because everyone blithely assumes that someone else is dealing with them (failing to evaluate and treat them appropriately) - see the little sketch after this list.
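
On that last point, here's a toy illustration of one kind of blind-spot check: sweeping a risk register for identified risks that have no owner or no recorded treatment decision - the 'someone else is surely dealing with it' trap. The field names and sample entries are assumptions made purely for this sketch, not a real register template.

```python
# Toy blind-spot check: flag register entries with no owner or no treatment decision.
# The field names and sample risks are invented for illustration.

risk_register = [
    {"risk": "supplier data breach", "owner": "Procurement", "treatment": "contract clauses"},
    {"risk": "insider fraud",        "owner": None,          "treatment": None},
    {"risk": "cloud outage",         "owner": "IT Ops",      "treatment": None},
]

unowned   = [r["risk"] for r in risk_register if not r["owner"]]
untreated = [r["risk"] for r in risk_register if not r["treatment"]]

print("No owner (everyone assumes someone else has it):", unowned)
print("No treatment decision recorded:", untreated)
```

It won't find the risks nobody has identified at all, naturally - that's what the broader assurance activities are for.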

Assurance activities also generate and involve metrics - another can of worms there. Whereas certification is an example of a binary pass/fail metric, most forms of assurance aim to measure by degrees, quantifying issues and acknowledging that the world is mostly shades of grey, not black-or-white. The sufficiency of our infosec controls, for instance, may range from zero (wholly inadequate or missing) through barely sufficient, and on through appropriately or perfectly sufficient, to excessive. Yes, it is possible to be 'too secure', wasting resources on unnecessarily strong controls, being so risk averse that legitimate business opportunities are missed. You might even say that excessive security inadequately satisfies general business objectives relating to the optimal use of resources. It harms the organization's overall efficiency.  
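
To illustrate the shades-of-grey point, here's one way a sufficiency measure might be banded. The score scale and band boundaries are arbitrary assumptions of mine, chosen only to show that both 'not enough' and 'too much' are possible outcomes.

```python
# Sketch: sufficiency as shades of grey rather than pass/fail.
# The score scale (100 = exactly meets requirements) and the band
# boundaries are arbitrary, for illustration only.

def sufficiency_band(score: float) -> str:
    """Map a sufficiency score onto a descriptive band."""
    if score <= 0:
        return "wholly inadequate or missing"
    if score < 95:
        return "insufficient"
    if score < 100:
        return "barely sufficient"
    if score <= 120:
        return "appropriately sufficient"
    return "excessive - resources probably better spent elsewhere"

for score in (0, 50, 97, 110, 150):
    print(score, "->", sufficiency_band(score))
```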

There’s a lot to think about here … and I’m not finished yet!

Consider that various forms of assurance are controls just like any other - controls that may themselves be inadequate or excessive, and may partially or wholly fail in practice. Although assurance generally has value, it too has its limits as a control mechanism, such as:

  • Sophisticated and reactive threats such as targeted hacks and fraud – Nick Leeson’s book “Rogue Trader” illustrates the lengths to which determined fraudsters will go to undermine, bypass, mislead and essentially evade general and financial management controls and even focused audits, taking advantage of little weaknesses in the control systems and ‘opportunities’ that arise. Information security is replete with comparable examples involving malware and hackers;
  • I don't know about you, but I’ll freely admit I’ve had my off-days - I’ve made mistakes, missed things, misinterpreted situations, made errors of judgement etc.

Speaking as a reformed IT auditor, software tester, information risk and security specialist, consultant, technical author and proofreader, I've learned to temper my perfectionist streak by accepting that finite resources, imposed timescales and competing priorities mean I have to accept 'good enough for now' in order to move on to other things. Having already consumed a good couple of hours, I could continue writing and wordsmithing this very article indefinitely, if it weren't for Having A Life and Other Stuff On My Plate.

So, since essentially everything (including assurance) is fallible, it is worth considering and adopting suitable resilience, recovery and contingency measures designed to help cope with possible failures – particularly, as I said in relation to ‘important matters’, where failures would cause serious problems for the organization. An example of this is the way customers typically probe into the information security, privacy and governance arrangements, financial stability, capability etc. of their “critical suppliers”, accepting that various assertions, certifications, assurances and legal obligations may not, in fact, totally avoid or prevent incidents. Supplier assessments and the like are forms of assurance to mitigate information risks. Wise businesses keep their feelers out, remaining constantly alert to early signs of trouble ahead in their supply networks; they have suitable processes to collect, collate, evaluate and respond to the assurance and other information flowing in; and they have strategies to deal with issues arising (e.g. alternative sources of supply; stocks; strong relationships and understandings with their customers and partners plus other suppliers …; oh and an appreciation that, under some circumstances, even supposedly non-critical suppliers may turn out to be critically important after all).
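
As a rough sketch of what 'collect, collate, evaluate and respond' might boil down to in code, here's a toy triage of incoming supplier assurance information. The supplier names, scores, the 0.7 threshold and the annual reassessment rule are all invented for the example.

```python
# Toy triage of incoming supplier assurance information.
# Names, scores, the 0.7 threshold and the annual rule are illustrative assumptions.

from datetime import date, timedelta

TODAY = date(2021, 7, 1)            # fixed date so the example is reproducible
STALE_AFTER = timedelta(days=365)   # assumption: reassess suppliers at least annually

suppliers = [
    {"name": "Acme Hosting", "critical": True,  "assurance_score": 0.9, "last_assessed": date(2020, 3, 1)},
    {"name": "Widget Co",    "critical": False, "assurance_score": 0.5, "last_assessed": date(2021, 5, 1)},
]

for s in suppliers:
    stale     = TODAY - s["last_assessed"] > STALE_AFTER
    low_score = s["assurance_score"] < 0.7
    if s["critical"] and (stale or low_score):
        print(f"Follow up with {s['name']}: stale assessment={stale}, low score={low_score}")
```

A real process would also keep half an eye on the supposedly non-critical suppliers, for the reason just given.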

It should be obvious that (given enough resources) we could continue circling around risks indefinitely, using assurance to identify and help address some risks on each lap without ever totally eliminating them as a whole. At the end of the day, even the most competent and paranoid risk-averse organizations and individuals have to accept some residual risks. Too bad! Life’s a bitch! Suck it up! 
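
One crude way to picture why those laps never eliminate risk entirely: suppose each assurance cycle knocks out a fixed fraction of whatever risk remains (a simplifying assumption of mine, purely for illustration). The remainder shrinks on every lap but never reaches zero.

```python
# Crude illustration: each assurance 'lap' removes a fixed fraction of the
# remaining risk, so something always remains. The 30% figure is arbitrary.

residual = 1.0            # notionally, all of the original risk
REDUCTION_PER_LAP = 0.3

for lap in range(1, 6):
    residual *= 1 - REDUCTION_PER_LAP
    print(f"After lap {lap}: {residual:.1%} of the original risk remains")
```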

Congratulations (or should I say commiserations?!) if you have read this far. I hope to have convinced you that there’s much more to assurance than checking various cyber or IT security controls, given the organization’s interests and objectives - the business context for all this. In addition to the technical and human aspects of infosec, there are broader governance, strategic and commercial implications of [information] risk management and assurance.

Assurance is just a piece of a bigger puzzle. I've sketched the picture on the box.  Have I given you something interesting to mull over this weekend?


Along with "Are we secure enough?" and "How are things going in information security?", these are classic examples of the naïve, vague, open-ended challenges that are occasionally tossed at us by colleagues, including senior management. Tempting as it is to offer equally vacuous, non-committal or dismissive responses, they can also indicate genuine concerns or doubts that we infosec pro's should be willing and able, even keen to address. If you are serious about doing just that, I recommend studying PRAGMATIC Security Metrics for further clues about how to frame the issues, gather relevant data and come up with more credible and convincing responses. But then I would, wouldn't I? Lance Hayden's IT Security Metrics and Doug Hubbard's How to Measure Anything are further valuable contributions to the field. This blog piece barely even scratches the surface.