Iterative scientific infosec

Here's a simple, generic way to manage virtually anything, particularly complex and dynamic things:
  1. Think of something to do
  2. Try it
  3. Watch what happens
  4. Discover and learn
  5. Identify potential improvements
  6. GOTO 1

It's a naive programmer's version of Deming's Plan-Do-Check-Act cycle - an iterative approach to continuous improvement that has proven very successful in various fields over several decades. Notice that it is rational, systematic and repeatable.
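
Being the naive programmer, I can't resist sketching that loop in code. This is just a toy: the 'risk_score' measure below is entirely made up (and, as I'll come to shortly, the measurement is the hard part in practice), but it shows the shape of the cycle - try a small change, watch what happens, keep it only if things improved:

    import random

    def risk_score(controls):
        # Hypothetical measure of residual risk: lower is better (invented for illustration).
        return sum((strength - 1.0) ** 2 for strength in controls.values())

    def improve(controls, iterations=100):
        best = risk_score(controls)
        for _ in range(iterations):                       # 6. GOTO 1
            name = random.choice(list(controls))          # 1. Think of something to do
            candidate = dict(controls)
            candidate[name] += random.uniform(-0.1, 0.1)  # 2. Try it
            score = risk_score(candidate)                 # 3. Watch what happens
            if score < best:                              # 4. & 5. Discover, learn, keep the improvement
                controls, best = candidate, score
        return controls, best

    print(improve({"patching": 0.4, "awareness": 0.2, "monitoring": 0.6}))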

Here's a similar grossly simplified outline of the classical experimental method that has proven equally successful over several centuries of scientific endeavour:

  1. Consider available information
  2. Propose a testable hypothesis
  3. Test it (design and run experiments)
  4. Watch what happens
  5. Discover and learn
  6. GOTO 1

Either way, I'm a committed fan. The iterative approach, with its incremental improvements, works well. I approve.

Along the way, aside from pushing back the frontiers of science and technology and achieving remarkable advances for human society, we've also learned about the drawbacks and flaws in the processes, and we've developed assorted mechanisms to reduce the risks and increase our chances of success e.g.:

  1. Key to 'improving' or 'advancing' is to be able to recognise and ideally measure the improvement or advance - in most cases anyway. Improvements or advances that happen purely by chance ('discoveries') are welcome but rare treats. A big issue in quality assurance is the recognition that there are usually several competing and sometimes contradictory requirements/expectations, not least the definition of 'quality'. To the right customer, a rusty old heap discovered in a barn is just as much a 'quality vehicle' as a Rolls-Royce is to its buyer. Likewise, security improvements depend on one's perspective. For hackers, exposing exploitable vulnerabilities improves their chances of breaking in: 'improvement', for them, means weaker security!

  2. Various forms of 'control' are important to stabilise situations and gain assurance that whatever actually happens is the anticipated result of whatever changes we have made, rather than some factor that we probably hadn't even appreciated - which itself can be valuable knowledge. In a sense, there's no such thing as a failed experiment or test provided we still learn something useful from it. A lot of innovation involves figuring out what doesn't work (such as enforced periodic password changes: we tried it, it didn't help, move on).

  3. 'Consider', 'discover' and 'learn' are all about being open to new knowledge, climbing on the shoulders of the giants that came before us and hopefully reaching ever higher. Again, assurance is part of that: to what extent can we trust the information at hand? How reliable is any new knowledge we gain? What can/should we do to be more certain that things are going the right way? Knowledge sharing is another factor. The community as a whole benefits by sharing and collaborating, even though individuals might benefit more by selfishly withholding information. There is a strong argument to facilitate much more sharing of information about information risk and security, incidents, controls etc. - perhaps something similar to the airline industry where open disclosure of issues is encouraged and facilitated in order to protect lives and increase trustworthiness. It's another angle on responsible disclosure.

  4. Small changes are generally far less risky than large ones, although sometimes major advances require risky step-changes.

  5. Given that we cannot be absolutely certain of making improvements and advances, 'planning to fail' is an integral part of the process ... and yet failure is itself another valuable opportunity to learn and improve (provided we survive!).

So, this morning I've been thinking about the applications of those principles and mechanisms to information risk management, putting infosec under the microscope.

  1. 'Improving' or 'advancing' infosec is more involved than it seems. It is typically described in terms of reducing the probability and/or impacts of adverse incidents, but digging deeper, those terms are unclear. The probabilities of incidents occurring in future can generally only be estimated within a finite timescale, and the impacts are equally hard to predict and measure. Security metrics is, at best, an immature field. It is not even straightforward to define 'adverse incidents': adverse to whom, in what sense? And what are 'incidents', in fact?
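
    To make that concrete, here's a rough sketch of estimating the annual loss from one category of incident. Every number is invented, and the approach (a crude Monte Carlo over assumed ranges of probability and impact) is just one of many - yet notice how far apart the median and the 95th percentile turn out to be, which is largely my point:

        import random

        def simulate_annual_loss(p_low, p_high, impact_low, impact_high, trials=10_000):
            # Each trial picks a probability of suffering at least one incident this year
            # and a cost if it happens, both drawn from wide (invented) ranges.
            losses = []
            for _ in range(trials):
                probability = random.uniform(p_low, p_high)
                impact = random.uniform(impact_low, impact_high)
                losses.append(impact if random.random() < probability else 0.0)
            losses.sort()
            return sum(losses) / trials, losses[trials // 2], losses[int(trials * 0.95)]

        mean, median, p95 = simulate_annual_loss(0.05, 0.40, 10_000, 500_000)
        print(f"mean {mean:,.0f}   median {median:,.0f}   95th percentile {p95:,.0f}")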

  2. The controls I'm talking about here (point 2 of the list above) are process controls rather than typical information security controls. So, for instance, when some bright spark decides to introduce a new corporate infosec policy on, say, responsible disclosure, what can/should be done to measure the improvement achieved? Once more, that's tough to answer. As I've just said, it's not easy to define what 'improve' even means in this context, and yet without that we stand little chance of measuring or driving it. Reasonably clear objectives are the best starting point when designing or selecting metrics - there's a rough sketch after the aside below.

    [ASIDE: there's a little learning point here. Shouldn't we at least try to clarify what our infosec policies are intended to achieve, preferably checking that out and adjusting things accordingly? Hmmmmmmm, more thinking required. If I make any headway, I'll pick up this loose end in a future blog piece.]
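
    Returning to the responsible disclosure example, here's the kind of crude measure a clearer objective might suggest. The counts and the 'official channel' categorisation are hypothetical, invented purely to illustrate - but at least it's something we could actually track:

        # Vulnerability reports received from outsiders, before and after the policy (invented counts).
        reports = {
            "before policy": {"via official channel": 2,  "elsewhere": 9},
            "after policy":  {"via official channel": 11, "elsewhere": 4},
        }

        for period, counts in reports.items():
            total = sum(counts.values())
            share = counts["via official channel"] / total
            print(f"{period}: {share:.0%} of {total} external reports came through the channel")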

  3. Re 'consider', 'discover' and 'learn', I'll make the general point that infosec management, as with any form of management, is a rational undertaking. It requires thoughtful strategising, intelligent decision-making, appropriate governance. It revolves around and is crucially dependent upon information, a blend of objective and subjective. There are several substantial information risks associated with 'management' ... and yet they are notably absent from any corporate risk registers that I've seen (or managed or contributed to!) to date - potentially a widespread blind-spot and a serious omission. I mentioned the need for 'assurance' in relation to management information, one of several information integrity controls which are, thankfully, more commonly employed than the risks are identified, raising questions about whether and how such controls were ever justified. Post-incident reviews plus ISO/IEC 27001's ISMS management reviews and internal audits are examples of assurance measures relating to infosec, similar to peer reviews of scientific papers. Confidentiality controls are even more common, while certain availability controls tend to be highly valued after management information systems have failed, although I'm sure more could/should be done in advance.

    [ASIDE: another learning point. What are the information risks associated with an ISMS? What mitigating controls are appropriate to protect and allow legitimate exploitation of management information, aside from those required/suggested by '27001?]
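
    As a first tug on that loose end, here's a hedged sketch of what a risk register entry covering management information might look like. The fields and the example values are my own invention rather than anything required by '27001:

        from dataclasses import dataclass

        @dataclass
        class RiskEntry:
            asset: str
            threat: str
            consequence: str
            likelihood: str        # e.g. low / medium / high
            impact: str
            treatment: str
            owner: str

        isms_risk = RiskEntry(
            asset="ISMS metrics and management review records",
            threat="Inaccurate, incomplete or stale information feeding management decisions",
            consequence="Resources misdirected; genuine exposures left untreated",
            likelihood="medium",
            impact="medium",
            treatment="Assurance reviews of metric sources; ISMS internal audits",
            owner="CISO",
        )
        print(isms_risk)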

  4. Most security improvements are minor and incremental in nature - little tweaks or adjustments to an existing system or suite of controls, with little associated risk (as far as we know: they are seldom even considered). More significant improvements (such as the adoption of new security systems, not least an ISMS) may be more risky, hence those risks really ought to be identified, evaluated and treated in the conventional manner. While 'project risks' associated with system implementation/change projects or initiatives are commonly managed (well, OK, I should say they are selectively tracked, maybe reported, and perhaps mitigated with project management process controls), broader information risks arising from the new/changed systems may not be. For example, regular updates to antivirus systems plus security patching in general typically involve new software being provided to the organisation by the vendors. Some mature, security-conscious organisations run regression and security tests of some sort before deploying such updates but I'm sure most don't because they lack the time, resources, ability and will to do so. Who among us takes care of the associated information risks? Did you identify, evaluate and start treating the risks when the corresponding systems were originally implemented? What about current and future implementations - including all those cloud systems being changed for business reasons other than information security? What about process changes, new suppliers, new employees (especially managers and others with significant responsibilities and powers), new whatevers? How do you ensure that information risks are duly identified, evaluated and treated appropriately across the board, for all substantial changes - not just the IT stuff?

  5. I've already mentioned assurance controls such as post-incident reviews, ISMS management reviews and ISMS internal audits. There are also myriad control-failure-controls in the form of 'security-in-depth', such as multiple overlaid layers of access controls protecting valuable information assets. There are even resilience, recovery and contingency controls: a lot of business continuity falls into this area. However, a substantial problem remains due to the paucity of detective controls. We often don't know about infosec incidents until impacts have grown noticeable, by which time the damage is ongoing. So, knowing that, we should of course redouble our efforts to improve security monitoring and incident detection, while also acknowledging that we are likely to continue discovering incidents-in-progress, hence we cannot afford to give up on our reactive incident responses. 
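
On that last point, one simple detection measure worth tracking is the gap between an incident starting and someone noticing it. A minimal sketch, with invented incidents and timestamps:

    from datetime import datetime

    # Hypothetical incident records: when each incident began and when it was detected.
    incidents = [
        {"began": datetime(2021, 3, 2, 9, 0),   "detected": datetime(2021, 3, 2, 17, 30)},
        {"began": datetime(2021, 4, 11, 1, 15), "detected": datetime(2021, 4, 18, 8, 0)},
        {"began": datetime(2021, 6, 7, 14, 0),  "detected": datetime(2021, 6, 7, 14, 20)},
    ]

    hours = [(i["detected"] - i["began"]).total_seconds() / 3600 for i in incidents]
    print(f"mean time to detect: {sum(hours) / len(hours):.1f} hours "
          f"(worst: {max(hours):.1f} hours)")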

Quite a lot to process there. Think on.