SMotW #80: quality of system security
Security Metric of the Week #80: Quality of system security revealed by testing
Our 80th Security Metric of the Week concerns [IT] system security testing, implying that system security is in some way measured by the testing process.
The final pass/fail outcome of testing could be used as a crude binary metric. It may have some value as a measure across the entire portfolio of systems tested by a large organization over a period of a few months but, simple as it is, a raft of potential issues lurks beneath it. If, for instance, management starts pressuring the business units or departments whose software most often fails security testing to 'pull their socks up', an obvious but counterproductive response would be to lower the security criteria or reduce the amount or depth of security testing.
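As a rough illustration of how that binary measure might be rolled up across a portfolio, here is a minimal sketch in Python; the business units, systems and results are entirely hypothetical:

```python
# Minimal sketch: rolling up binary pass/fail security test results
# into a pass rate per business unit. All data here is hypothetical.
from collections import defaultdict

test_results = [
    # (business unit, system, passed security testing?)
    ("Finance",   "payroll",        True),
    ("Finance",   "ledger",         False),
    ("Logistics", "fleet-tracker",  True),
    ("Logistics", "warehouse-mgmt", True),
]

totals = defaultdict(lambda: [0, 0])   # unit -> [passed, tested]
for unit, _system, passed in test_results:
    totals[unit][0] += int(passed)
    totals[unit][1] += 1

for unit, (passed, tested) in totals.items():
    print(f"{unit}: {passed}/{tested} systems passed ({passed / tested:.0%})")
```

Even such a trivial roll-up makes the perverse incentive obvious: a unit can raise its pass rate just as easily by weakening the tests as by fixing its software.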
The number of security issues identified by testing would also be a simple metric to gather, but again not easy to interpret. If the metric is tracking upwards (as seen on the demo graph above), is that good or bad? It's bad news if it means there are more security issues to be found, but good news if testing is finding more of the security issues that were there all along. Taken in isolation, the metric does not distinguish between these possibilities, or indeed others (such as changes in the way the measurements are made, a concern with every metric unless there is strong change management).
Rather than the simple count, more sophisticated metrics could be designed, perhaps analyzing identified issues by their severity (which is really another way of saying risk) and/or nature (e.g. do they chiefly affect confidentiality, integrity or availability?). ACME managers were quite keen on such a metric, judging by the PRAGMATIC score:
| P | R | A | G | M | A | T | I | C | Score |
|----|----|----|----|----|----|----|----|----|-------|
| 83 | 88 | 83 | 73 | 90 | 68 | 80 | 82 | 10 | 73% |
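The headline 73% is simply the straight average of the nine criterion ratings, as a couple of lines of Python confirm:

```python
# PRAGMATIC score = average of the nine criterion ratings from the table above
ratings = [83, 88, 83, 73, 90, 68, 80, 82, 10]   # P R A G M A T I C
print(f"PRAGMATIC score: {sum(ratings) / len(ratings):.0f}%")   # -> 73%
```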
The metric would have been a no-brainer if it were not for the 10% rating on Cost-effectiveness. In the managers' opinion, the metric would need to measure and take account of a number of factors relating to system security, making it fairly expensive. However, with a bit more work up-front, some or all of the data collection processes might perhaps be automated in order to reduce the costs. This, then, is an obvious avenue to explore in developing the metric.
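Purely as an illustration of what such automation might look like, a severity- and nature-weighted variant of the metric could be computed directly from exported test findings along these lines; the weights, field names and sample findings below are invented for the sketch:

```python
# Hypothetical sketch: a severity- and nature-weighted issue score computed
# automatically from exported security test findings. The weights, field
# names and sample findings are invented purely for illustration.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 7, "critical": 15}
CIA_WEIGHT = {"confidentiality": 1.0, "integrity": 1.2, "availability": 0.8}

findings = [
    {"system": "payroll", "severity": "high",   "affects": "confidentiality"},
    {"system": "payroll", "severity": "low",    "affects": "availability"},
    {"system": "ledger",  "severity": "medium", "affects": "integrity"},
]

def weighted_issue_score(findings):
    """Sum the severity weights, scaled by which aspect of C-I-A is affected."""
    return sum(SEVERITY_WEIGHT[f["severity"]] * CIA_WEIGHT[f["affects"]]
               for f in findings)

print(f"Weighted issue score: {weighted_issue_score(findings):.1f}")
```

Feeding such a script straight from the test tools' own reports is one way the data collection cost might be driven down, although the weightings themselves would still need managerial agreement.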
A pilot study would be a good way to take this forward, trialing the metric and perhaps comparing a number of variants side-by-side, systematically eliminating the weakest over several months until just one or two remained, or until management decided that the metric did not make the grade after all.