SMotW #44: system change correlation
Security Metric of the Week #44: Correlation between system/configuration logs and authorized change requests
In theory, changes to controlled IT systems (other than data changes made by legitimate, authorized users through their applications) should only be made under the authority of, and in accordance with, approved change requests. In practice, other changes typically occur for various reasons, such as ad hoc system administration (usually involving relatively "minor" changes that may not require separate authorization) and changes made for nefarious purposes (such as hacks and malware). Furthermore, authorized changes aren't always made (e.g. they are delayed, overtaken by events, or neglected). This metric involves someone, somehow, linking actual changes with authorized ones.
The metric's PRAGMATIC ratings and overall score are quite good apart from the final three criteria:
P  | R  | A  | G  | M  | A  | T  | I  | C  | Score
87 | 80 | 90 | 80 | 80 | 80 | 60 | 50 | 47 | 73%
The person measuring this is likely to be a system administrator with a direct interest in the metric, which affects the Independence rating. The metric is unlikely to identify a rogue sysadmin, unless they are so inept as to leave obvious traces and incriminate themselves! The metric could be independently measured or cross-checked by someone else (such as an IT auditor) to confirm the values, especially if there is some reason to doubt the integrity of the measurer or the validity and Accuracy of the measurements. However, cross-checking inevitably reduces the Cost-effectiveness rating and further increases the Time delay before the measure is available.
Aside from that issue, the metric is bound to be quite Costly, given the painstaking manual analysis that would be needed to correlate technical log entries with change requests. A given change could generate a multitude of log entries, possibly on several systems. Furthermore, log entries accumulate in the normal course of operations, so the measurer would need to sift out those associated with authorized changes from those that aren't.
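To give a feel for what a partly automated version of that sifting might look like, here is a minimal sketch in Python. It assumes the hard work has already been done: change-related log entries have been extracted from the raw logs, and approved change requests are available with a host and an approved time window for each. Both data structures, and the matching rule (same host, timestamp within the approved window), are illustrative assumptions rather than a recommended design.

```python
from datetime import datetime

# Hypothetical inputs: in reality these would come from the logging/SIEM
# platform and from the change-management system respectively.
change_requests = [
    # (change_id, host, approved_start, approved_end)
    ("CR-1001", "web01", datetime(2013, 1, 7, 22, 0), datetime(2013, 1, 8, 2, 0)),
    ("CR-1002", "db01",  datetime(2013, 1, 9, 20, 0), datetime(2013, 1, 9, 23, 0)),
]

log_changes = [
    # (timestamp, host, description) - change-related entries already
    # sifted out of the routine noise in the logs
    (datetime(2013, 1, 7, 22, 30), "web01", "package httpd updated"),
    (datetime(2013, 1, 8, 14, 5),  "web01", "sshd_config modified"),
    (datetime(2013, 1, 9, 21, 10), "db01",  "database parameter file changed"),
]

def covering_request(entry):
    """Return the first approved change request whose window covers this entry."""
    timestamp, host, _ = entry
    for cr_id, cr_host, start, end in change_requests:
        if host == cr_host and start <= timestamp <= end:
            return cr_id
    return None

matched   = [e for e in log_changes if covering_request(e)]
unmatched = [e for e in log_changes if covering_request(e) is None]

print(f"{len(matched)} of {len(log_changes)} logged changes fall within approved windows")
for timestamp, host, description in unmatched:
    print(f"UNEXPLAINED: {timestamp} {host} {description}")
```

In practice the matching rule would need to be far looser (a single change request typically spans several hosts and generates many log entries), and the delayed or never-made authorized changes mentioned above would need the reverse check: approved change requests with no corresponding log entries at all.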
To be of much use, the metric would also need to distinguish trivial from important changes, requiring still more analysis.
Oh by the way, mathematicians reading this may expect the metric to be represented as a correlation coefficient between -1 and +1, but that is not necessarily so. While there may be numbers behind the scenes, a crude red/amber/green rating of a bunch of servers may be entirely sufficient for a management report and fit for purpose if, for instance, it enables management to spot obvious issues with particular sysadmins, departments or business units, or with certain categories/types of change. In our jaundiced view, information security metrics are far more valuable as decision-support tools for practitioners and managers than as theoretical exercises in mathematical precision. It's handy if the two objectives coincide, but not always necessary!
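As an illustration of that point, the reporting end could be as simple as the following sketch: for each server, the proportion of logged changes that could not be matched to an approved change request is turned into a red/amber/green rating. The thresholds (10% for amber, 25% for red) are made-up values for illustration, not something the metric itself prescribes.

```python
def rag_rating(unexplained, total, amber_at=0.10, red_at=0.25):
    """Rate a server by the proportion of its logged changes that are unexplained.
    The 10%/25% thresholds are illustrative assumptions, not recommendations."""
    if total == 0:
        return "GREEN"
    ratio = unexplained / total
    if ratio >= red_at:
        return "RED"
    if ratio >= amber_at:
        return "AMBER"
    return "GREEN"

# Hypothetical per-server counts of (unexplained, total) logged changes
servers = {"web01": (1, 20), "db01": (0, 12), "app03": (6, 15)}

for name, (unexplained, total) in sorted(servers.items()):
    print(f"{name}: {rag_rating(unexplained, total)} "
          f"({unexplained} of {total} changes unexplained)")
```

A summary along those lines is exactly the kind of decision-support view argued for above: it won't satisfy a statistician, but it may well be enough to show management which servers, sysadmins or business units deserve a closer look.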