Metrics lifecycle management
Most metrics have a finite lifetime. They are conceived, used, hopefully reviewed and maybe changed, and eventually dropped or replaced by something better.
Presumably weak/bad metrics don't live as long as strong/good ones - at least that's a testable hypothesis provided we have a way to measure and compare the quality of different metrics (oh look, here's one!).
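For what it's worth, here's a minimal sketch of one way to compare metric quality quantitatively: rate each candidate against a handful of weighted quality criteria and rank them by overall score. The criteria, weights and ratings below are purely illustrative assumptions on my part, not a prescribed method.

```python
# Hypothetical illustration: rank candidate metrics by a weighted score
# against quality criteria. The criteria, weights and ratings are made
# up for the example.
CRITERIA_WEIGHTS = {
    "relevance": 0.30,
    "accuracy": 0.25,
    "timeliness": 0.20,
    "cost_effectiveness": 0.25,
}

def score_metric(ratings: dict) -> float:
    """Weighted average of 0-100 ratings, one per quality criterion."""
    return sum(w * ratings[c] for c, w in CRITERIA_WEIGHTS.items())

candidates = {
    "mean time to patch":      {"relevance": 80, "accuracy": 70,
                                "timeliness": 90, "cost_effectiveness": 60},
    "count of policy waivers": {"relevance": 50, "accuracy": 90,
                                "timeliness": 40, "cost_effectiveness": 80},
}

# Print the candidates in descending order of overall quality score.
for name in sorted(candidates, key=lambda n: score_metric(candidates[n]),
                   reverse=True):
    print(f"{name}: {score_metric(candidates[name]):.1f}")
```

Once metrics have comparable scores like these, the hypothesis becomes testable: track how long each metric survives in use and see whether the higher-scoring ones really do outlive the rest.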
Ideally, every stage of a metric's existence is proactively managed (there's a rough code sketch of the lifecycle after this list), i.e.:
- New metrics should arise through a systematic, structured process involving analysis, elaboration and creative thinking about how to satisfy a defined measurement need: the need comes first. Often, though, the process is more mysterious: someone somehow decides that a particular metric will be somewhat useful for an unstated, ill-defined and barely understood purpose;
- Potential metrics should be evaluated, refined and perhaps piloted before being implemented. There are often many different ways to measure something, with countless variations in how the measurements are analyzed and presented, so it takes time and effort to rationalize candidate metrics down to a workable shortlist from which the final selection is made. This step should take a 'measurement system' view, considering how new or changed metrics will complement, support or replace existing ones. Usually, however, this step is either skipped entirely or performed superficially. In my jaundiced opinion, this is the second most egregious failure in metrics management, after the lack of a defined measurement need just described;
- Various automated and manual measurement activities operate routinely during the working life of a metric. These ought to be specified, designed, documented, monitored, controlled and directed (in other words, managed) in the conventional manner, but rarely are. That's no big deal for run-of-the-mill metrics that are simple, self-evident and of little consequence, but it is potentially a major issue (an information risk, no less) for "key" metrics supporting vital decisions with significant implications for the organization;
- The value of a metric should be monitored and periodically reviewed and evaluated in terms of its utility, cost-effectiveness and so forth. That in turn may lead to adjustments, perhaps fine-tuning the metric, or a more substantial change such as supplementing or dropping it. More often (in my experience) nobody takes much interest in a metric until or unless something patently fails. I have yet to come across any organization undertaking 'preventive maintenance' on its information risk and security metrics, or for that matter on any metrics whatsoever - at least, not explicitly and openly;
- If a metric is to be dropped (retired, stopped), that decision should be made by relevant management (the metric's owner/s especially), taking account of the effect on management information and any decision-making that previously relied upon it ... which implies knowing what those effects are likely to be. In practice, many metrics circulate without anyone being clear about who owns or uses them, how and what for. It's a mess.
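To make the lifecycle concrete, here's a minimal sketch of how those stages might be tracked explicitly: a registry record for each metric with a named owner, a stated purpose, and controlled transitions between stages. The stage names, fields and transition rules are my own assumptions for illustration, not an established standard.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Stage(Enum):
    PROPOSED = auto()      # arises from a defined measurement need
    PILOTED = auto()       # evaluated and refined before full rollout
    ACTIVE = auto()        # measured, reported and managed routinely
    UNDER_REVIEW = auto()  # periodically evaluated for utility and cost
    RETIRED = auto()       # deliberately dropped by the metric's owner

# Permitted transitions: each stage maps to its allowed successors.
TRANSITIONS = {
    Stage.PROPOSED: {Stage.PILOTED, Stage.RETIRED},
    Stage.PILOTED: {Stage.ACTIVE, Stage.RETIRED},
    Stage.ACTIVE: {Stage.UNDER_REVIEW},
    Stage.UNDER_REVIEW: {Stage.ACTIVE, Stage.RETIRED},
    Stage.RETIRED: set(),
}

@dataclass
class Metric:
    name: str
    owner: str    # the accountable person - often unclear in practice!
    purpose: str  # the measurement need the metric satisfies
    stage: Stage = Stage.PROPOSED

    def advance(self, new_stage: Stage) -> None:
        """Move to a new lifecycle stage, rejecting illegal jumps."""
        if new_stage not in TRANSITIONS[self.stage]:
            raise ValueError(
                f"{self.stage.name} -> {new_stage.name} not allowed")
        self.stage = new_stage

m = Metric("patch latency", owner="CISO", purpose="track exposure window")
m.advance(Stage.PILOTED)
m.advance(Stage.ACTIVE)
```

The point of the explicit transition table is that retirement, like every other change, becomes a deliberate decision by the metric's owner rather than something that just quietly happens.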
Come on, this is hardly rocket surgery. Information risk and security metrics may be relatively recent additions to the metrics portfolio, but metrics themselves have been managed in other fields for ages, so this isn't even a novel issue - and yet I feel as though I'm breaking new ground here. Oh oh.
I should probably research fields with mature metrics, such as finance and engineering, for clues about good metrics management practices that could transfer to information risk and security.