Security awareness/training metrics


An interesting discussion on the ISO27k Forum concerns measuring security awareness and training activities.  Most of the measures proposed so far have been 'input' or 'process' metrics, such as evaluation sheets covering the quality of the venue, the course materials, the food served and the tutor (even the parking spaces!).  Many organizations collect basic data such as the number of attendees at awareness and training events, and the number of events attended by each employee in the course of a year.

Input measurements of this nature are relatively cheap and easy to collect, in volumes large enough for meaningful statistical analysis, and some of those statistics are actually useful for management purposes (e.g. distinguishing good from bad trainers, and identifying areas for improvement in their techniques, the materials or the venue).  A few may even be required for compliance reporting against [senseless] regulatory requirements such as "All employees must go through security training once a year" (which says absolutely nothing about making the training effective!).
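
By way of illustration, here is a minimal Python sketch of one such management use - comparing trainers on their evaluation-sheet ratings.  The trainer names and 1-to-5 ratings are invented:

    from statistics import mean, stdev

    # Evaluation-sheet ratings (1-5) collected per trainer - invented data
    ratings = {
        "Trainer A": [4, 5, 4, 4, 5, 3, 4],
        "Trainer B": [2, 3, 2, 4, 2, 3, 2],
    }

    for trainer, scores in ratings.items():
        print(f"{trainer}: mean {mean(scores):.1f}, "
              f"spread {stdev(scores):.1f} (n={len(scores)})")

A consistently low mean flags a trainer (or course) needing attention, while a wide spread suggests the sessions are hit-and-miss.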

However, 'output' or 'outcome' metrics tend to be much more valuable, although it takes some creativity to think up metrics that measure the actual effects of security awareness and training.  Examples include:
  • Comprehension tests to assess how well employees recall and understand key lessons
  • Behavioral surveys to measure the organization's security culture
  • Phishing, social engineering and penetration tests etc. to determine how well employees recognize and respond to mock attacks (see the sketch after this list)
  • Audits and post-incident reviews of actual incidents (including noncompliance) to determine the extent to which lack of training/awareness was a factor
  • 'Security maturity' metrics that rate the organization's awareness and training practices against generally accepted good practices [see an example of this kind of metric].
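
Here is the sketch promised above: a minimal Python example turning the raw results of a mock-phishing exercise into two simple output metrics.  The 'clicked' and 'reported' field names are hypothetical:

    def phishing_metrics(results):
        """Return click and report rates (%) from per-recipient results."""
        total = len(results)
        clicked = sum(1 for r in results if r["clicked"])
        reported = sum(1 for r in results if r["reported"])
        return {"click_rate_pct": 100.0 * clicked / total,
                "report_rate_pct": 100.0 * reported / total}

    # Invented example: 3 of 200 recipients clicked the lure, 48 reported it
    results = ([{"clicked": True,  "reported": False}] * 3
               + [{"clicked": False, "reported": True}] * 48
               + [{"clicked": False, "reported": False}] * 149)
    print(phishing_metrics(results))  # click 1.5%, report 24.0%
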
Both input and output metrics should show the effects of changes in the awareness and training practices (e.g. running a "fraud focus month" with a bunch of extra activities on that topic); however, output metrics tracked over successive months are far more likely to demonstrate whether those effects were long-lasting and achieved the desired outcomes.
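
As a sketch of that longitudinal view, the following compares mean monthly phishing click rates before, shortly after, and some months after a hypothetical awareness push (the figures are invented):

    from statistics import mean

    # Monthly phishing-test click rates (%), months 1-9 - invented data
    click_rate_pct = [12.0, 11.5, 12.3, 6.0, 6.5, 7.1, 7.4, 8.9, 10.2]
    push = 3  # index of the first month after the awareness push

    print(f"before: {mean(click_rate_pct[:push]):.1f}%")
    print(f"shortly after: {mean(click_rate_pct[push:push + 3]):.1f}%")
    print(f"months later: {mean(click_rate_pct[push + 3:]):.1f}%")

A drop that persists months later indicates a lasting effect; a rebound toward the pre-push level suggests the awareness boost is wearing off.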


The PRAGMATIC method deliberately emphasizes the value of metrics that are Relevant, Predictive, Meaningful and Cost-effective.  Cost is naturally a factor since audits and surveys can be quite expensive, but that's not the whole story: the additional value of output metrics in supporting key management decisions, compared to cheap-n-nasty input metrics, means they are often an excellent investment.
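
Purely as an illustration of how such trade-offs might be weighed, this sketch averages invented 0-100 scores on the four criteria named above (the full PRAGMATIC method rates each metric against all nine criteria in the acronym):

    CRITERIA = ("Relevant", "Predictive", "Meaningful", "Cost-effective")

    # Invented scores per criterion, in CRITERIA order
    candidates = {
        "training attendance count (input)":  (40, 20, 30, 95),
        "phishing-test click rate (output)":  (90, 75, 85, 60),
    }

    for metric, scores in candidates.items():
        print(f"{metric}: {sum(scores) / len(CRITERIA):.0f}/100")

Despite its higher cost, the output metric scores better overall because it earns its keep on the criteria that matter for decision-making.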


Gary.


--------------


PS  Penny Riordan pointed out the Kirkpatrick model for evaluating training.  Professor Kirkpatrick described four 'levels' or methods of evaluation as follows:

  1. Reaction of student - measure what the student thought and felt about the training;
  2. Learning - measure (by testing) the resulting increase in knowledge or capability;
  3. Behaviour - measure (by observation) the extent of behavioural and capability improvement, and its implementation/application;
  4. Results - measure the effects on the business or environment resulting from the trainee's performance.
The third and fourth levels correspond to most of what I called 'output' or 'outcome' metrics, while the second level corresponds to the comprehension tests I mentioned.  Training feedback forms equate to level 1, part of what I called 'input' measures.
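
That correspondence can be laid out as a rough mapping (the groupings follow the paragraph above; the structure is purely illustrative):

    kirkpatrick = {
        1: ("Reaction",  ["training feedback forms"]),
        2: ("Learning",  ["comprehension tests"]),
        3: ("Behaviour", ["behavioral surveys", "phishing/social-engineering tests"]),
        4: ("Results",   ["audits and post-incident reviews", "maturity ratings"]),
    }

    for level, (name, metrics) in kirkpatrick.items():
        print(f"Level {level} ({name}): {', '.join(metrics)}")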

Thanks Penny!