Sunday 30 June 2013

Bloopers - a brand new security awareness module



We're only human.  None of us is perfect – we all make misteaks (some more than most!).  

Most of the time we get away with them, but the odd moment’s inattention sometimes leads to a little accident and, in rare cases, causes something far more serious.

Researching the topic and writing July's security awareness module was quite entertaining at times.  Even the very words we use tend to raise a smile - blooper, boo-boo, boob, gaffe, gotcha, blunder and so on.  For those not directly affected, human errors - literal slip-ups included - can be hilarious: there are popular TV programs and YouTube channels devoted to this stuff.

On the other hand, the 2 micron error that nearly wrecked NASA’s Hubble Space Telescope was outrageously expensive and not funny at all, while the ‘Weapons of Mass Destruction’ intelligence error that led to the Iraq war was deadly serious and not in the least bit amusing.  With such a huge variety of bloopers to draw upon, we were truly spoiled for choice.

We built July's staff seminar around the infamous story of RMS Titanic, highlighting a litany of human errors, bad decisions and other information integrity issues that contributed to the loss of over 1,500 lives.

In the commercial context, errors and gaps within business information, plus mistakes made in interpreting and acting upon it, lead ultimately to the success or demise of the corporation.  Some can have societal impacts.  To take a recent example, the sub-prime mortgage fiasco that sparked a global economic crisis could be ascribed to severe errors of judgment by the banking industry and its regulators.  

It’s a common problem, well known to the psychologists who study gamblers.  So long as things are going well, there is a natural human tendency to “push our luck”.  Couple that with raw greed and shortsightedness, and it’s surely only a matter of time before the walls come crashing down.  

A security awareness module, no matter how creative and engaging, won’t change human nature overnight but being aware of the issue is a good start on the road to doing something positive about it.  We know of no better tool to shift the corporation from an ingrained culture of carelessness and mediocrity to one of quality and genuine, widespread concern for information integrity.

Thursday 27 June 2013

IMF TO TAX LOWER CASE LETTERS

HAVING JUST RECEIVED THIS EMAIL MISSIVE FROM THE IMF WORLD REGULATORY OFFICE, ALL IN CAPS, I CAN ONLY ASSUME THAT THE IMF IS ABOUT TO IMPOSE A TAX ON LOWER CASE LETTERS: 
IMF WORLD REGULATORY OFFICE
INTERNATIONAL FUNDS REGULATORY AUTHORITY
INTER-CONTINENTAL DEBT RECONCILIATION DEPT.

FROM THE DESK OF: HONORABLE MRS. SARAH JONES DIRECTOR; IMF WORLD REGULATORY OFFICE.

ATTENTION:PROVISION OF AFFIDAVIT OF CLAIM CERTIFICATE FOR LEGAL COVER/ PROTECTION OF 15.5 MILLION GREAT BRITAIN POUNDS IN FAVOR OF YOU

TODAY IS ALREADY JUNE 2013 AND WE WANT TO BRING TO YOUR KIND AND HUMBLE NOTICE THAT A NEW BOARD OF DIRECTOR'S HAVE NOW TAKEN OVER THE AFFAIRS OF THIS OFFICE AND DURING OUR AUDITING WE FOUND OUT THAT YOUR FUND WORTH 15.5 MILLION GREAT BRITAIN POUNDS IS YET TO BE PAID TO YOU, THIS WAS A SHOCKING NEWS AND REVELATION TO THE NEW BOARD OF DIRECTOR'S.

ON OUR KNOWLEDGE OF THIS REVELATION A LENGTHY MEETING WAS HELD IMMEDIATELY BY THE NEW BOARD OF DIRECTORS ON YOUR BEHALF AND IT WAS DECIDED IN YOUR OWN FAVOR AND BEST INTEREST TO REDUCE ALL REQUIRED PAYMENT FOR THE CLEARANCE AND AFFIDAVIT OF CLAIM CERTIFICATE FOR YOUR OVER DUE TRANSFER WORTH 15.5 MILLION TO ONLY THE SUM OF 100 GREAT BRITAIN POUNDS SO THAT YOU CAN MEET UP WITH THE PAYMENT SO THAT YOUR TRANSFER CAN BE EFFECTED AND RELEASED TO YOU INTO ANY BANK ACCOUNT OF YOUR CHOICE WHERE EVER IN THE WORLD THAT YOU CHOOSE.

BE WARNED IN YOUR OWN BEST INTEREST TO STOP IMMEDIATELY ALL KIND OF COMMUNICATIONS WITH ANY OTHER OFFICE, GROUP, PERSON OR INDIVIDUAL OTHER THAN THIS OFFICE AND NEW BOARD OF DIRECTORS AS WE ARE THE ONLY OFFICE EMPOWERED TO SEE AND ENSURE YOU GET PAID YOUR DUE OWED FUND. COMMUNICATING WITH ANY OTHER OFFICE OTHER THAN THIS OFFICE WILL BE AT YOUR OWN RISK AND AT YOUR OWN DETRIMENT OF LOOSE. (BE WARNED)

PLEASE WE WANT YOU TO KNOW THAT YOU HAVE ONLY 24HOURS TO DO THIS PAYMENT SO WE CAN CLEAR,RELEASE AND EFFECT YOUR FUND WORTH 15.5 MILLION GREAT BRITAIN POUNDS IN OUR CARE TO ANY BANK ACCOUNT OF YOUR CHOICE SO WE ADVICE YOU TO PAY THE 100 GREAT BRITAIN POUNDS THROUGH THE WESTERN UNION MONEY TRANSFER OR MONEY GRAM MONEY TRANSFER TO THE BELOW INFORMATION AND SEND THE PAYMENT DETAILS TO ME

RECEIVERS NAME: FRANKLIN WOMA
RECEIVERS ADDRESS: LAGOS / NIGERIA
IT STRIKES ME THAT THIS IS NOT UNLIKE THE MEDIEVAL WINDOW TAX WHICH, THANKFULLY, WAS REPEALED BEFORE BILL GATES GOT INTO BUSINESS.  I HAVE WITNESSED THE TERRIBLE EFFECTS OF THE WINDOW TAX, INCLUDING "THE SMALLEST GOTHIC WINDOW IN THE WORLD" POINTED OUT BY A TOUR GUIDE ON A BUILDING IN BRUGES, BELGIUM.

POSSIBLY THE LOWER CASE TAX WILL AMOUNT TO 100 "GREAT BRITAIN POUNDS", WHATEVER THAT MEANS.  YOU'D HAVE THOUGHT THE IMF WOULD HAVE A CLUE BUT EVIDENTLY NOT.

AS YOU CAN SEE, I HAVE ALREADY SUPER-GLUED MY CAPS LOCK KEY AS A PREEMPTIVE MEASURE TO AVOID FURTHER LIABILITY FOR THE TAX.  HOWEVER, WE NOW KNOW THAT THE MERICANS HAVE BEEN NOT-VERY-SECRETLY CAPTURING OUR COMMUNICATIONS FOR YEARS SO I AM SAVING UP FOR THE INEVITABLE TAX BILL DUE TO MY FLAGRANT CONSUMPTION OF LOWER CASE LETTERS, AT MY OWN DETRIMENT OF LOOSE (I HAVE NOW BEEN WARNED).



PS AS IF TO CONFIRM MY SUSPICIONS ABOUT THE TAX, THE VERY NEXT MESSAGE IN MY SPAM BOX WAS ALSO ALL IN CAPS.

DEAR BENEFICIARY,

KINDLY NOTE THAT THIS MAIL MESSAGE IS STRICTLY FOR THOSE WHO HAVE BEEN SCAMMED BY NIGERIANS OVER THE YEARS OR THAT HAVE NOT YET RECEIVED THEIR OVER DUE CONTRACT PAYMENT. WE HAVE JUST CONFIRMED THAT YOU HAVE BEEN SCAMMED IN THE PAST BY NIGERIANS AND THE FEDERAL GOVERNMENT OF NIGERIA WANTS TO COMPENSATE YOU IMMEDIATELY.

THE FEDERAL MINISTRY OF FINANCE IN CONJUNCTION WITH THE FEDERAL SENATE AND THE PRESIDENCY GAVE THESE INSTRUCTIONS TO HAVE ALL UN-CLAIMED FUNDS RELEASED TO OWNERS AND SCAM VICTIMS COMPENSATED VIA ATM CARD WHICH WOULD BE PROCESSED BY UNION BANK NIGERIA PLC AND IMMEDIATELY SENT AND DELIVERED TO YOUR CONTACT ADDRESS VIA FEDEX COURIER. YOUR ATM CARD IS VALUED AT USD $5 MILLION ( $5,000,000,00 ) AND PLEASE NOTE THAT YOUR ATM CARD IS ALREADY ACTIVATED AND YOUR PIN NO IS 6312. YOU ARE ADVISED TO FORWARD THE UNDERLISTED INFORMATIONS SO WE CAN HAVE YOUR ATM CARD VALUED AT $5 MILLION SENT TO YOU AT NO COST.

1. YOUR FULL NAMES.
2. YOUR PRIVATE TELEPHONE NUMBERS.
3. YOUR CONTACT ADDRESS.
4. OCCUPATION.

WE SHALL DO ANYTHING POSSIBLE TO MAKE SURE YOU RECEIVE YOUR FUND (USD $5 MILLION ) IN NO DISTANT TIME.

REGARDS,
TONY ADAMS

UNION BANK OF NIGERIA


OF COURSE THIS COULD CONCEIVABLY BE A SCAM.  PERHAPS THE IMF WILL BE TAXING CAPITALS AFTER ALL, THIS BEING A CYNICAL ATTEMPT TO RAISE REVENUE.


Wednesday 26 June 2013

SMotW #63: infosec budget variance

Security Metric of the Week #63: information security budget variance



This is, self-evidently, a financial information security metric, but what exactly is "Information security budget variance"?  Now there's the rub.

You might interpret it as a measure of the discrepancy between budgeted, permitted, authorized or allocated funds for information security and actual expenditure.  The illustrative graph above is a view of Acme Enterprise's information security budget variance on this basis over the course of a year, showing actual relative to predicted security expenditure (the zero dollar horizontal axis representing the budgeted spend).  Things are looking pretty grim for the first quarter but gradually improve as (presumably) firm action is taken to correct the overspend.  It looks as if there might even be a small surplus at the end of the year, perhaps enough to afford some discretionary expenditure such as a boost to the security awareness and training budget, or maybe a management away-day to work on the organization's security metrics!  This is an example of a management metric that would be valued by the CISO or Information Security Manager, and may be of some concern to higher and lower strata.
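To make the first interpretation concrete, here is a minimal sketch (in Python, with invented figures rather than Acme's actual numbers) computing the monthly variance between actual and budgeted security spend - essentially the data series behind a graph like the one described above:

```python
# Sketch: monthly information security budget variance (actual minus budgeted
# spend).  The annual budget, its even monthly split and the actual figures
# are invented for illustration - they are not Acme's numbers.
annual_budget = 1_200_000                 # hypothetical annual infosec budget
monthly_budget = annual_budget / 12

actual_spend = [130_000, 122_000, 115_000, 105_000, 98_000, 95_000,
                92_000, 90_000, 88_000, 86_000, 84_000, 82_000]

for month, actual in enumerate(actual_spend, start=1):
    variance = actual - monthly_budget    # positive = overspend
    pct = 100 * variance / monthly_budget
    print(f"Month {month:2d}: variance {variance:+10,.0f} ({pct:+.1f}%)")

year_to_date = sum(actual_spend) - monthly_budget * len(actual_spend)
print(f"Year-to-date variance: {year_to_date:+,.0f}")   # negative = small surplus
```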

Alternatively, you might believe it refers to changes in the information security budget from year to year.  For example, a budget that has remained static for years, despite the ever-increasing number and severity of security incidents plus a growing burden of regulatory compliance, might be used to justify a significant increase in the security budget next year.  This would be a strategic metric with a comparatively long timeline, of greatest interest to senior/executive management, the CISO and the CFO.

Acme managers might use the PRAGMATIC scores for these two quite different metrics to assess their worth and decide whether to use neither, either or both of them, depending on what other metric options are on the table.  No doubt in the course of considering the PRAGMATIC ratings, Acme management would think of possible drawbacks or issues (such as the practical difficulty of accurately measuring the total organization-wide expenditure on information security, which far exceeds the Information Security Management Department's budget) and perhaps come up with refinements (such as considering the benefits as well as the costs) to improve their scores.

At a more basic level, different Acme managers might unknowingly start out with distinct perspectives and objectives for the metric titled "Information security budget variance", differences that would come to a head almost as soon as the PRAGMATIC process kicked off.  Better now than later, when "Information security budget variance" ends up in some management report and the recipients interpret the metric in radically different ways, without even appreciating that their interpretations differ!

Tuesday 25 June 2013

Cart << horse

When we first met and started discussing information security metrics, Krag and I soon realized we shared the view that there are loads of possible metrics out there.  Anyone out shopping for security metrics is spoiled for choice, facing a bewildering array of things they could measure.  Far from being short of possible metrics, we face the opposite problem, choosing which of the plethora of metrics on offer to go with.  

Most people writing about security metrics propose or recommend specific metrics.  The better ones at least make the effort to explain what the metrics are about, and a few take the trouble to justify their choices.  Here's a single example, a list of over 40 metrics recommended by Ben Sapiro on the LiquidMatrix blog:
Time to patch; time to detect; time to respond; system currency; time to currency; population size; vulnerability resilience/vulnerable population; average vulnerabilities per host; vulnerability growth rate versus review rate; infection spread rates; matched detection; unknown binaries; failure rates; restart rate; configuration mismatch rate; configuration mismatch density; average password age and length; directory health; directory change rate; time to clear quarantine; access error rates per logged in user; groups per user; tickets per user; access changes per user; new web sites visited; connections to botnet C&C’s; downloads and uploads per user; transaction rates; unapproved or rejected transactions; email attachment rates; email rejection/bounce rates; email block rates; log-in velocity and log-in failures per user; application errors; new connections; dormant systems; projects without security approval; changes without security approval; average security dollars per project; hours per security solution; hours on response; lines of code committed versus reviewed; and application vulnerability velocity.
That's not a bad list, as it happens, of readily-automated technical/IT security metrics.  Ben briefly explains each one, averaging about 30 words per metric.  He writes well and manages to squeeze quite a lot of meaning into those 30-odd words, hinting at what the metric really tells you, but inevitably there is far more left unsaid than said - not least, there's the issue of what other metrics Ben may have considered and rejected when compiling his shortlist, and on what basis he chose those 40+ metrics.   

If you're not yet convinced, sir, try on these lists, catalogs and sources of security metrics for size: CIS, OWASP, NIST, MetricsCenter, nCircle, ProjectQuant, ThirdDefense ... I could go on, but I'll leave the last word to Debra Herrmann's remarkable Complete Guide to Security and Privacy Metrics, all 800+ pages of it.

It's a bit like a child being spoon-fed medicine. "Here, take this, it's good for you".  It's the "Trust me" approach favored by vendors pushing complex technical products on an ignorant, naive or dubious market.  To put that another way, there is a strong tendency for metrics proponents to offer solutions (often their pet metrics) without taking the trouble to understand the problems.  Worse still, most are implicitly framing or bounding the problem space as a technical rather than a business issue by restricting the discussion to technical metrics derived from technical data sources.

What makes a given metric a good or a bad choice?  On the whole, the existing body of research on this topic failed to address this relatively straightforward issue well enough to offer usable, practical advice to busy CISOs, ISMs, ITSMs, risk managers and executives grappling with information security issues.  Whereas Andrew Jaquith, Dan Geer, Lance Hayden and others have tackled various parts of the issue, each in their own way, there was definitely something lacking.  In particular, we noticed a strong tendency to focus on automated, technical metrics i.e. the statistics spewed forth by most security systems, the logical extreme being SIEM (an expensive technical solution ... for what business problem, exactly?). 

We wrote about this at some length in PRAGMATIC Security Metrics.  Chapter 5 leads you on a voyage of discovery through a multitude of sources of candidate metrics, and chapter 6 lays out the PRAGMATIC criteria and method for honing a long list down to a short one while figuring out the problems that your metrics are hopefully going to solve.  If you know what questions have to be answered, you know what information you need, and hence the metrics all but choose themselves.

Friday 21 June 2013

Honing your presentation skills for security awareness

Today on CISSPforum we've been chatting about Death by PowerPoint, the feeling that badly constructed and delivered presentations are not just tedious but counterproductive. Notable examples include eye-candy, wordy slides, cool but distracting infographics and "When we understand that slide, we'll have won the war". This stuff is particularly important in topics as complex and arcane as information security.

I’m not sure why PowerPoint is always in the dock, other than the routine M$-bashing. It’s just a tool, one of many. It seems to me the problem lies not so much with the tools as with the craftsmen and women who wield them so ineptly and inappropriately.

You will rarely see the most accomplished, professional presenters using PowerPoint, or in fact any slides or handouts. They are positively overflowing with personality and expressiveness. They have presence and an infectious passion. They are naturals, true artists (though I'm sure we could argue nurture vs nature). For the rest of us, they are inspirational role models on how to present.

Seminars and conferences, documentaries, TV and radio interviews, training courses, sales pitches and political speeches are fabulous learning opportunities if you switch your focus from “What on Earth is he/she going on about?” to “How on Earth is he/she expressing it?” If anything, the less interest/knowledge you have in the subject the better as it means less distraction. It’s easy to make the switch from content to method if the presentation is terrible, boring, stilted and flat, but surprisingly hard if the presenter is passionate verging on evangelical, skilled and competent, has good material to work with, and knows how to hook you and spark your attention, even if you don’t (initially) care about the subject.

From the presenter’s perspective, it helps to remember that some of your audience hear what you’re saying, some see what you mean, some feel empathetic to your points … and some are texting or otherwise distracted, while a select few are quietly watching and listening to how you put it across (!). Some of us “think in pictures”, some worship the written word. There is no universal way to make a brilliant presentation since not only is every situation unique (inter/national conference <> sales meeting <> board meeting <> team meeting <> training course <> water-cooler-chat <> email <> phone <> Twitter) but every member of the audience is different and wants something different out of it, ranging from “the inside track” and “cool new ideas” to “a steer on what to do” or “something easy after lunch”. Picturing things from their perspectives and pandering to their information needs rather than just yours, is a vital part of preparing killer presentations.

Why not prepare and deliver your seminars like IT systems? Follow the waterfall from requirements specification (what is the seminar meant to achieve?) through design (key messages, the story-line, delivery modes) to development (crafting the content), testing (rehearsals, refinement), delivery (showtime!) and don't forget the outcome (what did it achieve, and what could I do better next time?). Measure, rinse, repeat. Practicing and learning from others is the key to getting better.

Likewise with a zillion other websites, blogs/articles and books about PowerPoint and presenting in general. There's loads of advice Out There if you are willing to sift through it, learn how to apply it and try it out. If you are naturally creative and innovative, you have a head start over the majority of IT and information security people. If not, why not collaborate with someone who is? I do, and I get a lot out of the interactions.

Whenever I look back through the archive of awareness materials I have written, I inevitably see opportunities for improvement and indeed every time I dust-off and revise an awareness module, I always find better - or at least different - ways to express things, as well as new things to say. Some of the old stuff really makes me cringe! Going forward, I hope I never stop learning and improving. It's a journey I hope will never end. It's my life.


PS I'm conscious that this rant is wordy, with no fancy graphics. My bad ... but you are still reading!

Thursday 20 June 2013

More security metrics from another vendor survey

A website security survey by White Hat Security makes the point that 'a comprehensive metrics program' is valuable:
"The tactical key to improving a web security program is having a comprehensive metrics program in place – a system capable of performing ongoing measurement of the security posture of production systems, exactly where the proverbial rubber meets the road.  Doing so provides direct visibility into which areas of the SDLC program are doing well and which ones need improvement. Failure to measure and understand where an SDLC program is deficient before taking action is a guaranteed way to waste time and money - both of which are always extremely limited."
Naturally, we agree with them that a 'comprehensive metrics program' (whatever that might be) is A Good Thing ... but it's not entirely clear to me how they reached that particular conclusion from the survey data. Worse still, the survey design raises serious questions, such as whether 79 respondents are sufficient to generate statistically meaningful data, how those 79 respondents (and presumably not others) were selected, and exactly what they were asked ...

If you've been following our series about the Hannover/Tripwire survey (the introduction followed by parts one, two, three, four and five) this is an opportunity to think through the same kind of issues in the context of another vendor-sponsored survey.

Once again, I'd like to point out that I'm not saying such reports are worthless, rather that you need to read them carefully to counteract their natural bias.  It's a rare vendor-sponsored survey that doesn't have an agenda and/or serious flaws in the methodology, analysis and reporting.  Recognizing that is half the battle.

To be fair to White Hat Security, the report does outline some of their methods towards the end, mostly relating to their commercial website security assessment service, although the survey of 79 respondents is not well described.

Personally, I enjoy reading surveys to find out which metrics the authors have chosen to measure their subjects, to learn both good and bad practices concerning experimental design etc., and to grab the odd soundbite such as the paragraph above (quoted out of context, I admit) for my own biased purposes. Vendor-sponsored studies may or may not be scientifically sound, but so long as they make us think about the underlying issues, that's better than nothing, isn't it?

SMotW #62: security policy management maturity

Security Metric of the Week #62: security policy management maturity




As with the other ‘maturity metric’ examples given in the book (e.g. those for asset management, physical security, HR, business continuity and compliance) we envisage this metric as a scoring scale using predefined criteria against which the organization's security policy management practices are assessed and rated.

Here's the first of four rows from the example policy maturity metric in Appendix H:

0% - no information security policy management: there is nothing even remotely resembling a security policy as such.

33% - basic information security policy management: there is a security policy of sorts, although probably of poor quality (e.g. badly worded or inconsistent), incomplete and/or out-of-date, with some elements undocumented.

67% - reasonable information security policy management: the information security policy is documented, reasonably complete, accurate and up-to-date, reflecting most corporate and external obligations, albeit somewhat stilted and difficult to read and apply in places, and perhaps with limited coverage of topical issues such as cloud computing.

100% - excellent information security policy management: the information security policy materials are formalized, entirely complete, accurate, up-to-date, consistent and readable, explicitly reflecting a documented set of high level security principles, fully reflecting all corporate and external obligations and promoting generally accepted good security practices.

On each row, there are four scoring criteria denoting scores of 0, 33, 67 and 100 points on the percentage scale.  There is also a fifth, implied point: 50% marks the boundary between unacceptable (scores less than 50%) and acceptable (scores greater than 50%).

The scoring criteria are written in order to give the assessor a good steer for the kinds of information security policy management practices to look out for at each maturity level, yet these are merely examples rather than firm requirements.  For instance, the 33% scoring point on this row clearly refers to the presence of something resembling a 'security policy' (in marked contrast to the 0% point), but calls into question the quality and status of the document (again, distinguishing it from the higher scoring points).  If that is a fairly accurate description of the situation, the assessor can simply award a score of 33% for that row and move on to the next, but he/she has the discretion to award slightly higher or lower scores to reflect the unique way that the organization manages its security policies.  This allows some leeway to acknowledge strengths and weaknesses that may not be shown in the scoring criteria, or that may appear at different points on the scoring scale (e.g. if the security policy is formally documented but the quality of the document is poor, it might merit a score of say 40 or 50%).
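To illustrate how the row scores might roll up, here is a rough sketch that assumes the assessor records a 0-100% score per row and that the overall maturity metric is simply their unweighted average; the row names and scores are invented for the purpose and are not the actual Appendix H rows:

```python
# Sketch: rolling four per-row maturity scores up into one overall figure,
# assuming a simple unweighted average.  Row names and scores are illustrative.
row_scores = {
    "Policy existence and quality": 40,
    "Policy coverage of obligations": 33,
    "Policy maintenance and review": 50,
    "Policy awareness and compliance": 67,
}

overall = sum(row_scores.values()) / len(row_scores)
print(f"Security policy management maturity: {overall:.0f}%")

for row, score in row_scores.items():
    status = "acceptable" if score >= 50 else "unacceptable"   # 50% boundary noted above
    print(f"  {row}: {score}% ({status})")
```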

Although this was not the top-scoring policy metric, it is clear from the metric's PRAGMATIC score that Acme's management were impressed with this one:
  
P     R     A     G     M     A     T     I     C     Score
90    95    70    80    88    85    90    82    88    85%
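The overall figure is consistent with a simple mean of the nine criterion ratings, rounded to the nearest whole percent; a trivial sketch of that arithmetic:

```python
# Sketch: overall PRAGMATIC score computed as the mean of the nine criterion
# ratings (consistent with the 85% shown above).  The two A criteria get
# distinct dictionary keys so they don't overwrite one another.
ratings = {"P": 90, "R": 95, "A1": 70, "G": 80, "M": 88,
           "A2": 85, "T": 90, "I": 82, "C": 88}

score = sum(ratings.values()) / len(ratings)
print(f"PRAGMATIC score: {score:.0f}%")   # -> 85%
```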

The scoring process and/or the Meaning of the final score may need to be explained when the metric is reported, for instance highlighting particular rows in the table against which the organization scored relatively strongly or weakly to demonstrate how the final score was determined.  Doing so would be an opportunity to address the Actionability issue, since the detailed findings indicate particular things that Acme could be doing to improve its maturity score.

By the way, the very act of drawing up or refining the scoring criteria used in  maturity metrics like this is itself a sign of maturity in the organization’s approach to security metrics.  It takes some thought and effort to prepare the criteria, including research into good practices.  Gray-beard IT auditors or information security management professionals have generally experienced a wide variety of good and bad practices in past assignments, while there is plenty more advice in information security standards and methods concerning the kinds of things that the organization ought to be doing.

Monday 17 June 2013

Hannover/Tripwire metrics final part 5 of 5

So far in this series of bloggings, I have critiqued the top five metrics identified in the Hannover Research/Tripwire CISO Pulse/Insight Survey.  I'll end this series now with a quick look at the remaining six metrics and an overall conclusion.

Metric 6: "Legitimate e-mail traffic analysis"

While the analysis might conceivably be interesting, isn't the metric the output or result of that analysis rather than the analysis itself?  I'm also puzzled at the reference to 'legitimate' in the metric, since a lot hinges on the interpretation of the word.  Is spam legitimate?  Are personal emails on the corporate email system legitimate?  Where do you draw the line?  Working on the assumption that this metric, like the rest, is within the context of a vulnerability scanner system, perhaps the metric involves automatically characterizing and categorizing email traffic, then generating statistics.  Without more information, the metric is Meaningless.


Metric 7: "Password strength"

This could conceivably be a fairly sophisticated metric that takes into account a wide variety of characteristics of passwords (such as length, complexity, character set, character mix, predictability, quality of the hashing algorithm, time since last changed, relationship to known or readily guessed factors relevant to the users, relationship to users' privilege levels or data access rights and so on) across multiple systems.  More often, it is a much simpler, cruder measure such as the length of an individual password at the point it is being entered by a user, or the minimum password length parameter for servers or applications.  Both forms have their uses, but again without further information, we don't know for sure what the metric is about.
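At the cruder end of that spectrum, here is an illustrative sketch of a per-password score based only on length and character variety - typical of the simpler form of the metric, and certainly not the sophisticated multi-factor version described above:

```python
# Sketch: a crude password strength estimate using only length and character
# classes - illustrative of the simpler form of the metric, not a recommendation.
import math
import string

def crude_strength_bits(password: str) -> float:
    """Rough entropy estimate: length * log2(size of the character pool used)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

for pw in ["password", "Password1", "C0rrect-Horse-Battery!"]:
    print(f"{pw!r}: ~{crude_strength_bits(pw):.0f} bits")
```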

Metric 8: "Time to incident recovery" and Metric 9: "Time to incident discovery"

These metrics concern different parts of the incident management process.  At face value, they are simple timing measures but in practice it's not always easy to determine the precise points in time when the clock starts and stops for each one.  
Metric 8 implies that incidents are recovered (not all are), and that the recovery is completed (likewise).  If metric 8 were used in earnest, it would inevitably put pressure on people to close off incidents as early as possible, perhaps before the recovery activities and testing had in fact been finished.  This could therefore prove counterproductive.
Metric 9 hinges on identifying when incidents occurred (often hard to ascertain without forensic investigation) and when they were discovered (which may coincide with the time they were reported but is usually earlier).  The metric is likely to be subjective unless a lot of effort is put into defining the timepoints.  The tendency would be to delay the starting of the timer (e.g. by arbitrarily deciding that  an incident only counts if the business is impacted, and the time of that impact is the time of the incident), and to stop the timer as early as possible (e.g. by making presumptions about the point at which someone may have first 'spotted something wrong').  The accuracy and objectivity of the metric could be improved by more thorough investigation of the timing points, but that would increase the Costs at least as much as the benefits.
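Assuming the start and stop points can be agreed, the arithmetic itself is trivial; a sketch using invented incident records (and one of several possible timing conventions):

```python
# Sketch: mean time to discovery and mean time to recovery, computed from
# incident records.  Assumes each incident has agreed 'occurred', 'discovered'
# and 'recovered' timestamps - which, as noted above, is the hard part.
from datetime import datetime
from statistics import mean

incidents = [   # purely illustrative records
    {"occurred": datetime(2013, 5, 1, 9, 0),
     "discovered": datetime(2013, 5, 1, 17, 30),
     "recovered": datetime(2013, 5, 3, 12, 0)},
    {"occurred": datetime(2013, 5, 10, 2, 0),
     "discovered": datetime(2013, 5, 12, 8, 0),
     "recovered": datetime(2013, 5, 13, 9, 0)},
]

def hours(delta):
    return delta.total_seconds() / 3600

# Time to discovery: occurrence -> discovery.  Time to recovery is measured
# here from discovery -> recovery (one possible convention among several).
mttd = mean(hours(i["discovered"] - i["occurred"]) for i in incidents)
mttr = mean(hours(i["recovered"] - i["discovered"]) for i in incidents)

print(f"Mean time to discovery: {mttd:.1f} hours")
print(f"Mean time to recovery:  {mttr:.1f} hours")
```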

Metric 10: "Patch latency"

On the assumption that this is some measure of the time lag between release of [security relevant] patches and their installation, this could be a useful metric to drive improvements in the efficiency of the patching process provided care is taken to avoid anyone unduly short-cutting the process of assessing and testing patches before releasing them to production.  Premature or delayed implementation could both harm security, implying that there is an ideal time to implement a given patch.  Unfortunately, it's hard to ascertain when the time is just right as it involves a complex determination of the risks, which vary with each patch and situation (e.g. it may be ideal to implement patches immediately on test or development systems, but most should be delayed on production systems, especially business-critical production systems).

Metric 11: "Information security budget as a % of IT budget" 

This is, quite rightly in my opinion, the least popular metric among survey respondents.  
It presumes that security and IT budgets are or should be linked.  That argument would be stronger if we were talking about IT security, but information security involves much more than IT e.g. physical security of the office.
In reality, there are many factors determining the ideal budget for information security, the IT budget being one of the least important.

Concluding the series

A few of the metrics in the Hannover Research/Tripwire CISO Pulse/Insight Survey only make much sense in the narrow context of measuring the performance of a vulnerability scanner, betraying a distinct bias in the survey.  Others are more broadly applicable to IT or information security, although their PRAGMATIC scores are mediocre at best.  Admittedly I have been quite critical in my analysis and no doubt there are situations in which some of the metrics might be worth the effort.  However, it's really not hard to think of much better security metrics - just look back through the Security Metrics of the Week in this blog, for instance, or browse the book for lots more examples.  Better still, open your eyes and ears: there's a world of possibilities out there, and no reason at all to restrict your thinking to these 11 metrics.

If you missed the previous bloggings in this series, it's not too late to read the introduction and parts one, two, three and four.

Wednesday 12 June 2013

SMotW #61: % of policies linked to objectives

Security Metric of the Week #61: proportion of information security policy statements unambiguously linked to control objectives

Measuring is one way to reinforce the linkage between policy statements and higher level control objectives or axioms.  Policies that bear no relation to control objectives/axioms raise the question: what are they meant to achieve?  How will the organization determine whether they are effective if the intended outcome is uncertain?  What is the justification for compliance with the policy, and what are the implications of low compliance?

Conversely, a strong security policy with a specific, legitimate purpose that cannot be linked to a control objective or axiom implies the need to fill a gap in the high-level control framework.
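Once the policy statements and control objectives have been catalogued, computing the metric itself is straightforward; a minimal sketch with hypothetical identifiers:

```python
# Sketch: proportion of information security policy statements unambiguously
# linked to control objectives.  The statements and mappings are placeholders.
policy_links = {
    "POL-01 Access control": ["CO-03"],
    "POL-02 Acceptable use": ["CO-01", "CO-07"],
    "POL-03 Clear desk":     [],           # no control objective identified
    "POL-04 Cryptography":   ["CO-05"],
}

linked = sum(1 for objectives in policy_links.values() if objectives)
proportion = 100 * linked / len(policy_links)
print(f"{proportion:.0f}% of policy statements are linked to control objectives")  # -> 75%
```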

PRAGMATIC ratings:
P     R     A     G     M     A     T     I     C     Score
92    91    64    60    85    65    45    75    75    72%




"Unambiguously linked" leaves some wiggle room for subjective interpretation, while reviewing and assessing the linkages across the entire policy suite will inevitably take some Time to achieve.

72% is a pretty good PRAGMATIC score, making this a metric well worth considering unless there are other, even-higher-scoring metrics that would achieve the same ends more effectively and efficiently.  If ACME Enterprises Inc. had identified concerns in relation to their policy coverage, this metric might be just the ticket to drive a policy review and improvement project, and perhaps it might be reported every year or two thereafter as an assurance measure.  You could say that the process and the metric need each other.

Monday 10 June 2013

The yin and yang of metrics


Many aspects of information security that would be good to measure are quite complex.  There are often numerous factors involved, and various facets of concern.  Take ‘security culture’ for example: it is fairly straightforward to measure employees’ knowledge of and attitudes towards information security using a survey approach, and that is a useful metric in its own right.  It becomes more valuable if we broaden the scope to compare and contrast different parts of the organization, using the same survey approach and the same survey data but analyzing the numbers in more depth.  We might discover, for instance, that one business unit or department has a very strong security culture, whereas another is relatively weak.  Perhaps we can learn something useful from the former and apply it to the latter.  This is what we mean by ‘rich’ metrics.  Basically, it involves teasing out the relevant factors and getting as much useful information as we can from individual metrics, analyzing and presenting the data in ways that facilitate and suggest security improvements.
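In code terms, the 'rich' analysis amounts to little more than cutting the same survey responses by business unit; a sketch with fabricated scores:

```python
# Sketch: a 'rich' security culture metric - the same survey data, analyzed
# organization-wide and then by department.  Scores are fabricated (1-5 scale).
from statistics import mean

responses = [
    {"dept": "Finance",    "score": 4.2},
    {"dept": "Finance",    "score": 4.5},
    {"dept": "Operations", "score": 2.8},
    {"dept": "Operations", "score": 3.1},
    {"dept": "IT",         "score": 3.9},
]

overall = mean(r["score"] for r in responses)
print(f"Organization-wide culture score: {overall:.1f}")

by_dept = {}
for r in responses:
    by_dept.setdefault(r["dept"], []).append(r["score"])

for dept, scores in sorted(by_dept.items()):
    print(f"  {dept}: {mean(scores):.1f}")   # compare strong vs weak departments
```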

‘Complementary’ metrics, on the other hand, are sets of distinct but related metrics that, together, give us greater insight than any individual metric taken in isolation.  Returning to the security culture example, we might supplement the employee cultural survey with metrics concerning security awareness and training activities, and compliance metrics that measure actual behaviors in the workplace.  These measure the same problem space from different angles, helping us figure out why things are the way they are. 

Complementary metrics are also useful in relation to critical controls, where control failure would be disastrous.  If we are utterly reliant on a single metric, even a rich metric, to determine the status of the control, we are introducing another single point of failure.  And, yes, metrics do sometimes fail.  An obvious solution (once you appreciate the issue, that is!) is to make both the controls and the metrics more resilient and trustworthy, for instance through redundancy.  Instead of depending on, say, a single technical vulnerability scanner tool to tell us how well we are doing on security patching, we might use scanners from different vendors, comparing the outputs for discrepancies.  We could also measure patching status by a totally different approach, such as patch latency or half-life (the time taken from the moment a patch is released to apply it successfully to half of the applicable population of systems), or a maturity metric looking at the overall quality of our patching activities, or metrics derived from penetration testing.  Even if the vulnerability scanner metric is nicely in the green zone, an amber or red indication from one of the complementary metrics should raise serious questions, hopefully in good time to avert disaster.
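The patch half-life mentioned above is simple to compute once per-system installation timestamps are available; a sketch with invented data:

```python
# Sketch: patch half-life - elapsed time until a patch has been applied to half
# of the applicable population of systems.  Hostnames and timestamps are invented.
import math
from datetime import datetime

patch_released = datetime(2013, 6, 1)

installed_at = {             # None = not yet patched
    "srv-01": datetime(2013, 6, 1, 6, 0),
    "srv-02": datetime(2013, 6, 2, 14, 0),
    "srv-03": datetime(2013, 6, 5, 9, 0),
    "srv-04": None,
    "ws-001": datetime(2013, 6, 3, 11, 0),
    "ws-002": None,
}

population = len(installed_at)
days_to_patch = sorted((t - patch_released).total_seconds() / 86400
                       for t in installed_at.values() if t is not None)

half = math.ceil(population / 2)
if len(days_to_patch) >= half:
    print(f"Patch half-life: {days_to_patch[half - 1]:.1f} days")
else:
    print("Half of the applicable systems have not yet been patched")
```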

A natural extension of this concept would be to design an entire suite of security metrics using a systems engineering approach.  We expand on this idea in the book, describing an information security measurement system as an essential component of, and natural complement to, an effective information security management system.