Monday 29 April 2013

Fraud awareness module released

Frauds, scams, swindles and cons involve taking advantage of victims through the use of deception, which is itself a form of social engineering.  As such, fraud definitely qualifies as an information security concern, making it a valid topic for the security awareness program.  What’s more, fraud is an inherently fascinating subject.  The deviously creative nature of fraudsters means they find surprising ways to dupe and manipulate people, processes and systems, undermining or bypassing controls that superficially appear sound.

Fraudsters may exist within or without the organization, sometimes both.  Procurement frauds, for instance, often involve dishonest or coerced employees acting in collusion with external suppliers to misappropriate the organization’s funds.  Collusion between individuals is a particularly challenging concern in relation to fraud since it negates a very important form of control – the division of responsibilities between individuals.

The breakdown of trust is another problem with fraud, a serious consequence given that commerce and society revolve around trust.  I'm deep into Bruce Schneier's latest book Liars and Outliers at the moment, and intrigued by the concept that fraudsters, hackers and other adversaries are 'defectors' who choose to ignore the explicit and implicit rules of society.  I'm sure I'll be drawing on that thought in future awareness modules and bloggery.

Wednesday 24 April 2013

Securing the security metrics


At the risk of appearing security-obsessed, I'd like to explore the information security risks and control requirements that should be taken into account when designing an information security measurement system, particularly if (as is surely the aim) the metrics are going to materially affect the organization's information security arrangements.

I'm talking here about the measurement system as a whole, not just the elements and metrics within it.   Information security is undoubtedly a concern for the executive suite's information security dashboard, the metrics database maintained by the CISO, and the monthly metrics report, but I'm taking a broader perspective.

It is not appropriate for me to propose specific information security controls for your information security measurement system since I can barely guess at your circumstances - the threats, vulnerabilities and impacts relating to your security metrics, or your broader business situation.  However, the rhetorical questions that follow will hopefully prompt you to explore the security and related requirements for your metrics, and think about what matters to your organization:
  • How are the source data for metrics - the base measurements - obtained?  Are the sources themselves (both automated and manual) sufficiently trustworthy?  Where are the biases, and how severe are they?  Are sufficient data points available to generate valid and useful statistics?  How much variability/variation is there, and how much should there be?  Does measurement variance itself qualify as a worthwhile metric?!?  (There's a simple sanity-check sketch after this list.)
  • Who is gathering, storing and processing measurements?  Are they sufficiently competent, diligent and trustworthy?  Are they well trained?  Do they follow documented procedures?  Are the criteria and processes for taking measurements sufficiently well defined so as to avoid ambiguity and to reduce the potential for abuse or fraud (e.g. selective use of ‘beneficial’ or positive data and disregard of negative values)?
  • What about the IT systems and programs supporting the measurement processes: has anyone actually verified that analytic tools, spreadsheets and databases are correctly, accurately and completely processing measurement data?  Are changes to the systems that generate, analyze and present security metrics properly managed, for instance are code or design changes adequately specified and tested before release?  If the measurement processes or systems change, are prior data properly re-based or normalized for trends analysis?
  • Is there a rational and systematic process for proposing, considering and selecting security metrics?  Does it cope with changes to the information requirements or emphasis/priorities, new opportunities, newly-identified information gaps or constraints, novel metrics suggestions etc.?  Is there a rational mechanism for specifying, testing, implementing, using, managing, maintaining and eventually retiring metrics?
  • Do metrics reporting processes accurately present ‘the truth, the whole truth, and nothing but the truth’?  How can we ensure sufficient objectivity and accuracy of reported data, and what do we mean by 'sufficient' anyway?  Is there potential at any part of the process for malicious actors to meddle with things?  Where are the weakest points?  Where might the threats originate?  What's in it for them?
  • Are good decisions made sensibly and rationally on the basis of the metrics?  Is the information used in the best interests of the organization?  Or are people intentionally or unknowingly playing games with them?  Does anyone monitor this kind of thing, or indeed the other issues raised here, and act accordingly?
  • How reliable are the metrics?  How reliable do they need to be?  Are some metrics absolutely crucial, supporting business decisions that could prove disastrous if incorrect?  Are there corroborating sources for such metrics, or ways to cross-check the metrics and/or correct the decisions?  Are any of the metrics of limited/marginal value, making them candidates for retirement, reducing the amount of distracting noise as well as cutting costs?
  • How serious would it be if the metrics turned up late?  Would important meetings or decisions be delayed?  Would this cause compliance issues?  What if the metrics were completely missing - for whatever reason they could no longer be provided?  Would people be forced to limp along without them?  Might there be alternative sources of information - and if so, would they be as good?  Are there situations where rough estimates, provided much sooner, would be at least as good if not better than more accurate and factual metrics provided later?
  • Given that they concern the organization's information security, are the metrics commercially confidential?  Are any of them particularly sensitive?  Would anyone else be interested in them, outside the intended audience?  Could they infer decisions and actions on security, incident levels and costs, vulnerabilities etc. from the metrics?  Conversely, are any of the metrics suitable for wider publication, consideration and use, for example in awareness or marketing?  Would any of them be beneficial and valuable for employees in general, business partners, sales prospects, authorities/regulators, auditors, owners or other stakeholders?  Are any of them dangerous in the governance sense, being undeniable evidence that management has been made aware of certain issues and consequently can be held to account for their decisions?
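By way of illustration, here's a minimal sketch (in Python, with entirely made-up numbers and an arbitrary threshold) of the kind of sanity-checking implied by the first question above - counting the data points and looking at the variability of the base measurements before anyone builds a metric on top of them:

    # Minimal sketch: sanity-check a series of base measurements before using them.
    # The sample data, threshold and example metric are hypothetical, purely for illustration.
    from statistics import mean, stdev

    def summarize(measurements, min_points=12):
        """Return basic quality indicators for a series of base measurements."""
        n = len(measurements)
        return {
            "count": n,
            "enough_data": n >= min_points,                    # are there enough data points?
            "mean": mean(measurements) if n else None,
            "stdev": stdev(measurements) if n > 1 else None,   # variability - itself a candidate metric
        }

    # e.g. weekly counts of reported security incidents taken from a manual log
    weekly_incidents = [12, 9, 15, 11, 40, 10, 13, 12]
    print(summarize(weekly_incidents))

Nothing clever there, but even that much would flag a suspiciously short series, or an outlier (that 40) worth investigating before anyone reports a trend to management.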

To close, I'll just mention that these generic considerations apply in much the same way to virtually ANY measurement system in ANY context: financial metrics, HR metrics, strategic metrics, risk metrics, product metrics, health and safety metrics, societal and political metrics, scientific metrics ... you name it.  Maybe it's worth talking to your colleagues about their metrics too.

Security metric #54: documentation of important operations

Security Metric of the Week #54: number of important operations with documented and tested procedures

At first glance, this week's example metric doesn't sound very promising.  The wording is ambiguous, its value unclear and its purpose uncertain.

If you were a senior executive sitting on "mahogany row", trying to select some information security metrics to use as part of your governance role, you might well be tempted to reject this metric without further ado, given obvious concerns such as:

  1. It implies a straightforward count, giving no indication of how many "important operations" remain to be procedurally documented and tested. 
  2. What are "important operations" anyway?   Is it referring to business processes, information security processes, IT processes, activities, tasks or something else?  Who decides which ones qualify as "important", and on what basis?
  3. "Documented and tested" sticks two distinct issues into the one metric.  Does it mean that the documentation is tested, or the "important operations" are tested, or both?

On the other hand, if there were a corporate policy requiring the organization's business-critical processes to be documented and the documentation quality tested, this could be a worthwhile compliance metric, useful to drive through the documentation of key business operations.  A simple graph of the metric might show the number of such processes that are both documented and tested (upper line) and documented but not yet tested (lower line), addressing point 3.  Furthermore, if the policy explicitly referred to "the top fifty business-critical processes" or the top ten or whatever, then concern 1 would also be addressed.


It is clear from their analysis that ACME's management took a real shine to this metric, giving it an overall PRAGMATIC score of 84%.  The phrase "important operations" evidently means something specific in ACME's corporate lingo, and since they also rated the metric high on Predictability and Relevance, they must believe that the documentation and testing of those "important operations" is key to ACME's information security.

This is a classic illustration of the drawbacks of those generic lists or databases of 'recommended' or 'best practice' or 'top N' information security metrics.  The organizations and individuals behind them undoubtedly mean well but, as Anton Aylward quite rightly keeps reminding us, context is everything.  In your situation, this metric may be as good as ACME's managers believe, maybe even better.  For many organizations, however, it is mediocre at best and probably outshone by others.  The PRAGMATIC method gives us the means not just to say metric X is better than metric Y, but to explain why, and to develop and discuss our reasoning in some depth.

There may be particular reasons why this metric scores so well right now for ACME.  Perhaps there is a corporate initiative to improve the documentation of ACME's business-critical processes as part of a drive towards ISO/IEC 27001 certification.  A year or so into the future, when most if not all of the processes are documented and tested, the metric will probably have outlived its usefulness.  ACME managers will find it straightforward to reconsider this year's PRAGMATIC ratings and the associated notes to remind themselves what made them favor the metric before, updating their thinking and the PRAGMATIC score in their regular metrics review.  Retiring this metric will be no big deal.

Compare the enlightened, rational and consensual PRAGMATIC approach to those dark and dismal days when we used to sit around endlessly complaining about metrics and sniping at each other.  What started out with someone insisting that we needed to "sort out our security metrics" soon turned into a bun-fight, each of us becoming ever more entrenched in defending our pet metrics while dismissively criticizing our colleagues'.   The horribly divisive and unsatisfying process meant that, once the dust had settled, there was very little appetite to review the metrics ever again, except perhaps for those battle-scarred veterans who relished every opportunity to re-play the same old arguments, more stridently each  time.   Without regular reviews, the metrics gradually decayed until eventually the whole vicious cycle kicked off again with someone insisting that we "sort out our security metrics" ...

We've been there, done that, soiled the bandages, but does this ring true to you, or are we barking up the wrong tree?  Is it all sweetness and light in your organization, or does your C-suite resemble Flanders whenever metrics are discussed?  Do let us know ...

Wednesday 17 April 2013

Security Metric #53: entropy

Information Security Metric of the Week #53: entropy of encrypted content


Randomness is a crucial concept in cryptography. Aside from steganography, strongly encrypted information appears totally random with no discernible patterns or indicators that would give cryptanalysts clues to recover the original plaintext.

"Entropy" is a convenient term we're using here to describe a measure of randomness or uncertainty - we're being deliberately vague in order to avoid getting embroiled in the details of measuring or calculating this metric. And, to be frank, because Shannon goes way over our heads.

We envisage ACME using this metric (howsoever defined) to compare encryption systems or algorithms on a common basis, for instance when assessing new encryption products for use in protecting an extremely confidential database of pre-patent information. Faced with a shortlist of products, management seeks reassurance as to their suitability beyond the vendors' marketing hyperbole. The assessment process involves encrypting one or more specific data files with each of the systems or algorithms, then determining the randomness of the resulting ciphertexts using an appropriate mathematical calculation, or indeed several. For completeness, the calculations might be repeated using a variety of encryption keys in case any of the systems/algorithms has limitations in that respect. The ones that produce the most random ciphertext are the strongest encryption systems/algorithms. QED.
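For what it's worth, here's a minimal sketch of one plausible randomness calculation - Shannon entropy over the byte frequencies of a candidate ciphertext file.  It's just one of several possible measures (it says nothing about patterns in the sequence), and the file name is of course hypothetical:

    # Minimal sketch: Shannon entropy (bits per byte) of a candidate ciphertext file.
    # ~8.0 bits/byte suggests the output 'looks' random; markedly less suggests bias.
    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    with open("candidate_ciphertext.bin", "rb") as f:    # hypothetical output from one product under test
        print(f"{shannon_entropy(f.read()):.4f} bits/byte")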

The PRAGMATIC ratings for this metric are mostly high, apart from a glaring exception: Meaningfulness rated a pitiful 3% when the metric was assessed by ACME's management, since it appears Shannon went way over their heads too! The overall PRAGMATIC score of 59% would no doubt have been much higher if management understood the concept. In any case, the metric is of interest to ACME's IT and information security professionals involved directly in the product selection process; in other words this could be a worthwhile operational as opposed to management metric, even if the teccies need to explain the end result to their bosses, patiently, in terms of one syllable or less.

PS  Luther Martin, writing in the May 2014 issue of ISSA Journal, discussed the percentage compression [such as that reported by WinZip] as a guide to the randomness of ciphertext.

PPS In the September 2016 issue of ISSA Journal, Luther (again) plus Tim Roake wrote about different definitions, meanings or measures of entropy, with various assumptions or prerequisites that can invalidate the calculations. The randomness of a data set reflects both (a) the frequencies of individual bits or digits or characters in the set and (b) the unpredictability or absence of pattern in the sequence. A binary sequence such as 11111111 does not appear random because it has a marked 'excess' of 1s over 0s; despite its even frequencies, the sequence 10101010 is probably not random either, since it has an obvious pattern, allowing us to predict future values. (a) is easy to measure, providing a relatively cheap and simple way to check whether supposedly strongly encrypted data are markedly biased. However, measuring or testing (b) is tricky, especially as 'patterns' may be quite obscure and complex. That pragmatic 'percentage compression' measure from WinZip is crude and insufficient for situations where randomness truly matters.
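And since the 'percentage compression' idea keeps coming up, here's an equally crude sketch using zlib as a stand-in for WinZip.  A general-purpose compressor picks up both frequency bias and simple patterns, which is why the 10101010... example compresses readily while genuinely random bytes barely compress at all - a cheap check, but no substitute for proper analysis where randomness truly matters:

    # Crude sketch: 'percentage compression' as a rough randomness check.
    # zlib stands in here for the WinZip figure mentioned above.
    import os
    import zlib

    def compression_percentage(data: bytes) -> float:
        if not data:
            return 0.0
        compressed = zlib.compress(data, 9)
        return max(0.0, 100.0 * (1 - len(compressed) / len(data)))

    print(compression_percentage(b"10101010" * 1000))   # obvious pattern: compresses a lot
    print(compression_percentage(os.urandom(8000)))     # random bytes: barely compress at all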

Thursday 11 April 2013

PRAGMATIC Security Metric of the Year, 2013

Having just discussed our fifty-second Security Metric of the Week here on the blog, it's time now to announce our top-rated example security metrics from the past year.  

<Cue drum roll>

The PRAGMATIC Security Metric of the Year, 2013, is ... "Security metametrics"

<Fanfare, riotous applause>

Here are the PRAGMATIC ratings for the winner and seven runners-up, all eight example metrics having scored greater than 80%:

Example metric P R A G M A T I C Score
Security metametrics 96 91 99 92 88 94 89 79 95 91%
Access alert message rate 87 88 94 93 93 94 97 89 79 90%
Business continuity maturity 90 95 70 80 90 85 90 87 90 86%
Asset management maturity 90 95 70 80 90 85 90 85 90 86%
Infosec compliance maturity 90 95 70 80 90 85 90 85 90 86%
Physical security maturity 90 95 70 80 90 85 90 85 90 86%
HR security maturity 90 95 70 80 90 85 90 85 90 86%
Security traceability 85 89 88 90 91 87 65 84 85 85%
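Incidentally, the overall scores tabled above are consistent with a simple, unweighted mean of the nine criterion ratings, rounded to the nearest whole percent - trivially, something like this:

    # Trivial sketch: overall PRAGMATIC score as the rounded mean of the nine criterion ratings.
    def pragmatic_score(ratings):
        assert len(ratings) == 9          # P R A G M A T I C
        return round(sum(ratings) / 9)

    print(pragmatic_score([96, 91, 99, 92, 88, 94, 89, 79, 95]))   # Security metametrics -> 91
    print(pragmatic_score([87, 88, 94, 93, 93, 94, 97, 89, 79]))   # Access alert message rate -> 90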

Before you rush off to implement these eight metrics back at the ranch, please note that the PRAGMATIC scores were calculated in the context of an imaginary organization, ACME Enterprises Inc.  They reflect ACME's situation, and ACME management's perspectives, understanding, prejudices and measurement objectives.  They are merely worked examples, demonstrating how to apply the PRAGMATIC method in practice.  You may well already have better security metrics in place, and we know there are many other excellent security metrics - not least because there are other high-scoring examples in the book!  In short ...

Y M M V 
Your Metrics May Vary

You have no doubt noticed that five of the top eight are "maturity metrics", and if we include "security metametrics", fully six of the top eight are our own invention ... which probably reveals a bias in the way we scored and ranked the metrics.  These six are our babies and, naturally, we love them to bits, warts and all.  We are blind to their imperfections.  On the other hand, using the PRAGMATIC approach, we have elaborated in some detail on why we believe they are such strong candidates for ACME's information security measurement system.  We've shown our workings, and actively encourage you to review and reconsider these and other candidate metrics in your own contexts.  

It might be nice if we could develop and agree on a comprehensive suite of universally-applicable information security metrics, particularly as we now have a more rational approach than "Trust us, these are great security metrics!"   However, that may be just a pipe-dream since we are all so different.  Is it realistic to presume that the half-dozen information security metrics that have been chosen by, say, a small charity would also feature among the two dozen selected by a large bank, or the four dozen imposed on a government department by some regulatory authority?  We suspect not, but  having said that we would be delighted to reach a consensus on a handful of PRAGMATIC security metrics that have proven themselves invaluable to almost everyone.

OK, that completes the first year of our cook's tour of information security metrics.  In the months ahead, we plan to continue discussing and scoring other example metrics from the book, along with various others that pop into our consciousness from time-to-time.   If you'd like us to consider and score your favorite information security metric, then why not join the security metametrics discussion forum and tell us all about it?  Does yours score above 80%?  What makes it shine?

Wednesday 10 April 2013

Security metric #52: external lighting

Security Metric of the Week #52: proportion of facilities that have adequate external lighting

This week's example metric represents an entire class of metrics measuring the implementation of information security controls.  In this particular example, the control being measured is the provision of external security lighting that is intended to deter intruders and vandals from the facilities.  It is obviously a physical security control, one of many.  The metric could be used to compare and contrast facilities, for example in a large group with several operating locations.  While we've picked on external lighting for the example, the metric could be used to measure almost any control.

The metric's PRAGMATIC score is rather low:

P R A G M A T I C Score
2 5 70 42 11 46 35 18 31 29%


Why has ACME's management evidently taken such a dislike to this metric?  Its shortcomings are laid out in some detail in the book (for instance, what does it mean by "adequate"?) but for now let's take a quick look at those dreadful ratings for Predictability and Relevance.

The Predictability rating is a percentage on a notional scale delineated by the following five waypoints:
  • 0% = The metric is purely historical and backward-looking, with no predictive value whatsoever;
  • 33% = The metric is principally historic but gives some vague indication of the future direction such as weak trends;
  • 50% = The metric is barely satisfactory on this criterion (50% marks the transition between unsatisfactory and satisfactory);
  • 67% = The metric definitely has predictive value such as strong trends, but some doubt and apparently random variability remains;
  • 100% = Highly predictive, unambiguously indicative of future conditions with very strong cause-and-effect linkages.
ACME managers evidently believe the metric is almost entirely historical and backward looking with next to no predictive value.  In their experience, the proportion of facilities that have adequate external lighting is a very poor predictor of their information security status.

Similarly, the metric is believed to have little Relevance to information security.  Possibly, there is some misunderstanding here about the necessity for physical security in order to secure information assets.  Perhaps physical security is managed and directed quite separately from information security within ACME.  The metric would presumably score higher for Relevance to physical security.

If for some reason someone wanted to push this particular metric, they would clearly have to address these and the other poor ratings, trying to persuade management of its purpose and value ... implying, of course, that it is actually worth the effort.  They might need to redesign the metric, for instance broadening it to take account of other physical security controls that are more obviously relevant to information security such as physical access controls around the data center or corporate archives.  In the unlikely event that there were no better-scoring metrics on the table, the proponent would have their work cut out to rescue one as bad as this from the corporate scrapheap.

Believe it or not, that is in fact a very worthwhile PRAGMATIC outcome.  Many organizations limp along with some truly dreadful security metrics in their portfolio, metrics that get dutifully analyzed and reported every so often but have next to no value to the organization.  Occasionally, we come across metrics that are so bad as to be counterproductive: they actually harm information security!  Reporting them is a retrograde step.  The problem is that although almost everybody believes the metrics to be useless, there is a lingering suspicion that they must presumably be of value to someone since they appear without fail in the regular reports or on the dashboard.  Nobody has the motivation or energy to determine which metrics can or should be dropped.  Few except senior managers and Audit have visibility across the organization to determine whether anyone needs the metrics.

The consequence is unnecessary, avoidable cost - substantial cost, if you take into account the likelihood of poor quality metrics in all business areas, not just information security.  What a waste!

A systematic process of identifying and PRAGMATIC-scoring all the organization's information security metrics is a thorough way to identify and weed out the duds.  Less onerously, metrics that are 'clearly dreadful' can be singled out for the chop.  Another possible approach is to identify or nominate "owners" or "sponsors" for every metric, and have them justify and ideally pay the associated measurement costs from their departmental budgets.  Suddenly, cost-effective security metrics are all the rage!  Yet another option is for the CISO or Information Security Manager to identify and cull weak metrics, either openly in collaboration with colleagues or quietly, behind the scenes, perhaps swapping duds for stars (which brings up the idea of "one in, one out" - for every additional information security metric introduced into the measurement system, another has to be retired from service and put out to graze, in order to avoid information overload and contain measurement costs).
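To make the 'weed out the duds' idea a little more concrete, here's a toy sketch of such a portfolio review.  The metric names, scores and cut-off are purely illustrative - in practice the decision involves rather more judgment than a one-line threshold:

    # Toy sketch: rank a metrics portfolio by PRAGMATIC score and flag the duds.
    # Names, scores and the retirement threshold are purely illustrative.
    portfolio = {
        "Security metametrics": 91,
        "Business continuity maturity": 86,
        "Perceived rate of IT change": 41,
        "Adequate external lighting": 29,
    }
    threshold = 50   # arbitrary cut-off for 'clearly dreadful' metrics

    for name, score in sorted(portfolio.items(), key=lambda kv: kv[1], reverse=True):
        verdict = "keep" if score >= threshold else "candidate for the chop"
        print(f"{score:3d}%  {name:32s} {verdict}")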

-------------------------------------------

Since this is our fifty-second Security Metric of the Week, we will shortly announce our fourth Security Metric of the Quarter and our very first PRAGMATIC Security Metric of the Year.  Watch this space.

Thursday 4 April 2013

Security metric #51: rate of IT change

Security Metric of the Week #51: perceptions of rate of change in IT

"Perceptions" are opinions, hence this is a clearly a highly subjective measure.  Nevertheless, it could be argued that extreme readings have some information security significance.  Rapidly changing or highly dynamic IT towards the right of the U-shaped curve implies that those surveyed are distinctly uncomfortable with the pace of change.  ACME may perhaps be struggling to keep up with new technology, hence it may not be on top of the information security aspects, increasing its information security risks.  Conversely, slowly changing or relatively static IT on the left implies that ACME may not be investing in technology, hence it may be falling behind on information security and again may be taking risks.  In the middle ground, the impression is that those surveyed are relatively comfortable with the changing IT ... but it takes a leap of faith to equate their comfort to a low level of information security risk.

The PRAGMATIC score of just 41% indicates that ACME managers were less than impressed with this potential information security metric:

P R A G M A T I C Score
40 50 6 65 70 50 30 14 40 41%


The hedged wording in the first paragraph - 'implies', 'may', 'perhaps' and the like - stems from the metric's subjectivity and the presumed cause-effect relationship between rate of change in IT and information security risk.  There is no proven factual basis, no science behind the U-shaped curve.  It's guesswork, which is like Kryptonite for metrics.

Meriting just 6% on the Actionability criterion drops this metric firmly into the "So what?" bucket with a resounding clang.  Even if the perceived rate of change of IT was determined to be very high or very low on a survey scale, there's not a lot that could be done to address the presumed information security aspects without much more information than the metric alone provides.  Low ratings in other criteria effectively seal its fate.

Having said that, the metric may have some value in relation to ACME's IT strategy and its IT investments.  It might be worth reconsidering and re-scoring the metric in that context, depending on what other IT investment strategy metrics might be in use or under consideration.  It would be quite straightforward to adapt the PRAGMATIC approach to that or indeed other contexts, especially if management was comfortable with the method.

Wednesday 3 April 2013

Five characteristics of effective metrics


What makes a security metric effective or good?  What makes one ineffective or bad?  Can we spot shining stars among the duds, short of actually firing them off to management for a few months and watching the fallout?  

It's an interesting question that gets into our understanding of metrics.

Naturally, Krag and I believe we know the answers, but we're not the only ones to have expressed an opinion on this.

[Before you read on, what do you think makes a good security metric?  Take a moment to mull it over.  It's OK, you don't need to tell anyone.  It's your little secret.]

Following a conference presentation by Gartner's Jeffrey Wheatman, Tripwire's Dwayne Melancon wrote up what he described as "a really good list of 'Five characteristics of effective metrics'" that had been presented by Wheatman:
  1. Effective metrics must support the business’s goals, and the connection to those goals should be clear.
  2. Effective metrics must be controllable. (In other words, don’t report on the number of vulnerabilities in your environment, since you can’t control that.  Instead, report on the % of “Critical” systems patched within 72 hours, which you can control).
  3. Effective metrics must be quantitative.
  4. Effective metrics must be easy to collect and analyze. (Wheatman says “If it takes 3 weeks to gather data that you report on monthly, you should find an easier metric to track.”)
  5. Effective metrics are subject to trending.  (Tracking progress and setting targets is vital to get people to pay attention)
I agree to an extent with the first characteristic (along with Jaquith's fifth criterion - see below) but Wheatman's phrasing, as reported by Melancon, is subject to differing interpretations.  If a security metric only partly supports the business' goals, does that necessarily mean it is not effective?  What if there simply is no better metric?  Effectiveness is a comparative not an absolute value, and sometimes we have to settle for metrics that are good enough rather than perfect.  That said, it does make sense to clarify the connections or associations between metrics and organizational objectives, values, strategies etc., and ideally to start out with those very objectives etc. when designing or selecting suitable metrics.   Clearly specifying the requirements is a great way to start anything!

Wheatman's second characteristic is almost but not quite right.  I would agree that effective metrics usually measure activities, situations, systems etc. that can be directly controlled or influenced to some extent, but not always.  Sometimes, raw knowledge about a situation is valuable, even if there is no obvious, straightforward way to use it at that point.  "Defcon" is an example: it is a generalized metric, used more as an awareness or alerting tool than a way to switch certain behaviors and activities on or off (although some anticipated behaviors and activities are no doubt specified in the military procedures and training manuals).  

[I'm sure we could have an interesting panel discussion about the remainder of Wheatman's second statement too: any information security pro would challenge his assertion that you cannot control the number of vulnerabilities - most of the time we are doing exactly that.  I can envisage situations in which 'number of vulnerabilities' could be a valid and worthwhile metric, particularly with a small change to the wording along the lines of 'number of identified or known or confirmed vulnerabilities' (for example in relation to system security testing).  I could also challenge the implied suitability of '% of "Critical" systems patched within 72 hours'.]
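As a further aside, that patching metric is at least easy to compute.  Here's a minimal sketch, with hypothetical systems and timestamps, of how '% of "Critical" systems patched within 72 hours' might be calculated:

    # Minimal sketch of '% of Critical systems patched within 72 hours'.
    # The systems, timestamps and patch data are hypothetical.
    from datetime import datetime, timedelta

    SLA = timedelta(hours=72)
    # critical system -> (patch released, patch applied or None if still unpatched)
    critical = {
        "erp-db01":  (datetime(2013, 4, 1, 9, 0), datetime(2013, 4, 2, 17, 0)),
        "mail-gw01": (datetime(2013, 4, 1, 9, 0), datetime(2013, 4, 6, 11, 0)),
        "hr-app01":  (datetime(2013, 4, 1, 9, 0), None),
    }

    within = sum(1 for released, applied in critical.values()
                 if applied is not None and applied - released <= SLA)
    print(f"{100 * within / len(critical):.0f}% of critical systems patched within 72 hours")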

Characteristics 3 and 4 are distinctly reminiscent of "the definition of a good metric" by Andrew Jaquith in his book Security Metrics: Replacing Fear, Uncertainty and Doubt.  According to Jaquith, a good security metric should be:

  • Consistently measured, without subjective criteria;
  • Cheap to gather, preferably in an automated way;
  • Expressed as a cardinal number or percentage, not with qualitative labels like "high", "medium", and "low";
  • Expressed using at least one unit of measure, such as "defects", "hours", or "dollars"; and
  • Contextually specific - relevant enough to decision-makers so that they can take action.

Jaquith's characteristics have been widely circulated for more than five years, at least since the book was published in 2007, but I have seen little critical discussion of them.  It's as if people are simply quoting them without, perhaps, understanding or challenging the implicit assumptions.

Take Jaquith's first criterion, for instance: "Consistently measured" seems fair enough (one could certainly argue that consistency is a useful property, depending on how one defines it), but the subsidiary clause "without subjective criteria" raises a different issue entirely.  One can measure things consistently using subjective criteria, just as one can measure things inconsistently using objective criteria.  Jaquith is confusingly blending two distinct considerations, one of which is quite misleading, into the same criterion.

Jaquith's second criterion is also heavily loaded by the subsidiary phrase.  Equating cheapness with automation is inaccurate and, again, misleading.   It  reflects a strong bias towards the use of automated data sources throughout IT, and implies that manually-collected metrics are junk.  Furthermore, and even more importantly, there are many situations in which the metric's cost is almost irrelevant provided the information and insight it generates is sufficiently valuable - in other words, the issue is not cheapness per se but the metric's cost-effectiveness.  Some security metrics are most certainly worth the investment.  Some cheap security metrics are indeed nasty.

Wheatman's third point, plus Jaquith's third and fourth criteria, are distinctly troubling.  They strongly imply that qualitative measures are totally worthless - that assigning measurement values to categories such as high/medium/low  is innately wrong.  This is a curiously prejudicial view, expressed at some length by Jaquith in Security Metrics and elsewhere.  There are legitimate mathematical concerns about categorization, in particular the misuse of simple arithmetic to manipulate category labels that happen to be numeric.  For instance, a risk categorized as level 1 is not necessarily "half as risky" as one at level 2.  Two level 1 risks are not necessarily equivalent to one level 2 risk.  It is not appropriate to manipulate or interpret category labels in this way (which, if anything, is an argument in favor of textual labels such as high/medium/low or red/amber/green).  However, that does not mean that it is inherently wrong to use categories, nor that metrics absolutely must be expressed as cardinal numbers or percentages which is how Jaquith's criteria are commonly interpreted, even if that is not quite what he means.

There are perfectly legitimate, valid, mathematically accurate and scientifically sound reasons for using metrics that involve categories and/or qualitative values.  One of the most useful is prioritization, or comparative analysis.  Isn't it better to tell management that option A is "much riskier" than option B (based on your subjective analysis of the available evidence, 'you' being an experienced information security/risk management professional), than to withhold that information purely because "riskiness" cannot be expressed as a cardinal number or percentage?  Isn't it just as misleading, biased or wrong to insist, dogmatically, on cardinal numbers or percentages?