Sunday 30 December 2012

Privacy awareness

One of six new privacy awareness posters available this month


We have just completed an awareness module covering privacy.  Although we have covered the privacy topic repeatedly, there is clearly just as much need for it today as ever.

One of our case studies in the module concerns a major privacy breach here in New Zealand.  ACC is the government department that administers a national insurance scheme providing medical cover for accidents and emergencies.  As such, it handles a lot of personal information, including sensitive medical details.  When an ACC manager accidentally and unknowingly attached a spreadsheet containing personal details of thousands of ACC customers to an email to one of those customers, he caused an incident that rumbled along for a year, embarrassing the minister and upsetting a lot of people along the way.  Better training and awareness on privacy is one of several improvement recommendations made in the recent official report into the debacle.

If the ACC privacy breach seems remote and obscure, the train-the-trainer guide in the module suggests adapting or replacing the provided case study scenarios with something closer to home, such as a privacy incident involving the organization or employees, a competitor, a neighbor, or something else in the news.  The unfortunate fact is that there is no shortage of privacy incidents and breaches to discuss, and those are just the ones that get (a) noticed, and (b) reported.

Surveillance is another addition to the awareness module this time around.  An increasing number of news articles report voyeurs using miniature cameras to spy on neighbors and members of the public.  The cameras are cheap and readily available.  They can be concealed as pens and key fobs, or built into cellphones, laptops and tablets.  Conventional CCTV cameras are part of modern life, both in public places such as high streets and inside corporations.  Big Brother in George Orwell's 1984 is not such a far-fetched threat after all.  We encourage our customers to cover surveillance (whether by the organization on its employees etc., or by employees etc. on each other) in their privacy policies, which implies management thinking through the issues and deciding how best to respond.  It's surely better to do so in advance than to face awkward situations later without a policy or rulebook for guidance.  By the way, the complainant in the ACC case secretly recorded a meeting, providing undeniable evidence that ACC managers were made aware of the breach - covert surveillance is sometimes in the public interest.

Likewise, we suggest developing and documenting a privacy incident management process to handle the incidents or breaches that will probably occur.  The ACC case once again demonstrates the need to have a well structured and thought-through process that is actually used when incidents are notified or identified.  The ACC incident would probably have been much less damaging to ACC and the ministry if it had been properly investigated and resolved, perhaps avoiding the breach being disclosed to the press.

Finally, the technical awareness stream identifies the need for technical and physical controls for privacy in addition to policies and procedures, such as IDS/IPS/DLP systems that routinely monitor the network for inappropriate traffic and sensitive personal information passing in cleartext.  Some while ago, one of our customers discovered that their email encryption system had been wrongly configured soon after just such a monitoring control was put in place.  As well as protecting their customers' personal information, they narrowly avoided a breach that would have been highly embarrassing and costly for the organization - something else that ACC might like to bear in mind. 
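
By way of illustration, here is a toy sketch of the kind of cleartext pattern-matching such a monitoring control performs - the pattern names and rules below are our own simplistic placeholders, nothing like production DLP signatures:

```python
import re

# Toy patterns for personal data in cleartext.  These are simplistic
# illustrative placeholders - real DLP rules are far more sophisticated
# and context-aware.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card-like number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_cleartext(payload):
    """Return the names of the sensitive-data patterns found in a payload."""
    return [name for name, rx in PATTERNS.items() if rx.search(payload)]

print(scan_cleartext("Claim 123: jo.bloggs@example.com, card 4111 1111 1111 1111"))
# → ['email address', 'card-like number']
```

A real monitoring system would of course inspect live traffic and raise alerts rather than print matches, but the underlying idea is the same.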

Wednesday 26 December 2012

SMotW #37: unaccounted software licenses

Security Metric of the Week #37: proportion of software licenses purchased but not accounted for in the repository

We are not entirely sure of the origin or purpose of this metric, but it's typical of those that pop randomly out of the woodwork every so often for no obvious reason, sometimes taking on a curious aura of respectability depending on who raised or proposed them.

Unfortunately, as it stands, we lack any context or explanation for the metric.  We don't have access to whoever proposed it, we can't find their reasoning or justification, and hence we find it hard to fathom their thinking processes that presumably led them to propose it.

Perhaps someone had been checking, validating or auditing software licenses and used something along these lines as a measure in their report.  Maybe it was suggested by a colleague at an information security meeting or online forum, or proposed by a naive but well-meaning manager in such a way that it simply had to be considered.  Who knows, perhaps it came up in idle conversation, mystically appeared out of the mist in a dream, turned up as a worked example in a security metrics book, or featured in some metrics catalog or database.

It may well have been someone's pet metric, something they invented, discovered or borrowed one day for a specific purpose, found useful in that context, and so presumed their success means it must therefore be a brilliant security metric for everyone, in other, unspecified contexts.*  

To be frank, we are not terribly bothered where it came from or why it appeared on our shortlist.  We do care about its utility and value as a security metric for ACME Enterprises Inc, in relation to the plethora of others under consideration.

Maybe for some it really is a wonderful metric ... but evidently not for ACME.  The PRAGMATIC score says it all:

P | R | A  | G  | M | A  | T  | I  | C  | Score
1 | 1 | 90 | 84 | 1 | 70 | 50 | 81 | 30 | 45%

It scores abysmally on Relevance (to ACME's information security), on its ability to Predict or be used to direct ACME's information security status, and on its Meaning to ACME's information security people and managers.  On the other hand, it is highly Actionable in the sense that a low score self-evidently implies the need to account for more of the purchased software licenses.  It's also pretty Genuine and would be hard to falsify unless someone had the motivation, skill and time to fabricate a stack of 'evidence' from which the numbers could be reconstructed.  ACME's people have better things to do.
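
Incidentally, judging from the score tables in these posts, the overall percentage appears simply to be the arithmetic mean of the nine criterion ratings, rounded to the nearest whole percent - a minimal sketch (the function name is ours):

```python
def pragmatic_score(ratings):
    """Overall PRAGMATIC score: the rounded mean of the nine criterion ratings."""
    assert len(ratings) == 9, "expects ratings for P, R, A, G, M, A, T, I, C"
    return round(sum(ratings) / len(ratings))

# The unaccounted-software-licenses metric scored above:
print(pragmatic_score([1, 1, 90, 84, 1, 70, 50, 81, 30]))  # → 45
```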

OK so it's not ideal for information security but maybe it would have more value to, say, Finance or IT?  Perhaps they too could be persuaded to PRAGMATIC rate the metric and compare it to those they are using or considering ... no promises, mind you.

Anyway, its poor score clearly takes it out of contention as an information security metric for ACME, and right now we have a date with a mince pie and a small glass of vintage port ...

Merry Christmas readers.

* Note that we are not immune from this kind of generalization, nor from a bias towards the metrics that we find valuable.  The metrics in the book, including the 'security metrics of the week' on this blog, come from a variety of sources.  Some are metrics that we have used in anger ourselves, including a few of our own pet metrics, of course.  Some have been suggested, recommended even, by various other security metrics authors.  Some made an appearance in security surveys, management reports, blogs, discussion groups and standards such as ISO/IEC 27004.  Some we invented on-the-fly while writing the book, deliberately trying to illustrate and demonstrate the power of the PRAGMATIC approach in helping to differentiate the good from the bad and the ugly.

Please remember, above all else, that whatever we or others may say or imply, we are NOT telling you what security metrics to use in your situation.  We are not clairvoyants.  We have ABSOLUTELY NO IDEA what your specific security information needs might be, except in the most general hand-waving sense of being infosec greybeards ourselves.  Much as we would love to just give you "the best security metrics" or a set of "recommended" or "valuable" or "worthwhile" metrics, we honestly can't do that.

What we are offering is a
straightforward method for you to
find your own security metrics.

In the unlikely event that you are short of inspiration, the book includes a stack of advice on where to find candidate security metrics - places to go looking - and hints on how to invent new ones either from scratch or by modifying and customizing or adapting existing or proposed metrics.  The PRAGMATIC method is a great way to sift through a giant haystack of candidate security metrics to find the very needles you've been hunting for.

Thursday 20 December 2012

SMotW #36: business continuity spend

Security Metric of the Week #36: business continuity expenditure

At first glance, this looks like a must-have metric: surely expenditure on business continuity is something that management can't possibly do without?  As far as ACME Enterprises is concerned, this metric warrants a fairly high PRAGMATIC score of 71%, making it a strong candidate for inclusion in ACME's information security measurement system.

It has its drawbacks, however.  Determining BC expenditure accurately would be a serious challenge, but thankfully great precision is probably not necessary in this context: estimations and assumptions may suffice.  Still, it would be handy if the accounting systems could be persuaded to regurgitate a sufficiently credible and reliable number on demand.  Furthermore, it is not entirely obvious what management is expected to do as a result of the metric, at least not unless the business benefits of business continuity are also reported.  The net value of business continuity, then, could be an even better metric.

Tuesday 4 December 2012

SMotW #35: compliance maturity

Security Metric of the Week #35: information security compliance management maturity

Compliance with information security-related laws and regulations is undoubtedly of concern for management, since non-compliance can lead to substantial penalties both for the organization and, in some cases, for its officers personally.  Legal and regulatory compliance is generally asserted by the organization, but confirmed (and in a sense measured) by independent reviews, inspections and audits.

But important though they are, laws and regulations are just part of the compliance landscape.  Employees are also expected to comply with obligations imposed by management (in formal policies mostly) and by other third parties (in contracts mostly).  Compliance in these areas is also confirmed/measured by various reviews, inspections and audits.

In order to measure the organization's compliance practices, then, we probably ought to take all these aspects into account. 

P  | R  | A  | G  | M  | A  | T  | I  | C  | Score
90 | 95 | 70 | 80 | 90 | 85 | 90 | 85 | 90 | 86%

This week's security metric is another maturity measure.  Maturity metrics (as we have described before) are very flexible and extensible, so it's no problem to take account of all the issues above, and more besides.

We have been quite harsh on the Actionability rating for this metric, giving it "just" 70%, in anticipation of the practical issues that would crop up if Acme's management deemed it necessary to improve the organization's security compliance.  On the other hand, breaking down and analyzing security compliance in some detail makes this an information-rich metric.  Aside from the overall maturity score, management would be able to see quite easily where the biggest improvement opportunities lie.

PRAGMATIC security metrics for competitive advantage

Blogging recently about Newton's three laws of motion, we mentioned that organizations using PRAGMATIC metrics have competitive advantages over those that don't.  Today, we'll expand further on that notion.

Writing in IT Audit back in 2003, Will Ozier discussed disparities in the way information security and other risks are measured and assessed.  Not much seems to have changed in the nine years since it was published.  Ozier suggested a "central repository of threat-experience (actuarial) data on which to base information-security risk analysis and assessment": today, privacy breaches are being collated and reported fairly systematically, thanks largely to the privacy breach disclosure laws, but those are (probably) a tiny proportion of all information security incidents - at least, in my experience things such as information loss, data corruption, IP theft and fraud are far more prevalent and can be extremely damaging.  Since these are not necessarily reportable incidents, most don't  become public knowledge, hence we don't have reliable base data from which to calculate the associated risks with any certainty. 

"In my experience" is patently not a scientific basis however.  I doubt that adding "Trust me" would help much either.

Talking of non-scientific, there is no shortage of surveys, blogs and other sources of anecdotal information about security incidents.  However, the statistics are of limited value for making decisions about information security risks.  The key issue is bias: entire classes of information security incident may not even be recognized as such.  Take human errors, for instance.  Human errors that lead to privacy breaches may be reported, but for all sorts of reasons there is a tendency not to blame individuals, so the cause often goes unstated or is ascribed to something else.  Most such incidents probably remain undetected, although some errors are noticed and quietly corrected.

However, while we lack publicly-available data about most information security incidents, organizations potentially have access to a wealth of internal information, provided that information security incidents are reported routinely to the Help Desk or wherever.  Information security reviews, audits and surveys within the organization can provide yet more data, especially on relatively serious incidents, and especially in large, mature organizations.

OK, so where is this rambling assessment leading us in relation to information security metrics?  Well in case you missed it, that "wealth of internal information" was of course a reference to security metrics.

And what have security metrics, PRAGMATIC security metrics specifically, got to do with competitive advantage?  Let me explain.

Aside from selecting or designing information security metrics carefully from the outset, management should review the organization's metrics from time to time to confirm and where necessary improve, supplement or retire them.  This should ideally be a systematic process, using metametrics (information about metrics) to examine the metrics, comparing their value rationally against their information requirements.  Fair enough, but why should they use PRAGMATIC metametrics?  Won't SMART metrics do?

The Accuracy, Independence and Genuineness of measurements are important concerns, especially if there might be systematic biases in the way the base data are collected or analyzed, or even deliberate manipulation by someone with a hidden agenda and a blunt ax.  This hints at the possibility of analyzing the base data or measurement values for patterns that might indicate bias or manipulation (Benford's law springs immediately to mind), as well as for genuine relationships that may have Predictive value.  It also hints at the need to check the quality and reliability of individual data sources: for instance, the variance or standard deviation is a guide to their variability and, perhaps, their integrity or trustworthiness.  Do you routinely review and reassess your security metrics?  Do you actually go through the process of determining which ones worked well, and which didn't?  Which ones were trustworthy guides to reality, and which ones lied?  Do you think through whether there are issues with the way the measurement data are gathered, analyzed, presented, and/or interpreted and used - or do you simply discard hapless metrics that haven't earned their keep, without truly understanding why?
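
As a sketch of that Benford's-law screening idea (our own illustrative code, a screening aid rather than a substitute for proper forensic analysis):

```python
import math
from collections import Counter

def benford_deviation(values):
    """Chi-squared statistic comparing the leading-digit frequencies of a
    set of measurements against Benford's law.  A large value suggests the
    data may be biased or fabricated - a prompt to investigate, not proof."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    n = len(digits)
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford's expected count for digit d
        chi2 += (counts[d] - expected) ** 2 / expected
    return chi2
```

A batch of reported figures that all begin with the same digit, for example, scores far higher than a more naturally spread batch - the cue to ask some pointed questions about how the numbers were produced.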

Relevance and Timeliness are both vital considerations for all metrics when you think about it.  How many security situations have been missed because some droplet of useful information was submerged in a tsunami of junk?  How many times have things been neglected because the information arrived too late to make the necessary decisions?  To put that another way, how much more efficiently could you direct and control information security if you had a handle on the organization's real security risks and opportunities, right now?  

In respect of competitive advantage, Cost-effectiveness pretty much speaks for itself.  It's all very well 'investing' in a metrics dashboard gizmo with all manner of fancy dials and glittery indicators, but have you truly thought through the full costs not just of generating the displays, but of using them?   Are the measurements merely nice to know, in a coffee-table National Geographic kind of way, or would you be stuffed without them?  What about the opportunity cost of either being unable to use or discounting other, perfectly valid and useful metrics that, for some reason, don't look particularly sexy in the dashboard format?  Notice that we're not railing against expensive dashboards per se, provided they more than compensate for the costs in terms of the value they generate for the organization - more so than other metrics options might have achieved.  Spreadsheets, rulers and pencils have a lot going for them, particularly if they help focus attention on the information content rather than its form.

In contrast to the others, Meaningfulness is a fairly subtle metametric. We interpret it specifically as a measure of the extent to which a given information security metric 'just makes sense' to its intended audience.  Is the metric self-evident, smack-the-forehead blindingly obvious even, or does it need to be painstakingly described, at length, by a bearded bloke in a white lab coat with frizzy hair, attention-deficit-disorder and wild, staring eyes?  A metric's inherent Meaningfulness is a key factor in relation to its perceived value, relevance and importance to the recipient, which in turn affects the influence that the numbers truly have over what happens next.  A Meaningful metric is more likely to be believed, trusted and hence actually used as a basis for decisions, than one which is essentially meaningless.  Let the competitors struggle valiantly on with their voluminous management reports, tedious analysis and, frankly, dull appendices stuffed with numbers that nobody values.  We'll settle for the Security Metrics That Truly Matter, thanks.

The Timeliness criterion is also quite subtle.  In the book we explain how the concept of feedback and hysteresis applies to all forms of control, although we have not seen it described before in this context.  A typical manifestation of hysteresis involves temperature controls using relatively crude electromechanical or electronic sensors and actuators.  As the temperature reaches a set-point, the sensor triggers an actuator such as a valve or heating element to change state (opening, closing, heating or cooling as appropriate).  Consequently the temperature gradually changes until it reaches another set point, whereupon the sensor triggers the actuator to revert to its original state.  The temperature therefore cycles constantly between those set points, which can be markedly different in badly designed or implemented control systems.  Hysteresis loops apply to information security management as well as temperature regulation: for instance, adjusting the settings on a firewall between "too secure" and "too insecure" is better if the metrics relating to firewall traffic and security exceptions are available and used in near-real-time, rather than on the basis of, say, a monthly firewall report, especially if the report takes a week or three to compile and present!  The point is that network security incidents may exploit that gap or delay between "too secure" and "too insecure", so Timeliness can have genuine security and business consequences.
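
A toy simulation of that set-point cycling, with arbitrary made-up parameters:

```python
def thermostat_run(start, low=18.0, high=22.0, steps=40):
    """Crude on/off control with hysteresis: the heater switches on at the
    lower set point and off at the upper one, so the temperature cycles
    between the two instead of holding a single value."""
    temp, heating, trace = start, start < low, []
    for _ in range(steps):
        temp += 0.5 if heating else -0.5      # heater on: warm up; off: cool down
        if temp >= high:
            heating = False                   # upper set point reached
        elif temp <= low:
            heating = True                    # lower set point reached
        trace.append(round(temp, 1))
    return trace

print(thermostat_run(20.0))  # oscillates between 18.0 and 22.0
```

The further apart the set points (or, in security terms, the longer the gap between measurement and response), the wider the swings between "too secure" and "too insecure".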

Finally for today, spurious precision is a factor relating to several of the PRAGMATIC criteria (particularly Accuracy, Predictability, Relevance, Meaning, Genuineness and Cost-effectiveness).  We're talking about situations where the precision of reporting exceeds the precision of measurement and/or the precision needed to make decisions.  Have your competitors even considered this when designing their security metrics?  Do they obsess over marginal and irrelevant differences between numbers derived from inherently noisy measurement processes, or appreciate that "good enough for government work" can indeed be good enough, much less distracting and eminently sensible under many real-world circumstances?  A firm grasp of statistics can help here, but it's not necessary for everyone to be a mathematics guru, so long as someone who knows their medians from their Chi-squared can be trusted to spot when assumptions, especially implicit ones, no longer hold true.
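
One simple guard against spurious precision is to round reported figures to the number of significant figures the measurement process actually supports - a small illustrative helper of our own:

```python
import math

def report(value, sig_figs=2):
    """Round a measurement to the number of significant figures the
    measurement process supports, rather than the precision the
    spreadsheet happens to print."""
    if value == 0:
        return 0
    magnitude = math.floor(math.log10(abs(value)))
    return round(value, sig_figs - 1 - magnitude)

# Two 'different' incident rates from a noisy measurement process are
# indistinguishable at the precision the process actually supports:
print(report(0.08314, 1), report(0.08096, 1))  # → 0.08 0.08
```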

We'll leave you with a parting thought.  Picture yourself presenting and discussing a set of PRAGMATIC security metrics to, say, your executive directors.  Imagine the confidence you will gain from knowing that the metrics you are discussing have been carefully selected and honed for that audience because they are Predictive, Relevant, Actionable ... and all that.  Imagine the feeling of freedom to concentrate on the knowledge and meaning, and thus the business decisions about security, rather than on the numbers themselves.   Does that not give you a clear advantage over your unfortunate colleagues at a competitor across town, struggling to explain let alone derive any meaning from some near-random assortment of pretty graphs and tables, glossing over the gaps and inconsistencies as if they don't matter?

Saturday 1 December 2012

Security awareness == Social engineering



This is a busy time of year for most of us with social events at work and at home, so it seemed appropriate to deliver a module on 'social insecurity' now. The latest batch of security awareness materials primarily covers social engineering, and touches on the related information security aspects of social networking and social media.  

Social engineering revolves around manipulating people to do your bidding. Social networks and social media are sources of information about targets that can be used to gain their trust and persuade or manipulate them. They are also communications vehicles through which to socially engineer others.  Social is the common factor, of course.  Humans are sociable by nature: we tend to 'belong' to various groups, and apply different standards to group members than we do to non-members.

If you think about it, security awareness and training are forms of social engineering.  We're actively using information to persuade people to change their behaviors.  We inform and motivate them.  We don't lie, as such, but we do 'emphasize' things in order to bring them to the attention of our audiences, using information selectively to make them appreciate certain information security risks for instance.  We use policies and compliance activities to manipulate people into doing what we want.  We repeatedly remind people about security, gradually building their trust and understanding.  Oh sure, we are doing it with the best of intentions and we are quite open about it, but be honest: it is social engineering. 

You have probably heard about, if not actually performed, a "mock phishing attack" on your fellow employees as part of your security awareness program.  The basic idea is straightforward: craft an email with a pretext, some cunning ruse that will fool your "victims" into opening a link to a web page that either simulates a typical phishing data-capture form (perhaps popping up warning messages and awareness content as victims start to enter personal data) or simply displays a suitable security awareness message about phishing. Capturing victims' IP addresses as they visit the page allows you to generate statistics showing just how easy it was to fool some proportion of your organization's employees.  After hammering away with your phishing awareness, a further mock attack with a different pretext should get a much lower hit rate, demonstrating the value of the awareness.  Well, that's the theory!  
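
The hit-rate arithmetic behind such statistics is trivial but worth pinning down - a sketch with made-up figures (and the caveat noted in the comment):

```python
def phishing_hit_rate(emails_sent, visitor_ips):
    """Proportion of targeted employees fooled, counting each unique source
    IP as (roughly) one victim - a crude proxy, since NAT, proxies and
    shared machines blur the one-IP-one-person assumption."""
    return len(set(visitor_ips)) / emails_sent

# Made-up figures for a campaign before and after the awareness push:
before = phishing_hit_rate(200, ["10.0.0.%d" % i for i in range(58)])
after = phishing_hit_rate(200, ["10.0.0.%d" % i for i in range(12)])
print("%.0f%% -> %.0f%%" % (before * 100, after * 100))  # → 29% -> 6%
```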

December's NoticeBored module takes this rather specific idea and extends it into a more general approach to security awareness.  As well as phishing, several other social engineering techniques could usefully be exploited for security awareness purposes.  Likewise social networks and social media.  Regardless of whether you actually carry through with the idea, discussing such a contentious proposal with management (which is necessary to get their explicit approval) would be a worthwhile awareness activity in its own right.  There are clearly trust and ethical considerations that need to be tackled but the payoff might be worthwhile.

[I'm thinking about writing a paper on this. If I've fired up your imagination already and you are bubbling over with ideas on how to apply social engineering to security awareness, please get in touch.]

Wednesday 28 November 2012

SMotW #34: homogeneity

Security Metric of the Week #34: organizational and technical homogeneity

The degree of homogeneity (sameness) or heterogeneity (variation) within the organization and its technologies affects its aggregated information security risks, in much the same way that monoculture and multiculture crops may face differing risks from natural predators, parasites, adverse environmental conditions and so on.  A particular mold that successfully attacks a certain cultivar of wheat, for example, may decimate a field planted exclusively with that cultivar, whereas it may not take hold in a neighboring field planted with a mix of wheat cultivars differing in their susceptibility or resistance to the mold.  On the other hand, under ideal conditions, the monoculture crop may do exceptionally well (perhaps well enough to counteract the effects of the mold) where the mixed crop does averagely.

Homogeneity of technologies, suppliers, contracts etc. increases an organization's exposure to common threats - for example, serious security vulnerabilities in MS Windows may simultaneously impact the millions of organizations that rely on Microsoft's products.  On the other hand, homogeneity means standardization, lower complexity and ‘economies of scale’, generally generating substantial business benefits.  It is clearly in Microsoft's commercial interests to be seen to address serious security vulnerabilities in its products urgently, or risk mass defection of its customers (those who aren't entirely dependent, at least!).

The overall PRAGMATIC score for this candidate metric is mediocre:

P  | R  | A  | G  | M  | A  | T  | I  | C  | Score
67 | 70 | 40 | 59 | 67 | 50 | 33 | 65 | 45 | 55%

The metric rates poorly on both Timeliness and Cost due to the difficulties of gathering and analyzing suitable data with any kind of precision.  However, a quick-and-dirty, low-Accuracy assessment might be sufficient to get this issue raised and discussed at the top table, which might actually be good enough (we're hinting at the measurement objective - an issue we have hardly mentioned in the blog but which is covered at length in the book).  The metric might be measured using the scoring scales that we have discussed in several previous blog postings, for instance.

Sitting at 40%, the Actionability rating is also depressed for two distinct reasons: 
  1. It is not entirely clear what constitutes an 'ideal' amount of homogeneity, since, as we have just said, there are pros and cons to it;
  2. There are obvious practical constraints on management's ability to change the organization's homogeneity even if they wanted to do so.  Senior management might institute a supplier diversity policy, for instance, but there is likely to be considerable inertia due to the existing portfolio of suppliers currently contracted.  In many cases, there will be overriding commercial or technical reasons to retain the current suppliers, on top of the natural affinity that emerges through social interaction between individual employees and their supplier contacts.
Bottom line: this candidate metric is unlikely to make the grade for ACME Enterprises Inc., but it may be valuable elsewhere.

Thursday 22 November 2012

Newton's take on security metrics

He may not have considered this at the time, but Sir Isaac Newton's three laws of motion are applicable to security metrics ... 


Law 1.  Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. 

An organization lacking effective metrics has no real impetus to change its approach to information security.  Management doesn't know how secure or insecure it is, nor whether security is "sufficient", and has no rational basis for allocating resources to security, nor for spending the budget on security activities that generate the most value.  Hence, they carry on doing pretty much what they've always done.  They approve the security budget on the basis of "last year's figure, plus or minus a bit".  They do security compliance activities under sufferance, and at the last possible moment.  

The law of inertia is particularly obvious in the case of large bodies that continue to blunder through situations that smaller, more nimble and responsive ones avoid.  We're not going to name names here: simply check the blogosphere and news media for plenty of unfortunate examples of sizable, generally bureaucratic, often governmental organizations that continue to experience security incident after incident after incident.  Management shrugs off adverse audit reports, inquiries and court cases as if it's not their fault.  "Our hands are tied", they bleat, "don't blame us!" and Messrs Sarbanes and Oxley groan.

By the same token, the auditors, investigators, courts and other stakeholders lack the data to state, definitively, that "You are way behind on X, and totally inadequate on Y".  They know things are Not Quite Right, but they're not entirely sure what or why.  Furthermore, those who mandate various security laws, regulations and edicts have only the vaguest notion about what's truly important, and what would have the greatest effect.  Mostly they're guessing too.


Law 2.  The relationship between an object's mass m, its acceleration a, and the applied force F is F = ma

Applying a force to an object accelerates or decelerates it.  The amount of acceleration/deceleration is proportional to the force applied and the mass of the object.  Do we honestly need to spell out how eloquently this describes metrics?  For those of you who whispered "Yes!" we'll simply mention the concepts of proportional control and feedback.  Nuff said.


Law 3.  For every action there is an equal and opposite reaction.

An interesting one, this.  

Once organizations are designing, developing, selecting, implementing, using, managing and improving their suites of PRAGMATIC information security metrics, they will inevitably start using the metrics to make changes that systematically and measurably improve their security.  That's the action part.  

Newton might predict a reaction: what would that be?  

Well, one reaction will involve the human threats such as hackers, malware authors, fraudsters, spies and so forth: they will up their game in order to continue successfully exploiting those victims who are more secure, or of course direct their evil attentions to less secure victims, including those who lack security metrics and hence presumably still manage, direct and resource security using guesswork, gut feel, magic incantations, lucky charms and astrology.   "I've heard on the golf course|read in the in-flight magazine|been told by a little bird that competitor X only spends 5% of its IT budget on security.  Clearly, we're spending far too much!"

Another reaction will involve other parts of the organization - other departments who notice that, for once, information security has management's ear.  They are successfully justifying the security budgets and investments that they themselves would love to have.  Some will react negatively, challenging and undermining the security metrics out of jealousy and a desire to go back to the good old days (law 1 in action), while others will seize the opportunity to reevaluate their own metrics, finding their own PRAGMATIC set.

Yet another reaction will come from the authorities, owners and other stakeholders who can't help but notice the marked contrast between PRAGMATIC and non-PRAGMATIC organizations.  The former give them fact-based, reliable and most of all useful information about their information security status and objectives, while the latter mysteriously hint at celestial bodies and rabbits' feet.  We confidently predict that security compliance obligations imposed on organizations will increasingly specify PRAGMATIC metrics, and indeed the PRAGMATIC approach, as part of the deal.

Let's be realistic about it: the change will undoubtedly be incremental and subtle at first, starting with the thought leaders and innovators who grasp PRAGMATIC and make it so.  Gradually, the language of security metrics will change as the early adopters enthuse about their new-found abilities to manage security more rationally and scientifically than has been possible before, and others come to appreciate that at last they can make sense of the metrics mumbo-jumbo spouted by the consultants and standards.  The laggards who cling to their existing approaches like a drowning man clings to a sodden log will face extinction through increasing security threats and incidents, and increasingly strident pressure from their stakeholders to "be honest about security".