Sunday 29 March 2015

Another new security awareness topic: best practice

We have just delivered April's security awareness module covering best practices in information risk and security to subscribers - nearly 100MB of crisp, fresh awareness content.

Numerous learned committees, panels, industry groups and other bodies of experts recommend best practices relevant to information risk and security, covering a wide variety of methods, controls and approaches.  What can we learn from their advice?  The latest module discusses a selection of best practices in information risk and security, helping our customers’ awareness audiences contemplate their purpose and value.  We even lay out for them a systematic, cyclic process for discovering, evaluating and adopting best practices.

Strictly speaking, the ‘best’ in ‘best practice’ is misleading unless the guidance is truly universal and cannot possibly be improved upon.  In reality, each organization differs in its situation or context and needs, so the practices that happen to be best for one might not suit another – in fact the guidance could turn out to be rotten for some.  Organizations intending to adopt best practice therefore need to evaluate the guidance to determine, first of all, whether it is even applicable to them, and secondly whether it is likely to be beneficial.

Best practice is also about systematic improvement.  It describes a state of excellence, a laudable objective or goal that inspires, motivates and encourages us to aspire to be ‘the best’ – or at the very least to avoid practices that are generally considered bad! 

Good on yer if your security awareness program covers best practices.  If not, and if it sounds like something that would catch your employees' imaginations, do get in touch.  We offer top quality, creative security awareness content on more than 50 topics, and we're already busy researching others.

Saturday 21 March 2015

Metrics matter (updated)

An article by Mintz Levin about the 2013 privacy breach/information security incident at US retailer Target stated that the company has disclosed gross costs of $252 million, with some $90m recovered from its insurer leading to a net cost of $162m, up to the end of 2014 anyway (the incident is not over yet!).

Given that the breach apparently involved personal information on about 40 million people, it's trivial to work out that the incident apparently cost Target roughly $6 per compromised record ($4/record net of insurance payouts) ... but before anyone runs amok with those headline numbers, let's delve a bit deeper.
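The back-of-the-envelope arithmetic looks like this. The dollar amounts are Target's disclosed figures from the 8-K filings discussed below; the record count is the widely-reported estimate:

```python
# Per-record cost of the Target breach, using publicly reported figures
gross_cost = 252_000_000   # disclosed gross cost through end of 2014, USD
insurance = 90_000_000     # recovered from insurers, USD
records = 40_000_000       # widely reported count of compromised records

net_cost = gross_cost - insurance
print(f"Gross cost per record: ${gross_cost / records:.2f}")   # ~$6.30
print(f"Net cost per record:   ${net_cost / records:.2f}")     # ~$4.05
```

Trivial sums indeed ... which is precisely why the headline numbers deserve the scrutiny that follows.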

First off, what confidence do we have in the numbers themselves? The article cites its sources as 8-K filings, in other words Target's official reports concerning the incident to the Securities and Exchange Commission. Personally, I'm quite happy with that: the dollar amounts are not mere speculation but (I believe, not being a specialist in US laws and regulations) have been carefully drawn up, audited and formally approved by management - specifically Target's CFO. We could pore over the filed 8-K reports to verify them since they are published on the SEC site, or we could simply accept Mintz Levin's word: they are a law firm so there's a degree of trust and confidence. 

I took the 40 million compromised record count from a convenient web page somewhere - not as easy to verify but there are many such pages reporting similar numbers, so let's assume the figure is based on a count and disclosure by Target.  And let's assume it's correct (yes, another assumption).

Now dig further. Having tried to track and calculate the financial costs of relatively small information security incidents myself, I appreciate just how tough that can be in practice. The costs fall into two main categories: direct, and indirect or consequential. The direct costs are a bit of a nightmare to monitor when everyone is running around frantically dealing with the incident at its height, but they can be estimated retrospectively and tracked fairly accurately once things calm down: it's a matter of cost accounting. Simply stated, someone assigns the direct expenses associated with the incident to an accounting code for the incident, and the financial system tots up and spews out the numbers. There are several opportunities for substantial error in there (for instance, significant costs wrongly coded or neglected, and investments in information security/privacy improvements that would have been made anyway, regardless of the incident, being charged against it in order to secure the budgets and inflate the insurance claims), but these errors pale into insignificance against the indirect or consequential costs ...

A serious information security incident that becomes public knowledge seems likely to have an adverse impact on the organization's image and hence its brand values, but how much of an effect, in dollar terms? It's almost impossible to say with any certainty. In the case of a major incident, the company's marketing and financial people could evaluate and estimate the effects using metrics such as customer footfall, turnover, profitability, market surveys and so forth ... but potentially there is a conflict of interest there since those self-same people are charged with maintaining or boosting the company's brands and value, hence they may be understandably reluctant to report bad news to management. Furthermore, there are no easy, generally-accepted, accurate or independently-verifiable ways to convert changes in most of these metrics (such as "brand recognition") into dollars without a great deal of argument and doubt.

On top of that, there is some truth to the saying that "there's no such thing as bad publicity". Publicity about incidents is also publicity for the organizations and individuals involved. Publicity equates to media exposure and brand recognition hence, paradoxically, bad incidents might actually benefit those involved.

That leads us to consider stock price as another possible measure of the gross effects of an incident, one that conveniently enough is already in dollars and is widely reported, with historical data just a few clicks away (e.g. see the 3-year Target share price graph to the left here, courtesy of those nice people at MarketWatch.com). Given the number of shares issued (requiring a few more clicks), it's not too hard to convert the share price at any point into a market capitalization value for the company, and thus to calculate the effect the incident had on that value, but now it gets really interesting. After the incident was initially disclosed and widely reported in 2013, Target's share price declined markedly and then recovered in 2014, and is now well above the 2013 peak. What relation does that have to the incident? Again, it's almost impossible to say because there are just so many factors involved: stockbrokers, dealers and investors take a professional interest in identifying, evaluating and predicting those factors, and some of them are very successful so you might try asking them about the incident, but don't be surprised if they confuse you with statistics while keeping their trade secrets to themselves!
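The market-cap sum itself is easy enough; attributing the swing to the incident is the hard part. A sketch with entirely illustrative figures (not Target's actual share count or prices):

```python
# Converting a share price move into a change in market capitalization
# (hypothetical, illustrative figures - not Target's actual numbers)
shares_outstanding = 630_000_000   # assumed share count

price_before = 63.50   # share price before the disclosure, USD (illustrative)
price_after = 56.00    # share price at the post-disclosure low (illustrative)

cap_change = (price_after - price_before) * shares_outstanding
print(f"Market cap change: {cap_change / 1e9:+.1f} billion USD")   # -4.7 billion
```

A few billion dollars appears to evaporate ... but as the share price history shows, it can just as mysteriously reappear, which is exactly why this metric needs careful interpretation.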

The same issue cropped up in the Sony hack at the end of last year. Sony's share price (plotted on the right over the past 6 months) has moved quite consistently upwards. There was a noticeable dip around the year end but it had pretty much recovered its original trajectory by the end of January. I'm quite sure I could fit a straight-line trend to the data with little residual variance.

OK, is that all there is to it? Well, no, we're not finished yet, not by a long chalk. 

So far we've only considered Target's costs: what about those whose personal information was disclosed, and the banks and other companies who have lost out to identity fraud? How much has the incident as a whole cost? How on Earth can we measure or calculate that? Once again, the short answer is that we can only estimate at best. 

What price would YOU put on the personal aggravation and grief caused by discovering that YOUR privacy has been breached and you may be the victim of identity theft? Go ahead, think about it and name your price! If enough of us did so, we might generate some sort of mean value but it's obviously highly subjective and doubtless extremely sensitive to the context and the precise questions we pose - plus of course there's the issue of our sampling strategy and sample size, since we can't ask everyone. Unfortunately, even a small error in our per-victim cost estimate will be massively amplified if we multiply that by the 40 million, so we really ought to take more care over this if the numbers matter - which they surely do as we'll come on to in a moment.
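To see why that care matters, here's a toy illustration of the amplification, with entirely invented figures:

```python
# How a small error in the per-victim cost estimate is amplified
# across 40 million victims (the estimate itself is pure guesswork)
victims = 40_000_000

estimate = 10.00   # assumed per-victim cost, USD
error = 0.50       # a seemingly trivial 50-cent error in that estimate

total_error = error * victims
print(f"Aggregate error: ${total_error / 1e6:.0f} million")   # $20 million
```

A 5% wobble in a subjective survey answer becomes a $20m swing in the incident total - hardly a rounding error.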

First, though, consider that a relationship between the total cost of a privacy breach/incident and the number of records disclosed is generally implied, but that is another unproven and potentially highly misleading assumption. We don't actually know the nature of the relationship, and it is likely to vary according to a number of factors aside from just the number of records. Identities belonging to the rich and famous are probably worth much more to identity thieves than those belonging to the poor, for example, so a breach involving data from high-worth individuals, organizations or celebrities seems likely to result in greater losses than one involving the same number of records for "ordinary" people. Different items or types of information vary markedly in their inherent value (e.g. contrast the value of someone's email address or phone number with their credit card number - and then consider the additional value to fraudsters of obtaining multiple items in linked records). One might argue on basic arithmetic that the per-record costs decrease exponentially as the number of records increases, or that the relationship is non-linear due to the additional impact of news headlines with nice round figures ("more than 40 million" is worse than "almost 40 million", and far worse than "40 thousand"!).

In privacy breaches, the black-market price of credit card numbers etc. is sometimes used to estimate the overall costs (e.g. if 'the average' record is worth, say, $2 to criminals, then 40m records are worth $80m). That simplistic approach begs various questions about how we determine the black-market price (which, by its very nature, is not openly available information), and at what point we measure it (since the value of stolen credit card numbers declines quite rapidly as word about the incident spreads and victims, banks and credit card companies progressively identify and cancel the cards). Furthermore, the costs accruing to the victims (i.e. Target and its owners/stakeholders, the data subjects, the banks and other institutions involved, oh and the FBI, police etc.) as a result of the incident may be related to but almost certainly exceed the profits accruing to the identity thieves, fraudsters and assorted middle-men exploiting it. Society as a whole picks up the discrepancies in a diffuse fashion.
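The timing point is worth spelling out. If (a big if) the black-market value of the haul halves every week as cards get cancelled, the "when do we measure it?" question dominates the estimate. All figures here are assumptions for illustration:

```python
# Black-market valuation of stolen card data, with the decay described above:
# card values fall as banks and victims cancel cards (all figures assumed)
records = 40_000_000
price_at_theft = 2.00   # assumed 'average' black-market price per record, USD

print(f"Naive valuation: ${records * price_at_theft / 1e6:.0f} million")  # $80m

# Assume half the remaining value evaporates each week as cards are cancelled
for week in range(5):
    value = records * price_at_theft * 0.5 ** week
    print(f"Week {week}: ${value / 1e6:.0f} million")
```

Under that assumed decay, the same haul is 'worth' $80m or $5m depending on whether we price it at the moment of theft or a month later.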

That brings us to our final issue. Who cares how much infosec incidents such as this actually cost anyway? It matters because the information gets used in all sorts of ways, for example to justify investment in information security and privacy controls, incident management, insurance premiums, identity theft cover, contingency sums and more. It gets used for budgeting and benchmarking, for policy- and law-making. It feeds into our general appreciation of the information risks associated with personal information, and information risks as a whole. 

Stepping back a pace or two, this whole issue could be considered the elephant in the room for information risk and security professionals. We put enormous effort into promoting and justifying investments in information security controls to reduce the probability of, and damage caused by, incidents, trying our level best to persuade management to take heed of our concerns, support our business cases and invest adequately in security, especially proactive measures, systematic approaches and good practices such as ISO27k. Yet if we look coldly and dispassionately at the situation, including the assumptions and arguments laid out above, it could be said that incidents are not nearly as bad as we tend to make out - in other words, we are crying wolf.

Oh oh!  I guess we ought to firm up some of those estimates and assumptions, pronto, before we all lose our jobs! Metrics do matter, in fact.

PS The 2015 Verizon Data Breach Investigation Report attempts to define the mathematical relationship between 'Payout' and 'Records Lost' in so-called data breach incidents (see figure 21 and associated text), but acknowledges that although they have improved their model, they still don't have a firm grasp of all the relevant factors. Perhaps this blog piece will prompt them to re-evaluate their assumptions and presumptions, maybe even to do the research, given the data and other resources available to them. Don't hold your breath though. I fully expect the mythical linkage between incident costs and records compromised to persist for many years yet, despite my best efforts. It's the infosec equivalent of the search for the holy grail - the Monty Python version.
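For what it's worth, the kind of model the DBIR describes can be sketched in a few lines: regress log(cost) on log(records) across a set of incidents. The incident figures below are invented purely for illustration (only the final pair echoes the Target numbers above), so treat the fitted coefficients as a demonstration of the method, not a finding:

```python
# Sketch of a log-log fit of incident cost against records compromised,
# in the spirit of the Verizon DBIR model. Incident data is invented.
import math

incidents = [            # (records compromised, total cost USD)
    (1_000, 50_000),
    (10_000, 300_000),
    (100_000, 1_500_000),
    (1_000_000, 8_000_000),
    (40_000_000, 162_000_000),
]

xs = [math.log10(r) for r, _ in incidents]
ys = [math.log10(c) for _, c in incidents]

# Ordinary least squares for y = a*x + b
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"log10(cost) = {a:.2f} * log10(records) + {b:.2f}")
# A slope below 1 would mean per-record costs fall as breaches get bigger
```

Note how much is smuggled in: which incidents we sample, whose cost figures we trust, and the assumption that a straight line in log-log space is the right shape at all.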

Thursday 19 March 2015

ISO/IEC 27000 - UPDATED

I spent the morning reviewing the Draft International Standard version of ISO/IEC 27000:2016, in particular checking carefully through the definitions section for changes since the current 2014 version in order to update our information security glossary. 

I noticed that "executive management" is now defined in addition to "top management". The difference between these terms is quite subtle: whereas it appears the former always refers to the most senior managers within the entire organization, the latter may refer to senior managers within a business unit, department etc. if the scope of an ISMS is limited to that particular business unit, department etc. In practice, this distinction is unhelpful except under particular circumstances which might have been covered by a note to the definition of "executive management". To this native English speaker, "top management" is an obscure term - in fact I don't recall ever hearing or using it outside the ISO27k context. "Senior management" or "executive management", certainly, but not "top management", with, by natural extension, the rather unsavoury implication that there might also be "bottom management". I guess this is simply the outcome of someone literally translating the equivalent term in another language, and I look forward to the day when "top management" is finally expunged from the ISO27k lexicon.

Being a perfectionist by nature, I could and indeed sometimes do quarrel with many of the definitions in ISO/IEC 27000. Most of the terms are perfectly well defined in the dictionary - in fact, dictionaries generally make a much better job of it. While the committee might claim that they are defining specialist terms of art, good dictionaries include such specialist, obscure or archaic definitions but these normally follow the current usage, which is expressed in plain English written by professional linguists. The sequential approach is especially helpful for those who struggle with English, since the initial plain-language definitions make it easier to comprehend the specialist definitions that follow, in most cases anyway.

Take "risk" for example. ISO/IEC 27000 defines risk very succinctly and broadly, though not very helpfully, as "effect of uncertainty on objectives", and then goes on to muddle things with 6 'notes to entry' (i.e. notes!) which include further, often equally vague or obscure definitions (e.g. "An effect is a deviation from the expected - positive or negative"). Only the last two notes refer specifically to information security risk, the remainder being generic. Compare that to, say, the Oxford Dictionaries definitions of "risk". Do you see what I mean?

Similarly, "organization" is defined in the standard as [a] "person or group of people that has its own functions with responsibilities, authorities and relationships to achieve its objectives (2.56)" with a 'note to entry' that gives some examples of what we generally understand to be organizations, albeit using pseudo-legalese ("... includes but is not limited to ..."). The definition mentions that an organization might be a person, whereas in common use a person is, well, a person. Furthermore, elsewhere the same standard specifies "person or organization", making the distinction redundant. A lay person would not define an organization as "... a group of people that has its own functions ...": the definition is curious, to say the least. Although strictly speaking "objectives" are not defined, the singular form is defined in section 2.56 as "result to be achieved" followed by another 4 'notes to entry'. All-in-all, the standard makes a meal of defining a relatively simple, straightforward concept, and this is definitely not the only such instance. Several definitions are incomplete, misleading or inaccurate, some are gibberish, a few are self-referential (e.g. the definition of "external context" starts with "external environment" without any attempt to define "external") and one uses the term "conformance" which is deprecated in the very same document.

Worst of all, several terms or concepts that are absolutely crucial to information security (such as "accountability", "information" and "information risk") remain totally undefined.

As if that's not enough, the formatting (fonts and spacing) is inconsistent ... and I'm still only half way through the document's 40-odd pages. Life's too short!

Now perhaps you appreciate why I chose to illustrate this post with that nice black-and-white photo of a right pig's ear ...


UPDATE (July 2016): the 4th edition of ISO/IEC 27000 is now available for FREE

Tuesday 3 March 2015

Comparative security metrics

In situations where it is infeasible or impracticable to quantify something as a discrete count or absolute value in specific units, comparative or relative measures are a useful alternative. They are better than not measuring at all, and in some cases easier to comprehend and more useful in a practical sense. In this respect, we disagree with those in the field who fervently insist that all metrics must be expressed as numbers of units (e.g. "20 centimetres"). It seems to us that "a bit longer than a pencil", while obviously imprecise, might be a perfectly legitimate and helpful measure of something (regardless of what that thing might be - a cut on your arm, for instance).

Cardinal numbers and units of measure have their place, of course, but so do ordinals, comparatives and even highly subjective measures - all the way down to sheer guesswork (and, yes, 'down to' itself implies a comparative value). Douglas Hubbard's "How To Measure Anything" is an excellent, thought-provoking treatise on this very subject.

In information security, comparisons or relations can provide answers to entirely valid and worthwhile questions such as:
  • Are we more or less secure than our peers?
  • Are we getting more or less secure over time?
  • If we both sustain our present rates of change, how long will it be before we surpass our competitors' level of information security?
  • Are our information risks increasing or decreasing?
  • Which are our strongest and weakest areas or aspects of security?
  • Of all the myriad changes currently occurring in information security, what are the most worrying trends?
  • Does information risk X fall within or exceed our risk appetite or tolerance?
  • Which business unit, function, department or site is the most/least vulnerable?
  • Are we spending too little, about the right amount, or too much on information security?
As part of an information security awareness case study on 'the Sony hack', a management discussion paper describes three types of comparative security metrics with several examples of each.
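One of those questions - how long before we surpass our competitors - reduces to simple extrapolation once both parties have a comparable score and a rate of change. A sketch with invented benchmark scores (say, annual assessments out of 100):

```python
# Naive linear extrapolation of comparative security scores
# (all scores and rates are invented for illustration)
our_score, our_rate = 62.0, 4.0        # current score, improvement per year
their_score, their_rate = 70.0, 1.5    # competitor's score and rate

if our_rate > their_rate:
    years = (their_score - our_score) / (our_rate - their_rate)
    print(f"Crossover in roughly {years:.1f} years")   # 3.2 years
else:
    print("At these rates we never catch up")
```

Crude, certainly - rates of change rarely stay constant - but note that the answer is useful even though neither score is an absolute measure in 'real' units.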

Free Sony hack case study

We have just published our security awareness case study on the Sony hack under a Creative Commons license.

The information sources are fully cited and referenced in the materials – all public domain stuff and no special inside-track from Sony I’m afraid*, hence there are probably errors and certainly omissions … and yet nevertheless this was a remarkably instructive incident touching on an unusually wide range of information security topics.

One aspect that stands out for me is that, since information is Sony’s lifeblood, information risks are business risks.  Regardless of whether the North Koreans were or were not behind the hack, management’s strategic decision to press ahead with The Interview undoubtedly affected Sony’s information risk profile.  Their strategic approach towards information and IT security has been implicated in several major infosec incidents over the years.  There are lessons here about governance, risk management and security strategy.

The ongoing incident management and business continuity aspects are also interesting.  The Sony hack may no longer be all over the news but (as far as I know) we have yet to discover how they ultimately responded to the extortion demands, and whether the FBI are homing-in on the culprits.  Meanwhile, Sony recently had to ask for a special dispensation to miss a critical business reporting deadline as a result of the disruption caused to its systems and processes.  It’s not hard to imagine the internal turmoil behind their relatively calm public statements.


* Hey, wouldn't it be good to have the information security equivalent of the official air accident investigations or public inquiries into other types of major incident i.e. a thorough, detailed examination of the facts by highly competent, diligent and independent experts with unrestricted access to the necessary information, leading to a public report with sound improvement recommendations to help us all avoid falling into the same traps?  ...

Sunday 1 March 2015

Annual malware awareness update

Every March we update and re-issue our awareness module on malware (viruses, worms, Trojans and all that junk).  This year, we've picked up on three disturbing trends in the murky world of malware - three risks that are in or heading towards the red zone.

The Sony hack security awareness case study presented a golden opportunity to demonstrate how today's sophisticated malware-fueled attacks work ... so that's exactly what we did. 

Malware being such a technical topic, it becomes more of a challenge every year to explain cutting-edge malware in terms that everyone can understand ... but that's the kind of challenge we relish!

Does your security awareness program cover malware?  Is it bang up to date on today's malware landscape?  Do you have the time to research and prepare top-quality adult education materials that motivate your audiences to behave more securely?  Have you been able to take advantage of Sony's misfortune and learn from their mistakes?    

Security awareness is our passion.   It's what we do.