Saturday 30 March 2013

Our tenth anniversary module

The new “Taking chances” awareness module is about identifying, assessing and dealing with information security risks and opportunities. 
Whereas information security and risk management professionals, as a breed, are generally risk-averse, the awareness materials this month acknowledge pragmatically that there are legitimate business reasons to accept some information security risks, to take chances deliberately: the trick is to know which ones to live with, and which to avoid, pass to someone else or mitigate.
Animals deal with safety risks routinely at a subconscious level, avoiding extreme dangers instinctively and learning to avoid other risks through teaching, by observing their parents and peers, or by trial and error: the ability to learn and so change our behavior is a vital survival skill.  In a sense, organizations also have both instinctive and learned reactions to risks.  This month's awareness module passes on decades of real-world experience with the management of information security risks.
Some cynical graybeard information security professionals feel that the methods commonly used to analyze risks are little better than chicken entrails at predicting the future.  By explaining the elements of the risk management process, we demonstrate that rational analysis, prioritization, treatment and monitoring of information security risks does give us a bit of an edge over those entrails, and perhaps in our own small way we can help advance the profession a little.  It’s not all hocus pocus!
"Taking chances" is our 120th monthly module, in other words we have  successfully navigated our first decade in security awareness.  We're still trying to decide how best to celebrate our tenth birthday so watch out for a news update once we sober up from the office party.
Happy Easter all!

Thursday 28 March 2013

Molds and parasites - new families of malware

The following paragraph remains unredacted in a heavily redacted NSA newsletter from 1996:
"The most harmful computer virus will not be the one that stops your computer, but the one that randomly changes or corrupts your data over time."
Malware that causes data corruption perhaps ought to be called a fungus or mold rather than a virus, but I guess "virus" remains the nondescript, all-purpose term preferred by journalists and laypeople alike. 

Anyway, I partially agree with the statement. Compared to incidents that are as crude and noisy as completely stopping the computer, more sophisticated and silent attacks (such as those behind APTs - Advanced Persistent Threats) are more dangerous and insidious because they can continue unabated for longer.  As with a parasite that exploits its symbiotic relationship with the host, a lengthy infection starts off with the host barely even recognizing that it has been victimized.

Random data corruption is a concern, for sure, but is fairly noisy in its own right. Creeping data corruption in a relational database system, for instance, will eventually fall foul of the built-in database integrity controls, and may well be spotted by users who are aware and intelligent enough to appreciate that just because the computer says something does not necessarily mean it is true.  
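
By way of illustration, here is a minimal Python sketch of the kind of row-level checksum control that can expose silent tampering; the field names and the SHA-256 choice are mine, not prescriptive:

    import hashlib

    def row_checksum(row: dict) -> str:
        """Hash the row's business fields in a fixed order for stability."""
        material = "|".join(str(row[k]) for k in sorted(row) if k != "checksum")
        return hashlib.sha256(material.encode("utf-8")).hexdigest()

    def tampered(rows: list) -> list:
        """Return rows whose stored checksum no longer matches their content."""
        return [r for r in rows if row_checksum(r) != r.get("checksum")]

    record = {"id": 2, "amount": 100.00}
    record["checksum"] = row_checksum(record)
    record["amount"] = 100.01          # creeping corruption after the fact
    print(tampered([record]))          # -> [the corrupted record]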

So what about directed data corruption, where the malware targets particular data items and makes specific but relatively subtle changes?  Such a mold could be used to manipulate the system, the data, the users and their decisions in a concerted manner, leading them a merry dance for as long as possible before the inconsistencies came to light, by which time it might be too late to act.  The changes might appear as innocuous typoos in textual information (generally overlooked) or as slight but consistent biases in numeric data.  Numeric changes might perhaps be picked up by statistical integrity-checking routines or Benford's Law - provided anyone bothered to consider the risk, and to implement and use the controls, that is.  Aside from the NSA paper and our own security awareness materials on the topic of integrity, I have not seen this risk discussed (maybe I just missed it).
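
For the curious, a Benford first-digit screen takes only a few lines; in this minimal sketch the chi-squared alarm threshold is a convention, not a law of nature:

    import math
    from collections import Counter

    BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

    def first_digit(x):
        s = str(abs(x)).lstrip("0.")
        return int(s[0]) if s and s[0].isdigit() else None

    def benford_chi2(values):
        """Chi-squared statistic of observed leading digits against
        Benford's Law; with 8 degrees of freedom, anything much above
        20 (p < 0.01) suggests the numbers deserve a closer look."""
        digits = [d for d in map(first_digit, values) if d]
        if not digits:
            return 0.0
        n = len(digits)
        observed = Counter(digits)
        return sum((observed[d] - n * p) ** 2 / (n * p)
                   for d, p in BENFORD.items())

    # e.g. benford_chi2(payment_amounts) - 'payment_amounts' is hypothetical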

To close, let me return to the idea of parasitic malware.  Some living parasites have evolved the capability to alter their host's behavior, secreting toxins or hormones if not directly stimulating the host's nervous system.  Ophiocordyceps unilateralis, for example, is a fascinating parasitic fungus that infects certain ants, causing them to climb and cling to the top of foliage, where the parasite kills them and sends out its fruiting bodies and spores over a wider area than it could have reached if the ants had remained at ground level.  Imagine now an APT that not only stole and manipulated information but also influenced the operational and strategic decisions made by managers and staff, changing the way the organization behaved.  

Remember this if your organization seems, for no obvious external reason, to be climbing the foliage.

Metric of the Week #50: policy clarity

Information Security Metric of the Week #50: proportion of information security policies that are clear

This week's worked example concerns a metric that looks, at first glance, to be far too subjective to be of any value ... but read on.

Clarity is a critical requirement for information security policies, and indeed for policies of every kind.  Policies that are garbled, convoluted, rambling and full of jargon are less likely to be read and understood by the people who are expected to comply with them.  Conversely, well-written, succinct policies are easier to read and understand, and a 'motivational' style further encourages compliance.

It's obvious, isn't it?  So how come we still occasionally see policies written in the worst kind of legalese, stuffed with obscure/archaic language in an embarrassingly amateurish attempt, presumably, to appear "official"?

The suggestion for this metric involves regularly surveying employees' opinions on the clarity of the security policies, using a Likert-scale questionnaire developed and administered in person by a market research firm.   

Applying the PRAGMATIC method, ACME managers expressed some interest in this metric, but raised concerns on a number of fronts:


P     R     A     G     M     A     T     I     C     Score
75    70    68    41    96    50    56    90    34    64%
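
Incidentally, the overall Score in these worked examples appears to be simply the rounded mean of the nine ratings - easy to check with a couple of lines of Python:

    ratings = {"P": 75, "R": 70, "A (Actionability)": 68, "G": 41, "M": 96,
               "A (Accuracy)": 50, "T": 56, "I": 90, "C": 34}

    score = round(sum(ratings.values()) / len(ratings))
    print(f"{score}%")   # -> 64%, matching the Score column above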

There are at least three advantages to this metric, the first one more obvious than the others:
  1. Aside from the overall clarity/readability level, the metric should identify policies that are perceived as better or worse than average, providing shining examples and opportunities for improvement, respectively.  This is useful new information for the authors of policies, making this a beneficial operational security metric. 
  2. Asking employees about the policies will inevitably find some unable or unwilling to state an opinion because they cannot recall reading the policies.  The proportion of 'no response' returns is therefore a rough measure - an indicator - of the extent to which policies are actually being read.
  3. Asking employees about policies also prompts them, and hopefully their colleagues, to (re)read the policies in order to give an opinion ... which is itself a useful awareness outcome.
On the downside, however, the metric would be Costly and slow to collect if a market research company were engaged.  It could be run more cheaply by internal staff such as Information Security but, since they are generally responsible for the policies, they might be tempted to influence or manipulate the metric to make the policies appear clearer than they really are - or at least they would find it difficult to prove that they were completely unbiased.  Increasing the Cost-effectiveness rating in this way would therefore depress the Independence rating and, perhaps, the Genuineness and Accuracy ratings, which are already low because this is such a subjective matter.
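
To make the measurement concrete, here is a minimal Python sketch of how the headline proportion and that 'no response' indicator might be computed; the five-point scale and both 'clear' thresholds are my assumptions, details that would need fixing when the metric was specified:

    def clarity_metrics(responses):
        """responses maps policy name -> list of 1-5 Likert ratings,
        with None recorded where an employee could not or would not
        answer.  A policy counts as 'clear' if most respondents rate
        it 4 or better - both cut-offs are conventions, not givens."""
        per_policy = {}
        for policy, ratings in responses.items():
            answered = [r for r in ratings if r is not None]
            if answered:
                per_policy[policy] = sum(r >= 4 for r in answered) / len(answered)
        proportion_clear = (sum(share > 0.5 for share in per_policy.values())
                            / len(per_policy))
        everything = [r for ratings in responses.values() for r in ratings]
        no_response_rate = everything.count(None) / len(everything)
        return proportion_clear, no_response_rate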


There is a further advantage to having members of Information Security conduct the surveys in person: it would give them a legitimate reason to leave the sanctuary of the security office, get out among their colleagues and pick up on things that are really going on in the business.  Their colleagues, at the same time, would get to meet and chat with Information Security people in an informal, non-confrontational situation, and could perhaps raise other queries or concerns.  This would be particularly valuable if the security team was reclusive and shy, or appeared aloof and distant.


[A variant of this metric involving an automated assessment of clarity was also tabled for consideration by ACME management: it will appear on this blog at some point so you can either wait patiently to find out about it or look it up in Chapter 7 of the book!]
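
For a flavor of what an automated clarity check might involve (not necessarily the variant in the book), here is a crude Flesch Reading Ease sketch in Python; the syllable counter is deliberately rough:

    import re

    def count_syllables(word):
        # Crude vowel-group count - fine for trends, not for grading essays.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text):
        """206.835 - 1.015*(words per sentence) - 84.6*(syllables per word).
        Higher is clearer; scores below ~50 are conventionally 'difficult',
        which might flag a policy for a plain-language rewrite."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text) or ["placeholder"]
        syllables = sum(count_syllables(w) for w in words)
        return (206.835 - 1.015 * (len(words) / sentences)
                - 84.6 * (syllables / len(words)))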

Thursday 21 March 2013

On cryptography


On Cryptography

The focus on key length obscures the failures of cryptography
Mar 21, 2013 | 07:39 AM |  No comment
By Gary Hinson 
Light Reading 
Should companies continue sinking yet more money into cryptography? It's a contentious topic, with respected experts on both sides of the debate. I personally believe that cryptography is generally a waste of time and that the money can be spent better elsewhere. Moreover, I believe that our industry's obsessive fascination with crypto serves to obscure greater failings in security design.
In order to understand my argument, it's useful to look at cryptography's successes and failures. One area where crypto doesn't work very well is health. We are forever trying to secure health records using encryption.  We apply the very finest mathematical and statistical trickery known to Man to scramble them beyond comprehension.  But then medics go and decrypt them in order to use them, callously undoing our good work!  What is it with these people?  Don't they realize that plaintext health records can be read by anyone?  Couldn't they at least give hexadecimal a go?  There's a lot to be said for doctors hand-writing their notes, in Latin, with a quill.
Similarly, cryptography is an abstract "benefit" that gets in the way of using and enjoying the Internet. Good cryptographic practices might protect me from a theoretical attack by a marauding horde of keyboard-tapping monkeys at some time in the future, but they’re a bother right now, and I have more fun things to think about than how many rounds of Ess- and Pee-boxes are necessary.  No one except cryptographers actually reads and comprehends new cryptographic algorithms; for the rest of us, it's much easier to just click "OK" and start chatting with our friends. In short: crypto is not for Joe Public.
One reason crypto remains the domain of egg-heads is that cryptographers do their level best to make sure it is a dark, mysterious, magical art. We can train anyone in the basics -- even software developers -- with a simple reward mechanism: increase the key by one bit, double the effort required to brute force it. But instead we imply that crypto is not quite so easy. With smoke and mirrors, we seed those little germs of doubt.  Is 'one more bit' enough?  How many bits do you really need?  Is each new bit worth the same as all those old bits?  If you have too many bits, will you go to pieces?  Is it your fault if someone breaks my beautiful algorithm by circumventing the random number generator that you thought was quietly factoring the least significant figures of pi? 
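
For what it's worth, the 'one more bit doubles the effort' reward mechanism is at least arithmetically sound.  A quick Python sketch, with an assumed (and generous) guess rate:

    SECONDS_PER_YEAR = 31_557_600      # Julian year
    GUESSES_PER_SECOND = 1e12          # assumed attacker capability

    def years_to_break(key_bits, rate=GUESSES_PER_SECOND):
        """Expected brute-force time: half the keyspace at a fixed rate.
        Every extra bit doubles the figure - that is the whole trick."""
        return (2 ** key_bits / 2) / rate / SECONDS_PER_YEAR

    for bits in (56, 57, 128):
        print(f"{bits}-bit key: {years_to_break(bits):.3g} years")
    # 56-bit: ~0.00114 years (about ten hours - goodbye DES);
    # 57-bit: double that; 128-bit: ~5.4e18 years, marketing aside.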
Training laypeople in cryptography also isn't very effective: why is it that laypeople and IT professionals alike seem unable to make perfectly straightforward decisions concerning obscure parameters on oh-so-elegant algorithms when configuring their systems and browsers?  Are they simply thick or are they being deliberately obstructive?  Turns out that it's a bit harder than one might think to teach ordinary mortals advanced theoretical mathematics. We can't expect every mother to have the knowledge of a cryptographer and we certainly can't expect her to become a crypto-expert when most of the advice she's exposed to comes from cryptographers' blogs. In cryptography, too, a lot of so-called expert advice comes from companies with products and services to sell, some of it good, some of it ... fantastic, according to their marketing anyway.
Talking of which, one area of cryptography that has been a tremendous commercial success is churn. Why release a cryptographic system that is provably secure for a zillion years when we can fool everyone into adopting a crippled variant that will fail within ten?  Even better, let's publish its inner workings in explicit detail, and fund a ravenous mob of cryptanalysts to smash it to pieces in public like the statue of a deposed dictator so there is no choice but to deprecate it, discard an entire generation of broken software and replace it ... with ... something based on ... the next crippled variant.  This points to a possible way that cryptography can succeed.  Instead of trying to design ever more fantastically convoluted and beautiful machines, perhaps we ought to focus our efforts on making them usable and maintainable by ordinary mortals, greasy oiks armed with monkey wrenches instead of PhDs in astrophysics.
On the other hand, we still have trouble teaching some cryptographers to wash -- even though it’s easy, fairly effective, and simple enough to explain if we used diagrams with numbers. Notice the difference, though.  The risks of cryptographic failure are huge, and the cause of the failure is obvious. The risks of not washing are low, and it’s not easy to prove personal hygiene is necessary in a formal model. Some might claim that the world of cryptography stinks. Is it any wonder that cryptographers are shunned by security architects?
Another illustration of the outright failure of cryptography is driving. We trained, either through formal courses or one-on-one tutoring, and passed a government test to be allowed to drive a car. We're even allowed to fill up by ourselves and some of us maintain our own vehicles.  One reason that works is because we have car manuals with exploded parts lists and step-by-step instructions. Even though the technology of driving has changed dramatically over the past century, we don't have to worry ourselves over transposition functions and matrix algebra.  You might have learned to drive and service a vehicle 30 years ago, but that knowledge is still relevant today.  What use is a DES-expert now, eh?  Triple-DES was the beginning of the end of that era.  "It's no use,"  I told them, "hanging on to the thought of quad-DES.  It's over I tell you, over."
To those who think that cryptography is a good idea, I want to ask: "Have you ever met an actual cryptographer, in the flesh?" They're not human, and we can’t expect them to become human. They inhabit a bizarre world populated by people called Alice and Bob who insist on chatting about their most personal secrets on phone lines despite knowing they are being tapped.  
Even if we could invent a provably-effective cryptographic system (don't laugh - it has already been done), there's one last problem. Malware prevention training works because affecting what the average person does is valuable. Even if only half of the population practices safe hex, those actions dramatically reduce the spread of worms and Trojans. But computer security is often only as strong as the weakest link. If four-fifths of company employees learn to choose better passwords, or not to click on dodgy links, that's four-fifths who can thumb their noses at the bad guys.  But there's no such thing as a four-fifths broken cryptosystem.  It's all-or-nothing with crypto - a teeny weeny bit too little entropy and they fail spectacularly.  As long as we continue to build cryptosystems with built-in obsolescence and key escrow, raising the 'number of bits' won't make them more secure.  It's the magician's diversion.
The whole concept of bit-length being a measure of the strength of cryptography demonstrates how the cryptographic industry has failed. We should be designing cryptosystems that don't care if users choose lousy passwords and don't mind what links a user clicks on. We should be designing cryptosystems that are provably unbreakable, not provably broken.  And we should be spending money on personal hygiene for cryptographers. These are people who, with patience and understanding, can be taught the necessary skills in a safe changing-room environment, and this is a situation where reduced odor correlates with increased security.
If cryptographers would only do their job right, then IT users and administrators would not have to worry about the number of bits or "how complex is complex".  Alice and Bob wouldn't have to plan on replacing their systems yet again because Eve knows their innermost secrets.  That makes a whole lot more sense.
Gary Hinson is a cynic with a sense of humour (with a 'u').  He researches and writes cost-effective security awareness materials by day and pragmatic books on security metrics by night.  Despite appearances, he actually values cryptography, respects cryptographers and is simply reacting instinctively to a poke in the ribs from one of his idols.

Tuesday 19 March 2013

Metric of the week #49: infosec risk score

Security Metric of the Week #49: information security risk score

Most risk analysis/risk assessment (RA) frameworks, processes, systems, methods or packages generate numbers of some sort - scores, ratings or whatever - measuring the risks.  We're not going to delve into the pros and cons of various RA methods here, nor discuss the differences between quantitative and qualitative approaches.  We know some methods only go as far as categorizing risks into crude levels (e.g. low, medium, high, extreme), while others produce percentages or other values.  We assume that ACME has chosen and uses one or more RA methods to analyze its information security risks, and on the whole management finds the RA process useful.
  
The question is: are information security risk scores produced by RA valuable as an information security metric?


P     R     A     G     M     A     T     I     C     Score
72    60    55    70    71    40    60    60    50    60%

In the view of ACME's management, the RA scores have some merit as an information security metric.  They do quite well in terms of their Predictiveness, Genuineness and Meaningfulness, but there are concerns about their Actionability, Accuracy and Cost-effectiveness.  The overall PRAGMATIC score is somewhat disappointing.


If ACME management definitely wants to measure information security risks but there are no higher-scoring metrics on the cards, they have a few choices.  They might:

  • Accept the metric as it is;
  • Weight the PRAGMATIC ratings to emphasize the factors that are most important in this specific area (e.g. Predictiveness and Relevance), then re-compare the scores and reconsider the candidate metrics; 
  • Adopt this metric as a temporary measure for now but, while gaining experience of the metric, actively search for something better;
  • Reject the metric and carry on searching for something better;
  • Make changes to the metric in order to address its PRAGMATIC weaknesses, hopefully without compromising its strengths;
  • Conduct a trial, comparing a few metrics in this area, including variants of this one, over the course of a few months;
  • Reconsider what it is that they really want to know and try to be more explicit about the goals or objectives of measurement in order to prompt the selection/design of better metrics candidates;
  • Review the PRAGMATIC ratings for this and other risk-related metrics, challenging their assumptions and considering more creative approaches;
  • Adopt complementary metrics or use some other approach to compensate for the weaknesses in this metric.

Friday 15 March 2013

Risk matrix

I developed this 3x3 matrix today as part of an information security awareness module about risk management.  The matrix plots the severity (or impact or consequences or costs) of various kinds of information security incident against the frequency (chance, probability or likelihood) of those same kinds of incident.  It is color coded according to the level of risk.

Clearly, the incidents shown are just a few illustrative examples.  Furthermore, we could probably argue all day about their positions on the matrix (more on that below).

Some might claim that it doesn't even qualify as a metric since there are no actual numbers on the matrix.  "Cobblers" I say!  It is information that would be relevant to decision making and that's good enough for me, but if you feel so inclined, go ahead and pencil-in some suitable numbers at the boundaries of those severity and frequency categories ... and be prepared to argue with management about those numbers as well!
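
For concreteness, here is the matrix reduced to a few lines of Python; both the boundary numbers penciled in at the category edges and the color pattern are purely illustrative, since (as noted above) we could argue all day about where everything belongs:

    def severity_band(cost):           # incident cost, dollars (illustrative)
        return 0 if cost < 10_000 else 1 if cost < 1_000_000 else 2

    def frequency_band(per_year):      # expected incidents per year (illustrative)
        return 0 if per_year < 0.1 else 1 if per_year < 10 else 2

    RISK = [                           # rows: severity low -> high
        ["green", "green", "amber"],   # cols: frequency low -> high
        ["green", "amber", "red"],
        ["amber", "red",   "red"],
    ]

    def risk_level(cost, per_year):
        return RISK[severity_band(cost)][frequency_band(per_year)]

    print(risk_level(50_000, 20))      # -> 'red'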

Anyway, it set me thinking about whether the matrix might form the basis of a worthwhile information security metric.  The obvious approach is to score it on the PRAGMATIC scale, in the context of an imaginary organization ("ACME Enterprises" will suffice):

P     R     A     G     M     A     T     I     C     Score
70    70    50    65    70    40    55    45    85    61%


The 61% score implies that this metric has some potential for ACME, but it doesn't stand out as a must-have addition to their information security measurement system.

There is a question mark about its Independence since the people most likely to be preparing such a matrix (i.e. information security pros like me) generally have something of a vested interest in the risk management process.  

I wouldn't exactly call it Actionable since it doesn't directly indicate what we need to do to reduce the severity or frequency of the incidents ... but it does at least show that reducing either aspect will reduce the risk.  It is Actionable in the sense that preparing and discussing such a matrix would be a useful part of the risk analysis process, particularly if the discussion engaged other risk professionals and business managers (which would also increase its Independence rating, by the way).

Involving managers in the process of preparing, reviewing or updating the matrix would increase the cost of the metric, but at the same time would generate more value, so the Cost-effectiveness would end up more-or-less unchanged.  

I mentioned earlier that we might 'argue all day' about the positions of incidents on the matrix, but in so doing we would be considering and discussing information security incidents, risks and controls in some depth - which is itself a valuable use or outcome for this metric.  Understanding the risks would help management prioritize them and, in turn, allocate sufficient resources to treat them sensibly (for example investing in better malware controls rather than, say, an expensive state-of-the-art CCTV system to reduce a risk that is already in the green zone).

So, all in all, it's an interesting security metric although it needs a bit more thinking time yet ...

UPDATE: there is a remarkably similar metric at the heart of the "Analog Risk Assessment" (ARA) method.

Wednesday 13 March 2013

Measuring things right

A random comment of unknown origin fluttered into my consciousness today:


“Made with the highest attention to the wrong detail”

It set me thinking about the waste of effort that goes into oh-so-carefully measuring and reporting the wrong things, for reasons that include:

  • Failing to determine the information actually required, and/or mistakenly assuming the nature of the inquiries (and, perhaps, reporting to the wrong audience)
  • Naivete and lack of understanding about metrics, measurement, decision making and/or statistics in general
  • Using certain measures simply because the base numbers and/or the charts, tables and reports are readily available (so they must be useful, right?)
  • Presuming the need for a high level of accuracy and precision when in fact rough-and-ready indicators would be perfectly adequate (and cheaper) (and quicker)
  • Analytical errors e.g. believing that the measured item is well-correlated with or predictive of something of interest when in fact it isn't
  • Brain-in-neutral: no discernible thought patterns or reasoning behind the choice of metrics, at least nothing concrete that we can recall if pressed
  • Falsely equating voluminous data with information or knowledge (knowing everything about nothing)
  • Adopting metrics used, recommended, mentioned or discussed by others in conversations, articles, standards, metrics catalogs, websites or blogs (yes, including this one!) without considering one's own information needs and the differing contexts
  • Giving the appearance of Being On Top Of Things ("The value is 27.435 ... but ... OK I'm not entirely sure what that means")
  • Generating chaff, deliberately distracting the audience from genuine issues by bombarding them with spurious data, theories, arguments and presumptions
  • Being "too clever by half" - an obsessive/compulsive fascination with particularly complex or obscure yet strangely intriguing metrics having a whiff of magic
  • Being required to spout nonsense either by some authority who perhaps doesn't understand the issues but wants to be seen to be Doing Something, or in accordance with a poorly-considered contract, Service Level Agreement or reporting system
  • Continuing to use (= failing to challenge and revise/withdraw) old metrics long after they should have been dispatched to a rest home for the sadly bewildered
  • Re-using metrics that have proven worthwhile elsewhere/in other contexts on the mistaken assumption that they are 'universal'
  • Filling spaces on a fancy dashboard or management report with "pretty" metrics (eye-candy), triumphs of form over substance
  • Desperation stemming from an apparent lack of alternatives due to limited capabilities and skills, a lack of imagination/creativity and/or no will or opportunity to design or select more suitable metrics
That's some list!  If I'm honest, I am personally guilty of some of them.  I've been there, done that, and still I see others treading the same route.  I think I'm wiser now, but only time will tell if the effort required to think more deeply about metrics and write the book has led to a breakthrough, or whether (or, more accurately, in which respects) I am still deluding myself.

Think about this issue the next time you find yourself poring over a management report or survey, trying to make sense of the content.  Even more so if you are skimming and discarding metrics without making any real effort to understand them.  Ask yourself whether you actually need to know whatever it is you are being told, and why it concerns you.  In GQM parlance, consider the Goals and the Questions behind the Metrics on the table.  Understand also that there may be hidden agendas, false assumptions and plain errors in the numbers.  Consider the methods used to gather, analyze and present the metrics, including their mathematical/scientific validity (e.g. was the population sampled randomly or selectively?  Is the variance significant?  Are there uncontrolled factors?).  Be dubious, cynical even, if it makes you contemplate the true meaning and worth of the information presented.  

For therein hides genuine insight.

Tuesday 12 March 2013

Metric of the week #48: redundant controls

Security Metric of the Week #48: proportion of information security controls that are ossified


Information security risks and controls change gradually within the organization.  From time to time, new risks emerge, new controls are introduced, existing controls find new applications, old risks subside and redundant controls are retired from service - at least in theory they are.  Keeping the portfolio of controls aligned with the risks is the primary purpose of the information security function and the Information Security Management System.  This week's metric is one way to measure how well that alignment is being managed in practice. 

In immature organizations, security controls once accepted and installed seem to become permanent fixtures, integral parts of the corporate infrastructure.  Successive risk analyses lead to the introduction of yet more controls, but nobody ever quite gets around to reviewing and thinning-out the existing corpus of controls.  Gradually, controls accumulate while the business becomes ever more constrained and inefficient, carrying a costly burden of redundant, unnecessary and unsuitable controls.  

This is understandable and perhaps appropriate during the early growth phases of information security/risk management when the focus is on bringing the organization up to an acceptable baseline level of security by implementing basic controls.  After a few years, however, there is a good chance that some of the installed controls are no longer needed.  It's not unusual to find a few controls that nobody understands, needs or supports - their original purpose has long since been forgotten and may in fact no longer exist (e.g. compliance obligations that no longer apply).  Some will have been superseded by other controls, changes in the business, and changes in the risks.

The metric implies that someone has to count information security controls, and determine how many of them are ossified, redundant and no longer necessary.  Counting controls is easier said than done: for example, is every PC running the corporate antivirus software package counted separately?  Is the antivirus package a single control, since it is probably addressing several distinct malware-related risks and perhaps others such as spam?  This is something that would need to be sorted out when specifying the metric in order to ensure that successive readings were directly comparable, since this metric is about the medium to long term trend rather than point values. 
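
A minimal Python sketch of the sort of counting convention that keeps successive readings comparable - here each control type is counted once, however many instances are deployed; all the fields and examples are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Control:
        name: str              # "corporate antivirus" counts once, not per PC
        risks_addressed: set   # possibly empty - the risks may have gone away
        ossified: bool         # no surviving purpose, owner or requirement

    def ossified_proportion(controls):
        """The headline figure; the trend over successive readings
        matters more than any individual point value."""
        return sum(c.ossified for c in controls) / len(controls)

    inventory = [
        Control("corporate antivirus", {"malware", "spam"}, False),
        Control("fax cover-sheet disclaimer", set(), True),  # obligation long gone
    ]
    print(f"{ossified_proportion(inventory):.0%}")           # -> 50%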

According to the management at ACME Enterprises Inc., here is the PRAGMATIC rating for this metric:

P     R     A     G     M     A     T     I     C     Score
85    88    85    80    84    75    22    62    39    69%

They are evidently most concerned about the metric's Timeliness and Cost-effectiveness.  Counting and assessing the status of controls is undoubtedly a tedious, time-consuming process: management is not sure it is worth the effort.

In discussing this metric, ACME's Information Security Manager acknowledged that she ought to be actively looking for ossified controls.  She argued that this was an operational security management activity under her remit and that she did not require a metric, preferring instead to put her efforts into systematically addressing ossified controls as they were identified.  The Chief Information Security Officer, on the other hand, sought reassurance that the process was in fact operating well, and felt that the metric would be a useful way to demonstrate to more senior management that information security was playing an active part in cost containment.

As we go to press, the discussion is unresolved.  Other candidate metrics are being compared on the basis of their PRAGMATIC scores, while the ISM and CISO are exploring their requirements in more depth.  If you were advising them on their metrics, what would you suggest?