Monday 24 September 2012

SMotW #25: critical systems compliance

Security Metric of the Week #25: proportion of critical information assets residing on fully compliant systems

In order to measure this metric, someone has to: 
  1. Identify the organization's critical information assets unambiguously;
  2. Determine or clarify the compliance obligations;
  3. Assess the compliance of systems containing critical information assets.
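Once those three steps have been completed, the arithmetic itself is trivial.  Here is a minimal sketch in Python, assuming a purely hypothetical asset inventory and per-system compliance assessment results (the field names are illustrative, not prescriptive):

# A minimal sketch, assuming a hypothetical asset inventory and hypothetical
# per-system compliance assessment results; field names are illustrative only.

def critical_asset_compliance(assets, system_is_compliant):
    """Proportion of critical information assets residing on fully compliant systems."""
    critical = [a for a in assets if a["critical"]]                # step 1
    if not critical:
        raise ValueError("no critical information assets identified")
    compliant = [a for a in critical
                 if system_is_compliant.get(a["system"], False)]   # step 3
    return len(compliant) / len(critical)

assets = [
    {"name": "customer database", "critical": True,  "system": "db01"},
    {"name": "payroll records",   "critical": True,  "system": "hr02"},
    {"name": "canteen menu",      "critical": False, "system": "web01"},
]
system_is_compliant = {"db01": True, "hr02": False}   # outcome of steps 2 and 3
print(f"{critical_asset_compliance(assets, system_is_compliant):.0%}")   # 50%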

All three activities are easier said than done.  In our experience, the concepts behind this metric tend to make most sense in those military and governmental organizations that make extensive use of information classification, but even there the complexities involved in measuring compliance with a useful amount of accuracy would make it slow and expensive.  Consequently, the low Accuracy, Cost and Timeliness scores all take their toll on the metric's PRAGMATIC score:

P    R    A    G    M    A    T    I    C    Score
48   26   36   41   56   13   19   46   12   33%


Thus far, we have considered and scored this and other example metrics from the perspective of management within the organization.  The situation is somewhat different from the perspective of the authorities that typically impose or mandate security compliance obligations on others.  We are not going to elaborate further ourselves, but leave it to you as an exercise to re-score the metric on behalf of, say, a government agency responsible for privacy.  Imagine yourself inside such a body, discussing information security metrics with management.  What would they make of its Predictability, Relevance to information security, Actionability, Genuineness, Meaningfulness to the intended audience, Accuracy, Timeliness, Independence or integrity, and Cost-effectiveness?  Go ahead, try out the PRAGMATIC method and tell us what you make of it ...

Monday 17 September 2012

SMotW #24: security traceability

Security Metric of the Week #24: Traceability of information security policies, control objectives, standards & procedures

This metric is based on the fundamental premise that all information security controls should be derived from and support control objectives, those being explicit business requirements for security.   Controls that cannot be traced to specific, documented requirements may not be justified, and may in fact be redundant and counterproductive: alternatively, the requirements may be valid but unstated, indicating a likely gap in the organization's policies etc.

The metric implies that there should be a way of tracing, referencing or linking controls with the corresponding security requirements, in both directions: it should be possible for management to determine which control/s satisfy a given control objective, and which control objective/s are satisfied by a given control.  There are various ways of achieving this in practice, such as a 2-dimensional table with control objectives along one axis and controls along the other.  The body of the table can simply contain ticks for the relevant intersections, or more detailed information concerning the implementation status of the controls.  In theory, every row in the table should contain at least one entry in the body, and so should every column: many will have more than one since there is a many-to-many relationship between control objectives and controls.
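For what it's worth, here is a minimal sketch in Python of the two-way completeness check just described, using hypothetical identifiers standing in for real control objectives and controls:

# A minimal sketch of the two-way completeness check, with hypothetical
# identifiers standing in for real control objectives and controls.
objectives = {"CO1", "CO2", "CO3"}
controls   = {"C1", "C2", "C3", "C4"}

# The 'body of the table': each pair marks a tick at one intersection.
links = {("CO1", "C1"), ("CO1", "C2"), ("CO2", "C2"), ("CO2", "C3")}

unsatisfied_objectives = objectives - {o for o, c in links}   # empty rows
unjustified_controls   = controls   - {c for o, c in links}   # empty columns

print("Control objectives with no controls:", unsatisfied_objectives)   # {'CO3'}
print("Controls with no stated objective:  ", unjustified_controls)     # {'C4'}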

Turning now to the PRAGMATIC score:

P    R    A    G    M    A    T    I    C    Score
85   89   88   90   91   87   65   84   85   85%

That's a good score, let down just a bit on Timeliness since it will take a while to draw up the table and elaborate all the linkages to start with, and then to re-check them every time the metric is reported.  Furthermore, making changes in response to the metric will inevitably be a slow process, resulting in a substantial lag between measuring, reporting and responding to the metric.

By the way, a similar many-to-many relationship exists between control objectives and risks.  Conceptually, this adds a third dimension to the table, allowing us to trace information security risks to the corresponding control objectives and on to the related controls (or vice-versa).  Such multi-dimensional relationships are quite easily represented in a database but are harder to track, manage and measure manually.  
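As a rough illustration (hypothetical identifiers again), tracing a risk through its control objectives to the related controls might look something like this; in a relational database the same information would sit in two many-to-many link tables:

# A minimal sketch with hypothetical identifiers; in a relational database the
# same information would sit in two many-to-many link tables.
risk_to_objective    = {("R1", "CO1"), ("R1", "CO2"), ("R2", "CO2")}
objective_to_control = {("CO1", "C1"), ("CO1", "C2"), ("CO2", "C2"), ("CO2", "C3")}

def controls_for_risk(risk):
    """Trace a risk through its control objectives to the related controls."""
    objectives = {o for r, o in risk_to_objective if r == risk}
    return {c for o, c in objective_to_control if o in objectives}

print(controls_for_risk("R1"))   # {'C1', 'C2', 'C3'} (set order may vary)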

Thursday 13 September 2012

Themes from ISACA OceaniaCACS 2012


Having attended and spoken at ISACA's Oceania CACS conference in Wellington NZ over the past three days, I noticed a few themes coming up repeatedly.  This piece expresses my personal perspective but I must stress that I didn't attend every session (not least because of the three parallel tracks) or speak to every one of the 200-odd people present.  I'm sure other attendees would have their own opinions ...

"Risk" remains a core concern.  Compared to risk, there was less discussion around controls to mitigate risks, and almost nothing was said on risk avoidance, risk acceptance and risk transfer.  Even IT audit seemed less prominent as a seminar topic than in ISACA conferences I have attended previously.  However, despite our common interest, "risk" clearly has different meanings to different professionals at the conference, and no doubt to many of our business colleagues.  I'm sure there were many misunderstandings as a result of subtly different interpretations and emphases - including my own of course.

Information security incidents involve both unstructured and structured data (e.g. spreadsheets and databases respectively).  Whereas databases tend to hold much larger amounts of data, computer users often have quite sensitive and valuable information on their desktops.  Databases tend to be secured (although the lack of patching and the complexities of securing large systems are often issues), while users tend not to take sufficient care to secure their systems and unstructured data.

As "compliance" slides gently into the background, "governance" is an issue on the ascendance.  People are thinking more deeply about the distinction between governance and management, and most accept the need for information security direction from senior management (e.g. through documented strategies).

Capability Maturity Models are popular, along with COBIT 5, RiskIT, ValIT, ISO27k and ISO38500, as ways to make sense of the complexities associated with information risk management, information security, governance and related matters.  Unfortunately, however, the models and frameworks are evidently being considered and adopted rather superficially by some: the subtleties and complexities behind the pretty diagrams aren't always appreciated.  I'm convinced that deeper analysis will generate better insight and more value from the models, but at least the basic structures and concepts are becoming commonplace.  It's a start.

Mobile technologies and social media are on unstoppable upward trajectories, despite the substantial risks (e.g. roughly half of the mobile apps tested were malware-infected, and there are lots of vulnerabilities associated with smartphones).  "Gen Z" young employees are not just comfortable with the associated technologies and practices; they are almost dependent on them and will insist on using them even if they have to use their own devices at work (whether BYOD or carrying multiple devices).  Some, at least, are blasé about their own privacy (perhaps as a result of naively believing that they are only disclosing private stuff to their friends and families, and that those people are trustworthy), raising concerns about how they will treat the personal information in their care at work.

Cloud computing is another unstoppable trend.  There wasn't much discussion about the specific risk and security issues arising from cloud computing, however: several speakers expressed the opinion that it was 'just outsourcing', betraying a naive understanding of the field.  One speaker identified that cloud computing suffers the same security risks as more traditional forms, plus a load more that are slowly being appreciated: some are hidden and will only become issues in a few years when the early adopters of cloud computing start trying to extract themselves from their contracts.

Research into various security and privacy breaches has identified some surprising findings, with implications for the ways we perhaps ought to be addressing the risks.  For example, the possibility of being detected and suffering personal consequences is a deterrent: organizations that patently don't take much notice of their security logs, alarms and alerts, or that fail to do anything much about the incidents they do detect, are in effect training their employees to ignore the rules.  The possibility of adverse consequences for the organization is of less concern to individuals than the direct threat of being disciplined, sacked or prosecuted.  So much for employee loyalty.

Unsurprisingly, I spotted numerous references to security awareness in various contexts, and was particularly pleased that several people mentioned the need to raise awareness at senior management level using language that suits the audience - in other words, expressing information risk, security, compliance and governance issues in business rather than technology terms.  I was surprised to find that a few attendees still appear to be myopically focused on IT or technical security, and several referred to training and awareness interchangeably.  On the other hand, I was fascinated to hear that some infosec professionals are making the effort to express information security issues to their colleagues using terms such as safety, trust, resilience, protection, agility, efficiency, compliance, comfort etc. rather than banging on about confidentiality, integrity and availability.

Information security metrics came up in several places too, besides my own presentation.  Something that really caught my imagination was the idea that creative risk analyses should identify the 'early warning signs' of impending incidents, as well as identifying, characterising, scoring and ranking the risks.  Risk analyses and related processes normally lead, in the main, to a list of mitigating controls, but I am intrigued by the possibility of identifying predictive metrics and leading indicators suggesting that perhaps things aren't quite going to plan.  For instance, the risks relating to malware are usually addressed through antivirus software and firewalls, plus resilience and recovery measures such as patching, incident management and backups.  But what about the detective controls - the indications that malware activity is on the rise, that unusual types of network traffic are occurring, and so forth?  Major incidents seldom happen totally out of the blue: they are usually preceded by little tell-tale signs that something is going on, such as probing and enumeration on the network before a hack, minor frauds before a biggie, or a catalog of minor issues with the power before a black-out.  If we are lucky, someone notices the signs in time to do something positive to forestall or prevent a crisis, but 'being lucky' is not a sound strategy!  Developing metrics and instrumenting risk-laden processes, networks and systems, and even people, accordingly represents a more proactive and sensible approach.
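To make that concrete, here is a minimal sketch of one possible leading indicator, assuming a purely hypothetical daily count of malware detections; the window and threshold values are arbitrary and would need tuning:

# A minimal sketch of one possible leading indicator, using a hypothetical
# daily count of malware detections; window and threshold are arbitrary.

def rising_trend(daily_counts, window=7, threshold=1.5):
    """Flag when the recent average is well above the preceding baseline."""
    if len(daily_counts) < 2 * window:
        return False                       # not enough history yet
    recent   = sum(daily_counts[-window:]) / window
    baseline = sum(daily_counts[-2 * window:-window]) / window
    return baseline > 0 and recent / baseline >= threshold

detections = [3, 2, 4, 3, 2, 3, 3, 5, 7, 9, 8, 11, 12, 14]   # last week ramping up
print(rising_trend(detections))   # True - worth investigating before it becomes a crisis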

Aside from the seminars, the social side of the conference was excellent.  It was a fantastic opportunity to meet and chat with peers from the Pacific area, particularly New Zealand and Australia plus some from the US and South America.  

Monday 10 September 2012

SMotW #23: business continuity maturity

Security Metric of the Week #23: Business Continuity Management (BCM) Maturity

The high PRAGMATIC score for this week's metric shows that we consider it a valuable measure of an organization's business continuity management practices:

P    R    A    G    M    A    T    I    C    Score
90   95   70   80   90   85   90   87   90   86%

This metric is designed along exactly the same lines as the HR security maturity metric (SMotW #15), using a maturity scoring table with predefined criteria indicating increasing levels of maturity for various aspects of business continuity management.

We are not going to give you the entire maturity scoring table now (you will have to continue waiting patiently for the book, I'm afraid) but here are two rows demonstrating the approach:

Column headings: No business continuity management | Basic business continuity management | Good business continuity management | Excellent business continuity management

Example row 1 (business continuity policy):
- None: Nothing even vaguely approximating a policy towards business continuity
- Basic: Something vaguely approximating a policy towards business continuity, though not very well documented, hard to locate and probably out of date
- Good: A clear strategy towards business continuity, supported by a firm policy owned and authorized by management and actively maintained
- Excellent: A coherent and comprehensive business continuity strategy, supported by suitable policies, procedures, guidelines and practices; strong coordination with other relevant parties

Example row 2 (business continuity requirements):
- None: Business continuity requirements completely unknown
- Basic: Major business continuity requirements identified, but typically just those mandated on the organization by law; limited documentation
- Good: Business impact analysis used systematically from time to time to identify, characterize and document business continuity requirements, both internal and external
- Excellent: Business continuity requirements thoroughly analyzed, documented and constantly maintained through business impact analysis, compliance assessments, business analysis, disaster analysis etc.


The table's four columns correspond to maturity scores of 0%, 33%, 67% and 100% respectively.  Each row in the table considers a different aspect or element of the measured area, in this case business continuity management, laying out four markers or sets of criteria for the four scores.   

If your management decides to adopt security maturity metrics like this, you could either take the scoring tables directly from the book (when available!), or use them as a starting point for customization.  Adapt them according to your experience in each area, integrating good practices recommended by various standards such as ISO27k and NIST's SP800-series, and organizations such as ISACA and the Business Continuity Institute.  Adjust the wording of the criteria to be more objective if you wish.  Include specific criteria or conditions.  Reference your policies, legal and regulatory obligations, whatever.

You may for instance feel that certain aspects of business continuity management are far more important than others, in which case you could weight the scores from each row accordingly ... but doing so would further complicate the scoring process and might lead to interminable discussions about the weightings, rather than about the organization's business continuity management maturity.  

Similarly, you may prefer more or fewer columns, giving you more or less granularity in the criteria.  Knock yourself out.

The percentage scoring scale lets us score things "towards the lower edge of the category" if appropriate, and to fine-tune the scores to represent a range of situations (e.g. if two businesses, departments or business units both qualify for the 3rd column on a certain criterion but one is a bit stronger than the other, its score might be a few percent higher than the other).  
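Pulling those ideas together, the overall metric might be computed along these lines (a minimal sketch with hypothetical row scores and optional weights):

# A minimal sketch: each row of the maturity table yields a percentage score;
# the metric is their (optionally weighted) mean.  Figures are hypothetical.

def maturity_score(row_scores, weights=None):
    """Weighted mean of per-row maturity scores, each in the range 0-100."""
    if weights is None:
        weights = [1] * len(row_scores)
    return sum(s * w for s, w in zip(row_scores, weights)) / sum(weights)

# e.g. policy row assessed just above 'good' (70%), requirements row at 'basic' (33%)
print(f"{maturity_score([70, 33]):.0f}%")                  # 52% (unweighted)
print(f"{maturity_score([70, 33], weights=[2, 1]):.0f}%")  # 58% (policy weighted double)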

The flexible design of this style of metric, coupled with its high PRAGMATIC score, is why we find it so useful in practice.  It is a particularly good way of  measuring relatively subjective matters in a relatively objective and repeatable manner.

Saturday 8 September 2012

The limits of "plain English security" policies

Being naturally optimistic (or 'realistic' as I put it), I generally look on the bright side of life - cue Monty Python.  Where appropriate I'm happy to cut a few corners in the interests of saving time and effort, believing that on the whole things will work out just fine.  

However, 'where appropriate' is an important caveat since, paradoxically, I'm also a perfectionist by nature, which means not cutting corners but doing things properly.  

Yes indeed, there is conflict lurking deep in my psyche.

Anyway, today this issue came to mind while reading the opinion accompanying a judgment on a legal case involving the (alleged) appropriation by a departing employee of his soon-to-be-former employer's proprietary information.  Please pore over the case notes for the full story and don't take anything I say as gospel, but for now suffice to say that the appeals court confirmed that there was no case to answer under the US Computer Fraud and Abuse Act (CFAA).  The facts underlying the case do not appear (to my legally-untrained eye) to be in dispute: the departing employee evidently did access proprietary information from his former employer and pass it to his new employer.  The central legal argument relates to the question of whether he had or had not been authorized to access the information at that point.  

The former employer alleged that the employee broke the terms of its security policies, and as such was not authorized and hence breached the CFAA.  The relevant parts of the CFAA are summed up in the opinion piece thus: "Among other things, the CFAA renders liable a person who (1) "intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains . . . information from any protected computer," in violation of § 1030(a)(2)(C); (2) "knowingly and with intent to defraud, accesses a protected computer without authorization, or exceeds authorized access, and by means of such conduct furthers the intended fraud and obtains anything of value," in violation of § 1030(a)(4); or (3) "intentionally accesses a protected computer without authorization, and as a result of such conduct, recklessly causes damage[,] or . . . causes damage and loss," in violation of § 1030(a)(5)(B)-(C)."

Later, the opinion notes that "To protect its confidential information and trade secrets, [the former employer] instituted policies that prohibited using the information without authorization or downloading it to a personal computer. These policies did not restrict [the soon-to-be-former employee's] authorization to access the information, however."  The remainder of the opinion, and the ultimate judgement, largely revolves around the precise (not to say arcane) legal definitions relating to the question of exactly what constitutes authorization.

In plain English, while the company believed the policy meant Miller did not have the authority to access the information, the fact that he was able to do so meant that, in practice, he was authorized.  Arguably he should not have accessed it, but he could - and indeed did - do so.  And therein lies the rub.

The judges quote and give weight to common English language (dictionary) definitions of certain terms used in the CFAA, determining that ""access" means "[t]o obtain, acquire," or "[t]o gain admission to."  Oxford English Dictionary (3d ed. 2011; online version 2012). Moreover, per the CFAA, a "computer" is a high-speed processing device "and includes any data storage facility or communications facility directly related to or operating in conjunction with such device." § 1030(e)(1). A computer becomes a "protected computer" when it "is used in or affecting interstate or foreign commerce." § 1030(e)(2)."

I can only guess why the "3d ed. 2011; online version 2012" (whatever that means!) Oxford English Dictionary, specifically, is given such credibility by the court: presumably it has become accepted practice in the courts and legal profession to refer to a particular edition of the OED as a definitive source, and I suppose it suits the wider community's interests to agree on a single reference even if, perhaps, that agreement is not, itself, enshrined in law.  There is of course an argument that it doesn't particularly matter which specific source is the reference, just so long as everyone accepts it.  The fact that there are a vast number of other documented and potentially just as 'definitive' definitions for those terms is, it seems, irrelevant, as is the fact that language is constantly evolving, hence there is a distinct possibility that later editions of the OED will redefine the terms.

I rather suspect that the lawyers would love to argue incessantly about definitions, on their clients' shilling of course, while the clients, the judges and the Ordinary Man would rather they just Got On With It. 

The real point of my diatribe is that words matter.  A lot.  Definitions and meanings are important - especially if something ends up before the courts, which is not uncommon in respect of disputes arising from corporate policies and procedures.  And if a case goes to appeal, the stakes are raised another notch.

If the former employer's policies had explicitly defined the terms and words they used (for example, referring to such-and-such an edition of whatever dictionary), there is a distinct possibility that their definitions would have been given more weight, although they would still not have been able to override the court's interpretation of the relevant statutes if there was conflict.  I idly wonder whether the company publishing and maintaining an information security glossary might have affected the outcome of this case ... but then I idly wonder whether I might have prospered or had a breakdown if I had studied law at college instead of genetics!

Oversight - a novel security awareness topic


September’s  security awareness module has a split personality, covering oversight in both senses of the word:
1) Casual errors and omissions are commonplace: these are oversight incidents.  Whereas most are trivial, some oversights are more serious and costly.  The worst can literally be deadly - as suggested by the poster graphic above (one of six new designs in the module).
2) Supervision and various forms of reviewing and testing (such as the checklist shown on one of this month’s awareness posters) - ‘keeping an eye on things’ - are oversight controls.  The aim is to prevent, or at least spot and correct, errors and omissions before the damage is done.
Security awareness is our passion - it’s what we do.

Monday 3 September 2012

SMotW #22: IRR

Security Metric of the Week #22: Internal Rate of Return

IRR is one of a number of financial metrics in our collection.  IRR measures the projected profitability of an investment, a proposed security implementation project for example.  If the IRR is greater than the organization's cost of capital, the project may be worth pursuing (unless there are limited funds available, and other proposals with even higher IRR or intangible benefits).
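For readers who, like us, are not financiers by training, here is a minimal sketch of the calculation: IRR is the discount rate at which the net present value (NPV) of the projected cash flows falls to zero.  The cash flows below are entirely hypothetical:

# A minimal sketch: IRR is the discount rate at which the net present value of
# the projected cash flows reaches zero.  The cash flows below are hypothetical
# (year 0 outlay, then annual net benefits).

def npv(rate, cash_flows):
    """Net present value of annual cash flows at the given discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def irr(cash_flows, low=-0.99, high=10.0, tolerance=1e-6):
    """Find the rate where NPV crosses zero, by simple bisection."""
    while high - low > tolerance:
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid          # NPV still positive: the rate can go higher
        else:
            high = mid
    return (low + high) / 2

project = [-100_000, 30_000, 40_000, 45_000, 30_000]   # proposed security project
print(f"IRR = {irr(project):.1%}")   # roughly 16.5%; compare it with the cost of capital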

Comparing IRR against other financial metrics is tricky.  For starters, we are not accountants, economists or financiers by training, and this stuff is hard!  Furthermore, different circumstances and different types of investment call for different metrics ... but arguably the most important factor is that organizations tend to rely on certain financial metrics to assess and monitor most of their projects.  Regardless of any technical arguments for or against using IRR as a metric, if management routinely uses it, there is undoubtedly going to be pressure on security projects to follow suit.

Being PRAGMATIC about it:

P    R    A    G    M    A    T    I    C    Score
70   72   25   30   82   50   44   60   88   58%

Notice the 88% score for Cost: if IRR is going to be required anyway for investment appraisal, the marginal cost of using it as a security metric is almost nil.  Finance probably has the requisite models/spreadsheets and expertise to calculate IRR for all proposed projects on an even footing ... but someone still has to provide the input parameters, so it is not totally free.

The low ratings for Accuracy and Genuineness reflect the underlying fact that virtually all investments are inherently uncertain.  The metric depends on projections and estimations, and they in turn are influenced by the assumptions of whoever provides the raw data.  Strong optimists and pessimists are likely to make unrealistic claims about the costs and benefits, and may not even appreciate their own bias (we all secretly believe we know best because we are the realists!).  'Calibrating' the people making the projections may help, and this tends to happen naturally with experience - in other words, IRR accuracy probably correlates with the number of years of experience at calculating investment returns.  Another way to improve the accuracy is to persuade several competent and interested people to provide the requisite numbers for the factors used to calculate IRR.  If their estimations cluster closely around the same values (i.e. low deviation from the mean, low variance), the numbers have more credibility than if they provide wildly differing estimates: exploring the reasons for any differences (for example, different assumptions or factors) can generate further insight and value from the metric, perhaps suggesting the need to control those factors more closely.
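As a rough illustration of that clustering idea, the spread of several people's independent IRR estimates can be summarized very simply (the figures are entirely hypothetical):

# A minimal sketch of the 'clustering' idea: summarize the spread of several
# people's independent IRR estimates (figures entirely hypothetical).
from statistics import mean, stdev

estimates = {"analyst A": 0.14, "analyst B": 0.18, "analyst C": 0.15, "analyst D": 0.31}
values = list(estimates.values())

print(f"Mean IRR estimate: {mean(values):.1%}")    # 19.5%
print(f"Standard deviation: {stdev(values):.1%}")  # about 8% - a wide spread
# Analyst D's outlying 31% suggests different assumptions worth exploring
# before anyone relies on the metric.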