
Thursday 6 February 2014

SMotW #91: incident management maturity

Security Metric of the Week #91: information security incident management maturity


Notwithstanding the photo, we're using 'maturity' here in the sense of wisdom, stability and advanced development, rather than sheer age! The idea behind maturity metrics is to assess the organization against the current state of the art, also known as good practice or best practice.

This particular metric measures the organization's processes for managing (identifying, reporting, assessing, responding to, resolving and learning from) information security incidents. 

That's all very well in theory, but how do we actually identify good/best practices, and then how do we measure against them?

The maturity metrics described in PRAGMATIC Security Metrics employ a method that I developed and used very successfully over three decades in information security and IT audit roles. The scoring process breaks down the area under review into a series of activities and offers guidance notes or criteria for bad, mediocre, good and best practices in each of those activities, based on an appreciation of the related risks and control practices gained from experience and research. The scoring tables contain a distillation of knowledge in a form that gives reasonably objective guidance for the assessment, without being overly restrictive.

The approach is flexible since the table is readily updated as new practices and issues emerge (including good and not-so-good practices discovered in the course of my audits, assessments and consultancy work across hundreds of organizations and business units, plus advice gleaned from standards, advisories, textbooks, vendors, blogs and so forth), either by amending the wording of the existing rows in the scoring table or by adding new rows. Furthermore, the assessor has some latitude at run-time (during the assessment) to read between the lines, applying his/her expertise and knowledge in determining how well the organization is really doing against each of the criteria.

The metric deliberately and consciously blends objectivity with subjectivity through a measurement process that turns out to be surprisingly useful, informative and repeatable in practice.
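To make the approach a little more concrete, here is a minimal sketch of how such a scoring table might be represented and applied. The activities, criteria wording and scores are invented for illustration only; they are not the tables from the book.

```python
# Minimal sketch of a maturity scoring table for incident management,
# loosely following the approach described above. The activities, the
# criteria wording and the scores are hypothetical, not the book's tables.

SCORING_TABLE = {
    # activity -> {score: guidance criterion for that level}
    "Incident identification": {
        0:   "Incidents go unnoticed or unreported",
        33:  "Incidents reported ad hoc, with no defined channel",
        67:  "Defined reporting channel; most incidents captured",
        100: "Proactive detection plus routine reporting and triage",
    },
    "Incident response": {
        0:   "No response process; firefighting only",
        33:  "Informal response by whoever happens to be available",
        67:  "Documented procedures and trained responders",
        100: "Tested plans, defined roles, post-incident reviews",
    },
    "Learning from incidents": {
        0:   "No follow-up once incidents are closed",
        33:  "Occasional informal lessons-learned discussions",
        67:  "Root-cause analysis for significant incidents",
        100: "Systematic trend analysis feeding back into the controls",
    },
}

def maturity_score(assessments: dict) -> float:
    """Average the per-activity scores chosen by the assessor."""
    return sum(assessments.values()) / len(assessments)

# The assessor picks the nearest criterion for each activity, reading
# between the lines where the organization sits between two rows.
example = {
    "Incident identification": 67,
    "Incident response": 33,
    "Learning from incidents": 33,
}
print(f"Incident management maturity: {maturity_score(example):.0f}%")  # ~44%
```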

The maturity metrics scoring tables given in the book are illustrations or examples to demonstrate the approach and get you started, but it's up to you to take them forward, adapting and developing them henceforth. The scoring tables, and hence the metrics, are themselves intended to continue evolving and maturing over time. 

ACME gave this metric an overall PRAGMATIC score of 86%, putting it firmly in contention as our "security metric of the quarter" ...

The next post on the Security Metametrics blog will list the quarter's metrics in order of their PRAGMATIC scores.

Thursday 30 January 2014

SMotW #90: % of business units with proven I&A

Security Metric of the Week #90: proportion of business units using proven identification and authentication mechanisms

This metric hinges on the meaning of "proven". Proof is a relative term. What level of proof is appropriate? It's a matter of assurance, trust and risk.

ACME managers implicitly assumed* that the metric would be self-measured and reported by business units. Given a central mandate from HQ to implement specific controls, business units are obviously under pressure to confirm that the required controls are in place ... even if they actually are not. Aside from the risk of business units simply reporting whatever HQ expects to hear, there is also a distinct possibility that the business units might have misunderstood the requirement, and failed to implement the control effectively (perhaps mis-configuring their security systems).

That brings us to the matter of the nature and extent of control implementation. If a business unit has the required identification and authentication (I&A) mechanism in place for some but not all of their systems, how should they report this? What if they have made a genuine effort to implement it on most systems, but the few that remain are particularly important ones? What if the identification part is working as per the spec but the authentication isn't, perhaps using a different mechanism for valid business or technical reasons? There are several variables here, making it tough to answer honestly a typically naive checklist question such as "Are your IT systems using the proven I&A mechanisms required in the corporate security standards (Y/N)?"

On that basis, the managers gave this metric a PRAGMATIC score of just 44%, held back by abysmal ratings for Genuineness and Independence (see page 207 in PRAGMATIC Security Metrics). 

The metric is not necessarily dead in the water, though, since it would be possible to address their main concerns through some form of independent assessment and reporting of the I&A mechanisms. Certifying IT systems is something rarely seen outside large military and governmental organizations, which have the governance structures in place to:
  1. Define security requirements including technical controls such as specified I&A mechanisms, methods, software etc.;
  2. Mandate those requirements on the various business units;
  3. Implement the controls locally, often with central support (e.g. technical support plus standards, procedures and guidelines);
  4. Accredit certification functions who are competent to test and certify business units' compliance with the security requirements;
  5. Test and certify the business units, and re-test and re-certify them periodically;
  6. Deal with any noncompliance.
That little lot would generally be viewed as an expensive luxury for most organizations (impacting the metric's Cost-effectiveness rating), although the global spread of ISO/IEC 27001 certification is gradually assembling most of those pieces, and making more organizations familiar with the concept of accredited certification.

Meanwhile, ACME quietly parked this metric in the "too hard for now" bucket, pressing ahead with the higher-scoring metrics still on their shortlist.

* PS Unless someone present happens to notice and point out assumptions like this, they tend to remain unspoken, and are a frequent cause of misunderstandings. At some stage (perhaps after a PRAGMATIC workshop has shortlisted a reasonably small number of metrics thought worth pursuing), the metrics ought to be specified in sufficient detail to dispel such doubts. Several security metrics standards and websites give examples of the forms typically used to specify metrics, although most appear obsessed with the statistics, often neglecting valuable information such as the reasoning behind and justification for the metrics, the intended audiences and so forth. I'm sure "How should we specify security metrics?" would spawn an interesting thread on the Security Metametrics group on LinkedIn ...

Thursday 23 January 2014

SMotW #89: number of infosec events

Security Metric of the Week #89: number of information security events, incidents and disasters


This week, for a change, we're borrowing an analytical technique from the field of quality assurance called "N why's" where N is roughly 5 or more.

Problem statement: for some uncertain reason, someone has proposed that ACME might count and report the number of information security events, incidents and disasters.
  1. Why would ACME want to count their information security events, incidents and disasters?
  2. 'To know how many there have been' is the facile answer, but why would anyone want to know that?
  3. Well, of course they represent failures of the information risk management process. Some are control failures, others arise from unanticipated risks materializing, implying failures in the risk assessment/risk analysis processes. Why did the controls or risk management process fail?
  4. Root cause analysis reveals many reasons, usually, even though a specific causative factor may be identified as the main culprit. Why didn't the related controls and processes compensate for the failure?
  5. We're starting to get somewhere interesting by this point. Some of the specific issues that led to a given situation will be unique, but often there are common factors, things that crop up repeatedly. Why do the same factors recur so often?
  6. The same things keep coming up because we are not solving or fixing them permanently. Why don't we fix them?
  7. Because they are too hard, or because we're not trying hard enough! In other words, counting infosec events, incidents and disasters would help ACME address its long-standing issues in that space.
There's nothing special about that particular sequence of why's or the questions themselves (asking 'Who?', 'When?', 'How?' and 'What for?' can be just as illuminating); it's just the main track my mind followed on one occasion. For instance, at point 5, I might equally have asked myself "Why are some factors unique?". At point 3, I might have thought that counting infosec incidents would give us a gauge for the size or scale of ACME's infosec issues, begging the question "Why does the size or scale of the infosec issues matter?". N why's is a creative technique for exploring the problem space, digging beneath the superficial level.

The Toyota Production System uses techniques like this to get to the bottom of issues in the factory. The idea is to stabilize and control the process to such an extent that virtually nothing disturbs the smooth flow of the production line or the quality of the final products. It may be easy for someone to spot an issue with a car and correct it on the spot, but it's better if the causes of the issue are identified and corrected so it does not recur, and better still if it never becomes an issue at all. Systematically applying this mode of thinking to information security goes way beyond what most organizations do at present. When a virus infection occurs, our first priority is to contain and eradicate the virus: how often do we even try figuring out how the virus got in, let alone truly exploring and addressing the seemingly never-ending raft of causative and related factors that led to the breach? Mostly, we don't have the luxury of time to dig deeper because we are already dealing with other incidents.

Looking objectively at the specific metric as originally proposed, ACME managers gave it a PRAGMATIC score of 49%, effectively rejecting it from their shortlist ... but this one definitely has potential. Can PRAGMATIC be used to improve the metric? Obviously, increasing the individual PRAGMATIC ratings will increase the overall PRAGMATIC score since it is simply the mean rating. So, let's look at those ratings (flick to page 223 in the book).
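As a minimal worked example of that calculation: the ratings below are invented purely for illustration (ACME's actual figures are on page 223 of the book), chosen only so that Actionability is zero and the mean works out at the 49% quoted above. The criterion names follow the PRAGMATIC acronym.

```python
# Minimal sketch: the overall PRAGMATIC score is simply the arithmetic
# mean of the nine criterion ratings. These ratings are hypothetical,
# chosen so that Actionability is zero and the mean comes out at 49%.

ratings = {
    "Predictiveness":     60,
    "Relevance":          55,
    "Actionability":       0,
    "Genuineness":        50,
    "Meaningfulness":     65,
    "Accuracy":           45,
    "Timeliness":         40,
    "Independence":       70,
    "Cost-effectiveness": 56,
}

pragmatic_score = sum(ratings.values()) / len(ratings)
print(f"PRAGMATIC score: {pragmatic_score:.0f}%")   # -> 49%
```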

In this case, the zero rating for Actionability stands out a mile. Management evidently felt totally powerless, frustrated and unable to deal with the pure incident count. The number in isolation was almost meaningless to them, and even plotting the metric over time (as shown on the example graph above) would not help much. Can we improve the metric to make their job easier?

As indicated at item 7 above, this metric could help by pointing out how many information security events, incidents and disasters link back to systematic failures that need to be addressed. Admittedly, the bare incident count itself would not give management the information needed to get to that level of analysis, but it's not hard to adapt and extend the metric along those lines, for instance categorizing incidents by size/scale and nature/type, as well as by the primary and perhaps secondary causative factors, or the things that might have prevented them occurring.

A pragmatic approach would be to start assigning incidents to fairly crude or general categories, and in fact this is almost universally done by the Help Desk-type functions that normally receive and log incident reports - therefore the additional information is probably already available from the Help Desk ticketing system. Management noting a preponderance of, say, malware incidents, or an adverse trend in the rate of incidents stemming from user errors, would be the trigger to find out what's going wrong in those areas. Over time, the metric could become more sophisticated with more detailed categorization etc.
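By way of illustration, here is a minimal sketch of that kind of categorized incident count, assuming the Help Desk export provides a month and a category per ticket. The field names and categories are made up; use whatever your ticketing system actually records.

```python
# Minimal sketch: counting incidents per category per month from a Help
# Desk ticketing export. Field names and categories are hypothetical.

from collections import Counter

tickets = [
    {"month": "2014-01", "category": "malware"},
    {"month": "2014-01", "category": "user error"},
    {"month": "2014-01", "category": "malware"},
    {"month": "2014-02", "category": "unauthorized access"},
    {"month": "2014-02", "category": "malware"},
]

counts = Counter((t["month"], t["category"]) for t in tickets)

for (month, category), n in sorted(counts.items()):
    print(f"{month}  {category:<20} {n}")

# A preponderance of, say, malware incidents, or a rising trend in user
# errors, is the cue for management to dig into those specific areas.
```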

Thursday 16 January 2014

SMotW #88: security ascendancy

Security Metric of the Week #88: information security ascendancy level


One of the most frequent complaints from information security professionals is that they don't get sufficient management support. They say that management doesn't take information security seriously enough, relative to other corporate functions. But are they right to complain, or are they just whining?

There are several possible metrics in this space, for example:
  • Survey management attitudes towards information security, relative to other concerns;
  • Compare the information security budget (revenue and capital charges) against other functions;
  • Assess the maturity of the organization's governance of information security;
  • Measure the level of the most senior manager responsible for information security ("security ascendancy").
The last of these is the simplest and easiest to measure. On the organogram above, the organization presumably scores 2 since it has a Chief Information Security Officer who reports directly to the Chief Executive Officer, the most senior manager in the firm. However, if the CEO takes a personal and direct interest in information security, the score might reach 1 (perhaps depending on whether information security is formally acknowledged as part of the CEO's role in his role description).

The power and influence of the function across the organization decreases with each additional layer of management between it and the CEO. If it is down at level 4 or 5, buried out of sight in the depths of IT (as is often the way), its influence is largely constrained to IT, meaning that it is essentially an IT security rather than information security function. However, since IT typically pervades the business, that is not necessarily the end of the world: with competent and dedicated professionals on board, the Information Security function can still build a strong social network, prove its worth, and influence colleagues by informing and persuading them rather than using positional power. Sure it's hard work, but it's possible.
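For what it's worth, the measurement itself is almost trivial. Here is a minimal sketch, assuming the reporting structure is expressed as a simple who-reports-to-whom mapping; the structure shown is hypothetical.

```python
# Minimal sketch: "security ascendancy" as the level of the most senior
# manager responsible for information security, with the CEO at level 1.
# The reporting structure below is hypothetical.

reports_to = {
    "CISO": "CIO",
    "CIO": "CEO",
    # the CEO has no entry: top of the tree
}

def ascendancy_level(role: str) -> int:
    """Depth of the role in the reporting hierarchy (CEO = 1, direct report = 2, ...)."""
    level = 1
    while role in reports_to:
        role = reports_to[role]
        level += 1
    return level

print(ascendancy_level("CISO"))  # -> 3 here; 2 if the CISO reported straight to the CEO
```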

ACME scored this metric highly at 85% on the PRAGMATIC scale (see the book for the detailed score breakdown). It was welcomed as a strategic metric that directly supported ACME's strategy to improve the organization's focus on information security, one that had value in the short to medium term (i.e. not necessarily a permanent security metric).

Wednesday 8 January 2014

SMotW #87: visitor/employee parking separation

Security Metric of the Week #87: distance separating employee from visitor parking


Imagine your corporate security standards require that "Employee parking spaces must be physically distant from visitor parking spaces, separated by at least 100 paces". The rule might have been introduced in order to reduce risks such as employees covertly passing information to visitors between vehicles, or terrorists triggering vehicle bombs in the vicinity of key employees, or for some other reason (to be honest, we're not exactly sure of the basis - a common situation with big corporations and their thick rulebooks: the rationale often gets lost or forgotten in the mists of time). Imagine also that senior management has determined that the security standards are important, hence compliance with the standards must be measured and reported across the corporation. Forthwith!

Now picture yourself in the metrics workshop where someone proposes this very metric. They painstakingly point out the specific rule in the rulebook, noting that the distance between employee and visitor parking is something that can be measured easily on the site plans, or paced out in the parking lot. As far as they are concerned, this metric fits the bill. It is cheap, elegant even, hard to fake and easily verified. "If HQ wants compliance metrics, compliance metrics is what they'll jolly well get!"

It soon becomes abundantly clear that the proposer has ulterior motives. Rather than proactively supporting HQ, his cunning plan is to undermine the effort through passive resistance. A metric that technically fulfills the requirement while providing no useful information would be perfect!

As the group tries ever harder to dismiss the metric, so the proposer digs in deeper until he is fully entrenched. By this stage, it is definitely "his" metric: he takes any hint of criticism personally, and seemingly has an answer for everything. Tempers fray as the heat exceeds the light output from the discussion.

PRAGMATIC to the rescue! In an attempt to defuse the situation, someone suggests working through the method and scoring the metric as a team effort. Dispassionately considering the PRAGMATIC criteria one by one, and allowing for the metric's plus points, leads to a final score of just 41% ... and a big thumbs-down for this metric.

Friday 3 January 2014

SMotW #86: info asset inventory integrity

Security Metric of the Week #86: integrity of the information asset inventory

As a general rule, if you are supposed to be securing or protecting something, it's quite useful to know at least roughly what that 'something' is ...

Compiling a decent list, inventory or database of information assets turns out to be quite a lot harder than one might think.  Most organizations made a stab at this for Y2K, but enormous though it was, that effort was very much focused on IT systems and, to some extent, computer data, while other forms of information (such as "knowledge") were largely ignored. 

Did your organization even maintain its Y2K database? Hardly any did.

If we were able to assess, measure and report the completeness, accuracy and currency of the information asset inventory, we could provide some assurance that the inventory was being well managed and maintained - or at least that the figures were headed the right way.


How would one actually generate the measurements? One way would be to validate a sample of records in the inventory against the corresponding assets, or vice versa (perhaps both).  A cunning plan to validate, say, the next 10% of the entries in the inventory every month would mean that the entire data set would be validated every year or so (allowing for changes during the year, including perhaps the introduction of additional categories of information asset that were not originally included). 
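A minimal sketch of that rolling sampling plan is shown below; the record identifiers and the 10% fraction are illustrative assumptions, not a prescription.

```python
# Minimal sketch: a rolling plan that validates roughly 10% of the
# information asset inventory each month, so the whole inventory is
# covered in about a year. The record IDs are hypothetical.

def monthly_sample(inventory_ids, month_index, fraction=0.10):
    """Return the slice of inventory records due for validation this month."""
    ordered = sorted(inventory_ids)                  # stable ordering
    batch = max(1, int(len(ordered) * fraction))
    start = (month_index * batch) % len(ordered)
    return ordered[start:start + batch]

inventory = [f"ASSET-{n:04d}" for n in range(1, 101)]   # 100 records
print(monthly_sample(inventory, month_index=0))   # first ten records
print(monthly_sample(inventory, month_index=1))   # the next ten, and so on

# The metric itself might then be the percentage of sampled records found
# to be complete, accurate and current.
```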

P     R     A     G     M     A     T     I     C     Score
82    66    83    78    80    43    50    66    70    69%

ACME management were quite interested in this metric, if a little concerned about the Accuracy, Timeliness and Integrity of the metric (ironic really!). Having calculated the metric's PRAGMATIC score, they decided to put this one on the pending pile to revisit later.

The CISO was more confident than his peers that his people would compile the metric properly, and he toyed with the idea of either using the metric for his own purposes, or perhaps proposing a compromise: Internal Audit might be commissioned to sample and test the inventory on a totally independent basis, comparing their findings against those from Information Security to prove whether Information Security could be trusted to report this and indeed other security metrics.

Wednesday 25 December 2013

SMotW #85: controls consistency

Security Metric of the Week #85: consistency of information security controls



This metric implies that someone is concerned about security controls being inconsistent, but what does that mean - inconsistent in what regard? Possible types of inconsistencies include:

  • Controls do not sufficiently mitigate the risks, address the wrong risks, or are in some way inappropriately designed/specified;
  • Expected or standardized controls (e.g. controls mandated in law) not implemented in all relevant places;
  • Controls not implemented to the same degree or extent, or in the same way, in all relevant places;
  • Controls that vary over time (e.g. security procedures ignored in busy periods);
  • Controls not operated or managed in the same way in all relevant places;
  • Others.

ACME's senior managers did not rate this metric highly, being concerned about its Accuracy, Timeliness, Independence/integrity and Cost-effectiveness:


P     R     A     G     M     A     T     I     C     Score
78    83    67    60    71    33    27    31    27    53%

However, from the perspectives of the CISO or ISM, the metric was more PRAGMATIC:

P     R     A     G     M     A     T     I     C     Score
85    90    76    60    90    50    46    100   75    75%

They could see themselves using this metric to drive up consistency of security controls in whatever respects they chose to measure ... although exactly how they would measure consistency was not entirely self-evident: they were thinking initially about using and perhaps extending their routine compliance checks against ACME's baseline security standards.

Notice the distinctly different ratings for Independence/integrity given in these two PRAGMATIC assessments. In the former, senior management were concerned that if they started using the metric to pressure Information Security and various business units to improve their information security, things might deteriorate to arguments over the measurements rather than productive discussion around making necessary improvements. They also weren't entirely convinced that the metric would be a trustworthy guide to controls consistency. In contrast, the CISO and ISM envisaged measuring the metric themselves for their own purposes in connection with continuously improving ACME's ISO27k Information Security Management System, with little need for discussion or argument with those being measured. In fact, the metric might not even need to be reported or circulated beyond the infosec office.

This is a good illustration of why published lists of security metrics (including the 150 examples in our book!) are of dubious value except perhaps as creative inspiration. Despite what you might think, a security metric that works brilliantly for one organization may be mediocre or quite inappropriate for another, while one that is ideal for a particular purpose and a specific audience within a given organization may be a poor choice in other circumstances or for other audiences. This is precisely what makes the PRAGMATIC method shine: it offers a systematic, structured way to figure out and compare the merits of various possible security measures in a specific situation or context, something that was previously very difficult to achieve.

With that, we'd like to wish all our readers a brilliant Christmas: the next SMotW will appear here early in the new year, although we might perhaps blog about new year's metrics resolutions. Meanwhile, we hope Santa brings you all you desire, and doesn't get stuck in the chimney.

Merry Christmas from Gary & Krag. Have a good one.

Tuesday 17 December 2013

SMotW #84: % of security-certified systems

Security Metric of the Week #84: proportion of IT systems whose security has been certified compliant



Large organizations face the thorny problem of managing their information security consistently across the entire population of business units, locations, networks, systems etc. Achieving consistency of risk analysis and treatment is tricky for all sorts of reasons in practice: diverse and geographically dispersed business units, unique security challenges, cultural differences, political issues, cost and benefit issues, differing rates of change, and more.

Three common approaches are to: 
  1. Devolve security responsibilities as much as possible, basically leaving the distributed business units and teams to their own devices (which implies widespread distrust of each other's security arrangements between different parts of the organization);
  2. Centralize security as much as possible, possibly to the extent that remote security teams are mere puppets, with all the heavy lifting in security done via the network by a tight-knit centralized team (with the risk that the standard security settings might not, in fact, be appropriate in every case);
  3. Take a hybrid approach, often involving strong central guidance (security policies and standards mandated by HQ) but local security implementation/configuration and management (with some discretion over the details).
Some highly-organized organizations (military and governmental, mostly) take the hybrid approach a step further with strong compliance and enforcement actions driven from the center in an effort to ensure that those naughty business units out in the field are all playing the game by the rules. Testing and certifying compliance of IT systems against well-defined systems security standards, for instance, gives management greater assurance that system security is up to scratch - provided the testing is performed competently, which usually means someone checking and accrediting the teams who do the testing so that they are permitted to issue compliance certificates.

ACME Enterprises Inc may not be the very largest of imaginary corporations but it does have a few separate sites and lots of servers. With some concern about how consistently the servers were secured, ACME's managers agreed to take a PRAGMATIC look at this metric:

P     R     A     G     M     A     T     I     C     Score
72    79    73    89    68    32    22    89    88    68%

With most of the numbers hovering in the 70s and 80s, the two lowest ratings stand out. Their reasoning for the 32% rating for Accuracy was that certified compliance of a system to a technical security standard does not necessarily mean it is actually secure: ACME has had security incidents on certified compliant servers that met the standard but, for various reasons, turned out to have been inadequately secured after all.

On the other hand, it was seen as A Good Thing overall that more and more servers were both being made compliant and certified as such, hence management thought this metric had some potential as an organization-wide security indicator: they gave it 72% for Predictiveness since, in their opinion, there was a reasonably strong correlation between the proportion of servers having been certified compliant, and ACME's overall security status.

Let me repeat that: although certification is not a terribly reliable guide to the security of a given server, the certification process is driving server security in the right direction, hence the proportion of certified servers could be a worthwhile strategic-level security metric for ACME.  Interesting finding!
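Here is a minimal sketch of the headline calculation, using hypothetical server counts. Note how the single proportion lumps "failed certification" in with "not yet tested", a distinction that resurfaces later in the workshop discussion below.

```python
# Minimal sketch: proportion of servers certified compliant, with a
# breakdown by status. The statuses and counts are hypothetical. The
# headline figure hides the difference between servers that failed
# certification and servers that simply haven't been tested yet.

from collections import Counter

servers = (
    ["certified compliant"] * 140 +
    ["failed certification"] * 15 +
    ["not yet tested"] * 45
)

counts = Counter(servers)
total = len(servers)

for status, n in counts.items():
    print(f"{status:<22} {n:>4}  ({n / total:.0%})")

print(f"\nHeadline metric: {counts['certified compliant'] / total:.0%} of servers certified compliant")
```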

The rating of just 22% for Timeliness was justified on the basis that the certification process is slow: the certification tests take some time to complete, and the certification team has a backlog of work. The process and the metric therefore give a delayed picture of the state of security. Focusing management attention on the proportion of servers certified would undoubtedly have the side-effect of pressuring the team to certify more of the currently unchecked servers (perhaps increasing the possibility of the tests being shortcut, although the certification team leader was known to be a no-compromise, do-it-right-or-not-at-all kind of person), but there are ways to deal with that issue.

The metrics discussion headed off at a tangent at this point, as they realized that "Time taken to security-certify a server" might be another metric worth considering. Luckily, with many other security metrics on the table already, someone had the good sense to park that particular proposal for now, adding time-to-certify to the list of metrics to be PRAGMATICally assessed later, and they got back on track - well almost ...

One of the managers queried the central red stripe on the mock-up area graph on the table. The CISO admitted that the stripe represented the servers that had failed their certification testing, and so opened another can o' worms when the penny dropped that 'proportion of servers certified or not certified' is not the whole story here. As the temperature in the workshop room rapidly escalated, the arrival of lunch and the temporary departure of several managers to catch up with their emails saved the day!

Tuesday 10 December 2013

SMotW #83: information asset values

Security Metric of the Week #83: total value of information assets owned by each Information Asset Owner


This week's metric presumes two key things.  

First, it presumes that the organization has Information Asset Owners (IAOs). While the terms vary, IAOs are generally the people who are expected to protect and exploit the information assets in their remit or nominally assigned to them, both the organization's own information assets and those placed in its care by other organizations or individuals (its clients and employees for instance). Someone senior such as the Human Resources Director would typically be the IAO for the HR system, while lesser databases, systems and paperbases might be allotted to mid-level managers. By holding IAOs personally accountable for valuable information, management puts them under pressure to assess and treat the associated risks sensibly, and ideally to enhance the value of the assets by using them well.

Second, the metric presumes that there is some way to value the information assets - easier said than done, but valuation has several benefits so it is worth some effort. In fact, it is hard to envisage rational corporate management without this information, and yet curiously enough in many organizations asset valuation is merely an accountancy exercise, one that is largely restricted to tangible assets (book values) and certain financial/investment instruments (off-balance-sheet).
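Assuming the valuations exist, computing the metric itself is straightforward. Here is a minimal sketch with made-up assets, owners and values; only the owner names Fred, Alan and Sarah are borrowed from the discussion further down.

```python
# Minimal sketch: total value of information assets per Information Asset
# Owner (IAO). The assets, owners and values are made up for illustration.

from collections import defaultdict

assets = [
    {"name": "HR system",         "owner": "Fred",  "value": 4_000_000},
    {"name": "Customer database", "owner": "Alan",  "value": 3_500_000},
    {"name": "Design archive",    "owner": "Sarah", "value": 2_500_000},
    {"name": "Intranet wiki",     "owner": "Jo",    "value": 800_000},
]

totals = defaultdict(int)
for asset in assets:
    totals[asset["owner"]] += asset["value"]

for owner, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{owner:<6} ${total:>12,}")
```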

ACME managers rated the metric at 51%:

P     R     A     G     M     A     T     I     C     Score
48    64    78    57    79    38    50    22    26    51%



If you look up example metric 7.6 in chapter 7 of our book, you'll discover that we deliberately omitted the scoring rationale for this metric in order to emphasize the importance of keeping notes as you work through the PRAGMATIC process. If the only record that remains is the table of ratings, or even worse just the overall PRAGMATIC score, it's hard to recall the discussion and the reasoning behind the metric ... but let's give it a go now and see how we get on.

Overall, the 51% PRAGMATIC score tells us that management was not very impressed with the metric: in their estimation, it should not be dismissed out of hand but it is unlikely to feature highly on anyone's security metrics wish-list.  [OK, but we really need to know why. What was it about the metric that slightly interested and slightly concerned them?]

The high spots in the scoring table were the metric's Meaningfulness and Actionability. Looking at the sample graphic above, it's obvious at a glance that three IAOs (Fred, Alan and Sarah) own just over half of the information assets by value between them, with the remainder divided between seven other IAOs. That in turn implies that Fred, Alan and Sarah are shouldering heavier information security burdens than the other seven, so perhaps some reallocation of information assets is in order? It's hard to tell with so little information to go on. With hindsight, the Meaningfulness and Actionability ratings were both quite generous, but it could well be that we are interpreting the metric quite differently now than when it was originally considered. 

The metric's low spots were its Independence and Cost-effectiveness. The 22% rating for Independence suggests that perhaps management believed the IAOs with most to gain or lose from the metric would be largely responsible for taking and reporting the measurements, a potential conflict of interest. The poor rating on Cost-effectiveness gives the impression that this is a metric with limited value and high costs.

Now pick any other PRAGMATIC criterion and try to figure out why it was rated as it was. It's even harder to reconstruct the arguments here! Maybe the ACME managers who were involved in the original discussion will remember what was said, although if that was many months ago, things will have moved on - ACME's security metrics program will have matured somewhat, and the business context is different.

So, the main take-home message from this week's example is to keep decent notes as you work through the PRAGMATIC process. It is appropriate, indeed necessary to review and revisit the organization's choice of information security metrics from time to time (perhaps every year or so). Trust us, it will be much easier to pick up the threads of previous discussions by referring to your scoring notes than to start from scratch.

There's one final point before we end. The metric was originally proposed, described, discussed and scored in words and numbers - no pictures. We prepared the simple pie chart graphic above for this blog, later, using some made-up data in MS Excel, but visualizing metrics like this turns out to be a powerful way to help us imagine and think through how they might actually work out in practice. It's also a potential source of bias, however, since we have undoubtedly framed the discussion in a certain way with that particular illustration (we have interpreted it as a pie chart, a proportional representation for starters). If we had illustrated this same piece with the bar chart below instead of the pie chart above, what effect might that have had on your thoughts concerning this metric?  Think on.