Friday 31 January 2014

Network security awareness


Networking has played a pivotal role in the explosion of IT use in recent decades – first LANs and then WANs, most notably of course the Internet. Being an OF, I recall how it was in the dim and distant days prior to LANs, when computers were mostly accessed through directly connected teletype or green-screen terminals, and generally only by computer scientists sporting white labcoats and clipboards. Ordinary users - the lucky ones at least - interacted with Data Processing through the coding sheets for punched cards and fan-fold printouts. 

Local Area Networks of various kinds were introduced to put terminals, and later PCs, directly in the hands of the users on site. Working in IT in the 80s, I saw rapid technological changes as wave after wave of networking protocols and standards rose to prominence and fell from grace. A dual-ring counter-rotating daisy-chain network from RACAL seemed cutting-edge at the time but was the bane of my life back then: supposedly it was resilient and self-healing, the "taps" being able to route traffic back around the other way if an upstream tap became unavailable. If the taps and cables weren't so inherently unreliable, it might have worked in practice as well as it did on paper.

The packet-routing capabilities of X.25 and TCP/IP enabled the first practical Wide Area Networks to pass on traffic for distant nodes, and to re-route around obstacles such as overloaded or broken links. Once organizations started to interconnect entire networks and communities rather than individual computers and users, the social aspects started to come to the fore. As ARPANET, JANET and others morphed into the early Internet, dial-up modems, bulletin boards and a wonderful utility called Kermit enabled the benign hackers of the day to share and share alike. The net was a giant play pen cum classroom, where the technology was being actively developed, hacked and improved on the fly by some immensely talented individuals and groups, both professionals and amateurs.

Network security barely existed in those days: it was mostly computer security in fact, securing individual computers as if they were desert islands. The information security issues associated with networking today are many and varied. We may have thought we’d solved the availability issues of those unreliable early networks, yet today we have wireless networks with intermittent coverage, and Denial of Service attacks still occur, some very significant and costly. Encrypting network traffic seems fairly straightforward, until we learn that the authorities have deliberately handicapped the protocols we trust - and anyway, most network traffic is sent in the clear.

Well fair enough, but aside from all those technical issues, what about the business angle on network security? Why should management care?

Organizations are facing the possibility of frauds, extortion, information theft and covert long-term infiltration of the corporation, as well as the business continuity aspects of network downtime. We have to deal with employees picking up viruses or giving away their passwords and other confidential information with gay abandon. There are strategic issues too such as network and security architectures, and governance issues such as putting the appropriate network security policies, teams and systems in place.

Ordinary employees may not really understand the technology, but when it comes to setting up their home networks and portable IT equipment, they need a basic appreciation of the security aspects, at least enough to ask for help if they can’t cope. They also need to realize that their use of corporate IT networking facilities is routinely logged and monitored, with obvious privacy implications.

And then we come to the Internet of Things. I may be a cynical OF, but let's just say I was less than surprised to read recently about Internet-enabled fridges already circulating spam.

Thursday 30 January 2014

SMotW #90: % of business units with proven I&A

Security Metric of the Week #90: proportion of business units using proven identification and authentication mechanisms

This metric hinges on the meaning of "proven". Proof is a relative term. What level of proof is appropriate? It's a matter of assurance, trust and risk.

ACME managers implicitly assumed* that the metric would be self-measured and reported by business units. Given a central mandate from HQ to implement specific controls, business units are obviously under pressure to confirm that the required controls are in place ... even if they actually are not. Aside from the risk of business units simply reporting whatever HQ expects to hear, there is also a distinct possibility that the business units might have misunderstood the requirement, and failed to implement the control effectively (perhaps mis-configuring their security systems).

That brings us to the matter of the nature and extent of control implementation. If a business unit has the required identification and authentication (I&A) mechanism in place for some but not all of their systems, how should they report this? What if they have made a genuine effort to implement it on most systems, but the few that remain are particularly important ones? What if the identification part is working as per the spec but the authentication isn't, perhaps using a different mechanism for valid business or technical reasons? There are several variables here, making it tough to answer honestly a typically naive checklist question such as "Are your IT systems using the proven I&A mechanisms required in the corporate security standards (Y/N)?"
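To see why a bare Y/N answer loses so much information, here is a minimal sketch of a richer self-report: a per-unit coverage ratio plus a flag for critical gaps. The systems, units and field names below are entirely hypothetical - an illustration of the reporting form, not ACME's actual setup.

    # Hypothetical sketch: per-unit I&A coverage says more than a Y/N checklist answer.
    from collections import defaultdict

    systems = [
        # (business unit, system, uses the mandated I&A mechanism?, business-critical?)
        ("Manufacturing", "ERP",        True,  True),
        ("Manufacturing", "Shop floor", False, True),   # the gap a Y/N answer hides
        ("Sales",         "CRM",        True,  False),
        ("Sales",         "Web shop",   True,  True),
    ]

    totals = defaultdict(lambda: {"all": 0, "ok": 0, "critical_gaps": 0})
    for unit, name, compliant, critical in systems:
        totals[unit]["all"] += 1
        totals[unit]["ok"] += compliant
        totals[unit]["critical_gaps"] += (critical and not compliant)

    for unit, t in sorted(totals.items()):
        print(f"{unit}: {100 * t['ok'] / t['all']:.0f}% of systems on the mandated I&A, "
              f"{t['critical_gaps']} critical system(s) non-compliant")

Even this crude breakdown lets HQ distinguish "fully compliant" from "compliant except for the crown jewels" - precisely the distinction the Y/N question obliterates.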

On that basis, the managers gave this metric a PRAGMATIC score of just 44%, held back by abysmal ratings for Genuineness and Independence (see page 207 in PRAGMATIC Security Metrics). 

The metric is not necessarily dead in the water, though, since it would be possible to address their main concerns through some form of independent assessment and reporting of the I&A mechanisms. Certifying IT systems is something rarely seen outside large military and governmental organizations, which have the governance structures in place to:
  1. Define security requirements including technical controls such as specified I&A mechanisms, methods, software etc.;
  2. Mandate those requirements on the various business units;
  3. Implement the controls locally, often with central support (e.g. technical support plus standards, procedures and guidelines);
  4. Accredit certification functions who are competent to test and certify business units' compliance with the security requirements;
  5. Test and certify the business units, and re-test and re-certify them periodically;
  6. Deal with any noncompliance.
That little lot would generally be viewed as an expensive luxury for most organizations (impacting the metric's Cost-effectiveness rating), although the global spread of ISO/IEC 27001 certification is gradually assembling most of those pieces, and making more organizations familiar with the concept of accredited certification.

Meanwhile, ACME quietly parked this metric in the "too hard for now" bucket, pressing ahead with the higher-scoring metrics still on their shortlist.

* PS Unless someone present happens to notice and point out assumptions like this, they tend to remain unspoken, and are a frequent cause of misunderstandings. At some stage (perhaps after a PRAGMATIC workshop has shortlisted a reasonably small number of metrics thought worth pursuing), the metrics ought to be specified in sufficient detail to dispel such doubts. Several security metrics standards and websites give examples of the forms typically used to specify metrics, although most appear obsessed with the statistics, often neglecting valuable information such as the reasoning behind and justification for the metrics, the intended audiences and so forth. I'm sure "How should we specify security metrics?" would spawn an interesting thread on the Security Metametrics group on LinkedIn ...

Tuesday 28 January 2014

Preventive & corrective actions

Having been hit twice so far, I've upped my evaluation of the risk of my credit/debit cards being compromised by online vendors' inadequate information security. The latest incident was, I suspect, a result of the Adobe hack a few months ago. Both times, the bank's fraud systems spotted and stopped the incidents and told me well before I even noticed anything awry.

After the first incident, I resolved to dedicate a specific card for online purchases so at least I could carry on using my other cards if I got hit. That was a good move that made things easier after the second incident ... but I missed my chance this time around to be even more proactive. When I received an apologetic email from Adobe about their breach, or perhaps even earlier, I should have cancelled the card immediately and ordered a replacement. Next time, I won't wait for the bank to pull its finger out ...

I now have a new card, once again dedicated to online purchases. This time, I have opted for a VISA debit card on a separate bank account with no credit or overdraft facility. Treating it like an online pre-pay card, I deliberately maintain a low balance on that account, just enough for my normal small value online purchases. If - or should I say when - the card is next compromised, the fraudsters won't be able to steal $thousands, and I won't be out of pocket for the weeks it takes the banks to sort things out and refund in full (which, thankfully, they have done for me on both prior occasions - no complaints from me on that score!).

So, aside from all that, and the usual "Watch for the padlock" and "Only do business with reputable online traders", is there anything else you'd recommend me to do to mitigate the risk? It's all a bit embarrassing, me being a CISSP and all!

Saturday 25 January 2014

ISO/IEC 27000:2014 available now - for FREE!

In the course of catching up with a long backlog of ISO/IEC JTC 1/SC 27 emails and updating ISO27001security.com, I discovered that the third edition of ISO/IEC 27000 has just been released.

Like its predecessors, ISO/IEC 27000:2014 can be downloaded legitimately, free of charge, through the ITTF site.

The idea of '27000 being free is to encourage the adoption of a common glossary of information security terms, and to promote an appreciation of the ISO27k standards outlined within it. It's a shame the other ISO27k standards aren't also free: I'm sure free access would markedly increase their adoption, as it has for NIST's excellent SP800-series security standards, but unfortunately I don't determine the pricing policies for ISO/IEC.

Although I haven't even finished reading the new edition and updating the site, I have already noticed that the new version no longer defines the terms "asset" and "information asset". I suspect this was done to draw to a close the lengthy but rather unedifying SC27 discussions (OK, arguments!) around those contentious terms. Unfortunately, that does rather leave things up in the air. Does “information asset” mean the intangible information content, the tangible storage media, both, or something else? The distinction could be quite important in the context of various ISO27k standards, but I guess organizations using the standards will have to figure out the answers for themselves if the terms are used but not explicitly defined in those standards.

Thursday 23 January 2014

SMotW #89: number of infosec events

Security Metric of the Week #89: number of information security events, incidents and disasters


This week, for a change, we're borrowing an analytical technique from the field of quality assurance called "N why's" where N is roughly 5 or more.

Problem statement: for some uncertain reason, someone has proposed that ACME might count and report the number of information security events, incidents and disasters.
  1. Why would ACME want to count their information security events, incidents and disasters?
  2. 'To know how many there have been' is the facile answer, but why would anyone want to know that?
  3. Well, of course they represent failures of the information risk management process. Some are control failures, others arise from unanticipated risks materializing, implying failures in the risk assessment/risk analysis processes. Why did the controls or risk management process fail?
  4. Root cause analysis reveals many reasons, usually, even though a specific causative factor may be identified as the main culprit. Why didn't the related controls and processes compensate for the failure?
  5. We're starting to get somewhere interesting by this point. Some of the specific issues that led to a given situation will be unique, but often there are common factors, things that crop up repeatedly. Why do the same factors recur so often?
  6. The same things keep coming up because we are not solving or fixing them permanently. Why don't we fix them?
  7. Because they are too hard, or because we're not trying hard enough! In other words, counting infosec events, incidents and disasters would help ACME address its long-standing issues in that space.
There's nothing special about that particular sequence of why's nor the questions themselves (asking 'Who?', 'When?', 'How?' and 'What for?' can be just as illuminating); it's just the main track my mind followed on one occasion. For instance, at point 5, I might equally have asked myself "Why are some factors unique?". At point 3, I might have thought that counting infosec incidents would give us a gauge for the size or scale of ACME's infosec issues, begging the question "Why does the size or scale of the infosec issues matter?". N why's is a creative technique for exploring the problem space, digging beneath the superficial level.

The Toyota Production System uses techniques like this to get to the bottom of issues in the factory. The idea is to stabilize and control the process to such an extent that virtually nothing disturbs the smooth flow of the production line or the quality of the final products. It may be easy for someone to spot an issue with a car and correct it on the spot, but it's better if the causes of the issue are identified and corrected so it does not recur - better still if it never becomes an issue at all. Systematically applying this mode of thinking to information security goes way beyond what most organizations do at present. When a virus infection occurs, our first priority is to contain and eradicate the virus: how often do we even try figuring out how the virus got in, let alone truly exploring and addressing the seemingly never-ending raft of causative and related factors that led to the breach? Mostly, we don't have the luxury of time to dig deeper because we are already dealing with other incidents.

Looking objectively at the specific metric as originally proposed, ACME managers gave it a PRAGMATIC score of 49%, effectively rejecting it from their shortlist ... but this one definitely has potential. Can PRAGMATIC be used to improve the metric? Obviously, increasing the individual PRAGMATIC ratings will increase the overall PRAGMATIC score since it is simply the mean rating. So, let's look at those ratings (flick to page 223 in the book).

In this case, the zero rating for Actionability stands out a mile. Management evidently felt totally powerless, frustrated and unable to deal with the pure incident count. The number in isolation was almost meaningless to them, and even plotting the metric over time (as shown on the example graph above) would not help much. Can we improve the metric to make their job easier?
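The arithmetic behind that is worth a moment: since the score is just the mean of the nine ratings, a zero caps it severely, and raising the weakest rating gives the biggest lift. In this sketch only the 0% Actionability and the 49% overall score come from the workshop; the other eight ratings are invented stand-ins (the real ones are on page 223 of the book).

    # Back-of-envelope: the PRAGMATIC score is the mean of nine ratings, so one
    # zero drags it right down. Only the 0% Actionability and 49% overall score
    # come from the post; the other eight ratings are invented for illustration.
    ratings = {
        "Predictiveness": 55, "Relevance": 60, "Actionability": 0,
        "Genuineness": 50, "Meaningfulness": 65, "Accuracy": 55,
        "Timeliness": 50, "Independence": 56, "Cost-effectiveness": 50,
    }
    print(f"Overall score: {sum(ratings.values()) / len(ratings):.0f}%")   # 49%

    # Making the metric actionable (e.g. by categorizing incidents, as below)
    # lifts the mean by roughly one ninth of the improvement:
    ratings["Actionability"] = 60
    print(f"Improved score: {sum(ratings.values()) / len(ratings):.0f}%")  # ~56%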

As indicated at item 7 above, this metric could help by pointing out how many information security events, incidents and disasters link back to systematic failures that need to be addressed. Admittedly, the bare incident count itself would not give management the information needed to get to that level of analysis, but it's not hard to adapt and extend the metric along those lines, for instance categorizing incidents by size/scale and nature/type, as well as by the primary and perhaps secondary causative factors, or the things that might have prevented them occurring.

A pragmatic approach would be to start assigning incidents to fairly crude or general categories, and in fact this is almost universally done by the Help Desk-type functions that normally receive and log incident reports - therefore the additional information is probably already available from the Help Desk ticketing system. Management noting a preponderance of, say, malware incidents, or an adverse trend in the rate of incidents stemming from user errors, would be the trigger to find out what's going wrong in those areas. Over time, the metric could become more sophisticated with more detailed categorization etc.
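As a sketch of how little extra analysis is needed once incidents carry even crude categories, suppose the Help Desk system can export one (month, category) pair per ticket - the categories and counts below are invented:

    # Minimal sketch, assuming a Help Desk export of (month, category) per incident.
    from collections import Counter

    incidents = [
        ("2013-11", "user error"), ("2013-12", "malware"), ("2013-12", "phishing"),
        ("2014-01", "malware"),    ("2014-01", "malware"), ("2014-01", "user error"),
    ]

    by_category = Counter(category for _, category in incidents)
    by_month = Counter(month for month, _ in incidents)

    print("By category:", by_category.most_common())   # malware tops the list
    print("Monthly trend:", sorted(by_month.items()))  # rising month on month

    # A preponderance of malware incidents, or an adverse trend, is the cue to
    # investigate root causes in that area - far more actionable than a bare count.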

Monday 20 January 2014

7 things you should know about infosec metrics

A new two-page Educause paper by Shirley C. Payne from the University of Virginia and Stephen A. Vieira from the Community College of Rhode Island succinctly explains the purpose and utility of information security metrics.
"An information security metric is an ongoing collection of measurements to assess security performance, based on data collected from various sources. Information security metrics measure a security program’s implementation, effectiveness, and impact, enabling the assessment of security programs and justifying improvements to those programs. Effective metrics can bring visibility and awareness to the underlying issue of information security and highlight effective efforts through benchmarking, evaluation, and assessment of quantified data. This can put institutions in a proactive stance regarding information security and demonstrate support for leadership’s priorities."

Although written for educational institutions, the principles are universally applicable to any organization that secures information.

By referring specifically to IT security and the IT function, the paper introduces a subtle bias towards technical metrics. Personally, I would have emphasized using enterprise and information security strategies rather than IT to drive the selection of metrics - but that's a small quibble with an otherwise well-written paper.

Making an impact

For an infosec pro, "impact" is a bad thing, the adverse consequences of an incident, but it has another meaning. If your security policies, standards, procedures and guidelines make a positive impact on the readers, they are more likely to change their ways - and vice versa.

Nice in theory, but how do you actually achieve that?  Well, it helps to figure out a few things:
  • Who are your audiences?  Who is it that you are trying to influence? If you can break your audience down from an amorphous blob labeled "employees" or "users" to more specific groups or types of people, you will find that they have different information needs and perspectives on information security. Salesmen, for instance, live and breathe sales and marketing. Their heads are mostly on prospects and customers, plus products and the sales process (and, of course, their commission). Most are not exactly keen to read a boring information security newsletter, or a tedious procedure for requesting access to a system, or whatever. How are you going to catch their attention? [Hint: things that affect brands and sales, and anything that affects their commission, are very much in their line-of-sight!]

  • What is it that you are trying to put across, exactly? Trust me, it's easy to blabber on about information security in general, hand-waving terms, but it takes a bit more effort to home in on specific issues and particular messages. You need to research the topic, break down the risks, and find angles that are relevant and important enough to warrant being communicated to people busy doing other stuff. If appropriate, pick up on breaches that have affected the organization. Failing that, incidents affecting neighbors and peers, and near-misses. The aim is to motivate your audiences by impressing on them that "It could have been me" - in other words, information security is not just a theoretical concern but something worth taking seriously and actively.

  • When you say you want them to 'change their ways', what do you mean? What is the nature of the change/s? Are we talking about a slight adjustment, a tweak, evolution or revolution? Is the desired change entirely within the domain of the individuals, or is it a group-wide or cultural thing, taking in aspects such as social relationships and power as well as the people themselves? A wonderful way to think this through is to ask yourself what differences you expect to see if the change is 100% successful, contrasting that against the 100% unsuccessful case, which naturally suggests ways to measure the effects i.e. metrics.

  • What's in it for them? This is hard. It's all very well telling people they ought to care about information, risk, protection, privacy and compliance, but that's our imperative, it's what drives us as infosec pros. How are we going to make it theirs? How do we get them to internalize and own the problem? We usually take one of two lines: we emphasize the benefits either to the organization or to the individual. In fact, even the organizational benefits tend to be couched in terms that hint at self-interest, for instance a healthy, profitable, vibrant organization is going to be a happier, more exciting and promising place to work. If your default approach is to warn people about the penalties and dire consequences of not doing things right, perhaps you ought to re-think things. Enforcement is a necessary part of achieving compliance but it is not the most effective approach. It's too negative. How about some carrot to go with, or instead of, the stick?

  • How are you going to put the message/s across? Reviewing the answers to the previous questions generally reveals that you have a diversity of messages and audiences with differing needs, so good luck if you are putting all your eggs in one basket. I'm not just talking here about using a single communications vehicle such as a newsletter, poster or intranet site, but also a single mode of communications such as the written word. Some of us love reading and writing, some of us think in pictures, others like to be told or shown things, and some need to experience things for themselves. Like the carrot-and-stick image above, your security awareness poster or infographic, for all its striking graphic imagery, bright color and well-meaning advice, is not going to have the same impact on everyone. Some will love it and take it to heart, others may barely give it a second glance. The poster has value as part of a coherent communications approach, not the whole.
Contact me for more along these lines, either by email or through the comments. There's lots more to say!

Sunday 19 January 2014

Valuable tech knowledge

Patent disputes bring the $ value of intellectual capital to the headlines - for example, over $1bn was recently awarded against Marvell Technology Group for infringing Carnegie Mellon University's patent on a technique for accurately reading data from a hard drive.

IBM has taken out more US patents than any other company for 21 straight years, both to protect the proprietary technology in its own products and to force third parties into lucrative license agreements with Big Blue. In 2013, IBM took out another patent every 15 working minutes or so on average (assuming 8 hour days and 200 working days per year) and spent of the order of $6bn in the year on research and development. All of the top ten US patentees are IT/high tech companies.

I wonder if any of those organizations cover the need to protect knowledge in their security awareness programs?

Thursday 16 January 2014

SMotW #88: security ascendancy

Security Metric of the Week #88: information security ascendancy level


One of the most frequent complaints from information security professionals is that they don't get sufficient management support. They say that management doesn't take information security seriously enough, relative to other corporate functions. But are they right to complain, or are they just whining?

There are several possible metrics in this space, for example:
  • Survey management attitudes towards information security, relative to other concerns;
  • Compare the information security budget (revenue and capital charges) against other functions;
  • Assess the maturity of the organization's governance of information security;
  • Measure the level of the most senior manager responsible for information security ("security ascendancy").
The last of these is the simplest and easiest to measure. On the organogram above, the organization presumably scores 2 since it has a Chief Information Security Officer who reports directly to the Chief Executive Officer, the most senior manager in the firm. However, if the CEO takes a personal and direct interest in information security, the score might reach 1 (perhaps depending on whether information security is formally acknowledged as part of the CEO's role in his role description).

The power and influence of the function across the organization decreases with each additional layer of management between it and the CEO. If it is down at level 4 or 5, buried out of sight in the depths of IT (as is often the way), its influence is largely constrained to IT, meaning that it is essentially an IT security rather than information security function. However, since IT typically pervades the business, that is not necessarily the end of the world: with competent and dedicated professionals on board, the Information Security function can still build a strong social network, prove its worth, and influence colleagues by informing and persuading them rather than using positional power. Sure it's hard work, but it's possible.
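Measuring the metric can be almost trivial: count the reporting steps from the most senior manager responsible for information security up to the CEO. A minimal sketch, using an invented reporting structure:

    # Minimal sketch: ascendancy level 1 = the CEO personally owns information
    # security, 2 = a direct report, and so on. The org chart here is invented.
    reports_to = {"CISO": "CIO", "CIO": "CFO", "CFO": "CEO"}

    def ascendancy_level(role: str, top: str = "CEO") -> int:
        level = 1
        while role != top:
            role = reports_to[role]   # a KeyError here would mean a broken org chart
            level += 1
        return level

    print(ascendancy_level("CISO"))   # 4: buried two management layers below the board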

ACME scored this metric highly at 85% on the PRAGMATIC scale (see the book for the detailed score breakdown). It was welcomed as a strategic metric that directly supported ACME's strategy to improve the organization's focus on information security, one that had value in the short to medium term (i.e. not necessarily a permanent security metric).

Wednesday 15 January 2014

New year, fresh eyes

Never mind all those new year's resolutions. The turn of a new year is an opportunity to take a long hard look at your information security strategies, policies, procedures, guidelines, forms, awareness program, intranet website etc. including things such as your corporate Employee Rulebook, Code of Conduct and IT/network/information Acceptable Use Policy.

Try to view them objectively from the perspective of an ordinary employee, perhaps someone who has recently joined the organization and hence lacks preconceptions about, and an understanding of, the corporate culture with respect to valuing and protecting information. If you acknowledge that perhaps you might be a little too close to the action to see things for what they are (particularly if you wrote the materials), ask other people about the documentation. Solicit their candid feedback. An informal survey may be perfectly adequate to flush out any issues with style, readability, meaning and impact, all of which are important if the documentation is to be motivational and effective. If the initial response is "Security policies? What policies?" you have your answer already!

Good on you if you have a fabulous suite of security metrics including appropriate measures and targets relating to the documentation: you presumably already have the data you need to assess the position, in fact you will have been monitoring and responding to the metrics all year round so the new year is nothing special. Oh look, look, flying piggies!

The new year is also a chance to review the broader context for information security, including aspects such as:
  • The organization: what's new in the business this year that wasn't around at the start of last year, or whenever you last reviewed things? Has the organization structure stayed the same? What's hot and what's not? What about looking forward: are the business strategies, objectives and challenges any different to a year ago? What about the markets, products, third party relationships and so forth? It's a remarkably rare organization that sees no changes year-on-year, and at least some of those changes probably ought to be reflected in corresponding updates to the corporate and information governance, including information security. Aside from that, explicitly aligning information security with The Business is the key that unlocks a rosy future - trust me.

  • Information security risks and control requirements: do your policies, procedures, guidelines etc. reflect the state of our art? Are you up to date with things such as wireless networking, social media, BYOD, cloud computing, tablets and [insert another current buzzword here]? What about current threats (such as ransomware and the NSA), vulnerabilities (such as that nasty one in [name virtually any Microsoft or Adobe product here]) and business impacts (see previous point)?

  • External compliance obligations: whether it is updates to PCI-DSS, ISO27k, or the myriad governance, security and privacy laws and regulations that affect us, compliance is one of those areas where shifts can be seismic.  Hopefully, of course, you have not only kept up with developments in 2013, but you have stayed ahead of the curve ... which means now is a great time both to confirm that you are fully compliant with the existing raft of rules and regs, and will be compliant with forthcoming changes at the time they come into effect. Are there any ground-shaking changes on your radar already for 2014? If so, how about incorporating them into your strategies and plans? Compliance obligations are golden opportunities to push things along that, in most cases, ought to have been done right all along. Most of your colleagues implicitly accept the compulsion to comply, so with a sneaky bit of planning ahead, you can use that to your advantage.
Looking back at 2013, were there any recurrent nightmares in terms of information security incidents that refused to play dead? Is it clear from your metrics that you have a weakness in your technical controls, manual controls, physical controls, preventive controls, detective controls, corrective controls, compliance controls ... or has something else been the thorn in your side, perhaps a particular system, person, team, department, business unit, site, partner or whatever? Recurrent issues recur, and will probably continue recurring, until the root causes are resolved so it's no good turning a blind eye, no matter how intractable the problems seem to be. If the issues are too big for you to tackle, get some help. Find business colleagues who also experience the pain, and collaborate with them on a new approach for the new year.

Wednesday 8 January 2014

SMotW #87: visitor/employee parking separation

Security Metric of the Week #87: distance separating employee from visitor parking


Imagine your corporate security standards require that "Employee parking spaces must be physically distant from visitor parking spaces, separated by at least 100 paces". The rule might have been introduced to reduce risks such as employees covertly passing information to visitors between vehicles, or terrorists triggering vehicle bombs in the vicinity of key employees, or for some other reason (to be honest, we're not exactly sure of the basis - a common situation with big corporations and their thick rulebooks: the rationale often gets lost or forgotten in the mists of time). Imagine also that senior management has determined that the security standards are important, hence compliance with the standards must be measured and reported across the corporation. Forthwith!

Now picture yourself in the metrics workshop where someone proposes this very metric. They painstakingly point out the specific rule in the rulebook, noting that the distance between employee and visitor parking is something that can be measured easily on the site plans, or paced out in the parking lot. As far as they are concerned, this metric fits the bill. It is cheap, elegant even, hard to fake and easily verified. "If HQ wants compliance metrics, compliance metrics is what they'll jolly well get!"

It soon becomes abundantly clear that the proposer has ulterior motives. Rather than proactively supporting HQ, his cunning plan is to undermine the effort through passive resistance. A metric that technically fulfills the requirement while providing no useful information would be perfect!

As the group tries ever harder to dismiss the metric, so the proposer digs-in deeper until he is fully entrenched. By this stage, it is definitely "his" metric: he takes any hint of criticism personally, and seemingly has an answer for everything. Tempers fray as the heat exceeds the light output from the discussion.

PRAGMATIC to the rescue! In an attempt to defuse the situation, someone suggests working through the method and scoring the metric as a team effort. Dispassionately considering the PRAGMATIC criteria one by one, and allowing for the metric's plus points, leads to a final score of just 41% ... and a big thumbs-down for this metric.

Measuring health risks

I think it's fair to say that metrics is a "challenging" topic across all fields, not just information security. The issues are not so much with the actual mathematics and statistics (although it is all too easy for non-experts like me to make fundamental mistakes in that area!) as with what to measure, why it is being measured, and how best to measure, report and interpret/use the information.

As a reformed geneticist, I can relate to this example: measuring and reporting the health risks flagged by off-the-shelf DNA test kits. A journalist for the New York Times took three different tests and compared the results.

Underlying the whole piece is the fact that we're talking about risks or probabilities, with inherent uncertainties. The journalist identified several factors with these tests that make things even less certain for customers.

For a start, the three test companies appear to be testing for their own unique batteries of disease markers, which immediately introduces a significant margin for error or at least differences between them. To be honest, I'm not even entirely certain that all their markers are valid. I don't know how they (meaning both the markers and the companies) are assessed, nor to what extent either of them can be trusted.

Secondly, the test results were reported relative to 'average' incidence rates for each disease, using different averages (presumably separate data sets, quite possibly means of samples from entirely different populations!). This style of metric reporting introduces the problem of 'anchoring bias': the average numbers prime the customers to interpret the test results in a certain way, perhaps inappropriately.
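A quick worked example (with invented numbers) shows how much the choice of baseline matters: the very same relative risk reads quite differently depending on which 'average' it is anchored to.

    # Invented numbers: one relative risk, two different 'average' baselines.
    relative_risk = 1.3   # "30% more likely than average"

    for label, baseline in [("Company A's average", 0.02),
                            ("Company B's average", 0.05)]:
        print(f"{label}: {baseline:.0%} baseline -> {baseline * relative_risk:.1%} risk")

    # Company A's average: 2% baseline -> 2.6% risk
    # Company B's average: 5% baseline -> 6.5% risk
    # Same marker, same customer, two rather different-sounding reports.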

Thirdly, except in a few specific situations, our genes don't directly, indisputably cause particular diseases: most of those disease markers are correlated to some extent with a predisposition to the disease, rather than being directly causative. If I have a marker for heart disease, I may be more likely to suffer angina or a heart attack than if I lacked the marker, but just how much more likely is an open question since it also depends on several other factors, such as whether I smoke, over-eat or am generally unfit - and some of those factors, and more besides, are themselves genetically-related. There are presumably genetic 'health markers' as well as 'disease markers', so someone with the former might be less prone to the latter.

A fourth factor barely noted in the NY Times piece concerns the way the results are reported. In a conventional clinical setting, diagnostic test results are interpreted by specialists who truly understand the tests, the natural variation between people, and the implications of the results, given the context of the actual patient (particularly the presence/absence, nature and severity of other symptoms and contributory factors). The written lab test reports may highlight specific values that are considered outside the normal range, but what those numbers actually mean for the patient is left to the specialists to determine and explain. In cutting out the specialists, the off-the-shelf test kit companies are left giving their customers general advice, no doubt couched very carefully in terms that avoid any liability for mistakes. On top of that, they have a responsibility to avoid over- and under-playing the risks, implying a neutral bias. In the doctor's surgery, the doc can respond to your reactions, give you a moment to let things sink in, and offer additional advice beyond the actual test results. That interaction is missing if you simply get a letter in the mail. 

There's a fifth factor that isn't even mentioned in the report, namely that the samples and tests themselves vary somewhat. It's a shame the reporter didn't take and submit separate samples to the same labs (perhaps under pseudonyms) to test their repeatability and inherent quality.

The final comments in the NY Times are right on the mark. Instead of spending a couple of hundred dollars on these tests, buy a decent set of bathroom scales and assess the more significant health risks yourself! While I have a lot of respect for those who develop sophisticated information security risk models and systems, I'm inclined to say much the same thing. An experienced infosec or IT audit pro can often spot an organization's significant risk factors a mile off, without the painstaking risk analysis. 

Friday 3 January 2014

SMotW #86: info asset inventory integrity

Security Metric of the Week #86: integrity of the information asset inventory

As a general rule, if you are supposed to be securing or protecting something, it's quite useful to know at least roughly what that 'something' is ...

Compiling a decent list, inventory or database of information assets turns out to be quite a lot harder than one might think.  Most organizations made a stab at this for Y2K, but enormous though it was, that effort was very much focused on IT systems and, to some extent, computer data, while other forms of information (such as "knowledge") were largely ignored. 

Did your organization even maintain its Y2K database?  Hardly any did.

If we were able to assess, measure and report the completeness, accuracy and currency of the information asset inventory, we could provide some assurance that the inventory was being well managed and maintained - or at least that the figures were heading the right way.


How would one actually generate the measurements? One way would be to validate a sample of records in the inventory against the corresponding assets, or vice versa (perhaps both).  A cunning plan to validate, say, the next 10% of the entries in the inventory every month would mean that the entire data set would be validated every year or so (allowing for changes during the year, including perhaps the introduction of additional categories of information asset that were not originally included). 
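A minimal sketch of that rolling plan, with placeholder asset IDs and a dummy validation check standing in for the real legwork:

    # Sketch of 'validate the next 10% each month'; validate() is a placeholder
    # for the real record-versus-reality comparison.
    def monthly_sample(asset_ids, month_index, fraction=0.10):
        """Return the rolling slice of the inventory due for validation this month."""
        n = max(1, int(len(asset_ids) * fraction))
        start = (month_index * n) % len(asset_ids)
        return (asset_ids + asset_ids)[start:start + n]   # wrap around the cycle

    def integrity_score(sample, validate):
        """Proportion of sampled records that match reality - the metric itself."""
        return sum(map(validate, sample)) / len(sample)

    assets = [f"ASSET-{i:04d}" for i in range(1000)]
    sample = monthly_sample(assets, month_index=3)
    dummy_check = lambda asset_id: hash(asset_id) % 100 < 95   # 'passes' ~95% of records
    print(len(sample), "records due for validation this month")
    print(f"Integrity score: {integrity_score(sample, dummy_check):.0%}")

Reporting the score alongside the proportion of the inventory validated so far keeps the metric honest while the cycle completes.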

  P    R    A    G    M    A    T    I    C    Score
  82   66   83   78   80   43   50   66   70    69%

ACME management were quite interested in this metric, if a little concerned at the Accuracy, Timeliness and Integrity of the metric (ironic really!).  Having calculated the metric's PRAGMATIC score, they decided to put this one on the pending pile to revisit later.

The CISO was more confident than his peers that his people would compile the metric properly, and he toyed with the idea of either using the metric for his own purposes, or perhaps proposing a compromise: Internal Audit might be commissioned to sample and test the inventory on a totally independent basis, comparing their findings against those from Information Security to prove whether Information Security could be trusted to report this and indeed other security metrics.