Monday 30 September 2019

Digital (cyber) forensics module released


IT systems, devices and networks can be the targets of crime, as in hacking, ransomware and computer fraud. They are also tools that criminals use to research, plan and coordinate their crimes. Furthermore, criminals use technology routinely to manage and conduct their business, financial and personal affairs, just like the rest of us.
Hence digital devices can contain a wealth of evidence concerning crimes committed and the criminals behind them.
Since most IT systems and devices store security-related information digitally, digital forensics techniques are also used to investigate other kinds of incidents, figuring out exactly what happened, in what sequence, and what went wrong ... giving clues about what ought to be fixed to prevent them recurring.
It’s not as simple as you might think for investigators to gain access to digital data, then analyze it for information relevant to an incident. For a start, there can be a lot of it, distributed among various devices scattered across various locations (some mobile and others abroad), owned and controlled by various people or organizations. Some of it is volatile and doesn’t exist for long (network traffic, for instance, or the contents of RAM). Some is unreliable and might even be fake, a smoke-screen deliberately concealing the juicy bits.
A far bigger issue arises, though, if there is any prospect of using digital data for a formal investigation that might culminate in a disciplinary hearing or court case. There are explicit requirements for all kinds of forensic evidence, including digital evidence, that must be satisfied simply to use it within an investigation or present it in court. Ensuring, and being able to prove, the integrity of forensic evidence implies numerous complications and controls within and around the associated processes. They are the focus of October’s security awareness materials which:
  • Describe the structured, systematic process of gathering digital forensic evidence and investigating cyber-crime and other incidents involving IT;
  • Address information risks associated with the digital forensics process;
  • Prompt management to prepare or review policies and procedures in this area, training workers or contracting with forensics specialists as appropriate;
  • Encourage professionals with an interest in this area to seek and share information;
  • Discourage workers in general from interfering with and perhaps destroying forensic evidence.
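
On the evidence-integrity point above: one common supporting control is to hash each item of evidence at the moment of acquisition so that any subsequent change is detectable. Here's a minimal Python sketch along those lines - the case identifier and file path are hypothetical, and a real forensic workflow would record far more (tool versions, write-blockers, witnesses and so forth):

    import datetime
    import hashlib
    import json

    def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks
        so that large evidence images don't exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def record_acquisition(path: str, case_id: str, examiner: str) -> dict:
        """Create a bare-bones chain-of-custody record for an acquired item."""
        return {
            "case": case_id,
            "examiner": examiner,
            "item": path,
            "sha256": sha256_file(path),
            "acquired_utc": datetime.datetime.utcnow().isoformat() + "Z",
        }

    # Hypothetical usage: hash a disk image at acquisition time, then
    # re-hash it later to demonstrate the copy has not been altered.
    record = record_acquisition("evidence/disk001.img", "CASE-042", "A. Examiner")
    print(json.dumps(record, indent=2))
    assert record["sha256"] == sha256_file("evidence/disk001.img"), "evidence changed!"

Re-computing the hash later and comparing it with the recorded value is a simple, demonstrable way to support a claim that the evidence copy has not been tampered with.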

Sunday 29 September 2019

Awareness and training program design

The first task when preparing any awareness content is to determine the objectives. What are you hoping to achieve here? What is the point and purpose? What's the scope? What would success or failure even look like?

There are several possible approaches. 

You might for instance set out to raise security awareness 'in general', with no particular focus. That's a naive objective given the variety of things that fall within or touch on the realm of 'security'. Surely some aspects are more pertinent than others, more likely to benefit the workforce and hence the organization? Trying to raise awareness of everything all at once spreads your awareness, training and learning resources very thin, not least the attention spans of your audiences. It risks bamboozling people with far too much information to take in, perhaps confusing them and turning them off the whole subject. 

It's not an effective educational strategy. We know it doesn't work and yet, strangely, there are still people talking in terms of an "annual security awareness training session" as if that solves the problem. 

[Shakes head in despair, muttering incoherently]

Instead, you might identify a few topic areas that are more deserving of effort, 'just the basics' you might say. OK, that's better but now there's the issue of deciding what constitutes 'the basics'. One of the complicating, challenging and fascinating aspects of information risk and security is the mesh of overlapping and interlocking concerns. Security isn't achieved by doing just a few things well. We need to do a lot of things adequately and simultaneously.

Take 'passwords' for example, one of the security controls that most organizations would consider basic. You could simply instruct workers on choosing passwords that meet your organization's password-related policies or standards ... but wouldn't it be better to explain why those policies and standards exist, as well as what they require? Why do we have passwords anyway? What are they for? Addressing those supplementary issues is more likely to lead to understanding and acceptance of the password rules. As you scratch beneath the surface, you'll encounter several important things relating to passwords such as:
  • access control;
  • accountability and responsibility;
  • biometrics and multi-factor authentication;
  • identification and authentication;
  • malware and hacking attacks;
  • password length and complexity;
  • password memorability and recall;
  • password sharing and disclosure;
  • password vaults;
  • phishing and other social engineering attacks;
  • the password change process ...
... and more. Similar considerations apply to any other form of 'basic' security: I challenge you to name any 'basic' security topic so narrowly-scoped that it doesn't touch or depend on related matters. 
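
To put some numbers behind the length-and-complexity bullet above, here's a back-of-the-envelope Python sketch of the brute-force arithmetic. The policies compared are invented examples for illustration, not recommendations:

    import math

    def search_space_bits(alphabet_size: int, length: int) -> float:
        """Brute-force search space expressed in bits: length * log2(alphabet)."""
        return length * math.log2(alphabet_size)

    # Invented example policies for comparison - not recommendations:
    cases = [
        ("8 chars, lowercase only",       26,  8),
        ("8 chars, mixed case + digits",  62,  8),
        ("8 chars, full printable ASCII", 95,  8),
        ("16 chars, lowercase only",      26, 16),
    ]

    for label, alphabet, length in cases:
        print(f"{label:31s} ~{search_space_bits(alphabet, length):5.1f} bits")

    # Doubling the length of a lowercase-only password (~75 bits) beats
    # piling complexity onto a short one (~53 bits): one of the 'whys'
    # behind length-oriented password policies.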

A third approach, then, is to acknowledge those touch points and the mesh of interrelated topics, planning a sensible sequence of awareness topics that meander through the entire field. Maybe cover accountability first, then passwords, then access control ... and so on. Now you're starting to get somewhere! 

Oh but hang on, at this level of analysis there is such a variety of potential topics that the sequence takes some thought, especially as there are only so many awareness and training opportunities in the year. Planning is like plate-spinning: in order to raise awareness, you need to re-cover each topic periodically, reminding people before they forget, each awareness and training episode building on previous ones (especially the most recent and/or the most memorable). That's all very well, provided you don't let the plates fall. If your security awareness people move on, listen for the clatter of broken crockery.

A fourth approach is our way. Every month since 2003, we've picked a topic and gone into some depth on it. We've brought up other relevant topics but only briefly, since they are all explored in depth when their time comes. We've picked up on new topics as they emerged (making the content fresh and topical - literally), sometimes combining topics or deliberately taking different perspectives in successive passes. As we plummet towards the 200th awareness module in December, we've steadily accumulated a security awareness and training portfolio covering ~70 topics, all of them designed and prepared to a consistently high standard by a small team of experts. On average, each module has passed through the mill three times, so they are all quite stable and mature.

Aside from the topic-based monthly deliveries, there's another innovation in that our awareness materials address three parallel audiences: general employees, managers and professionals. Complementing the breadth and depth of the awareness content, the three streams lead to cultural changes across the entire organization. We think of this as socializing security within the corporation, informing the three audience groups about matters that concern them in terms they can understand, while encouraging them to interact and communicate both among and between themselves.

With our monthly subscription service drawing to a close in just a few months, we're thinking about how best to continue maintaining and updating the portfolio of materials, tracking the ever-evolving field of information risk and security. We'll probably move to irregular updates, just a few times a year.

Meanwhile, we're gradually loading-up the SecAware eStore with additional awareness modules and ramping-up the marketing. If you need top-notch content for an effective security awareness and training program, please browse SecAware's virtual shelves and grab yourself a bargain. There's something strangely motivating about sales!

Thursday 26 September 2019

Audit strategies

I recommend treating any audit as a negotiation process with risks and opportunities* for both parties i.e. auditees and auditors. Here's why.

In respect of ISO/IEC 27001 compliance, the certification auditors are supposed to be formally checking that an ISMS complies with the standard’s formal requirements, plus information security requirements that the organization determines for its own purposes**. They are not supposed to conjure-up additional requirements out of thin air, then complain about noncompliance. However, auditors are human and make mistakes. So auditees are fully entitled to ask auditors to identify any requirements in the standard or in their corporate requirements that they say are not being fulfilled, if necessary down to the individual clause numbers and specific words from ‘27001, their policies etc. By all means discuss the wording and intent/meaning of those requirements, as well as reviewing the evidence and details of the alleged noncompliance.

So far, that's conventional, an expected, routine part of the normal interaction between auditor and auditee. From that point, however, the process can proceed along various paths. 

The auditee could take a very hard line, focusing myopically and deliberately on strict compliance with the explicit requirements of the standard, being really tough on the auditors about that … but beware as the auditors can take just as hard a line in response, perhaps even pointing out additional minor noncompliance issues that they might otherwise have ignored. Bringing out the big compliance sticks is a viable but risky strategy. It can be tricky to back down once either party starts down this path. It tends to make the relationship between auditors and auditees highly adversarial and tough-nosed, each party treating the other as the enemy to be beaten. It’s stressful for all concerned, adding to the usual stresses of audits and certification. [Speaking as a former/reformed auditor, this may be a sign of either a naïve/scared or, paradoxically, a highly experienced/assertive auditee. Identifying and responding proactively to the situation as it develops is part of the auditor’s social skill set, which varies with the auditor’s experience level plus their own personality. If things escalate, it draws-in management on both sides, so each party really needs their management behind them. It’s also something that experienced auditors will have dealt-with many times (stress and challenge is very much part of the job), hence they tend to be well-practiced at it and on the front-foot, whereas auditees tend to be less well prepared and on the back-foot.]

Alternatively, the auditee could make more of an effort to understand and deal with the issues the auditor claims to have found, setting aside the pure compliance aspects (at least for now). Discuss and negotiate with the auditors, aiming towards finding mutually-acceptable solutions. Be “reasonable” about things (whatever that means!). Consider the business implications of what the auditors are saying, in particular consider whether they might just have put their finger on genuine information risks that the organization probably ought to address in some way. Focus on addressing those risks and reaching agreement on suitable responses, rather than compliance. Make and seek little concessions, respond positively and home-in on a resolution that both moves the business forward and leads to certification. Work with the auditors, each party treating the other as collaborators or colleagues with shared objectives. At the end of the day, either party can still reach for the big compliance stick if the negotiation stalls and the other party becomes stubborn, but that’s best left as a last resort option since it can lead to the same souring of the relationship. [This is generally a less stressful, less risky approach provided both parties are willing to play the game and move things forward. It helps if both parties have negotiation skills, or can get support from their managers/colleagues who do. It may take longer, though, which can be an issue if there are deadlines such as other audits or business demands. And there is inevitably some formality around this that needs to be respected. The auditors must meet their own obligations or risk losing their accreditation.]

But wait, there’s more.

The audit report, in particular the precise phrasing and wording of any adverse findings/noncompliance statements, is potentially another opportunity to clash or collaborate. Although the auditors own their report and have the final say (part of their formal independence), the auditee should have opportunities to review and discuss/respond to drafts, if appropriate challenging and ‘insisting’ that the details are factually correct. In general, the issue comes down to the facts and hence the audit evidence, which should be non-negotiable if the auditor has done a good job. The way those facts are documented, explained and interpreted is where the discussion tends to revolve. Again, both parties have their objectives/requirements, and it’s best if they negotiate a mutually satisfactory outcome and move ahead. Both parties being clear about priorities and overall objectives helps immensely.

And one last thing.

The relationship between auditor and auditee generally extends beyond an individual audit since audits are periodic. As well as the stage 1 and 2 certification audits, there are surveillance and re-certification audits to look forward to. So, the way the audit itself goes, the manner in which issues are raised, discussed and addressed, and the way audit findings and reports are resolved, is all part of the background for, and hence to some extent affects, future audits. Auditors who personally experienced or have been briefed about an intensely adversarial auditee in a previous audit are likely to anticipate a similar strategy and more aggravation on the next audit. Audit management might even consciously pre-select tough auditors who are strong in that situation for future audits, and likewise auditees might choose hard-nosed compliance specialists and negotiators to front-up their team, escalating matters. This can be the sting in the tail for auditors and auditees who have taken an unreasonably hard line in the past: it takes effort on both sides to turn things around and re-focus on more productive matters (namely the organization’s management of its information risks and security in support of business objectives), rather than the audit/certification process itself. 

--------------------

* Experienced negotiators appreciate the game-playing aspect to the typical negotiation process. Clued-up players enter the arena well-prepared, with goals and bottom-lines clarified and various game-playing strategies not just in mind but ideally refined through previous events. Each game plays out within the rules (mostly!), the players attacking and defending, trying various approaches, each pushing towards their own goals and exploiting weaknesses in the other, while gradually establishing and reaching agreement on neutral ground (hopefully!). At the end, the players depart with yet more experience under their belts, ready for another encounter. Every negotiation is a rehearsal for the next. Same thing with audits.

** ISO/IEC 27006:2015 says:

  • "Certification procedures shall focus on establishing that a client’s ISMS meets the requirements specified in ISO/IEC 27001 and the policies and objectives of the client." (clause 9.1.3.2);
  • "The audit objectives shall include the determination of the effectiveness of the management system to ensure that the client, based on the risk assessment, has implemented applicable controls and achieved the established information security objectives." (clause 9.2.1.1);
  • "In addition to evaluating the effective implementation of the ISMS, the objectives of stage 2 are to confirm that the client adheres to its own policies, objectives and procedures." (clause 9.3.1.2.1) ...
... and more. Auditees who are unclear about this, want to develop a sound, proactive strategy in preparation for their audits, or find themselves heading into a battle royale with the auditors, can study '27006 and ISO/IEC 17021-1:2015 (Conformity assessment — Requirements for bodies providing audit and certification of management systems — Part 1: Requirements) for additional insight into the certification audit objectives, process and constraints.

Tuesday 17 September 2019

A fraudulent fraud report?



Our awareness module on digital forensics is coming along nicely. Today, in the course of researching forensics practices within organizations, I came across an interesting report from the Association of Certified Fraud Examiners. As is my wont, I started out by evaluating the validity of the survey on which it is based, and found this:
"The 2018 Report to the Nations is based on the results of the 2017 Global Fraud Survey, an online survey opened to 41,573 Certified Fraud Examiners (CFEs) from July 2017 to October 2017. As part of the survey, respondents were asked to provide a narrative description of the single largest fraud case they had investigated since January 2016. Additionally, after completing the survey the first time, respondents were provided the option to submit information about a second case that they investigated.
Respondents were then presented with 76 questions to answer regarding the particular details of the fraud case, including information about the perpetrator, the victim organization, and the methods of fraud employed, as well as fraud trends in general. (Respondents were not asked to identify the perpetrator or the victim.) We received 7,232 total responses to the survey, 2,690 of which were usable for purposes of this report. The data contained herein is based solely on the information provided in these 2,690 survey responses."
"2018 Report to the Nations", ACFE (2018)
OK, so well over half of the submitted responses - 4,542 of 7,232, roughly 63% - were deemed unusable. That's a lot more rejects than I would normally expect for a survey, and it could be good, bad or indifferent:

  • It's good if they were excluded for legitimate reasons such as being patently incomplete, inaccurate, out of scope or late - like spoiled votes in an election; 
  • It's bad (surprising and disappointing) if they were excluded illegitimately such as because they failed to support or refute some working hypothesis or prejudice;
  • It's indifferent if they were excluded for purely practical reasons e.g. they ran out of time to complete the analysis. Hopefully they used an unbiased sampling technique to trim down the data though. Perhaps the unusable responses were simply lost or corrupted for some reason.

Unfortunately, the reasons for exclusion aren't stated in the report, which to me is an unnecessary and avoidable flaw. We're reduced to guesswork. That they excluded so many responses could for instance indicate that the survey team was unusually cautious, excluding potentially as well as patently dubious submissions. It could be that the survey method was changed for some reason during the survey, and the team decided to exclude responses received before and/or after the chosen method was used (begging further questions about what changed and how they chose the method/s).

The fact that this report comes from the ACFE strongly suggests that both the analytical methods and the team are trustworthy: personal integrity is a fundamental requirement for a professional fraud examiner. Furthermore, they have at least disclosed the number of responses used, and the report provides additional details about the respondents. So, on balance, I'm willing to trust the report: to be clear, I do NOT think it is fraudulent! In fact, with 2,690 responses, the findings carry more weight than most vendor-sponsored "surveys" (advertisements) that I have criticised several times before.

Moving forward, I'm exploring the findings for tidbits relevant to security awareness programs, doing my level best to discount the ridiculous "infographics" they've used in the report - another unnecessary and avoidable source of bias, in my jaundiced opinion. Yes, the way metrics are reported does influence their interpretation and hence value. And no, I don't think it's necessary to resort to gaudy crayons to put key points across. Some of us aren't scared by lists, tables and graphs.

Friday 13 September 2019

ISO/IEC 27001:2013 ambiguities

ISO/IEC 27001 concerns at least* two distinct classes of risk - ISMS risks and information risks** - causing confusion. With hindsight, the ISO/IEC JTC 1 mandate to require a main-body section ambiguously titled "Risks and opportunities" in all the certifiable management system standards was partly to blame for the confusion, although the underlying issue pre-dates that decision: you could say the decision forced the U-boat to the surface.

That is certainly not the only issue with '27001. Confusion around the committee's and the standard's true intent with respect to Annex A remains to this day: some committee members, users and auditors believe Annex A is a definitive if minimalist list of infosec controls, hence the requirement to justify Annex A exclusions ... rather than justify Annex A inclusions. It is strongly implied that Annex A is the default set. In the absence of documented and reasonable statements to the contrary, the Annex A controls are presumed appropriate and necessary ... but the standard’s wording is quite ambiguous, both in the main body clauses and in Annex A itself.

In ISO-speak, the use of ‘shall’ in "Normative" Annex A indicates mandatory requirements; also, main body clause 6.1.3(c) refers to “necessary controls” in Annex A – is that ‘necessary for the organization to mitigate its information risks’ or ‘necessary for compliance with this standard and hence certification’?  Without explanation, it could mean either.

Another issue with '27001 concerns policies: policies are mandated in the main body and recommended in Annex A. I believe the main body is referring to policies concerning the ISMS itself (e.g. a high-level policy or strategy stating that the organization needs an ISMS for business reasons) whereas Annex A concerns lower-level information security-related policies … but again the wording is somewhat ambiguous, hence interpretations vary (and yes, mine may well be wrong!). There are other issues and ambiguities within ISO27k, and more broadly within the field of information risk and security management.

Way down in the weeds of Annex A, “asset register” is an ambiguous term composed of two ambiguous words. 
  • Having tied itself in knots over the meaning of “information asset” for some years, the committee eventually reached a truce by replacing the definition of “information asset” with a curious and unhelpful definition of “asset”: the dictionary does a far better job of it! 
  • In this context, "register" is generally understood to mean some sort of list or database ... but what are the fields and how much granularity is appropriate? Annex A doesn't specify.
But wait, there’s more! The issues extend beyond '27001. The '27006 and '27007 standards are (I think!) intended to distinguish formal compliance audits for certification purposes from audits and reviews of the organization’s information security arrangements for information risk management purposes. Aside from the same issue about the mandatory/optional status of Annex A, there are further ambiguities tucked away in the wording of those standards, not helped by some committee members’ use of the term “technical” to refer to information security controls, leading some to open the massive can-o-worms labelled “cyber”!

Having said all that, we are where we are. The ISO27k standards are published, warts and all. The committee is doing its best both to address such ambiguities and to keep the standards as up-to-date as possible, given the practical constraints of reaching consensus among a fairly diverse global membership using ISO’s regimented and formal processes, and the ongoing evolution of this field. Those ambiguities can be treated as opportunities for both users and auditors to make the best of the standards in various contexts, and in my experience rational negotiation (a ‘full and frank discussion’) will normally resolve any differences of opinion between them. I’d like to think everyone is ultimately aligned on reaching the best possible outcome for the organization, meaning an ISMS that fulfills various business objectives relating to the systematic management of information risks.


* I say ‘at least’ because a typical ISMS touches on other classes of risk too (e.g. compliance risks, business continuity risks, project/programme management risks, privacy risks, health and safety risks, plus general commercial/business risks), depending on how precisely it is scoped and how those risk classes are defined/understood. 

** I’ve been bleating on for years about replacing the term “information security risk”, as currently used but not defined as such in the ISO27k standards, with the simpler and more accurate “information risk”.  To me, that would be a small but significant change of emphasis, reminding all concerned that what we are trying to protect - the asset - is, of course, information. I’m delighted to see more people using “information risk”. One day, maybe we’ll convince SC 27 to go the same way!

Thursday 12 September 2019

Metrics lifecycle management


This week, I'm thinking about management activities throughout the metrics lifecycle.

Most metrics have a finite lifetime. They are conceived, used, hopefully reviewed and maybe changed, and eventually dropped or replaced by something better. 

Presumably weak/bad metrics don't live as long as strong/good ones - at least that's a testable hypothesis provided we have a way to measure and compare the quality of different metrics (oh look, here's one!).

Ideally every stage of a metric's existence is proactively managed (there's a sketch after this list) i.e.:
  • New metrics should arise through a systematic, structured process involving analysis, elaboration and creative thinking on how to satisfy a defined measurement need: that comes first. Often, though, the process is more mysterious. Someone somehow decides that a particular metric will be somewhat useful for an unstated, ill-defined and barely understood purpose;
  • Potential metrics should be evaluated, refined, and perhaps piloted before going ahead with their implementation. There are often many different ways to measure something, with loads of variations in how they are analyzed and presented, hence it takes time and effort to rationalize metrics down to a workable shortlist leading to final selection. This step should take into account the way that new or changed metrics will complement and support or replace others, taking a 'measurement system' view. Usually, however, this step is either skipped entirely or treated superficially. In my jaundiced opinion, this is the second most egregious failure in metrics management, after the previous lack of specification;
  • Various automated and manual measurement activities operate routinely during the working life of a metric. These ought to be specified, designed, documented, monitored, controlled and directed (in other words managed) in the conventional manner but rarely are. No big deal in the case of run-of-the-mill metrics which are simple, self-evident and of little consequence, but potentially a major issue (an information risk, no less) for "key" metrics supporting vital decisions with significant implications for the organization;
  • The value of a metric should be monitored and periodically reviewed and evaluated in terms of its utility, cost-effectiveness etc. That in turn may lead to adjustments, perhaps fine-tuning the metric or else a more substantial change such as supplementing or dropping it. More often (in my experience) nobody takes much interest in a metric until/unless something patently fails. I have yet to come across any organization undertaking 'preventive maintenance' on its information risk and security metrics, or for that matter any metrics whatsoever - at least, not explicitly and openly. 
  • If a metric is to be dropped (retired, stopped), that decision should be made by relevant management (the metric's owner/s especially), taking account of the effect on management information and any decision-making that previously relied upon it ... which implies knowing what those effects are likely to be. In practice, many metrics circulate without anyone being clear about who owns or uses them, how and what for. It's a mess.
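
For illustration only, here's that lifecycle sketched as a toy Python state machine. The stages, fields and transitions are invented to make the point, not prescribed:

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class Stage(Enum):
        PROPOSED = auto()      # arose from a defined measurement need
        PILOTED = auto()       # evaluated and refined before rollout
        OPERATIONAL = auto()   # routinely measured, managed and reported
        UNDER_REVIEW = auto()  # periodic check of utility and cost
        RETIRED = auto()       # deliberately dropped by its owner

    # The lifecycle transitions this (hypothetical) process permits:
    ALLOWED = {
        Stage.PROPOSED: {Stage.PILOTED, Stage.RETIRED},
        Stage.PILOTED: {Stage.OPERATIONAL, Stage.RETIRED},
        Stage.OPERATIONAL: {Stage.UNDER_REVIEW},
        Stage.UNDER_REVIEW: {Stage.OPERATIONAL, Stage.RETIRED},
        Stage.RETIRED: set(),
    }

    @dataclass
    class Metric:
        name: str
        owner: str     # no orphan metrics!
        purpose: str   # the measurement need the metric satisfies
        stage: Stage = Stage.PROPOSED
        history: list = field(default_factory=list)

        def advance(self, new_stage: Stage, rationale: str) -> None:
            """Move the metric through its lifecycle, recording why."""
            if new_stage not in ALLOWED[self.stage]:
                raise ValueError(f"{self.stage.name} -> {new_stage.name} not allowed")
            self.history.append((self.stage, new_stage, rationale))
            self.stage = new_stage

    m = Metric("mean time to patch", "CISO", "track remediation speed")
    m.advance(Stage.PILOTED, "trial in the server team for a quarter")
    m.advance(Stage.OPERATIONAL, "pilot showed the data is cheap and reliable")

The point of the sketch is simply that every transition has an owner and a recorded rationale - the proactive management that is so often missing in practice.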
Come on, this is hardly rocket surgery. Information risk and security metrics are relatively recent additions to the metrics portfolio, so managing metrics through their lifecycle is not even a novel issue, and yet I feel like I'm breaking new ground here. Oh oh.

I should probably research fields with mature metrics, such as finance and engineering, for clues about good metrics management practices that may be valuable in the information risk and security field.

Wednesday 11 September 2019

What it means to be risk-driven

Since ISO27k is [information] risk-driven, poor quality risk management is a practical as well as a theoretical problem. 

In practical terms, misunderstanding the nature of [information] risk, particularly the ‘vulnerability’ aspect, leads to errors and omissions in the identification, analysis and hence treatment of [information] risks. The most common issue I see is people equating ‘lack of a control’ with ‘vulnerability’. To me, the presence or absence of a control is quite distinct from the vulnerability, in that vulnerability is an inherent weakness or flaw in something e.g. an IT system, an app, a process, a relationship, contract or whatever. Even controls have vulnerabilities, yet we tend to neglect the fact that controls aren’t perfect: they can and do fail in practice, with several information risk management implications. 

Think about it: when was the last time you seriously considered the possibility that a control might fail? Did you identify, evaluate and treat that secondary risk, in a systematic and formal manner … or did you simply get on with things informally? Have you ever done a risk analysis on your “key controls”? 

Do you actually know which of your organization’s controls are “key”, and why? 

That's a bigger ask than you may think. Try it and you'll soon find out, especially if you ask your colleagues for their inputs.

In theoretical terms, risk is all about possibilities and uncertainties i.e. probability. Using simplified models with defined values, it may be technically possible to calculate a precise probability for a given situation under laboratory conditions, but that doesn’t work so well in the real world which is more complex and variable, involving factors that are partially unknown and uncontrolled. We have the capability to model groups of events, populations of threat actors, types of incident etc. but accurately predicting specific events and individual items is much harder, verging on impossible in practice. So even extremely careful, painstaking risk analysis still doesn’t generate absolute certainty. It reduces the problem space to a smaller area (which is good!), but not to a pinpoint dot (such precision that we would know what we are dealing with, hence we can do precisely the right things). What’s more, ‘extremely careful’ and ‘painstaking’ implies slow and costly, hence the approach is generally infeasible for the kinds of real-world situations that concern us. Our risk management resources are finite, while the problem space is large and unbounded. The sky is awash with risk clouds, and they are all moving!
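
By way of illustration, here's a toy Monte Carlo sketch in Python, with entirely made-up parameters, showing how risk modelling yields a distribution of plausible outcomes rather than a certainty:

    import random

    def simulate_annual_losses(p_incident: float, loss_low: float,
                               loss_high: float, trials: int = 100_000) -> list:
        """Crude Monte Carlo: each simulated year an incident either happens
        or not; if it does, the loss is drawn from a (made-up) uniform range."""
        losses = []
        for _ in range(trials):
            hit = random.random() < p_incident
            losses.append(random.uniform(loss_low, loss_high) if hit else 0.0)
        return losses

    random.seed(42)  # reproducible illustration
    losses = sorted(simulate_annual_losses(0.3, 10_000, 500_000))
    mean = sum(losses) / len(losses)
    p95 = losses[int(0.95 * len(losses))]
    print(f"mean annual loss ~ {mean:,.0f}; 95th percentile ~ {p95:,.0f}")

    # The result is a distribution of plausible outcomes - a smaller area,
    # not a pinpoint dot - useful for prioritization, useless for
    # foretelling any specific incident.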

Complicating things still further, we are generally talking about ‘systems’ involving human beings (individuals and organizations, teams, gangs, cabals and so on), not [just] robots and deterministic machines. Worse, some of the humans are actively looking to find and exploit vulnerabilities, to break or bypass our lovely controls, to increase rather than decrease our risk. The real-world environment or situation within which information risks exist is not just inherently uncertain but, in part, hostile. 

So, in the face of all that complexity, there is obviously a desire/need to simplify things, to take short cuts, to make assumptions and guesses, to do the best we can with the information, time, tools and other resources at our disposal. We are forced to deal with priorities and pressures, some self-imposed and some imposed upon us. ISO27k attempts to deal with that by offering ‘good practices’ and ‘suggested controls’. One of the ‘good practices’ is to identify, evaluate and treat [information] risks systematically within the real-world context of an organization that has business objectives, priorities and constraints. We do the best we can, measure how well we’re doing, and seek to improve over time.

At the same time, despite the flaws, I believe risk management is better than specified lists of controls. The idea of a [cut down] list of information security controls for SMEs is not new e.g. “key controls” were specifically identified with little key icons in the initial version of BS7799 I think, or possibly the code of practice that preceded it. That approach was soon dropped because what is key to one organization may not be key to another, so instead today’s ISO27k standards promote the idea of each organization managing its own [information] risks. The same concerns apply to other lists of ‘recommended’ controls such as those produced by CIS, SANS, CSA and others, plus those required by PCI-DSS, privacy laws and other laws, regs and rulesets including various contracts and agreements. They are all (including ISO27k) well-meaning but inherently flawed. Better than nothing, but imperfect. Good practice, not best practice.

The difference is that ISO27k provides a sound governance framework to address the flaws systematically. It’s context-dependent, an adaptive rather than fixed model. I value that flexibility.

Friday 6 September 2019

The CIA triad revisited

I've swapped a couple of emails this week with a colleague concerning the principles and axioms behind information risk and security, including the infamous CIA triad.

According to some, information security is all about ensuring the Confidentiality, Integrity and Availability of information ... but for others, CIA is not enough, too simplistic maybe.


If we ensure the CIA of information, does that mean it is secure?


Towards the end of the last century, Donn Parker proposed a hexad, extending the CIA triad with three (or is it four?) further concepts, namely:
  • Possession or control;
  • Authenticity; and 
  • Utility. 
An example illustrating Donn's 'possession or control' concept/s would be a policeman seizing someone's computer device intending to search it for forensic evidence, then finding that the data are strongly encrypted. The police physically possess the data but, without the decryption key, are denied access to the information. So far, that's simply a case of the owner using encryption to prevent access and so prevent availability of the information to the police, thereby keeping it confidential. However, the police might yet succeed in guessing or brute-forcing the key, or exploiting a vulnerability in the encryption system (a technical integrity failure), hence the owner is currently less assured of its confidentiality than if the police did not possess the device. Assurance is another aspect of integrity.
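
For the curious, that 'possession without availability' situation is easy to demonstrate. Here's a minimal sketch using the third-party Python cryptography package (the plaintext is, of course, invented):

    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet, InvalidToken

    owner_key = Fernet.generate_key()
    seized = Fernet(owner_key).encrypt(b"incriminating notes")

    # The 'police' now fully possess the ciphertext...
    wrong_key = Fernet.generate_key()
    try:
        Fernet(wrong_key).decrypt(seized)
    except InvalidToken:
        print("possessed, but not available: decryption failed")

    # ...whereas with the owner's key, confidentiality evaporates:
    print(Fernet(owner_key).decrypt(seized))  # b'incriminating notes'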

Another example concerns intellectual property: although I own and have full access to a physical book, I do not generally have full rights over the information printed within. I possess the physical expression, the storage medium, but don't have full control over the intangible intellectual property. The information is not confidential, but its availability is limited by legal and ethical controls, which I uphold because I have strong personal integrity. QED

Personally, I feel that Donn's 'authenticity' is simply an integrity property. It is one of many terms I've listed below. If something is authentic, it is true, genuine, trustworthy and not a fake or counterfeit. It can be assuredly linked to its source. These aspects all relate directly to integrity.

Similarly, Donn's 'utility' property is so close as to be practically indistinguishable from availability. In the evidence seizure example, the police currently possess the encrypted data but, lacking the key or the tools and ability to decrypt it, cannot use it: the information remains unavailable. There are differences between the data physically stored on the storage medium and the intangible information content, sure, but I don't consider 'utility' a distinct or even useful property.

Overall, the Parkerian hexad is an interesting perspective, a worthwhile challenge that doesn't quite make the grade, for me. That it takes very specific, carefully-worded, somewhat unrealistic scenarios to illustrate and explain the 3 additional concepts, scenarios that can be readily rephrased in CIA terms, implies that the original triad is adequate. Sorry Donn, no cigar.

In its definition of information security, ISO/IEC 27000 lays out the CIA triad then notes that "In addition, other properties, such as authenticity, accountability, non-repudiation, and reliability can also be involved". As far as I'm concerned, authenticity, accountability and non-repudiation are all straightforward integrity issues (e.g. repudiation breaks the integrity of a contract, agreement, transaction, obligation or commitment), while reliability is a mix of availability and integrity. So there's no need to mention them, or imply that they are somehow more special than all the other concepts that could have been called out but aren't even mentioned ....

Integrity is a fascinatingly rich and complex concept, given that it has a bearing on aspects such as:
  • Trust and trustworthiness;
  • Dependability, reliability, confidence, 'true grit' and determination; 
  • Honesty, truthfulness, openness; 
  • Authenticity, cheating, fraud, fakery, deception, concealment …; 
  • Accuracy and precision, plus corruption and so forth; 
  • Timeliness, topicality, relevance and change; 
  • Rules and obligations, prescriptions, expectations and desires, as well as limitations and constraints; 
  • Certainty and doubt, risk, probability and consequences; 
  • Accidents, mistakes, misinterpretations and misunderstandings; 
  • Compliance and assurance, checks and balances; 
  • Consistency, verifiability, provability and disprovability, proof, evidence and fact - including non-repudiation; 
  • Social and cultural norms, conventions and ‘understandings’; 
  • Personal/individual values, ethics and morals, plus social or societal aspects such as culture and group-think; 
  • Enforcement (through penalties) and reinforcement (through awareness and encouragement) of obligations, rules, expectations etc.; 
  • Reputation, image and credibility - very important and valuable in the case of brands, for instance. 
Confidentiality is pretty straightforward, although sometimes confused with privacy.  Privacy partially overlaps confidentiality but goes further into aspects such as modesty and personal choice, such as a person's right to control disclosure and use of information about themselves.

Availability is another straightforward term with an interesting wrinkle. Securing information is as much about ensuring the continued availability of information for legitimate purposes as it is about restricting or preventing its availability to others. It's all too easy to over-do the security controls, locking down information so far that it is no longer accessible and exploitable for authorized and appropriate uses, thereby devaluing it. Naive, misguided attempts to eliminate information risk tend to end up in this sorry state. "Congratulations! You have secured my information so strongly that it's now useless. What a pointless exercise! Clear your desk: you're fired!"

Summing up, the CIA triad is a very simple and elegant expression of a rather complex and diffuse cloud of related issues and aspects. It has stood the test of time. It remains relevant and useful today. I commend it to the house.

Thursday 5 September 2019

Right to repair vs IPR

This week I've been contemplating the right to repair movement, promoting the idea that consumers and third parties (such as repair shops) should not be legally denied the right to meddle with the stuff they have bought - to diagnose, repair and update it - without being forced to go back to the original manufacturer (a monopolistic constraint) or throw it away and buy a replacement (eco-unfriendly).

Along similar lines, I am leaning towards the idea that products generally ought to be repairable and modifiable rather than disposable. That is, they should be designed with ‘repairability’ as a requirement, as well as safety, functionality, standards compliance, value, reliability and what have you. I appreciate that miniaturization, surface mounting, multi-layer PCBs, flow soldering and robotic parts placement make modern day electronic gizmos small and cheap as well as tough to repair, but obsolescence shouldn’t be built-in, deliberately, by default. Gizmos can still have test points, self-testing and diagnostics, replaceable modules, diagrams, fault-finding instructions and spare parts.

The same consideration applies, by the way, to proprietary software and firmware, not just hardware. Clearly documented source code, with debugging facilities, 'instrumentation' and so on, should be available for legitimate purposes - checking and updating the information security aspects for instance.

On the other hand, there are valuable Intellectual Property Rights to protect, and in some cases 'security by obscurity' is a valid - though fragile - control. 

Perhaps it is appropriate that monopolistic companies churning out disposable, over-priced products to a captive market should consider their intellectual property equally disposable. Perhaps not. Actually I think not because I believe the concept of IPR as a whole trumps the greed of certain tech companies. 

The real problem with IPR, as I see it, is China, or more specifically the Chinese government's apparent disregard for international law ... and I guess the Chinese have a vested interest in disposability. So that's a dead end then.

Wednesday 4 September 2019

Intelligent response

Among other things, the awareness seminars in the SecAware awareness module on hacking make the point that black hats are cunning, competent and determined adversaries for the white hats. In risk terms, hacking-related threats, vulnerabilities and impacts are numerous and (in some cases) substantial - a distinctly challenging combination. As if that's not enough, security controls can only reduce rather than completely eliminate the risk, so despite our best efforts, there's an element of inevitability about suffering harmful hacking-related incidents. It's not a matter of 'if' but 'when'.

All very depressing.

However, all is not lost. For starters, mitigation is not the only viable risk treatment option: some hacking-related risks can be avoided, while insurance plus incident and business continuity management can reduce the chances of things spiraling out of control and becoming critical, in some cases literally fatal.

Another approach is not just to be good at identifying and responding effectively to incidents, but to appear strong and responsive. So, if assorted alarms are properly configured and set, black hat activities that ought to trigger them should elicit timely and appropriate responses ... oh but hang on a second. The obvious, direct response is not necessarily appropriate or the best choice: it depends (= is contingent) on circumstances, implying another level of information security maturity.

'Intelligent response' is a difficult area to research since those practicing it are unlikely to disclose all the details, for obvious reasons. We catch little glimpses of it in action from time to time, such as bank fraud systems blocking 'suspicious' transactions in real time (impressive stuff, given the size and number of the haystacks in which they are hunting down needles!). We've all had trouble convincing various automated CAPTCHAs that we are, in fact, human: there the obvious response is the requirement to take another test, but what else is going on behind the scenes at that point? Are we suddenly being watched and checked more carefully than normal? Can we expect an insistent knock at the door any moment? 

In the spirit of the quotation seen on the poster thumbnail above, I'm hinting at deliberately playing on the black hats' natural paranoia. They know they are doing wrong, and (to some extent) fear being caught in the act, all the more so in the case of serious incidents, the ones that we find hardest to guard against. Black hats face information risks too, some of which are definitely exploitable - otherwise, they would never end up being prosecuted or even blown to smithereens. That means they have to be cautious and alert, so a well-timed warning might be all it takes to stop them in their tracks, perhaps diverting them to a softer target.

Network intrusion detection and prevention systems are another example of this kind of control. Way back when I was a nipper, crude first-generation firewalls simply blocked or dropped malicious network packets. Soon after, stateful firewalls came along that were able to track linked sequences of packets, dealing with fragmented packets, out-of-sequence packets and so on (there's a toy sketch of the distinction below). Things have moved on a long way in the intervening decades so I wonder just how sophisticated and effective today's artificial intelligence-based network and system security systems really are, in practice, for those who can afford them anyway. Do they have 'unpredictability' options with 'randomness' or 'paranoia' settings? Do they play little mind games with hackers?
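
As a crude illustration of that stateless/stateful distinction, here's a toy Python sketch. The addresses, flags and rules are invented, and real firewalls are vastly more sophisticated:

    # Toy contrast between stateless and stateful filtering. Packets are
    # (src, dst, flag) tuples; the flags loosely mimic TCP.

    BLOCKED_SOURCES = {"198.51.100.7"}  # an example stateless deny-list

    def stateless_allow(packet) -> bool:
        """First-generation style: judge each packet in isolation."""
        src, _dst, _flag = packet
        return src not in BLOCKED_SOURCES

    class StatefulFilter:
        """Track connections: only pass data packets belonging to sessions
        that were opened (SYN) through the filter in the first place."""
        def __init__(self):
            self.sessions = set()

        def allow(self, packet) -> bool:
            src, dst, flag = packet
            if not stateless_allow(packet):
                return False
            if flag == "SYN":                    # connection attempt
                self.sessions.add((src, dst))
                return True
            if flag == "FIN":                    # connection teardown
                self.sessions.discard((src, dst))
                return True
            return (src, dst) in self.sessions   # data needs a live session

    fw = StatefulFilter()
    print(fw.allow(("203.0.113.5", "10.0.0.1", "DATA")))  # False: no session yet
    print(fw.allow(("203.0.113.5", "10.0.0.1", "SYN")))   # True: opens a session
    print(fw.allow(("203.0.113.5", "10.0.0.1", "DATA")))  # True: session exists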

Tuesday 3 September 2019

Principles, axioms and policies

ISO/IEC 27001:2013 section 5.2 is normally interpreted as requiring the top layer of the classical ‘policy pyramid’. 

As with all the main body text in ‘27001, the wording of clause 5.2 is largely determined by:
(a) ISO/IEC JTC 1 insisting on commonality between all the management systems standards, hence you’ll find much the same mandated wording in ISO 9000 and the others; and
(b) the need to spell out reasonably explicit, unambiguous ‘requirements’ against which certification auditors can objectively assess conformity.

Personally, when reading and interpreting clause 5.2, I have in mind something closer to “strategy” than what information security pros would normally call “policy” - in other words a visionary grand plan for information risk and security that aligns with, supports and enables the achievement of the organization’s overall business objectives. That business drive is crucial and yet is too often overlooked by those implementing Information Security Management Systems, partly because '27001 doesn't really explain it. The phrase "internal and external context" is not exactly crystal clear ... but that's what the JTC 1 directive demands.

In our generic (model, template) corporate information security policy, we lay out a set of principles and axioms for information risk and security such as:
Principle 1. Our Information Security Management System conforms to generally accepted good security practices as described in the ISO/IEC 27000-series information security standards.
Principle 2.   Information is a valuable business asset that must be protected against inappropriate activities or harm, yet exploited appropriately for the benefit of the organization.  This includes our own information and that made available to us or placed in our care by third parties.
... and ...
Axiom 1: This policy establishes a comprehensive approach to managing information security risks.  Its purpose is to communicate management’s position on the protection of information assets and to promote the consistent application of appropriate information security controls throughout the organization.  [A.5.1]

Axiom 2: An Information Security Management System is necessary to direct, monitor and control the implementation, operation and management of information security as a whole within the organization, in accordance with the policies and other requirements.  [A.6.1]
As you might have guessed from those [A. …] references, the axioms are based on the controls in Annex A of ISO/IEC 27001:2013. We have simply rephrased the control objectives from ISO/IEC 27002:2013 to suit the style of a corporate policy, such that the policy is strongly linked to and aligned with ISO27k. Those reading and implementing the policy are encouraged to refer to the ISO27k standards for further details and explanation if needed. 

There is a downside to this approach however since there are 35 axioms to lay out, making the whole generic policy 5½ pages long. I'd be happier with half that length. Customers may not need all 35 axioms and might review and maybe reword, revise and combine them, hopefully without adding yet more. That's something I plan to have a go at when the generic policy is next revised.

The principles take things up closer to strategy. This could be seen as a governance layer, hence our first principle concerns structuring the ISMS around ISO27k. It could equally have referred to NIST's Cyber Security Framework, COBIT, BMIS or whatever: the point is to make use of one or more generally accepted standards, adapting them to suit the organization's needs rather than reinventing the wheel.

I find the concept of information risk and security principles fascinating. There are in fact several different sets of principles Out There, often incomplete and imprecisely stated, sometimes only vaguely implied. Different authors take different perspectives to emphasize different aspects, hence it was an interesting exercise to find and elaborate on a succinct, coherent, comprehensive set of generally-applicable principles. I'm pleased to have settled on just 7 principles, and these too will be reviewed at some point, partly because the field is moving on. 

Meanwhile, further down the policy pyramid, a set of classical security policies covers a wide range of topics in more detail, supporting and expanding on those high-level axioms in the overall context of the principles. '27001 refers to such policies in A.5.1.1:
"A set of policies for information security shall be defined, approved by management, published and communicated to employees and relevant external parties."
ISO/IEC 27002 section 5 expands on that succinct guidance with more than a page of advice. ISO/IEC 27003 is not terribly helpful in respect of the topic-specific policies but does a reasonable job of explaining how the high level/corporate security policy aligns with business objectives.