Tuesday 31 July 2018

Insider threats awareness module published

For August, the spotlight turns to the threat from within the organization: insiders.

“Insider threats” may be a common term but it's technically incorrect. “Insider risks” is more accurate since there is more to this than just the threats posed by insiders. Our awareness materials explore the vulnerabilities and impacts too.

“Insiders” in this context are primarily employees - both staff and management - of the organization, those on its payroll. “Outsiders”, then, are third-party employees (particularly those working for competitors or other adversaries) and unemployed people – a much larger group of course. In the government/military context, ‘foreigners’ (citizens of other nations and cultures, regardless of where they live) are generally considered outsiders too: we’ll have more to say about outsider threats in September’s awareness materials.

Both August and September's modules cover the overlap between insiders and outsiders - the no-man's-land inhabited by contractors, temps, interns and the like, plus assorted consultants, professional advisers and maintenance engineers who have 'gone native'. They pose threats too, their divided loyalties leaving them facing a hail of bullets from all sides.

Ignore them at your peril. Recall that Ed Snowden was a defense contractor working in a privileged position within the NSA. Insider or outsider is a moot point: the damage was immense. The risk is obvious ... once you think about it.

August’s awareness module is designed to:
  • Introduce insider threats, providing general context and background information (e.g. who are those threatening insiders, and in what sense do they threaten?);
  • Expand on the information risks (threats, vulnerabilities and impacts) arising from and involving insiders, particularly for the management and professional audiences;
  • Describe and promote the corresponding information security controls, which are numerous and varied (policies, procedures, practices, technologies …);
  • Leave everyone with the lasting impression that insider threats are real, antisocial and unacceptable.
So what about your awareness and learning objectives in relation to insider threats, or information risks involving workers? Are there any business angles or concerns you’d like to emphasize in your awareness program? Any insider issues your organization has resolved, or for that matter is still struggling to address?

Oh, hang on a moment, does “insider threats” feature as a topic in your awareness and training schedule? Do you even have a schedule, a rolling sequence of hot topics delivered continuously throughout the year? Oh. OK then. 

Thursday 26 July 2018

Cyber, again

Something on the Just Security law blog caught my attention today:
"For a growing number of states, cyber operations are now firmly ensconced as a means of conducting traditional and not-so-traditional statecraft, to include conflict. Cyberspace has delivered tremendous benefits, but its unique construct and ubiquity have also created significant national security vulnerabilities, generating unprecedented challenges to the existing framework of international peace and security. One need look no further than North Korea’s destructive and subversive actions against Sony Pictures, its launch of the Wannacry ransomware, Russia’s launch of the indiscriminate NotPetya malware against the Ukraine, or its cyber-enabled covert influence campaigns against the U.S. and other western democracies to realize that cyber capabilities are increasingly part of a powerful arsenal states are using to pursue their interests, oftentimes through aggressive actions aimed at disrupting the status quo. As the recently released Command Vision for US Cyber Command recognizes, the emerging cyber-threat landscape is marked by adversary states engaging in sustained, well-constructed campaigns to challenge and weaken western democracies through actions designed to hover below the threshold of armed conflict while still achieving strategic effect. And as the Cyber Command Vision also makes clear, passive, internal cyber security responses have proved inadequate, ceding strategic initiative and rewarding bad behavior."
I've argued for years that most people (including many journalists and far too many so-called cybersecurity professionals) interpret "cybersecurity" rather differently to how it is being used in the government/military context. Everyday Internet security is part of the problem space, but only a small part. Ordinary controls such as firewalls and antivirus are woefully inadequate defences against the "powerful arsenals" being developed and deployed by "adversary states". Those "unprecedented challenges" are not going to be met with off-the-shelf security solutions - just as wet cardboard is not much use as a bulletproof vest.

One of the lessons in next month's awareness module on insider threats is that everyday controls are inadequate against high-end threats involving committed and resourceful adversaries - and yet it makes sense to start with those everyday controls, both to knock back the everyday issues and to provide a platform for the more advanced stuff. The cases we'll be using illustrate the range of insider threats nicely, from casual expenses fraud to espionage.

In discussing the more severe end of the scale, I'm conscious of the risk of alienating the most naive parts of the audience ... and yet if we don't make the effort to open their eyes to what's going on, they will remain oblivious. Actual incidents reported by the news media are a good way to demonstrate that we are not entirely paranoid. Headline stories catch their attention: all we need to do is explain what's behind the headline. Easy, when you know how.

Friday 20 July 2018

ISO/IEC 27002 and 27031 revisions

Today I've put (invested?) some time and brainwaves into the ISO27k standards.

First, ISO/IEC 27002 is currently being revised. The revision involves completely restructuring the controls described in the current standard into 4 "themes": 
  1. Organizational; 
  2. People; 
  3. Physical; and 
  4. Technical. 
Clearly those are not truly orthogonal or distinct categories - organizational controls, for example, are quite likely to involve people (e.g. policies and procedures), physical (e.g. physical access) and/or technical (e.g. IT) aspects. Some security controls could fit into any of those categories, so the choice is somewhat arbitrary. However, the categorization doesn't matter much: it is really just a convenient ordering for the standard, especially as the controls are also going to be 'tagged' with other attributes (roughly sketched in code after this list) such as:
  • "Information security properties" i.e. confidentiality, integrity or availability (the classic CIA triad - not Donn Parker's hexad, I note);
  • "Control type" i.e. preventive, detective or reactive (reflecting the time relative to the occurrence of events or incidents that the control acts);
  • "NIST cyber security framework classifications" i.e. identify; protect; detect; respond; recover (notice that is an extension of the "Control types" tagset);
  • "Information security management life cycle" i.e. creation; distribution; transmission; access; retrieval; storage; use; preservation; control of change; disposal (despite the title, these tags appear to relate to the lifecycle of data not of 'information security management' which would, in fact, be some sort of process maturity sequence);
  • Other tagsets, yet to be determined.
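To make the tagging idea a little more concrete, here is a rough Python sketch of how a control catalogue might be sliced by theme and attribute. The theme and tagset names follow the draft as summarised above; the control entries, field names and values are my own illustrative inventions, not the standard's.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """Illustrative model of a draft-27002-style control with attribute tags."""
    name: str
    theme: str                                       # Organizational / People / Physical / Technical
    properties: set = field(default_factory=set)     # confidentiality, integrity, availability
    control_type: set = field(default_factory=set)   # preventive, detective, reactive
    csf: set = field(default_factory=set)            # identify, protect, detect, respond, recover

# A few made-up entries, purely to show how the tags might be used
catalogue = [
    Control("Information security policies", "Organizational",
            {"confidentiality", "integrity", "availability"}, {"preventive"}, {"identify"}),
    Control("Malware protection", "Technical",
            {"integrity", "availability"}, {"preventive", "detective"}, {"protect", "detect"}),
    Control("Physical entry controls", "Physical",
            {"confidentiality"}, {"preventive"}, {"protect"}),
]

# The point of the tags: slice the catalogue any way you like,
# e.g. all detective controls relevant to the NIST CSF "detect" function
detective = [c.name for c in catalogue
             if "detective" in c.control_type and "detect" in c.csf]
print(detective)   # ['Malware protection']
```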
Anyway, leaving all that aside, we have our chance right now to consider and contribute to the revision of the infosec controls that are currently in 27002, perhaps culling those that are no longer worthy of inclusion and adding others that are, as well as rewording the existing set. What fun! I've spent a few hours today thinking and commenting.

I've also glanced through the 4th working draft revision of ISO/IEC 27031 on "Information technology – Cybersecurity – Information and communication technology readiness for business continuity". 

Golly, another ISO27k standard that fails to define that word "cybersecurity". It's in the title, so we are supposed to just know, I guess. Or guess, I know.

Given that ISO 22301 does such a good job on business continuity, I honestly don't see much point to this ICT-focused standard. If it is to remain a part of ISO27k, it at least ought to be properly aligned with ISO 22301, and ideally extended beyond the ICT domain since ISO27k is about information risk and security, not just ICT.

Although this standard vaguely mentions resilience to, as well as recovery from, disastrous situations, the coverage of resilience is distinctly light, perhaps because of the definition: 
“Resilience: ability to transform, renew, and recover, in timely response to events”.
That’s plain weird! Resilience is an ordinary, common-or-garden English word, meaning that any half-decent dictionary is likely to have a perfectly serviceable definition, including the Oxford English Dictionary that is supposed to be the default reference for all standard terms in ISO27k. I don't have the OED to hand but I would be gobsmacked if it didn't mention another meaning relating to elasticity - the ability of things under stress to bend without breaking.

If SC27 insists on defining the word, I suggest that resilience in the information risk and security context generally concerns the latter meaning. It’s about toughness and determination: keeping the essential core business activities (plus the supporting/enabling information processes, applications, systems, networks, data flows, services etc.) going despite and through adversity. Resilience controls include widely-applicable and sound engineering concepts such as redundancy, robustness and flexibility, ensuring that vital business operations are not materially degraded or halted by incidents - they keep right on running, albeit often at somewhat reduced performance or capacity. In this day and age, high-availability 24x7 systems and networks are hardly radical, but SC27 just doesn’t seem to get it. Is it really that hard?
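To ground that in something concrete, here is a trivial sketch of redundancy as a resilience control: a caller fails over across redundant service endpoints so the operation keeps running, perhaps a little degraded, despite an incident. The endpoints and the fetch function are invented purely for illustration.

```python
# Invented stand-in for a real service call: pretend the primary node is down.
def fetch(endpoint: str) -> str:
    if endpoint == "primary.example.internal":
        raise ConnectionError(f"{endpoint} unreachable")
    return f"response from {endpoint}"

# Redundancy as a resilience control: several interchangeable endpoints,
# tried in turn, so a single failure degrades rather than halts the operation.
ENDPOINTS = ["primary.example.internal",
             "secondary.example.internal",
             "tertiary.example.internal"]

def resilient_fetch() -> str:
    errors = []
    for endpoint in ENDPOINTS:
        try:
            return fetch(endpoint)
        except ConnectionError as err:
            errors.append(str(err))   # note the failure and carry on to the next redundant node
    raise RuntimeError("all redundant endpoints failed: " + "; ".join(errors))

print(resilient_fetch())   # keeps running via the secondary, albeit a little more slowly
```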

So, there we go, the day is history and another working week draws to a close.

Friday 13 July 2018

ISO/IEC 27001 Annex A status

I've just completed an internal audit of an ISO27k ISMS for a client. By coincidence, a thread on ISO27k Forum this morning brought up an issue I encountered on the audit, and reminded me of a point that has been outstanding for several years now.

The issue concerns the formal status of ISO/IEC 27001:2013 Annex A arising from ambiguities or conflicts in the main body wording and in the annex. 

Is Annex A advisory or mandatory? Are the controls listed in Annex A required by default, or optional, simply to be considered or taken into account?

The standard is distinctly ambiguous on this point; in fact there are direct conflicts within the wording - not good for a formal specification against which organizations are being audited and certified compliant.

Specifically, main body clause 6.1.3 Information security risk treatment clearly states as a note that "Organizations can design controls as required, or identify them from any source." ... which means they are not required to use Annex A.

So far so good .... however, the very next line of the standard requires them to "compare the controls determined in 6.1.3 b) above with those in Annex A and verify that no necessary controls have been omitted". This, to me, is a badly-worded suggestion to use Annex A as a checklist. Some readers may interpret it to mean that, by default, all the Annex A controls are "necessary", but (as I understand the position) that was not the intent of SC 27. Rather, "necessary" here refers to the organization's decision to treat some information risks by mitigating them using specific controls, or not. If the organization chooses to use certain controls, those controls are "necessary" for the organization, not mandatory for compliance with the standard.
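Put in code terms, my reading of that clause is a simple gap-check rather than a compliance gate - something along these lines (the control identifiers below are placeholders, not real Annex A references):

```python
# My interpretation of the 6.1.3 comparison: Annex A is a checklist used to
# spot possible omissions, which you then either adopt or justify excluding.
# The identifiers are illustrative placeholders, not real Annex A controls.

annex_a = {"A.x.1 Policies", "A.x.2 Asset inventory", "A.x.3 Malware protection"}
our_controls = {"A.x.1 Policies", "Custom DLP control from another framework"}

possibly_omitted = annex_a - our_controls
for control in sorted(possibly_omitted):
    # Not automatically "necessary": consider each one, then either adopt it
    # or record the justification for excluding it in the SoA.
    print(f"Consider: {control} -> adopt, or justify exclusion")
```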

To make matters worse still, a further note describes Annex A as "a comprehensive list of control objectives and controls" - a patently false assertion. No list of control objectives and controls can possibly be comprehensive, since the set is unbounded. For starters, someone might invent a novel security control today, one that is not listed in the standard because it didn't exist when the standard was published. There is also a near-infinite variety of controls, including variants and combinations: it is literally impossible to identify them all, hence "comprehensive" is wrong.

The standard continues, further muddying the waters: "Control objectives are implicitly included in the controls chosen. The control objectives and controls listed in Annex A are not exhaustive and additional control objectives and controls may be needed." This directly contradicts the previous use of "comprehensive".

As if that's not bad enough already, the standard's description of the Statement of Applicability yet again confuses matters. "d) produce a Statement of Applicability that contains the necessary controls (see 6.1.3 b) and c)) and justification for inclusions, whether they are implemented or not, and the justification for exclusions of controls from Annex A". So, despite the earlier indication that Annex A is merely one of several possible checklists or sources of information about information security controls, the wording here strongly implies, again, that it is a definitive, perhaps even mandatory set after all.

Finally, Annex A creates yet more problems. It is identified as "Normative", a key word in ISO-land meaning "mandatory". Oh. And then several of the controls use the key word "shall", another word reserved for mandatory requirements in ISO-speak.

What a bloody mess!

Until this is resolved by wording changes in a future release of the standard, I suggest taking the following line (roughly sketched in code after the list):
  • Identify and examine/analyse/assess/evaluate your information risks;
  • Decide how to treat them (avoid, mitigate, share and/or accept);
  • Treat them however you like: it is YOUR decision, and you should be willing to justify it … but I generally recommend prioritizing and treating the most significant risks first and best, working systematically down towards the trivia, where failing to treat them quite so efficiently and effectively matters less;
  • For risks you decide to mitigate with controls, choose whatever controls suit your situation. Aside from Annex A, there are many other sources of potential controls, any of which might be more suitable and that’s fine: go right ahead and use whatever controls you believe mitigate your information risks, drawing from Annex A or advice from NIST, DHS, CSA, ISACA, a friend down the pub, this blog, whatever. It is your choice. Knock yerself out;
  • If the certification auditors challenge your decisions, refer them directly to the note under 6.1.3 b): "Organizations can design controls as required, or identify them from any source." Stand your ground on that point and fight your corner. Despite the other ambiguities, I believe that note expresses what the majority of SC27 intended and understood. If the auditors are really stubborn, demonstrate why your controls are at least as effective as, or better than, those suggested in Annex A;
  • Perhaps declare the troublesome Annex A controls “Not applicable” because you prefer to use some other more appropriate control instead;
  • As a last resort, declare that the corresponding risks are acceptable, at least for now, pending updates to the standard and clearer, more useful advice;
  • Having supposedly treated the risks, check that the risk level remaining after treatment (“residual risk”) is acceptable, otherwise cycle back again, adjusting the risk treatment accordingly (e.g. additional or different controls).
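Purely by way of illustration, here is roughly how that line of reasoning might look written down as code. The risks, scoring, thresholds and control names are invented for the example; the point is simply that the treatment decisions, the chosen controls (from any source) and the SoA justifications end up recorded together, with residual risk checked at the end.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    level: int            # however you score risk - an example 1-10 scale here
    treatment: str        # avoid / mitigate / share / accept
    controls: tuple = ()  # chosen from Annex A, NIST, CSA, the pub ... anywhere
    justification: str = ""

ACCEPTABLE = 4   # illustrative risk-acceptance threshold

risks = [
    Risk("Insider data theft", 8, "mitigate",
         ("Access control policy (Annex A)", "DLP tooling (vendor guidance)"),
         "Significant risk; controls drawn from more than one source"),
    Risk("Minor expenses fiddling", 3, "accept", (),
         "Residual risk acceptable for now, pending clearer guidance"),
]

# Work the most significant risks first and best ...
for risk in sorted(risks, key=lambda r: r.level, reverse=True):
    print(f"{risk.name}: {risk.treatment} -> {risk.controls or 'no controls'}")
    print(f"  SoA justification: {risk.justification}")

# ... then check the residual risk really is acceptable, else cycle back.
# Toy arithmetic: each control knocks a few points off - a placeholder, not a method.
for r in risks:
    residual = max(r.level - 3 * len(r.controls), 0)
    status = "OK" if residual <= ACCEPTABLE else "cycle back: adjust treatment"
    print(f"  Residual risk for {r.name}: {residual} ({status})")
```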
If you are still uncertain about this, talk it through with your certification auditors – preferably keeping a written record of their guidance or ruling. If they are being unbelievably stubborn and unhelpful, find a different accredited certification body and/or complain about this to the accreditation body. You are the paying customer, after all, and it’s a free market!

Thursday 12 July 2018

Looking for inspiration

Over on the new CISSPforum, in a thread about helping our corporate colleagues understand what information security is all about, someone asked about raising awareness among the general public - specifically whether we might learn from how other industries explain fraud and abuse.

Well, they may not all concern fraud and abuse but there are loads of 'public awareness' activities going on all the time, some much more successful than others. Examples include:
  • Health and safety awareness (about the H&S legislation mostly)
  • Health awareness (with much broader objectives about living healthier lifestyles, getting fit, reducing obesity, not smoking etc.)
  • Illness awareness e.g. cancer, mental ill-health etc. (aiming to support sick people and get them to seek professional help ... such as the breast cancer awareness ad I'm hearing right now on NZ local radio)
  • Safety awareness (such as driving more carefully ... a n d   s l o w l y ... and preparing for various disasters)
  • Political awareness (promoting the policies and objectives of political parties)
  • Social awareness (mostly about or supporting 'disadvantaged' groups for various values and causes of disadvantage)
  • Marketing and advertising of products, branding And All That (by far the most widespread, creative and successful form of awareness, I'd argue)
  • Global awareness (on a wide range of global issues such as warming, poverty, trade, travel ...)
  • Business awareness (ranging from tax and other compliance stuff to good business practices)
  • Finance awareness (mostly marketing but some genuine efforts to help people manage their money and debts more effectively)
  • Life awareness a.k.a. the education system generally, not just skool
  • Trades and professions, with their courses and badges galore, plus codes of practice and so forth
  • Celebrity awareness (Kardashian-itis, Trump-itis ...)
  • Art awareness and appreciation
  • Science awareness and appreciation
  • Engineering awareness ....
  • More: over to you! What have I missed?
There is no shortage of examples varying widely in scope, focus, delivery methods, objectives and success ... which means a bewildering array of approaches to consider and perhaps adapt or simply apply in "our" field/s and context/s (for there are several of both).

Sunday 1 July 2018

Security frameworks awareness module released

The security awareness module for July concerns conceptual or architectural frameworks, standards, methods and good practices in the area of information risk and security – ‘security frameworks’ or ‘frameworks’ for short.

Both the organization and individual workers are obliged to comply with various rules concerning information security.  Some rules are imposed on us by external authorities in the form of laws and regulations, others we impose on ourselves through corporate policies and procedures, contracts etc. 

There are numerous laws and regulations relating to information security, far too many for us to cover in detail.  We can only talk in general terms. 

We face a similar practical constraint with corporate security policies, procedures etc.: we are not familiar with our customers' policies, nor with their current internal compliance challenges. But the ‘policy pyramid’ is a near-universal structure or framework, so the generalities apply again ... and for good measure we're supplying an updated suite of 71 security policy templates along with July's awareness content (the policies are sold separately too).
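For anyone who hasn't come across the term, the 'policy pyramid' is simply the layered hierarchy of corporate rules - broad and stable at the top, detailed and changeable at the bottom. A rough sketch, using typical layer names rather than any formal taxonomy:

```python
# Typical policy pyramid layers, top (broad, stable) to bottom (detailed, changeable).
# The layer names are illustrative - organizations label these differently.
POLICY_PYRAMID = [
    ("Corporate information security policy", "Board-level intent and mandate"),
    ("Topic-specific policies",               "e.g. acceptable use, access control, malware"),
    ("Standards and baselines",               "Specific, measurable requirements"),
    ("Procedures and guidelines",             "Step-by-step how-to detail for workers"),
]

for depth, (layer, purpose) in enumerate(POLICY_PYRAMID):
    print("  " * depth + f"{layer}: {purpose}")
```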

The module provides a sound platform or starting point to raise awareness of good security practices, frameworks and structured approaches. 

Next month we’ll move on to cover insider threats - threats originating within the organization from its employees, contractors, consultants, temps, interns and more.  August’s module will be simpler and more practical, less conceptual than July’s.