Tuesday 26 June 2018

Critiquing NIST's Cyber Security Framework


Today, in the final stages of preparing the awareness module on "Security frameworks", I'm thinking and writing about the NIST Cyber Security Framework (CSF). For awareness purposes, there's no point describing and elaborating on the CSF in great detail, but I need to read and evaluate it in order to sum it up and comment meaningfully for our subscribers. I'm investing my time and effort partly on their behalf, partly for my own education: I'm interested in infosec standards, keen to discover NIST's take on 'cyber security', and on the lookout for good security practices.

So, indulge me for a moment as I talk you through the evaluation of just one small part of the CSF, specifically the core framework's advice on awareness and training (denoted "PR.AT", making it the prat section :-).
"The organization’s personnel and partners are provided cybersecurity awareness education and are trained to perform their cybersecurity related duties and responsibilities consistent with related policies, procedures, and agreements."
Reading that paragraph literally and narrowly, precisely who are "the organization's personnel and partners"? The "organization's personnel" are presumably its employees ... but that's a presumption. "Employee" is legally defined in at least some jurisdictions. Are temporary workers, interns and so on included in or excluded from that category? It depends.

"The organization's ... partners" is even less clear and more open to interpretation: various third parties may or may not be included in that category. Does it mean 'business partners' only, for example joint venture partners with binding contracts in place? Or suppliers and customers? Consultants? Contractors? Owners/stockholders? Assorted authorities? The general public and society at large? Current and former partners, perhaps future ones too (e.g. potential partners currently in negotiation)? 

Hmmm. There are many possible concerns at this stage for those (like me) who are anal enough to critique the wording. Many users of the CSF will not even notice these issues, or if they do will gloss over them. Some may even actively exploit issues like these for their own advantage, or perhaps dismiss the entire CSF out of hand as "ambiguous and unhelpful".

The underlying issue I'm getting at here is common to most public security standards and advisories. There are several prospective audiences with a variety of expectations and interests, concerns and constraints. Most readers/users of the standards are not lawyers, and many are not trained or experienced in this area - which is precisely why some go looking to the standards for help. We all either seek or welcome easy answers, simple and elegant solutions to our immediate needs, without necessarily recognizing or accepting that the standards aren't written for us, personally. They are inevitably generalized or generic. They need to be interpreted, which in turn frees the authors from writing too narrowly and specifically but at the same time increases the risk of the standards becoming hand-waving, bland and unactionable. It's a fine line they tread.

NIST's approach in the CSF involves layered structures within the standards. The paragraph above is one of 23 in fact, called "categories" within 5 areas called "functions". The structure reflects a process view of cybersecurity, a timeline relative to the point an incident occurs. That's certainly not the only way to structure the CSF but, presumably, it suits their purpose and has the advantage of roughly even amounts of content in each part - an example of symmetry or balance that, for some obscure reason, seems to matter.
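For the structurally minded, that layered arrangement boils down to a simple lookup. Here's a toy sketch in Python using NIST's five function names and a small, genuine sample of CSF 1.1 category IDs - a thinking aid of my own, not anything NIST publishes:

```python
# A toy model of the CSF core's layered structure: functions contain
# categories. The function names are NIST's; the category IDs shown are a
# genuine sample from CSF 1.1 (the full core has 23 categories).
CSF_CORE = {
    "Identify": ["ID.AM", "ID.RA"],   # e.g. Asset Management, Risk Assessment
    "Protect":  ["PR.AC", "PR.AT"],   # e.g. Access Control, Awareness and Training
    "Detect":   ["DE.CM"],            # e.g. Security Continuous Monitoring
    "Respond":  ["RS.RP"],            # e.g. Response Planning
    "Recover":  ["RC.RP"],            # e.g. Recovery Planning
}

def function_for(category_id: str) -> str:
    """Look up which function a category such as "PR.AT" belongs to."""
    for function, categories in CSF_CORE.items():
        if category_id in categories:
            return function
    raise KeyError(category_id)

# function_for("PR.AT") -> "Protect"
```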

Moving further down into the structure, the 23 categories across 5 functions are each supported by more specific statements ("subcategories") plus informative references to other standards. For example, these support the awareness and training category:
"All users are informed and trained (CIS CSC 17, 18; COBIT 5 APO07.03, BAI05.07; ISA 62443-2-1:2009 4.3.2.4.2; ISO/IEC 27001:2013 A.7.2.2, A.12.2.1; NIST SP 800-53 Rev. 4 AT-2, PM-13)"
"Users" there presumably refers to IT users. "Informed and trained" is not the ultimate objective of awareness and training, but the process or mechanism used to achieve the (unstated) objective. While admirably succinct, notice the total lack of details about the form or nature of the awareness and training activities, their content and topics, motivation, frequency, reception etc. The reader is left to figure all that out for themselves, perhaps exploring those cited resources for further advice. 
"Privileged users understand their roles and responsibilities (CIS CSC 5, 17, 18; COBIT 5 APO07.02, DSS05.04, DSS06.03; ISA 62443-2-1:2009 4.3.2.4.2, 4.3.2.4.3; ISO/IEC 27001:2013 A.6.1.1, A.7.2.2; NIST SP 800-53 Rev. 4 AT-3, PM-13)"
Again, readers have to interpret "privileged users" but at least this time the statement is somewhat closer to being an objective or intended outcome. 'Understanding' is helpful, yes, but doesn't achieve much in isolation unless people go on to comply with the requirements and fulfill the organization's expectations, which means behaving in certain ways, making sound decisions etc. The reader is left to flesh out all those unstated details. Easy enough for those of us who live and breathe this stuff, not so easy for readers who have come here for guidance.
"Third-party stakeholders (e.g., suppliers, customers, partners) understand their roles and responsibilities (CIS CSC 17; COBIT 5 APO07.03, APO07.06, APO10.04, APO10.05; ISA 62443-2-1:2009 4.3.2.4.2; ISO/IEC 27001:2013 A.6.1.1, A.7.2.1, A.7.2.2; NIST SP 800-53 Rev. 4 PS-7, SA-9, SA-16)"
This statement takes even more interpretation. It's good of them to offer three examples of "third-party stakeholders", but there's no advice on those "roles and responsibilities" - no examples there. Given the context, the roles and responsibilities presumably relate in some way to cybersecurity, but what are they, even generally speaking? 
"Senior executives understand their roles and responsibilities (CIS CSC 17, 19; COBIT 5 EDM01.01, APO01.02, APO07.03; ISA 62443-2-1:2009 4.3.2.4.2; ISO/IEC 27001:2013 A.6.1.1, A.7.2.2; NIST SP 800-53 Rev. 4 AT-3, PM-13)"
I have the same concerns here as with the first supporting statement. Who are "senior executives"? Are senior, middle and junior managers excluded? What about team leaders, shift leaders, project managers and others? What are their roles and responsibilities, and is 'understanding' sufficient?
"Physical and cybersecurity personnel understand their roles and responsibilities (CIS CSC 17; COBIT 5 APO07.03; ISA 62443-2-1:2009 4.3.2.4.2; ISO/IEC 27001:2013 A.6.1.1, A.7.2.2)"
Ditto. Let's make sure our 'physical personnel' understand what they're meant to be doing, eh?  :-)

Take another look at the overall PR.AT sentence though: notice there's no mention or supporting detail for the final clause "related policies, procedures, and agreements".

There are similar issues with the cited sources: they are all generic and fairly high-level, needing to be interpreted (within their own contexts, plus those of the organizations using them) and applied sensibly.

Summing up, the Cyber Security Framework, plus those other standards and methods cited by it and more besides, all need to be interpreted carefully and applied sensibly to have any real value to a given organization. They are skeletal, the bare bones: simply add flesh and bring to life.  If only it were that simple.

Monday 25 June 2018

ISO27k updates

Slogging away tediously for 3 full days, I've caught up with a 3-month backlog of emails from the ISO/IEC JTC 1/SC 27 committee, picking out and checking through all the ISO27k-related items and updating our website. It's a laborious process but worth it, I think, to keep up with developments, especially as the ISO27k standards will feature heavily in July's awareness module on security frameworks.

Here's a potted selection of news highlights on the ISO/IEC 27000-series standards:
  • 27001 (ISMS) is likely to see some changes in the wording around risks and opportunities, and the Statement of Applicability. Hopefully the end result will be an improvement!
  • The 27002 (controls) revision is starting to get to grips with reorganizing and tagging the information security controls. This is going to be a slog ... but at the end of it, there will be more flexibility for users of the standard, for example if you are auditing, reviewing or (re)designing the IT suite, it should be possible to pick out "all the preventive, physical security controls" without having to pore through the entire standard.
  • A stop-gap minor update to 27005 (risks) should surface later this year, at last, while work progresses on the full revision in parallel. 
  • 27034 (appsec) is falling into place: this multi-part standard describes a highly structured method for managing the information security controls within a software development function, with fascinating features such as proper architecture, specification, design, hardening, testing and parameterization of controls. Users of the standard are encouraged to invest in building inherently strong controls, then to reap the rewards by re-using those controls in multiple applications or situations - a compelling approach, one that some organizations are already using. It works!
  • The development of 27045 (big data security) is just starting. I suspect 'big data' actually means 'complex IT systems' to the project team, rather than truly vast amounts of data, but I could be wrong. Either way, it is a brave move to develop security standards in this evolving area.
  • The fun continues with 27100 and others on "cybersecurity", particularly as none of the existing or developing ISO27k cyber standards adequately define the terms. The committee appears to be drifting vaguely towards the area of basic Internet security (despite that being adequately served by existing ISO27k standards), although some remain curiously obsessed with "the Cyberspace" (whatever that actually means: the formal definition is distinctly unhelpful and bears little relation to what most people think cyber is all about) while critical infrastructure protection against cyberwarfare (a dramatically different interpretation of cyber in government and defense) is poorly addressed within ISO27k.
  • IoT security standards are showing some signs of life. It's early days though, involving lots of interaction with other committees and industry bodies actively developing the technology standards behind IoT.
  • Privacy and information security are quietly sliding closer together. A number of new ISO27k standards will cover privacy matters, and the committee is considering a change of name from "IT Security Techniques" to "Information Security and Privacy" (or possibly something to do with cybersecurity, perhaps "Protecting the Cyberspace"?!). There is a substantial overlap between these areas, not 100% though.
For more info on these and other ISO27k news items, please browse ISO27001security.com or contact your national standards body for details of the shadow-committee slaving away on SC 27 matters.

Friday 22 June 2018

Critical of the critical infrastructure


A comment at the end of a piece in The Register about the safety aspects making it tricky to patch medical equipment caught my beady eye:
"Hospitals are now considered part of the critical national infrastructure in Israel and ought to be given the same status elsewhere".
Personally, I'm not entirely sure what being 'considered part of the critical national infrastructure' really means, in practice. It may well have specific implications in Israel or elsewhere, but I suspect that's just stuff and nonsense.

Those of you who don't work in hospitals, or in Israel, nor in critical national infrastructure industries and organizations, please don't dismiss this out of hand. Ultimately, we are all part of the global infrastructure known as human society - or, wider still, life on Earth. It is becoming increasingly obvious that we are materially harming the environment (= the Earth, our home), and if Space Force is real (not Space Farce) then even the sky's not the limit.

Within recent weeks on the Korean peninsula, the prospect of something 'going critical' has risen and receded, again. 'Nuff said.

Since we are all to some extent interdependent, we are all 'critical' in the sense of the butterfly effect within chaos theory. It is conceivable/vaguely possible that a seemingly trivial information security incident affecting a small apparently insignificant organization, or even an individual, could trigger something disastrous ... especially if we humans carry on building complex, highly interdependent, inherently unreliable, non-resilient, insecure information infrastructures, consistently glossing-over the fine details. 

I hear you. "It's OK, Gary, calm down. It's just 'the cloud'. Don't you worry about that." But I'm paid to worry, or at least to think. As a knowledge worker, it's what I do.

Oh and by the way, not all critical infrastructure is global or national in scope. Some is organizational, even individual. I've just done the rounds feeding our animals, lit the fire and made a cup of tea, tending to my personal critical infrastructure.

So if we tag bits of various infrastructures as critical, is that going to achieve a material change? No. It's just another label giving the appearance of having Done Something because, of course, Something Must Be Done. Unless it actually leads on to something positive, we are deluding ourselves, aren't we?

It's much the same bury-your-head-in-the-sand self-delusion as 'accepting' information risks. Having identified and analyzed the risks, having considered and rejected other treatments, we convince ourselves that the remaining risks are 'acceptable' and promptly park them out of sight, out of mind, as if they no longer exist. Hello! They are still risks! The corresponding incidents are just as likely and damaging as ever!

Whatever happened to security engineering? Is that in the clouds too? Or am I being too critical for my own good?

Happy Friday everyone. Have a good weekend. Keep taking the Pils.

Thursday 21 June 2018

Happy solstice!


10 o'clock this evening on June 21st is the Winter solstice for us down here in the Southern hemisphere. According to Wikipedia, we should be celebrating with "Festivals, spending time with loved ones, feasting, singing, dancing, fires". I lit the wood fire to warm the IsecT office before 8 this morning as usual. Having just fed the animals, I'm singing along to the radio as usual while I work. As to feasting, maybe we'll splash out on a special meal this weekend.

Up there on the Far Side, it's Midsommerfest which means festivals, spending time with loved ones, feasting, singing, dancing ... but no fires, hopefully. Most people are looking forward to summer holidays, I guess. We're looking forward to longer, warmer days and spring lambs, talking of which our Prime Minister is in hospital having a baby. It's OK though because we have a caretaker PM keeping an eye on things. The next few weeks will be interesting in NZ politics.

Friday 15 June 2018

Parting messages

Advertisers know the value of a parting message at the end of an advertisement. It's something catchy to stick in the memory, reminding people about the advertisement or rather the messages the ad was meant to convey, generally concerning the brand rather than the specific product.

Making ads memorable is one thing: making them influential or effective is another. Some ads are memorable for the wrong reasons, annoying and intrusive rather than enticing and beneficial. However, one man's hot button is another's cancel/exit. Ads are usually targeted at audience segments or categories rather than everyone, though, so don't be surprised that you hate some ads and love others.

Translating that approach to security awareness, the end of an awareness event is just as important as the start and the main body of the session. It’s your final chance to press home the key awareness messages and set people thinking about the session as they wander off. 

In the closing remarks at the end of your seminars, workshops, courses, management presentations etc., try these ideas for size: 
  1. Give a brief summary/recap of the main points;
  2. Specifically mention anything that noticeably resonated with the audience, created a stir, got people talking or made them laugh;
  3. Mention - or better still promote - further awareness and training events/sessions, policies, briefings etc. that attendees might enjoy;  
  4. Persuade attendees to put something from the session into action that very day or week;  
  5. Invite attendees to hang back and ‘have a quick word with you’, handling any further questions, issues, concerns, comments and (most valuable of all for you) feedback on the awareness session.   
Personally, I despise the formulaic approach often recommended for inexperienced presenters, namely "Tell 'em what you're going to tell 'em, tell 'em, then tell 'em what you told 'em". It is crude, manipulative and counterproductive, I feel ... but then I present a lot and don't like being too predictable. 

Another little tip is to front-load the session with the most important messages, if you can, especially if it is a long session or follows a lunch break. Catch their attention before they doze off, or better still keep them awake with your best possible performance. If you need to cover other stuff first, let them know that there's something big coming up later, and remind them again before you deliver it. Punctuate the session in some way as you move from segment-to-segment. I'll blog about punctuating and structuring sessions another time.

Thursday 14 June 2018

Metrics maturity metric, mmm


Given that measurement can both establish the facts and drive systematic improvement, I wonder whether I might develop a metric to measure organizations' approach to security metrics? 

Specifically, I have in mind a security metrics maturity metric (!). 

Immature organizations are likely to have few if any security metrics in place, with little appreciation of what they might be missing out on and little impetus to do anything about it. In short, they are absolutely rubbish at it.

Highly mature organizations, in contrast, will have a comprehensive, well-designed system of metrics that they are both actively using to manage their information risk and security, and actively refining to squeeze every last ounce of value from them. They are brilliant.

Those two outlines roughly describe the end points of a maturity scale, but what about those in the middle? What other aspects or features have I seen in my travels, what other characteristics are indicative of the maturity status?

Eating my own dog food, before deciding on the Metric I should first have elaborated on the Goals of security metrics and the Questions arising (the GQM method). However, now, even with a maturity metric in mind, the same process of determining the Goals and Questions can help me work out the characteristics against which to assess organizations, the maturity Metric's measurement scale as it were.
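Thinking aloud in code, then: here's one possible shape for such a metric. To be clear, the criteria and the 0-5 rating scale below are placeholders I've invented on the spot, pending the proper Goals-and-Questions work:

```python
# A possible shape for the security metrics maturity metric. The four
# criteria and the 0-5 rating scale are invented placeholders for the sake
# of argument, not a finished measurement instrument.
CRITERIA = [
    "Security metrics exist and are routinely reported",
    "Metrics are explicitly aligned to goals and questions (GQM)",
    "Metrics demonstrably drive management decisions",
    "The metrics system itself is reviewed and refined",
]

def maturity_score(ratings: list) -> int:
    """Turn one 0-5 rating per criterion into a 0-100% maturity score."""
    assert len(ratings) == len(CRITERIA)
    return round(100 * sum(ratings) / (5 * len(CRITERIA)))

# maturity_score([0, 0, 0, 0]) -> 0   ("absolutely rubbish at it")
# maturity_score([5, 5, 5, 5]) -> 100 ("brilliant")
```

The interesting design question, of course, is what goes in that criteria list - which is exactly the Goals-and-Questions homework I'm dodging here.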

Sorry if this is gibberish. I'm thinking aloud here, making lots of assumptions and skipping ahead while doing other stuff ... which I really ought to get on with, so I'll stop for now and pick this thread back up later on, unless I completely lose the plot.

Tuesday 12 June 2018

Infosec priorities


I'm rapidly bringing myself back up to speed on information security frameworks for July's security awareness materials. Today, I've been updating my knowledge on the wide range of frameworks in this area, thinking about the variety of concepts, approaches and recommendations out there.

There are several space-frame models. For some reason presumably relating to our visual perception, they are almost always symmetrical, often triangular or pyramidal in shape such as the ICIIP (Institute for Critical Information Infrastructure Protection) one above, developed at the USC Marshall School of Business in Los Angeles. The ICIIP model caught my eye back in 2008 shortly before ISACA adopted it as BMIS (Business Model for Information Security).

Alternatively the shape might represent the magic number 3, or perhaps 10, counting the nodes and links of a triangular 'pyramid' - a tetrahedron with its 4 nodes and 6 links (glossing over the fact that the ancient Egyptian pyramids have square bases and hence 5 faces, not 4).

Talking of numbers, the dreaded Pareto principle or Pareto rule or 80:20 or whatever the thumbnail MBA guides and assorted self-proclaimed experts are calling it this year, rears its ugly head in some of the advice on information security. Speaking as an infosec pro with a scientific background and an interest in security metrics, I am more than a little cynical about Pareto under any and all circumstances. It's a very vague rule of thumb, at best, derived and wildly extrapolated from, of all things, an observation about the distribution of incomes in England at the end of the 19th Century. I kid you not.

In the context of information risk and security, it's misleading in the extreme. To my mind, 80% secure is woefully short of good practice, no matter how you determine the percentage (which, conveniently, virtually nobody advising Pareto in this space is inclined to do). I totally accept that 100% security is literally unattainable but 80 - really? Well OK then, you might claim to be able to get to 80% of the required level of security with 20% of the controls, or effort, or investment, or whatever. I might equally counterclaim that the remaining 20% of security takes 150% of the effort, maybe 200%. The figures are pure bunkum, made up on the spot. All Pareto really tells us is that life is a shit sandwich, and we should focus on the Stuff That Matters - prioritize in other words. Gosh. 

Priorities interest me in relation to information security. We have a huge array of possibilities in the future, far too many to handle in fact. We can only realistically deal with some of them, rather few when it comes down to it. It is inevitable that we need to focus. Yes, I hear you, "Focus on the 20%"! Whatever. Focus is the point, not your fake mathematics. So what should we focus on? Here it gets fascinating.

In ISO27k-land, we are advised to focus on the risks, the information risks (although they don't - yet - say so). "Tackle the big risks first and work your way down from there, reviewing and revising constantly as your approach to information (risk and) security management matures" we're told. Hmmm.

Some (including me, at times!) would argue that we need to prioritize on business value, taking account of the effectiveness and efficiency of our information security arrangements AND, ideally, the projected real costs of incidents involving information - meaning both the impact and probability parts of risk.

Splitting that apart, it is feasible to address some high-impact incidents particularly if you limit yourself in some manner to credible scenarios. That's what the Business Impact Analysis component of Business Continuity Management does, extremely well. Better than us infosec wonks, anyway. It's a tiny wee tweak to use the BIA results to prioritize preventive activities in information security, so wee in fact that nobody except us will probably even notice. Cool! That's a substantial tranche of our security strategy and next budget proposal in the bag already, courtesy of those nice BCM people.

Addressing high-probability incidents is more science than art: simply look at your incident metrics to find out what is really going on. 

Oh, hang on a moment, 'incident metrics' is an alien term for some, while incident reporting is, let's say, lackluster at best, even in a fairly mature and compliant organization. 

Now that's an issue we can address through security awareness. 

Sunday 10 June 2018

Policy management approaches

I'm researching (well OK, I've done a little Googling) how other, non-infosec policy suites are structured, accessed/presented and managed, for clues that might be relevant to ours.

First, financial policies. Funds for NGOs specifies "seven principles suggested by [unnamed] experts" as good practice:
"6.1 Principle of Financial Policy: While developing a financial policy it is a good practice to incorporate the following seven principles suggested by experts. These principles lay the foundation of an effective financial policy which would ultimately result into a healthy organization.
  1. Consistency: The financial policy should be consistent, which simply means that it should not allow manipulation of processes and systems. All the staff members should consistently adhere to the financial policy and there should not offer much flexibility. A consistent policy will ensure better accountability, transparency, better information dissemination and timely reporting.
  2. Accountability: The financial systems should be such that it makes the organization more accountable to its stakeholders. As an NGO all you should account for all the resources and its expenses. For this the policy should clearly indicate the procedures for reporting and publication of financial data.
  3. Transparency: An organization should disclose all its operation and provide necessary information to stakeholders. This means that the NGO should provide accurate and timely information to donors, beneficiaries and all relevant stakeholders.
  4. Viability: For an NGO to be viable in the long run, the policy should set in place a mechanism that would maintain a balance between its expenditure and income. For any organization to be viable it is important that team leaders are able to generate sufficient funds to continue the functioning of the NGO.
  5. Integrity: All team members should follow all rules set by the financial policy. As a founding member you should set precedence in following and adhering to all rules.
  6. Oversight: The policy should also provide oversight into the future and should accordingly suggest measures to cope with future challenges. This would include risk assessment; strategic planning etc.
  7. Accounting standards: The policy should be such that it incorporates valid national standards and protocols. The accounting systems should meet national and international standards of financial accounting and recordkeeping this would facilitate easy transactions between diverse funding strategies."
Their 7 principles concern ensuring and demonstrating compliance with external obligations, evidently a strong driver in the world of finance. The final recommendation to 'incorporate valid national standards and protocols' would make those external obligations an explicit and integral part of the financial policies. 

For most of information risk and security, internal business drivers are arguably even more important than external obligations with a few exceptions (privacy for instance, plus integrity of the financial systems, processes and data). Thinking about it, the same point applies to financial management: making efficient and effective use of the organization's finances is at least as important as satisfying external compliance obligations, isn't it? 

Talking of legal obligations, what about health and safety policies?  The UK's Health and Safety Executive offers a simple policy template with just 2 pages:


The risk assessment page seems to be a working document used only in preparing the policy, hence the actual policy would typically be just the one page. The example has just 5 policy statements with brief explanations, specifying the responsible (actually, accountable) individuals for each policy statement.

There's a lot to be said for brevity (!) ... provided the policy is understood and followed in practice, which places much more emphasis on the associated awareness, training, compliance, assurance, oversight and monitoring activities - important supporting aspects not stated in the one-pager. The policy, then, is just a small piece of a bigger puzzle.

Brevity is a double-edged sword. A very brief policy gives a lot of latitude to workers in how they interpret and apply it, which can be a good thing but might be problematic depending on the circumstances. A major factor in health and safety is that workers literally have flesh in the game: it is clearly in their own personal interests to work safely and protect their own health. At the same time, the corporation has responsibilities towards ensuring its workers' health and safety (not just for legal and regulatory compliance reasons!) and workers have responsibilities towards each other, aspects that the example policies above don't cover. They seem narrow and naive to me but, hey, what do I know?

On that score, environmental protection is an area where both individual workers and the corporation have parts to play. Sony's approach to environmental policy, for instance, is quite complex with much more than the one-page health-and-safety example above ... as befitting a global organization with numerous national compliance obligations plus corporate objectives. For a start, Sony's environmental policy is part of Corporate Social Responsibility: it is not just an isolated or discrete policy matter but supports the corporation's wider aims towards society. This is a much more rounded and mature approach to policy. 

I can envisage a similar hierarchical policy structure for information risk and security, guiding the whole corporation along similar lines. As with the health and safety example above, policy edicts from HQ would need to be generic, leaving individual business units and workers the latitude to interpret and apply them locally ... but not so much freedom that corporate policies can be totally ignored. That's definitely a challenging requirement for policies!  Again, though, the policies themselves are not the whole story. Promulgating and enforcing policies involves the system of corporate governance and management.

Finally for today, Deming's PDCA cycle on the Sony page hints at the policy lifecycle. Someone has to specify, develop, check and authorize policies that are to be circulated and enforced, monitor compliance and effectiveness, react to changes by extending, maintaining and refining them, ideally achieving continuous improvement. The nice thing about systematic and cyclical improvement is that the starting point is irrelevant. If the policies start out woefully inadequate, the initial rounds of revision are likely to be step-changes, whereas later on the changes will be subtle refinements and tweaks. I sincerely hope our security policy templates enable customers to bypass the painful early learning stage, saving a small fortune and delay. We can't do all the policy refinement and management for you, but we can set you off to a strong start and support your security awareness and training activities. 

Friday 8 June 2018

Navigable structures

Some interesting suggestions concerning structures, content and management tools came up on CISSPforum yesterday as we chatted about security policies. 

I mentioned before that I'm getting glimpses of structure within the policy suite. In fact, there are several structures, different ways to group, link and use them, which complicates matters. It's a mesh of multiple partially-overlapping categories, and a number of possible viewpoints reflecting the perspectives and interests of the various users.

Much the same issue affects ISO/IEC 27002: numerous possible controls addressing a plethora of risks can be grouped and arranged in several ways. At the same time, the standard is aimed at a wide variety of people and organizations, with perspectives and needs that, by the way, aren't static but change as they get stuck into the subject and their interests develop.

ISO/IEC JTC 1/SC 27 is tackling this issue by systematically 'tagging' the controls with labels, allowing users to select whichever ones interest them. It's an obvious application for a database ... though how the same approach would work with corporate security policies is less clear. 
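A minimal sketch of that tagging idea in Python, using made-up control names and tags purely for illustration (SC 27's actual attribute scheme will differ):

```python
# Hypothetical tagged catalogue: each control carries a set of free-form labels.
controls = {
    "Information classification": {"governance", "data"},
    "Security awareness":         {"people", "governance"},
    "Malware protection":         {"technology", "operations"},
    "Backup":                     {"technology", "resilience"},
}

def select(catalogue, *wanted):
    """Return, sorted, the controls carrying every requested tag."""
    return sorted(name for name, tags in catalogue.items()
                  if set(wanted) <= tags)

print(select(controls, "governance"))
# ['Information classification', 'Security awareness']
```

Each user (auditor, risk manager, IT admin ...) simply queries for the tags matching their perspective, rather than everyone wading through one fixed arrangement.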

Suggestions on the table so far include:
  1. General-purpose database or document management systems 
  2. [Security] policy [and procedure, standard and guideline] management systems
  3. Documents, simple or compound with a fixed structure but numerous cross-references
  4. Web sites and wikis with relatively fixed structures and loads of hyperlinks
  5. Something more dynamic and flexible, yet usable.
I'm idly exploring some of those options in parallel with reviewing and updating our policy suite, thinking about how we might deliver an even more valuable product without making our maintenance nightmares any scarier.

Wednesday 6 June 2018

Layers within layers

As I mentioned on the blog yesterday, we are working our way systematically through the suite of ~70 information security policies, making sure they are all up to scratch.

For context, the suite consists of 60-odd topic-based policies, plus an overarching high-level Corporate Information Security Policy, plus a handful of ‘acceptable use policies’ which are really guidelines with a misleading name.

We have here the bare bones of a typical policy pyramid with policies supported by corporate standards, guidelines and procedures and, of course, stacks of awareness and training stuff beneath.



The 60+ topic-based policies cover a wide range of information risk and security topics such as:
... and so on (derived originally from the structure of BS7799 then ISO27k), all in about 3 pages each in a standard format, ending with a cross-reference table listing other relevant policies etc. – and that’s where it gets interesting. Potentially, each policy could refer to any of the others, suggesting a master 60+ x 60+ matrix with ~3,600 cells each denoting the presence or absence of a cross-reference. Oh boy! Even assuming the cross-references would all be bi-directional (which seems likely), that’s still ~1,800 cells to complete in the matrix, and then check that the appropriate references are included in each of the 60+ policies. And then maintain, month by month as the policies are systematically checked and revised.

Looking at the existing cross-references, I’ve realized that all 60+ policies need to refer to the overarching Corporate Information Security Policy and almost all refer to the policy on information risk management. Information governance, information ownership and accountability, compliance and assurance policies feature in most of them too. Several refer to policies on general/infrastructure controls such as information classification and security awareness. In other words, I think I’ve stumbled across a 3-layer structure within the policy suite, in addition to the policy pyramid above. It’s not exactly clear yet, though.
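That layering could even be surfaced automatically by counting how often each policy is cited across the suite. A sketch with entirely made-up reference data:

```python
from collections import Counter

# Hypothetical cross-reference lists, as pulled from each policy's reference table.
refs_by_policy = {
    "Awareness":      ["Corporate InfoSec Policy", "Info risk management"],
    "Classification": ["Corporate InfoSec Policy", "Info risk management", "Ownership"],
    "Malware":        ["Corporate InfoSec Policy", "Classification"],
}

# Count inbound citations: heavily-cited policies are the upper layers.
citations = Counter(ref for refs in refs_by_policy.values() for ref in refs)
for policy, n in citations.most_common():
    print(f"{policy}: cited by {n} policies")
```

With real data, the universally-cited Corporate Information Security Policy would top the list, the near-universal ones (risk management, governance, ownership) would form the middle layer, and the topic-specific policies the base.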

Tuesday 5 June 2018

Security frameworks

The awareness and training materials for July will cover 'security frameworks', at least that's the working title at present. It may change as the scope is refined and the materials come together during June.

In addition to public standards such as ISO27k and NIST SP800, we plan to cover the internal frameworks or structures for information security within the corporation, important elements of information governance plus information risk and security management. I'm talking in particular about corporate security policies.

We are currently reviewing and revising our suite of generic information security policy templates, partly for subscribers as part of July's module. We routinely create or revise one or more of these templates each month in connection with the month's awareness topic, a systematic maintenance process that keeps the individual policies up to date. However, it is a piecemeal process, meaning that changes may be required to several existing policies when a new one (such as the whistleblowing policy) is added to the suite, or when a policy is extensively revised. We don't always have the time and energy to ensure that all the changes to all the templates are identified and made, consistently, each month, so that's something we're doing now: a full review and update of the entire policy suite.

With 70+ security policies in the suite, it's quite difficult to ensure that they all remain consistent, properly referencing each other. Take for example the policy on security awareness and training: that refers to several other policies ...


Since the assurance policy, for one, is cited in the security awareness policy, they are clearly related and relevant to each other, implying the need for each of them to reference the other ... and so on with all the other policies.

We've created something of a maintenance nightmare for ourselves here, a fairly complex mesh of security policies with lots of linkages or dependencies. 
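Whatever tooling we end up with, the core maintenance check is the same: every cross-reference should be reciprocated. A sketch of that check, using hypothetical policy names and reference data:

```python
def missing_backrefs(refs):
    """Find A->B references lacking the matching B->A entry."""
    return sorted((a, b) for a, targets in refs.items()
                  for b in targets
                  if a not in refs.get(b, set()))

# Toy suite: Awareness cites Assurance and Risk, but Risk fails to cite it back.
refs = {"Awareness": {"Assurance", "Risk"},
        "Assurance": {"Awareness"},
        "Risk":      set()}

print(missing_backrefs(refs))   # [('Awareness', 'Risk')]
```

Run over the whole suite after each monthly revision, a check like this would flag exactly which reference tables need touching up, instead of relying on memory.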

A few years back, we handled this situation with our original Information Security Policy Manual, a substantial tome based originally on the ISO/IEC 27001 and 27002 standards. 27002, in particular, has numerous cross-references to related controls embedded within it, which in the policy manual we hyperlinked to the relevant parts of the same Word document. That worked nicely in practice for maintenance purposes ... but the 'substantial tome' was not so easy to read, understand and implement due to its sheer size. It topped out at something over 120 pages. Revising the entire manual to reflect the updated ISO27k standards released in 2013 also turned out to be too hard. 

After a year, we admitted defeat, shifting instead to the present approach with individual "topic-based" policies of just a few pages each, albeit more than 70 of them, meaning something over 200 pages in total. 

Each policy starts with a clear title and succinct summary, defines its scope and applicability, lays out one or more axioms (broad security and control principles relating to the control objectives specified in the ISO27k standards), then presents a page or so of detailed policy statements plus the table of references to related standards etc.

So if someone wants to know about the organization's policy on, say, security awareness, they simply dig out and read the 3½-page security awareness policy. They might also explore the referenced policies, procedures and awareness materials for further information ... or not.

Now, we're exploring ways to re-integrate the policies:
  • We already have partial integration through the overarching Corporate Information Security Policy listing out the axioms. We could also list the policies, except it then becomes lengthier and another maintenance burden as the policy suite continues to evolve. 
  • We might generate a 70x70 matrix listing all the cross-references between all the policies - that's about 5,000 potential cross-references to create and maintain! Oh boy. A spreadsheet would cope but I'm not sure I would!
  • A master MS Word document containing all 70 policies as sub-documents is another possibility, turning all the cross-references once again into hyperlinks to ease navigation and maintenance while still allowing for the individual policies to be used in isolation. Word can probably handle all that but I have had problems in the past with such complex documents, and the risk of either me or the technology messing things up completely is quite scary. We have invested hundreds, perhaps thousands of hours into this edifice already.
  • Some sort of automated system that can handle all the policies and dependencies as a suite would be nice, both for us and for our customers. Although a standard Document Management System might suffice, we're aware of, and have played a small part in developing, at least two commercial security policy and awareness management systems. 
Watch this space! If I have enough time and energy left over to blog this week, I'll mention something interesting, an internal structure emerging unexpectedly from the rather chaotic suite of policies, hinting at a conceptual framework.

Sunday 3 June 2018

Psychological support

A few hours after we completed and delivered the Incidents and disasters awareness and training module, Rob Slade posted an interesting little note on CISSPforum* about Psychological First Aid and/or Disaster Psycho-social Support, terms that I hadn't come across before.

The World Health Organization's 64-page Psychological first aid: guide for field workers offers pragmatic advice to people such as aid workers, teachers and I guess emergency services professionals on how to help others suffering extreme emotional distress in the aftermath of a serious incident or disaster.

Fair point: "Different kinds of distressing events happen in the world, such as war, natural disasters, accidents, fires and interpersonal violence (for example, sexual violence). Individuals, families or entire communities may be affected. People may lose their homes or loved ones, be separated from family and community, or may witness violence, destruction or death."

PFA is described as "a humane, supportive response to a fellow human being who is suffering and who may need support" through:
  • "providing practical care and support, which does not intrude;
  • assessing needs and concerns;
  • helping people to address basic needs (for example, food and water, information);
  • listening to people, but not pressuring them to talk;
  • comforting people and helping them to feel calm;
  • helping people connect to information, services and social supports;
  • protecting people from further harm."
OK, so a major fraud, ransomware infection or hack may not be in quite the same league but the stress and trauma caused by serious cyber-incidents are, I believe, vaguely similar hence the WHO advice has value.

Protecting people from further harm is an intriguing idea: aside from limiting the corporate impacts, shouldn't we also do our best to limit the personal, traumatic impacts of serious cyber-incidents? Perhaps rest people who have been in the front line? 

Rob also posted a link to a Swedish government public-information leaflet about readiness for "crisis or war". Their advice on emergency supplies for the home caught my eye: staying warm is definitely a concern in the Scandinavian Winter:


I'll have more to say about that soon. Included in June's module was a staff awareness briefing about preparing a "Personal Plan B". As soon as we get the chance, we'll share it here. Watch this space.

* PS  (ISC)2 intends summarily to terminate CISSPforum inside a couple of weeks, without good cause and without even having the sense to consult the thousands of members of the community about this decision. Needless to say, some of those thousands are livid about yet another snub by (ISC)2, an organization supposedly established to further the interests of its professional members. If YOU are a CISSP or an information risk and security professional who cares about the profession, the community and (ISC)2's outrageous slight, please apply to join the new improved CISSPforum on Groups.io

Friday 1 June 2018

Incidents and disasters awareness module


Despite our very best efforts to avoid or prevent incidents and avert disasters, infosec and cybersec pros may concede that they remain a possibility. A remote possibility. Vanishingly small, we hope.

Being prepared for incidents and disasters puts our organizations in a better position to survive and thrive, keeping essential business processes and systems running despite the events (i.e. continuity and resilience), recovering non-essential ones as soon as practicable afterwards (that's recovery and resumption), and generally coping with whatever comes our way (contingency, as in what we need to do is contingent on what actually transpires in the event of our worst nightmares coming true).

Preparedness involves getting ourselves ready in case something goes seriously wrong. Whereas we may cope perfectly well with relatively minor events, more serious incidents or disasters such as the following deserve or require better preparation:
  • Power cuts, surges and dips (that's power dips, not the cheese variety);
  • Fires, overheating or smoke damage (a hot trio);
  • Floods and leaks (rain water, ground water, sea water, sewage);
  • Earthquakes, cyclones, tornadoes, volcanic eruptions or terrible storms (The Tempest);
  • Hacks and social engineering attacks (yes, cyber incidents too);
  • Overloaded IT systems, out of capacity, broken (or just   f la ky);
  • Malware infections, spyware, ransomware (malevolent software);
  • Mistakes by system administrators or users, plus “accidents” of all sorts (whoopsies);
  • Essential people unavailable (off sick, on holiday, under a bus, mysteriously disappeared, poached by competitors, poached by a volcano, playing golf, getting married, otherwise engaged, retired, exhausted, hung over, spaced out, busy Doing Other Stuff, on strike, in the nick, locked in the telephone kiosk ...);
  • Sabotage and cybertage (cybervandals or drug-crazed ax-wielding nutcases on the rampage);
  • Failed IT changes or upgrades (no!  What are the chances of that, eh?);
  • Cloud and Internet failures (stop it, you're scaring me now);
  • Serious frauds (as opposed to the jocular or casual ones);
  • Failure to hit significant deadlines, leading to compliance issues (GDPR for example?);
  • Other nasty surprises ! BOO !
A vital part of the preparation is preparing our people. Being mentally ready to cope with the stuff life throws our way is part of it. A willingness to Do What Has To Be Done is another. Security awareness and training, then, is business-critical given the risk of incidents and disasters as well as being absolutely invaluable at all other times. If you're not convinced, consider the alternative: would you rather find yourself in the midst of a crisis with a bunch of people who haven't a clue what's going on, are scared witless, don't know what to do and mostly just want to disappear under a rock? 

Getting the organization ready to face up to assorted crises has advantages under normal circumstances also. Resilience is a small word for a big concept in business continuity, the idea being that essential business processes and systems should remain at least partially operational under all but the most extreme circumstances. Limping along rather than running, maybe, but still going like the Duracell bunny.

If your security awareness and training program doesn't cover business continuity, if "Keep calm and carry on" is the best/only advice you can offer, do get in touch: we'd love to help out. We have a clue.