Saturday 23 October 2021

Topic-specific policy 11/11: secure development

The final topic-specific policy example from ISO/IEC 27002:2022 is another potential nightmare for the naïve and inexperienced policy author. 

 

Policy scoping

Despite the context and presumed intent, the title of the standard's policy example ("secure development") doesn't explicitly refer to software or IT. Lots of things get developed - new products for instance, business relationships, people, corporate structures and so on. Yes, even security policies get developed! Most if not all developments involve information (requirements/objectives, specifications, plans, status/progress reports etc.) and hence information risks ... so the policy could cover those aspects, ballooning in scope from what was presumably intended when the standard was drafted.

Even if the scope of the policy is constrained to the IT context, the information security controls potentially required in, say, software development are many and varied, just as the development approaches and associated methods are many and varied - and, more pointedly, so too are the information risks. 

 

Policy development

Your homework challenge, today, is to consider, compare and contrast these five markedly different IT development scenarios:

  1. Commercial firmware being developed for a small smart actuator/sensor device (a thing) destined to be physically embedded in the pneumatic braking system of commercial vehicles such as trucks and coaches, by a specialist OEM supplier selected on the basis of lowest price. 
  2. A long-overdue technical update and refresh for a German bank's mature financial management application, developed over a decade ago by a team of contractors long since dispersed or retired, based on an obsolete database, with fragmentary documentation in broken English and substantial compliance implications, being conducted by a large software house based entirely in India. 
  3. A cloud-based TV program scheduling system for a global broadcaster, to be delivered iteratively over the next two years by a small team of contractors under the management of a consultancy firm for a client that freely admits it barely understands phase 1 and essentially has no idea what might be required next, or when.
  4. A departmental spreadsheet for time recording by home workers, so their time can be tracked and recharged to clients, and their productivity can be monitored by management.
  5. Custom hardware, firmware and autonomous software required for a scientific exploration of the Mariana Trench - to be deployed in the only two deep-sea drones in existence that are physically capable of delivering and recovering the payload at the extreme depths required.
You may have worked in or with projects/initiatives vaguely similar to one, maybe even two or three of these, but probably not all five - and these are just a few random illustrative examples plucked from the millions of such activities going on right now. The sheer number and variety of possibilities is bewildering, so how on earth can one draft a sensible policy?

As is the way with ISO27k, the trick is to focus on the information risks. Draw on your experience, web research, competent colleagues and advisors, software engineering and development books, methods and standards, security guidelines documenting established good practices, IT audit reports, help desk records and post-incident reports covering software-related incidents, and chats with the team leaders and managers who have suffered through some of them. Systematically build up a general picture of the kinds of incidents that have happened, typically happen or just might happen in your situation. Then start sifting out the aspects that matter most - risks in the red zone of your probability-impact graphic, the top right quadrant of your risk matrix, or simply the top few entries on a ranked list or catalogue of risks. Those risks are obvious candidates for your policy to address, in some way - but you're not home and dry yet.
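To make that sifting a little more concrete, here's a minimal sketch in Python (the risk names, scores and threshold are purely illustrative assumptions, not taken from the standard) of ranking candidate risks by likelihood times impact and pulling out the red-zone entries that deserve explicit policy coverage:

```python
# Minimal sketch: rank hypothetical development-related information risks by
# likelihood x impact and pick out the "red zone" candidates for the policy.
# Risk names, scores and the threshold are illustrative assumptions.

risks = [
    # (risk, likelihood 1-5, impact 1-5)
    ("Vulnerable third-party library shipped unnoticed", 4, 4),
    ("Production data copied into the test environment", 3, 5),
    ("Credentials hard-coded and committed to source control", 4, 5),
    ("Emergency change bypasses peer review", 2, 3),
    ("Developer laptop stolen with unencrypted source code", 2, 4),
]

RED_ZONE_THRESHOLD = 15  # assumption: likelihood x impact >= 15 lands in the red zone

ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, likelihood, impact in ranked:
    score = likelihood * impact
    zone = "RED - address in the policy" if score >= RED_ZONE_THRESHOLD else "monitor"
    print(f"{score:>2}  {zone:<27}  {name}")
```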


Planning policy implementation

How do you propose to treat the risks, in fact? What controls do you already have in this area, and how are they working out in practice? Is it feasible to introduce a raft of security changes in one hit, or will things need to be phased in gradually over a period, with management support, training, new technologies and more? What changes are needed first, and why? How will they be planned, executed, monitored and controlled? What will business managers make of the emerging proposal, and why should they  support and authorise it?  

True, this is starting to look more like strategy than policy development but actually the two are (and often should be) closely linked. Your policy need not be perfect and totally comprehensive at the outset. There are definite benefits in starting small, letting the policy and supporting practices evolve flexibly as the organisation gradually adapts, learns and matures. Ideally, the policy should be as much empowering and motivating as it is controlling and constraining, on the basis that the future is highly uncertain, the risks unclear, and the policy audience is both smart and trustworthy (we hope!).

As usual, here's a $20 generic policy template if the standard's suggested topic-specific policy isn't enough to get you going - a pump-primer as it were.


The template policy covers acquisition of commercial software as well as bespoke in-house or contracted out development, and two distinct but related aspects:
  1. The need for information security and quality assurance controls to protect both the development and acquisition process and the associated information assets.

  2. The activities necessary to identify and take due account of information security requirements for the IT system, application, or indeed information service being developed or acquired.

 

Policy implementation

Preparing a policy, however fantastic it may be, is necessary but not sufficient. There's more to do. What does it mean, in fact, to 'implement' a policy? Policy implementation may involve:

  • Making those who are affected by the policy, particularly those who are expected to do or not do certain things to comply with it, aware of the expectations or obligations, perhaps training, encouraging and supporting them to act accordingly.
  • Preparing procedures and guidance amplifying and explaining the policy in more practical terms, translating management's formal direction into plain language that makes sense in the operational business context, e.g.:
    • How should workers 'keep up to date with security patches'?
    • Why is it necessary? It's reasonable for workers to question why they have to do anything different, and to ask what's in it for them.
    • What - if anything - do they need to do, or not do? 
    • When should things happen - checking for updates, for instance? Are there to be regular activities, proactive or reactive things, both, or something else?
    • Who is expected to comply with the policy? What about the assurance aspects such as checking/measuring, reporting and achieving adequate compliance? 
    • Who is expected to own and maintain the policy and associated procedures etc., and how and when?
  • Adjusting existing working practices both directly and indirectly affected by the policy, emphasising any important control elements (such as the appropriate people being informed when security-related patches are released, promptly patching high priority servers) and if possible integrating other aspects through more subtle adjustments (e.g. casually mentioning security patching as an example during worker orientation training).
  • Putting in place complementary/supporting controls, not least those specified in the policy, e.g. patch management systems, identification and quarantining of unpatched systems, patch testing for BYOD and corporate systems, patch delivery and follow-up ... (see the sketch just after this list).
  • Operating the policy routinely, gradually becoming part of business-as-usual (hopefully! If it doesn't, it clearly isn't working as intended, suggesting the need to review and revise).
  • Providing compliance incentives or rewards, supplementing noncompliance penalties. This can be a surprisingly motivational approach - perhaps something as simple as a few words of encouragement and thanks to people or teams that make a genuine effort to comply.
  • Building on, strengthening or supplementing the policy where appropriate, in the light of experience, as the policy and related processes and controls mature.
  • Updating the policy as things change, including cross-references to other relevant policies and procedures.
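As promised above, here's a toy sketch of the 'identification of unpatched systems' idea - the inventory, dates and baseline are illustrative assumptions, and a real control would of course query your patch management system rather than a hard-coded list:

```python
# Minimal sketch: flag systems whose recorded patch date falls behind a required
# baseline. The inventory and baseline are illustrative assumptions - a real
# control would query the patch management system, not a hard-coded list.

from datetime import date

REQUIRED_BASELINE = date(2021, 10, 1)  # assumption: patched on/after this date is acceptable

inventory = [
    # (hostname, last_patched, business_priority)
    ("web-01",   date(2021, 10, 12), "high"),
    ("db-01",    date(2021, 8, 30),  "high"),
    ("kiosk-07", date(2021, 6, 2),   "low"),
]

for host, last_patched, priority in inventory:
    if last_patched >= REQUIRED_BASELINE:
        print(f"{host}: compliant (last patched {last_patched})")
    else:
        action = "quarantine and escalate" if priority == "high" else "schedule patching"
        print(f"{host}: BELOW BASELINE (last patched {last_patched}) -> {action}")
```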

 

Policy development procedure/checklist

There's clearly quite a lot to do beyond drafting and approving the policy itself. I guess you could develop a policy about implementing policies (!), but more useful might be a generic procedure or checklist reminding people of what should happen - perhaps something simple along the lines of the bullet points above. It needn't even be documented as such, provided those involved know what needs to be done and do it, routinely, reliably, efficiently and effectively: would further documentation and control add net value i.e. deliver business benefits exceeding the associated costs of yet more red tape? If so, go ahead. If not, well you've reached the end of the line and there are almost certainly more important things to do.

After a short breather to gather my thoughts, I'll wrap up this blog series about the 11 'topic-specific' information security policy examples coming soon in the next release of ISO/IEC 27002. 

The end is nigh!

Friday 22 October 2021

Topic-specific policy 10/11: management of technical vulnerabilities

With respect to whoever crafted the wording of the 10th topic-specific example policy for ISO/IEC 27002:2022, "management of technical vulnerabilities" is the kind of phrase that speaks volumes to [some, switched-on, security-aware] IT pros ... and leaves ord'nry folk perplexed, befuddled and nonplussed. In this case, that may be appropriate if it aligns with the intended audience for the policy, perhaps not if the policy needs to be read, understood and complied with by, say, workers in general, for whom "Patching" is arguably a more apt and widely-known term.

So, do you need to tell workers to keep their IT systems, smartphones and IoT things up to date with security patches? If so, before launching into the policy development process, think very carefully about the title, content and style of your policy - plus the associated procedures, guidelines, awareness and training materials, help-desk scripts or whatever you decide is necessary to achieve your information risk management objectives in this regard (more on that below).

Hinson tip: what are your information risk management objectives in this regard (concerning 'technical vulnerabilities' ... or whatever aspect/s you believe need addressing)? What information risks are you facing, how significant are they (relative to other things on your plate) and how do you intend to treat them? Seriously, think about it. Talk it through with your peers and professional colleagues. Draft a cunning treatment plan for this particular subset of information risks, discuss it with management and refine it. Lather, rinse, repeat until you achieve consensus (or wear down the blockers and negotiate a fragile settlement), and finally you are primed to craft your policy.

Once more, we have your starter-for-ten, a generic patching policy template designed to help get you smartly off the starting blocks:


While we don't presently offer a policy template on vulnerability disclosures (something worth adding to our to-do list, maybe?), we do have others that are to some extent relevant to this topic, for instance on change and configuration management and information systems security. I'll pick up on that point at the end of this blog series.

Aside from the list of 11 policy examples, ISO/IEC 27002:2022 offers further advice on security policies, such as:

"Topic-specific policies should be approved by appropriate managers."

While it is obvious that corporate policies need to be approved (and authorized and mandated and overseen and maintained and ...) by 'management', notice the word "appropriate". Some policies (notably the information security policy) should be approved by senior management - in ISO27k terms "top management", defined as "person or group of people who directs and controls an organization at the highest level" (with notes and further definitions - see ISO/IEC 27000). It may be appropriate for individual topic-specific policies, however, to be 'approved' by senior, middle or even junior managers. So, for instance, if your organisation currently lacks a CISO or ISM but has, say, an IT or HR Manager, or in fact anyone reasonably senior, they could 'approve' the policies: this illustrates the ISO27k principle of being applicable to any kind or size of organisation. The standards avoid being too explicit.

The standard continues: 

"Topic-specific policies should be communicated to relevant personnel and external interested parties, in a form that is relevant, accessible and understandable to the intended reader. The organization can determine the formats and names of these policy documents that meet the organization’s needs. In some organizations, the information security policy and topic-specific policies may be in a single document. The organization may name these topic-specific policies standards, directives, policies or others."

Nice!  So, the people who need to know need to know, fair enough, and it doesn't matter how you tell them. It doesn't even matter whether you call them policies, procedures, catalogs, laws, regulations, lists, banana cupcakes or "Esmirelda".  "Guideline 33a" is just fine, if that suits your purposes. 

Tune-in to the blog tomorrow for a piece about the final topic-specific policy example in '27002, and some thoughts about what it means to 'implement' a policy.

Thursday 21 October 2021

Topic-specific policy 9/11: information classification and handling

I'll admit up-front that I have very mixed feelings about the utility and value of classification as a form of control, at least in the civilian/commercial world outside the government and defence realm.

On the one hand, it is (or rather it should be, thanks to the policies, procedures, guidelines, training and awareness materials and activities) reasonably obvious how to handle correctly classified and labelled hardcopy documents. Computer data - not so much, unless you are using mil-spec classified systems and networks with all manner of mandatory hard-coded built-in bullet-proof controls. 

Do your corporate information security controls include automatic rifles and attitude? Are you at the very top of your game?

On the other hand, even in mil/govt circles, classification and labelling can be tricky and consistency is always an issue. Each level or category of classification covers a range, a spectrum of information risks. Individual items of information falling at any point within the range are likely to be classified, labelled and handled in much the same way - which may not be appropriate in every case. What to do with unlabelled, unclassified or misclassified information is another concern, along with classification reviews and the tendency towards over-classification, which impedes the availability of information for legitimate purposes. Finally, anything marked "TOP SECRET" in big red capitals is surely a magnet for spies, spooks, opportunist thieves, hackers, crackers, journalists, nosy/disaffected workers, fraudsters, criminals ... and even auditors on the prowl. It might as well say "READ ME!". 

So, although we offer a classification policy template, I'm reluctant to recommend classification as a general approach unless it is mandated for your organisation ... in which case your class/category definitions, processes and handling rules are probably already specified by whoever mandated it (perhaps in law), so you would need to check/update the template accordingly.
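If it helps to picture what those class/category definitions and handling rules boil down to, here's a minimal sketch - the levels and rules are illustrative assumptions, not recommendations, and note how unlabelled or unrecognised material defaults to the most restrictive handling:

```python
# Minimal sketch: map illustrative classification levels to handling rules.
# The levels and rules are assumptions for demonstration, not recommendations.

HANDLING_RULES = {
    "PUBLIC":       {"label_required": False, "encrypt_at_rest": False, "external_sharing": True},
    "INTERNAL":     {"label_required": True,  "encrypt_at_rest": False, "external_sharing": False},
    "CONFIDENTIAL": {"label_required": True,  "encrypt_at_rest": True,  "external_sharing": False},
}

def handling_for(classification: str) -> dict:
    """Return handling rules, treating unlabelled or unrecognised items conservatively."""
    return HANDLING_RULES.get(classification.strip().upper(), HANDLING_RULES["CONFIDENTIAL"])

print(handling_for("internal"))
print(handling_for(""))  # unlabelled -> handled as CONFIDENTIAL by default
```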



In summary, the template is here, a basic classification policy starter for just $20. It's not one of the topic-specific policy examples I personally would have selected for the standard, though, and I have serious reservations about the corresponding controls in section 5. To me, it's an outdated, unhelpful and largely irrelevant approach - except perhaps for the military (and I'm not entirely sure about that!). 

Remember, the topic-specific policies in ISO/IEC 27002:2022 are merely illustrative examples, suggested not required, an incomplete menu of possibilities rather than a recipe for success. The same point applies to all the other controls in the standard. They may not be appropriate and worthwhile for your organisation. You may have a better approach in mind or already in place. You probably have other priorities, given your business situation, including of course important things to do other than manage information risk and security per se. ISO/IEC 27001 and other ISO27k standards (such as '27003, '27004 and '27005) help by describing a structured governance framework within which information security policies make sense. It's down to you, though, to craft an approach that not only makes sense but works well in practice for your specific organisation.

Wednesday 20 October 2021

Topic-specific policy 8/11: cryptography and key management

Maybe this particular policy was mentioned in previous editions of ISO/IEC 27002 and picked as a topic-specific policy example for the forthcoming 3rd edition in order to include something directly relevant to governmental organisations, although to be fair crypto is a consideration for all of us these days. Many (most?) websites are now using HTTPS with TLS for encryption, for example, while cryptographic methods are commonly used for file and message integrity checks, such as application/patch installers that integrity-check themselves before proceeding, and password hashing.
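Purely by way of illustration, here's a minimal sketch of two of those everyday cryptographic uses - an integrity check on a downloaded file and password hashing - using nothing more than Python's standard library (the file, its contents and the parameters are illustrative assumptions):

```python
# Minimal sketch of two everyday cryptographic uses: an integrity check on a
# downloaded file, and password hashing. File, contents and parameters are
# illustrative assumptions.

import hashlib, hmac, os

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# 1. Integrity check: compare the file's hash with the digest the supplier published.
with open("example-patch.bin", "wb") as f:           # stand-in for a downloaded installer/patch
    f.write(b"pretend this is an installer")
published_digest = sha256_of("example-patch.bin")    # normally obtained from the vendor
assert hmac.compare_digest(sha256_of("example-patch.bin"), published_digest)

# 2. Password hashing: store the salt and derived key, never the plain password.
salt = os.urandom(16)
derived = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 200_000)
print("stored record:", salt.hex() + "$" + derived.hex())
```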

Here's a glimpse of one I prepared earlier:


Like all our templates, this one is generic. Organisations with specific legal or contractual obligations in this area (such as government and defence organisations bound to employ particular algorithms, key lengths and technologies such as physically secure hardware crypto modules, or companies bound by PCI-DSS) would need to adapt it accordingly. 

You'll see that it mentions the Information Classification Policy: I'll have more to blog about classification tomorrow.

If you've been tagging along on my tiki-tour of the topic-specific policy examples in ISO/IEC 27002:2022, and if you read that LinkedIn piece by Chris Hall that I recommended, you will probably by now recognise the standard document structure we've adopted for all our policy templates. The main elements are:

  • Page header with a logo (our logo in the template, yours to download and customise) and a short, pithy, catchy policy title.
  • The words "information security policy" up-front, to be crystal clear about the nature and ownership of the policy, since some topics could equally belong to other corporate functions (e.g. our "Fraud" policy template is, in fact, an information security policy addressing the information risks associated with fraud, misrepresentation and so on, not an HR or legal policy about disciplinary procedures and compliance).
  • Policy title, big and bold to stand out. The precise wording is important here (I'll return to that point in another blog piece).
  • Policy summary, outlining the main thrust of the policy in a single paragraph for readers who have been sufficiently intrigued by the title to open the document, are unsure whether they ought to read the full policy (e.g. is it applicable to them?), and hopefully as a quick reminder of the content some while after they last read it. 
  • Applicability is stated to indicate that most information security policies apply to 'all workers' (meaning the organization's paid employees and third parties' employees such as contractors and consultants), although some are of more direct concern to particular departments or groups within the organisation.
  • The actual policy, split into three subsections:
    • Background lays out the rationale/purpose and scope of the policy. While this half page or so could be cut out, I find it helps (for most readers, the rational thinkers at least) to set the scene, outline the information risks and so justify the need for the policy, briefly. It easily earns its keep as far as I'm concerned. 
    • Policy axioms (guiding principles) are high level policy statements, usually just one or two brief, pithy and formally worded sentences. These form an important link to the "information security policy", which is the highest level policy in the structure. 
    • Detailed policy statements amplify and explain the axioms and requirements in more pragmatic terms - less formal or stilted, closer to plain English. These range from half to a few pages, depending on the breadth and depth or complexity of the topic.
  • Responsibilities are assigned, preferably to corporate functions or roles rather than named individuals, reducing the amount of policy maintenance to reflect staffing changes. The aim is to clarify what management is expecting the applicable people to do under this policy, although these are merely brief summaries: job/role descriptions, procedures, employment/service contracts, guidelines, work instructions and in some cases other policies (e.g. on auditing) and laws (e.g. the Privacy Officer role) expand on the stated responsibilities in various ways.  

The page layout, colours, fonts and formatting are also as consistent as we can make them across all the policy templates, hence workers who read/study any one should find others familiar and easier to navigate. We use just a handful of MS Word Styles for this, so customers can readily change to 11-point Arial with one inch borders on legal size paper, or whatever their own policy style guides dictate, simply by updating the Word Styles.

The language or writing style is another consistent aspect to all the SecAware policy templates. It helps immensely that I have personally written them all, hence they all reflect my natural style - or rather the particular style I have inevitably acquired through decades of practice, writing thousands of policies, procedures, standards, guidelines, awareness materials, management reports, audit reports, papers, articles and so on, oh and these bloggings of course. 

As organisations mature, they gradually accumulate numerous policies covering various topics, many written by different authors with different goals and different audiences in mind.  Hence it's no surprise to find substantial differences between them - even with the benefit of corporate guidance such as a style guide or policy management policy(!). You may have experienced (suffered!) the curiously officious pseudo-legal phrasing that some naive policy authors think appropriate - 'including but not limited to the following four (4) clauses ...' and so forth. A degree of formality is inevitable. Stilted, archaic language, heretofore and hereunder, is not. It's unhelpful and should be eschewed.

Having said that, my writing style is continually evolving so I can't resist refining the wording every couple of years or so when I review and maintain the policy templates, despite their maturity. That involves systematically updating the entire policy suite by the way, a laborious task given ~80 templates (!). It's an important quality and consistency/integrity check though, as well as an opportunity to ensure that the policies reflect current priorities and the state of the art/good practices in the field (e.g. replacing deprecated cryptographic algorithms with recommended ones), another aspect that is evolving in parallel.

Tuesday 19 October 2021

Topic-specific policy 7/11: backup

This is an interesting policy example to have been selected for inclusion in ISO/IEC 27002:2022, spanning the divide between 'cybersecurity' and 'the business'.

Why do data need to be backed up? What's the purpose? How should it be done? Questions like these immediately spring to mind (mine anyway!) when I read the recommendation for a topic-specific policy on backup ... but as usual, there's more to it than that.

Play along with me on this worked example. If you already have a backup policy (or something with a vaguely similar title), I urge you to dig it out at this point and study it (again!) before returning to read the remainder of this blog. 

Think about it. Does it address those three questions? What else does it cover? What is its scope? Is it readable, understandable, motivational - not just for you but for its intended audience/s? Does it state who those audiences are? Any spelling mistakes, grammatical errors or layout problems? Is it lengthy, officious, boring? Conversely, is it short, cryptic and puzzling? Is it more of a detailed plan for what backups to do, when and how, than a clear and unequivocal statement of management's overall expectations re backups? Is it consistent both internally (no contradictions or omissions) and externally (e.g. does it accord with other policies and adequately reflect any applicable compliance requirements)? 

All good so far? 

If not, hopefully this blog series has given you food for thought! 

Either way, what is it missing? What relevant matters does your backup policy not cover, either failing to mention them at all or glossing over them too superficially to have any impact?

That's a harder question to answer, even if you were the one who wrote the existing policy. We all (me included!) tend to focus on our areas of interest and expertise. Policies are often formulated and written with particular scenarios, situations or incidents in mind, typically forming part of the response that drives continuous improvement. We don't always take the trouble to consult with colleagues, research the topic, explore the risks and controls, and think both broadly and deeply about the subject area - the topic of the policy. Frankly, we just don't think, failing to recognise and address our own biases and failings. 

Don't agree? OK, look again at the start of my second paragraph. I consciously slipped "data" in there, just as I deliberately mentioned "cyber" in the first one. Did you even notice the bias towards IT? 

Is your backup policy exclusively about backing up computer data, most likely digital data from corporate IT systems? Does it lay out the technologies, plus the frequencies and types of backup, in some detail?

Don't get me wrong, that's a very important topic, essential in fact for virtually all modern organisations and indeed individuals today. My concern is that it still only covers part of the problem space, a peak on the risk landscape you could say.

What about information in other forms and locations:

  • Data somewhere out there in the cloud, perhaps dynamically shifting according to demand and supply of storage and processing facilities. 
  • Data on workers' personal devices authorised under a BYOD scheme (or not!).
  • Data on smartphones, laptops and all those IoT things proliferating like swarms of cockroaches in a horror movie. Is any important information stored in your car, for instance, or the smart bus ticket or passport in your pocket? How about the fancy coffee machine along the corridor? [Check your Y2K inventory for details. You have kept it running and up to date, right?] 
  • Software, data, metadata and configuration items tucked away in RAM, in firmware, on computer chips and tapes and floppy disks and DVDs ...
  • Backed up data: yes, backups are themselves valuable and yet vulnerable, so there may well be good reason to make additional backup copies and store them separately, at least for the most critical data that you really cannot afford to lose, ever. [Other controls and risk treatments are also available.]
  • Data plus non-digital information stashed away in home offices, briefcases, purses and wallets.
  • Information written on paper e.g. data entry forms, agreements and contracts formally completed and signed by customers and employees, scribbled notes from important meetings and interviews, annotated reports ... simply look around the offices or check your organisation's expenditure on paper and pens for clues about the amount of hardcopy information. 
  • Intangible yet valuable information in workers' heads, as yet either unexpressed or shared verbally/informally and not recorded.
  • Information belonging to third parties, placed in the organisation's care.
  • Historical information - stuff that can become more valuable over time in contrast to most that loses its flavour as rapidly as stale chewing gum.
  • Lower-value information that is untrustworthy, out of date, irrelevant or whatever - begging questions about what not to back up, given storage constraints and performance limitations of backup and recovery systems.
  • Private information belonging to workers who just happen to be using corporate IT services and equipment.
  • Information in obscure formats and obsolete media.
  • Potentially valuable information that, for whatever reason, appears to be lost, perhaps tucked away in databases or disks that aren't properly structured, indexed, sorted and searchable.
And then there's the question of ensuring the availability of important information services, such as the Internet, as a whole. Down here on the Far Side in rural New Zealand, we have struggled with slow, expensive and unreliable Internet access ever since moving out of Wellington in 2006. All six technologies we have tried to date have been so unreliable that backup/fallback arrangements are essential, and yet unfortunately the backups are also unreliable. Even our last resort method (drive an hour into the city for WiFi or 4G ...) fails during COVID lockdowns. A Web-based business without Internet access is like hardware without software. The LEDs don't even flicker.

It should be obvious by now that challenging conventional boundaries or definitions is a legitimate, creative, even fun part of the policy development process ... but eventually the end product needs to reflect the intended purpose and scope sufficiently well to be accepted and mandated by management.  Scoping is a challenge in itself since the entire organisation uses and depends on information, hence information risk is (to some extent) relevant or integral to everything.  There is potential for turf wars if the information security policy suite dares to cross into areas such as HR, IT or procurement ... implying the need to appreciate and negotiate the boundaries and, if appropriate, collaborate with other functions to align the policies and practices for the best outcome - or, at the very least, to avoid conflicts and yawning gaps.

Narrowing the scope/coverage of a draft policy and focusing on its central aim/s is a useful way to reduce its size and complexity. There's a limit to what can be expressed in, say, 3 or 4 pages, given the need for various boilerplate clauses, and (most importantly) bearing in mind the poor person that ultimately must understand and comply with it. Here's what we came up with:



When we originally drafted the policy, archival was included as a special form of backup. Subsequently, we separated out the policies for backups and archives, partly to emphasise the differences in the information risks and hence controls required. You'll see also that in the end we stuck with data backups for this generic policy template, although we actively encourage customers to adapt the templates to their specifics. If you agree on the importance of having backups for critical information services, people etc., then by all means incorporate those aspects in your information security policy on backups, or develop separate policies, procedures, guidelines or whatever best suits your purposes - a policy on business continuity might be worthwhile, among others.
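If it helps, here's a minimal sketch of the kind of differences we had in mind when splitting the two policies - the values are illustrative assumptions rather than recommendations:

```python
# Minimal sketch contrasting a backup regime with an archive regime, to show why
# the two policies were separated. All values are illustrative assumptions.

regimes = {
    "backup": {
        "purpose":   "recover recent data after loss, corruption or ransomware",
        "frequency": "daily incremental, weekly full",
        "retention": "30-90 days",
        "controls":  ["offline/offsite second copy", "routine restore tests"],
    },
    "archive": {
        "purpose":   "retain records long-term for legal/business reasons",
        "frequency": "on closure of the record",
        "retention": "years, per the retention schedule",
        "controls":  ["periodic media/format refresh", "indexing and access restrictions"],
    },
}

for name, regime in regimes.items():
    print(f"{name}: {regime['purpose']}")
    print(f"   frequency: {regime['frequency']}; retention: {regime['retention']}")
    print(f"   key controls: {', '.join(regime['controls'])}")
```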

By the way, don't get too hung up on the nomenclature. I recommend reading Chris Hall's piece on LinkedIn for a discussion about the naming and structure of policies, procedures etc. Chris is a star, one of several greybeards generously donating their time to support the global community of 4,400 ISO27k fans on the ISO27k Forum.

Monday 18 October 2021

Topic-specific policy 6/11: information security incident management

I'm intrigued by the title of this topic-specific policy from the [draft] 3rd edition of ISO/IEC 27002 - the only one of the eleven example titles in the standard that explicitly states "information security".  

I ask myself why? Is there something special about the management of events classed as 'information security incidents', as opposed to other kinds? 

Hmmmm, yes, there are some specifics, but I'm not entirely convinced of the need for a distinct, unique policy. I feel infosec incidents have more in common with the management of incidents of all kinds than they have differences, hence "Incident management policy" makes more sense to me.

Here's one I prepared earlier.



Organisations deal with events and incidents all the time. Aside from the humdrum routines of business, things don't always go to plan and unexpected situations crop up. Mature organisations typically have incident management policies already, plus the accompanying procedures and indeed people primed and ready to respond to 'stuff' at the drop of a hat. Wouldn't it make sense, therefore, to ensure that "information security incidents" are handled in much the same way as others?

That's fine for mature organisations. For the rest, the SecAware information security policy template on incident management concentrates on the specifics of infosec incidents and outlines incident management in general. A workable infosec policy can prompt the development and maturity of incident management by:

  • Documenting and formalising things - particularly the process, expressing management's expectations and requirements in clear terms (e.g. striking the right balance between investigating and resolving incidents, especially where business continuity is a factor).
  • Stabilising the working practices, de-cluttering things, making them more consistent and hence amenable to management control.
  • Enabling reviews and audits, leading to systematic process improvement where appropriate.
  • Discouraging inappropriate shortcuts (e.g. ineptly investigating serious issues, compromising important forensic evidence) while facilitating escalation and management decisions where appropriate (e.g. determining whether forensic investigation is justified). 
  • Making people aware of their responsibilities, helping them understand their roles in relation to the process and each other (e.g. IT and legal professionals collaborating to investigate and resolve cyber incidents, while others keep out of their way unless asked to participate).

By the way, we also offer a distinct incident reporting policy. Why call that out separately? Our reasoning is that the policies address different audiences: incident management is primarily relevant to the incident management team and managers, whereas incident reporting involves everyone - staff, managers, contractors and other workers, even total outsiders who spot and wish to notify the organisation of incidents such as privacy breaches and physical or staff vulnerabilities ... and on those lines, I'll simply mention whistleblowing and fraud, leaving you to contemplate.

Looking back at this and previous blog pieces in the series, the topic-specific policy examples simply, briefly and ambiguously named in the standard have led us into exploring related areas and stimulated creative thinking ... just as examples are intended to do. I'll circle back to discuss the policy mesh at the end of this series in just a few days.

Saturday 16 October 2021

Topic-specific policy 5/11: networking security

The information risk and security implications of data networking, along with the ubiquity of data networks, make this an obvious policy topic, and naturally we offer a policy template. I alluded to this at the end of the last blog piece as one of several security policies relating to information transfer:


Less obviously, there are also potentially significant information risks and security controls applicable to social networking and social media ... and yes, we have a policy template for that too:



Although 'social media' generally refers to Facebook, Twitter, LinkedIn and the like, many of the information risks pre-date them, going back to the days of in-person personal and business interactions through professional membership organisations, special interest groups, town hall meetings, breakfast clubs and chambers of commerce. Other comms technologies such as the telephone, email and videoconferencing, plus 'groups' and collaborative working, have dramatically expanded our opportunities for social contact, and also materially increased our exposure to global threats. Globalisation is a far bigger issue than 'networking' implies, with pros and cons.

On the upside, ready access to peers, knowledgeable and experienced colleagues and heaps of advice through the Internet makes high quality information very available. It's a fantastic resource for the connected global community. 

On the downside, the sheer volume and variety of information online can be overwhelming. It is tricky to distinguish and sift the wheat from the chaff. Even your ninja Googling skills can only go so far! That dips into the realm of mis/disinformation, bias and fraud, further areas where well-written corporate policies can help. 

I'm circling around an issue that I'll bring up towards the end of this blog series, namely the design of a comprehensive suite of information security policies. It's all very well considering information security policies individually, but we also need to consider them as a whole. Think on - more later.  
 

Friday 15 October 2021

Topic-specific policy 4/11: information transfer


"Information transfer" is another ambiguous, potentially misleading title for a policy, even if it includes "information security". Depending on the context and the reader's understanding, it might mean or imply a security policy concerning:

  • Any passage of information between any two or more end points - network data communications, for instance, sending someone a letter, speaking to them or drawing them a picture, body language, discussing business or personal matters, voyeurism, surveillance and spying etc.
  • One-way flows or a mutual, bilateral or multilateral exchange of information.
  • Formal business reporting between the organisation and some third party, such as the external auditors, stockholders, banks or authorities.
  • Discrete batch-mode data transfers (e.g. sending backup or archival tapes to a safe store, or updating secret keys in distributed hardware security modules), routine/regular/frequent transfers (e.g. strings of network packets), sporadic/exceptional/one-off transfers (e.g. subject access requests for personal information) or whatever. 
  • Transmission of information through broadcasting, training and awareness activities, reporting, policies, documentation, seminars, publications, blogs etc., plus its reception and comprehension.  
  • Internal communications within the organisation, for example between different business units, departments, teams and/or individuals, or between layers in the management hierarchy.
  • "Official"/mandatory, formalised disclosures to authorities or other third parties.
  • Informal/unintended or formal/intentional communications that reveal or disclose sensitive information (raising confidentiality concerns) or critical information (with integrity and availability aspects). 
  • Formal provision of valuable information, for instance when a client discusses a case with a lawyer, accountant, auditor or some other professional. 
  • Legal transfer of information ownership, copyright etc. between parties, for example when a company takes over another or licenses its intellectual property.
Again there are contextual ramifications. The nature and importance of information transfers differ between, say, hospitals and health service providers, consultants and their clients, social media companies and their customers, and battalion HQ with operating units out in the field. There is a common factor, however, namely information risk. The information security controls and other risk treatments (such as risk avoidance e.g. prohibiting social media disclosure of company matters) that are appropriate depend on the information risks in each situation ...

... so information risk identification and risk assessment might be a suitable place to start specifying and drafting an information transfer security policy - or indeed an information risk management policy, since the principles are universally applicable.




On the other hand, it's worth exploring the purpose/objective for such a policy: what is it expected to achieve? What need would it satisfy, and is a policy the best way to proceed?

It may be, for instance, that the intention to exchange valuable information between organisations in the course of a business relationship leads either or both parties to want to clarify the terms and conditions, including the information risk and security implications as well as other aspects such as the nature of the information, its volume and frequency, information ownership, and the media/technologies to be used. The 'policy' in this case would probably be discussed, drafted and eventually embedded in some form of agreement, possibly with accompanying procedures. In other words, the policy may be in the form of one or more contractual clauses plus the mutual understanding and commitments. It would probably be unique to that relationship, although either party may use a template or donor document to speed up the drafting.

Here's one possible approach, the guts of a donor policy provided for your education, consideration, inspiration and perhaps adaptation:

Background

We have numerous commercial relationships with third parties such as suppliers, partners, customers, regulators/authorities and the general public.  Information - sometimes extremely important, valuable or sensitive corporate or personal information - passes routinely between us and these third parties using a variety of communications mechanisms and media (e.g. conversation, emails, network connections and point-to-point links), hence information security is an important consideration.  General information security controls may be sufficient for some situations but are unlikely to offset the significant information security risks associated with routinely exchanging important information assets with third parties.

Policy axioms (guiding principles)

A.          Information security aspects must be assessed and taken fully into account in business relationships involving the exchange of information with third parties.

B.          The associated information security risks must be assessed and mitigated to an acceptable level, while the risks and controls must be actively monitored, managed and maintained during the life of the business relationships.

Detailed policy requirements

1.  Significant business/commercial relationships with external/third party organizations qualify as information assets that must be listed in the Information Asset Register.  In their rôle as Information Asset Owners, the corresponding Relationship Managers are accountable for protecting the information assets, necessitating information security risk assessment and the implementation of adequate information security controls.

2.  Relationship Managers are responsible for assessing the information security risks associated with business relationships and identifying the threats, vulnerabilities and potential impacts of security breaches, preferably using a recognized information security risk analysis process acceptable to the Information Security Manager.  It is generally appropriate to assess information security risks jointly with the third party using a mutually acceptable method, since both parties may face unacceptable information security risks and both parties may require security controls, some of which may be jointly operated.

3.   Information security risks should be minimized through the design, implementation and operation of suitable information security controls, but the particular controls that are required depend on the specific situation at hand.  Generally speaking, strong controls are more expensive to implement and operate, hence the cost of selecting/designing, implementing, using and maintaining the controls should be offset against the cost savings likely to be achieved by reducing the number and/or severity of incidents – in other words, the controls need to be cost-effective.

4.  If the information security risks associated with a particular relationship are determined to be low, relatively low-cost “baseline” controls such as the following are likely to be sufficient:

  • General information security requirements (such as compliance with ISO/IEC 27002) whether incorporated formally into binding commercial contracts or agreements, or expressed and agreed informally in relationship management meetings, by email, by letter etc.; 

  • Procedures or mechanisms for dealing with actual or potential information security breaches, and for communicating the facts in a timely manner to the other party (e.g. regular ‘relationship management’ meetings and reports); 

  • Trust and confidence established by an unblemished record, by reputation and by implicit undertakings between those involved in managing and conducting the relationships.

5.  Additional confidentiality, integrity and/or availability controls might be required for relationships whose risks are assessed as medium and will certainly be needed if the risks are high, or where there are specific compliance obligations on us (e.g. to protect the confidentiality of personal information in our care).  The following controls are merely illustrative examples:

  • Legally-binding contracts or agreements explicitly stating required information security controls, obligations, liabilities, compliance activities etc.; 

  • Encryption of data and/or of network links, storage media etc., typically using specified encryption schemes (i.e. particular encryption algorithms and key lengths) and associated procedures (e.g. for mutual authentication, resetting shared secret keys etc.); 

  • Digital signatures, message digests and similar cryptographic mechanisms to identify corruption or tampering with messages in transit or when stored and retrieved from disk; 

  • Automated network/system monitoring/alerting to identify and block attempts to communicate unencrypted information, plus various other network, system and/or data access and security controls (e.g. regularly-reviewed system security or audit logs); 

  • Prohibition of the assignment of non-security-cleared or similar potentially untrustworthy employees to the relationship, and possibly all such employees to be pre-agreed by the other party before starting work on the relationship; 

  • Compliance of the third party with information security management standards such as ISO/IEC 27001 (whether self-asserted or certified, depending on the levels of risk and trust involved); 

  • Explicit prohibition of onward communication of sensitive proprietary or personal information to any third party, or to certain third parties (e.g. sending Personal Information beyond legally-permissible “Safe Harbor” countries); 

  • Technical security controls for data communications (e.g. antivirus and integrity checks on data received; failover or disaster recovery arrangements for high availability links); 

  • A ‘right of audit’ meaning the right to determine in person or through a mutually trusted intermediary (such as an external auditor, consultant or other competent authority) whether specified security requirements and controls are being upheld by the third party, particularly following a breach; 

  • Various other specific controls recommended by security, legal or other expert advisors.

6.  High risk information must not be passed to a third party until the necessary information security controls have been fully implemented by them.  Ideally, the information exchange should be delayed until the information security controls have been reviewed and confirmed as adequate by our inspection or by some other form of proof acceptable to us.

7.  Once implemented, the information security controls must be used properly and maintained, for instance responding to substantial changes in the risks as the result of greater volumes of information being exchanged.  The Relationship Manager is responsible for maintaining the risk management arrangements and should review the security risks and controls at least every two years, or annually in the case of high risk information exchanges.  The risks and controls should also be reviewed promptly if there are information security incidents or near-misses.  Information Security Management can advise and assist with these activities.
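Stepping outside the template for a moment, here's a minimal sketch of the sort of message-digest mechanism alluded to in item 5 for detecting corruption or tampering in transit, using an HMAC over a shared secret key (the key and message are illustrative assumptions; real implementations would also handle key distribution, rotation and replay protection):

```python
# Minimal sketch (not part of the policy template): detecting corruption of or
# tampering with a message in transit using an HMAC over a shared secret key.
# The key and message are illustrative assumptions; key distribution, rotation
# and replay protection are out of scope here.

import hashlib, hmac, os

shared_key = os.urandom(32)                     # agreed between the two parties in advance
message = b"invoice=12345;amount=9999.00"

tag = hmac.new(shared_key, message, hashlib.sha256).digest()   # sent alongside the message

# The receiving party recomputes the tag and compares it in constant time.
received_message, received_tag = message, tag
expected = hmac.new(shared_key, received_message, hashlib.sha256).digest()
print("integrity verified" if hmac.compare_digest(expected, received_tag) else "corrupted or tampered with")
```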


Information transfers, communications or exchanges are so commonplace and so varied these days that there may be little point in trying to come up with a security policy that applies to all circumstances, particularly if there are adequate policies covering related aspects already. I will cover one in the next blog piece in this series, so tune in soon for the next thrilling episode.

Thursday 14 October 2021

Topic-specific policy 3/11: asset management


This piece is different to the others in this blog series. I'm seizing the opportunity to explain the thinking behind, and the steps involved in researching and drafting, an information security policy through a worked example. This is about the policy development process, more than the asset management policy per se.

One reason is that, despite having written numerous policies on other topics in the same general area, we hadn't appreciated the value of an asset management policy, as such, even allowing for the ambiguous title of the example given in the current draft of ISO/IEC 27002:2022.  The standard formally but (in my opinion) misleadingly defines asset as 'anything that has value to the organization', with an unhelpful note distinguishing primary from supporting assets. By literal substitution, 'anything that has value to the organization management' is the third example information security policy topic in section 5.1 ... but what does that actually mean?

Hmmmm. 

Isn't it tautologous? Does anything not of value even require management? 

Is the final word in 'anything that has value to the organization management' a noun or verb i.e. does the policy concern the management of organizational assets, or is it about securing organizational assets that are valuable to its managers; or both, or something else entirely?  

Well, OK then, perhaps the standard is suggesting a policy on the information security aspects involved in managing information assets, by which I mean both the intangible information content and (as applicable) the physical storage media and processing/communications systems such as hard drives and computer networks?

Seeking inspiration, Googling 'information security asset management policy' found me a policy by Sefton Council along those lines: with about 4 full pages of content, it covers security aspects of both the information content and IT systems, more specifically information ownership, valuation and acceptable use:

1.2. Policy Statement 

The purpose of this policy is to achieve and maintain appropriate protection of organisational assets. It does this by ensuring that every information asset has an owner and that the nature and value of each asset is fully understood. It also ensures that the boundaries of acceptable use are clearly defined for anyone that has access to the information. 

Interesting! I like the way they summarize the policy, condensing it down to just a couple of key sentences. From the busy and easily-distracted reader's perspective, this important chunk determines whether they should continue reading and taking notice of the remainder of the policy. From the policy developer and authorisers' perspectives, it focuses attention on the stated matters.

Aside: our policies all include policy axioms, generally just one or two of them. Crafting these is harder than it looks - balancing readability against formality and tone, while remaining on-topic. In practice, we find a separate policy summary in a less formal and stilted style is also worthwhile, as well as a set of supporting policy statements with details expanding pragmatically on the axioms, giving workers the guidance to know what they are expected to do in practice to comply with the policy and so satisfy management's stated objectives.

Re Sefton, we already have policies covering information ownership and classification which, arguably, is a form of [e]valuation, plus a pack of eight Acceptable Use Policies, albeit closer to guidelines than policies in style. But how does the council policy differ?

I notice the council's listing of "important" information assets:

  • filing cabinets and stores containing paper records
  • computer databases
  • data files and folders
  • software licenses
  • physical assets (computer equipment and accessories, PDAs, cell phones) 
  • key services
  • key people
  • intangible assets such as reputation and brand 

Ignoring the now dated technology references (in this ancient 2008 policy!), I'm impressed that it not only recognises the value of paper records as well as computer data, but calls out the final three bullet points: those are not commonly considered in this context (we the people are much neglected!), but they are undoubtedly highly valuable forms of information - cloud services for a modern day example, plus intellectual property and trade secrets. They are clearly all assets, however defined. I quite like the thought of the policy emphasizing particularly valuable information assets ... although I'd change the emphasis a little towards high-risk information assets, linking the policy to information risk management.

I'm angling towards developing an "Information [asset] protection policy" at this point, as opposed to an "Asset management policy". The title of a policy is quite important, being an obvious indication of its coverage and purpose. Don't be fixated on the particular policy examples given by '27002, especially the more ambiguous ones such as this and, yes, even "information security policy". Adopting the wrong (misleading, inappropriate, ambiguous ...) title markedly increases the risk that workers will blithely disregard it without even taking the trouble to read the content, and could cause managers to misunderstand it, mistakenly believing the organisation has a policy on a distinct topic. What a waste, an opportunity lost!

Sefton Council's policy goes on to mandate an [information] asset inventory, a control listed separately in Annex A of ISO/IEC 27001 and explained in '27002. The underlying principle is obvious: management needs to appreciate their [information] assets in order to both protect and exploit them appropriately, maximising their value. So that's something well worth considering ... but pragmatically. Based on experience, I'm keen to avoid the inventory taking on a life of its own, sucking in resources beyond the point that it adds net value. It has to be a useful tool for business purposes, not an objective in its own right. That means keeping it to the essentials, cataloguing just those high-risk information assets, perhaps ... which brings it closer to a risk register in style. Maybe they can be combined - something as simple as a column in the risk register listing the associated information assets? 
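To show what I mean, here's a minimal sketch of a risk register with an extra column listing the associated information assets - the entries are illustrative assumptions:

```python
# Minimal sketch: a risk register with an "associated assets" column, doubling
# as a lightweight inventory of the high-risk information assets. Entries are
# illustrative assumptions.

import csv, io

risk_register = [
    {"risk": "Customer database breached", "likelihood": 3, "impact": 5,
     "owner": "Sales Director", "associated_assets": "CRM database; nightly backups"},
    {"risk": "Key design documents lost",  "likelihood": 2, "impact": 4,
     "owner": "Engineering Manager", "associated_assets": "CAD file share; project wiki"},
]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=risk_register[0].keys())
writer.writeheader()
writer.writerows(risk_register)
print(out.getvalue())
```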

What else?

Among other things, the information asset management policy from the Lamar Institute of Technology covers information disposal, prompting another consideration: information assets have cradle-to-grave lifecycles during which their value and the associated utility and risks vary. Should this be reflected, somehow, in the policy? I'm idly thinking of constructing a circular diagram to point out the key risk and security aspects visually - a picture to break up the turgid mass of words in almost all formal policies, increasing readability and engagement for at least some readers ... 

... which leads me to another aspect: who is the intended audience for the policy? What is its purpose? What is it meant to achieve for the organisation? These questions are worth addressing, briefly, in the policy preamble/introduction.

Also, how will the policy generate more value than it costs to design, develop, review, mandate, publish, implement, maintain and achieve compliance with? Those costs are not inconsiderable, although I have never (yet!) seen them overtly weighed up once someone sets the process rolling with 'We need a policy on X. Make it so!'

There are things that can be done to minimise the costs and maximise the value of policies, for instance:

  • Engaging people with the right expertise in the policy development process, including a professional competent in effective business communications as well as the subject matter of information asset management, risk, security and all that - someone with the talent, knowledge and passion for both the process and the end product. A small, focused core team (perhaps just one or two people) can be supplemented at the relevant points by input from representative implementers, trainers and auditors, plus reviewers and authorisers from management.
  • Treating the development of a policy as a small project, applying conventional (hopefully highly efficient and effective!) project management techniques and appropriate (lightweight) governance arrangements. As with software development, a little structure and planning up front pays dividends later.
  • Investing thinking time and energy into the policy specification, research and design phase, rather than blundering directly into the writing. Remember: Steady > Aim > Fire is the preferred sequence. It's the same, by the way, if you are reviewing and maintaining existing policies: clarify the destination and plan the route before blundering into the forest of issues ahead. 
  • Using a corporate template to speed the process along while producing a policy in the organisation's preferred style and format, consistent with the content of other policies (e.g. cross-referenced, using some sort of policy matrix or map) - see the illustrative skeleton after this list. Naturally, someone first needs to design and develop such a template, possibly even mandating it through a policy management policy if that helps (cue spinning head!).
  • Using a commercial policy template as a quick-start - a basis for the customisation inevitably needed by the organisation. Bear in mind, though, that if a single purchased template works well for you, you will probably want more, which is fine provided the supplier offers a comprehensive, coherent, consistent suite of policies of the same quality at a reasonable price. Think ahead a little: do you need a point solution covering a specific policy matter, or an integrated system of good practice policies and controls covering the entire gamut of information risk, security, privacy and so forth, in the context of corporate/business objectives, governance and compliance?
  • Seeking honest feedback regarding the existing policies from those actively involved in their management, use, authorisation, assurance etc., refining the whole policy management process accordingly, rather than simply updating or withdrawing/replacing problematic policies individually. This is part of the maturation of an ISMS and the organisation's approach to information risk, security and related areas. 
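For what it's worth, here's a rough sketch of the kind of skeleton a corporate template might define. The section names are generic assumptions about typical policy structure, not taken from Sefton's policy, the SecAware templates or any other particular product:

```python
# Illustrative policy template skeleton - the section names are assumptions
# about typical good practice, not a description of any specific template.
POLICY_TEMPLATE_SECTIONS = [
    "Version, status, owner and next review date",
    "Purpose and objectives (the preamble/introduction)",
    "Scope and intended audience",
    "Policy statements (the rules themselves)",
    "Responsibilities and accountabilities",
    "Compliance, exceptions and enforcement",
    "Related policies, standards and guidelines (the policy map/matrix)",
]

def new_policy_draft(title: str) -> str:
    """Generate a skeleton draft so every policy starts out in the same format."""
    parts = [f"# {title}"]
    for section in POLICY_TEMPLATE_SECTIONS:
        parts.append(f"\n## {section}\n\nTODO")
    return "\n".join(parts)

print(new_policy_draft("Information asset protection policy"))
```

Even if nobody ever automates it, simply agreeing the standard sections makes drafting, reviewing and comparing policies quicker.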
That's enough from me on the process: here's a glimpse of one of the SecAware policy templates concerning the protection and [legitimate] exploitation of intellectual property (valuable information).



Wednesday 13 October 2021

Topic-specific policy 2/11: physical and environmental security

Yesterday I blogged about the "access control" topic-specific policy example in ISO/IEC 27002:2022. Today's subject is the "physical and environmental security" policy example.

Physical security controls are clearly important for tangible information assets, including IT systems and media, documentation and people - yes, people.

The first "computers" were humans who computed numbers, preparing look-up tables to set up field guns at the right elevation and azimuth angles to hit designated targets at specific ranges given the wind speed and direction, terrain and ordinance - quite a lot of factors to take into account in the field, so the pre-calculated tables helped speed and accuracy provided the gunners used them correctly anyway, and I'm sure they were highly trained and closely overseen!

Aside from a little mental arithmetic, most of us don't "compute" many numbers today but we still process staggering quantities of information flowing constantly from our senses and memories. In the work context, the trite mantra "Our people are our greatest assets" may be literally true, given the knowledge, experience, expertise and creativity of workers. We have valuable intangible proprietary and personal information locked in our heads, trade secrets, innovative ideas and more. We are information assets, although to be fair the true values vary markedly (and, yes, some are liabilities!). Why do you think some people are paid more than others?

Aside from the commercial value aspect, workers require adequate protection against unacceptable health and safety risks according to national laws and regulations. We also deserve respect, personal space, privacy, understanding, fair and reasonable compensation and so on, raising ethical and further legal or contractual obligations. 

Environmental protection ensures that workers have reasonably pleasant workplaces, partly for health and ethical reasons, partly for productivity. Computer systems likewise work more reliably within manufacturer-specified ambient temperature ranges and require appropriate electricity supplies; the total demand for cooling and power can be significant in a large computer room or data centre. Oh, and don't forget the physical security and environmental controls for portable equipment and home offices - safe storage, for instance, plus security cables, etched corporate logos, good quality power supplies and UPSs, spare batteries and more.
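To put the data centre point in perspective, here's a back-of-the-envelope calculation using purely hypothetical figures - the rack count, per-rack load and PUE are assumptions for illustration, not measurements from any real facility:

```python
# Back-of-the-envelope data centre power and cooling estimate - hypothetical figures
racks = 100          # number of equipment racks (assumed)
kw_per_rack = 10     # average IT load per rack, in kW (assumed)
pue = 1.5            # Power Usage Effectiveness: total facility power / IT power (assumed)

it_load_kw = racks * kw_per_rack        # 1,000 kW of IT load
total_facility_kw = it_load_kw * pue    # 1,500 kW including cooling, lighting etc.
overhead_kw = total_facility_kw - it_load_kw

print(f"IT load: {it_load_kw:,} kW")
print(f"Total facility demand: {total_facility_kw:,.0f} kW")
print(f"Cooling and other overheads: {overhead_kw:,.0f} kW")
```

At that scale the electricity bill, the cooling plant and the resilience arrangements (UPSs, generators) all become significant information security considerations in their own right.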

Environmental controls relating to noxious by-products, greenhouse gases, dangerous emissions, excessive noise, explosive/flammable products, dangerous processes etc. are particularly important for chemical and manufacturing industries, among others ... but are they 'information security controls'? I would argue yes for some, perhaps most of them. For instance, the electric valve and sluice gate controllers at a sewage treatment plant - computerised, networked smart things - are at risk from malware, hackers, inept system administration, configuration errors, software design flaws and programming bugs, mechanical problems, power glitches and more.

So, there is clearly a wide variety of information risks and controls in this area, collectively presenting significant challenges in various organisations (e.g. an airport) and situations (e.g. on a passenger jet). 

Conversely, many other organisations get by with nothing special in the way of physical and environmental protection - although perhaps they simply don't appreciate the risks they are implicitly accepting unless/until something goes seriously wrong, such as an office accident, theft, fire, flood or power cut.

If you determine that a policy in this area would be worthwhile for your organisation but don't presently have one, the SecAware "physical information security" policy template is a starting point. It doesn't attempt to cover everything, merely laying down the fundamentals common to most organisations - a foundation on which to build an appropriate risk management and control structure.


These are exactly the kinds of controls that I look for during computer installation audits, site security reviews etc. It's surprising how often I find basic physical security issues in otherwise well-managed companies. Checking the policies in this area is just one part of the job: if I find none at all, or something rough-and-ready and badly worded, with limited awareness and training, there's a fair chance I'll find obvious issues simply by wandering about with my eyes and ears open, camera in hand.

The SecAware physical and environmental security policy template costs $20 - just over $2 per page of good practice guidance in that case. We charge the same flat $20 for our other topic-specific policy templates too, regardless of their length, since length is not a good guide to the quality or value of a policy. If anything, shorter is better, provided the policy covers all the essentials and is readable. Perhaps we should trim this one back further than we did the last time we reviewed the templates.

Tomorrow's topic is asset management.