Sunday 24 February 2019

How to challenge audit findings

Although I wrote this in the context of ISO/IEC 27001 certification audits, it applies in other situations where there is a problem with something the auditors are reporting, such as a misguided, out-of-scope or simply wrong audit finding.

Here are some possible strategies to consider:
  • Have a quiet word with the auditor/s about it, ideally before it gets written up and finalized in writing. Discuss the issue – talk it through, consider various perspectives. Negotiate a pragmatic mutually-acceptable resolution, or at least form a better view of the sticking points.
  • Have a quiet word with your management and specialist colleagues about it, before the audit gets reported. Discuss the issue. Agree how you will respond and try to resolve this. Develop a cunning plan and gain their support to present a united front. Ideally, get management ready to demonstrate that they are definitely committed to fixing this e.g. with budget proposals, memos, project plans etc. to substantiate their commitment, and preferably firm timescales or agreed deadlines.
  • Gather your own evidence to strengthen your case. For example:
    • If you believe an issue is irrelevant to certification since there is no explicit requirement in 27001, identify the relevant guidance about the audit process from ISO/IEC 27007 plus the section of 27001 that does not state the requirement (!)
    • If the audit finding is wrong, prove it wrong with credible counter-evidence, counter-examples etc. Quality of evidence does matter but quantity plays a part. Engage your extended team, management and the wider business in the hunt.
    • If it’s a subjective matter, try to make it more objective e.g. by gathering and evaluating more evidence, more examples, more advice from other sources etc. ‘Stick to the facts’. Be explicit about stuff. Choose your words carefully.
    • Ask us for second opinions and guidance e.g. on the ISO27k Forum and other social media, industry peers etc.
  • Wing-it. Duck-and-dive. Battle it out. Cut-and-thrust. Wear down the auditor’s resolve and push for concessions, while making limited concessions yourself if you must. Negotiate using concessions and promises in one area to offset challenges and complaints in another. Agree on and work towards a mutually-acceptable outcome (such as, um, being certified!).
  • Be up-front about it. Openly challenge the audit process, findings, analysis etc. Provide counter-evidence and arguments. Challenge the language/wording. Push the auditors to their limit. [NB This is a distinctly risky approach! Experienced auditors have earned their stripes and are well practiced at this, whereas it may be your first time. As a strategy, it could go horribly wrong, so what’s your fallback position? Do you feel lucky, punk?]
  • Suck it up! Sometimes, the easiest, quickest, least stressful, least risky (in terms of being certified) and perhaps most business-like response is to accept it, do whatever you are being asked to do by the auditors and move on. Regardless of its validity for certification purposes, the audit point might be correct and of value to the business. It might actually be something worth doing … so swallow your pride and get it done. Try not to grumble or bear a grudge. Re-focus on other more important and pressing matters, such as celebrating your certification!
  • Negotiate a truce. Challenge and discuss the finding and explore possible ways to address it. Get senior management to commit to whichever solution/s work best for the business and simultaneously persuade/convince the auditors (and/or their managers) of that.
  • Push back informally by complaining to the certification body’s management and/or the body that accredited them. Be prepared to discuss the issue and substantiate your concerns with some evidence, more than just vague assertions and generalities.
  • Push back hard. Review your contract with the certification body for anything useful to your case. Raise a formal complaint with the certification body through your senior management … which means briefing them and gaining their explicit support first. Good luck with that. You’ll need even stronger, more explicit evidence here. [NB This and the next bullet are viable options even after you have been certified … but generally, by then, nobody has the energy to pursue it and risk yet more grief.]
  • Push back even harder. Raise a complaint with the accreditation body about the certification body’s incompetence through your senior management … which again means briefing them and gaining their explicit support first, and having the concrete evidence to make a case. Consider enlisting the help of your lawyers and compliance experts willing to get down to the brass tacks, and with the experience to build and present your case.
  • Delay things. Let the dust settle. Review, reconsider, replan. Let your ISMS mature further, particularly in the areas that the auditors were critical of. Raise your game. Redouble your efforts. Use your metrics and processes fully.
  • Consider engaging a different certification body (on the assumption that they won’t raise the same concerns … nor any others: they might be even harder to deal with!).
  • Consider engaging different advisors, consultants and specialists. Review your extended ISMS team. Perhaps push for more training, to enhance the team’s competence in the problem areas. Perhaps broaden ‘the team’ to take on-board other specialists from across the business. Raise awareness.
  • Walk away from the whole mess. Forget about certification. Go back to your cave to lick your wounds. Perhaps offer your resignation, accepting personal accountability for your part in the situation. Or fire someone else!
Although that's already a long list of options, I'm sure there are others including combinations of the above. The fact is that you have choices in how to handle such challenges: your knee-jerk response may not be ideal.

For bonus marks, you might even raise an incident report concerning the issue at hand, then handle it in the conventional manner through the incident management part of your ISMS. An adverse audit finding is, after all, a concern that needs to be addressed and resolved just like other information incidents. It is an information risk that has eventuated. You will probably need to fix whatever is broken, but first you need to assess and evaluate the incident report, then decide what (if anything) needs to be done about it. The process offers a more sensible, planned and rational response than jerking your knee. It's more business-like, more professional. I commend it to the house.

Friday 22 February 2019

Classification versus tagging

I'm not happy with the idea of 'levels' in many contexts, including information classification schemes. The term 'level' implies a stepped progression in one dimension. Information risk and security is more nuanced or fine-grained than that, and multidimensional too.
The problems with 'levels' include:
  • Boundary/borderline cases, when decisions about which level is appropriate are arbitrary but the implications can be significant; 
  • Dynamics - something that is a medium level right now may turn into a high or a low at some future point, perhaps when a certain event occurs; 
  • Context e.g. determining the sensitivity of information for deliberate internal distribution is not the same as for unauthorized access, especially external leakage and legal discovery (think: internal email); 
  • Dependencies and linkages e.g. an individual data point has more value as part of a time sequence or data set ... 
  • ... and aggregation e.g. a structured and systematic compilation of public information aggregated from various sources can be sensitive; 
  • Differing perspectives, biases and prejudices, plus limited knowledge, misunderstandings, plain mistakes and secret agendas of those who classify stuff, almost inevitably bringing an element of subjectivity to the process despite the appearance of objectivity; 
  • And the implicit "We've classified it and [maybe] done something about securing it ... so we're done here. Next!". It's dismissive. 
The complexities are pretty obvious if you think about it, especially if you have been through the pain of developing and implementing a practical classification scheme. Take a blood pressure reading, for instance, or an annual report or a system security log. How would you classify them? Whatever your answer, I'm sure I can think of situations where those classifications are inappropriate. We might agree on the classification for a particular situation, hence a specific level or label might be appropriate right there and then, but information and situations are constantly changing, so in the real world the classification can become misleading and unhelpful. And if you insist on narrowing the classification criteria, we're moving away from the main advantage of classification, which is to apply broadly similar risk treatments to each level. Ultimately, every item needs its own unique classification, so why bother?

Another issue with classification schemes is that they over-emphasize one aspect or feature of information - almost always that's confidentiality. What about integrity, availability, utility, value and so forth? I prefer a conceptually different approach using several tags or parameters rather than a single classification 'level'. A given item of information, or perhaps a collection of related items, might usefully be measured and tagged according to several parameters such as:
  • Sensitivity, confidentiality or privacy expectations; 
  • Source e.g. was it generated internally, found on the web, or supplied by a third party?; 
  • Trustworthiness, credibility and authenticity - could it have been faked?; 
    • Accuracy and precision, which matter for some applications, quite a lot really; 
  • Criticality for the business, safety, stakeholders, the world ...; 
  • Timeliness or freshness, age and history, hinting at the information lifecycle; 
  • Extent of distribution, whether known and authorized or not; 
  • Utility and value to various parties - not just the current or authorized possessors; 
  • Probability and impact of various incidents i.e. the information risks; 
  • Etc. 
The tags or parameters required depend on what needs to be done. If we're determining access rights, for instance, access-related tags are more relevant than the others. If we're worried about fraud and deception, those integrity aspects are of interest. In other words, there's no need to attempt to fully assess and tag or measure everything, right now: a more pragmatic approach (measuring and tagging whatever is needed for the job in hand) works fine.
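To make that distinction concrete, here is a minimal sketch in Python (with entirely hypothetical tag names, values and task categories, not a real scheme) of how per-parameter tagging might support the 'job in hand' approach, each task consulting only the parameters it cares about rather than a single overall level:

```python
from dataclasses import dataclass, field

@dataclass
class InfoItem:
    """An item of information carrying several independent tags."""
    name: str
    tags: dict = field(default_factory=dict)  # parameter -> label

# Which parameters matter for which job - illustrative only.
TASK_PARAMETERS = {
    "access_control": ["sensitivity", "distribution"],
    "fraud_check": ["trustworthiness", "accuracy"],
}

def relevant_tags(item: InfoItem, task: str) -> dict:
    """Return just the tags needed for the task in hand."""
    return {p: item.tags.get(p, "untagged") for p in TASK_PARAMETERS[task]}

log = InfoItem("system security log", tags={
    "sensitivity": "internal",
    "trustworthiness": "high",
    "accuracy": "timestamped",
    "distribution": "security team only",
})

print(relevant_tags(log, "access_control"))
# -> {'sensitivity': 'internal', 'distribution': 'security team only'}
```

The point of the sketch is simply that nothing forces every parameter to be assessed up front: an item can remain "untagged" on a given parameter until some task actually needs it.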

Within each parameter, you might consider the different tags or labels to represent levels but I'm more concerned with the broader concept of taking into account a number of relevant parameters in parallel, not just sensitivity or whatever. 

All that complexity can be hidden within Gary's Little World, handled internally by the information risk and security function and related colleagues. Beyond that, in the wider organization, things get messy in practice but, generally speaking, people working routinely with information "just know" how important/valuable it is, what's important about it, and so on. They may express it in all sorts of ways (not just in words!), and that's fine. They may need a little guidance here and there but I'm not keen on classification as a method for managing information risk. It's too crude for me, except perhaps as a basic starting point. More useful is the process of getting people to think about this stuff and do whatever is appropriate under the circumstances. It's one of those situations where the journey is more valuable than the destination. The analysis generates understanding and insight which are more important than the 'level'.

Thursday 21 February 2019

Victimization as a policy matter

An interesting example of warped thinking from Amos Shapir in the latest RISKS-List newsletter:

"A common tactic of authoritarian regimes is to make laws which are next to impossible to abide by, then not enforce them. This creates a culture where it's perfectly acceptable to ignore such laws, yet the regime may use selective enforcement to punish dissenters -- since legally, everyone is delinquent."
Amos is talking (I believe) about national governments and laws but the same approach could be applied by authoritarian managers through corporate rules, including policies. Imagine, for instance, a security policy stating that all employees must use a secret password of at least 35 random characters: it would be unworkable in practice but potentially it could be used by management as an excuse to single out, discipline and fire a particularly troublesome employee, while at the same time ignoring noncompliance by everyone else (including themselves, of course).

It's not quite as straightforward as I've implied, though, since organizations have to work within the laws of the land, particularly employment laws designed to protect individual workers from rampant exploitation by authoritarian bosses. There may be a valid legal defense for workers sacked in such circumstances, given the general lack of enforcement of the policy and the reasonable assumption that the policy is not in force, regardless of any stated mandate or obligation to comply. That in turn has implications for all corporate policies and other rules (procedures, work instructions, contracts and agreements): if they are not substantially and fairly enforced, they may have no legal standing. 

[IANAL. This piece is probably wrong and/or inapplicable. It's a thought-provoker, not legal advice.]

Wednesday 20 February 2019

Policy governance

Kaspersky blogged about security policies in the context of human factors making organizations vulnerable to malware:
"In many cases, policies are written in such a difficult way that they simply cannot be effectively absorbed by employees. Instead of communicating risks, dangers and good practices in clear and comprehensive instructions, businesses often give employees multipage documents that everyone signs but very few read – and even less understand."
That is just the tip of an iceberg. Lack of readability is just one of at least six reasons why corporate security policies are so often found lacking in practice:
  • Lack of scope: ‘security policies’ are typically restricted to IT/cyber security matters, leaving substantial gaps, especially in the wider aspects of information risk and security such as human factors, fraud, privacy, intellectual property and business continuity.

  • Lack of consistency: policies that were drafted by various people at various times for various reasons, and may have been updated later by others, tend to drift apart and become disjointed. It is not uncommon to find bald contradictions, gross discrepancies or conflicts. Security-related obligations or expectations are often scattered liberally across the organization, partly on the corporate intranet, partly embedded in employment contracts, employee handbooks, union rulebooks, printed on the back of staff/visitor passes and so on. 

  • Lack of awareness: policies are passive, formal and hence rather boring written documents - dust-magnets. They take some effort to find, read and understand. Unless they are accompanied by suitable standards, procedures, guidelines and other awareness materials, and supported by structured training, awareness and compliance activities to promote and bring them to life, employees can legitimately claim that they didn’t even know of their existence - which indeed they often do when facing disciplinary action. 

  • Lack of accountability: if it is unclear who owns the policies and to whom they apply, noncompliance is the almost inevitable outcome. This, in turn, makes it risky for the organization to discipline, sack or prosecute people for noncompliance, even if the awareness, compliance and enforcement mechanisms are in place. Do your policies have specific owners and explicit responsibilities, including their promotion through awareness and training? Are people - including managers - actually held to account for compliance failures and incidents?

  • Lack of compliance: policy compliance and enforcement activities tend to be minimalist, often little more than sporadic reviews and the occasional ticking-off. Circulating a curt reminder to staff shortly before the auditors arrive, or shortly after a security incident, is not uncommon. Policies that are simply not enforced are merely worthless, whereas those that are literally unenforceable (including those where strict compliance would be physically impossible or illegal) can be a liability: management believes they have the information risks covered while in reality they do not. Badly-written, disjointed and inconsistent security policies are worse than useless.

  • Lack of maintenance: many of these issues can be traced back to lacking or inconsistent policy management processes. Policy ownership and purpose are often unclear. Even simple housekeeping activities such as version control and reviews are beyond many organizations, while policies generally lag well behind emerging issues.

That litany of issues and dysfunctional organizational practices stems from poor governance ... which intrigues me to the extent that I'm planning to write an article about it in conjunction with a colleague. He has similar views to me but brings a different perspective from working in the US healthcare industry. I'm looking forward to it.

Thursday 14 February 2019

Online lovers, offline scammers

Social engineering scams are all the rage, a point worth noting today of all days.

A Kiwi farmer literally lost the farm to a scammer he met and fell for online. 

Reading the original TVNZ news report (no longer online), this was evidently a classic advance fee fraud or 419 scam that cost him a stunning $1.25m. 

This is not the first time I've heard of victims being drawn in by scammers to the extent that they refuse to accept that they have been duped, even when it is pointed out to them. There's probably something in the biology of our brains that leads us astray - some sort of emotional hijack going on, bypassing the normal rational thought processes.

On a more positive note, the risks associated with online dating are reasonably well known and relatively straightforward to counter. And old-school offline dating is not risk-free either. 

Relationships generally are a minefield ... but tread carefully and amazing things can happen. Be careful (safe hex, remember), be lucky.

Saturday 9 February 2019

Inform and motivate

The malware encyclopedia destined for inclusion in our next awareness module is coming along nicely ...

It's interesting to research and fun to write in an informative but more informal style than the glossary, with several decidedly tongue-in-cheek entries so far and a few graphics to break up the text.

I guess it will end up at about 20 pages, longer than usual for a general security awareness briefing but 100% on-topic. There's a lot to say about malware, being such a complex and constantly evolving threat. I hope the relaxed style draws readers in and makes them think more carefully about what they are doing without being too do-goody, too finger-wagging. Prompting changes of attitudes and behaviors is our aim, not just lecturing the troops. Awareness and training is pointless if it's not sufficiently motivational.

PS After trimming out the more obscure entries, it worked out at 11 pages plus the cover page.

Friday 8 February 2019

Creative security awareness

We're slaving away on the 'malware update' security awareness and training module for March. Malware is such a common and widespread issue that we cover it every year, making it potentially tedious and dull. People soon get bored by the same old notices - not exactly ideal for awareness and training purposes. 

Simply tarting-up and repackaging malware awareness materials we have delivered previously would be relatively easy for us but is not sufficient. Our subscribers deserve more! Aside from needing to reflect today's malware threats and current security approaches, we must find new angles and inject new content each time in order to spark imaginations and engage the audiences, again and again. 

Luckily (in a way), malware is a writhing vipers' pit, constantly morphing as the VXers and antivirus pros do battle on a daily basis. So what's new this year?

The rapid evolution of malware risks is a story worth telling, but how can we actually do that in practice? We favor a strongly visual approach using an animated sequence of Probability Impact Graphs to explain, year-by-year, how specific malware risks have emerged, grown and then mostly faded away as the world gets on top of them. 

It would be great to have the foresight to predict next year's malware PIG, projecting forward from today's, but that's tricky, even for malware experts (which I'm not). The best I can do is pick out a few trends that illustrate the kinds of things we might be facing over the remainder of 2019 ... and perhaps make the point that uncertainty is the very essence of 'risk'. If we knew exactly what to expect, we could of course prepare for it, or better yet avoid or prevent it happening: we don't, hence we can't, hence we need to be ready for anything - which links neatly back to January's awareness topic of resilience and business continuity, and forward to April's on incident detection. 

And so our cunning strategic plan continues to bear fruit. Although we cover different topics every month, they are all part of information security, all in and around the same core area. The approach is quite deliberate: we're poking at the same blob from different directions, exposing and exploring different aspects in order to help our audiences appreciate the whole thing, whilst at the same time avoiding information overload (trying to cover it all at once) and boredom (the blinkered view). Sometimes we take a step back for more of an overview, occasionally we dive deeper into some particular aspect that catches our attention and hopefully intrigues our customers, especially those with relatively mature awareness and training programs. Advanced topics tend to be quite narrow in scope, but even with those we make a conscious effort to link them into the broader context. 

Key words such as 'information', 'risk', 'security', 'control', 'governance' and 'compliance' inevitably crop up in almost every module. Talking of which, we've come up with a new style of awareness material for March, a malware encyclopedia derived from our information security glossary. The full glossary is a substantial piece of work, over 300 pages long, a whole book's worth of content. It's a fantastic reference source for professionals and specialists working in the field, so good in fact that we use it ourselves since remembering all the fine details on more than 2,000 information security terms is beyond us.

I'll have more to say about the encyclopedia tomorrow. For now, must press on, lots to do.

Thursday 7 February 2019

Risks and opportunities defined

In the ISO27k context, 'risks and opportunities' has at least four meanings or interpretations:
  1. Information risks and information opportunities are the possibilities of information being exploited in a negative and positive sense, respectively. The negative sense is the normal/default meaning of risk in our field, in other words the possibility of harmful consequences arising from incidents involving information, data, IT and other ‘systems’, devices, IT and social networks, intellectual property, knowledge etc. This blog piece is an example of positively exploiting information: I am deliberately sharing information in order to inform, stimulate and educate people, for the benefit of the wider ISO27k user community (at least, that's my aim!). 
  2. Business risks and business opportunities arise from the use of information, data, IT and other ‘systems’, devices, IT and social networks, intellectual property, knowledge etc. to harm or further the organization’s business objectives, respectively. The kind of manipulative social engineering known as ‘marketing’ and ‘advertising’ is an example of the beneficial use of information for business purposes. The need for the organization to address its information-related compliance obligations is an example that could be a risk (e.g. being caught out and penalized for noncompliance) or an opportunity (e.g. not being caught and dodging the penalties) depending on circumstances.
  3. The ISMS itself is subject to risks and opportunities. Risks here include sub-optimal approaches and failure to gain sufficient support from management, leading to lack of resources and insufficient implementation, severely curtailing the capability and effectiveness of the ISMS, meaning that information risks are greater and information opportunities are lower than would otherwise have been achieved. Opportunities include fostering a corporate security culture through the ISMS leading to strong and growing support for information risk management, information security, information exploitation and more.
  4. There are further risks and opportunities in a more general sense. The possibility of gaining an ISO/IEC 27001 compliance certificate that will enhance the organization’s reputation and lead to more business, along with the increased competence and capabilities arising from having a compliant ISMS, is an example of an opportunity that spans the three perspectives above. ‘Opportunities for improvement’ involve possible changes to the ISMS, the information security policies and procedures, other controls, security metrics etc. in order to make the ISMS work better, where ‘work better’ is highly context-dependent. This is the concept of continuous improvement, gradual evolution, maturity, proactive governance and systematic management of any management system. Risks here involve anything that might prevent or slow down the ongoing adaptation and maturation processes, for example if the ISMS metrics are so poor (e.g. irrelevant, unconvincing, badly conceived and designed, or the measurement results so utterly disappointing) that management loses confidence in the ISMS and decides on a different approach, or simply gives up on the whole thing as a bad job. Again, the opportunities go beyond the ISMS to include the business, its information, its objectives and constraints etc.
Unfortunately, in my opinion, ISO/IEC JTC 1/SC 27 utterly confused interpretation (1) with (3) in 27001 clause 6. As I understand it, the ISO boilerplate text for all management systems standards concerns sense (3), specifically. Clause 6 should therefore have focused on the planning required by an organization to ensure that its ISMS meets its needs, both initially and in perpetuity, gradually integrating the ISMS as a routine, integral and beneficial part of the organization’s overall governance and management arrangements. Instead, ‘27001 clause 6 babbles on about information security objectives rather than the governance, management and planning needed to define and satisfy the organization’s objectives for its ISMS. The committee lost the plot - at least, that’s what I think, as a member of SC 27: others probably disagree! 

Friday 1 February 2019

Security awareness module on mistakes

Security awareness and training programs are primarily concerned with incidents involving deliberate threats such as hackers and malware. In February, we take a look at mistakes, errors, accidents and other situations that inadvertently cause problems with the integrity of information, such as:
  • Typos;
  • Using inaccurate data, often without realizing it;
  • Having to make decisions based on incomplete and/or out-of-date information;
  • Mistakes when designing, developing, using and administering IT systems, including those that create or expose vulnerabilities to further incidents (such as hacks and malware);
  • Misunderstandings, untrustworthiness, unreliability etc. harming the organization’s reputation and its business relationships.
Mistakes are far more numerous than hacks and malware infections but thankfully most are trivial or inconsequential, and many are spotted and corrected before any damage is done. However, serious incidents involving inaccurate or incomplete information do occur occasionally, reminding us (after the fact!) to be more careful about what we are doing. 
The awareness and training materials take a more proactive angle, encouraging workers to take more care with information especially when handling (providing, communicating, processing or using) particularly important business- or safety-critical information – when the information risks are greater.

Learning objectives

The latest security awareness and training module:
  • Introduces the topic, describing the context and relevance of 'mistakes' to information risk and security;
  • Expands on the associated information risks and typical information security controls to cut down on mistakes involving information;
  • Offers straightforward information and pragmatic advice, motivating people to think - and most of all act – so as to reduce the number and severity of mistakes involving information;
  • Fosters a corporate culture of error-intolerance through greater awareness, accountability and a focus on information quality and integrity.
Our subscribers are encouraged to customize the content supplied, adapting both the look-and-feel (the logo, style, formatting etc.) to suit their awareness program’s branding, and the content to fit their information risk, security and business situations. Subscribers are free to incorporate additional content from other sources, or to cut-and-paste selections from the awareness materials into staff newsletters, internal company magazines, management reports etc. making the best possible use of the awareness content supplied.

So what about your learning objectives in relation to mistakes, errors and so forth? Does your organization have persistent problems in this area? Is this an issue that deserves greater attention from staff and management, perhaps in one or more departments, sites/business units or teams? Have mistakes with information ever led to significant incidents? What have you actually done to address the risk?

HINT: Don't be surprised if the same methods lead to the same results. "The successful man will profit from his mistakes ... and try again in a different way" [Dale Carnegie].