Friday 29 November 2019

Social engineering awareness module



December 2019 sees the release of our 200th security awareness and training module, this one covering social engineering. The topic was planned to coincide with the end of year holiday period - peak hunting season for social engineers on the prowl, including those portly, bearded gentlemen in red suits, allegedly carrying sacks full of presents down chimneys. Yeah right!
I'm fascinated by the paradox at the heart of social engineering. Certain humans threaten our interests by exploiting or harming our information. They are the tricksters, scammers, con-artists and fraudsters who evade our beautiful technological and physical security controls, exploiting the vulnerable underbelly of information security: the people. At the same time, humans are intimately involved in protecting and legitimately exploiting information for beneficial purposes. We depend on our good people to protect us against the bad people.
Vigilance is often the only remaining hurdle to be overcome, making security awareness and training crucial to our defense. It’s do or die, quite literally in some cases! 
The module concerns information risks, controls and incidents involving and affecting people:
  • Various types of social engineering attacks, scams, cons and frauds – phishing being just one of many topical examples;
  • Exploitation of information and people via social media, social networks, social apps and social proofing e.g. fraudulent manipulation of brands and reputations through fake customer feedback, blog comments etc.;
  • The social engineer’s tradecraft i.e. pretexts, spoofs, masquerading, psychological manipulation and coercion.
While there are many indiscriminate scams and cons in operation, most are relatively minor (except, perhaps, ransomware). However, social engineering attacks and frauds specifically targeting the organization through its workforce are of greater concern. 
Adversaries who patiently research us and our people through social media and social networks stand a better chance of gaining our trust, reducing our wariness of unknown people and unusual requests, so catching us off-guard. Our being cautious about what we reveal to outsiders makes their task that bit harder, a subtle but effective control.
Creative scammers are developing ever more sophisticated attacks, sometimes combining hacking, malware, physical site penetration and social engineering methods. Business Email Compromise, for instance, is highly lucrative, some attacks netting tens of millions of dollars by tricking professionals into making fraudulent payments from corporate bank accounts, bypassing the normal checking and authorization controls due to some trumped-up emergency. Duping staff into installing malware or changing payee bank account numbers are just two of their cunning ploys.





I'm especially pleased with these three A-to-Z guides covering social engineering scams, techniques and controls respectively - a neat set with plenty of meaty content in an engaging format.





Buy the materials today at SecAware.com and download them instantly: all our content is electronic, provided as MS Office files mostly, so that you can customize and adapt them to suit your specific needs. If you don't like our logo, swap it for yours. If our version of a social engineering policy doesn't quite work for your organization, hack it about as much as you like.

Thursday 28 November 2019

Risks, dynamics and strategies


Of information risk management, "It's dynamic" said my greybeard friend Anton Aylward - a good point that set me thinking as Anton so often did.

Normally we address information risks as if they were static situations, using our crude risk models and simplistic analyses, yet we know many things are changing ... sometimes unpredictably, although often there are discernible trends.

On Probability-Impact Graphics, it is possible to represent changing risks with arrows or trajectories, or even time-sequences. I generated an animated GIF PIG once showing how my assessment of malware risks had changed over recent years, with certain risks ascending (and projected to increase further) whereas others declined (partly because our controls were reasonably effective).  

[Click the PIG to watch it dance]
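Although nothing beats the animation, the underlying idea - risks as (probability, impact) points drifting over time - is simple to sketch in code. The risk names and yearly scores below are invented purely for illustration, not my actual assessments:

```python
# Sketch: risk trajectories on a Probability-Impact Graph (PIG).
# Each risk is a series of (probability, impact) pairs, one per year,
# scored 0..1. All figures here are hypothetical.

trajectories = {
    "ransomware":   [(0.3, 0.5), (0.5, 0.7), (0.7, 0.8)],
    "spam":         [(0.8, 0.3), (0.7, 0.2), (0.6, 0.2)],
    "cryptomining": [(0.1, 0.2), (0.4, 0.3), (0.6, 0.3)],
}

def trend(points):
    """Classify a risk as rising, falling or stable by comparing the
    combined probability*impact score at the start and end of the series."""
    first = points[0][0] * points[0][1]
    last = points[-1][0] * points[-1][1]
    if last > first * 1.1:
        return "rising"
    if last < first * 0.9:
        return "falling"
    return "stable"

for name, points in trajectories.items():
    print(f"{name}: {trend(points)}")
```

A real PIG would plot the trajectories as arrows; this just classifies the direction of travel, which is often the message the animation is trying to convey.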




It's tricky though, and highly subjective ... and the added complexity/whizz-factor tends to distract attention from the very pressing current risks, plus the uncertainties that make evaluating and treating the risks so, errrr, risky (e.g. I didn't foresee the rise of cryptomining malware, and who knows what novel malware might suddenly appear at any time?).

A simpler approach is to project or imagine what will be the most significant information risks for, say, the year or two or three ahead. You don't need many, perhaps as few as the "top 5" or "top 10", since treating them involves a lot of work, while other risks are often also reduced coincidentally as controls are introduced or improved. It's possible to imagine/project risks even further out, which may suit a security architectural development or strategic planning approach e.g. planning to implement biometrics in a few years' time to address increasing requirements for worker authentication.
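Shortlisting a "top 5" is little more than ranking scored risks - a minimal sketch, with entirely hypothetical risks and scores:

```python
# Sketch: picking the "top 5" projected information risks.
# The risks and their projected scores (0..1) are made up for illustration.

projected = {
    "targeted phishing / BEC": 0.72,
    "ransomware": 0.64,
    "insider fraud": 0.41,
    "cloud outage": 0.38,
    "lost/stolen devices": 0.33,
    "website defacement": 0.12,
    "media sanitization lapses": 0.09,
}

def top_n(risks, n=5):
    """Return the n highest-scoring risks, highest first."""
    return sorted(risks, key=risks.get, reverse=True)[:n]

print(top_n(projected))
```

The hard part, of course, is not the ranking but the projection: the scores encode somebody's judgement about the year or three ahead.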

Another aspect of strategic planning for information risk and security management is that the risk modelling, analysis, treatment and projections are all inherently uncertain, therefore taking us into the realm of resilience and contingency thinking. An ISO27k Information Security Management System (or, in fact, any structured approach to managing the corporation's risks) that helps the organization cope with an uncertain future is an asset, whereas one that rigidly restricts its options may turn out to be a liability if things don't quite go to plan.

The point of this ramble, prompted by Anton's throwaway yet insightful comment about dynamics, is the need to consider both the 'here and now' and the future - even if you find yourself still desperately trying to catch up with the past!

Tuesday 26 November 2019

7 ways to improve security awareness & training

Although 7 Ways to Improve Employee Development Programs by Keith Ferrazzi in the Harvard Business Review is not specifically about information security awareness and training, it's straightforward to apply it in that context. The 7 ways in bold below are quoted from Keith's paper, followed by my take.

1. Ignite managers’ passion to coach their employees.  I quite like this one: the idea is to incentivize managers to coach the workforce. As far as I'm concerned, this is an inherent part of management and leadership, something that can be enabled and encouraged in a general manner not just through explicit (e.g. financial) incentives. For me, this starts right at the very top: a proactive CEO, MD and executive/leadership team is in an ideal position to set this ball rolling on down the cascade - or not. If the top table is ambiguous or even negative about this, guess what happens! So, right there is an obvious strategy worth pursuing: start at, or at the very least, include those at the very top of the organization ... which means taking their perspectives and addressing their current information needs, preferred learning styles and so forth (more below: directors and execs are - allegedly - as human as the rest of us!).

2. Deal with the short shelf-life of learning and development needs. 'Short shelf-life' is a nice way to put it. In the field of information risk and security, the emergence of novel threats that exploit previously unrecognized vulnerabilities, causing substantial business impacts, is a key and recurrent challenge. I totally agree with the need to make security awareness an ongoing, ideally continuous activity, drip-feeding workers with current, pertinent information and guidance all year long rather than attempting to dump everything on them in a once-in-a-blue-moon event, session or course. Apart from anything else, keeping the awareness materials and activities topical makes them more interesting than stale old irrelevant and distracting junk that is 'so last year' (at best!).

3. Teach employees to own their career development. An interesting suggestion, this, especially for the more involved infosec topics normally taught through intensive training courses rather than general spare-time awareness activities. I'm not sure off-hand how this suggestion would work in practice, but it occurs to me that periodic employee appraisals and team meetings provide ample opportunities to offer training and encourage workers to take up whatever suits their career and personal development aspirations.

4. Provide flexible learning options. This hardly needs saying, does it? Maybe it is news to some that 'on demand' learning can usefully exploit workers' free time, filling in odd moments in the working day that would otherwise go to waste. It's not quite that simple, though: the awareness and training content should be appealing, engaging and worthwhile for the individuals in order to encourage them to participate, which in turn means it ideally needs to be developed by creative professionals with a good appreciation of both the audiences and the learning content. There's more to it than making stuff 'accessible'.

5. Serve the learning needs of more virtual teams. For me, this goes hand-in-hand with suggestions 1 and 4. 'Virtual teams' comprised of geographically-dispersed social groups present both opportunities and challenges for security awareness and training, especially if you accept that 'the virtual team' extends way beyond the organization these days.

6. Build trust in organizational leadership. Keith asserts provocatively that "People crave transparency, openness, and honesty from their leaders. Unfortunately, business leaders continue to face issues of trust". Hmmmm. If true (and I'm not at all sure I accept that, at least not as a general statement of fact), it undermines all aspects of management and leadership, not just security awareness, making this a more fundamental and potentially very serious issue for corporations. On the other hand, Keith's suggestion to “lead by example” is sound, regardless of how deep the issues go. For me, this is another inherent part of management, leadership and motivation - a word that is conspicuously and curiously absent from the HBR article. Openly addressing "What's in it for me?" is an obvious means of motivating people, especially if coupled with both enforcement and reinforcement - in other words, don't just threaten to hammer people for doing the wrong things, entice them to do the right things through rewards and incentives of all sorts (again, not just financial). 

7. Match different learning options to different learning styles. It is hardly rocket-surgery to suggest that individuals vary in their preferred 'learning styles', and although Keith only refers to Millennials using "cell phones, computers, and video games consoles", it's not hard to interpret this advice much more openly. For example, some people prefer to discover/learn stuff by reading, others by being told, others by doing. Some of us consider stuff before either accepting and internalizing it, or rejecting it, or (as with this blog piece) adapting and incorporating information and advice into a broader framework known as 'experience'. Some prefer to be told, simply and straightforwardly (and in far fewer words than this blog) what to do or not to do (lists of up to seven items for instance ...), and may only engage to the extent necessary to read the instructions, or view a diagram. Some "don't have the time for this", and some of us just love to explore the topic at our leisure. A few naturally resent being told to do anything and will rebel ... unless they are persuaded that it's in their best interests to comply (which can be tough!). Most of us have our interests and concerns, plus our non-interests and unconcerns, and we all differ, hence any attempt to offer a one-size-fits-all approach to security awareness and training is (I believe) doomed to failure.

Friday 22 November 2019

Who owns compliance?

For some weeks now on the ISO27k Forum we've been vigorously and passionately debating whether an Information Security Management System should, or should not, include the organization's compliance with "information security-related" laws, regulations and other obligations such as contractual clauses specifying compliance with PCI-DSS.

The issue arises because:
  • The relevant infosec compliance section is tucked away at the end of ISO/IEC 27001 Annex A, which has an ambiguous status with respect to '27001 certification. Although Annex A is discretionary rather than mandatory, certifiable organizations must use Annex A as a checklist to confirm that their ISMS incorporates all the information security controls necessary to address the information risks within scope of the ISMS. Interpret that paradox as you will ... and hope that the certification auditors take the same line;
  • It could be argued that, in a very broad sense, all the laws, regs, contracts, standards, ethical codes etc. which apply to the organization are "information security-related". The requirements are all forms of information with associated information risks. Therefore, they fall at least partially within the remit of an ISMS;
  • Likewise, "compliance", as a whole, could be seen as an information security control, a suite of organizational activities and measures to both satisfy and be able to demonstrate conformance with requirements, plus the associated assurance, reinforcement (awareness, acceptance) and enforcement aspects. In philosophical terms, compliance is an integrity issue, and integrity is part of information security, therefore compliance is part of infosec; 
  • However, most organizations either fail to identify and manage some of these risks and opportunities, or choose to manage them in other ways, for example through Legal and Compliance or Internal Audit departments. In practice, compliance with, say, the accounting and tax laws, or the health and safety regs, falls largely to the respective departments, teams and individuals who specialize in those areas. There is limited involvement of the information risk and security pros, for example advising on the need for integrity, access and backup controls for the financial systems. In the event of conflict among the specialists, issues should be escalated to management, hopefully without blood being drawn, even if heads need to be knocked together to get things resolved.
The “mandatory ISMS documents” paper in the free ISO27k Toolkit draws distinctions between documentation that is:
  1. Formally and explicitly required by '27001 - the documentation that is clearly mandatory for certification, reading the standard as strictly as I guess a lawyer might read it [IANAL!]; 
  2. Informally implied by ‘27001 - additional evidence that, in practice, is typically used to demonstrate to the certification auditors that all the standard's requirements are fully satisfied;
  3. Potentially required by the organization, not by the standard as such, to direct, operate and control the ISMS according to business objectives.
    At one end of the scale, the ‘keep it simple’ approach to designing and implementing an ISMS emphasizes (1): if ‘being certified’ is the prime focus, the purpose of having an ISMS at all, then doing the absolute bare minimum to achieve that with as little as possible of (2) and (3) helps achieve that specific aim as quickly and cheaply as possible. The end result, as I see it, is a minimalist ISMS that satisfies the standard but only just. The business value to the organization arises more from having the compliance certificate than from the ISMS itself. For some organizations, that’s good enough, job done. Well OK it's a starting point: it may evolve from there.

    At the other end of the scale, the pure ‘business-driven’ approach emphasizes (3): regardless of what the standard demands or expects, the purpose of a business-driven ISMS is to support and enable the business to manage its information risks and security controls. The end result, as I see it, could be a bloated ISMS, bigger and more complex than the minimalist version.

    There are risks associated with both approaches:
    • An extremely minimalist ISMS might run into trouble with certain certification auditors who interpret the standard differently and hence demand more of (2) and (3) than what the organization believes it needs to do for compliance. It also might not earn its keep and fail, for example if the compliance certificate turns out to be an asset of little value, perhaps even a liability if the limitations of the extreme minimalist approach come to light. It could be a dog!
    • A bloated ISMS might experience conflict between ‘doing what’s best for the business’ and ‘doing what is demanded by the standard’, and hence might not be certifiable. Being bigger and more complex also implies greater risks: the ISMS might lose focus and direction. It too might fail. It could be a monster!
    So, personally, I prefer an approach part way between those extremes: do whatever the standard requires or implies, plus whatever extras suit the organization’s objectives and can be justified on business grounds. It seems to me a pragmatic ISMS should:
    • Sail through the certification audit, since it more than satisfies the mandatory requirements; 
    • Exploit the sage advice offered by '27001 and other standards (ISO27k plus whatever); 
    • Exploit and supplement/support the expertise elsewhere in the organization, building effective working relationships with assorted experts and managers on compliance and other matters; 
    • Incorporate joint rather than sole control in areas where the ISMS scope overlaps other functions, working collaboratively together towards shared business goals, addressing the bigger picture (a governance issue); 
    • Have a long-term future, since it also addresses broader business objectives relating to the systematic management of information risk and security; 
    • Do all that in a businesslike manner - cost-effectively, with strong governance and priorities that align with the organization's objectives and values. 
    Business alignment is, as I see it, key to the long-term success of the ISMS and, naturally, I have plenty more to say on that!

    For now I'll end by mentioning three other specialist areas that intersect the ISMS scope: IT, risk and business continuity. The issues are much the same as with compliance.

    Monday 18 November 2019

    Enough is enough

    Keeping ISO27k Information Security Management Systems tight, constrained within narrow scopes, avoiding unnecessary elaboration, seems an admirable objective. The advantages of ISMS simplicity include having less to design, implement, monitor, manage, maintain, review and audit. There's less to go wrong. The ISMS is more focused, a valuable business tool with a specific purpose rather than a costly overhead. 

    All good. However, that doesn't necessarily mean that it is better to have fewer ISMS documents. In practice, simplifying ISMS documentation generally means combining docs or dispensing with any that are deemed irrelevant. That may not be the best approach for every organization, especially if it goes a step too far.

    Take information security policies for example. Separate, smaller policy docs are easier to generate and maintain, {re}authorize and {re}circulate individually than a thick monolithic “policy manual”. It’s easier for authors, authorisers and recipients to focus on the specific issue/s at hand. That's important from the governance, awareness and compliance perspectives. At a basic level, what are the chances of people actually bothering to read the change management/version control/document history info, then check out all the individual changes (many of which are relatively insignificant), when yet another policy manual update drops into their inbox? In practice, it ain't gonna happen, much to the chagrin of QA experts!

    On the other hand, individual policies are necessarily interlinked, forming a governance mesh: substantial changes in one part can have a ripple effect across the rest, which means someone has the unenviable task of updating and maintaining the entire suite, keeping everything reasonably consistent. Having all the policies in one big document makes maintenance easier for the author/maintainer, but harder for change managers, authorisers and the intended audiences/users.

    As if that’s not challenging enough already, bear in mind that information risk and security is itself just part of corporate management, with obvious links to IT, risk management, HR, compliance and many other areas, some of which are more obscure or tenuous (e.g. health & safety is an information security issue in the sense that people are information assets worth protecting). The ripples go all the way, and flow both ways: changes in, say, IT or HR policies can have an effect on information risk and security, requiring policy updates there too.

    Even within the ISMS, extending your policy management approach to take in the associated procedures plus awareness and training materials multiplies the problems. Extending it to include myriad other ISMS-related documentation makes it worse again. 

    Alternative approaches include using a ‘document management system’ or ‘policy management system’ – essentially a database system used to manage and control the materials as a coherent set – and hybrid approaches such as Word’s “compound document” facility – with one master doc linking to all the subsidiary docs, one for each policy. Here again there are pros and cons, not least the costs involved plus the rigidity and red-tape they inevitably introduce.

    Rationalising and simplifying the ISMS documentation to reduce the practical problems and costs clearly makes a lot of sense, but be careful: information risk and security is an inherently complex, far-reaching concept. There’s a lot to it. If for instance you drop a given policy from the ISMS suite on the basis that it is only marginally relevant, too narrow, too obscure or whatever, that leaves you without a stated policy in that area, which may have implications elsewhere, implications that may not be immediately obvious. Damn those ripples!
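    Those dangling references can at least be made visible. Here's a minimal sketch of a link check across a policy suite, using invented policy names, flagging citations of a policy that has been dropped:

```python
# Sketch: spotting dangling cross-references in a policy suite.
# Each policy maps to the list of policies it cites.
# All policy names here are hypothetical, for illustration only.

policies = {
    "acceptable-use": ["password-policy", "email-policy"],
    "password-policy": [],
    "email-policy": ["acceptable-use", "byod-policy"],  # byod-policy was dropped
}

def broken_links(suite):
    """Return (source, target) pairs where a policy cites one
    that no longer exists in the suite."""
    return [(src, tgt) for src, refs in suite.items()
            for tgt in refs if tgt not in suite]

print(broken_links(policies))
```

A real document or policy management system would do this (and much more) for you, but even a crude check like this catches the ripples a dropped or renamed policy leaves behind.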

    Bottom line: governing, structuring, managing and maintaining ISMS documentation is tougher than you may think. The trick is to find the best balance point for your organization, specifically, and the generic standards can only offer so much guidance on that.

    Friday 15 November 2019

    Risky business

    Physical penetration testing is a worthwhile extension to classical IT network pentests, since most technological controls can be negated by physical access to the IT equipment and storage media. In Iowa, an incident in which two professional pentesters were arrested, jailed and taken to court illustrates the importance of getting the legalities right for such work. 

    A badly-drafted pentest contract and 'get out of jail free' authorization letter led to genuine differences of opinion about whether the pentesters were or were not acting with due authority when they broke into a court building and were arrested. 

    With the court case pending against the pentesters, little errors and omissions, conflicts and doubts in the contract have taken on greater significance than either the pentest firm or its client anticipated, despite both parties appreciating the need for a contract. They thought they were doing the right thing by completing the formalities. Turns out maybe they hadn't.

    I hope common sense will prevail and all parties will learn the lessons here, and so should other pentesters and clients. The contract must be air-tight (which includes, by the way, being certain that the client has the legal authority to authorize the testing as stated), and the pentesters must act entirely within the scope and terms as agreed (in doubt, stay out!).  Communications around the contract, the scope and nature of work, and the tests themselves, are all crucial, and I will just mention the little matter of ethics, trust and competence.

    PS  An article about the alleged shortage of pentesters casually mentions:
    "The ideal pen tester also exhibits a healthy dose of deviancy. Some people are so bound by the rules of a system that they can’t think beyond it. They can’t fathom the failure modes of a system. Future penetration testers should have a natural inclination toward pushing the boundaries – especially when they are told, in no uncertain terms, not to do so."
    Hmm. So pentesters are supposed to go beyond the boundaries in their testing, but remain strictly within the formally contracted scope, terms and conditions. 'Nuff said.

    PPS  Charges against the duo were dropped ~4 months after the incident.

    Tuesday 12 November 2019

    On being a professional

    While Googling for something else entirely, I chanced across this statement on an old forum thread:

    "The essence of my job as an information security architect is to understand the balance between risk (legal, practical, and otherwise) and the need for an organization to conduct business efficiently. I think a lot of what I do really does boil down to seeing the other side of things; I know what the “most secure” way is, but I also have to understand that implementing it might mean debilitating restrictions on the way my employer does business. So what I have to do is see their point of view, clearly articulate mine, and propose a compromise that works. There’s a reason a lot of IT security folks become lawyers."

    Nicely put, Darren! While personally I'd be reluctant to claim that I 'know what the most secure way is', the point remains that an information security professional's job (or indeed any professional's) revolves around achieving workable compromises. For us, it's about helping or persuading clients and employers to identify and reduce their information risks to 'reasonable' levels, then maintaining the status quo through ongoing risk management.

    Some of our professional peers struggle with this, particularly inexperienced ones with IT backgrounds. They (well OK, we) can come across as assertive, sometimes to the point of being arrogant and pig-headed, obstinate or even rude. Things 'must' be done in a certain way - their way. They are trained professionals who have been taught the 'most secure way' and are unwilling to countenance any other/lesser approach. Situations appear black or white to them, with no shades of grey.

    Along with Darren, presumably, I view most situations as greys, sometimes multicoloured or even multidimensional due to inherent complexities and differing perspectives. There is almost always more to a situation than first appears, and often more to it than I appreciate even after studying it hard. I embrace ambiguity. I value flexibility and open-mindedness, and strive to be flexible and open-minded in my work: for me, it's an integral part of 'being professional'. 

    Such pragmatism is fine ... up to a point. However there are situations where it gets harder to back down and eventually I may stand my ground, refusing to compromise any further on my core values (particularly personal integrity). That, too, is a part of 'being professional'. 

    There are behavioural clues that I'm approaching my sticking point, such as:
    • Doubling-down on the analysis, carefully reviewing and reconsidering the position, searching even harder for those 'workable compromises';
    • Openly acknowledging what I know about the situation, including other perspectives, ambiguities, the limits of my/our knowledge and (ideally) the pros and cons of the range of options available;
    • Being explicit about my advice/recommendations, explaining myself as clearly as I can - generally in writing;
    • Focusing on 'what's best for the organization' and 'the business' rather than me/us as individuals, or our professional judgement, or best practices, compliance obligations or whatever;
    • Trying (not always successfully!) to distinguish the relationship, personal and more subjective or emotive issues from [what I believe to be] the objective situation and decisions at hand;
    • Either negotiating the workable compromise, or playing my trump card - usually something along the lines of "They are your information risks, not mine. You are accountable for the risk management decisions you make, but I stand by my advice." That's my reasonably polite but hardly subtle version of take-it-or-leave-it, my-way-or-the-highway - and I mean it. I have literally walked away from untenable situations and don't regret it one bit.
    Talking of which, I'm so busy now that I'm turning down new work because I don't have the energy and time to do things 'properly'. Must dash, things to do. 

    Sunday 10 November 2019

    Strategic risk management

    There's an old, old joke about a passing stranger asking a farmer for directions to Limerick. "Well," says the farmer, "If oi was you, oi wouldn't start from here".

    So it is with infosec strategies. Regardless of where your organization may be headed, by definition you set out from a less than ideal starting point. If it was ideal, you wouldn't be heading somewhere else, would you? That naive perspective immediately suggests two alternatives:
    1. Given where you are today, plan your route accordingly.

    2. Regardless of where you are today, focus exclusively on the destination and how best to get there.
    Actually, those are just two of many possibilities. It's even possible to do both: strategic thinking generally includes a good measure of blue-sky idealist thinking, tempered by at least a modicum of reality and pragmatism. 'We are where we are'. We have a history and finite resources at our disposal ... including limited knowledge about our history, current situation and future direction. What's more, the world is a dynamic place and we don't exist in a vacuum, hence any sensible infosec strategy needs to take account of factors such as competitors, compliance and other challenges ahead - situational awareness plus conjecture about how the situation might conceivably change as we put our cunning strategy into practice (as in chess). 

    That's risk, information risk in fact, amenable to information risk management in the conventional, straightforward, systematic manner:
    • Identify and characterise the risk/s, both negative and positive (opportunities, the possibility that things might turn out even better than planned);
    • Quantify and evaluate the risk/s;
    • Decide what to do about them;
    • Do it! Finalise the strategy, negotiate its approval (with all that entails) and make it so;
    • Manage and monitor things as the strategy unfolds and changes inevitably happen;
    • Learn new stuff.
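    The first few steps can be caricatured as a toy risk register - the appetite threshold, risks and scores below are all invented for illustration:

```python
# Sketch: identify -> quantify/evaluate -> decide, as a toy risk register.
# Risk appetite, risk entries and scores are hypothetical.

APPETITE = 0.25  # risks scoring above this warrant active treatment

risks = [
    # (name, probability 0..1, impact 0..1)
    ("supplier data breach", 0.4, 0.8),
    ("strategy leak", 0.2, 0.6),
    ("missed market opportunity", 0.5, 0.3),  # an opportunity/positive risk
]

def evaluate(probability, impact):
    """Crude quantification: combined probability*impact score."""
    return probability * impact

def decide(score):
    """Crude treatment decision: mitigate above appetite, else accept."""
    return "mitigate" if score > APPETITE else "accept"

for name, p, i in risks:
    score = evaluate(p, i)
    print(f"{name}: score={score:.2f} -> {decide(score)}")
```

Real-world risk evaluation and treatment decisions are far messier, of course - which is exactly why the later bullets (manage, monitor, learn) matter so much.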
    That final bullet is usually an implicit part of the process. We discover flaws in our strategy, things that don't quite go to plan, activities that take longer or go in different directions for all sorts of reasons. 'We are where we are' as a result of past and current strategies, successes and failures, and there's a load of learning points there if you think about it:
    • Do we often over- or under-estimate things? How much variation is there, and is it biased one way or the other?
    • Are we frequently blind-sided by unexpected events?
    • Is it always a struggle to get anywhere, with too little energy to overcome the organization's inertia?
    • Are we resource-constrained? Which resources are the tightest? Is there any slack we might redeploy?
    • Do we almost always achieve what we set out to achieve? Are we pushing hard enough?
    • Are we creative? Are we early, middle or late adopters, ahead, within or behind the curve? Do we miss out on opportunities, and if so what kinds, typically? Compared to our peers and competitors, are we usually in the right place at the right time?
    That's all in addition to learning about our strengths and weaknesses in information risk and security management, controls, threats, vulnerabilities, impacts, governance, compliance, assurance and so forth: I'm waffling on about gaining knowledge of the process of strategic risk management, figuring out why we ended up right here, lost, floundering about in this b(l)og, searching for Limerick ...
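    The risk management process bulleted above can be sketched in code, for those who like their concepts concrete. This is a toy illustration only: the field names, the crude likelihood-times-impact scoring and the example entries are all invented for the sketch, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a simple risk register (illustrative only)."""
    name: str
    kind: str          # "threat" or "opportunity" - risks cut both ways
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (trivial) .. 5 (severe, or highly beneficial)
    treatment: str = "undecided"   # e.g. avoid, mitigate, transfer, accept, exploit

    def score(self) -> int:
        # Crude likelihood x impact evaluation, common in practice
        return self.likelihood * self.impact

def prioritise(register: list[Risk]) -> list[Risk]:
    """Evaluate and rank the risks, ready for treatment decisions."""
    return sorted(register, key=lambda r: r.score(), reverse=True)

register = [
    Risk("Strategy blindsided by new regulation", "threat", 3, 4),
    Risk("Early-adopter advantage in new market", "opportunity", 2, 5),
    Risk("Key project overruns its budget", "threat", 4, 2),
]

for r in prioritise(register):
    print(f"{r.score():2d}  {r.kind:11s} {r.name}")
```

    The 'learn new stuff' step would then involve periodically comparing those scores and treatment decisions against what actually happened, feeding the lessons back into the register.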

    Thursday 7 November 2019

    Super management systems

    ISO 22301, already an excellent standard on business continuity, has just been revised and republished. Advisera has a useful page of info about ISO 22301 here.

    There’s quite a bit of common ground between business continuity and information risk and security, especially as most organizations are highly dependent on their information, IT systems and processes. The most significant risks are often the same, hence it makes sense to manage both aspects competently and consistently. The ISO ‘management system’ structured approach is effective from the governance and management perspective. 

    Since the two disciplines are complementary, aligning or coordinating the infosec and business continuity management systems yields several valuable benefits. 

    Extending that thought, it occurs to me that most if not all other areas of management also have information risk and security implications:
    • Physical site security and facilities management (e.g. reliable power and cooling for the servers);
    • IT and information management (dataflows, information architecture, information systems and networks and processes, intellectual property, innovation, creativity);
    • Change management (ranging from version control through projects and initiatives up to strategic changes);
    • Incident management (see below);
    • Risk management (as a whole, not just information risks);
    • Privacy management;
    • Relationship management (relationships with suppliers of goods and services, business partners, customers and prospects, owners/investors, authorities and other stakeholders, communities);
    • Compliance management (laws and regs, contracts and agreements, corporate policies, ethics);
    • Health-and-safety plus HR management (people are invaluable information assets!  Corporate culture, change/initiatives, motivation and compliance);
    • Product and operations management (core business!);
    • Quality management (especially quality assurance);
    • Assurance (reviews, audits, testing and checking functions, both internal and external);
    • Financial and general commercial management. 
    Your management might even consider developing a corporate strategy or policy to adopt ISO Management Systems where available, perhaps with an overarching ‘governance committee’, 'executive team', 'board' or similar to drive the alignment, exploit the common ground between them, and address any gaps, conflicts or other issues arising. You probably already have such a beast (commonly but ambiguously known as "senior management", the "C-suite" or "mahogany row"), although it may not consider itself to be operating a super-management-system.

    You might even take this a step further, aiming to integrate rather than simply coordinate and align those management systems. An obvious example concerns incident management - even something as basic as having a single multi-function contact point (Help Line, Service Desk or whatever) to receive and assess incident reports, initiate the relevant activities and coordinate communications among those involved.
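    To make that single-contact-point idea concrete, here is a minimal sketch: one intake function classifies an incoming report and routes it to the relevant team. The categories, keywords and team names are invented for illustration, not a recommended taxonomy - a real Service Desk would use far richer triage.

```python
# Keyword-to-team routing table (purely illustrative)
ROUTES = {
    "outage":  "IT service desk",
    "breach":  "information security",
    "fire":    "business continuity / facilities",
    "privacy": "privacy office",
}

def route_report(report: str) -> str:
    """Return the team that should handle the report (default: manual triage)."""
    text = report.lower()
    for keyword, team in ROUTES.items():
        if keyword in text:
            return team
    return "triage for manual assessment"

print(route_report("Suspected data breach via phishing email"))
# -> information security
```

    The design point is that reporters need only one number or address to remember; classification and coordination happen behind the scenes, closing the gaps between functions.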

    Or not. The ISO MS approach is not the only option, and there may well be something even better for your organization – other standard methods, ‘best of breed’ solutions, something home-grown or a patchwork. There may be sound business reasons for keeping various areas separate (e.g. if they are, or might be, contracted out). I’m simply suggesting that coordination, alignment and integration between management systems might be worth considering, if and when you and your management are in a position to do so (not necessarily right now … although this is peak season for strategising and planning!).

    I'll end today's sermon with a pertinent quote from an interview with Marc Goodman:
    "CIOs and CISOs will also have to work much more closely with the executives in charge of functions like HR, facilities, physical security, and loss prevention to close security gaps. The bad guys have repeatedly demonstrated their ability to slip through the gaps created when enterprises segment security across various functions within their organizations."
    Marc describes himself as “a global strategist, author and consultant focused on the disruptive impact of advancing technologies on security, business and international affairs”. He holds the Chair for Policy and Law at Singularity University in Silicon Valley. So no slouch then.

    Wednesday 6 November 2019

    Insight into ISO27k editing

    Today I find myself poring through ISO/IEC 27000:2018 looking for quotable snippets to use on our awareness posters in January. Although there’s plenty of good content, I can’t help but notice a few rough edges, such as this:
    “Conducting a methodical assessment of the risks associated with the organization’s information assets involves analysing threats to information assets, vulnerabilities to and the likelihood of a threat materializing to information assets, and the potential impact of any information security incident on information assets. The expenditure on relevant controls is expected to be proportionate to the perceived business impact of the risk materializing.” [part of clause 4.5.2].

    First off, here and elsewhere the ‘27000 text uses the term “information asset” which is no longer defined in the standard since the committee couldn’t reach consensus on that. Readers are left to figure out the meaning for themselves, with the possibility of differing interpretations that may affect the sense in places. The term is, or probably should be, deprecated.

    Secondly, the first sentence is long and confusing – badly constructed and (perhaps) grammatically incorrect. “Vulnerabilities to” is incomplete: vulnerabilities to what? Shouldn’t that be “vulnerabilities in” anyway? Threats get mentioned twice for no obvious reason, overemphasizing that aspect. “Likelihood” is a vague and problematic word with no precise equivalent in some languages - it too should probably be deprecated. The final clause as worded could be interpreted to mean that the process is only concerned with potential impacts on information assets, whereas incidents can cause direct and/or indirect/consequential impacts on systems, organizations, business relationships, compliance status, reputations and brands, commercial prospects, profits, individuals, partners, society at large and so forth, not all of which are information assets (as commonly interpreted, anyway!). 

    Thirdly, do “the organization’s information assets” include personal information? Some might argue that personal information belongs to the person concerned – the data subject – not the organization that holds it. My point is that it’s ambiguous and potentially misleading.

    Lastly, I don’t entirely accept the premise of the second sentence. Sure, in business terms, the total cost of controls should normally be less than the total benefits but that’s not what the clause actually says – and anyway, information security is not entirely a matter of net value: some controls are mandated or imposed on the organization.  

    If you think I’m being unreasonably critical or anal about this, fair enough: that’s the level of analysis typically used to justify changes to draft standards through JTC 1/SC 27. Now imagine the effort involved to review and comment on, say, ISO/IEC 27002, and to suggest changes (ideally explicitly proposing the replacement text in each case) and you’ll appreciate the time and effort involved as the international project team slogs its way laboriously through hundreds of pages of comments. It’s a wonder anything gets produced at all, let alone anything usable and as well respected as ISO27k!

    The lawyers among us will probably appreciate the issue. The legal profession performs this painstaking analysis much more seriously and deeply. Even, punctuation, is ... of-concern. Each new law/regulation has to fit neatly into the existing body of legislation without causing conflicts. We’ve got it easy!

    Monday 4 November 2019

    Social engineering awareness

    The next awareness topic is one of our regular annual topics. Social engineering has been around for millennia - literally, in the sense that deliberate deception is a survival strategy adopted by many living beings, right back to primordial times.

    So, what shall we cover this time around? 

    Last time, we took a deep dive into phishing, a modern-day scourge ... but definitely not the only form of social engineering, despite what those companies pushing their 'phishing solutions' would have us believe. We picked up on 'business email compromise' as well, another name for spear-phishing. 

    In 2017, we explored 'frauds and scams' in the broad, producing a set of 'scam buster' leaflets explaining common attacks in straightforward terms, illustrated with genuine examples and offering pragmatic advice to avoid falling victim to similar tricks.

    Back in 2016, the 'protecting people' module covered: social engineering attacks, scams and frauds, such as phishing, spear-phishing and whaling; exploitation of information and people via social media, social networks, social apps and social proofing e.g. fraudulent manipulation of brands and reputations through fake customer feedback, blog comments etc.; the use of pretexts, spoofs, masquerading, psychological manipulation and coercion, the social engineer’s tradecraft; and significant information risks involving blended or multimode attacks and insider threats.

    Although we already have lots of content to draw upon and update, we always aim to cover current threats, which means this week our research phase draws to a close with a clearer idea of the scope of December's module, plus a bunch of recent incidents to illustrate the materials.

    As to precisely what aspects of social engineering we'll be covering this time around, I'll drop a few more hints here on the blog as the module comes together.