Thursday 30 November 2017

Social engineering module

We've been busier than ever the past week or so, particularly with the awareness materials on social engineering. It is a core topic for security awareness since workers' vigilance is the primary control, hence a lot of effort goes into preparing materials that are interesting, informative, engaging and motivational. It's benign social engineering!

The materials are prepared and are in the final stage now, being proofread before being delivered to subscribers later today.

This is a bumper module with a wealth of content, most of which is brand new. I blogged previously about the A-to-Z guides on social engineering scams, con-tricks and frauds, methods and techniques, and controls and countermeasures. I'll describe the remainder of the materials soon, once everything is finished and out the door. 

Meanwhile, I must get on: lots to do!

Tuesday 28 November 2017

ISO27k internal audits for small organizations

Figuring out how to organize, resource and conduct internal audits of an ISO/IEC 27001 Information Security Management System can be awkward for small organizations.

Independence is the overriding factor in auditing of all forms. For internal auditing, it’s not just a question of who the auditors report to and their freedom to ‘say what needs to be said’ (important though that is), but more fundamentally their mindset, experience and attitude. They need to see things with fresh eyes, pointing out and where necessary challenging management to deal with deep-seated long-term ‘cultural’ issues that are part of the fabric in any established organization. That’s hard if they are part of the day-to-day running of the organization, fully immersed in the culture and (for managers in small organizations especially) partly responsible for the culture being the way it is. We all have our biases and blind spots, our habits and routines: a truly independent reviewer hopefully does not share them - at least, not entirely the same ones!

ISO/IEC 27001 requires both management reviews and internal audits. Your in-house people may well be technically qualified to do both but (especially without appropriate experience/training, management support and the independent, critical perspective I’ve mentioned) they may not audit as well as, say, external consultants. The decision is a business issue for you and your management: do the benefits of having a truly independent and competent audit outweigh the additional cost? Or do you think your own people would do it well enough at lower cost?

As the customer, you get to specify exactly what you want the consultants to bid for. A very tightly scoped and focused internal audit for a relatively small and simple ISMS might only take a day or two of consulting time, keeping the costs down. On the other hand, they will be able to dig deeper and put more effort into the reporting and achieving improvements if you allow them more time for the job – again, a management decision, worth discussing with potential consultants.

One strategy you might consider is to rotate the internal audit responsibility among your own people, having different individuals perform successive audits. That way, although they are not totally independent, they do at least have the chance to bring different perspectives to areas that they would not normally get involved in. It would help to have a solid, standardized audit process though, so each of the auditors is performing and reporting the audit work in a similar way … and to get you started and set that up, you might like to engage a consultant for the first audit, designing and documenting the audit process, providing checklist and reporting templates etc., and ideally training up one or more of your own people to take the lead on the next audit (like a relay race, passing the baton down the line). 

Another possibility is to send one or more of your people on a training course for internal auditing, perhaps one of the ISO27k/ISMS-specific Lead Auditor courses. Although I believe the LA courses only cover compliance or certification auditing, they do at least teach the concepts and processes that are much the same for internal audits. Personally, I would recommend ISACA’s CISA instead, as it is more suited to IT auditing in general.

Yet another potential approach is to ask appropriate newcomers to the organization (management level, probably) to do your audits. They would need support and guidance on the audit process, but they would at least be free of the baggage that existing employees carry! On top of that, it would be an excellent way to introduce them to all of management, giving them a view across the whole enterprise – a jump start if you like.

Oh and here’s one more option. How about ‘swapping’ with a partner organization: you audit them and they audit you? Obviously you’d need to be careful about the confidentiality, trust and commercial aspects, and you’d still have to be careful about the competence of the individuals doing the work, but it might work out conveniently for both parties, with the added advantage of perhaps sharing good practices between you.

The beauty of ISO27k is that you have plenty of latitude on how to manage information security, even within the constraints of '27001 certification, so you can be quite creative with how your ISMS is designed. At the end of the day, it is your ISMS and your information at risk, so do whatever is best for your business. That’s even more important than being certified compliant!

Wednesday 22 November 2017

A to Z of social engineering controls

I didn't quite finish the A-to-Z on social engineering methods yesterday as planned but that's OK, it's coming along nicely and we're still on track. 

I found myself dipping back into the A-to-Z on scams, con-tricks and frauds for inspiration or to make little changes, and moving forward to sketch rough notes on the third and final part of our hot new security awareness trilogy: an A-to-Z on the controls and countermeasures against social engineering. Writing that is my main task for today, and all three pieces are now progressing in parallel as a coherent suite.

It's no blockbuster but I have a good feeling about this, and encouraging feedback from readers who took me up on my offer of a free copy of the first part.

Along the way, a distinctive new style and format has evolved for the A-to-Zs, using big red drop caps to emphasize the first item under each letter of the alphabet. I've created and saved a Word template to make it easier and quicker to write A-to-Zs in future - a handy tip, that, for those of you who are singing along at home, writing your own awareness and training content.

I'd like to include some graphics and examples to illustrate them and lighten them up a bit, but with the deadline fast approaching that may have to wait until they are next updated. Getting the entire awareness module across the line by December 1st comes first, which limits the amount of tweaking time I can afford - arguably a good thing as I find this topic fascinating, and I could easily prepare much more than is strictly necessary for awareness purposes. 

Aside from that, the release of an updated OWASP Top 10 list of web application security risks prompted me to update our information security glossary with a couple of new definitions, and a Radio NZ program about a book fair in Edinburgh (!) prompted me to suggest improv sessions as a creative idea for the train-the-trainer guide for the social engineering module.

Breaking news about Uber losing millions of personal records to hackers has the potential to become a case study at some point. Initial rather vague news reports speak of hackers obtaining user credentials from GitHub and using them to access and steal info from cloud storage services, and raise concerns about the way the privacy noncompliance incident was handled and concealed, which in turn hints at a governance issue - in other words, this looks like becoming yet another multi-faceted incident, relevant to several infosec topics. Possibly, as with the Sony Pictures Entertainment incident, there may be enough meat on the bones to merit creating a special awareness module all by itself: it depends how the story evolves from here, and how much pertinent information is published.

Tuesday 21 November 2017

A to Z of social engineering techniques

On a roll from yesterday's A-to-Z catalog of scams, con-tricks and frauds, I'm writing another A-Z today, this time focusing on social engineering techniques and methods.  

Yesterday's piece was about what they do.  Today's is about how they do it.

Given my background and the research we've done, it's surprisingly easy to find appropriate entries for most letters of the alphabet, albeit with a bit of creativity and lateral thinking needed for some (e.g. "Xtreme social engineering"!).  That's part of the challenge of writing any A to Z listing ... and part of the allure for the reader. 

What will the Z entry be? As of this moment, I don't actually know but I will come up with zomething!

Both awareness pieces impress upon the reader the sheer variety of social engineering, while at the same time the alphabetical sequence provides a logical order to what would otherwise be a confusing jumble of stuff. Making people aware of the breadth and diversity of social engineering is one of the key learning objectives for December's awareness module. Providing structured, useful, innovative awareness content is what we do.

We hope to leave a lasting impression that almost any social interaction or communication could be social engineering - any email or text message, any phone call or conversation, any glance or frown, any blog item (am I manipulating your thoughts? Concentrate on the eyes. You are starting to feel drowsy ...)

Yes, hypnosis will make an appearance in today's A-Z.  It's not entirely serious!

Tomorrow, after completing the second, I'd like to complete the set with a third piece concerning the controls against social engineering. Can we come up with a reasonable list of 26? Come back tomorrow to find out how that turns out.

Monday 20 November 2017

An A to Z catalog of social engineering


A productive couple of days' graft has seen what was envisaged to be a fairly short and high-level general staff awareness briefing on social engineering morph gradually into an A-to-Z list of scams, con-tricks and frauds.

It has grown to about 9 pages in the process. That may sound like a tome, over-the-top for awareness purposes ... and maybe it is, but the scams are described in an informal style in just a few lines each, making it readable and easily digestible. The A-to-Z format leads the reader naturally through a logical sequence, perhaps skim-reading in places and hopefully stopping to think in others.

For slow/struggling readers, there are visual cues and images to catch their eyes but I'll be honest: this briefing is not for them. They would benefit more from seminars, case studies, chatting with their colleagues and getting involved in other interactive activities ... which we also support through our other awareness content. The awareness mind maps and posters, for instance, express things visually with few words.

Taking a step back from the A-Z list, the sheer variety and creativity of scams is fascinating, and I'm not just saying that because I wrote it! That's a key security awareness lesson in itself. Social engineering is hard to pin down to a few simple characteristics, in a way that workers can be expected to recognize easily. Some social engineering methods, such as ordinary phishing, are readily explained and fairly obvious but even then there are more obscure variants (such as whaling and spear phishing) that take the technique and threat level up a gear. 

It's not feasible for an awareness program to explain all forms of social engineering in depth, literally impossible in fact. It's something that an intensive work or college course might attempt, perhaps, for fraud specialists who will be fully immersed in the topic, but that's fraud training, not security awareness. We can't bank on workers taking time out from their day-jobs to sit in a room, paying full attention to their lecturers and scribbling notes for hour after hour. There probably aren't 'lecturers' in practice: most of this stuff is delivered online today, pushed out impersonally through the corporate intranet and learning management systems.

Our aim is to grab workers' attention, fleetingly, impart useful information and guidance, and motivate them to take even more care in future: yes, that's a benign form of social engineering, with beneficial rather than malicious intent. Maybe we should include it in the A-to-Z?

Sunday 19 November 2017

IoD advises members to develop "cyber security strategy"


A report for the UK Institute of Directors by Professor Richard Benham encourages IoD members to develop “a formal cyber security strategy”.

As is so often the way, 'cyber' is not explicitly defined by the authors although it is strongly implied that the report concerns the commercial use of IT, the Internet, digital systems and computer data (as opposed to cyberwar perpetrated by well-resourced nation states - a markedly different interpretation of 'cyber' involving substantially greater threats).


A 'formal cyber security strategy' would be context dependent, reflecting the organization's business situation. That broader perspective introduces other aspects of information risk, security, governance and compliance. All relevant aspects need to be considered at the strategic level, including but not just 'cyber security'. 

Counteracting or balancing the desire to lock down information systems and hence data so tightly that its value to the business is squeezed out, 'cyber security strategy' should be closely aligned with, if not an integral part of, information management. For instance it should elaborate on proactively exploiting and maximising the value of information the organization already holds or can obtain or generate, working the asset harder for more productive business purposes. In some circumstances, that means deliberately relaxing the security, consciously accepting the risks in order to gain the rewards. 

I find it ironic that the professor is quoted:
“This issue must stop being treated as the domain of the IT department and be the subject of boardroom policy. Businesses need to develop a cyber security policy, educate their staff, review supplier contracts and think about cyber insurance.”
Does he not appreciate that, in common parlance and understanding of the term, cyber is the geeks' domain, their home turf? Over-use of both 'cyber' and 'security' biases the entire report and perpetuates the issue, unfortunately.

'Information risk management' would be a more appropriate term since it concerns: 
  • 'Information' not just 'data': there's a huge amount of valuable information outside the computer systems and networks, not least in workers' heads. That, too, is a valuable asset which deserves to be nurtured, exploited and protected. No amount of 'cyber security' is going to stop an experienced employee resigning to work for a competitor, taking loads of proprietary information with them, or blabbing about trade secrets on social media, over coffee or down the pub.
  • 'Risk' not just 'security'. Security is not inherently valuable unless it addresses risk ... and security controls are not the only way to address risks. In referring to 'cyber insurance' for instance, the report yet again over-emphasizes IT, whereas insurance plus incident management, business continuity management and other aspects would provide a more rounded, sensible, strategic approach, fundamental to which is an appreciation of the risks.
  • 'Management', as in systematically planning, directing, monitoring and controlling things to achieve business objectives. Fire-and-forget does not apply here: management needs to keep a close eye on developments, especially as the risks are changing rapidly around us. There are governance aspects to it too, including that point about not leaving it to IT!
An 'information risk management strategy', then, has legs. We're getting somewhere!

To be clear, my beef is not just with the semantics. Frequent and widespread reference to 'cyber security' and related neologisms doesn't make it right. It is too specific, too narrow to address the real issues, bordering on being a dangerous diversion. It's a bit like the distinction between 'global warming' and 'climate change'. They are strongly related concepts, of course, but need to be handled differently in practice. There's more to climate change than the Earth warming up a bit.

On a positive note, I’m pleased to see the report state:
"Ensure all your staff have regular cyber awareness training, building it into induction processes and ensure your people are a robust and secure first line of defence."
Personally, I’d have preferred the term “continuous information risk and security awareness” to counteract the obsessive focus on both 'cyber' and 'security', and to draw the distinction between awareness and training. They are complementary approaches with different objectives and methods.  If that's unclear, take a good look at NIST SP800-50 "Building an Information Technology Security Awareness and Training Program" or Rebecca Herold's "Managing an Information Security and Privacy Awareness and Training Program".

Thursday 16 November 2017

Color-coding awareness

Looking back, I see that I've blogged quite a few times in different contexts about color.

For example, most of the security metrics I discuss are colored, and color is one of several important factors when communicating metrics, drawing the viewer's eye towards certain aspects for emphasis. 

We talk of white hats and black hats, red teams and so on.

Traffic light RAG coloring (Red-Amber-Green) is more or less universally understood to represent a logical sequence of speed, intensity, threat level, concern or whatever - perhaps an over-used metaphor but effective nonetheless. Bright primary colors are commonly used on warning signs and indications, sometimes glinting or flashing for extra eye-catchiness.
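As a tiny illustration of the RAG idea (not from the original post), here is how a dashboard script might bucket a 0-100 awareness metric into traffic-light colors. The thresholds and the example department scores are invented purely for this sketch:

```python
# Sketch: mapping a 0-100 awareness metric onto Red-Amber-Green.
# The threshold values are arbitrary assumptions for illustration.

def rag_status(score, amber_floor=50, green_floor=75):
    """Return the RAG color for a 0-100 metric value.

    Scores at or above green_floor are Green, scores at or above
    amber_floor are Amber, and anything lower is Red.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= green_floor:
        return "Green"
    if score >= amber_floor:
        return "Amber"
    return "Red"

# A dashboard might color each department's phishing-report rate:
for dept, score in [("Finance", 82), ("Sales", 61), ("IT", 44)]:
    print(f"{dept}: {rag_status(score)}")
```

The point of keeping the thresholds as parameters is that what counts as 'Green' is a management judgment, not a universal constant.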


Jeff Cooper, father of the "modern technique" of handgun shooting, introduced the concept of Condition White, the state of mind of someone who is totally oblivious to a serious threat to their personal safety. Cooper's Color Code is readily adapted to the information risk and security context, for example in relation to a worker's state of alertness and readiness for an impending hack, malware infection or social engineering attack. We're currently exploring and expanding on that idea as part of December's awareness briefing for professionals on social engineering.

Wednesday 15 November 2017

Ethical social engineering for awareness

Security awareness involves persuading, influencing and you could say manipulating people to behave differently ... and so does social engineering. So could social engineering techniques be used for security awareness purposes?

The answer is a resounding yes - in fact we already do, in all sorts of ways.  Take the security policies and procedures, for instance: they inform and direct people to do our bidding. We even include process controls and compliance checks to make sure things go to plan. This is manipulative.

Obviously the motivations, objectives and outcomes differ, but social engineering methods can be used ethically, beneficially and productively to achieve awareness. Exploring that idea even reveals some novel approaches that might just work, and some that are probably best avoided or reversed.


Social engineering method, technique or approach → security awareness & training equivalent:
  • Pretexting: fabricating plausible situations → case studies, rôle-plays, scenarios, simulations, tests and exercises
  • Plausible cover stories, escape routes, scorched earth, covering tracks → ‘what-if’ scenarios, worst-case risk analysis, continuity and contingency planning
  • Persuading, manipulating, using subconscious, visual, auditory and/or behavioral cues such as body language, verbal phrasing and emphatic timing → apply the methods and techniques used in education, marketing and advertising (e.g. branding disparate awareness materials consistently to link them together)
  • Deceiving/telling lies, making false promises, masquerading/mimicry, fitting-in, going undercover, building the picture, putting on a persona or mask (figuratively speaking), acting and generally getting-in-character → emphasize the personal and organizational benefits of being secure; “self-phishing” and various other vulnerability/penetration tests
  • Distracting, exploiting confusion/doubt to slip through, doing the unexpected → develop subtle underlying themes and approaches (such as ethics, a form of self-control) while ostensibly promoting more obvious aspects (such as compliance)
  • Appealing to greed/vanity, charming, flirting → emphasize the positives, identify and reward secure behaviors
  • Playing dumb, appealing for assistance → audience-led awareness activities e.g. a workshop on “What can we do to improve our record on malware incidents?”
  • Exploiting relationships, trust and reliance → collaborating with other corporate functions such as risk, HR, compliance, health & safety etc. on joint or complementary awareness activities
  • Empathizing, befriending, establishing trust, investing time, effort and resources → being realistic about timescales and setting suitable expectations; anticipating and planning for long-term ‘cultural’ changes taking months and years rather than days and weeks to occur
  • Exploiting reputation and referrals from third parties (transitive trust) → gather and exploit metrics/evidence of the success of awareness activities
  • Claiming or presenting false or exaggerated credentials, using weak credentials to obtain stronger ones → do the opposite i.e. study for qualifications in information security and/or adult education
  • Assertiveness, aggression, 'front', cojones, brazen confidence, putting the victim on the back foot or catching them off-guard → be more creative, adopting or developing unusual, surprising, challenging and perhaps counter-cultural awareness activities
  • Creating and using urgency and compulsion to justify bypassing controls → (over?) emphasizing ‘clear and present dangers’ (within reason!)
  • Bypassing, sidestepping or undermining controls → addressing individuals and teams directly, regardless of hierarchies and norms
  • Exploiting management/support overrides → using managers, auditors and other authority figures as communications vehicles
  • Puppetry, persuading others to do our bidding (possibly several layers deep) → ‘train-the-trainer’! Develop and support a cadre of security friends/ambassadors, gain their trust and favor, involve them proactively
  • Fast/full-frontal/noisy or slow/gradual attrition/blind-side/silent attacks, or both! → focus on a series of discrete topics, issues or events, while also consistently promoting longer-term themes
  • Mutuality, paying a debt forward (e.g. if I give you a gift, you feel indebted to me) → give rewards and gifts, “be nice” to your audience, respect their other business/personal interests and priorities
  • Targeting the vulnerable, profiling, building a coherent picture of individual targets, researching possible vulnerabilities and developing novel exploits → working on specific topics for specific audiences e.g. following up after security incidents, systematically identifying and addressing root causes
  • Shotgunning (i.e. blasting out attacks indiscriminately to hook the few who are vulnerable) and snipering (e.g. spear phishing) → combining general-purpose awareness materials with targeted/custom materials aimed at more specific audiences
  • Pre-planned and engineered attacks, or opportunistic attacks (carpe diem), or both! → a planned awareness program but with ‘interrupts’ (see the next entry)
  • Dynamic, reactive/responsive attacks, turning the victim on himself, not entirely pre-scripted/pre-determined, being alert and quick-witted enough to grasp opportunities that arise unexpectedly → spotting and incorporating recent/current security incidents, news etc., including business situations and changes, into the awareness program
  • Con-man, con-artist, fraudster, sleight-of-hand, underhand, unethical, selfish, goal-oriented, covertly focused → do the opposite i.e. be very open and honest, sharing the ultimate goals of the awareness program
  • Using/replaying insider information and terminology obtained previously → referring back to issues covered before, and ‘leaving the door open’ to come back to present issues later on; re-phrasing old stuff and incorporating new information
  • Systematically gathering, combining, analyzing and exploiting information → systematically gather, analyze and use metrics (measures and statistics) on awareness levels and various other aspects of information security
  • Exploiting technical, procedural and humanistic vulnerabilities → work on policies, procedures, practices and attitudes, including those within IT
  • Multi-mode, blended or contingent attacks e.g. combining malware with social engineering, plus hacking if that is appropriate to get the flag → true multimedia e.g. written/self-study materials, facilitated presentations/seminars, case studies, exercises, team/town-hall/brown-bag meetings, videos, blogs, system messages, corridor conversations, posters, quizzes, games, classes, security clubs, Learning Management Systems, outreach programs …

Tuesday 14 November 2017

50 best infosec blogs

I'm delighted that this blog has been featured among the 50 Best Information Security Blogs. Fantastic! Thank you, top10vpn.com ... and congrats to the other top blogs on the list, many of which I read and enjoy too. It's humbling to be among such august company.

We update this blog frequently in connection with the security awareness materials we're preparing, on security awareness techniques in general, or on hot infosec topics of the day. Blogging helps get our thoughts in order and expand on the thinking and research that goes into our security awareness modules. More than just an account of what's going on, updating the blog (including this very item) is an integral part of the production process.

A perennial theme is that it's harder than it appears to do security awareness properly. Anyone can cobble together and push out a crude mishmash of awareness content (typically stealing or plagiarizing other people's intellectual property - tut tut) but if they don't really appreciate what it all means, nor how to apply the principles of awareness, training and adult education, they are unlikely to achieve much. It's all too easy to add to the clutter and noise of modern life, more junk than mail.

Simply understanding what awareness is intended to achieve is a challenge for some! As I blogged the other day, being aware is not the ultimate goal, just another step on the journey - a crucial distinction. 



It could be said that this lack of understanding, rather than the usual lame excuse - lack of funds - is the main reason that security awareness programs falter or fail. I'm sure there are many other reasons too:
  • Lack of creativity: people gradually tune-out of dull, uninspiring approaches and come to ignore the same old same old. If all the awareness program ever blabbers on about is compliance, privacy and phishing, over and over like a cracked record, don't be surprised if the audience nods off or slips quietly away for something more stimulating;
  • Poor quality communications: a lot of this stuff is technical and complex, so there's an art to explaining it in terms that resonate with the audience. Simply writing and drawing things professionally takes skill, effort and practice, and time (perhaps our most valuable resource). A perfectionist by nature, I cringe when I look back at some of the awareness content we first delivered when we launched this service, or for that matter when I see a simple typo in this blog or an error in something we delivered just last month. I hope I never stop learning and improving;
  • Lack of skills and competencies: I hinted at this just a moment ago. Awareness is an interpersonal/human activity, while information security is mostly about the technology. Spot the difference! Few cybersecurity professionals, in particular, are comfortable, let alone competent at relating to ordinary non-tech people. Disparagingly and dismissively referring to them as "users" is a massive clue about a lack of respect. Even presidents need to appreciate the importance of earning and retaining the trust and support of the people. I've blogged about innovative approaches such as operant conditioning and treating security awareness as a (beneficial!) form of social engineering;
  • Limited or waning support, particularly from influential managers and other individuals.  Awareness is a cultural issue, hence the tone at the top can underpin or undermine it;
  • Naive, superficial approaches with a preponderance of childish cartoons, games and trivia. Having fun is appropriate in moderation but some of this stuff is deadly serious and should not be taken too lightly;
  • Weak or absent awareness metrics: if it's uncertain whether the awareness program is or is not having a positive effect on the organization, creating more value than it expends, then don't be surprised at lackluster support from management and limited funding (as I said, a lame excuse: rather than just bemoaning the fact, ask why the budget is inadequate, then work hard to address the reasons);
  • Lack of focus and purpose: in the corporate context, security awareness has to support the achievement of the organization's business objectives, otherwise it's irrelevant, unhelpful and doomed. Awareness is best designed-in as an integral part of the information risk and security machinery, greasing the cogs and oiling the bearings as it were;
  • Conversely, there's myopia: intense focus on too narrow a field of view, ignoring or failing to address the wider issues, not least how information risk and security concerns the organization, its business and its people. It's really not hard to think up dozens of potential topic areas, turning a creative awareness program into something much richer and more vibrant than the norm. Just lose the blinkers;
  • Irrelevance: a tricky one, this, given the diversity of the intended audiences and topics. People are unlikely to be equally interested in every awareness item, yet what bores one person may benefit another, hence the need for a spectrum, a mixture of ingredients that, together, bake a tasty cake;
  • Lack of direction: where are we going with this? Good question! This blog meanders from side to side, sometimes even glancing off at tangents, but generally it tends back towards the middle ground: awareness is an essential and valuable means of mitigating information risks. Thinking about your awareness program, do you have a crystal clear vision of what it is intended to achieve, and how it is going to do that? What's your cunning plan?
Anyway, I encourage you to browse all 50 best infosec blogs and track the ones that appeal to your imagination. Part of the fun of securing information is that it is a complex and dynamic enterprise. We need all the help and inspiration we can get!

A rich seam

So much of human interaction involves techniques that could legitimately be called social engineering that we're spoilt for choice on the awareness front for December.  

December's topic exemplifies the limitations of "cybersecurity" with its myopic focus on IT and the Internet. Social engineers bypass, undermine or totally ignore the IT route with all its tech controls, and that's partly what makes them such a formidable threat. 

IT may be a convenient mechanism for identifying, researching and communicating with potential victims, for putting on the appearance of legitimate, trustworthy individuals and organizations, and for administering the scams, but it's incidental to the main action: fooling the people.

Maybe it's true that you can't fool all of the people all of the time, depending on precisely what is meant by 'all'. I think it's fair to say that we are all (virtually without exception) prone, predisposed or vulnerable to social engineering of one form or another. We can't help it: social interaction is genetically programmed into us and reinforced throughout our lives from the moment we're born, or even before. Some expectant mothers report their babies respond to the music and other sounds around them. A newborn baby probably recognizes its mother's and other familiar voices and sounds immediately. To what extent it trusts or could be fooled by them is a separate issue though!

The idea that we are inherently vulnerable, while powerful, is only part of the story. We're also inherently capable of social engineering. We have the capacity, the tools and capabilities to influence and manipulate others to varying extents. Again, that newborn baby is sending out an avalanche of signals to humans in the area, from the moment of its first gasp and cry. The communications may be non-verbal but they are loud and clear!

Friday 10 November 2017

One step at a time


This colorful image popped onto my screen as I searched our stash of security awareness content for social engineering-related graphics. It's a simple but striking visual expression of the concept that security awareness is not the ultimate goal, but an important step on the way towards achieving a positive outcome for the organization. 

A major part of the art of raising awareness in any area is actively engaging with people in such a way that they think and behave differently as a result of the awareness activities. For some people, providing cold, hard, factual information may be all it takes, which even the most basic awareness programs aim to do. That's not enough for the majority, though: most of us need things explained in terms that resonate, motivating us to respond in some fashion. In physical terms, we need to overcome inertia; in behavioral terms, we need to break bad habits to form better ones.

Social engineering is a particular challenge for awareness since scammers, fraudsters and other social engineers actively exploit our lack of awareness or (if that fails) subvert the very security mechanisms we put in place. "Your password has expired: pick a new one now to avoid losing access to your account!" is a classic example used by many a phisher. It hinges on tricking victims into accepting the premise (password expired) at face value and taking the easy option, clicking a link that leads them to the phisher's lair while thinking they are going to a legitimate password-change function. Our raising awareness of the need to choose strong passwords may be counterproductive if employees unwittingly associate phishing messages with user authentication and security!

Part of our awareness approach in December's materials on social engineering will be to hook into our natural tendency to notice something amiss, something strange and different. Humans are adept at spotting patterns at a subconscious level. For instance, did you even notice the gradation from red to green on the ladder image? That was a deliberate choice in designing the image, a fairly crude and obvious example ... once it has been pointed out, anyway! See if you can spot the other, more subtle visual cues (and by all means email me to see what you missed!). 

Those occasional flukes we call "coincidences" hold an extra-special significance for us, popping into our conscious thoughts in a remarkable way. As we are routinely bombarded with information through our five senses, pattern recognition is an efficient way to interpret the information flow in relation to our prior experience and expectations (in 'normal' situations), and to identify new or different patterns (something 'abnormal' and perhaps threatening). In the jungle, such a difference might alert us to a well-camouflaged lion lurking among the grasses, a potentially harmful item of food that smells rotten, or the howl of a pack of hyenas closing in. Especially when there's precious little time to react, and failing to respond may be life-threatening, reflexes can literally save our skins. 

There are some reflexive aspects to security awareness concerning information security incidents or crises that threaten our personal safety. Mostly, though, we must supplement reflexes with learned behaviors. Awareness starts by pointing out dangers and encouraging/promoting particular responses in a deliberate, conscious way ... but through repetition, rehearsal and reinforcement we aim to make even learned responses subconscious - quick and automatic, similar to true reflexes.

I'm currently working up a suite of 'scam busters' - leaflets that describe different scams, frauds and social engineering attacks (providing information), and explain how to bust or avoid them (motivational guidance and advice, a 'call to action' you could say). Each scam buster fits on a single page, including a distinctive image that, we hope, will catch the eye and pop into the person's memory if they find themselves facing the situations described, or rather variants thereof. I'm in two minds about providing an example of each scam on the other side of the page: sometimes less is more, but briefly describing actual social engineering incidents might help bring home the point that these are genuine, real-world threats, not just theoretical concerns. Some readers will barely skim the front page, others may enjoy reading and thinking on. Either way, it's a win for security awareness.   

Tuesday 7 November 2017

Pipes and bikes

The past few days have been very successful.  

Yesterday, at last, I fixed the pipe feeding water to the stock tanks, in the nick of time before the animals went thirsty, a mammoth job for this long-time office worker (!). 

The pipe is an old galvanized steel pipe, laid when this was a working farm, well before it became a pine forest. An ancient Lister diesel engine and piston pump sends water in two directions, either to the house tanks or to the stock tanks. 

The house line was luckily fine, but the stock line wasn't, and evidently hadn't been maintained in a long time. Just getting to the start of the line across the stream was a mission, with a 60 degree muddy incline going up about 8m, then a strip of native bush, then the pines ... which had been toppled by a cyclone back in April. 

What would once have been just forest is now a forest clearing with a few hundred near full-sized trees laying on the ground, toppled like the matchsticks some of them were destined to become. 

Spurred on by the falling firs, the vicious NZ bramble seized the opportunity to flourish in the Spring sunshine, forming man-eating bramble patches a few metres high and several metres across the hillside.  Here's the easy bit at the bottom of the hill after a day or two's clambering, de-brambling and chainsawing ...



It has taken several days spread over several weeks to cut back the brambles to locate the pipeline as it climbs out of the gulley where the stream flows, chainsaw the fallen firs off the line, then replace the munted (broken) bits of pipe with modern high-density polythene pressure pipe and fittings. Last evening, I was elated to hear the sound of water flowing into the stock tanks above the paddocks where the now-thirsty sheep and cattle live. 

Don't tell anyone, but I did a little dance. 

On Saturday I took a motorbike training course, for which I received a pin badge and certificate that will hopefully reduce the cost of my bike insurance. I guess I can put "GCR" after my name too!

So now it's back to the office, replacing physical with mental effort as we crack on with the next awareness module, covering social engineering.  More on that tomorrow.

Friday 3 November 2017

Audit sampling (LONG)

[This piece was prompted by a question on the ISO27k Forum about ISO27k certification auditors checking information security controls, and a response about compliance audit requirements. It's a backgrounder, an essay or a rant if you like. Feel free to skip it, or wait until you have a spare 10 mins, a strong coffee and the urge to read and think on!]

“Sampling” is an important concept in both auditing and science. Sampling (i.e. selecting a subset of a set or population for review) is necessary because under most circumstances it is practically impossible to assess every single member – in fact it is often uncertain how many items belong to the set, where they are, what state they are in and so on. There is a lot of inherent uncertainty.

For example, imagine an auditor needs to check an organization’s information security policies in the course of an internal audit or more formal certification/compliance audit.

Some organizations make that quite easy by having a policy library or manual or database, typically a single place on the intranet where all the official corporate policies are maintained and controlled as a suite. In a large/diverse organization there may be hundreds of policies, thousands if you include procedures, guidelines, work instructions, forms and so forth. Some of them may be tagged or organized under an “information security” heading, so the auditor can simply work down that list … but almost straight away he/she will run into the issue that information security is part of information risk is part of risk, and information security management is part of risk management is part of management, hence there should be lots of cross-references to other kinds of policy. A “privacy policy”, for instance, may well refer to policies on identification and authentication, access control, encryption etc. (within the information security domain), plus other policies in areas such as accountability, compliance, awareness and training, incident management etc. which may or may not fall outside the information security domain depending on how it is defined, plus applicable privacy-related laws and regulations, plus contracts and agreements (e.g. nondisclosure agreements). Hence the auditor could potentially end up attempting to audit the entire corporate policy suite and beyond! In practice, that’s not going to happen.

In many organizations, the job would be harder still because the policies etc. are not maintained as a coherent suite in one place, but are managed by various parts of the business for various purposes in various formats and styles. On top of that, ‘policy lifecycle management’ is an alien concept to some organizations, hence even the basics such as having a defined owner, an ‘issued’ or ‘effective from’ date, a clear status (e.g. draft, exposure draft, issued and current, withdrawn) etc. may not be there. Simply getting hands on copies of current policies is sometimes tricky, making it hard to determine how many policies there are, where they are, who owns them, whether they are current, whether they have been formally sanctioned or mandated or whatever.

Note: there could be several ‘audit findings’ in these circumstances, particularly the latter, before the auditor has even started reviewing a single policy in detail!

Scope concerns are emerging already: are ‘compliance policies’ part of the ‘information security policies’ that were to be checked? What about ‘business continuity policies’ or ‘health and safety policies’? What about the ‘employee rulebook’, oh and that nice little booklet used by the on-boarding team in the depths of HR in a business unit in Mongolia? What about a key supplier’s information security policies …? Information is a vital part of the entire business, the entire supply chain or network in fact, making information risk and security a very broad issue. An audit can’t realistically cover “everything” unless it is deliberately pitched at a very high level – in which case there would be no intent to delve deeply into each and every policy.

The next issue to consider is the time and resources available for the audit. Audits are inevitably constrained in practice: usually there is an audit plan or schedule or diary for each audit within the period (often several years), and auditors are in short supply, especially in specialist areas where deep technical knowledge is needed (e.g. tax, information security, risk, health and safety, engineering …).

Another issue is the depth and detail of the audit checks or tests or assessments or reviews or whatever you call them. I could spend hours poring over and painstakingly picking apart a relatively simple website privacy policy in great detail, digging out and checking all the external references (plus looking for any that are missing), exploring all the concerns (and the plus points too: I strive to be balanced and fair!), writing up my findings and perhaps elaborating on a set of recommended improvements. Add on the time needed to initiate and plan the audit, contact the business people responsible, schedule interviews and meetings, complete the internal quality assurance, discuss the draft findings and report, and close the audit, and the whole thing could easily consume a week or three – auditing a single, simple policy in depth. It would need to be an unusually valuable audit to justify the expense, since I could have spent my time on other, more worthwhile audit work instead (an opportunity cost). 

Yet another relevant matter is how the auditors go about sampling, the sampling rationale or technique or method. Again, there are lots of possibilities e.g. random sampling, stratified sampling, sampling by exception, pragmatic sampling, dependent sampling etc. The auditors might pick out a couple of items at each level in the policy pyramid, or all the information security policies released within the past six months, or every one produced by the Information Risk and Security Management function at HQ, or every one with a “C” or a “D” in the title, or all those on a pre-compiled shortlist of ‘dubious quality, worth a look’, or all those that explicitly reference GDPR, or whatever. Rather than all, they might pick ‘the top 10%’ by some criterion, or ‘the bottom 10%’ or whatever. They might simply start with whatever policies are most readily available, or whichever ones happen to catch their eye first, and then ‘go from there’, following a trail or a contingent sequence that arises naturally in the course of the initial reviews. The auditor’s nose often leads the way.
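To make a couple of those approaches concrete, here is a minimal sketch in Python, using an invented policy register (the titles and domains are purely illustrative). It contrasts simple random sampling, where every policy has an equal chance of selection, with stratified sampling, where a fixed number is drawn from each domain so that small domains are not swamped by large ones:

```python
import random
from collections import defaultdict

# Hypothetical policy register: (title, domain) pairs
policies = [
    ("Access control policy", "infosec"),
    ("Encryption policy", "infosec"),
    ("Acceptable use policy", "infosec"),
    ("Incident reporting policy", "infosec"),
    ("Privacy policy", "compliance"),
    ("Records retention policy", "compliance"),
    ("Whistleblowing policy", "HR"),
    ("On-boarding policy", "HR"),
]

def random_sample(population, n, seed=None):
    """Simple random sampling: every item has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_sample(population, per_stratum, seed=None):
    """Stratified sampling: draw a fixed number from each domain (stratum)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for title, domain in population:
        strata[domain].append((title, domain))
    picked = []
    for domain, items in strata.items():
        picked.extend(rng.sample(items, min(per_stratum, len(items))))
    return picked

print(random_sample(policies, 3, seed=42))
print(stratified_sample(policies, 1, seed=42))
```

Notice the difference in coverage: the random sample can easily end up all-infosec, whereas the stratified sample guarantees each domain at least one look, at the cost of over-representing the smaller domains.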

In my experience, surprisingly few audits are conducted on a truly scientific basis, using sound statistical techniques for sampling and data analysis. It’s fairly unusual for the sampling rationale even to be formally considered and documented, except perhaps as a line or two of boilerplate text in the audit scoping and planning documentation. Usually, the auditors and/or their managers and audit clients come to an informal arrangement, or simply ‘get on with it and see how it goes’, relying on the auditors’ experience and preference. For sausage-machine audits that are repeated often (e.g. certification audits), the sampling rationale may be established by convention or habit, perhaps modified according to the particular circumstances (e.g. an initial infosec policy audit at a new client might seek first to assess the entire policy suite at a high level, with more in-depth audits in specific areas of concern in later audits; an audit at a small local firm might sample just 1 or 2 key policies, while auditing a global conglomerate might involve sampling 10 or more).

Finally, there’s a sting in the tail. All sampling entails risk. The auditors are trying to determine the characteristics of a population by sampling a part of it and generalizing or extrapolating the results to the whole. If the sample is not truly representative, the conclusions may be invalid and misleading, possibly quite wrong. More likely, they will be related in some fashion to the truth … but just how closely related we don’t normally know. There are statistical techniques to help us determine that, if we have taken the statistical approach, but even they have assumptions and uncertainties, which means risk. Furthermore, the evidence made available to the auditors varies in terms of its representativeness. Sensible auditors are quite careful to point out that they can only draw conclusions based on the evidence provided. So not only are they practically unable to conduct 100% sampling, the sample itself might not be truly representative, hence they may miss material facts, hence an audit “pass” does not necessarily mean everything is OK!  Most formal audit reports include some boilerplate text to that effect. That is not just a ‘get out of jail free’ card, an excuse or an attempt to gloss-over audit limitations: there is a genuine issue underneath to do with the audit process. It’s reminiscent of the issue that we can identify, assess and quantify various kinds of information risk, but we can’t prove the absence of risk. We can say things are probably safe and secure, but we can never be totally certain of that (except in theoretical situations with specific assumptions and constraints). Same thing with audits.
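To put a rough number on that sampling risk, here is a back-of-envelope sketch in Python with entirely invented figures: suppose 10 of an organization’s 200 policies are quietly deficient, and the auditor draws a simple random sample of 10. The chance that the sample contains no deficient policy at all – a clean ‘pass’ despite real problems – follows the hypergeometric distribution:

```python
from math import comb

def prob_all_clean(population, deficient, sample):
    """Probability that a simple random sample contains none of the
    deficient items (hypergeometric, hand-rolled with math.comb)."""
    return comb(population - deficient, sample) / comb(population, sample)

# Hypothetical figures: 200 policies, 10 deficient, random sample of 10
p = prob_all_clean(200, 10, 10)
print(f"Chance of a misleadingly clean sample: {p:.1%}")
```

With these made-up numbers the sample comes back clean roughly 59% of the time, even though 5% of the policy suite is deficient – a concrete reminder that an audit ‘pass’ is a statement about the sample, not the population.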