Saturday 27 October 2018

What is 'integrity'?

‘Integrity’ is a fascinating property of information: multi-faceted, more complex and more widely applicable in information security than it might seem.

It involves aspects and issues such as:
  • Factual correctness of information (objectivity versus subjectivity, plus the huge grey area in between and issues arising such as impartiality and perspective);
  • Relevance of information to the matter/s at hand and the substantiality or weight of evidence (e.g. 'contemporaneous notes' recorded in the policeman’s pocket book at the time of an alleged offence may carry more weight in court than later, verbal or written accounts and recollections, but audio/video footage and other evidence captured at the scene with all the right controls in effect tends to be even stronger, even weightier);
  • Completeness of information (which also touches on context and scope issues, and practicalities in a legal setting: there isn't time to present, consider and take into account absolutely everything, so someone has to select the most valuable bits, introducing their judgement into the process); 
  • Timeliness and up-to-date-ness of information (not being too outdated or stale, being applicable to and valid within the specific context);
  • Impact of information (some things are inherently notable and more important than others, perhaps having shock value or otherwise eliciting strong emotional reactions ... which has implications for what information is provided, how it is expressed, to whom, when, in what manner, with what emphasis etc.);
  • Proof and provability (the ability to demonstrate, confidently and convincingly, that everything is in order, with sufficient strength to resist challenges, hence the importance of ‘chain of custody’, for instance, and all manner of physical and logical controls to prevent or at least detect tampering, substitution etc. in forensics);
  • Trust and trustworthiness, confidence, credibility etc. of the information, plus the associated activities, systems, storage, analytical methods, analysts and so on (goes hand-in-hand with proof and provability, includes aspects such as compliance with applicable rules concerning how evidence may be obtained or captured in the first place);
  • Presentation, discussion, interpretation and ultimately the perceived meaning and value of information (that part of information integrity around communicating things properly in a manner that leads to them being correctly understood: communication involves both sending and receiving, remember, plus other issues such as interception, duplication, interruption, modification, delays, mis-routing, redirection etc.);
  • Competence, capability, credibility and suitability of various witnesses, analysts and advisors, lawyers, judges etc. involved in cases (e.g. what does it really mean to be an “expert witness”? What are the criteria and obligations of that role? Who determines whether a judge is competent to judge, and how?) ... and similar issues in other contexts (e.g. in business, managers rely on sound, expert advice from competent professional specialists);
  • IT systems, communications and data integrity (e.g. cyclic redundancy checks, cryptographic methods such as digital signatures using hashing, database/referential integrity and more - the technological and mathematical basis for ICT; see the sketch after this list), plus the whole area of digital or eForensics as opposed to the more traditional forms of forensics;
  • Fairness and equitability (e.g. treating similar crimes on a similar basis, and protecting the rights of the weak against the might of the strong – with the interesting consequence that even low-weight ‘circumstantial’ evidence may be valuable if there is nothing better and simply discounting it would be ‘unfair’);
  • Ethics, plus all manner of frauds and scams, social engineering, manipulation, deception and more (human integrity failures! This, arguably, makes integrity the ultimate challenge in politics).
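To illustrate the ICT integrity bullet above, here's a minimal sketch of my own (plain Python, standard library only): a cryptographic hash makes any change to stored or transmitted data detectable, the building block behind digital signatures and file integrity checkers.

```python
import hashlib

# A minimal integrity check using a cryptographic hash (SHA-256):
# any change to the data, however small, changes the digest completely.
original = b"Pay $100 to account 12345678"
digest = hashlib.sha256(original).hexdigest()   # recorded when captured

# Later, on retrieval or receipt, verify the data is unchanged:
received = b"Pay $100 to account 12345678"
if hashlib.sha256(received).hexdigest() == digest:
    print("Integrity check passed")
else:
    print("Data has been modified!")
```

In practice the recorded digest itself needs protecting (hence digital signatures, which cryptographically bind the hash to the signer's private key), but the detection principle is the same.
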
I realise this is a brain dump ... but it's clear that there is a lot of stuff here, more than enough to fill a month's awareness module on 'integrity'. The same is true of 'confidentiality' and 'availability', two closely-related core concepts in information security. 

But should we go down this route at all or is it all too 'academic', too 'theoretical', too 'airy-fairy'??

I am undecided at the moment. Even if we don't produce C, I and A awareness modules as such, we routinely cover C, I and A in the course of our other topics anyway since these are fundamental to all that we do. However, I find that long shopping list of things above intriguing: there's lots we could say in this area, and plenty of real-world examples we could use to illustrate and explain the topic pragmatically. It would be educational ... but would it be sufficiently interesting and motivational for the majority of our audience?

The list above was prompted by a question on the ISO27k Forum about integrity in forensics ... which suggests another awareness topic. I guess the endless stream of TV shows in this area has set the scene for us, and would provide an opportunity to poke fun at gross inaccuracies such as detectives wandering willy-nilly through crime scenes that are being, or have yet to be, forensically examined. Hmmm, "fun" is something that everyone enjoys so an awareness module on forensics is a definite possibility. I guess I should start watching those CSI programs and taking notes.

Meanwhile, the jury's out.

Thursday 25 October 2018

Risk awareness

In a discussion thread on the ISO27k Forum about engaging corporate Risk Management functions with information security work, Nigel Landman mentioned that ‘Everything becomes a business risk’ ... which set me thinking.

Managing risks to the organization is a significant element of business management – in fact it is possible to express virtually everything about management in terms of managing risks and opportunities (upside risks). It's a very broadly-applicable and fundamental concept.

Given the importance and value of ‘information’ in any business, it’s hard to imagine any full-scope Risk Management function failing to be concerned about information risk and security, unless for some reason they are limited to specific categories or types of risk (e.g. financial, strategic, compliance or competitive) and haven’t (yet!) made the connection with information risks in those areas ... in which case exploring, explaining and elaborating on the information risk and security aspects in conjunction with the Risk Management function would be a worthwhile early activity in the ISO27k implementation.

The same goes for various other corporate functions that are currently disengaged, unaware or reluctant to get involved in information risk and security. The usual excuse is that “it's an IT thing”, a myth perpetuated by crudely labeling it “IT risk”, “IT security” or “cybersecurity”. Of course there are risks to or involving IT, but that’s just the tip of the iceberg of information risks, business risks, and risk in general. It's fine to focus in, but it makes little sense to attempt to manage individual categories or types of risk (including information risk, by the way) in isolation from the rest. You could even say that failing to manage information risks within the broader business context is itself a business risk - or an opportunity for improvement!

At a deeper psychological level, lack of understanding and fear of the unknown may well be factors behind the reluctance of some business people to engage with the ISO27k implementation, the Information Security Management System and information risk management. Some of the issues we are dealing with are complex and scary even for us, let alone those without a background and professional interest in the field. Couple that with our profession's almost obsessive focus on harmful, downside risks and it's easy to see why business managers might be reluctant to engage. We're making it easy for them to drop it in the "bad news" bin, leaving it to someone else. Hopefully. Fingers crossed.

I recommend making security awareness an integral part of the ISO27k implementation project as well as the ISMS. Specifically, I'm suggesting explaining information risk and security patiently to managers and other business people using business language and concepts. I gave an example here yesterday in the piece about preparing an elevator pitch on cloud security: rather than blabbering on about virtual systems and network security, we're emphasizing the business implications of cloud-related risks and opportunities. "Cloud services can be cost-effective and reliable, provided the associated risks are treated appropriately" may be just a single sentence, but it's one-tenth of the elevator pitch, a key point worth emphasizing.

Wednesday 24 October 2018

Cloud security elevator pitch

Imagine that you bump into a senior manager - an executive, maybe the CEO or MD or someone else who sits at the helm of your organization - presenting you with a fleeting opportunity to communicate.

Imagine that you have concerns about the organization's approach to cloud computing - what it is doing or not doing, the way things are going, the strategies and priorities, objectives and resources, that sort of thing.

Now imagine how you might put across your concerns and interests in that moment that either just occurs (a chance meeting in the elevator, perhaps), or that you engineer in some way (maybe targeting and snaring your prey en route to or from the Executive Suite, or lunch).

What would you say?  I'm not asking 'what would you talk about' in a sweeping hand-waving cloudy sort of way but more precisely what are the few key points you want to express, and exactly how would you do that?  

The challenge is similar to writing an executive summary on a management report, or preparing the introduction and conclusion of a management presentation, essentially getting yourself in the zone to make the most of the brief opportunity. Less is more, so condensing or collapsing all the things you'd quite like to say down to those particulars that you need to say is a management skill that takes practice. It's almost triage: when the elevator doors open and your prey heads off into the distance, what message or messages do you most want to leave them with, above all else?

It's a challenge for us, too, to generate generic security awareness materials for exactly that kind of situation. What are the key issues for senior management relating to the monthly topic (i.e. cloud security for November's module)? What thoughts or impressions or action points are likely to be the most important for most if not all our clients? And how can we communicate those as efficiently and effectively as possible, as succinctly and yet poignantly as we can?

We have the luxury of time to contemplate and help prepare our clients for the possibility of that chance meeting. They have the benefit of the awareness materials as a whole, the research and thinking that goes into the awareness module as well as the 'elevator pitch' itself. In fewer than 150 words, we're encouraging them to get in the zone, prepared for whatever situation occurs - a form of contingency preparation, really. We can help them get at least one step ahead of the game, ready, set and willing to seize the moment.

Thursday 18 October 2018

Intentions to actions

"Asking for a Friend: Evaluating Response Biases in Security User Studies" is a lengthy scientific research paper exploring consumer software update behavior. Authors Elissa M. Redmiles, Ziyun Zhu, Sean Kross, Dhruv Kuchhal, Tudor Dumitras, and Michelle L. Mazurek conclude, in part, that people don't in fact update their systems as promptly as they say they do, or should do.

The study is primarily concerned with the methods used to survey human behaviors. The authors acknowledge the extensive body of scientific research concerning survey methods and common biases. In respect of discrepancies between lab tests and real-world results, they acknowledge typical reasons such as: 
  • Sub-optimal study designs;
  • Inadequate survey population sampling;
  • Cognitive biases by respondents, including a reluctance to admit to socially unacceptable behavior; and 
  • Other issues with some approaches (e.g. online surveys).
They actively countered some of the biases in this study, for example by:
  • Carefully framing and wording each survey question and the available responses (e.g. asking how respondents would advise a friend on speed of updates, as opposed to how they report their own update speeds);
  • Randomizing the question sequence;
  • Comparing online with interview-based surveys. 
My interest is more pragmatic than academic: why don't people update as promptly as they say they do, or should do? Is there anything we might do to close that gap between intention and action?

Awareness efforts (including ours!) typically emphasize the importance of rapid patching of vulnerable systems for security reasons ... but it would be helpful if our approach were even more motivational.

To be fair, it would also help if the process of patching systems was less arduous, disruptive and risky in its own right. Automating the new-version checks, patch downloading and installation reduces the effort but increases the risk, especially on today's relatively complex IT systems with numerous applications sharing, and sometimes competing for, the same resources. There's a lot to be said for the IoT-type approach, simplifying things (and things) through specialization. Why install a networked Windows or Linux PC to control an elevator when a dedicated and isolated control system can do the job with much less complexity and risk?

And one more thing: if software was better specified, designed, developed and quality-assured in the first place, there would be less need for security patches at all! Dream on.

Saturday 13 October 2018

CERT NZ goes phishing

CERT NZ (apparently) has once again circulated an email warning about phishing, containing a distinctly phishy "READ MORE INFORMATION" link. The hyperlink leads to certnz.cmail20.com with a tracker-type URL tail.

Unlike most of the intended audience, I guess, I'm cyber-smart enough to check out the whois record: the cmail20.com domain is registered to Campaign Monitor Pty Ltd of New South Wales - presumably a legitimate mass-emailing/marketing company whose services CERT NZ uses to circulate its warnings. But that's not the point: the embedded link target is patently not CERT NZ's own domain.

What's more, the body of the email is a rather vaguely-worded warning, not entirely dissimilar to many a classic phisher's. "Nasty stuff is going to happen unless you do something" just about sums it up. It isn't even addressed to me by name, even though I was required to supply my name and email address when I signed up for CERT NZ's "updates". They know who I am.

I've notified CERT NZ about this kind of thing privately before, to no avail, so this time around I'm going public, here on the blog.

CERT NZ, you are perpetuating the problem. Wake up guys! It's simply not good enough. I expect more of you. Your sponsors, partners and taxpayers expect more of you. NZ expects more of you.

Is it really that difficult to either drop the marketing tracking, or at least to route clickers via cert.govt.nz first, with a redirect from there to the tracker?
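
For what it's worth, such a redirect is trivial to implement. Here's a minimal sketch (mine, purely illustrative: the /r path, the port and the tracker URL shape are all hypothetical) of a first-party redirector that the emailed links could point at:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical first-party redirector: awareness emails would link to
# https://cert.govt.nz/r?id=<campaign-id> so recipients only ever see
# the official domain; the server then forwards them to the tracker.
TRACKER = "https://certnz.cmail20.com/t/"   # illustrative tracker base

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        campaign_id = query.get("id", [""])[0]
        self.send_response(302)                      # temporary redirect
        self.send_header("Location", TRACKER + campaign_id)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()
```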

Is there nobody in CERT NZ with sufficient clue to appreciate and respond to such an obvious concern? 

Am I wasting these bytes? Hello, CERT NZ! Anyone home?

Ironically, CERT NZ has allegedly been promoting the past five days as "Cyber Smart Week 2018", which as far as I can make out appears to consist of a single web page on CERT NZ's website expanding a little on these four simple tips:
  1. Use unique passwords
  2. Turn on 2FA
  3. Update your apps
  4. Check your privacy
Admirably brief ... but there's nothing explicit about phishing or business email compromise, nor about social engineering, scams and frauds. No obvious links to further information.

Ironically, again, the Cyber Smart page ends: 
"Report any cyber security issue you experience to CERT NZ. We’ll help you identify it and let you know what the next steps are to resolve it. We’ll also use the information to create advice and guidance for others who might be experiencing the same issue."
Been there, done that, got precisely nowhere. I despair.

Next time I receive a phishing-like email from CERT NZ, I'll take it up with the news media. Maybe they care as much as me.

Little boxes, little boxes

In preparation for a forthcoming security awareness and training module on business continuity, I'm re-reading The Power of Resilience by Yossi Sheffi (one of the top ten infosec books I blogged about the other day).

It's a fascinating, well-written and thought-provoking book. Yossi uses numerous case studies based on companies with relatively mature approaches to business continuity to illustrate how they are dealing with the practical issues that arise from today's complex and dynamic supply chains - or rather supply networks or meshes.

Risk assessment is of course an important part of business continuity management, for example:
  • Identifying weak, unreliable or vulnerable parts of the massive global 'system' needed to manufacture and supply, say, aircraft or PCs;
  • Determining what if anything can be done to strengthen or bolster them; and 
  • Putting in place the necessary arrangements (controls) to make the extended system as a whole more resilient.
Yossi covers the probability plus impact approach to risk analysis that I've described several times on this blog, with (on page 34) a version of the classic Probability Impact Graph:

[Figure: the example Probability Impact Graph from page 34 of The Power of Resilience]

The dotted lines divide the example PIG into quadrants, forming the dreaded 2x2 matrix much overused by consultants and politicians. He discusses more involved versions, including the 5x5 matrix used by 'a large beverage company' with numbers arbitrarily assigned to each axis: not the obvious 1, 2, 3, 4, 5 linear sequence but (for some barely credible reason) 1, 3, 7, 15 and 31 along the impact axis and 1, 2, 4, 7 and 11 for likelihood or probability, with the implication that they then multiply the values to generate their risk scores.

That appears straightforward but is in fact an inappropriate application of mathematics, since the numbers are not cardinal numbers or percentages denoting specific quantities but category labels (ordinals). The axes could just as well have been labeled green and red, or Freda and Fred: it makes no sense to multiply them together ... but that's exactly what happens, often.
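
To make the arithmetic concrete, here's a sketch of my own (the axis values are as reported; the code is not from the book) computing the score matrix those labels produce:

```python
# Multiplying arbitrary ordinal axis labels to get "risk scores",
# as per the 'large beverage company' scheme described above.
impact_labels     = [1, 3, 7, 15, 31]   # impact axis categories
likelihood_labels = [1, 2, 4, 7, 11]    # likelihood axis categories

for likelihood in likelihood_labels:
    print([likelihood * impact for impact in impact_labels])

# The top-right cell scores 11 * 31 = 341 versus 1 * 1 = 1 at bottom
# left, implying the worst risk is 341 times 'bigger' than the least -
# a quantitative claim that mere category labels cannot support.
```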

Yossi's example PIG above demonstrates another problem with the approach: "Earthquake" is shown across the middle of the impact axis, spanning the Light and Severe categories. So which is it? If it must be in a box, which box?

The obvious response is either to shift "Earthquake" away from the boundary, arbitrarily, or to add another central category, dividing that axis into three ... which simply perpetuates the issue, since however finely the PIG's columns are drawn, some risks will still straddle the lines. Likewise with the rows.

What's more, earthquakes vary from barely detectable up to totally devastating in impact, way more range than the PIG shows. Those barely-detectable quakes happen much more frequently than the devastating ones (fortunately!), hence a more accurate representation would be a long diagonal shape (a line? An oval? A banana? Some irregular fluffy cloud, maybe?) mostly sloping down from left to right, crossing two or three of the four quadrants and extending beyond the graph area to the left and right. A single risk score is inappropriate in this case, and in fact in almost all cases, since most risks show the same effect: more significant and damaging incidents typically occur less often than relatively minor ones. We can't accurately determine where they fall on the PIG since the boundaries are indistinct, and we seldom have reliable data, especially for infrequent incidents or those that often remain somewhat hidden and perhaps totally unrecognized as such (e.g. frauds).
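
To illustrate with real-world numbers of my own choosing (not Yossi's): earthquake statistics famously follow the Gutenberg-Richter relation, under which each step up in magnitude is roughly ten times rarer, exactly the sort of long diagonal band described above.

```python
# Illustrative only: the Gutenberg-Richter relation, log10(N) = a - b*M,
# where N is the annual count of quakes of at least magnitude M.
# b is close to 1 globally; a is chosen here purely for illustration.
a, b = 8.0, 1.0

def quakes_per_year(magnitude: float) -> float:
    """Approximate annual count of quakes of at least this magnitude."""
    return 10 ** (a - b * magnitude)

for m in (3, 5, 7, 9):
    print(f"magnitude >= {m}: ~{quakes_per_year(m):,.1f} per year")

# Each extra unit of magnitude (roughly 32x the energy) is ten times
# rarer: frequency falls as impact rises, tracing a diagonal band
# across the PIG rather than sitting neatly in one little box.
```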

As if that's not enough already, the whole situation is dynamic. The PIG is a snapshot representing our understanding at a single point in time ... but some of the risks may have materially changed since then, or could materially change in an instant. Others 'evolve' gradually, while some vary unpredictably over the time horizons typical in business. Some of them may be related or linked, perhaps even inter-dependent (e.g. "Computer virus", or more accurately "Malware", is one of many causes of "IT system failure", so is it appropriate to show those as two distinct, separate risks on the PIG?).

The possibility of cascading failures is one of Yossi's core messages: it is not sufficient or appropriate to consider individual parts of a complex system in isolation - "the straw that broke the camel's back" or "the butterfly effect". A seemingly insignificant issue in some obscure part of a complex system may trigger a cascade that substantially magnifies the resulting impact. System-level thinking is required, a wholly different conceptual basis.

Given all the above complexity, and more, it makes sense (I think) to dispense with the categories and quadrants, the dodgy mathematics and the pretense of being objective or scientific, using the PIG instead as a tool for subjective analysis, discussion and hopefully agreement among the people who understand and are affected by the issues at hand. An obvious yet very worthwhile purpose is to focus attention first and foremost on the "significant" risks towards the top right of the PIG, plus those across the diagonal from top left to bottom right, while downplaying (but not totally ignoring!) those towards the bottom left.

That's the reason our PIGs have no specific values on the axes and no little boxes: just a variety of sizes and shapes of text indicating the risks, overlaid on a background simplistically but highly effectively colored red-amber-green. We're not ignoring the complexities - far from it: we're consciously and deliberately simplifying things down to the point that experts and ordinary people (managers, mostly) can consider, discuss and decide stuff, especially those red and amber zone risks. Are they 'about right'? What have we missed? Are there any linkages or common factors we ought to consider? It's a pragmatic approach that works very well in practice, thank you, as both an awareness and a risk management tool.

I commend it to the house.

Friday 12 October 2018

Evolving perspectives

We're slaving away this month on a set of awareness materials about the information security aspects of cloud computing - an approach that was new and scary when we first covered it just a few years back.

These days, cloud computing has become an accepted, conventional, mainstream part of the IT and business worlds. Some of the information risks have materially changed but most are simply better understood today, meaning we are better able to predict their probabilities and impacts.

Hence I am re-drawing the generic Probability Impact Graph for cloud security, shifting the identified risks around, checking and adjusting the wording and hunting for any new ones.  

Those 'new ones' include information risks that:
  • We simply didn't identify when we last performed the risk analysis - oversights, failures in our risk identification process;
  • We identified but didn't include explicitly on the PIG, most likely because we didn't understand them well enough to figure them out, thought them too trivial even to mention, or considered them to be part of the risks shown;
  • Were literally not present at the time of our original risk analysis but have come into being subsequently.
The same thing happens routinely in our field due to frequent innovation - IoT being an obvious current example. When we next revise the IoT PIG, I wonder how the picture will change and what risks we'll add to the graph that didn't even feature before?

In addition to changing information risks, the information security controls also change over time. Some are completely new, others are refined or re-purposed, and some are downplayed or retired, perhaps replaced by different (hopefully more effective!) ones. And, behind all of this, the world around us is constantly moving on. The bigger picture of society, business and culture is also shifting.

... which all makes information security and security awareness both challenging and fun. There's always something new to raise, new perspectives, new angles to explore. Never a dull moment! 

Tuesday 9 October 2018

My top ten infosec books

As a bookworm, these are my top ten information security books, the ones I have found most insightful and provocative:
  1. The Cuckoo’s Egg by Clifford Stoll – the whodunnit that first got me seriously interested in hacking and IT security. A gripping story of intrigue and perseverance.

  2. Codebreakers by Hinsley & Stripp – the extraordinary tale of WWII cryptanalysis at Bletchley Park, and its Ultra secrets.

  3. Secrets and Lies by Bruce Schneier – Bruce’s writing is always stimulating and thought-provoking. S&L was the first I read, and it stands in here for the books that followed.

  4. The Art of Intrusion by Kevin Mitnick – as with Bruce, the first book reminds me of the series. More social engineering than hacking, but ingenious nevertheless. The hacker mindset laid bare.

  5. Information Paradox by John Thorp – the book that changed my way of thinking about treating IT and information as business tools. Underpins ISACA’s Val IT framework.

  6. Managing an Information Security and Privacy Awareness and Training Program by Rebecca Herold – the book I wish I had written (and retitled!). Full to the brim with bright ideas.

  7. How to Measure Anything by Doug Hubbard – creative approaches to measuring and analysing situations that seem unmeasurable. All Doug's books are well worth studying.

  8. Security Engineering by Ross Anderson – my infosec textbook of choice, though rather outdated. Emphasizes a systematic, engineering approach to infosec.

  9. DTI Code of Practice for Information Security (BSI DISC PD003), or the Shell corporate infosec manual before that – both precursors to BS 7799 and ISO27k. A chance to think about how far we’ve come and where we are, or rather should be, heading next with security standards.

  10. The Power of Resilience by Yossi Sheffi – the business continuity book that truly explores supply chain risks and proposes pragmatic controls.
What would you suggest for my Amazon wish-list?

Tuesday 2 October 2018

Phishing awareness and training module

It's out: a fully revised (almost completely rewritten!) awareness and training module on phishing.

Phishing is one of many social engineering threats, perhaps the most widespread and most threatening.

Socially engineering people into opening malicious messages and attachments, or following malicious links, has proven an effective way to bypass many technical security controls.

Phishing is a business enterprise, and a highly profitable, successful one at that, making this a growth industry. Typical losses from phishing attacks have been estimated at $1.6m per incident, with some stretching into the tens and perhaps hundreds of millions of dollars.

Just as Advanced Persistent Threat (APT) takes malware to a higher level of risk, so Business Email Compromise (BEC) puts an even more sinister spin on regular phishing. With BEC, the social engineering is custom-designed to coerce employees in powerful, trusted corporate roles to compromise their organizations, for example by making unauthorized and inappropriate wire transfers or online payments from corporate bank accounts to accounts controlled by the fraudsters.

As with ordinary phishing, the fraudsters behind BEC and other forms of social engineering have plenty of opportunity to develop variants of existing attacks, as well as totally novel ones. We can therefore expect to see more numerous, more sophisticated and more costly incidents. Aggressive dark-side innovation is a particular feature of the challenges in this area, making creative approaches to awareness and training even more valuable. We hope to prompt managers and professionals especially to think through the ramifications of the specific incidents described, generalize the lessons and consider the broader implications. We’re doing our best to make the organization future-proof. It’s a big ask though! Good luck.

Learning objectives

October’s module is designed to:
  • Introduce and explain phishing and related threats in straightforward terms, illustrated with examples and diagrams;
  • Expand on the associated information risks and controls, from the dual perspectives of individuals and the organization;
  • Encourage individuals to spot and react appropriately to possible phishing attempts targeting them personally;
  • Encourage workers to spot and react appropriately to phishing and BEC attacks targeting the organization, plus other social engineering attacks, frauds and scams;
  • Stimulate people to think - and most of all act - more securely in a general way, for example being more alert for the clues or indicators of trouble ahead, and reporting them.
Consider your organization’s learning objectives in relation to phishing. Are there specific concerns in this area, or just a general interest? Has your organization been used as a phishing lure, maybe, or suffered spear-phishing or BEC incidents? Do you feel particularly vulnerable in some way, perhaps having narrowly avoided disaster (a near-miss)? Are there certain business units, departments, functions, teams or individuals that could really do with a knowledge and motivational boost? Lots to think about this month!

Content outline