Saturday 30 March 2019

Spotting incidents


‘Spotting incidents’ is the brand-new security awareness and training module for April.

It concerns vigilance, early detection and (where appropriate) prompt reporting of a deliberately diverse and open-ended set of information-related incidents, concerns and risks ... 

Whether you consider them to be incidents or not, suspicious activities and near-misses are also worth reporting if ‘early warning’ is something you and your management would appreciate. Nasty surprises are, well, nasty.  The sooner you know about trouble on the horizon, the more options you have, not least the possibility of deftly avoiding the minefields ahead.

Scope

The awareness module concerns two critical early steps that kick-start the incident management cycle:
  • Spotting incidents - being sufficiently alert and informed to notice that something is, or might be, amiss;
  • Reporting incidents - promptly telling someone in a position to assess and respond.
We have covered the remainder of the incident management process before and will do so again - in fact every single awareness module concerns incidents, since they are the very reason that information risks are of concern and information security is necessary. 

Learning objectives

‘Spotting incidents’ is about identifying and reporting a wide range of information security-related incidents:
  • For the general staff audience, the awareness and training materials emphasize vigilance and diligence.  Simply put, we’re encouraging people to watch out for and report more stuff, as well as to respond directly to threats (e.g. by not clicking suspicious links). 
  • For the management audience, the materials also cover reporting (e.g. enabling and actively encouraging staff to let management know about issues, incidents, risks, near-misses etc.) and edge forward into the analysis and response to reported incidents, including the need to disclose some incidents externally (e.g. privacy breaches).
  • For the professional audience, the materials touch on the ‘instrumentation’ of information systems and processes.  Automated flagging/alerting and logging of security-relevant events naturally complements manual reporting by IT users, yet it remains a neglected area of systems architecture and design - see the sketch below.
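As a taster of that ‘instrumentation’, here is a minimal sketch (in Python) of the kind of automated flagging that complements manual reports: it scans an authentication log for repeated failed logins and raises an alert once a simple threshold is crossed. The log location, log format and threshold are assumptions for illustration, not recommendations.

    import re
    from collections import Counter

    # Assumed pattern for failed-login entries in a syslog-style auth log
    FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+)")
    THRESHOLD = 5  # alert once an account clocks up this many failures

    def scan(lines):
        """Count failed logins per account; return accounts over the threshold."""
        failures = Counter()
        for line in lines:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
        return [(user, n) for user, n in failures.items() if n >= THRESHOLD]

    if __name__ == "__main__":
        with open("/var/log/auth.log") as log:  # assumed location
            for user, count in scan(log):
                # Alerts complement, not replace, reports from observant users
                print(f"ALERT: {count} failed logins for account '{user}'")
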
Those three streams support each other, setting workers thinking and talking about this topic and fostering the security culture in a general way. It’s a good topic for socializing security across the organization because it is relevant to, involves and affects everyone.
Think about your learning objectives in this area. What are your organization’s challenges around spotting incidents? If you are struggling to deal with the volume of incident-related reports already flowing, and are reluctant to invite yet more, you’d better get more efficient at assessing, handling and using those reports! The preferred way to cut the volume of incident reports is to improve your information security - which, in turn, depends on improving the quality and relevance, as well as the timeliness, of incident reporting.

Don’t just complain: raise your game!

As well as customizing the materials to suit your awareness branding and objectives, feel free to blend in additional content.  Use the materials in company newsletters and magazines, your intranet Security Zone, in awareness events and training courses, and for new employee induction or orientation purposes.

Wednesday 27 March 2019

Break-in news


Kaspersky has released information on Operation ShadowHammer, a malware/APT infection targeting ASUS systems with particular MAC addresses on their network adapters.
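According to Kaspersky's analysis, the trojanized updater reportedly lay dormant unless a hash of the machine's network adapter MAC address matched a hard-coded target list. Here is a minimal sketch of that kind of targeting check - the digest values are invented for illustration, and MD5 is assumed here:

    import hashlib
    import uuid

    # Invented digests standing in for the attackers' hard-coded target list
    TARGET_MAC_HASHES = {"0f343b0931126a20f133d67c2b018a3b"}

    def local_mac() -> str:
        """Format uuid.getnode()'s best-effort MAC as aa:bb:cc:dd:ee:ff."""
        node = uuid.getnode()
        return ":".join(f"{(node >> s) & 0xff:02x}" for s in range(40, -8, -8))

    def is_targeted() -> bool:
        return hashlib.md5(local_mac().encode()).hexdigest() in TARGET_MAC_HASHES

    if __name__ == "__main__":
        # On the vast majority of machines this prints 'stay dormant', which is
        # precisely why such selective implants are so hard to spot in the wild
        print("deliver payload" if is_targeted() else "stay dormant")
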

According to a Motherboard report:
"The issue highlights the growing threat from so-called supply-chain attacks, where malicious software or components get installed on systems as they’re manufactured or assembled, or afterward via trusted vendor channels. Last year the US launched a supply chain task force to examine the issue after a number of supply-chain attacks were uncovered in recent years. Although most attention on supply-chain attacks focuses on the potential for malicious implants to be added to hardware or software during manufacturing, vendor software updates are an ideal way for attackers to deliver malware to systems after they’re sold, because customers trust vendor updates, especially if they’re signed with a vendor’s legitimate digital certificate."
And that, in a nutshell, is a concern with, say, the Microsoft Windows 10 patches, pushed out at Microsoft's whim to Windows 10 users who haven't yet figured out how to prevent, or at least defer, them until they have been checked out. The same goes for Android and other operating system and application auto-updates: aside from the inconvenience of downloading and installing the patches, and the aggravation caused by the need to patch up such shoddy software in the first place, the security issue is insidious ... and yet there is also a substantial risk in not patching at all, or in delaying patches.

Rock, meet hard place.

As we know from Stuxnet, bank ATM malware and other infections, even supposedly offline/isolated computer systems and private networks are not totally immune to online attacks. As for anything permanently connected to the Internet (IoT things, for instance ... plus virtually all other ICT devices), well that's like someone grabbing the exposed end of a high-voltage power cable in the hope that it has been permanently disconnected.

The ultimate solution is to improve the quality of software substantially, in particular minimizing exploitable vulnerabilities, which implies simplifying and formalizing the design and coding. Unfortunately, that goal has eluded us so far and, to be frank, seems unattainable in practice. Therefore we're stuck with this mess of our own creation. Automation is wonderful but we can't trust the robots.

Monday 25 March 2019

Awareness supports incident management


ITU-T X.1056 "Security incident management guidelines for telecommunications organizations" includes the following little nugget:


Well said ITU-T!

The idea of incorporating information about the organization's own incidents into the awareness program is something we suggest almost every month in the train-the-trainer guides for each security awareness module. Actual incidents naturally resonate with the audience, all the more so if they affected the organization directly. 

Saturday 23 March 2019

Business continuity lessons from Fukushima

As far as incidents go, a core meltdown at a nuclear power plant is about as big as they come. This afternoon, I've been reading an official US report into the Fukushima incident following the Sendai tsunami eight years ago this month. "Lessons Learned from the Fukushima Nuclear Accident for Improving Safety of U.S. Nuclear Plants" is an excellent treatise on the incident, published just over three years afterwards.

As you would expect from a formal report, the style is matter-of-fact, describing the sequence of events that unfolded as the tsunami struck, the plant was terminally damaged, the electrical supplies and hence the monitoring, control and communications systems all failed, and the operators went to heroic lengths to shut down all the units. The scenario was so extreme that the well-practiced emergency operating procedures and fail-safe controls proved inadequate, leaving the operators firstly struggling to determine what was going on inside the reactor buildings and the cores, and secondly almost powerless to keep things under control.

This paragraph from chapter 4 in particular stands out for me:
"Accidents frequently involve a confluence of interacting faults resulting in situations that have not been previously anticipated, placing a premium on the ingenuity and adaptability of plant personnel. In the committee's judgment, the personnel at the Fukushima Daiichi nuclear plant showed courage and resilience in responding to the March 11, 2011, accident under extraordinarily difficult conditions. Their actions potentially prevented even more severe outcomes at the plant."
In other words, yes, it was a nightmare scenario, and it would have been worse still were it not for the heroes working in the plant at the time. Their resilience and resolve made a real difference when the chips were down.

This was a true contingency situation, worse than their worst-case planning and preparations. They had to make do with the limited resources available, including information, under extreme pressure. True grit.

If you work in nuclear power, I guess you are well aware of the incident, the reports and the changes arising as the lessons were learnt. There are lessons for the rest of us, too, in respect of incident preparation and management, regardless of the specific nature of the incident or the context. It is obviously and directly relevant to power stations, chemical factories and oil refineries, for example, but also in different ways to literally any organization, even individuals. For instance, severe power and communications problems literally and figuratively left people in the dark: what are your communications and emergency power arrangements in the event of a disaster? 

[Hinson tip: if you need to log in to the cloud to search your online disaster management manual for 'comms' and 'power', you've already made a huge leap of faith!]

The incident might feature in April's awareness module on 'Spotting incidents', in particular concerning those comms issues that prevented the operators, managers and authorities (both on and off-site, and not just in Japan) from finding out exactly what had happened during the incident and coordinating the response. The situation is too complex to explain simply though, so we'd need to pick out a few key points that have general appeal and value. Tasters, as it were, of the full report.

Thursday 21 March 2019

Overcoming inertia

Yesterday I wrote about a five-part strategy to increase the number and quality of incident reports. The fifth part involves making both staff and management vigilant or alert for trouble.

There is an obvious link here to the ongoing security awareness and training activities, pointing out and explaining the wide variety of threats that people should know about. Thanks to this month's awareness content on malware, for instance, workers should be in a better position to spot suspicious emails and other situations in which they are at high risk of picking up malware infections. Furthermore, they ought to know what to do when they spot threats - avoiding risky activities (e.g. not opening dodgy email attachments or links) and reporting what they have spotted.

In April we have the opportunity to take that a step further. What could or should the organization do to empower (facilitate and encourage) alert workers to report the malware threats and other concerns that they spot? What's the best way to overcome the natural reluctance to speak up that makes 'Keep calm and carry on' seem like the easy option?

There's more to that issue than meets the eye ... making it an excellent open-ended poser to raise and discuss as a group during April's awareness seminars. It brings up issues such as:
  • Trust and respect - reporters believing that their incident reports will be taken seriously and in good faith, and recipients trusting that the reporters have a genuine basis for reporting;
  • Reasonable expectations concerning the activities to investigate and address reported incidents, following established processes;
  • Barriers - the need to overcome inertia and actively encourage, not just facilitate, incident reporting.
In the speaker notes for April's management seminar and in the accompanying management briefing, we will be raising a few issues along those lines but our aim is to prompt or kick-start the discussion in the particular context of a specific customer organization, not to spoon-feed them with the whole nine yards. Each of our lovely customers is unique in terms of their business situations - their industries, locations, cultures, maturity levels, objectives, risks and so on. They got wherever they are today by their own special route, and where they are heading tomorrow is down to them. We believe incident reporting is probably a valuable part of their journey but exactly what part it plays we can't say: they need to figure that out for themselves.

Providing valuable, informative and stimulating information security awareness and training content for a wide range of customers is an 'interesting' challenge. It's the reason we deliver fully-customizable content (mostly MS Office files that customers can adapt to suit their circumstances) and try hard not to impose solutions (e.g. our awareness posters are designed to intrigue rather than tell). That said, information risk and security is clearly our passion and we make no bones about it. We are evangelical about this stuff, keen to spread the word and fire people up. It's what we do.

Wednesday 20 March 2019

A big win for security awareness

Working on the management seminar slide-deck over the past couple of days, we've developed and documented a coherent five-part strategy for improving both the speed and the accuracy of incident reporting.

The strategy mostly involves changing the motivations and behaviors of both staff and management, possibly with some IT systems and metrics changes where appropriate to support the objectives.

Elaborating on the background and those objectives explains what the strategy is intended to achieve: the slides and notes justify the approach in business terms, in effect outlining a business case. It's generic, of course, but providing it in the form of a management seminar plus supporting notes and briefings encourages customers to engage their managers in a discussion around the proposal, hopefully leading to consensus and agreement to proceed, one way or another.

The nice thing about this is that it can't really fail: the very act of management considering and discussing the proposal itself drives the improvements we are suggesting in a general manner, even if the decision is made not to proceed with the specific changes proposed. If the response from management is more favorable, the outcome will no doubt be some version of the strategy customized to suit the specific organizational context and needs, plus management's commitment to see it through.

Either way, that's a win for security awareness!

Sunday 17 March 2019

Cat-skinning

Incident reporting is a key objective of next month's security awareness module. More specifically, we'd like workers to report information security matters promptly. 

So how might we achieve that through the awareness and training materials? Possible approaches include:
  1. Tell them to report incidents. Instruct them. Give them a direct order.

  2. Warn them about not doing it. Perhaps threaten some form of penalty if they don't.

  3. Convince them that it is in the organization's interests for workers to report stuff. Persuade them of the value.

  4. Convince workers that it is in their own best interest to report stuff. Persuade them.

  5. Explain the reporting requirement (e.g. what kinds of things should they report, and how?) and encourage them to do so.

  6. Make reporting incidents 'the easy option', and not reporting harder.

  7. Reward people for reporting incidents.

  8. Something else? Trick them? Goad them? Follow up on those who did not report stuff promptly, asking about their reasons?
Having considered all of them, we'll combine a selection of these approaches in the awareness content and the train-the-trainer guide.

In the staff seminar and staff briefing, for instance, the line we're taking is to describe everyday situations where reporting incidents directly benefits the reporter (approach #4 in the list). Having seeded the idea in the personal context, we'll make the connection to the business context (#3) and expand a little on what ought to be reported (#5) ... and that's pretty much it for the general audience.

For managers, there is mileage in #1 (policies and procedures) and #7 (an incentive scheme?) ... and #8 in the sense that we are only suggesting approaches, leaving our subscribers to interpret or adapt them as they wish. Even #2 might be necessary in some organizations, although it is rather negative compared to the alternatives. 

For professionals, #6 hints at designing reporting systems and processes for ease of use, encouraging people to report stuff ... and, where appropriate, automatic reporting if specific criteria are met (sketched below), which takes the awareness materials into another potentially interesting area. If the professionals are prompted at least to think about the issue, our job is done.
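To make #6 concrete, here is a minimal sketch of reporting as 'the easy option': one call files a report, and a simple criterion triggers one automatically. The endpoint URL, payload fields and threshold are hypothetical, purely for illustration.

    import json
    import urllib.request
    from datetime import datetime, timezone

    REPORT_URL = "https://example.org/api/incident-reports"  # hypothetical

    def report_incident(summary: str, reporter: str = "auto") -> None:
        """File an incident report with a single call - no forms, no friction."""
        payload = json.dumps({
            "summary": summary,
            "reporter": reporter,
            "observed_at": datetime.now(timezone.utc).isoformat(),
        }).encode()
        req = urllib.request.Request(
            REPORT_URL, data=payload,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    def check_failed_logins(count: int, threshold: int = 10) -> None:
        # Example criterion: auto-file a report once failures pass a threshold
        if count >= threshold:
            report_incident(f"{count} failed logins in the past hour")
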

Mandatory reporting of incidents to third parties is a distinct but important issue, especially for management. The 72-hour privacy breach reporting deadline under GDPR (a topical example) is a very tough challenge for some organizations, requiring substantial changes in their approach to internal incident reporting, escalation and external reporting - and, more generally, in the attitudes of those involved, making this a cultural issue. 

Saturday 16 March 2019

Terrorism in NZ

Last evening I turned on the TV to veg-out at the end of a busy week. Instead of my favourite NZ comedy quiz show, both main national channels were looping endlessly with news of the terrorist incident in Christchurch. Well I say 'news': mostly it was lame interviews with people tenuously connected to Christchurch or the Muslim community in NZ, and fumbling interviewers seemingly trying to fill air-time. Ticker-tape banners across the bottom of the screen, ALL IN CAPS, kept repeating the same few messages about the PM mentioning terrorism, yet neglected to say what had actually happened. I managed to piece together a sketchy outline of the incident before eventually giving up. Too much effort for a Friday night.

I gather around 50 people died yesterday in the event. Also yesterday, about 90 other people died; another ~90 will die today, and every day on average, according to the official government statistics:  



This year, some 6,000 Kiwis will die of heart disease, and between 300 and 400 of us will die on the roads.  

Against that backdrop, deaths due to terrorism do not even feature in the stats, so here I'll give you a very rough idea of where we stand:

Don't get me wrong, it is tragic that ~50 people died in the incident yesterday and of course I regret that anyone died at the hands of another. But get real. The media have, as usual, blown it out of all proportion, and turned a relatively minor incident into an enormous drop-everything disaster. 

So what is it about 'terrorism' that sends the media - and it seems the entire population - into such a frenzy? Why is 'terrorism' so newsworthy? Why is it reported so badly? Who benefits from scaring the general population in this way?

Oh, hang on, the clue is in the name. Terrorism only works if we are terrified.

This looks to me like yet another example of 'outrage', a fascinating social phenomenon involving an emotional rather than rational response, amplified by the news and social media with positive feedback leading to a runaway situation. Here I am providing a little negative feedback to redress the balance but I'm sure I will be criticised for having the temerity to even express this. And that, to me, is terrorism of a different kind - information terrorism.

Thursday 14 March 2019

Carving-up the policy pie

Today being Pi Day 2019, think of the organization's suite of policies as a delicious pie with numerous ingredients, maybe a crunchy crust and toppings. Whether it's an award-winning blue cheese and steak pie from my local baker, or a pecan pie with whipped cream and honey, the issue I'm circling around is how to slice up the pie. Are we going for symmetric segments, chords or layers? OK, enough of the pi-puns already: today I'm heading off at a tangent, prompted by an ongoing discussion around policies on the ISO27k Forum - specifically a thread about policy compliance.

Last month I blogged about policy management. Today I'll explore the policy management process and governance in more depth in the context of information risk and security or cybersecurity if you will.

In my experience, managers who are reluctant or unable to understand the [scary cyber] policy content stick to the bits they can do, i.e. the formalities of 'policy approval' ... and that's about it. They leave the experts to write the guts of the policy, and even take their lead on whether there ought to be a policy at all, plus what the actual policy position should be. I rather suspect some don't even properly read and understand the policies they are asked to approve, not that they'd ever admit it!

The experts, in turn, naturally concentrate on the bits they are most comfortable with, namely writing that [cyber] content. Competent and experienced policy authors are well aware of the potential implications of [cyber] policies in their areas of specialty, so a lot of their effort goes into the fine details, crafting the specific wording to achieve [their view of] the intended effect with the least amount of collateral damage: they are busy down in the weeds of the standards and procedures, thinking especially about implementation issues and practicalities rather than true policies. For some of them anyway, everything else is dismissed as 'mere formalities'. 

Incompetent and inexperienced policy authors - well, they just kind of have a go at it in the hope that either it's good enough or maybe someone else will sort it out. Mostly they don't even appreciate the issues I'm discussing. Those dreadful policies written in pseudo-legal language are a bit of a giveaway, plus the ones that are literally unworkable, half-baked, sometimes unreadable and usually unhelpful. Occasionally worse than useless. 

Many experts and managers address each policy independently as if it exists in a vacuum, potentially leading to serious issues down the road such as direct conflicts with other policies and directives, perhaps even laws, regulations, strategies, contractual commitments, statements of intent, corporate values and so forth. Pity the poor worker instructed to comply with everything! The underlying issue is that the policies, procedures, directives, laws etc. form a complex and dynamic multidimensional matrix including but stretching far beyond the specific subject area of any one: they should all support and complement each other with few overlaps and no conflicts or gaps but good luck to anyone trying to achieve that in practice! Simply locating and mapping them all would be a job in itself, let alone consistently managing the entire suite as a coherent whole. 

So, in practice, organizations normally structure their policies into clusters around business departments such as finance, IT and HR. If we're lucky, the policies use templates, making them reasonably consistent in style and tone, look and feel, across all areas, and hopefully consistent in content within each area ... but that enterprise-wide consistency and integration of the entire suite is almost as rare as trustworthy politicians. 

That, to me, smells very much like a governance issue. Where is the high-level oversight, vision and direction? What kind of pie is it and how should it be sliced? Should 'cyber' policies (whatever that means) be part of the IT domain, or risk, or [information or IT] security, or assurance ... or should they form another distinct cluster? Who is going to deal with all those boundaries and interfaces, potential conflicts and overlaps? And how, in fact? 

But wait, there's more! Re the process, have you ever seen one of those, in practice - an actual, designed, documented and operational Policy Management Process? They do exist but I suspect only in mature, strongly ISO 9000-driven quality assurance cultures such as aerospace, or compliance-driven cultures such as finance, or highly bureaucratic organizations such as governments. Most organizations just seem to muddle through, essentially making things up as they go along. As auditors, we consider ourselves fortunate to find the basics such as identified policy owners and issue/approval status with a date! Refinements such as version numbers, defined review cycles, and systematic review processes, are sheer luxuries. As to proactively managing the entirety of the policy lifecycle from concept through to retirement, nah forgeddabahtit! 

Compliance is an example of something that ought to be addressed in the policy management process, ideally with the compliance aspects designed into and documented in the policies themselves, then supported at implementation time by associated awareness and training, metrics and activities that both enforce and reinforce compliance. Again, in practice, we're lucky if there is any real effort to 'implement' new policies: it's often an afterthought.

Finally, there's the time dimension: I just mentioned process maturity and policy lifecycle, but that's not all. The requirements and the organizational context are also dynamic. Laws, regs, contractual terms, standards and societal norms frequently change, sometimes quite sharply and dramatically (GDPR for a recent example) but usually more subtly. Statutes are relatively stable but the way they are interpreted and used in practice ('case law') evolves, especially early and late in their lifecycles - a bathtub curve. Various implementation challenges and incidents within the organization quite often lead to calls to 'update the policies and procedures', whether that's amending or drafting (seldom explicitly withdrawing or retiring failed or superseded policies!), plus there's the constant ebb and flow of new/amended policies (and strategies and objectives and ...) throughout the business - a version of the butterfly effect from chaos theory. And of course the people change. We come and go. We each have our interests and concerns, our blind spots and hot buttons. 

Bottom line: it's a mess because of those complications and dynamics. You may feel I'm over-complicating matters and yes maybe I am for the purposes of drawing attention to the issues ... but then I've been doing this stuff for decades, often stumbling across and trying to deal with similar issues in various organizations along the way. I see patterns. YMMV. 

I'm not sure these issues are even solvable but I believe that, as professionals, we could and should do better. This is the kind of thing that ISO27k could get further into, providing succinct, generic advice based on (I guess) ISO 9000 and governance practices. 

There's still more to say on this - another time. Meanwhile, I must press on with the awareness and training materials on 'spotting incidents'.

Tuesday 12 March 2019

Pragmatic information risk management

Over the past three or four decades, the information risk and security management profession has moved slowly from absolute security (also known as "best practices") to relative security (aka "good practices" or "generally-accepted security") such as ISO27k.

Now as we totter into the next phase we find ourselves navigating our way through pragmatic security (aka "good enough"). The idea, in a nutshell, is to satisfy local information risk management requirements (mostly internal organizational/business-related, some externally imposed including social/societal norms) using a practicable, workable assortment of security controls where appropriate and necessary, plus other risk treatments including risk acceptance. 

The very notion of accepting risks is a struggle for those of us in the field with high standards of integrity and professionalism. Seeing the dangers in even the smallest chinks in our armor, we expect and often demand more. It could be argued that we are expected to push for high ideals but, at some point in practice, we have no choice but to acknowledge reality and make the best of the situation before us - or resign, which achieves little except lamely registering our extreme displeasure.

Speaking personally, my strategy for backing-off the pressure and accepting "good enough" security involves Business Continuity Management: I'll endorse incomplete, flawed and (to me) shoddy information security as being "good enough" IF management is willing to pay enough attention and invest sufficiently in BCM just in case unmitigated risks eventuate. 

That little bargain with management has two nice bonuses:
  1. Determining the relative criticality of various business processes, IT systems, business units, departments, teams, relationships, projects, initiatives etc. to the organization involves understanding the business in some depth, leading to a better appreciation of the associated information risks. Provided it is done well, the Business Impact Assessment part of BCM is sheer gold: it forces management to clarify, rationalize and prioritize ... which gives me a much tighter steer on where to push harder or back off the pressure. If we all agree that situation A is more valuable or important or critical to the organization than B, then I can readily justify (both to myself and to management, the auditors and other stakeholders) mitigating the risks in situation B to a lesser extent than for A. That's relative security in a form that makes sense and works for me. It gives me the rationale to accept imperfections.
  2. BCM (as I do it!) involves investing in appropriate resilience, recovery and contingency measures. The resilience part supports information security in a very general yet valuable way: it means not compromising too far on the preventive controls, ensuring they are sufficiently robust not to fall over like dominoes at the first whiff of trouble. The recovery part similarly involves detecting and responding reasonably effectively to incidents, hence I still have the mandate to maintain those areas too. Contingency adds a further element of preparing to deal with the unexpected, including information risks that weren't even foreseen, plus those that were in fact wrongly evaluated and only partially mitigated. Contingency thinking leads to flexible arrangements such as empowerment, multi-skilling, team working and broad capability development with numerous business benefits, adding to those from security, resilience and recovery.
My personal career-survival strategy also involves passing the buck, quite deliberately and explicitly. I value the whole information ownership thing, in particular the notion that whoever has the most to lose (or indeed gain) if information risks eventuate and incidents occur should be the one to determine and allocate resources for the risk treatments required. For me, it comes back to the oft-misunderstood distinction between accountability (being held to account for decisions, actions and inactions by some authority) and responsibility (being tasked with something, making the best of available resources). If an information owner - typically a senior manager for the department or business unit that most clearly has an interest in the information - is willing to live with greater information risks than I personally would feel comfortable accepting, and hence is unwilling to invest in even stronger information security, then fine: I'll help firstly in the identification and evaluation of information risks, and secondly by squeezing the most value I can from the available resources. 

At the end of the day, if it turns out that's not enough to avoid incidents, well too bad. Sorry it all turned to custard but my hands were tied. I'm only accountable for my part in the mess. Most of the grief falls to senior management, specifically the information owners. Now, let's learn the lessons here and make sure it doesn't happen again, eh?

So that's where we are at the moment but where next? Hmm, that's something interesting to mull over while I feed the animals and get my head in gear for the work-day ahead, writing security awareness and training content on incident detection.

I'd love to hear your thoughts on where we've come from, where we are now and especially where we're heading. There's no rush though: on past performance we have, oooh, about 10 or 20 years to get to grips with pragmatic security!

Meanwhile, here are two stimulating backgrounders to read and contemplate: The Ware Report from Rand, and a very topical piece by Andrew Odlyzko.

Friday 8 March 2019

Proofreading vs reading vs studying

In the course of sorting out the license formalities for a new customer, it occurred to me that there are several different ways of reading stuff:
  • Skimming or speed-reading barely gives your brain a chance to keep up with your eye as you quickly glance over or through something, getting the gist of it if you're lucky;

  • Proof-reading involves more or less ignoring the content or meaning of a piece, concentrating mostly on the spelling, grammar etc. with a keen eye for misteaks, specificaly;

  • Studying is a more careful, thorough and in-depth process of reading and re-reading, contemplating the meaning, considering things and mulling-over the messages at various levels. In an academic setting, it involves considering the piece in relation to the broader field of study, taking account of concepts and considerations from other academics plus the reader's own experience that both support and counter the piece, the credibility of the author and his/her team and institution, the techniques and methods used, the implications and so forth. To an extent, it involves filling-in missing pieces, considering the things left unstated by the author and trying to fathom whether there is meaning in both the gaps and the fillings;

  • Plain reading could involve the other forms shown here, or it may refer to any of a still wider range of activities including personal variants - for example, I like to doodle while reading complex pieces in some depth, typically sketching a mind map of the key and subsidiary points to help fathom and navigate the structure. I add icons or scribble cryptic notes to myself about things that catch my beady eye, or stuff I ought to explore further, or anything surprising/counterintuitive (to me). I link related issues using lines or asterisks. I highlight important points. Sometimes I just make mental notes, and maybe blog about them when my thoughts crystallize...
On top of all that, there are many different forms of information to 'read', such as:
  • The written, typed or printed word on paper (of various kinds) and/or on screen (in various formats);

  • Diagrams, pictures and figures including those mind maps, sketches and icons I mentioned plus more formalized diagrammatic representations, mathematical graphs, graphics and infographics, conceptual diagrams or 'models', videos and animations, artistic representations etc.;

  • The spoken word - presentations, seminars, lectures, conversations and many more, often supported by written content with words and diagrams plus (just as important) body language and visual cues from the people involved and the vicinity (e.g. a formal job interview situation in a stark office is rather different to a coffee-time chin-wag in a busy cafe);

  • 'Situations' - it is possible to read situations in a much more general hand-waving sense, taking account of the broader context, history and implications, even if there are no words, diagrams or even expressed language;

  • Language styles: stilted, formal language, especially that containing obscure words, terms of art and narrowly-defined meanings, is clearly different to everyday language ... or tabloid journalism ... or songs ... or casual street chat ... or ...
... which (finally!) brings me to my point. Security awareness and training content can support any or all of the above - in fact, ours does, quite deliberately. The reason is that our awareness and training content is not addressing an individual but a diverse group, a loose and mysterious collection of people in all sorts of situations. Although we identify three specific audiences (staff, management and professionals), that's really just for convenience to make sure we cover key perspectives: those are not exclusive groups (e.g. a professional manager is also 'just another employee', hence all three streams may be relevant), nor are they totally comprehensive (you, dear blog reader, are probably not yet a customer, maybe not even employed in the traditional sense, just a random person who stumbled across this blog piece).

Stirring the pot still further, an individual reader may have a preferred way of reading stuff but the details will vary according to circumstances. We expect different things when reading a contract, a newspaper or a blog, and we read them differently. We might skim-read a heading on a piece and move on, or continue reading in more depth, or make a mental note to come back to it later when we have more time and are less tired and emotional. Some of us gravitate towards the index or contents listing, the headings and subheadings, the diagrams and figures, the summary ... or flick from chunk to chunk perhaps following hyperlinks ... or simply start at the very top and work our way systematically to the bitter end. 

Bottom line: it pays to consider the readers when composing and writing stuff, especially in respect of awareness content since reading is almost entirely optional. If we don't provide value and interest to our diverse audience, and on occasions evoke an emotional or visceral response as much as a change of heart or behavior, we're going nowhere. We've lost the plot.

Oh yes, the plot ... must dash: work to do on the 'detectability' security awareness materials for April.

Wednesday 6 March 2019

New awareness topic: detectability

On the SecurityMetrics.org discussion forum, Walt Williams posed a question about the value of 'time and distance' measures in information security, leading someone to suggest that 'speed of response' might be a useful metric. However, it's a bit tricky to define and measure: exactly when does an incident occur? What about the response? Assuming we can define them, do we time the start, the end, or some intermediate point, or perhaps even measure the ranges?
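One pragmatic answer is to record explicit timestamps for each stage of an incident and derive durations from whichever pair of points you decide to measure. A minimal sketch, with assumed stage names and made-up times:

    from datetime import datetime
    from typing import Optional

    def duration(start: Optional[datetime], end: Optional[datetime]):
        # None means that stage never happened (e.g. the incident went undetected)
        return end - start if start and end else None

    incident = {
        "occurred":  datetime(2019, 3, 1, 9, 0),    # when trouble actually began
        "detected":  datetime(2019, 3, 3, 14, 30),  # when somebody noticed
        "responded": datetime(2019, 3, 3, 15, 0),   # when the response started
        "resolved":  datetime(2019, 3, 8, 11, 0),   # when it was closed out
    }

    print("time to detect: ", duration(incident["occurred"], incident["detected"]))
    print("time to respond:", duration(incident["detected"], incident["responded"]))
    print("time to resolve:", duration(incident["responded"], incident["resolved"]))

Deciding which pair of points to time is precisely the definitional choice the metric forces into the open.
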

Incidents that are highly visible and obvious to all (e.g. a ransomware attack, at the point where service is denied and the ransom demanded) are materially different from those that remain unrecognized for a long period, perhaps forever (e.g. a spyware attack), even if the underlying mechanisms are otherwise similar (very similar remote-control Trojans in both cases). Detectability might therefore be a valuable third dimension to the classic Probability-Impact Graphs used for assessing and comparing risks. 

However, that still leaves the question of how one might measure detectability. 

As is my wont, I'm leaning towards a subjective measure using a continuous scale along these lines:



For the awareness module, we'll be defining four or five waypoints, indicators or scoring norms for each of several relevant criteria, helping users of the metric assess, compare and score whatever information risks or incidents they have in mind. 

You may have noticed the implicit 'detection time' element to detectability, ranging from infinity down to zero. That's a fairly simple concept and parameter to explain and discuss, but not so easy to determine or measure in, say, a risk workshop situation. In practice we prefer subjective or relative scales, reducing the measurement issue from "What is the probable detection time for incidents of type X?" to "Would type X incidents generally be detected before or after types Y and Z?" - in other words a classic bubble-sort or prioritization approach, with which managers generally are comfortable. The absolute value of a given point on the measurement scale is almost incidental, an optional outcome of the discussion and prioritization decisions made rather than an input or driver. What matters more is the overall pattern and spread of values, and even more important is the process of considering and discussing these matters in some depth. The journey trumps the destination.
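As a rough illustration of that relative approach, here is a sketch in which risks are ranked by pairwise 'which would we spot sooner?' judgments - the input() prompt standing in for the workshop discussion - and then given indicative detectability percentages. The risks and scoring are invented:

    from functools import cmp_to_key

    risks = ["ransomware", "spyware implant", "lost laptop", "insider data leak"]

    def compare(a: str, b: str) -> int:
        # Stand-in for the workshop debate about relative detectability
        answer = input(f"Spotted sooner: [1] {a} or [2] {b}? ")
        return -1 if answer.strip() == "1" else 1

    ranked = sorted(risks, key=cmp_to_key(compare))  # most detectable first
    for i, risk in enumerate(ranked):
        score = round(100 * (len(ranked) - i) / len(ranked))  # indicative only
        print(f"{score:3d}%  {risk}")

As noted above, the percentages are merely a convenient record of the ordering that the discussion produces.
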

To those who claim "It's not a metric if it doesn't have a unit of measurement!", I say "So what?  It's still a useful way to understand, compare and contrast risks ... which is more important in practice than satisfying some academic and frankly arbitrary and unhelpful definition!" As shown on the sketch, we normally do assign a range of values (percentages) to the scale for convenience (e.g. to facilitate the discussion and for recording outcomes) but the numeric values are only ever meant to be indicative and approximate. Scale linearity and scientific/mathematical precision don’t particularly matter in this context, especially as uncertainty is an inherent and overriding factor in risk anyway. It's good enough for government work, as they say.

Finally, circling back, 'speed of response' could add yet another dimension to the risk assessment process, or more accurately the risk treatment part of risk management. I envisage a response-speed percentage scale (naturally), ranging from 'tectonic or never' up to 'instantaneous', with an implied pressure to speed up responses, especially to certain types of incident ... sparking an interesting and perhaps enlightening discussion about those types. "Regardless of what we are actually capable of doing at present, which kinds of incidents should we respond to most or least urgently, and why is that?" ... a discussion point that we'll be bringing out in the management materials. 

Malware awareness update

Malware (malicious software) has been a concern for nearly five – yes five – decades. It’s an awareness topic worth updating annually for three key reasons:

  1. Malware is ubiquitous – it’s a threat we all face to some extent (even those of us who don’t own or use IT equipment rely on organizations that depend on it);

  2. Malware-related risks are changing – new malware is being actively developed and exploited all the time, while technical security controls inevitably lag behind;

  3. Security awareness is vital to prevent or avoid malware infections, and to recognize and respond promptly and effectively to those that almost inevitably occur.

Last year, we focused on crypto-currency-mining Trojans, and it was ransomware the year before that. Both remain of concern today. That’s the thing with malware: new forms expand the threat horizon. Much like the universe, it never seems to shrink.

Developing engaging and accessible awareness and training content on the current state of malware is quite a challenge. Malware is a complicated and dynamic field, a seething mass of issues that are hard to pin down in the first place, and awkward to describe in relatively simple and straightforward terms. 

However, so long as malware risks remain significant, we can’t afford to ignore them. Luckily, generic control measures such as workers’ vigilance, patching, backups, incident management and business continuity management are appropriate regardless of the particular incident scenarios that may unfold.  

Antivirus software is part of the solution – a major part, admittedly, necessary but not sufficient. That’s one of several awareness messages this year.

I’m especially pleased with the new 12-page ‘Malware encyclopedia’. It turned out nicely, injecting a little humor into what might otherwise have been a desperately dull and depressing awareness module.