Wednesday 31 May 2017

Insecurity of Things awareness module ready

The Insecurity of Things, our latest security awareness module, is winging its way to customers this afternoon.  The zip file totals about 70 MB, containing all these goodies ...


If you can only dream of running an effective security awareness program, get in touch. We'd be happy to do the labour-intensive prep-work, leaving you the fun of interacting with your colleagues, informing and persuading them. We can get your program up and running in no time. Will you set off with the basic Information Security 101 module, the Insecurity of Things or something else from our bulging security awareness portfolio?

Don't delay!  Insecure things are proliferating like cockroaches.  

Tuesday 30 May 2017

More awareness tips


June's IoT awareness module will soon be ready for packaging and delivery. While proofreading continues, six new posters are winging their way to us and the newsletter will be completed in the next few hours. 

The primary purpose of the newsletter is to bring readers bang up to date with the current state of the art - tricky in such a fast-moving field as IoT. Having been systematically researching IoT security for quite some time though, we have amassed plenty of relevant news clips and quotable comments to weave into a coherent story.

We always try to present a reasonably comprehensive, accurate and balanced perspective on the monthly topic. Clued-up readers may spot errors or omissions and we're OK with that. If they talk things through with their less well informed colleagues (even if they poke holes in the content or disagree with us), they will be spreading awareness ... which is exactly what we want to achieve - awareness-by-proxy again. It's a no-lose situation, luckily, since try as we might we can't be experts in absolutely everything!

Another way to prompt discussion is to be outspoken or contentious. We've deliberately taken the line in the materials that IoT security is immature, so organizations should be extremely wary of using IoT, especially in business- and safety-critical situations where IoT is arguably best avoided. We suspect proponents of IoT, including suppliers of high-end things specifically designed to be secure, would see things differently. At the end of the day, it's a business decision one way or the other. Security-aware managers and professionals are more likely to make the right call on IoT than their naive peers. For a start, they appreciate that there are choices in this area, taking account of the information risks and business opportunities. They have some understanding of the background, the business and technology context. In other words, security awareness supports governance and management.

Remember this blog the next time you find yourself thinking management doesn't have a clue. Unawareness is a curable condition. Clues R us.

Monday 29 May 2017

Spiralling-in on IoT security awareness

The Insecurity of Things awareness module is nearly complete.

I thought the management stream was done and dusted over the weekend, but today, while preparing the awareness seminar for professionals, I developed a simple three-step process flow for managing IoT risks. Expanding on each of the steps, I realized the approach is strategic ... which meant re-opening and revising the management seminar and briefing to expand on the strategy, realigning the management and professional streams.

Such iterations are common for us. Developing awareness content is not a straightforward sequential or linear process - more like a spiral. Producing each item in the set forces us to consider things from the perspective of its intended audience, sometimes suggesting different angles to other awareness items. Round and round we go until the bell signals the end of month deadline and it's time to change mode: tidy things up, close off loose ends and stop forever elaborating and refining stuff in search of perfection.

Ding!  Must go.

Saturday 27 May 2017

Awareness-by-proxy

One of the IoT security issues we explore in June's awareness module is the use of compromised things as platforms for further attacks - for example not just spying on people but spreading malware or launching exploits against corporate systems and networks, including other things.  

While the preceding brief paragraph hopefully makes perfect sense to those who already have a reasonable understanding or appreciation of IoT security, it won't resonate with everyone. Although 'compromise', 'platform', 'attack' and 'exploit' are ordinary everyday English words, we're using them here in a particular context with quite specific meanings. The distinction is important in awareness because we are addressing people with varying levels of knowledge and understanding, ranging from next-to-nothing up to expert. It's fine for them to take away different things from the awareness materials just so long as they all have a reasonable grasp of the same core messages, the learning points. Those form the common ground that we hope will enable and stimulate people to chat about information security matters among themselves, thereby socializing security and ultimately behaving more securely.

One way to tackle the conundrum is to explain ourselves in writing, clarifying precisely what we really mean. That's entirely appropriate and necessary in some cases ... but if over-used the technique quickly becomes tedious*, especially for those towards the high end of the notional expertise scale. Written explanations are a useful means to explain neologisms (newly-coined words) as you can see here. Written content suits people who enjoy reading, contemplating and learning. It is hard to write about complex topics and nebulous concepts (of which there is no shortage in this field, 'security awareness' for instance!), and especially challenging to write clearly for significant segments of the awareness audience who don't really enjoy or have the time to get into this stuff. After all, that's the very reason we are into awareness! 

Another approach would be to explain what we really mean in person, interacting with the audiences (whether individually or in groups), empathizing and responding to their body language (such as puzzled looks) as well as addressing their vocalized questions and comments. Face-to-face interaction is a very powerful and effective way to communicate, making it the most valuable awareness-raising technique. However since we can't personally interact with our customers' workers on a regular basis, we provide customers with the content and motivation to do it themselves ... and that's where things get really interesting. We're doing awareness-by-proxy.

Aside from conventional written awareness materials, we find graphics extremely useful because:
  • They are visually appealing, stimulating and engaging, especially for those who don't enjoy or need a break from reading, or indeed talking ('death by PowerPoint' can be an issue for the presenter as well as the audience!);
  • They are universal, unlike English: complex technical documentation can be especially tough going for those who aren't fluent English speakers;
  • They succinctly express a huge amount of information, not just the literal content but also those nebulous concepts I mentioned, plus relationships within and beyond the topic area;
  • It is straightforward for us to emphasize important stuff and down-play other aspects through judicious choice of images, sizes, colors, juxtaposition and overlays such as words, boxes, lines and arrows;
  • They prompt the audience to ponder the topic and internalize the points we've emphasized (hopefully!);
  • They are interpreted live, in real time, by both the presenter and the audience: the intended learning points come across at a minimum, but there is far more latitude here than with descriptive text. The particular organizational and social context is often important, such as when someone draws parallels with IoT incidents they have personally experienced.
Here's an illustrative example (literally!) - an awareness image used as a PowerPoint slide concerning the use of things as attack platforms, or jumping-off points:


There are just 5 words overlaid on the slide and even they aren't strictly necessary if the seminar facilitator understands the message, points out the constituent parts and explains their meaning ... which I'm not going to do for you now. See what you make of it!

You've probably noticed a similar approach with the awareness poster thumbnails scattered throughout this blog. 

With very few words, the poster images are meant to make people puzzle over the meaning, thinking for themselves and chatting with their colleagues. 

We're explicitly aiming to catch their imaginations, stimulate contemplation and encourage discussion.

The other awareness materials and activities help fill-in-the-gaps so we don't feel the need to explain everything on the posters. In fact that kind of spoon-feeding would be counterproductive.

Along similar lines, we use Visio graphics quite a lot, including mind-maps and diagrams, PIGs for instance.

But that's more than enough words from me for today. Something for you to ponder over the weekend?


* It's ironic that this blog is so wordy. Sorry. [Note to self: cut the words, boost the graphics! Explore vlogging maybe?]  

Friday 26 May 2017

Insecurity of Things sit-rep

We're turning the corner into the final straight for June's awareness module on IoT security:

I'll take some time off at the weekend, recharging my built-in lithiums ready for a photo finish next week. 

This module looks like it will go to the line on Wednesday May 31st ... and we may even need to refer to UTC rather than NZ time to hit our deadline, one of the advantages of being just to the West of the international date line.

Must go, things to do, awareness to raise.

Thursday 25 May 2017

Peeling tiddles

Ours is not the only subject area that benefits from awareness in a corporate context. Typical organizations run several awareness programs, initiatives or activities in parallel, hopefully covering information risk and security (or security, IT security, or cybersecurity, or whatever they call it) plus:
  • IT/tech awareness;
  • Privacy awareness, and other compliance awareness concerning both external legal/regulatory and/or internal policy/strategy obligations;
  • Health and safety awareness;
  • Project and change awareness (e.g. new business initiatives, new systems, new ways of working ...);
  • Commercial/business/corporate awareness;
  • Strategy/vision/values awareness;
  • Brand/marketing/competitor/industry awareness;
  • Risk awareness;
  • Fraud awareness;
  • Financial/accounting awareness;
  • Management awareness;
  • Human Resources awareness, including discrimination, employment practices, motivation, team working, violence in the workplace, disciplinary processes, capability development, stress management etc.
I've called them all "awareness" but in practice they may be known as "training" or "education" or "information" or "support" or "mentoring" or "competence enhancement". Aside from the obvious subject matter differences, they also vary in terms of:
  • The audiences (e.g. managers and/or staff, company-wide or specific sites, departments, teams or individuals);
  • The delivery mechanisms (e.g. courses, meetings, seminars, lectures, Intranet content, leaflets, one-on-one ...);
  • Formats and styles of material;
  • Push and/or pull (e.g. information gets disseminated out to the audience, or is available on request from audience members, or both);
  • The timing (e.g. one off, annual, quarterly, monthly, weekly, daily, ad hoc/sporadic);
  • The learning objectives (e.g. strict compliance may be a primary or secondary goal: there may be business or personal objectives too). 
So far, I've only mentioned the typical corporate environment but awareness is a far broader concern. For example, there are many government-led public awareness activities ongoing, most but not all relating to compliance (e.g. tax, speeding, health, schooling), and several industry, focus-group and commercial awareness activities (not least the enormously active field of marketing, advertising and promotion).

Thinking about the above, it's obvious that there are many ways to skin a cat and many cats to skin ... which hints at two approaches to advance the practice of security awareness:
  1. There are clearly loads of ideas out there on how to 'do' awareness with an enormous variety of approaches in use right now. A little research will reveal many nuances and variants, including ideas stemming from the underlying psychology of education, influence, motivation and coercion, and creative approaches (such as social media, a massive growth area for at least the past decade - this very blog for example). Would you consider exploring and maybe trying some of them out? If not, is that because you are stuck in the groove, doing the same old stuff time after time through habit or because you (or your boss and colleagues) lack imagination, or are there other reasons/excuses (such as lack of time and budget)? How about starting small with little changes, maybe experimenting with new formats or delivery processes?

  2. Many of the ongoing parallel awareness activities share common ground, hence they could usefully be aligned and coordinated to make the most of their pooled resources ... except this is very rare in practice: it's as if every awareness team or person is selfishly pursuing their own goal. Some even talk of 'competing for head space', making this a competitive rather than cooperative activity. Why is that? 
Coordinating and collaborating on awareness is something that fascinates me. In our own little way, we actively encourage customers to liaise with their professional colleagues who share an interest in the monthly topic - for example, May's email security awareness topic is of direct interest and concern to the IT department. The idea of collaborating with awareness and training colleagues on a much broader level suggests forming and exploiting social networks, and tapping into other fields of interest such as advertising and education. Innovation is an excellent way to stave off boredom and improve the effectiveness of your security awareness program.

Wednesday 24 May 2017

The risk of false attribution

News relating to the WannaCry incident is still circulating, although a lot of what I'm reading strikes me as perhaps idle speculation, naive and biased reporting, politically-motivated 'fake news' or simply advertising copy.

Take for instance this chunk quoted from a piece in Cyberscoop under the title "Mounting evidence points to North Korean group for global ransomware attack":
"In the aftermath of a global ransomware attack, which impacted more than 300,000 computers in over 150 countries, a small, select group of security researchers announced they had found evidence suggesting a group previously linked to the North Korean government was likely behind the international cyber incident. Their theory gained new found credibility Monday when U.S. cybersecurity firm Symantec said it too discovered “strong links” between WannaCry ransomware and the so-called Lazarus Group."
Cybersecurity incidents such as WannaCry are often blamed on ("attributed to") certain perpetrators according to someone's evaluation of evidence in the malware or hacking tools used, or other clues such as the demands and claims made. However, the perpetrators of illegal acts are (for obvious reasons) keen to remain undercover, and may deliberately mislead analysts by seeding false leads. Furthermore, attacks often involve a blend of code, tools, techniques and services from disparate sources, obtained through the hacking/criminal underground scene and used or adapted for the specific purpose at hand. 

It's a bit like blaming the company that made the nails used in the Manchester bombing for the attack. No, they just made nails.

Monday 22 May 2017

Updating trumps writing from scratch


Ticks are rapidly infesting the contents listing as the Insecurity of Things awareness module falls into place.  

I've just updated the ICQ (Internal Controls Questionnaire - an audit-style checklist supporting a review of the organization's IoT security arrangements) that we wrote way back in August 2015 - eons ago in Internet time. On top of the issues raised then, we've come up with a few more (e.g. ownership of things plus the associated information risks and the health and safety implications in some cases). 

Updating the ICQ took about half an hour, whereas writing it from scratch in the first place must have taken several hours plus the research and prep time, neatly illustrating the value of our awareness content. Customers are welcome, indeed actively encouraged, to customize the materials to suit their circumstances and awareness needs, saving them many hours of work in the process - hopefully freeing them up to focus on the awareness activities, such as delivering seminars, interacting face-to-face with their colleagues, and explaining and expanding on the content in the specific context of their organizations.

It's a similar story with the FAQ. Using the 2015 version as a starting point, updating it for 2017 was straightforward, for instance replacing a paragraph on an early IoT security incident with a recent example. Job done in about 20 minutes ... and on to the next item on the virtual conveyor belt.

It doesn't work for everything though. I usually start the seminar slide decks from scratch, building up the story of the day. If I'm lucky, I might be able to re-use a few of the original slides, or at least the graphics and notes. Also, newly introduced types/formats of awareness material (such as the word clouds and puzzles) need to be prepared afresh.

Sometimes we re-scope a module, focusing on different angles or blending topics and further complicating matters for ourselves. On the upside, I'm easily bored so new challenges are invigorating, within reason anyway. The month-end delivery deadline can be a millstone.

Sunday 21 May 2017

Lame email scam

This plopped unceremoniously into my inbox today:

It's hard to imagine anyone falling for such a lame appeal ... but then perhaps the scammer's real aim was to be blogged about, and I've been phooled.

I presume neither "Gilda Ancheta" nor uhn.ca (the University Health Network based in Toronto, Canada, apparently) have anything to do with this email, especially as the reply-to address (not shown above but embedded in the email header) is [somebody]@rcn.com

I've forwarded the message to abuse@rcn.com.  Tag!

Saturday 20 May 2017

More biometric woes


In the course of a routine eye checkup yesterday, the optician took and showed me high-definition digital images of both my retinas. Fascinating! 

This morning while in the dual-purpose creative thinking + showering cubicle, I idly wondered about the information risks. Could I trust the optician to have properly secured their systems and networks, and to have encrypted my retinal images to prevent unauthorized disclosure? If not, what impact might such disclosure cause, and what are the threats? 

I don't personally use retina-scanning biometric authentication, and I seriously doubt anyone would be desperate enough to steal and use my retinal images to clone my identity (given other much easier ways to commit identity fraud) so I'm not that fussed about it - it's a risk I'm willing to accept, not being entirely paranoid. 

I'm curious about the risk on a wider level though: are opticians and other health professionals adequately securing their systems, networks, apps and data? Do they even appreciate the issue? It's far from a trivial consideration in practice.

The risks would be different for people such as, say, Mr Trump who might actually be using retina or iris images or other biometrics for critically important authentication purposes. I wonder whether the associated biometric data security and privacy controls are any better for such important people, in reality? Do the spooks make the effort to check? What stops someone taking high-res close-up photos of Donald's iris or finger or palmprints, or high quality audio recordings of his voice, or video recordings of his gait and handwriting or typing, or picking up one of his hairs for DNA analysis, perhaps in the guise of the press corps, a doting fan or a close confidante? Inadvertent disclosure is an issue with biometrics, along with the fact that they cannot be changed (short of surgery) ... so the security focus shifts to preventing or at least detecting possible biometric forgeries and replays, taking us right back to the issue of false negatives that I brought up a few short hours ago.

Friday 19 May 2017

SHOCK! HORROR! Biometrics not foolproof!


A BBC piece about the fallibility of a bank's voice recognition system annoyed me this evening, with its insinuation that the bank is not just insecure but incompetent.

The twin journalists are either being economical with the truth in order to make a lame story more sensational, or are genuinely naive and unaware of the realities of ANY user authentication system. This is basic security stuff: authentication systems must strike a balance between false negatives and false positives. In any real-world implementation, there are bound to be errors in both directions, so the system needs to be fine-tuned to find the sweet spot between the two, which depends, in part, on whether the outcome of false negatives is better or worse than that of false positives.  It also depends on the technology, the costs, and the presence of various other, compensating controls which the journalists don't go into - little things such as anti-fraud systems coupled with the threat of fraudsters being prosecuted, and the access controls that follow on from authentication.
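That tuning exercise can be sketched numerically. The toy model below sweeps a decision threshold across simulated genuine and impostor match scores, then picks the threshold minimising the weighted cost of false accepts versus false rejects. All the numbers (score distributions, the 5:1 cost ratio) are illustrative assumptions, not figures from any real bank's system:

```python
import random

random.seed(42)

# Illustrative score distributions: genuine users score higher on average,
# but the distributions overlap - hence errors in both directions.
genuine  = [random.gauss(0.75, 0.10) for _ in range(10_000)]
impostor = [random.gauss(0.45, 0.10) for _ in range(10_000)]

def rates(threshold):
    """False accept rate (impostors scoring at/above the threshold) and
    false reject rate (genuine users scoring below it)."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

def best_threshold(cost_fa=5.0, cost_fr=1.0):
    """Pick the threshold minimising total weighted error cost.
    Here a false accept (possible fraud) is assumed five times worse
    than a false reject (annoyed customer) - a business decision."""
    candidates = [t / 100 for t in range(100)]
    return min(candidates,
               key=lambda t: cost_fa * rates(t)[0] + cost_fr * rates(t)[1])

t = best_threshold()
far, frr = rates(t)
print(f"threshold={t:.2f}  FAR={far:.3f}  FRR={frr:.3f}")
```

Change the cost ratio and the sweet spot moves: that's the business call the bank is making, and exactly the trade-off the story glosses over.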

Authentication errors or failures are just one of many classes of risks to a bank. The implication that the bank is hopelessly incompetent is, frankly, insulting to the professionals concerned. Does it not occur to the journalists that it's the bank's business since, to a large extent, they carry the costs of fraud, plus the control costs, plus having to deal with the customer aggravation that stronger controls typically cause?  

There is no recognition of the technical capability either: voice recognition may not be cutting-edge but it is advanced technology, particularly given the crappy audio quality of most phone networks. Now there's an issue worth reporting on!

Trotting out a few carefully selected, doubtless out-of-context and incomplete statements from security experts doesn't help matters either. I bet they are seething too.

This is cheap journalism, well below the standard I've come to expect from Auntie.  It's not fake news, but the thin end of the same wedge.

Insecurity of [sex] Toys

The Insecurity of Things awareness module is gradually taking shape, the staff stream in particular:

I have some ideas in mind for both the management and professional streams too, so the dearth of ticks there is not alarming.

A couple of the IoT security incidents I've come across concern hackers compromising smart sex toys, which creates a conundrum for the awareness program. Do we mention them because they are relevant and eye-opening cases, or do we ignore them because they may be inappropriate for some customers? On balance, I think we will cover them, but delicately and in ways that let customers easily remove or skip them if they are deemed too contentious (politically incorrect) for corporate communications. As with the rest of the awareness content, cutting down or customizing the content is much easier and quicker than preparing it. 

Thursday 18 May 2017

Racing to rectify an Intel backdoor

A passing security advisory caught my beady eye this morning. It warns about a privilege escalation flaw in Intel's Active Management Technology, Small Business Technology and Intel Standard Manageability hardware subsystem incorporated into some of their CPU chips, ostensibly to facilitate low-level system management.

For convenience, I'll call it AMT.

18 days ago, Intel disclosed a design flaw in AMT that creates a severe vulnerability allowing hackers to gain privileged access to systems using the Intel “Q series” chipset, either locally or through the network depending on the particular technology.

In plain English, hackers and viruses may be able to infect and take control of your Intel-based computer through the Internet. It's similar to the WannaCry ransomware situation, only worse in that they don't need to trick you into opening an infectious email attachment or link first: they can just attack your system directly.

The wisdom of allowing low-level privileged system management in this way, through hardware that evidently bypasses normal BIOS and operating system security (i.e. a kind of backdoor), is in question. In corporate environments, I appreciate the need for IT to be able to manage distributed devices, and I guess they sometimes need to handle unresponsive systems where the CPU has locked up for some reason. Fine if the remote access facility employs adequate authentication, and cannot be compromised. Coarse if not.

Anyway, moving on, evidently "Q series" chipsets installed in 2010 or later may be vulnerable. Some PCs from HP, Dell, Lenovo, Fujitsu, Acer, Asus, Panasonic and Intel are affected, plus others such as custom or home-brew systems.
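Aside from Intel's tool, a rough network-side hint is whether AMT's web interface is even reachable: it conventionally listens on TCP ports 16992 (HTTP) and 16993 (HTTPS). The sketch below probes those ports from another machine; an open port only suggests AMT is network-reachable, and the target address is hypothetical - Intel's own detection tool remains the authoritative check:

```python
import socket

# Intel AMT's web interface conventionally uses TCP 16992 (HTTP)
# and 16993 (HTTPS).
AMT_PORTS = (16992, 16993)

def amt_ports_open(host, timeout=1.0):
    """Return the list of AMT ports accepting TCP connections on host."""
    open_ports = []
    for port in AMT_PORTS:
        try:
            # create_connection raises OSError (incl. timeout) on failure
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass
    return open_ports

if __name__ == "__main__":
    hits = amt_ports_open("192.168.1.10", timeout=0.5)  # hypothetical address
    print("AMT ports responding:", hits or "none")
```

A closed port is not an all-clear (the local privilege escalation path doesn't need the network), but an open one on a machine nobody intends to manage remotely is worth investigating.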

Intel have kindly released a software tool to check the vulnerability of a given system ... which means downloading and installing a program from a company that has admitted to a severe security flaw in its products - a risk in itself that you might like to evaluate before pressing ahead.

If you are willing to take chances, the tool is simple to run, generating a report like this on a vulnerable system:

Intel also released a technical guide on how to mitigate the vulnerability by disabling AMT. If the following acronym-laden paragraph doesn't put you off, it's worth reading the guide:
"Intel highly recommends that the first step in all mitigation paths is to unprovision the Intel manageability SKU to address the network privilege escalation vulnerability. For provisioned systems, unprovisioning must be performed prior to disabling or removing the LMS. Pending availability of the updated Intel manageability SKU firmware, Intel highly recommends mitigation of the local privilege escalation by removing or disabling the LMS."
If that is pure Geek, you'd best contact your IT support, or the company that supplied your PC, or Intel ... but please not me. I'm struggling to understand it myself. What is "CCM" that is evidently not disabled, and should I worry about the running microLMS service?

Wednesday 17 May 2017

Peripheral vision

Part of security awareness is situational or contextual awareness - being alert to potential concerns in any given situation or context. At its core, it is a biological capability, an inherent and natural part of being an animal. 

Think of meerkats, for instance, constantly scanning the area for predators and other potential threats.


We humans are adept at it too, particularly in relation to physical safety issues. The weird creepy feeling that makes the hairs stand up on the back of your neck as you wander down a dark alley is the result of your heightened awareness of danger triggering hormonal changes. A rush of adrenaline primes you for the possible fight or flight response. I'm talking here about reflexes acting a level below conscious thought, where speed trumps analysis in decision-making.

When 'something catches your eye', it's often something towards the edge of your visual field: peripheral light receptors coupled with the sophisticated pattern-recognition capability in your visual cortex spot changes such as sudden movement and react in an instant, before your conscious brain has had the chance to figure out what it is. 

The same innate capability is what makes it hard to swat a housefly with your hand. It sees and responds to the incoming hand by springing up and away in milliseconds. [If you use a swatter with a lattice pattern, however, its compound eye and tiny brain get confused over which way to fly - a fatal error!] 

You can probably guess where this is going. Security awareness works at both the conscious and subconscious levels. Short of radical surgery or a few million years of evolution, we can't change our biology ... but we can exploit it.

The conscious part revolves around rational thought - for example knowing that you might be sacked for causing a serious incident, or promoted for preventing one (if only!). We routinely inform, teach, instruct and warn people about stuff, encouraging them to do the right thing, behave sensibly. We hand out leaflets and briefings. We tell them to read and take note of the warning messages about dangerous links and viruses. We make them acknowledge receipt of the security policies, perhaps even test to make sure they have read and understood them. Through our security awareness service, we go a step further, prompting professionals and managers to address the information risks and implement good practice security controls. 

The subconscious part is more subtle. We don't just tell, we show - demonstrating stuff and getting people to practice their responses through exercises. We find interesting angles on stuff, using graphic illustrations and examples to open their eyes to the underlying issues.  We intrigue and motivate them, pointing out the dangers in situations that they would otherwise fail to recognize as such, removing their blinkers. We enhance their peripheral vision, and appeal to their emotions as well as their logical brains.  We make the shiny stuff glint, and make things feel uncomfortable when something isn't quite right. We like creepy. We heat topical infosec issues to make them hot, and chill good stuff to make it cool.

Consider the WannaCry incident: we couldn't predict precisely how, where or when the attack would come, but effective security awareness programs made people sufficiently alert to spot and react to the warning signs in a non-specific way. We're establishing a generalized capability, more than simply knowing about the particular nasty that happens to be ransomware ... or malware or phishing or social engineering or scams or ... whatever. 

The subconscious element is vital. If those hairs stand up when people receive dubious emails, phone calls, requests and other information, we are really getting somewhere. They still need to react appropriately, of course, which is generally a conscious activity such as don't click the link, and do call the help desk.


PS  I'm reminded of a standout line in the Faithless song, Reverence: "You don't need eyes to see, you need vision". 

Tuesday 16 May 2017

The art to policy

After the weekend's WannaCry excitement, we're pressing on with the IoT security materials.

I've been thinking about developing a model IoT security policy for the module. What policy axioms/principles and policy statements would be appropriate in this area? 

Identifying, analyzing and treating the associated information risks is a sensible, generic approach aligned with ISO27k, but the technological/cybersecurity controls typically employed in other contexts are somewhat challenging or impossible on many current-day IoT devices. Situations where the tech controls simply aren't sufficient to mitigate the risks perhaps ought to be covered as a policy matter. Giving up on IoT security and accepting the residual risks just because other options are too hard is not smart. 

Another angle is assurance. If an IoT supplier claims their thing uses strong authentication and encryption, it may or may not be appropriate to take it on trust and accept the assertions at face value, depending on the consequences of being wrong ... which again depends on the information risks and hence the context in which the thing is being used - and that triggers another thought: what happens when things change, such as new devices, new models, firmware or software patches, new applications, new business or technical situations etc.? The risks ought to be reviewed, requiring a link to the change management policy and process.

Oh, and another thing: compliance. How will compliance with the policy be achieved in practice? What stops workers from, say, casually introducing things into the corporate environment, or changing things, without bothering about the risk management formalities (perhaps because it doesn't even occur to them, due to lack of awareness)? An approach that might help here is classification and business continuity thinking: where things are used within or support highly classified or critical business activities or information, the risks are likely to be higher, so the security and process controls probably ought to be stronger than with run-of-the-mill or more trivial IoT stuff. It makes sense to develop and maintain some sort of corporate database or register of the critical things, hopefully ensuring that the accompanying risk and security activities aren't neglected.
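Such a register needn't be elaborate to get started. Here's a minimal sketch in Python (the thing names, owners, classifications and review intervals are all invented for illustration):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical classification levels, highest criticality first
LEVELS = ("critical", "important", "routine")

@dataclass
class Thing:
    name: str
    owner: str
    classification: str          # one of LEVELS
    last_risk_review: date

def review_due(thing: Thing, today: date, base_days: int = 365) -> bool:
    """Flag things whose periodic risk review is overdue.
    Critical things get reviewed more frequently than routine ones."""
    factor = {"critical": 0.25, "important": 0.5, "routine": 1.0}
    return (today - thing.last_risk_review).days > base_days * factor[thing.classification]

register = [
    Thing("Boardroom smart TV", "Facilities", "routine", date(2016, 7, 1)),
    Thing("Factory sensor gateway", "Engineering", "critical", date(2016, 12, 1)),
]

overdue = [t.name for t in register if review_due(t, date(2017, 5, 16))]
print(overdue)  # only the critical gateway is overdue for review
```

In practice this would live in a spreadsheet, asset database or CMDB rather than code, of course, but the point stands: classification drives how often each thing gets risk-reviewed.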

So, that's a reasonable starting point to develop a generic IoT security policy. Now all I need to do is open up our policy template in MS Word with the normal structure, headings and boilerplate, and flesh it out along those lines. Easier said than done! 

We should avoid the ten reasons policies fail: for example, the policy must be clear and easy to read and understand, so the language is important. 

Part of the policy-drafting challenge is to be succinct. We try to limit our example policies to about three pages in total, of which the axioms and policy statements take up about one page.

It also needs to be persuasive and motivational: there's little point in formally laying down the rules if readers disagree and/or fail to comply, which is why the template has an introductory section briefly explaining the background, justifying why the policy exists and why it is important. 

With the lyrics in mind, all that remains is to compose a catchy tune and write a smash-hit.

Monday 15 May 2017

WannaCry? We told you so

Yesterday I mentioned that I was preparing a quick update for customers in the aftermath of the WannaCry ransomware worm virus outbreak incident cyber hack nightmare (evidently I'm not sure what to call it, and neither are the journalists). 

Having taken another look at the awareness materials we delivered on this topic already - particularly the ransomware awareness module - it turns out we've said all that needs to be said, really.

For example, we used this PIG (probability impact graph) to discuss current malware risks, locating ransomware up there in the red zone:


Trust me, I haven't altered the figure. That is exactly how it was delivered at the end of February 2017. I'm not claiming to have magical fortune-telling powers, however: the graphic is based on information that was in the public domain prior to March 1st.  

All we did was to research and analyze the information, present it in an eye-catching Visio graphic, and use it in the seminar slides and briefings to draw out the key issues in the awareness module. Easy when you know how.

Sunday 14 May 2017

Carpe diem

As the dust settles after yesterday's excitement, we're putting together a quick awareness update on the ransomware incident for our subscribers.

US CERT is already on the case with a well-written, straightforward guide and advice on how to mitigate the risk.  Good stuff!

To supplement the more technical advisories already circulating, I am preparing a simple one-pager awareness briefing for general employees, plus a management briefing focusing on the information risk management, assurance and governance aspects. Our recent 'ransomwareness' module has materials we can adapt/update to reference the latest incident - an advantage of having a comprehensive library of awareness materials.

Saturday 13 May 2017

Health service ransomware incident

Reading between the lines a bit, it seems to me that, despite the scary headlines, the security controls have worked on the whole: as initially reported, the ransomware has had limited effects on a relatively small number of UK National Health Service sites. 

Without adequate information security, it could have been much worse.

The NHS is huge and complex, with lots of interconnections and interdependencies between lots of IT systems (patient records, diagnostic systems, booking/scheduling systems, life support systems, things ...), many of which are critical, across lots of sites, businesses and departments, used and managed by lots of people ... so a virulent worm carrying ransomware must be a huge threat. The vulnerabilities are obvious (well some at least!), as are the impacts, in other words this is a significant risk.

It’s another nice case study in the making, useful for anyone struggling to convince management of the need to pay attention to information risk management and invest in appropriate security controls – not just against ransomware specifically or malware, but in general. Basic security controls such as frequent, reliable offline backups, proactive security awareness, slick incident response and business continuity arrangements are our Swiss army knives.

A case study based on the NHS ransomware incident would form a bridge linking several recent security awareness topics (e.g. email security this month, plus malware in March and Internet security in January) with next month's topic, IoT security. 

Major incidents that are widely covered by the mainstream media and news outlets are like awareness dragnets, snagging workers who have very low levels of security awareness and little understanding of or interest in information security, and who are hard to engage by conventional means. Headline incidents that catch their attention, even fleetingly, give us opportunities to explain and expand a little on the information risk and security angles, firing up their imaginations and reinforcing the point that we're not just doing this stuff for the sake of it. There are real-world consequences to incidents, some of which affect them personally, plus their families, friends, colleagues and employer. 


UPDATE: reports are still coming in as I write this. Seems the incident is not limited to the UK, with health services in around 100 countries also affected.

Friday 12 May 2017

Policies don't make us secure

Here are ten reasons why security policies fail:
  1. The policies are impracticable or simply unworkable - they get in the way of doing business.

  2. They are so badly written that they literally don't make sense and aren't entirely understood.

  3. They are out of date, irrelevant, inapplicable ... and hence widely ignored.

  4. They conflict in various ways (internally, with other policies and directives or laws and regulations, with reality, with common sense, with good practice, with sound ethics etc.).

  5. People honestly don't know about them, or can reasonably deny knowledge of them, or for some reason don't believe them to be applicable.

  6. The corporate culture is neutral or even toxic towards (policy) compliance - the policies themselves perhaps being presented as mere formalities, the rulebook, red tape for appearances' sake or to satisfy the auditors.

  7. There are no actual or perceived benefits in compliance, for example little to no chance of being caught and sanctioned for noncompliance, and zero or even negative/begrudging/back-handed 'rewards' for compliance. Cynicism aside, managers and staff are inevitably juggling priorities and don't always get it right. Sometimes, finding themselves caught between a rock and a hard place, it's more a matter of striving for the least bad outcome!

  8. Some people are naturally resistant to or resent doing what they are told, especially if there is no attempt to explain why, or the explanations make no sense to them personally, or if they are facing other pressures, or if they are told in the wrong way, or are simply having a bad day.

  9. People are occasionally misled or instructed to ignore policies, for legitimate or illegitimate reasons (e.g. exemptions for business or technical purposes, to resolve conflicts, cut corners or perhaps commit fraud).

  10. Nobody actually monitors or checks for compliance and noncompliance, nor rewards the former and penalizes the latter, nor makes any real attempt to understand and grapple with the underlying issues, the root causes.

That's quite a litany of issues, yet they are all solvable, provided management has the impetus to address them. If not, well, that's #6, isn't it?  

The corporate culture (#6) is fundamental to the very concept of policies, compliance and accountability. Although some believe culture to be solely an emergent property of communities, relationships and behaviours, I believe it can be influenced, though admittedly doing so is a tough and painfully slow process. The starting point is for management to acknowledge that it both can and ought to be done - which is a job for awareness. Maybe we should add 'Security culture' to our bulging portfolio of awareness topics?

Security policies don't make us secure.  We do.  Or don't, as the case may be.

Thursday 11 May 2017

Time manglement

Yesterday my afternoon mysteriously disappeared thanks to a trip to the dentist and time spent cutting up and transporting trees felled by the recent cyclone. This morning, I've found myself distracted by the ISO27k Forum, responding to some kind person wanting to donate content to the ISO27k Toolkit, proofreading and commenting on the glossary section of the NZ government information security manual, and drafting a new version of my paper on building the business case for an ISMS. All those activities are ongoing and need more of my time. I've also been 'attending to business' - running the company - and catching up with emails. I just rescued a goat with its head stuck through the deer fencing. Again. 

Time is my most valuable resource. Multi-tasking is the norm as I try to squeeze more things into less time ... thinking about stuff and eating my lunch as I update this blog, for instance.

I realise I'm not alone in that. We all lead busy lives today, even retired people I gather. Juggling competing priorities, struggling to focus, dealing with stress and trying to be more efficient is an ongoing concern. 

I'd take a time management course, except I don't have the time [cue groan].

The same is true of our customers, the information security awareness professionals and their colleagues. Done well, security awareness slots neatly into the daily grind, exploiting odd spare moments between activities. Short, succinct and pithy "byte-sized" chunks of information and awareness posters are designed to catch people's attention and trigger thoughts, hopefully in a way that resonates with them, sticks in mind and influences their future activities and decisions. Unlike training courses, we can't rely on them giving us their full attention, even for an hour. Ten minutes or more is a luxury.

So, with that, I'll leave it there. Must press on. Things to do.

Wednesday 10 May 2017

Getting our teeth into the module

Thinking up creative yet practical graphic designs for six awareness posters was particularly tough this month. The information risk, security and related issues with IoT are not easy to express pictorially.

One approach that sort of worked, in the end, was a play on words. Previously I mentioned the 'Insecurity of Things' working title for the next awareness module, a phrase that will appear on the red wax-seal blobs that brand all our posters. Along similar lines, I've come up with poster ideas around the Internet of Nothing, Anything, Something or Everything. Whether those will actually work out in practice is hard to say: mostly it depends on whether our graphics wizards can come up with appropriate images. We rely on their artistry and some appreciation of the topic area. We'll see, literally.

Another issue we're grappling with right now is to identify changes in the IoT risk and security domain since we first covered this awareness topic two years ago. IoT was all new and fresh back then: is it any more mature or secure today? Honestly, I'm not sure at this point. The IoT marketing hype seems to have subsided, leaving behind ... nothing particularly distinctive, at least not much in the way of IoT offerings from high-street and online retailers. Perhaps things are moving along in other markets, such as health, building and factory things?

Meanwhile there have been few IoT security incidents of note - mostly denial of service attacks and privacy breaches. Critical infrastructures haven't collapsed, and as far as I know we haven't experienced an epidemic of smart pacemaker hacks or malware-infested Fitbits. A consequence of our professional expertise is that we tend to overstate the risks and overlook the opportunities, which may be one of the deeper security awareness messages to explore for June. Perhaps.

So, here we are on the tenth of the month, mulling over the scope, concepts and messages that will need to be expressed and documented by the end of the month, ready to be communicated by our subscribers to their colleagues during the next month. Although we have a few deliverables in preparation already, there's a mountain of work ahead ... as there is every month ... so this is where our well-rehearsed processes kick in.

Or at least they will do soon. First I have an appointment with a dentist. Perhaps chatting about smart dentistry things will distract me from the engineering project going on in my gob!

Monday 8 May 2017

Probability Impact Graphs

A pal put me on to the work of David Slater concerning the validity of risk matrices, heat maps and PIGs (Probability Impact Graphs).

Google found a paper by Ben Ale and David Slater on "Risk matrix basics", published in 2012 (I think) at RiskArticles.com, discussing the mathematical theory behind different kinds of PIG, e.g. whether the axes are linear or logarithmic, and whether the probability axis is cumulative or not (giving a Complementary Cumulative Distribution Function, apparently).
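To illustrate why the axis scaling matters, here's a toy sketch (the band boundaries are entirely made up) showing how the same incident frequencies land in different cells depending on whether the probability axis is divided linearly or by order of magnitude:

```python
import math

def linear_band(events_per_year: float, top: float = 10.0, bands: int = 5) -> int:
    """Frequency axis divided into equal-width bands from 0 to `top`."""
    return min(int(events_per_year / top * bands), bands - 1)

def log_band(events_per_year: float, bands: int = 5) -> int:
    """Order-of-magnitude bands: ~0.001, 0.01, 0.1, 1, 10+ events per year."""
    return min(max(math.floor(math.log10(events_per_year)) + 3, 0), bands - 1)

# A once-a-century incident and an annual one collapse into the same
# bottom band on a linear axis, yet sit two bands apart on a log axis:
print(linear_band(0.01), linear_band(1.0))  # 0 0
print(log_band(0.01), log_band(1.0))        # 1 3
```

On a linear axis, everything rare piles into the bottom row and the distinctions that matter most are lost - one reason the choice of scale is more than a cosmetic detail.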

The introduction refers to financial, environmental, health and safety, project and engineering risks. In those domains, there is a wealth of risk data concerning the frequencies of incidents and the costs, returns etc. collected over hundreds of years in relatively stable markets. However, in information risk, we're working with a paucity of data in a field that is rapidly evolving ... which is part of the reason I'm still dubious about mathematical/scientific approaches to information risks, especially concerning new technologies.

The authors acknowledge the widely appreciated value of PIGs in decision-making:
"So far we have concentrated on the historical development and original intent of Probability Impact Graphs (PIGs). We have seen that they do have a legitimate mathematical basis and that their utilization without awareness of the 'rules' can be at best misleading and at worst disastrous. But the main driver for their continued use is that, as a way of assessing the relative positioning of identified risks (from the Risk Register), in terms of qualitative seriousness (notional relative imminence and scale?), it has proved useful in stimulating discussion, awareness and even action from non specialist, but crucial decision makers in an organization."
In practice, the Analog Risk Assessment method is a useful way to analyze and communicate information risks, working nicely as an awareness-raising and decision-support tool. The fact that the axes have no explicit scales (other than low to high) and the graph has no boxes is an advantage: it avoids those distractions, letting us focus on the risks - describing them, understanding them and ranking them relative to each other on both probability and impact, then deciding how to treat them. Mathematical precision is not needed in that application ... in fact I'd go further and suggest that (apart from a few areas where we do have the data) precise numbers i.e. specific values, defined ranges or confidence limits could materially misrepresent the risks and so mislead decision makers. The way we interpret and deal with a risk that is "about here in the amber zone" is not the same as one that we believe has a given probability and impact.
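As a toy illustration of that relative-ranking idea (the risks and ratings below are invented), no numeric scales are needed to order risks against each other:

```python
# Purely qualitative, relative ratings - no percentages or dollar figures.
ORDER = {"low": 0, "medium": 1, "high": 2}

# (risk description, relative probability, relative impact)
risks = [
    ("Ransomware via phishing", "high", "high"),
    ("Lost unencrypted laptop", "medium", "high"),
    ("Tailgating into the office", "medium", "low"),
    ("Insider fraud", "low", "high"),
]

# Rank risks relative to each other: probability first, ties broken on impact.
ranked = sorted(risks, key=lambda r: (ORDER[r[1]], ORDER[r[2]]), reverse=True)

for name, prob, impact in ranked:
    zone = ("red" if ORDER[prob] + ORDER[impact] >= 3
            else "amber" if ORDER[prob] + ORDER[impact] >= 2
            else "green")
    print(f"{name:28} {prob}/{impact} -> {zone}")
```

The output is an ordering and a rough zone, nothing more - which is exactly the level of precision the decision makers actually use.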


UPDATE 31st May 2017: the US Food and Drug Administration (FDA) included a PIG in their advice concerning cybersecurity of medical things:



'Exploitability' is more-or-less equivalent to likelihood or probability of occurrence, while 'Severity of patient harm (if exploited)' means severity or impact, specifically from the perspective of the patient using the thing (there may also be impacts on the supplier of the thing plus the medical/support professionals involved in treating the patient, specifying and installing the thing etc.).


Notice their use of "controlled risk" and "uncontrolled risk" rather than the more conventional acceptable or unacceptable risk.