Thursday 29 June 2017

More than 5 years of ransomwareness

We are in the final stages of preparing July's awareness materials on "Workplace information security".  Six cool new poster designs have come in from the art department, so the staff/general employee stream is practically finished aside from proofreading. We're working hard to complete the management and professional briefings and tie up a couple of loose ends, leaving just the newsletter to prepare, right on cue. As usual, we've left it to the very end of the month to make the newsletter, and in fact the whole module, as topical as humanly possible.

The latest ransomware outbreak all over the news this week is a classic illustration of the value of our innovative approach to security awareness. 

We've covered malware at least once a year since 2003, several times in fact since malware often crops up in awareness modules covering related topics such as social engineering, identity theft, phishing, fraud, email security and cybertage. Every time through the hoop, we endeavor to pick up on emerging risks and new trends ...

I've just done a quick search of our back catalog. We first brought up ransomware way back in 2012, mentioning it in several awareness materials. It may be in the headlines now, but it's old news for us and our customers.

Here's an extract from the staff briefing on viruses delivered in February 2012:


Ransomware was an obscure issue when it first came to our notice, a risk that has grown steadily until today it is patently substantial - a real and present danger as they say. Because of that it's easy to catch people's eyes with awareness content on ransomware today, and that's great because there are clearly still organizations and individuals who have yet to get the message, unfortunately. So, in March this year, our annual malware awareness update focused almost exclusively on ransomware, an entire module dedicated to ransomwareness. 

Having said that, awareness of current risks and incidents is, in many ways, too late: employees and their employers need to be pre-warned so they have the chance to consider and address the risks before they get hit. I've said it before: forewarned is forearmed. 

If you are still running around desperately trying to cobble something together to get the word out to your employees about ransomware, or worse still simply too busy to do anything at all on this topic, we can help.

We have more than 50 MB of top-quality security awareness content on ransomware ready-to-roll, today:


There are seminar slide decks, posters, briefings, an FAQ, a test, a glossary and more - a smorgasbord of ransomwareness content from which to serve up a tasty meal for your organization. Aside from the general employee awareness stuff, there is a stream of content written specifically for management (e.g. a model policy and metrics), and another more technical stream for professionals. It's all customer-editable, so you are very welcome to adapt it to your particular circumstances and corporate comms style. No need to pay someone else a small fortune to customize it for you - do it yourself. 

Email me, now, before it's too late!

PS  What are you doing to raise awareness on workplace information security? Is it even on your risk-radar, let alone your to-do list?

Wednesday 28 June 2017

Branding security awareness

I find brands fascinating. We are immersed in a heavily branded world, surrounded and constantly bombarded by brands. They are thrust at us through advertisements and emblazoned on product packaging. Many are really quite crude and obvious - childish graphical logos in bright primary colors, simplistic tag lines, annoying jingles and endless, endless repetition. Others are far more subtle and sophisticated. The very best take subtlety to the point that we no longer appreciate we are being coerced - but we are, oh yes we are. 

Brands go well beyond the logos, jingles and taglines, taking in very diffuse perceptions about the organizations and their products in general - myriad aspects such as quality, price, reliability, innovation and, most of all, trustworthiness. Most of us are loyal to certain brands while avoiding others (brands can be liabilities as well as assets), spreading branding's influence into the social sphere as we demonstrate and discuss our preferences with friends. We even delude ourselves, quietly accepting and downplaying faults with our favorite branded products and yet pointing out even small flaws in hated brands. The prejudices run deep.

Notwithstanding that comment about liabilities, brands are extremely valuable for organizations, and not just in the commercial sphere: take any political party, for instance, or politician. Well OK there is of course a financial undercurrent but public perceptions and trust are crucial to being (re-)elected. Same thing with sports teams, even religions. Corporate departments and functions also have brands though they are seldom deliberately managed. Individuals have brands too - think of, say, Richard Branson, Kim Kardashian or Donald Trump. Regardless of what you personally make of them, merely mentioning certain well-known names without any context instantly conjures up a cloud of perceptions, beliefs and expectations, some of which have almost certainly been deliberately fabricated or manipulated by those people plus their allies and opponents. The investment is huge.

So, how does all that relate to security awareness? 

The obvious place to start is the dreaded logo. Awareness programs normally have some sort of logo - often, it has to be said, a lame one involving padlocks, chains and binary numbers. With a bit of thought and effort we can do much better than that; in fact, a challenge or contest to come up with a decent logo is itself a valuable awareness activity - something we probably ought to run to update the rather drab and lifeless ISO27001security.com logo!

But hang on a moment, what is the logo meant to express? What are the perceptions and values we'd like to associate with the awareness program? If we leap right in with a logo, we've missed out a crucial step. As I said earlier, there's more to branding, more to consider, more to plan. 

It's worth spending quality time with marketing professionals to explore and understand the entire package before designing the packaging.

Creativity can be stimulated through various activities, techniques and approaches, especially if there are naturally creative people on the team or co-opted to it - and by the way, 'the team' is itself a valuable concept in the context of security awareness. Who is or is not on the team? What draws them to want to belong and hopefully participate? Who are the opposing teams? What are the team colors? When do they get together to wave their flags, chant the team chant and hopefully celebrate success on the field? What is success, in fact? What does it look like? How does it make you feel? 

That brings us to those tag lines supporting and giving meaning to your logo. If you had to sum up information risk and security (or whatever) in a short, memorable, meaningful phrase, what are the fewest, most expressive words you can come up with? Shortlisting and deciding between your tags is another part of the branding process, another opportunity to get creative and solicit inputs from other parties. Does "cybersecurity" do it for you? How about "protecting and exploiting information" or "safety and security"? Are we focused on locking things down to prevent the badness, or setting things free to release the goodness? The subtleties of our field are worth exploring, within your organization and its culture - which is yet another angle to this, along with maturity since culture is both an emergent and an evolving concept. 

Hopefully I've got you thinking so I'll stop here and return to the day-job, but there's much more to say and I'm sure I'll come back to this later. Meanwhile, the comments are open. I'm dying to learn new tricks. Go ahead, make my day (now that's a tag line!). 

Tuesday 27 June 2017

Laptop ban [UPDATED]

One of the workplace information risk and security issues worth discussing with management is the possibility of a total ban on portable ICT devices such as laptops, tablets and smartphones by airlines, and perhaps other forms of mass public transport.

At present, some ICT devices are banned from the cabin by some airlines on some routes, but it is not inconceivable that the ban might be extended given escalating terrorism and safety threats. I presume the only reason we are still allowed to take our explosive battery packs on board at all is the inconvenience and customer dissatisfaction that would follow if portable and wearable devices were completely banned - a typical risk-reward trade-off.

As far as the security awareness program goes, whether and how a ban is extended is inconsequential: the point is to prompt the audience to think about how they would deal with that situation. It's a theoretical exercise at this stage, based on a credible scenario. What effects would it have on business travellers and information, and how might the organization respond? Considering and perhaps figuring that out now may put them ahead of the game if the ban is indeed extended. It's also a glimpse into the wider issues around workplace information security - so much more to this than just locking the office door at the end of a hard day's slog!

28 June UPDATE: Mike Elgan argues in Computerworld that we are perilously close to a total ban being imposed immediately, with zero warning, following a terrorist or blazing battery incident. Also Luke Bencie points out in Harvard Business Review that the risk of espionage increases when powerful information-rich people are separated from their ICT devices. Is this the perfect storm?

Monday 26 June 2017

Order from chaos

My physical workplace is, as usual at this time of the month, becoming cluttered with printouts and notes about the new module, vying for space with all the normal desk chaff - receipts and expenses claims, IT stuff, music CDs, crockery from lunch al-desko, and more. 

It's much the same with my virtual workplace too as my mind fills to the brim with thoughts, many part-formed, some tantalizingly close to crystallizing out while others remain chaotic. This is a curiously appropriate representation of my brain right now - or at least it would be if it were constantly shifting about:

It's time to focus on completing the materials, discarding half-baked ideas and letting go of threads that aren't likely to mature in time. It's not entirely wasteful though as the notes, threads and other memories will be there the next time we work on a related security awareness topic. 

In infosec terms, there are risks in our way of working. We're on the critical path now, so any incident that threatens completion and delivery of the next module would have a magnified impact. On top of that, the timescale pressure makes us more vulnerable to issues. 

On the other hand, we've been through this situation so many times now that I'm confident we will hit the deadline, come what may. Thanks to several years of experience, the month end is more galvanizing than stressful. We have ways of coping, well-practiced methods and, most of all, a tacit understanding in the office about priorities. We pull together to pull it together, as a team.

And being infosec pros, we have contingency plans too, just in case.

So, with that, I must get on with the day-job. But first, another cup of tea and some calming music is called for.

Saturday 24 June 2017

Weaving news into awareness


Today I've been searching for news items to illustrate the awareness materials on workplace security, particularly incidents involving corporate information. 

At first I thought maybe we have over-estimated the risks: Googling for, say, "office security" brings up stacks of news about MS Office but not so much on traditional office break-ins, fires and the like. "Commercial burglary" was a more productive search term but still not exactly overwhelming. Likewise searching for "theft from vehicle" leads to a plethora of brief police incident logs and the occasional news piece about laptops and other IT gizmos stolen from parked cars - seemingly just opportunistic thefts by druggies.

Digging a little deeper, though, I realized that those police incident logs indicate a level of crime so widespread and commonplace that it is barely newsworthy any more. Tot up all those little incidents involving theft of computers, laptops, iPads, smartphones and the like, including those that aren't even reported to the police, and the sheer scale of it is almost overwhelming. It's not that it isn't happening, so much as it is tolerated by society, expected even. We've become complacent, especially now that the technology is so cheap as to be disposable - not so the information content however.

Digging deeper still, I've been reminded of several more serious incidents reported recently - things such as the enormously disruptive incident at British Airways when a data center power problem took out their main and backup servers, plus the questions raised about Oval Office security after a Russian commercial photographer was able to enter and take pictures - and conceivably plant bugs - inside the office.

Then comes a raft of incidents involving thefts of computers from the offices of professionals such as doctors, lawyers, accountants and tax advisors. Some of these are 'reportable incidents' in that they involve loss of personal information with the potential for identity fraud on hundreds or thousands of people, raising serious questions about why the information wasn't encrypted.

In the UK, a few politicians and counter-terrorist professionals have been snapped lately by the paparazzi carrying highly confidential paperwork in plain view. Doh!

And finally the incidents involving trusted insiders such as Snowden and Manning simply walking out the door with extremely sensitive information concealed about their person ... or stored in their heads, which thought opens a huge can of worms. 

So, now we're busy weaving that little lot and more into the awareness seminars and briefings, using real-world incidents to 'tell the story' about workplace information security. It's all very well for us to blabber on about theoretical risks, but genuine incidents bring our points home with a bang. The awareness value of news reports? Priceless!

Friday 23 June 2017

Phishing myopia strikes again


A piece in Redmond Magazine, "Protecting Office 365 from Attack", caught my eye today - specifically this chunk on "User-Awareness Training" [sic]:
"One of the most effective but underutilized strategies for defending your network against malware such as Osiris/Locky is user-awareness training. Because it's impossible to catch all malware, your users are the last line of defense for your network, and they should be trained as such. Accordingly, you should implement the following user-awareness training strategies:
  • Threat awareness: Have your users take refresher courses on how to identify a phishing attempt and the importance of their participation in the fight to defend resources against malware once every quarter. Specifically, they must learn not to engage with any suspicious e-mail, report suspicious e-mail, and ensure that their endpoints are protected with anti-malware software and effective backups. It might sound simple, but many users still aren't aware of this.
  • Phishing Simulators: A very effective method of user training is the implementation of a phishing simulator. There are several free phishing simulator options available that allow you to create a simulated phishing campaign that you can send to your users. Those who fall victim to the simulation will be impacted far greater than any passive training course could ever achieve. Of course, you must obtain the proper permission from all authoritative stakeholders before pursuing this type of training."
Skimming deftly past the fact that "User-Awareness" literally means being aware of users (as in IT users, presumably, but drug users is the usual implication), the author's conflation of training (as in dog-training) with awareness makes this rather lame advice. It's superficial at best, admittedly just a small part of an article about securing Office 365 - Microsoft's answer to Google's online creative/collaborative tools.

Aside from the naive but typical myopic focus on phishing, there are so many other angles to security awareness, even in relation to Office 365 specifically, that it's hard to know where to start. FWIW here's a quick brain dump:
  • Security awareness for the managers responsible for enabling and authorizing use of online tools (e.g. helping them understand the risks and opportunities associated with various approaches and tools, the governance implications of using third party information services for business purposes, and how to measure this stuff through appropriate security metrics ...)
  • Security awareness for the technologists responsible for the associated technologies, filling-in some of the stuff they probably weren't taught at college (e.g. network security and crypto key management, logging and alerting, cloud insecurity, click-to-run automatic patching and security awareness ...)
  • Security awareness for customers, partners and other interested parties (e.g. how to spot and deal with phishing attacks using the organization's own brands, domains, people's names, project names etc. as lures ...)
  • Confidentiality, integrity and availability aspects, including incidents other than "attacks" (e.g. taking care to avoid inadvertent or inappropriate disclosure, privacy aspects such as trans-border processing, typos and outages, spotting and dealing with fraud ...)
  • Identification, authentication and access controls (e.g. online passwords, sharing files ...)
  • Business continuity (e.g. the pros and cons of online and offline toolsets, identifying critical aspects, ensuring resilience and recovery plus true contingency preparation ...)
  • Roles and responsibilities, plus accountabilities, plus compliance ...
  • Intellectual property rights, piracy And All That ...
  • Collaborative working and social engineering in general ...
  • Bugs! plus design flaws, secure development, testing, change-, version- and configuration-management ...
  • The rest of malware (just imagine the implications, for instance, if Office 365, Google Docs and/or other online office services were hijacked by doomsday ransomware that affected all their clients simultaneously - not just individual clients infected with ransomware such as Cerber ...)
Against that backdrop, do you see what I mean when I call phishing awareness myopic? Phishing is an important security awareness topic, just one of many. Ignore the rest at your peril.

Wednesday 21 June 2017

A positive spin on auditing


Over on the ISO27k Forum, a member told us about having passed an ISO/IEC 27001 certification surveillance audit with a minor nonconformity. The auditor reported that the firewall's firmware had not been updated for a year, despite a more recent release being available, and was concerned that this left the network exposed to malware such as WannaCry.

While not disputing the facts, reading between the lines, the auditee was clearly disappointed that this had been raised because the information risk does not seem significant, given that the organization has other effective controls in this area. A negative audit finding, even something as trivial as a minor nonconformity, can be hard to accept if you genuinely believe you are doing a great job. There may not be fireworks but it's a challenge, for sure, a knock to one's integrity and credibility.

Leaving aside the certification aspects for a moment, if it were me in that situation I’d be inclined to ask why the firewall firmware was not updated. Was or is there a good reason for NOT doing the update, for not addressing the information risks? 
  • Did the organization not even know there was a firmware update? If not, that points to a possible lack of communication/coordination with vendors (possibly on other platforms too) or something else. 
  • Did the organization know about the update but ignored it? Why? Was there some higher priority, or a lack of resources, a lack of policy or a broken process, or what? 
  • Did this ‘fall between the cracks’, for instance if there are several people or teams involved, each of whom thought it was someone else’s problem (hinting at a governance issue)? 
  • Did the organization know about the update, assess it and the associated information risks (which, by the way, arise from both doing and not doing the update, as well as from how and when it is done), and choose not to go ahead for a genuine business reason (e.g. the update does not address the risk)? If so, is there evidence of the assessment and risk acceptance decision, properly authorized by management? If that wasn't properly recorded/documented, maybe the process wasn't being followed correctly or maybe it needs to emphasize retaining such evidence in future. 
  • Did someone misunderstand or incorrectly assess the risk? What actual or potential consequences might that have caused? How serious is it? Does something need to be fixed here? 
  • Is the organization in fact planning to do the update at some point? That begs the classic audit response: “OK then, show me the plans and the resources allocated”! 

It would presumably be possible simply to update the firmware and close off the specific issue … but asking lots of questions in and around the area can help determine the real, underlying reasons for this little incident, and presents an opportunity to improve/mature your ISMS, which is of course A Good Thing. Taken in the right spirit, incidents (including audit comments) and near-misses are learning opportunities.
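Those diagnostic questions are easier to raise before an auditor does if overdue updates are surfaced routinely. Here's a minimal sketch, assuming you keep a device inventory with last-updated dates - the device names, dates and 90-day threshold are all illustrative assumptions, not criteria from the standard:

```python
from datetime import date

# Hypothetical inventory: device -> date its firmware was last updated.
inventory = {
    "edge-firewall": date(2016, 6, 1),
    "core-switch": date(2017, 5, 20),
    "vpn-gateway": date(2017, 6, 15),
}

def overdue_devices(inventory, today, max_age_days=90):
    """Return device names whose last update is older than the threshold."""
    return sorted(
        name for name, last_updated in inventory.items()
        if (today - last_updated).days > max_age_days
    )

# The edge firewall, untouched for over a year, tops the list.
print(overdue_devices(inventory, date(2017, 6, 21)))
```

A report like that turns "why wasn't the firmware updated?" from an awkward audit surprise into a routine management decision, with the evidence trail to match.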

As an inherently optimistic former (reformed) IT-focused internal auditor, I heartily recommend taking nonconformities and other comments or concerns as prompts to at least openly consider and ideally make improvements. Try looking at things from the auditor’s perspective, responding positively to the audit and going a little out of your way to move things along in the right direction … unless you honestly feel the auditor is mistaken or misguided or whatever. That does happen (e.g. with naïve/inexperienced auditors, perhaps a junior obsessed with the WannaCry incident, and “jobsworth” tick-n-bash auditors who are only concerned about the tiny strip of the world they see through their blinkers) but it is unusual: be wary of going down that line, and be prepared to provide hard evidence to back up your assertions for what might turn out to be a full and frank discussion with the auditors. 

In my experience, issues like this are more to do with the organization’s evolving relationship with the auditors and appreciation of the audit rôle than with the actual findings. Also, in my experience, there are lots of little issues of this nature in every organization: auditors are spoilt for choice! Usually, auditors are aware of other stuff too but, for various reasons, choose to ignore them (this time around) and focus instead on a few specific issues that they feel are either significant in their own right, or are potentially valuable learning/improvement opportunities, ways to force the organization to bring deeper issues to the surface and deal with them. There’s quite an art to evaluating the findings and preparing audit reports which may not be obvious if you have never been an auditor and only see the end product. The decisions about whether the issue is reportable and if so how to report it (e.g. a major or minor nonconformance, a formal observation and maybe a recommendation, an off-the-record comment/suggestion, or merely a subtle hint in passing) are quite complex and subjective in practice.

The auditor's risks, liabilities and professional obligations are a particular concern, especially with formal external audits such as certification audits. If for whatever reason something was spotted but not reported, and it subsequently turned out to be a significant issue (e.g. if a serious malware infection or hacking incident had subsequently occurred in this case, materially harming the organization), the auditors could face some difficult questions, conceivably even legal action. They have get-out-of-jail-free cards to play concerning various theoretical and practical constraints on the audit work and their contract or terms of engagement, but still it's an awkward position to defend. 

By the way, it’s an excellent idea to build friendly professional relationships and chat to the auditors informally if you get the chance, preferably throughout the assignment. Most don’t bite and like to be consulted. Ask to see the evidence, check their understanding and risk assessment, and find out what particular aspects caught their attention. Talk through your options. Try hard to remain open-minded - suspend your disbelief and get over being affronted that they found something. Maybe they are indeed wrong ... but you might just find they are on to something (not necessarily what they think or state is the issue!), or there might be other/better ways to respond.

Tuesday 20 June 2017

Workplace infosec policies


Protecting information in the workplace is such a broad brief that we're working on 4 policy templates for the July awareness module:
  1. Workplace information security policy - concerns the need to identify and address information risks wherever work is performed, and wherever valuable information exists (not just at the office!).  This is an update to our 'office security policy'.

  2. Information retention policy - retention timescales and/or disposal criteria should be specified when information is classified, along with the security requirements for safe storage, communications and access.

  3. Information disposal policy - when information is no longer required, it may need to be disposed of securely using forensically sound techniques.

  4. Information classification policy - updated to reflect the need to specify retention and destruction requirements where applicable (e.g. if mandated in laws, regulations or contracts).
Several other information security policies are also relevant - in fact virtually all of them - but if we attempted to promote them all, the key awareness messages would be diluted and lose their impact.  Even citing all the relevant policies from those 4 would become unwieldy, so instead we pick out those few that are most important in this context.

This situation illustrates the value of a coherent and integrated suite of information security policies, designed, developed and managed as a whole. Having personally written all our policies, I appreciate not just what they say, but what they are intended to achieve and how they inter-relate. At the same time, I'm only human! Every time I review and revise the policies, I spot 'opportunities' ranging from minor readability improvements to more substantive changes e.g. responding to the effects of BYOD and IoT on information risks. Revising a policy is also an opportunity to refresh the accompanying security awareness materials, reminding everyone about the topic.

Given that the landscape is constantly shifting around us, policy maintenance is inevitably an ongoing task. So when was the last time you checked and updated yours?

Hinson tip: sort the policy files by the 'last updated' date, and set to work on at least checking the ones that haven't been touched in ages. It's surprising how quickly they become limp, lackluster and lifeless if not actually moldy like stale bread.


PS  If you have to scrabble around just to find all the policies before sorting them, well the learning point is obvious, isn't it?

PPS  No, I think it's a daft idea to have a policy on policy maintenance!

Monday 19 June 2017

Weekend report


Hey, a weekend off! The weather was fine (no rain, blue skies) so we got some outside jobs done, including removing yet another fallen tree (about the fifteenth from the cyclone in April), repairing and installing a gate and despatching a dozen fattened lambs to market.

Friday 16 June 2017

Dress down Friday


Every day is dress-down day in the IsecT office. Like most Kiwis, we much prefer comfortable clothes to formal attire such as business suits and ties. Why anyone - especially knowledge workers - would voluntarily choose to don a noose that constricts the flow of blood to their own heads is beyond me. The necktie is a bizarre fashion legacy from the seventeenth century - the very antithesis of 'smart'.

Anyway, today was a tad more laid-back than I anticipated. I got up with the very best of intentions to crack on with the module, only "stuff" occurred. 

First came a string of emails from the CSA (Cloud Security Alliance) inviting me to get involved in their work on cloud and IoT security. They are doing fabulous things and it's very flattering to be asked, except I can't afford the time to wade in. By a process known as Chinese whispers (telephone in the US), my simple, naive inquiry about their activities on IoT security got transmogrified into an offer to help out. I'd love to, but I can't, sorry.

Next came the realization that one of the websites I manage on behalf of a group I belong to had fallen into a black hole when I rebuilt the server some months ago. As I tried to recover the site, I remembered why it wasn't already running: NetObjects Fusion (possibly the worst website management software) had, once again, scrambled the site beyond repair, entailing an hour or two regenerating and reloading the site from scratch.

Then a knock at the back door from a flustered Deborah told me one of our cattle was the wrong side of a 7-wire fence ... which meant leaving the office and donning my fencing garb to retrieve the beast and repair the fence. It's winter here, hence lots of mud and a fair bit of cow poo. Good thing I wasn't wearing my best suit!

After a quick lunch al-desko, another urgent farm job popped to the very top of my honey-do list: Deborah needed my help to round up and tidy up some sheep ready to send them to market on Monday. Apparently it is due to rain later today or over the weekend, so it couldn't possibly wait. Another few hours of my working day down the Swanee.

Finally at 5 pm I returned to the sanctuary of the office to write a case study for the workplace information security awareness materials and update this blog. It is officially drink o'clock so as I write these words a large glass of plonk is helping me relax as I contemplate a predicted rainy weekend ahead, catching up on work in the office no doubt.

"I'm only happy when it rains" rings true right now.

I've worked in more than enough organizations to appreciate the frustrations of "stuff" that is not "work" in a corporate context. There are meetings, meetings about meetings, quick jobs that are anything but quick, urgent tasks which wouldn't have been urgent if only someone had listened to someone pleading to get on to it sooner, and myriad other diversions of everyday office life. Filling in time sheets was one of the low-lights of my career, especially when management complained that I was working one and a half or two standard working weeks per week, and seemed curiously upset that I insisted on accounting for "Time spent completing pointless and counterproductive office admin". Against that backdrop, a few hours fencing and chasing sheep into the yards seems quite a pleasant way to waste my day.

Thursday 15 June 2017

Nose to the grindstone



Having completed and submitted our bids yesterday, it's back to the day-job today, picking up the workplace information security awareness module where we left off.

Well it would be noses-to-the-grindstone ... except MS Office is playing up for no obvious reason, so I sit here watching the clock tick while it reinstalls, again, idly wondering why an organization the size of Micro$oft can't be bothered to put enough resources and effort into sorting out its numerous information security and quality problems properly, for once ... and so here I am an hour and much frustration later. It seems to be running, for now, sort-of: Outlook still tells me it isn't activated while the Office365 online site says "We’re still setting a few things up, but feel free to get started" (thanks a bunch: it was working until you screwed it up, M$). No clue what was wrong with it - lack of oomph in the dilithium crystals or something. Given how keen M$ is to charge us, perhaps we should send them an invoice for my wasted hour - just another in a long, long run and I'm SURE it won't be the last.

Sorry, rant over.

As I was saying, the awareness module is coming along. Given the diverse nature of the modern workplace, the information risks and associated security controls are equally diverse, hence in some ways the module is losing focus - and yet that very diversity, along with the evolution of "work", presents challenges worth exploring. As I said the other day, workers are increasingly mobile while work of all kinds is increasingly IT-enabled, so the traditional emphasis on physical office security is becoming less relevant. Simply figuring out what the organization's information assets are, plus relevant third party information assets (not least BYOD and IoT things) plus where they are located, is hard enough even before we get down to assessing and deciding what to do about the information risks.

Wednesday 14 June 2017

The periodic table of atomic controls [updated]

Many information security controls are multi-purpose, hence they could be specified in several places: several policies, plus procedures, standards, guidelines and so on. That multiplicity creates a nightmare for the ISO/IEC JTC 1/SC 27 project team trying to generate a succinct version of ISO/IEC 27002 without duplications, gaps or discrepancies in the control catalog. It’s also a potential nightmare for anyone writing corporate policies, or an opportunity depending on how you deal with it. 

My current pragmatic approach is to mention [hopefully] all the important controls in each topic-specific policy template, with a reference section that mentions other related policies, creating a kind of policy matrix. I’m still wary of gaps and discrepancies though: with 60+ policies in our matrix so far, it’s fast approaching the limit of my intellectual abilities and memory to keep them all aligned! It’s an ongoing task to review and revise/update the policy templates, without breaking links, creating discrepancies, or missing anything important.
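Keeping 60-odd cross-referencing policies aligned by memory alone is error-prone, so a trivially simple consistency check helps. Here's a minimal sketch of the idea: model the policy matrix as a mapping from each policy to the set of policies it references, then flag one-way references as candidate gaps. The policy names and references below are purely illustrative, not our actual templates.

```python
# Hypothetical policy matrix: each policy names the related policies
# listed in its reference section. All entries are made-up examples.
policies = {
    "Malware": {"Email security", "Backup"},
    "Email security": {"Malware"},
    "Backup": set(),  # oops - forgot to point back at Malware
}

def asymmetric_refs(matrix):
    """Return (a, b) pairs where policy a references b,
    but b does not reference a back - a candidate discrepancy."""
    gaps = []
    for name, refs in matrix.items():
        for ref in refs:
            if name not in matrix.get(ref, set()):
                gaps.append((name, ref))
    return sorted(gaps)

print(asymmetric_refs(policies))  # → [('Malware', 'Backup')]
```

A one-way reference isn't necessarily wrong, of course, but each flagged pair is worth a quick look during the review cycle, which beats trying to hold the whole matrix in one's head.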

My mention of ‘control catalog’ hints at a more rigorous approach: a database where every control is listed once, definitively, and then referenced from all the places that need to describe or mandate or recommend the controls. That in turn requires us to be crystal-clear about what constitutes a control. User authentication, for instance, is in fact a complex of several controls such as identification, challenge-response, cryptography, biometrics, enrolment, awareness, logging, compliance, passwords/PINs and more. Some of those are themselves complex controls that could be broken down further … leading to the ultimate level of ‘atomic controls’ or ‘control elements’. The control catalog, then, would be built around a kind of periodic table of all known atomic information security controls, which can be used individually or assembled into 'compound controls' mitigating various information risks.  
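To make the atomic/compound distinction concrete, here's an illustrative sketch of the 'periodic table' idea in code: atomic controls as the indivisible catalog entries, each with a symbol and a rough grouping, and a compound control assembled by referencing them. Every name, symbol and grouping below is a hypothetical example, not a real catalog.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicControl:
    symbol: str  # short identifier, like a chemical element symbol
    name: str
    group: str   # rough grouping, akin to a column of the periodic table

# A tiny hypothetical catalog: each atomic control listed exactly once.
CATALOG = {
    c.symbol: c for c in [
        AtomicControl("Id", "Identification", "identity"),
        AtomicControl("Cr", "Challenge-response", "authentication"),
        AtomicControl("Ky", "Cryptography", "technical"),
        AtomicControl("Lg", "Logging", "assurance"),
    ]
}

# A 'compound control' is just a named assembly of atomic controls,
# referenced by symbol rather than redefined in place.
user_authentication = ["Id", "Cr", "Ky", "Lg"]
constituents = [CATALOG[s].name for s in user_authentication]
print(constituents)
```

The point of the structure is that policies, standards and risk treatments would all reference the one definitive catalog entry, so a control can appear in many compounds without ever being specified twice.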
Extending the analogy, it would be helpful if our periodic table (or 'information security elemental control catalog' or whatever we end up calling it) had a rational structure, some sort of logical sequence with groupings of related atomic controls in much the same way that, say, the 'noble gases' are clustered together on the real periodic table, giving the colored regions. Also, the atomic controls would need to be rigorously specified, with equivalents for the atomic number and other chemical parameters. Right now, though, I can only guess at some of the parameters that might be used to group related atomic controls: I suspect a structure might emerge once the complex controls are decomposed, the constituent atomic controls are identified, and they start piling up in a big unsightly heap. These are just some of the complexities that SC27 is currently grappling with in the ongoing revision of ISO/IEC 27002.

It’s also, by the way, something where we might help out SC27 by compiling our periodic table. At the SC27 meeting in Hamilton, I tried unsuccessfully to persuade one of the project groups to set to work on that, instead of what they were proposing to do (yet another revamp of the glossary). It’s really a sizable research project, an idea for some enterprising academic, MSc/PhD student or research team maybe. It's entirely possible that someone out there is already on to it. If so, I'd love to hear about or from them. Do please get in touch.
UPDATE June 20: I published this blog item on LinkedIn to reach a wider spectrum of readers. Michala Liavaag kindly pointed out that NIST SP800-53 has a controls catalog ... but the controls listed in Appendix F are compound or complex controls, not elemental. I'm proposing to take the analysis down to the lowest level, to the building blocks from which practical controls are assembled.

UPDATE June 27: in an opinion piece in CSO Magazine asserting that ROI is the wrong metric for cybersecurity, Rick Howard says:
"The idea of first principles has been around since the early Greek philosopher days. To paraphrase Aristotle, first principles in a designated problem space are atomic. They cannot be broken down any further. They are the building blocks for everything else. They drive every decision you make."

Monday 12 June 2017

Nothing small about business


As a small business, we have to do and manage much the same stuff that any business has to do, such as:
  • Marketing, promoting and selling our products e.g. maintaining and updating our websites, preparing advertising copy etc.
  • Procurement and sales administration - licensing, invoicing etc.
  • Customer and supplier relations
  • Financial administration: budgeting, accounting, tax, expenses, pay & rations
  • HR & personal development
  • IT - hardware, software, firmware, wetware and - yes - IoT
  • Information risk and security, including awareness (golly!)
  • Strategy, governance, compliance 
  • Planning, resource allocation, prioritization
  • Market and competitor analysis
  • Research and development
  • Operations/production - working hard to make the products we sell
  • Quality assurance and quality control
  • Packaging, delivery and logistics
  • Elf'n-safety
  • Blogging and other social marketing/social media stuff
In our case these are on a smaller, simpler scale compared to, say, a multinational megacorporation, but they are no less important to the business. The key difference is that (with some exceptions, namely our elite band of trusted advisors and specialist service providers) we rely on ourselves - our capabilities, expertise and skills across all of those areas, rather than calling on departments, teams and individuals who specialize. That necessarily makes us generalists, Jacks-and-Jills-of-all-trades with the attendant practical constraints and risks. We are constantly juggling priorities to meet deadlines.

On the other hand, being personally involved with virtually everything going on means we don't have the regimented hierarchy, internal communications issues, corporate politics and so forth of larger organizations. We are glad not to suffer the enormous inertia and conservatism that plague large, mature organizations, nor the attendant overheads. We don't need to consult the rule books, check the policies and refer to the procedures to get stuff done. We can make substantial changes almost the very moment we decide to do something different, provided we have the resources - the knowledge and time mostly but also the motivation which stems from doing a good job, being respected and most of all being commercially successful. Minimal overheads help but still we need income.

One of my tasks for the past week has been to prepare bids for a couple of prospective customers against their formal Requests For Tenders (RFTs) no doubt prepared by vast teams of procurement and legal specialists over the preceding weeks or months. Whereas they were able to spread the efforts and costs of planning, preparing, reviewing, approving, issuing and administering the RFTs across several people and functions representing a tiny fraction of their organizations' total activities and costs, we have no option but to dedicate almost all of our available resources to bidding. It's disproportionately costly for us, yet we have little option if we want the business.  

We're used to squeezing a quart from the pint pot, but going for the whole gallon? Well, something has to give. With deadlines approaching and assorted jobs piling up on the side, I may be blogging less often for a while. Normal service will be resumed as soon as possible.

On the upside, the more bids we prepare, the more efficient and effective we become at doing so. At least, I tell myself that's the cunning plan that stops me becoming totally snowed-under, buried in the drift.

Saturday 10 June 2017

Beyond the cubicle


As information risks change, existing information security controls ought to be reviewed and if necessary updated. Abrupt, major changes tend to be obvious and, in mature organizations, trigger the risk review and security update process, whereas gradual, incremental changes may creep up on us unnoticed.

Working practices are evolving. We are spending less time tethered to our desk-based 'workstations' these days, and more time on the move, whether just wandering around the office from meeting to meeting, traveling between offices and other workplaces (and working on the hoof), working from temporary and makeshift workplaces or working from home (if only to avoid the tedium of commuting). 

The nature of 'work' is also evolving thanks to automation (e.g. robotics, computer-controlled machinery and IoT things) and networking (e.g. the Web plus WiFi, Bluetooth and cellular): manual labor is being supplemented or replaced by intellectual labor - we're thinking more than doing, 'working smarter not harder' as the trite saying goes. Higher-level qualifications are increasingly being demanded even for junior positions - and the impact that social change is having on those without qualifications cannot be ignored.

Talking of social change, we are interacting with expanding and diffuse social networks including people we have never met in person and who work for other organizations, as much as our close work colleagues. Physical distance is becoming less relevant, while [some] cultural and language barriers are sliding if not toppling. 

So, July's awareness module on workplace information security presents an opportunity for our customers to take stock, to consider the evolutionary changes that have already occurred plus those that are ongoing and likely to come along, from the perspective of the information risks and hence the security requirements. This is awareness in the broadest sense, opening eyes to the stuff going on beyond the cubicle. 

Workplace information security is an important awareness topic with profound implications for us and our organizations.

Friday 9 June 2017

Weaving the Web

One of the pleasures of my job is continual learning, doing my best to keep up with the field. I read loads, mostly on the Web but I also maintain a physical bookshelf well-stocked with books ... including:

Weaving the Web by Tim Berners-Lee
Sir Tim Berners-Lee recounts the original design and development of the World Wide Web in the 1980s and 90s. This is more than merely an authoritative historical account, however valuable that may be. Tim elaborates on his big dreams and deep personal philosophy that drove him to conceive and gift to humanity the most powerful information technology invented - so far. 

62 years ago when Tim was born (happy birthday!), ENIAC was in the final few months of its life and the 5,000-tube UNIVAC was just 2 years into commercial production. Computers were monstrous beasts with (by today's standards) minimal processing, storage and communications capabilities, yet ironically they were known as 'electronic brains'. Networking was virtually nonexistent, and email wasn't even invented until Tim was 16.

Tim's early fascination with the 'power in arranging ideas in an unconstrained, weblike way' led him to create technologies to support that aim. This was true innovation, not merely coming up with bright ideas, wouldn't-it-be-nice pipe-dreams and theories but putting them into practice and exploring them hands-on. He has remained hands-on ever since, and is the Director of the World Wide Web Consortium.

Tim's vision extends way beyond what we have right now, into the realm of artificial intelligence, machine learning and real-time global collaboration on a massive scale, the 'semantic web' as he calls it. But in the sense of a proud parent watching their progeny make their way in the world, I suspect he is keen to see the Web develop and mature without the shackles of his own mental framework. The free Web ideal is closer to free speech than free beer.

Bottom line: a fascinating insight into modern life.  Highly recommended and a steal at just $13 from Amazon.

Thursday 8 June 2017

Frame the problem to find the solution


Today we're exploring and elaborating on the information risks associated with the wide variety of modern-day workplaces I mentioned yesterday.

The risk-control spectrum diagram is a convenient way to get our thoughts - as well as the risks - in order. It's straightforward to present and discuss the risks along with the corresponding security controls, in a priority sequence that sort of makes sense. 

'Sort of' hints at an underlying issue that I'd like to discuss today. 

Whereas we strive to make the security awareness materials reasonably complete and accurate, we cannot entirely reflect any specific customer organization and its particular business context or needs, not least because we simply don't know what they are.

At the same time, that ambiguity presents an awareness opportunity. It opens the way for customers to consider, discuss, challenge, adapt and extend the generic content. Take for instance our placement of "Working from home" to the left (lower-risk) side of "Office fire/flood". "Working from home" is not actually an information risk - rather it's a commonplace scenario with several associated information risks ... which aren't called out explicitly on the diagram but will be expanded upon in the accompanying notes. Likewise "Office fire/flood" is not intended to be an explicit description of the risk, so much as a prompt or cue for the audience to consider that kind of situation from the information risk and security perspective. How you would describe the risks, and where you would place them on the spectrum (both in absolute terms and relative to others) is down to you ... but the diagram is a good starting point for contemplation and discussion, "close enough for government work" as it were.

There are limits to the generic approach though. Much of the security awareness content doing the rounds on the Interweb is so bland and of such poor quality that the authors' experience and expertise are called into question (to put it politely). A lot of it is myopically concerned with IT systems and data, neglecting the broader aspects such as - well - workplace information security for just one of many related topics. More perniciously, the free security awareness content in almost all those free slideshares and vendor white papers usually tops-out around the middle of the risk spectrum, in other words it only covers low to middling cybersecurity risks and baseline controls, without even acknowledging that there are more significant issues out there and other control options worth considering.

It's something that springs to mind whenever I see those 'top N' lists recommending someone's chosen subset of favorite controls. The well-meaning authors believe they are helping matters with their naive checklist approach. The implicit message is "If only everyone would adopt these N things, we'd all be better off!". Nice idea but unfortunately, experience tells us that hardly anyone has the will, patience and resources to complete the list - and that goes for any generic list presented as good or (worse still) best practice.

Some cybersecurity/awareness pros might argue that there's no point even mentioning the high-risk end of the scale because the corresponding controls are infeasibly expensive and complex, 'only suited to the government and defense industry' but that's their judgment call which pre-empts management. As a consistent approach, it systematically biases the entire security awareness program and in fact information security management as a whole. 

Look at it this way. If we were to offer managers these 3 options, which are they most likely to accept?
  1. Baseline, basic or trivial security controls addressing only the lowest risks, and not very well at that; 
  2. Top-N best practice controls addressing the low and middle-range risks;
  3. Serious controls addressing most of the risks, including the uncommon but potentially disastrous high-end bet-the-farm ones.
Most likely they would prefer options 1 or 2, possibly even option 0 - the do nothing, bury head in sand, la-la-la can't hear you option. Only the bravest and most far-sighted (well OK, risk-averse) managers would seriously consider, let alone choose option 3. However, look what happens instead if we silently drop option 3 from the table: we're left now with options 0 (which may well remain unspoken) plus 1 and 2, and once again it is human nature to go for the middle. By lopping-off the top end, the entire frame of reference has been lowered. The high-end risks and security controls are the elephant in the room, except for the awkward and inconvenient fact that only those people who are already security-aware even sense its ghostly presence.  

Uh-oh. Houston, we have a problem.

Bottom line: it is entirely appropriate to bring up the high-end risks and controls in the security awareness program even if, in practice, they are likely to be discounted. Framing the problem space broadly is necessary to avoid distorting the field and creating bias. If nothing else, it's a matter of integrity.