Friday 25 May 2018

GDPR day

Tucked in among the avalanche of 'please confirm your details to continue receiving our marketing tripe' and phishing emails this week came some sad news about the GDPR-related demise of what has been a useful service ...

"Dear AuroraWatch UK subscriber,

It’s with great sadness that we are going to have to close the AuroraWatch UK email alert system with immediate effect.  This doesn’t mean that we’re shutting down AuroraWatch, it’s just that we won’t be sending out any more alerts via email. You will still be able to get alerts via social media platforms including Twitter, Facebook, Telegram and via our smartphone apps (https://aurorawatch.lancs.ac.uk/alerts/).

We know that this will disappoint some users. We’re also very sad, but this is something we’ve been putting off for some time. Operating a reliable mailing list service for 100,000+ individuals requires constant effort and ongoing resources. Up to now, we have been able to undertake this service (for free) amid our research and teaching activities at Lancaster University, but that effort is no longer sustainable. 

Since AuroraWatch UK started almost 20 years ago, long before Facebook and smartphones, technology has moved on. Today AuroraWatch UK has almost 100,000 Twitter followers and almost 200,000 likes on Facebook meaning that email is a relatively small fraction of the AuroraWatch UK alerts issued. Nevertheless, the maintenance of the email infrastructure puts a considerable burden on the team and provides the most headaches, e.g. bouncing emails, delayed alerts, sign-up problems and mis-identified spam. By focussing on social media alerts we will benefit from more robust and efficient infrastructure.

Some of you might be wondering if this is related to the new General Data Protection Regulation (GDPR) that comes into effect from tomorrow (25 May 2018). The honest answer is that GDPR has contributed to the decision.  We’re very proud that AuroraWatch UK has always operated within the spirit of what the new GDPR is trying to achieve - we take privacy and data security very seriously.  However, we face challenges demonstrating when consent was received to store the email addresses of some 20,000 legacy users. Furthermore, the GDPR could result in significant financial penalties in the event of data loss.  The upshot of this is that we will be securely deleting your subscription email address shortly.

If you’re not already signed up to receive alerts via the AuroraWatch UK social media feeds, then you can find out more information on the Alerts page of our website at https://aurorawatch.lancs.ac.uk/alerts/

Once again, we’re sorry that we’ve had to make this decision, but we hope you will still wish to receive our alerts through other channels.

Very best wishes,
The AuroraWatch UK team"

I'm not a Twit and I actively avoid FarceBook. Patently I do blog and I track a bunch of blogs and call in on my favorite Web haunts from time to time, but I despise pop-ups and generally try hard to limit the rate of push-interrupts. The email aurora alerts from Lancaster Uni were a rare example of a useful time-sensitive information service pushed out by email. I'll miss 'em ... but for me this is just an annoyance, hardly a show-stopping business-critical disaster.

It is, though, an example of collateral damage caused by the legislation. I'm sure there are others, other situations where information providers have looked at what they would need to do to comply with GDPR and decided it's simply not worth the effort and expense. I guess some are using this as an opportunity to Spring-clean their mailing lists. A few may have decided to abandon their existing approaches rather than try to sort out the accumulated mess and bring them into compliance, planning to rebuild their contact databases from scratch. Some, like Aurorawatch, may have pulled out for good. Today's GDPR deadline could be the final straw.

It will be interesting to see how this situation evolves over the remainder of 2018. I expect to see a lot of 'rebuilding our contact list' stuff going on, a billion desperate marketers frantically using all the stunts imaginable to net as many new customer prospects (fresh meat) as possible, hopefully but not necessarily in a GDPR-compliant privacy-aware manner. Just watch for the special offers, discount coupons, this-week-only deals and other incentives to convince people that they really do want to be marketed-at. 

As for us, we'll simply continue providing a unique information service that customers find valuable ... and persuading/hammering organizations that don't respect privacy and security ... which means knowing our stuff ...


PS  It seems Instapaper, for one, has failed to beat the deadline so is shutting off access to its service for European residents while it "continues to make changes".

Thursday 24 May 2018

Business Continuity Manager

One of the items in June's awareness module is a model job description for a Business Continuity Manager.

It's generic since our customers are unique and we don't know precisely what any of them might expect from a BCM. We do know, however, the kinds of things that a BCM would typically be expected to do, and the personal qualities that make for an effective BCM. Well, at least we believe so.

Don't forget that we are providing a security awareness and training service. Its purpose is to support customers' security awareness and training programs. So, the job description doesn't have to be perfect: it has to be stimulating, something that some customers might like to use as a starting point to prompt a discussion with management around whether it might perhaps be worth appointing a BCM.  

It matters to our customers but not to us whether the eventual decision is yes or no. We want them to have a fruitful, informed and productive discussion, leading them to make the decision that's right for them, either way. 

For customers who already have a BCM or a similar role (we're not dead-set on that specific job title), we hope the job description might prompt management to review the role, discuss it with the person in-role and other colleagues, and if appropriate make changes to bring theirs closer into line with good practice. For example, if the current role is defined in terms of recovery, how about pumping up the resilience and contingency aspects to complement recovery? If it is myopically focused on IT or compliance, why not broaden the role to support wider business objectives such as the supply chain aspects? If the person performing the role isn't willing, suitable or able to take on the wider brief, might the role be split among several people, whether full or part-timers?

The BCM job description fills just one side of paper, 400 carefully-chosen words saying enough to be a stimulating awareness piece, hopefully, without being so prescriptive that customers feel coerced into our particular way of thinking. Email me for a copy if this has caught your eye. 

Wednesday 23 May 2018

Privacy breach ends in bankruptcy

The demise of Cambridge Analytica hot on the heels of the latest Facebook privacy scandal is, let's say, unsurprising. The firm has served its purpose. Its day is done.

Call me a cynic ("Gary, you're a cynic!") but I'd be amazed if this was anything other than an attempt by the company owners and managers to bury the bad news and move on. Will their continuing and future business activities be any more ethical and appropriate? We shall see.

As long as there are paying customers, businesses will continue making money however they can, as they have always done. Whether you and I consider their activities legal, illegal or in the twilight zone doesn't particularly matter to them. Profit corrupts, obscene profit corrupts obscenely.

Tuesday 22 May 2018

EU vs Spammers

I guess everyone has received a slew of emails this week from companies asking us to opt-in to their newsletters, updates, special offers and other eJunk.

Most have said something along the lines of "If you don't click the link to reconfirm your details by May 25th, you will be unsubscribed", almost identical to a million phishers that we have been patiently training people to avoid for many years now. Hmmm.

Most are going directly to the bin, some as a result of the training but most as a result of people taking the opportunity not to opt-in to being marketed-at. I suspect contact databases around the world are being decimated as a result of GDPR, so we might finally see a drop in the volume of spam once this week is out of the way.

Spam reduction is a very welcome side-effect of GDPR. Previous anti-spam laws have had limited effect. This one, although badged 'privacy', could be the best yet.

Hoorah for 'privacy'!  A round of applause for the EU!

Monday 21 May 2018

Right on cue

I've mentioned already that we'll be using the imminent GDPR implementation deadline as an example of an incident in June's awareness module.

The eruption of Kilauea volcano on Hawaii's Big Island presents another awareness opportunity. To the people and organizations directly involved, it may qualify as a disaster already ... and it's not over yet.

The possibility of a massive explosive eruption cannot be totally discounted. Even the geologists, seismologists and vulcanologists aren't entirely sure what is going on and disagree on what will happen next. Yesterday's news coverage concerned lava flowing across major highways used as evacuation routes. Today it's acidic mists as molten lava hits the Pacific. Tomorrow there will probably be something else.

Dealing with that uncertainty, or risk, is bang on-topic for the awareness module. It's a classic contingency situation.

Some of our customers are also subject to volcanic/geological threats, while others face extreme weather, terrorism, intense commercial competition and more. There are valuable lessons to be gleaned from both GDPR and Kilauea, even for those who are not subject to those or even similar threats. 

So that's my task this afternoon, drawing out the main learning points and illustrating the materials by reference to a couple of specific incidents that everyone (hopefully!) will know something about.

Saturday 19 May 2018

PRAGMATIC security metrics

This week, a newcomer to the ISO27k Forum asked about metrics for vulnerability management:  
"[I] Would like to take your view on metrics from great vulnerability management perspective which may have integration with asset, patch, application and risk management databases.  Can you share [your] experience from security and business metrics based on vulnerability management - security metrics intended for technical management and business metrics for Board?"
The first respondent offered a stack-dump of possible metrics in three groups:
Security Metrics for Technical Management:
  1. Total number of Critical, High, Medium and Low vulnerabilities found on each asset.
  2. Repeated vulnerabilities from the previous assessment.
  3. Total number of false-positive vulnerabilities - essential for evaluating the effectiveness of your vulnerability management solution.
Whenever a technical change or new launch happens, you can report to management, because technical management should be aware of the potential risk.
Business Metrics for the Board:
  1. Top 10, 20 or 25 assets and vulnerabilities.
  2. When new vulnerabilities are identified by your CISO or security team, how long did the team take to find them in your infrastructure, and how long to patch them?
Over a period, you can show trend reports on the effectiveness of your vulnerability management process. This is one of the metrics Board-level management are most interested in: when the ransomware attack occurred, I showed them this metric, which was helpful for higher management. As the security team, you can also show the ROI on the vulnerability management solution: had the solution not been in place, we might have suffered downtime or data loss costing $$$$, which the solution prevented.
Ad-hoc metric:
There is one metric common to both categories: certain vulnerabilities pose a potential risk to the organization, so run a risk management exercise against them and highlight the results to management. For example, legacy platforms (XP, 16-bit applications, end-of-life applications etc.) or insecure protocols used because of business requirements or technical limitations: most of the time, the business will not understand the risk involved. So, feed vulnerability management findings into your risk assessment, show management the residual risk after considering the existing controls, and ask them to treat it. This is a proven way to meet your security needs.
I make that at least seven metrics so far. The second respondent suggested four more:
  1. Potential consequences (including financials) when shit becomes real because of the vulnerabilities that are left open;
  2. Number of open, closed and work-in-progress vulnerabilities;
  3. Time to address high, medium and low vulnerabilities;
  4. Number of vulnerabilities that are assessed by the risk management process.
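For what it's worth, several of those suggestions (open counts, time-to-address by severity) are simple to compute once the raw data are captured. Here's a minimal Python sketch; the record structure and every value in it are made-up purely for illustration:

```python
from datetime import date
from statistics import mean

# Hypothetical vulnerability records: (severity, opened, closed-or-None).
# The structure and dates are illustrative assumptions only.
vulns = [
    ("high",   date(2018, 4, 2),  date(2018, 4, 9)),
    ("high",   date(2018, 4, 20), None),              # still open
    ("medium", date(2018, 3, 15), date(2018, 4, 30)),
    ("low",    date(2018, 2, 1),  date(2018, 5, 1)),
]

def open_count(vulns):
    """Number of vulnerabilities still open (metric 2 above)."""
    return sum(1 for _, _, closed in vulns if closed is None)

def mean_days_to_close(vulns, severity):
    """Mean time-to-address for a given severity (metric 3 above)."""
    days = [(closed - opened).days
            for sev, opened, closed in vulns
            if sev == severity and closed is not None]
    return mean(days) if days else None

print(open_count(vulns))                  # → 1
print(mean_days_to_close(vulns, "high"))  # → 7
```

The hard part, of course, is not the arithmetic but capturing reliable opened/closed dates in the first place.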
I then waded in with yet another: “Proportion of harmful incidents that resulted from novel vulnerabilities” where 'novel' means something like ‘previously unrecognized or unknown and hence not specifically treated’.

I imagine the newcomer was feeling a bit overwhelmed with a dozen metrics on the table already and little in the way of explanation or justification about why he might want to use any of them ... so I decided to "help" by using the PRAGMATIC method to examine the metric I had just proposed:
  • 80% Predictability – strongly indicative of the information security status going forward, since it covers both identifying and resolving vulnerabilities;
  • 80% Relevance – highly relevant to information risk and security management, particularly the information risk identification element of information risk management;
  • 65% Actionable – there may be ways to improve the identification of risks but the metric alone doesn’t indicate how, just ‘room for improvement’;
  • 70% Genuine – there may be some discussion/dispute over whether incidents were both harmful and novel, in which case the criteria might be clarified/specified;
  • 80% Meaningful – not hard to understand, some nuances might usefully be explained (e.g. we can also learn something useful from the proportion or number of incidents resulting from vulnerabilities that were not novel); 
  • 80% Accurate – little doubt over the numbers, especially if they are fully specified;
  • 90% Timely – although the metric uses historic data, those data can be obtained and analyzed rapidly following incidents with little delay;
  • 80% Independent – the metric is based on factual data that are readily obtained and verified;
  • 80% Cost-effective – a relatively cheap-to-generate metric with a lot of business value, I believe.
A simple unweighted mean gives an impressive PRAGMATIC score of 78%, making this a strong candidate for inclusion in the organization's suite of information risk and security metrics.
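For the spreadsheet-averse, the overall score really is just the unweighted mean of the nine criterion ratings, e.g. in Python:

```python
from statistics import mean

# The nine PRAGMATIC criterion scores from the analysis above.
scores = {
    "Predictability": 80, "Relevance": 80, "Actionable": 65,
    "Genuine": 70, "Meaningful": 80, "Accurate": 80,
    "Timely": 90, "Independent": 80, "Cost-effective": 80,
}

# Unweighted mean, rounded to the nearest whole percent.
pragmatic_score = round(mean(scores.values()))
print(pragmatic_score)  # → 78
```

A weighted mean is an obvious refinement if some criteria matter more to your organization than others.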

Don’t worry too much about the scores I have given: they are clearly subjective and make a bunch of assumptions, particularly about the organizational context and maturity in this area. I leave it as 'an exercise for the reader' to score all 12 metrics on a comparable basis - your homework this weekend maybe?

The PRAGMATIC method is a rational, systematic basis for considering, assessing and comparing possible metrics. Aside from helping us decide between the 12, the analytical process generates deeper insight into the measurement objectives, extending the brief request originally posted. 

It's a creative process: you can probably think up variants or derivatives of the 12 metrics, including combinations and perhaps some totally different approaches. 

The method is also useful for refining metrics, both before and after they are implemented. The lowest PRAGMATIC scores are obvious candidates for improvement. How might the suggested metric be modified to increase, say, its Actionability? And would the modified metric score differently on the other PRAGMATIC criteria?

Finally, I'll point out that I'm focusing here on the inherent qualities, the strengths and weaknesses of the metrics. There are lots of possibilities concerning their generation, presentation and use which further complicate the matter. And as if that's not enough already, the security metrics must align with the organization's other metrics, enabling and supporting various governance and management activities. I'm talking about a small part of a complex management system.

Friday 18 May 2018

Contingency prep


I love the Apollo 13 film with Tom Hanks. It is commonly used in management training courses to illustrate team working, particularly the coordination and communications between and among the flight and ground crews. 

Personally, I'm more impressed at the process of managing a serious incident to avert disaster. 

Not only that, it's a compelling story and great entertainment, eminently watchable many times over.

In the film, one of several life-threatening issues facing the crew of the stricken lunar module is the accumulation of carbon dioxide. The bright sparks on the ground quickly cook-up a cunning plan for the astronauts to fabricate a scrubber to remove CO2 from the cabin air supply before they are all asphyxiated.

Among other things such as the cover of a flight manual and a spare filter, the procedure calls for "a roll of gray tape - duct tape". Whoever had the foresight to propose putting duct tape on board, and to approve the proposal despite the substantial cost (just a few dollars per roll of tape, maybe a few hundred dollars for the associated procurement and stowage processes, and no doubt thousands of dollars for every gram of mass launched into space), truly understands contingency. The cunning plan would probably have failed without it. It turned out to be mission-critical.

The black-and-white photo above affirms that NASA really does get it. Notice the gray cross on the fender of the Apollo 17 lunar rover. Yep, that's duct tape helping to secure a makeshift cardboard fender in place to cut the amount of moondust kicked up as the rover roves, reducing the risk of it damaging or settling on the scientific instruments.

Both situations illustrate the value of contingency preparations. 'Making do with whatever is to hand' - especially Number 8 fencing wire - is something Kiwis are brilliant at. Remember the scene in The World's Fastest Indian when Burt Munro rigs up a log as a skid, having lost a wheel from his trailer on the way to the salt flats? If good quality duct tape had been available in the early 1960s, I bet Burt would have had some in his toolbox.

Wednesday 16 May 2018

Preconceptions

A significant challenge we face on a daily basis is to convince people to drop their preconceptions, opening their eyes and ears to new stuff and considering things more broadly.

Here are three illustrative examples:
  1. We are concerned about information risks defined as risks to or involving information in all its forms, not just computer data. Information is the asset we are trying to protect, our prime focus. IT- or cyber-security is clearly a major part of it these days, but there's more besides. There are, have always been, and will always be, shed-loads of incidents involving information that have little if anything to do with computers, networks or technology. 

  2. Information incidents are not limited to the loss of confidentiality. Other aspects such as integrity and availability of information are just as important, sometimes more so. Details of a hospital patient's medication, for instance, should remain private but for obvious reasons must remain reasonably accurate, complete and accessible when needed by the nurses administering the drugs. Compromises are often needed in the security arrangements in order to keep things in balance, meaning that it is important to consider all aspects.

  3. Information security is not purely about locking things (especially IT things) down and preventing inappropriate activities. This point flows from the other two but is worth emphasizing separately, I feel. Not only is it literally impossible to eliminate information risk completely, but that is not a realistic objective anyway. Although we try to avoid or reduce unacceptable risks, some risks are worth taking. This leads to a different perspective on information security as a business-enabler and assurance activity, as much as a risk-reduction, controlling or compliance activity.
Dealing with preconceptions is tricky because they are often innate, unrecognized, deeply entrenched and cultural. People have certain expectations about what security awareness and training is all about, how it should be done, what it should or should not cover, and so forth. Their prejudices and biases can make it tough to get alternative perspectives and points across. In the extreme, they may tune out, completely ignoring or totally rejecting things that don't fit their preconceptions, their world view.

Worse still, we infosec pros are humans too (believe it or not!). We're not immune to preconception, prejudice and bias. Many of us are unnaturally passionate about this stuff. This very rant is more than just a hint! I maintain, though, that being sufficiently self-aware to acknowledge our limitations is an important step towards surmounting them. 

I'm going to leave it there for now, except for this parting thought for the day. The way we express stuff is just as important as what we are communicating. Security awareness and training is an emotional activity. If we fail to engage with our audiences on a personal level, we might as well not even bother. Remember this the next time you are writing a security policy ... or blogging about infosec [yep, do as I say, not as I do!]. 

Tuesday 15 May 2018

Joining the dots

Security awareness and training materials are inevitably aligned in the general sense that they all concern or relate in some way to information security. The materials have a lot in common, building upon the same foundational principles and concepts. 

With our service, consistency is virtually guaranteed since the materials are all conceived, researched and prepared by the same close-knit team. While we enjoy exploring novel approaches, and our own perspective is constantly evolving, we can't help but continue along the same tracks.

Most of the time, relationships between topics are incidental. Every so often, though, we like to point out and use the linkages deliberately as part of the awareness approach. We're delivering a coherent campaign, a planned rolling/continuous program rather than a sequence of discrete, independent and unconnected episodes. 

Grab the crayons and join the dots to reveal the whole glorious technicolor picture.

It occurred to me this morning that by the time June's awareness module is released, GDPR will be live, meaning that most if not all of our customers will be legally obliged to report or disclose privacy breaches within 72 hours.

That's just 3 days in old money [gulp]. Barely enough time for a corporate crisis [cue: panic].

I'm not entirely sure at this point precisely when the breach reporting clock starts counting down the 4,320 minutes, nor when it stops, so I ought to dig out and read the regulation, again, from this month's awareness module. Leaving that issue aside for a moment, those quarter-of-a-million seconds will doubtless fly right by in a flash, hence organizations would be wise to prepare for that eventuality ... which thought feeds directly into June's awareness topic around incidents and disasters. Breach disclosure is a neat example of the value in considering and preparing for incidents, getting ready to respond, ideally practicing and refining the response arrangements in order to beat the regulatory deadline in the most cost-effective and professional manner.
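For the record, the arithmetic behind those figures is trivial:

```python
# The GDPR breach-notification window, expressed three ways.
HOURS = 72
minutes = HOURS * 60        # 4,320 minutes
seconds = minutes * 60      # 259,200 seconds - roughly 'a quarter of a million'

print(minutes, seconds)  # → 4320 259200
```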

So, that's the topic of June's case study decided, plus a relevant example to bring up in the awareness seminars and briefings, and something for customers to check out using the Internal Controls Questionnaire from the module.

The cool part about these links between topics and modules is that they work both ways. We refer forward to future topics with little tasters of things to come without needing to delve right into them. We refer back to prior topics as reminders of what we covered previously. Glancing at our schedule for the rest of this year, I see we will be exploring security frameworks and methods in July, then insider and outsider threats pop up in August and September: we must remember to mention those topics where applicable in the incidents and disasters material for June.

Monday 14 May 2018

Zombie data

Over on the ISO27k Forum recently, someone raised the concern that a cloud services provider may have deleted, and certified deletion of, a customer's data at the primary location but neglected to delete the copy or copies at its Disaster Recovery locations. That could cause problems later if the data turn up unexpectedly, possibly in a different legal jurisdiction such as an overseas DR facility.

That scenario is possible and might be a concern (e.g. for GDPR compliance reasons) so yes it’s an information risk of sorts.

Potential mitigating controls include:
  • Clarifying the requirement for the cloud services provider to delete and certify deletion of ALL data copies including DR, backups, archives, caches and assorted fragments that might be loitering in odd corners of the data centres, IT systems, networks, fire safes and filing cabinets, and reinforcing it with additional checks/audits plus strong penalties and liabilities;
  • Using encryption with a small, tightly-controlled set of extremely strong keys which can be deleted and verified as such, for sure, no questions, using appropriate processes and controls;
  • Some sort of time-bomb arrangement that automatically destroys stored data or those crypto keys after the expiry date;
  • Some sort of remotely-triggerable data bomb that destroys the data or keys when triggered by a reliable mechanism;
  • Insisting that ALL the data remain within a defined boundary or jurisdiction where stronger controls can be both ensured and assured;
  • Improving the provider’s understanding and appreciation of the risk by building a strong working relationship, mutual respect and trust;
  • Improving their trustworthiness further with awareness and training, governance, compliance and assurance measures … such as a ‘mole’ – someone working within the provider but for the customer – or whistleblowers;
  • Insisting that ALL the data remain fully traceable at all times, then systematically deleting them and confirming that – possibly independently or in conjunction with the provider;
  • Planting tell-tale beacons in the data so that, if it ever does turn up unexpectedly, the leak will be noted and an incident flagged for some sort of urgent response;
  • … others? How else might this risk be mitigated? I'm quite sure there are other possible controls.
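The second of those controls, strong encryption with deletable keys (sometimes called crypto-shredding), can be illustrated with a toy sketch. To keep it dependency-free I've used a one-time pad; a real implementation would use a vetted cipher such as AES-GCM with proper key management, and every name and value here is illustrative rather than prescriptive:

```python
import secrets

# Toy sketch of crypto-shredding: if all copies of the data are encrypted,
# destroying the one small, tightly-controlled key renders every copy
# unreadable - including forgotten replicas at DR sites, backups and caches.
message = b"customer record destined for the cloud"
key = secrets.token_bytes(len(message))   # the only secret to guard

# Encrypt before handing the data to the provider; ciphertext copies may
# then be replicated freely by the provider.
ciphertext = bytes(m ^ k for m, k in zip(message, key))

# While the key exists, the data remain recoverable:
assert bytes(c ^ k for c, k in zip(ciphertext, key)) == message

# Securely destroying the key 'deletes' every copy of the ciphertext at
# once, wherever it has ended up - no need to chase down each replica.
key = None
```

The catch, of course, is that you then have to be just as sure the key really is gone from everywhere, which is a much smaller but equally absolute verification problem.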
Personally, unless the risk really was high (i.e. high probability meaning significant threats and vulnerabilities, and high impact – perhaps highly-classified mission-critical data?), I would be tempted to accept it or to share it (through the contract/SLA/agreement with the provider), or better still avoid it in the first place (by not passing such important data to a third-party). Chances are high in such a scenario that there would be many other significant information risks as well, so the relative risk level might not justify such extreme controls. 

In other words, there are probably other things I would be even more concerned about.

Returning to the original issue, in any risk analysis, it is always possible for some bright spark to come up with some bizarre, highly unlikely scenario - the 'little green men from Mars' type of situation, or quantum computing (which can simultaneously check all possible crypto keys), or a total meltdown of all electronic devices (e.g. due to an electromagnetic pulse), or an asteroid impact, or … whatever. 
These are the extreme outliers, the black swans, the things that keep poor old Bruce Schneier awake at night.
They include possible but unlikely combinations and cascades of events – unfortunate coincidences as several things all go wrong 'at the worst possible moment'.
They include control failures, a surprisingly common yet often neglected cause of incidents - a massive blind-spot in the information risk management sphere, I fear. A risk to the profession.
Generally speaking, I would argue the best way to deal with them is through business continuity arrangements, specifically resilience, recovery and especially contingency, since we can’t tell for sure exactly what might occur, hence our response is contingent on what actually happens. Although these extreme events are extremely unlikely, something unexpected might just happen, so plan for that eventuality by preparing to cope as well as possible with the aftermath and minimize the resulting damage, rather than pouring all available resources into avoiding or preventing them. That's a black hole, a bottomless pit.

And here's today's Hinson Tip: extreme risks may require unusual forms of mitigation implying the need for more creative out-of-the-box thinking in your risk workshops etc. 

For some of us risk nerds, such a challenge qualifies as fun!

Sunday 13 May 2018

A new title

June's awareness module covers the related areas of incident management and business continuity management, but "Security awareness and training module on incident management and business continuity management" is decidedly unwieldy, so I've been trying to think of something more apt.

Today I've come up with a new, snappier title: "Incidents and disasters". That covers it nicely, I think - well, the core of it, anyway.

There is always some fuzziness at the scope boundary, and that's by intention since we're weaving the individual subjects together into a tapestry - the bigger picture.

The module's title is quite important because it sets expectations. It is the ultimate precis of the month's materials: if someone sees "Incidents and disasters" on some list, they have a clue about the module's focus. 

So there we are, the entire topic summed up in just 3 words.

I quite like the idea of "Keep calm and carry on" too but it's just a little too obscure, too tongue-in-cheek for most I guess. Makes me smile though.

Saturday 12 May 2018

Plummeting toward the deadline

With less than a fortnight now remaining, are you all set for the GDPR deadline with everything on your privacy projects either completed or well in hand?

If not, now is your last chance to refocus on priorities and squeeze the last ounce of effort from all involved.

The usual approach for many managers and team leaders facing just such a situation is to crack the whip. Maybe you have already done that. Maybe you are being thrashed, and feel obliged to do the same.

Hey, listen. Stop a moment and think. That's not the only way.

Assuming things have been run reasonably effectively to this point, everyone is well aware of the impending deadline. The increasing tension will be plain to all. People will have been slaving away, playing their part and (in most cases) doing their level best to hit the goal ... so piling on the pressure now may be counterproductive. When people are close to their breaking points, there's a chance they'll snap rather than bend, especially if they've learnt that bending gets them nothing but sore backs and yet more grief. The team and team leader need to trust each other and that's achieved by experience, not by demand.

What else would help move things along in the right direction? There are almost always other options, other avenues to try besides whip-cracking. Has it occurred to you to ask the team? Seriously, find out what their main pain points are, and do something positive about them, now, before it's too late. 

A significant part of management's role is to facilitate things, enabling the workers to work and give of their best. This includes reducing or removing barriers, tackling issues and, well, teamworking. OK so the deadline is fixed. What about everything else? Look harder for slack in the system, opportunities to cut corners safely and sprint for the finish. Ask for creative suggestions and explore the options as a team. It's not just about 'sharing the solution': given some slack, people will often surprise us with novel responses.

By the way, once the line is crossed and the crowd cheers, what's in store for your little athletes? Maybe not a medal, but will there be anything at all to thank them for their supreme efforts, and celebrate a job well done? 

Aside from you, who is most anxious right now? Who has the biggest stake in the success (or failure!) of this effort? What are their main concerns? And can you persuade them to help out, if only to turn up at or before the medal ceremony in order to congratulate the team on a job well done?

Thinking still further forward, what does the current situation teach us? Deadlines are a fact of life, hence we have plenty of chances to try different approaches and learn what works best. Aside from that, right now a substantial number of organizations and teams around the globe are plummeting towards May 25th. What can we learn from others' experiences?

Speaking personally, I'll certainly be reading all I can about how organizations, teams and individuals have faced up to the GDPR challenge, both out of my general interest in management and perhaps to pick up new motivational techniques worth including in my toolbox or, for that matter, the ones to avoid like the plague. 

This motivational stuff is highly relevant to making security awareness and training more or less effective - obvious, if you think about it, which hopefully now you are.

Friday 11 May 2018

Mind remapped

Yesterday I was wrestling with different ways to view and structure the topic on Post-It Notes. Today, a breakthrough!


[Click the diagram for a larger version]

We are not totally out of the weeds yet as the diagram is too "busy" for non-specialist audiences, but it won't be hard to simplify.  The incident management aspects need more work too.

The professionals' awareness and training seminar, plus the accompanying briefing, will explain the diagram a section at a time, building up the whole glorious picture slide by slide.

For the management audience, a simpler version will emphasize the governance, strategic, management and business aspects.

For general staff, another simple version will emphasize their perspectives, the things they need to know - once we figure out what they are!

Thursday 10 May 2018

Mapping a troubled mind

Yesterday I said I'd invest some time into reconsidering and simplifying the awareness topic for June - "Incident and business continuity management". Specifically, I said I would have a go at mind-mapping on Post-It Notes.

So I did. I splashed out on 6 Post-Its and set aside 10 precious minutes for quiet contemplation. The first attempt broke down the processes associated with incident management into a conventional sequence - plan, prepare, exercise and refine ... but the sequence doesn't readily extend to cover business continuity, other than somehow 'coping' with incidents that turn out to be massive. And then I thought about focusing on the essentials, and added "Focus" as a reminder about focusing the incident and business continuity management activities on critical business processes. 

That doesn't quite work so let's try another approach. Still thinking about how the organization identifies its critical business processes, this time I came up with a set of basic questions, the kinds of things a worker coming across this awareness topic for the first time might ask themselves.  Why is incident and business continuity management worth addressing? What do the terms even mean? How are they done, when, and by whom?

That's all questions, no answers, so I'm not really getting anywhere here.

OK, onwards, upwards ...

Attempt #3: back to the incident management process again, this time extending it to cover not just fixing the immediate causes of incidents but addressing the underlying issues, thereby improving the organization's resilience and security.

That's better, and leads me to think about process maturity, the organization gradually refining and improving the approach over time.  Maturity is nice because it doesn't matter how good you are today: you can always improve. So, on to the fourth mind map.

Here I'm focused on the management activities, in other words how the organization might go about developing its incident management and business continuity management processes by:
  • Defining the objectives;
  • Clarifying and setting priorities, relative to other business initiatives;
  • Allocating suitable resources;
  • Measuring important stuff to know how well it is going, and to drive it along;
  • Using the metrics to improve, systematically, learning new tricks.
Mind-map #4 might serve for the management awareness stream, I guess, but it's of little relevance and interest to the others.

Fifth attempt: this time I'm thinking about business continuity management. The mind map has just 3 arms so at face value it is simpler than the previous 4 ... but I've added sub-items: the arms are fewer but more complex.

And it doesn't refer to incident management, at least not explicitly. I guess I could add it, particularly in connection with the 'recovery - correct - restore' arm, which covers activities important in most incidents. 

Well OK, #5 has some potential.

Frustratedly reviewing the previous 5 mind maps, I'm not making much headway here. Nothing really stands out clearly at this point - so it's time to try a radically different approach, a creative thinking method called 'reversal' - turning the problem on its head. 

Instead of struggling to find ways to describe what incident and business continuity management are, what are they not? What might be the consequences of not managing incidents and business continuity, of not bothering at all? Cue Post-It #6.

While the organization might essentially ignore or muddle through relatively minor incidents, above a certain point they become serious enough to cause material damage, all the way up to disastrous incidents causing total failure. But 'having faith' hints at an aspect barely mentioned so far: assurance is an important part of this. Organizations should not wait until they suffer serious incidents to discover whether or not they can cope. That's not good practice, nor sound governance. Building confidence in the arrangements, strengthening and maturing them, is definitely worthwhile.

My 10 minutes spent, I stopped brainstorming to scan the Post-Its and write this blog ... which took about 50 minutes more. All in all, that's an hour's slog with not much to show for it.

As my pal Lee used to say, "the floggings will continue until morale improves".

Wednesday 9 May 2018

Security essentials

There's more than a grain of truth in the saying that complexity is the enemy of security. 

Complex systems, processes and situations are harder to analyze and control. There are more things to go wrong, more interactions, more states to consider, more factors to bear in mind. Complex things are generally more fragile, less resilient, more likely to fail or be broken. 

The same applies to security awareness and training. People can only take in so much new stuff at a time.

I've blogged before about today's information overload, people constantly working on interrupt with a million distractions. If we make our awareness stuff too hard, requiring too much time and attention from the audiences, they won't bother so we're not going to achieve much.

Two complementary awareness and training approaches to address this issue are:
  1. Break the awareness and training content into discrete chunks - bite sized pieces from which to construct the whole jigsaw; and

  2. Simplify each chunk as far as possible. Make the pieces tastier, more digestible.
So, what does that mean for our next topic? We have already decided on the chunk, and as I said yesterday, we're well on the way towards defining the scope. At the same time however we're complicating matters by stitching together incident management and business continuity management, so we need to work on simplifying the content.

An approach that usually works well for me is to visualize the topic area as a mind-map, with a central blob for the title and satellite blobs for each of the main aspects, breaking those down further as appropriate and making links between related parts. Sometimes it takes a couple of iterations to get down to the nitty-gritty, just the key aspects in a logical sequence that makes sense, but that's pretty easy with a graphics program or indeed on paper with pencil and eraser. 

Perhaps this month I'll try condensing the topic down to its essentials on a Post-It Note-sized mind map, hopefully without having to resort to a super-fine pencil and magnifying glass. Wish me luck!

Tuesday 8 May 2018

Wheels within wheels

Our awareness topic for June is in the area of incident and business continuity management.

Although the scope is quite indistinct at this point, it will gradually fall into place as the materials come together and first broad themes, then specific awareness messages, emerge during the remainder of May.

There are several aspects of interest and concern, such as:
  • Identifying events and incidents 
  • Reporting them
  • Evaluating them
  • Triggering incident responses
  • Responding appropriately
  • Maintaining critical information services, IT systems etc., supporting critical business processes
  • Recovering/restoring/replacing broken stuff
  • Getting back to normal 
  • Learning and improving for the next time around
So, straight away, the idea of a loop springs to mind: a cyclical, repetitive process that the organization runs routinely with relatively minor events and incidents, practicing and preparing for The Big One. That said, there are probably material differences between handling disastrous showstoppers or coincident events and the usual everyday run-o'-the-mill stuff, which suggests researching and exploring that aspect, perhaps, for the management and professional awareness streams.

I'm fascinated by the concepts of resilience (keeping vital things going despite stuff going wrong) and contingency (coping with the unexpected, making the best of available resources, doing what needs to be done), so I expect they will feature in the awareness materials. Exactly how and where is yet to be determined.

Information risk and security management underpins all of what we do. Aside from the obvious detective and corrective controls, we probably ought to mention risk avoidance, risk sharing and incident prevention too - but only briefly. The cool part about a rolling/continuous approach to awareness is that we have plenty of opportunities to delve into those other areas during the year/s ahead. We can touch on them in June without having to explain and divert attention from the prime focus. Likewise, when they come up for more in-depth treatment, we can casually refer back to incident and business continuity management, reminding audiences about June's module with barely a word. Oh and in June we might tantalize our audiences with the merest glimpses of awareness topics already planned for July through October.

This is real "refresher training" - not just dusting off and trotting out the same old same old. We're teaching adults to think, not training seals to perform. Rote repetition is fine for learning multiplication tables but not for an area as dynamic and complex as ours. Aside from anything else, it is tedious. Boring even.

On that thought, it occurs to me that a privacy breach would be a good example incident to discuss in June, an obvious reference back to May's awareness topic. The idea is to trigger memories, reinforce conceptual linkages, remind people about the fundamentals and help the audiences assemble the bigger picture. Over time, security awareness levels are lifted and then maintained at a higher level, gradually leading to the deeper cultural changes that our customers are seeking.

Risk management failure is yet another possible angle to consider, both in terms of failures to identify and prevent incidents, and failures of the incident and business continuity activities themselves ... but maybe another time. There's already loads to do for June. As I said, the scope of the module is already starting to crystallize and fall into place, so I'm keeping calm and carrying on.

Monday 7 May 2018

[NZ] privacy week

I expect you know already but hey, it's privacy week everyone!  Woo-hoo!  

[Cue rockets and Catherine wheels]

Well OK, it's privacy week in New Zealand.

And a short week at that: 5 days, not 7.

But who am I to knock it?  We've settled and live here.  We pay our dues.  We have both a personal and a proprietary interest in the NZ gummint's privacy and security, and we're doing our level best to ensure that the NZ authorities Get It.  We want the same things.

Don't get me wrong, 5 days of privacy awareness stuff is better than nothing ... but hang on, isn't this the month that GDPR comes into effect?  Isn't this privacy month?  Couldn't the week have at least been moved to coincide with the GDPR deadline, leveraging the global news coverage of privacy matters?

Oh well.

Here's Dilbert's take.

Friday 4 May 2018

Fraud and corruption


I was genuinely surprised to find New Zealand topping the 'corruption perceptions index 2017' from Transparency International. I thought we'd be in the top quartile maybe but didn't expect to lead the field.

New Zealand's 89% score leaves room for improvement but is way above the "average" (the mean score, presumably - or do they mean the median or some other statistic?) of 43%.

The index rates public sector corruption, specifically. According to Transparency International's video promoting the latest findings, high scores are associated with the ability for journalists and activists to speak up about corrupt officials. 

Ah, OK then, so this isn't really about bribery and corruption in general but more specifically about journalism and activism, and repression by the authorities. 

I'm not entirely sure I understand the scale. It is described as a 'scale of 0-100 where 0 equals the highest level of perceived corruption and 100 equals the lowest level of perceived corruption'.  Errr, I'm confused already since the top result in 2017 is patently 89%, not 100%. It gets worse when they say: 
"standardisation is done by subtracting the mean of each source in the baseline year from each country score and then dividing by the standard deviation of that source in the baseline year. This subtraction and division using the baseline year parameters ensures that the CPI scores are comparable year on year since 2012. After this procedure, the standardised scores are transformed to the CPI scale by multiplying with the value of the CPI standard deviation in 2012 (20) and adding the mean of CPI in 2012 (45), so that the data set fits the CPI’s 0-100 scale."
That resembles "think of a number, double it, and take away the number you first thought of" doublespeak, but perhaps it's just the fog in my mathematically challenged brain at 6:30pm on a Friday after a long week, plus 2 rum-n-tonics. The more detailed Technical Methodology Note that ends with "assuming a normal distribution" doesn't clarify so much as reinforce the impression that these are not lies, nor damn lies, but statistics.  
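For what it's worth, the two-step standardisation quoted above can be sketched in a few lines of Python. The 2012 rescaling parameters (standard deviation 20, mean 45) come from the quote; everything else here (the source score, baseline mean and baseline standard deviation) is a made-up illustration, not Transparency International's actual data:

```python
def standardise(score, baseline_mean, baseline_sd):
    """Z-score a source's country score against that source's baseline-year statistics."""
    return (score - baseline_mean) / baseline_sd

def to_cpi_scale(z, cpi_sd_2012=20, cpi_mean_2012=45):
    """Rescale a standardised score onto the 0-100 CPI scale using the 2012 parameters."""
    return z * cpi_sd_2012 + cpi_mean_2012

# Hypothetical example: a source score of 7.8 on a 0-10 survey, where that
# source's baseline-year mean was 5.0 with a standard deviation of 1.3.
z = standardise(7.8, 5.0, 1.3)
cpi = to_cpi_scale(z)  # roughly 88, in the ballpark of NZ's 2017 score
```

Seen this way it's just a z-score rescaled to a familiar range, which at least explains why a chart-topping country can score 89 rather than 100: the scale's endpoints aren't anchored to any real country's result.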

Worse still, it seems the index is not based on raw data obtained each year by a global research team in a scientific and statistically valid manner, but on an analysis of data from other studies. Aside from the lack of clear references to those sources, I'm out of energy at this point to dig any deeper.

So, I am left thinking that the people behind Transparency International mean well, but I have some reservations about their methods and motivations.