Saturday 31 December 2016

Internet security awareness


We've just delivered our first awareness module for 2017 with a few brief hours left until the new year.  

Updating the awareness module on Internet security turned out to be a mammoth task: we've basically rewritten it from scratch, such is the pace of change in this area. We could probably have continued writing for another month, in which time I'm quite sure further issues would have emerged ... so we had to call a halt to the writing in order to hit our self-imposed delivery deadline. We can always come back later for another bite at the cherry and, to be fair, most security awareness topics touch on the Internet in some fashion.

"Fake news" is a recurring theme in the materials, picking up on media reports following the US presidential election. Today, we completed the final piece for the module, the awareness newsletter, drawing on a US CERT - DHS - FBI alert about GRIZZLY STEPPE published yesterday. Two Russian hacking groups used Remote Access Trojans to compromise systems belonging to US political parties and perhaps other targets. Russian interference in the US elections, through circulating propaganda and directly attacking political systems, marks a new phase in cyberwarfare, in effect using information as an offensive military weapon against a superpower. As noted in the newsletter, however, the US, UK and other governments have conducted Internet surveillance for years, and espionage predates the WWW by millennia. Whether you see sinister implications for civil liberties, or a legitimate use of modern technology to fight terrorism and foreign interference in the domestic economy, is just a matter of perspective. The fact remains that securing information on the Internet is an arduous, costly task with serious implications for privacy and commercial confidentiality, as well as politics and the economy.

Against that backdrop, awareness advice to patch systems, beware phishing links, use firewalls and so forth seems trivial but the truth is that from the outside the average corporation looks more like a colander than a shield. Without a decent level of security awareness throughout the organization, the fanciest of high-tech security technologies are worse than useless in the sense that they give a misleading impression of protection. They help, sure, but they are not sufficient.

Talking of technology, I am dismayed that so few organizations make any effort to train their technical staff on information security matters. How exactly are they supposed to pick up this stuff - Vulcan mind-meld perhaps, or some sort of virtual osmosis? If you think employing IT graduates magically lets you off the hook, take a look at the curricula or talk to your IT people about how much information security their courses actually covered. Go on, I dare you, ask them about the lectures on propaganda/fake news and surveillance! There's a good chance some of your IT pros left uni before the Internet was even invented.

Monday 26 December 2016

What worries CEOs?

According to KPMG, the answer is ...


I've picked out just one of a plethora of gaudily-colored horizontal bar charts from KPMG's 2016 Global CEO Outlook, which I encourage you to download and read. The Now or Never report blends insightful analysis with survey data and anecdotal comments. It's well-written, glossy and eminently readable.

The reason I chose to illustrate this blog piece with that particular chart is that it confirms what I already thought about risks of concern - in other words, I openly admit my 'confirmation bias'. The same could be said for most of the rest of the charts, and indeed the report as a whole, which at least partially explains why I'm blabbering on about it here. It was on-topic for me, interesting enough to read, consider ... and then pick holes in.

Even if you disagree with the report's findings, or have little to no interest whatsoever in the subject matter, it's still worth reading in respect of the manner in which the survey was designed, conducted and reported.

The survey methodology - or rather the surveyed population - is briefly outlined thus in the report:
"The survey data published in this report is based on a survey of 1,268 chief executives from Australia, China, France, Germany, India, Italy, Japan, Spain, the UK and the US. The findings from CEOs in 18 additional regions and countries can be found in the Appendix section of this report. Eleven key industries are represented, including automotive, banking, infrastructure, insurance, investment management, life sciences, manufacturing, retail/consumer markets, technology, and energy/utilities and telecom. Two hundred and seventy-five CEOs came from companies with revenues between US$500 million and US$999 million, 595 from companies with revenues from US$1 billion to US$9.9 billion, and 398 from companies with revenues of US$10 billion or more. Eight hundred and ninety-three CEOs came from public companies and 375 from private companies. The survey was conducted during 15 March and 29 April of 2016."  [page 39]
Hmmm, that one paragraph is about all they have to say about the survey, aside from calling out various KPMG people and respondents by name throughout the report (call me a cynic, but I suspect the names were not drawn at random out of a hat!). 1,268 may be a reasonably large sample for this kind of survey but nevertheless it is an almost infinitesimal proportion of all CEOs in the world.

Bearing that in mind, take a fresh look at, say, the colorful bar graph above. We can see what it tells us, but what is missing?

For a start, we aren't told how many respondents answered the particular question. Did all 1,268 of the CEOs surveyed answer, I wonder? More contentiously, did any of them answer it? The actual number of responses to that question probably lies between those extremes, quite likely towards the top end, but we simply don't know from the report.

That issue, in turn, has implications for how we interpret the numbers. Are those differences statistically significant or within the margin of error for this survey question? 24-30% is actually quite a narrow range, even for a fairly large scientifically-designed survey. The 1 or 2% differences between the bars, and hence the priority/sequence of their presentation, could easily be an artifact of the survey. We just don't know.
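As a rough sanity check - a sketch in Python, assuming simple random sampling and that all 1,268 CEOs answered this question, neither of which the report confirms - the 95% margin of error on proportions in that range works out at roughly plus or minus 2.4 percentage points:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1268  # assumes every surveyed CEO answered this particular question
for pct in (24, 27, 30):
    moe = margin_of_error(pct / 100, n)
    print(f"{pct}% -> +/- {moe * 100:.1f} percentage points")
```

On those (generous) assumptions, bars sitting 1 or 2 points apart are comfortably within the noise, which rather supports the suspicion that the ranking is an artifact.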

Furthermore, there is no report appendix listing the actual survey questions and (I guess) the multiple-choice answers permitted. Were respondents asked to choose between a list of those 5 particular risks, or were there others listed that didn't even make it into the report? How were those options selected and worded? What, precisely, is "cyber security risk" for instance? Was there a 'something else' option, allowing them to elaborate on whatever burning issues keep them up at night - and if so, how were those free-text responses handled? Were respondents able to select multiple answers, or required to rank them in some way? Were respondents induced, fooled or coerced into answering in a particular way, for example by dint of the context or preamble to the survey or this question, or by the fact that KPMG was asking? Did the bar chart even arise from 'a question', as such, or was it a synthesis of selected data plus some analyst's fertile imagination? Again, we simply don't know from the report.

On a broader note, there is no explanation of how those 1,268 survey respondents were selected (perhaps KPMG clients, or maybe a pool of Forbes Insight contacts?) nor how the survey was conducted (not even the fundamental approach such as on-line or in-person). Were any respondents de-selected i.e. disqualified outright, or their responses to this hypothetical question discounted for some reason? 

Talking of Forbes Insight (whose pretentious name I only spotted when I re-read the report while blabbering on about it here), this is what I've found out about them so far, a byline from the Forbes website:


That's hardly a detailed or convincing statement as to their competence and suitability for such a survey. They might as well have said, lamely, "We do this kind of stuff". Digging just a bit deeper, I find that Forbes is heavily into public relations and overtly self-promotional cobblers like this:


I suspect KPMG paid Forbes Insight either to conduct the entire survey, or maybe to access their mailing list of 'pre-qualified' (pre-duped, softened-up) potential respondents. Note, yet again, that I'm speculating due to the dearth of information in the report itself. There's a lot more omitted than stated ... which perhaps explains the 8 month delay between completing the survey in April and issuing the report in December. The statistical analysis and report-writing would have taken, I guess, about 1-4 months whereas it takes longer to spin a line. 

So much for the forward-looking 2017-2020 focus of the report: the CEOs were surveyed months ago, prior to Brexit (June) and Trump (November). Hey, KPMG, haven't you noticed? Big Things have happened this year.

Oh and another thing. Take a closer look at the survey sample. Why did they only apparently survey respondents from "Australia, China, France, Germany, India, Italy, Japan, Spain, the UK and the US"? What about the other 200-odd countries of the world? And how come the report discusses locations such as Korea, Ireland and East Africa if there were not (according to the report) any respondents from those areas?  Odd, that, very odd. [Cue faint whiff of rodent].

Clearly, I am a cynic. Take my inane comments with a pinch of salt. Argue with or berate me, if you will. But please at least think about what you are reading, both here on the blog and in surveys of the kind I've critiqued. FWIW this is one of the better ones.

Friday 23 December 2016

Tenable Global Cybersecurity Assurance Report Card critique

The "2017 Global Cybersecurity Assurance Report Card" reports on a web-based survey of 700 IT security people from mid to large organizations. The survey was sponsored by Tenable and conducted by CyberEdge.  It makes a reasonable job of measuring concerns and opinions within the narrow constraints of the specific questions posed. For example here's one of the survey questions:


The 5 categories of response (1-5) and the 7 listed items (a-g) constitute a Likert scale, a commonplace opinion survey approach with both strengths (primarily low cost) and weaknesses. In respect of just that question, methodological concerns include:
  • The question stem is a little ambiguous, referring to 5 being "highest" when it presumably means "of greatest concern" or whatever.
  • The 1-5 values are ordinals and do not indicate absolute value or relative proportions. A value of 4 does not literally mean "twice as concerned" as a value of 2.  A value of 1 is not "one fifth of the concern" indicated by a 5.
  • The use of just a few discrete categories prevents respondents indicating intermediate or marginal responses (e.g. "Right at the top edge" of any category is indistinguishable from "Right at the lower edge" of the same category).
  • There is a natural bias against selecting the 'extremes' (or rather, the  1 and 5 categories), on top of which the center category is often chosen when the respondent has no opinion or knowledge, especially if the survey is configured to demand a response to every item. This bias leads to a statistically-significant preponderance of the middle category.
  • The question concerns "challenges facing IT security professionals" i.e. not the respondents' own challenges, nor those facing their organizations, but their opinions about an ill-defined/unspecified generic group. Who are these "IT security professionals" anyway? I suspect each survey respondent has their own interpretation.
  • The specific a-g sequence of Likert items can influence the result of this kind of question. The sequence could simply have been randomized for each respondent to eliminate the bias, but the methodology statement does not say so. 
  • Calling the items "challenges" immediately frames the problem space in a particular way: these are problems, things of concern, difficulties, troublesome areas. With slight re-wording, the question could have referred to "interests" or "investments", or even "opportunities [for improvement]" which would probably have influenced the result.
  • The Likert items - the particular "challenges" listed - narrowly constrain the responses, and they too are ambiguously worded. There appears to be no way for survey respondents to identify other "challenges", to question the meaning of the items listed, or to provide explanatory comments amplifying or qualifying their responses. Item e, for instance, begs questions about the precise meaning of every word: what is 'low' (e.g. is that below some other thing, low on some sort of measurement scale, or lower now than at a previous or later time?), 'security' (IT security, information security, physical security, safety, protection, national security, or something else?), 'awareness' (a general appreciation, alertness, the motivation to behave differently, completion of a training course, self-study, or what?), and yes even 'employees' (meaning everyone on the payroll or just staff or IT users maybe?).
  • The methodology was (partially) explained, and the survey questions were included in the report along with (some) other basic parameters ... but there is virtually no detail on how the respondents engaged with the study, aside from it being a web survey. Was it on the Tenable website, maybe? Did Tenable send out emails with links to their mailing list, their customers, or some other pre-selected audience? Why did respondents participate? Were they expecting to get something in return? If they self-selected, was it because they have an unusual level of concern about the subject matter, or because they are outspoken critics, or for some other reason?

The report states the results of that question thus:


Although it is not actually stated in the report, I presume the numbers are means of the values selected by respondents, which immediately raises statistical concerns since (as stated above) the values are ordinal numbers. Are means even valid in this case? (I'm no statistician, but I suspect not).
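To illustrate why means of ordinal codes are shaky - a toy example with made-up responses, not Tenable's data - the 1-5 labels carry order but not magnitude, so any order-preserving relabelling of the categories is equally defensible, yet it shifts the mean while leaving the median untouched:

```python
from statistics import mean, median

responses = [2, 4, 4, 5, 3, 4, 2, 5, 4, 3]   # hypothetical 1-5 Likert answers

# Stretch the top category: 5 -> 10. The ordering of the categories is
# unchanged, so as ordinal data this coding is just as "valid" as 1-5.
stretched = [10 if r == 5 else r for r in responses]

print(mean(responses), median(responses))    # 3.6 4.0
print(mean(stretched), median(stretched))    # 4.6 4.0 - the mean moved, the median didn't
```

That arbitrariness is exactly why statisticians prefer medians or frequency distributions for Likert-style data.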

Notice that all the reported values are above 3.3, indicating an overall preponderance of 4's and 5's - in other words, respondents are generally concerned about the 'challenges' listed. That is not unexpected since respondents are described as "IT security professionals" who are generally risk-averse by nature, but nevertheless it is a systematic bias in the method.

Notice also that the scale on figure 10 ranges from 3.00 to 3.90, not 1 to 5. Without further information, we have no way of determining whether the differences between the items are significant. There are no confidence limits or other statistics concerning the range of responses to each item. We aren't even told how many responses there were to each item in the question. Giving two decimal places implies a degree of precision that I suspect may be misleading, but it does rather conveniently allow them to rank the "challenges" in numerical order with no ties.

Oh and notice the legend to figure 10: it and the following text refer glibly to "top challenges" whereas in fact the challenges were chosen by the surveyors, not by respondents. Speaking personally, my "top challenges" are not even listed as options. I am barely even interested in the items listed, except for security awareness of course, and in that I willingly admit extreme bias since I make my living from addressing that "challenge" (except I consider it an opportunity!!).


PS  Despite several mentions of "2017" in the report, I believe we are still in 2016. Or have I been in a coma all year?

Wednesday 21 December 2016

You know you're a geek if ...

... your favourite hot drink is URL grey tea

... Chewbacca, Ytterbium and Hal are the names of just three of your many servers

... your Myers Briggs Personality Type starts with INT - as in integer

... you know what a Myers Briggs Personality Type is, even without Googling it

... assorted acquaintances and relatives genuinely expect you to be fully expert in whatever quaint technologies are currently giving them problems

... you've had the same PC for well over a decade, with 3 different cases, 4 different PSUs, 5 different motherboards, 6 different CPUs and at least a dozen new hard drives, now all solid state (naturally)

... you welcome the moniker 'geek', whereas other lesser beings may think it pejorative

... you play IT Top Trumps comparing processor architectures and memory bandwidth

... you spot technical errors in TV portrayals of geeks

... you care about stuff that others don't even notice

... friends and family ask you to explain your Christmas wish-list in words of one syllable

... you are so well-stocked with batteries that you could open a battery shop

... you read (or indeed write) blog pieces titled "You know you're a geek if ..."

Merry Christmas all.  Have fun playing with your new toys.

ISO/IEC 27004 revised and much improved

A substantially improved version of the security metrics standard ISO/IEC 27004 has just been published.

The standard covers "Information security management ― Monitoring, measurement, analysis and evaluation", a direct reference to clause 9.1 of ISO/IEC 27001 ... in other words, it is primarily about the metrics needed to make management decisions about, and systematically improve, an ISO27k-style Information Security Management System.

These are the main sections:
  1. Rationale - explains the value of measuring stuff e.g. to increase accountability and performance;
  2. Characteristics - what to measure, monitor, analyze and evaluate, when to do it, and who should do it;
  3. Types of measures - performance (efficiency) and effectiveness measures;
  4. Processes - how to develop, implement and use metrics.
Annex B catalogs 35 metrics examples using a typical metrics definition form. These are not exactly shining demonstrations of the art, in fact some of the examples are of poor quality. I'm sure we can come up with a better set of example metrics, and in fact I plan to do so over the coming months, free time permitting. I have in mind documenting a suite of metrics relating to the whole of 27001, including both the management system aspects in the main body of the standard and the information security controls listed in Annex A. Watch this space.

I am pleased, relieved in fact, that the 2009 version of this standard is now consigned to history. It was an academic piece, full of theory and an obsessive focus on the calculation part of measurement - strange, really, given that it is such a simple and inconsequential part of metrics (essentially just 'collect data, run statistical analysis, generate result') compared to the far more important issues of what to measure and why. I honestly feel that its publication retarded rather than advanced the field of security metrics. The new release is much more pragmatic and helpful for those designing, implementing, using and improving ISO27k ISMSs. I commend it to the house.

The new standard is available to purchase from ISO, from ANSI, and no doubt from other official sales outlets too. It costs about $200 (don't shoot the messenger: I wish all the ISO27k standards were available free of charge in order to encourage widespread global adoption and improve the general state of information security but it's ISO that sets the price, not me).

Monday 19 December 2016

Online infosec dictionary

ComplianceDictionary.com is an online dictionary of terms defined in various standards, laws, regulations etc., maintained by UCF, the Unified Compliance Framework.

I have a lot of respect for the UCF and have blogged about them before. They systematically collate and analyze a wide variety of laws, regulations and standards, helping clients identify the areas of commonality that equate to both savings and good practice. If a given security control satisfies numerous compliance obligations or expectations, it makes business sense to implement it properly, once. It may even qualify as a critical control.

Just in case you are wondering, I have no financial interest in UCF and don't earn any commission from them. I do however admit to being envious of the idea underpinning UCF!

The Compliance Dictionary is essentially a search engine that spews out both informal and formally-defined explanations for information security-related terms. The first term I entered to check it out gave a disappointing but not altogether surprising result: my search on "accountable" led to the following:


That's one informal/uncited reference to a generic definition ("The expectations or requirement to justify actions or decisions") with links from the 'Relationships' diagram to further entries including definitions of "accountability":

Notice that only the last definition has a cited source ... but (at least as far as I'm concerned) 'accountability' is a fundamental concept underpinning information security. It makes the difference between someone simply saying that information is a valuable asset worth protecting, and adequately protecting it in practice in order to avoid being held to account for incidents.

'Responsibility' is another fundamental concept, one that is also formally undefined according to the Compliance Dictionary. 

More surprisingly still, Compliance Dictionary identifies no formal definitions of 'control' ... but I know of at least one definition within the compliance documents that UCF claims to track, namely ISO/IEC 27000:


So, based on this small sample, the Compliance Dictionary is a nice idea that fails in practice. I'm disappointed but not surprised.

Tuesday 13 December 2016

IT security spend as a % of IT budget


According to an article in The Register, Gartner has pointed out that 'proportion of IT budget spent on IT security' is not a good metric.

One can determine any metric's strengths and weaknesses systematically and objectively using the PRAGMATIC method, so here goes:
  • Predictiveness: at a superficial level of analysis, the budget obviously affects the amount that can be spent on, or invested in, anything, hence there is bound to be some relationship between the money spent and the amount achieved ... but that is not a direct, linear relationship (in practice a somewhat vague correlation I suspect). Organizations with tight budget constraints have to spend more carefully, and naturally focus their efforts on optimizing the value they obtain. Furthermore, many would acknowledge the preponderance of snake oil salesmen in the IT security field, hence spending more might even, in some cases, be counterproductive. Score: 50%

  • Relevance: the metric may be relevant to IT security, and to financial management for the organization, but is that enough to score well on this criterion? What of its relevance to information risk and information security, compliance, governance and business continuity? Score: 75%

  • Actionability: at first glance, increasing or decreasing the proportion of the IT budget allocated to IT security would be the obvious response to low or high values of this metric. However, that's not how budgets are normally determined. Conceivably the metric might be one of the factors taken into account in the budget proposal. More likely, management would expect to see a reasoned, rational business case to spend money, not something as crude as a proportion of spend (even if it was presented as a benchmarking comparison relative to other similar organizations - assuming that could be done). Score: 55%

  • Genuineness: does the numeric value of this metric genuinely and straightforwardly reflect the object of measurement? Could it be manipulated, perhaps by someone with a hidden agenda? The metric is generated very simply by dividing one number by another, so there's not much leeway for manipulation ... but can those two numbers - the base data - be trusted? I'm not so sure (more notes below). Furthermore, issues with the true meaning of the metric (the next PRAGMATIC criterion) may be explained away by creative interpretation when presenting the metric, especially if the audience is unfamiliar with broader, more mature concepts such as information risk. Score: 65%

  • Meaning: the meaning of any metric depends on the intended audience. It's a matter of perspective. So who is or are the prime audiences for this metric? For financially- and/or IT-aware managers, the metric seems self-evident. To other general managers, it may also appear meaningful, at face value, but to anyone who digs a little deeper, and most likely to the CISO, ISM or other experts in risk, security, governance, compliance etc. (including, reportedly, the analysts responsible for Gartner's report), the metric is distinctly misleading. There is more to managing information risk than IT security, and anyway the amount of investment in IT security is not necessarily reflected in the results. It is depressingly easy to come up with examples of IT security investments that have not paid off, including some that have failed spectacularly: consider Target, Sony and others for instance. Score: 30%

  • Accuracy: Gartner acknowledged that the metric varies widely between organizations, in the range 1-13%. Does IT security status even vary by such an amount between organizations, let alone vary in accordance with the value of this metric? Possibly, yes, but personally I doubt it. There are other significant concerns over the accuracy (see below). Score: 10%

  • Timeliness: it takes hardly a moment to calculate this metric provided the base data are available - simply divide the two figures. Obtaining the base figures is straightforward too, assuming the organization captures or reports them as part of the budgetary/financial management. Score: 95%

  • Independence: could someone (such as an auditor or manager) validate the metric? Yes, checking the calculation is trivial but there is some question about the base data, and how they are determined (see below). Score: 65%

  • Cost-effectiveness: although the cost of generating this metric is negligible, its value is not strong. Consequently, and especially if compared to many other similar metrics, this one does not generate much net value for the business. Score: 30%

  • Overall PRAGMATIC score = 53% (a simple arithmetic mean of the individual scores)
By all means challenge the thinking, adjust the individual scores and even weight the individual criteria if you feel so inclined. According to my PRAGMATIC analysis, inaccuracy and low net value are the most significant issues with this metric, along with its potential to mislead naive recipients. There are questions about the base values from which the metric is calculated, and about the relationship between the metric and the organization's IT security status, plus still bigger questions about its relevance to information risk and to the organization's business objectives. The overall score of 53% is near the bottom of the 'acceptable' scoring range (just above the 50% cutoff point), making this metric barely acceptable - hardly worth considering unless there is no better metric (which I am sure there is - 'IT security maturity' is one such shining example). 
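For the record, the overall figure is just the unweighted arithmetic mean of the nine criterion scores, so it takes only a couple of lines of Python to re-run the sum with your own scores (or to add weights, which is your adjustment to make, not part of the analysis above):

```python
# The nine PRAGMATIC criterion scores from the analysis above (percentages)
scores = {
    "Predictiveness": 50, "Relevance": 75, "Actionability": 55,
    "Genuineness": 65, "Meaning": 30, "Accuracy": 10,
    "Timeliness": 95, "Independence": 65, "Cost-effectiveness": 30,
}

overall = sum(scores.values()) / len(scores)
print(f"Overall PRAGMATIC score: {overall:.0f}%")   # 53%
```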

That said, the metric's PRAGMATIC score could be improved if those and other issues were addressed by refining it ... which I leave as an exercise for the keen reader.

Gartner clients can obtain the report (cited as "Identifying the Real Information Security Budget" in The Register but identified as "How to Manage and Defend Your Security Budget" on the Gartner page to which they linked) for free. Others must part with $195 for the pleasure of reading the 10-page report if this blog piece and a moment's quiet reflection were insufficient. As to whether the $195 represents good value for money, and whether it legitimately qualifies as 'IT security expenditure', I leave to your discretion. I'll simply point out that it prompted the journalist to comment on, and then me to scratch beneath the surface of, what turns out to be a commonplace but lame metric.

Friday 2 December 2016

Reflected anger


Friends,

Given my profession, I am of course utterly opposed to spam and dedicated to fighting the scourge, which makes it especially annoying when some noxious spammer uses one of my email addresses as the From: address for their nasty spam.

I usually discover this when assorted email servers send me error messages along the lines of "Sorry we could not deliver your spam".  Those reflected messages are just the tip of the iceberg, though, since I presume many other poor sods received the spam with my email address at the top.  Some of them probably cursed me.

Just in case any of them are reading this, I'd like to confirm that I am most certainly not a spammer.  I share your annoyance but it wasn't my fault!

Thursday 1 December 2016

Lifting the cover on privacy


Privacy, our security awareness topic for December, is a nebulous concept, more complex and involved than you might perhaps have thought, given that it includes concerns such as:

Compliance, obviously enough. Compliance with privacy or data protection laws and regulations was once described by Gartner as 'exceedingly complex', making it a significant challenge, especially for multinational organizations, web-based companies and others with customers, suppliers and business contacts around the world. Workers' noncompliance with corporate privacy policies and procedures is another potential nightmare for management (with an obvious need for awareness - at least it is glaringly obvious to us!), while contractual clauses concerning privacy and/or information security are hopefully not just put there to keep the lawyers occupied. Privacy is a substantial concern with professional services (such as outsourced HR or payroll) and cloud-computing services, particularly where personal data may be stored and processed in arbitrary global data center locations at the whim of the cloud infrastructure and load management systems. As if that's not enough already, laws, regulations, attitudes and practices in this area are constantly in flux. The EU General Data Protection Regulation (GDPR) and US-EU Privacy Shield are blazing hot topics right now, while we may be just moments away from breaking news on yet another massive privacy breach.

Human rights such as Article 8 of the EU Charter of Fundamental Rights:

Article 8
Protection of personal data
  1. Everyone has the right to the protection of personal data concerning him or her.
  2. Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.
  3. Compliance with these rules shall be subject to control by an independent authority.


Personal space and safety with a biological, evolutionary basis in territoriality (e.g. wild animals actively defend their home range to secure a food supply and maintain a safe distance from threats, including others of their own species).

Personal choice includes maintaining control over information about us, especially things that we consider intensely personal and – yes – private. We all want to be able to determine how much or how little we reveal about ourselves or keep secret, plus how we reveal things (which brings up the context) and to whom. The excitement of playing ‘truth or dare’ stems from choosing to disclose private matters among friends, whereas the prospect of being forcibly injected with a ‘truth serum’ is scary.

Trust and ethics: when we disclose our personal information to another person or an organization, we implicitly expect and perhaps explicitly require them to take due care of it and protect our interests. We have little option but to trust them to do so, which raises issues of trustworthiness, assurance and ethics. There are things we’d reveal to our doctor or partner that we’d be extremely reluctant to disclose to others.

Cultural norms such as differing attitudes towards public nudity, shows of affection and sexuality, both between and within various nations, societies and groups.

Last but not least, information risk and security. For example, there is a marked distinction between us willingly offering our personal information to someone, and their stealing or illicitly capturing it, perhaps without our knowledge or consent.

Taking such a broad perspective on the topic lets us focus on the aspects of interest, concern and relevance for each of the main awareness audiences:
  • For the general employee/staff audience, the materials emphasize protecting personal information they may be handling at work. Persuading workers to treat personal data on customers, fellow employees etc. as if it were their own is a simple way to press home the need to take privacy obligations seriously. What to do if a worker spots or is informed about a privacy breach or other incident is another issue;

  • For management, compliance, governance, strategies and policies are clearly relevant - for example the organization's preparedness for GDPR and Privacy Shield is a strategic matter for the business, particularly if the decision is made to seize the opportunity to align privacy with information risk management, compliance, business continuity etc. using an ISO27k Information Security Management System;

  • For professionals and specialists, there are technology and compliance factors in relation to privacy, including practical challenges such as changing IT systems, websites, forms and business processes to bring them into compliance, encrypting portable devices and more.

Monday 14 November 2016

Infosec awareness lessons from NZ quakes


A big earthquake at midnight last night at the northern end of the South Island of New Zealand was a major incident with various implications for incident/disaster management. I'd like to pick up on a few security awareness aspects while the incident is fresh in my mind and still playing out in the NZ media as I write this.
  1. There is a lot of effort put into preparedness for such events, across the whole country. For instance, the central safety message "Drop, cover, hold" is simple, widely repeated and used consistently in a variety of media and situations. Even the iconic images and colours (lots of black-and-yellow, warning colours with a strong biological basis) are consistent. Schools run classroom teaching on it. Websites and public safety demonstrations repeat it, frequently. There are flyers and leaflets, plus local, regional and national exercises to practice the actions, with extensive media coverage. "Get ready, get thru" is a strong theme. Full marks!  [I have a slight concern about tourists and visitors new to NZ: how are they informed? I appreciate the mixed messages in "Welcome to NZ. Learn how to survive your trip ..." but public safety must be a high or top priority, surely?].

  2. The preparedness includes an extensive monitoring infrastructure to identify and analyze quakes in near-real-time. The location and severity of the quakes were known in short order, triggering various warnings and analyses that have continued throughout the day. However, there was no pre-warning: notice the flat line on the seismometer image above, prior to the main event. Also, the geology is complex, so early news was uncertain and confusing. [I'm not sure it helped, in fact, other than to know that the scientists are busy examining the evidence. Some filtering and coordination of the messages would be good.]

  3. The preparedness also includes a variety of disaster communications arrangements, using multiple media and mechanisms both for broadcasting and for person-to-person comms between the authorities, emergency services, geophysical experts, MPs etc. The awareness message "Text don't call" is widely repeated (albeit without really explaining why). The information flowing today through the news media has been impressive in terms of both volume and clarity. As reported by RNZ's Checkpoint, Christchurch Mayor Lianne Dalziel told John Campbell that people most affected by the earthquake want information: "It's an absolute necessity to be completely open with people," she said. [Trustworthy official information about an incident just passed or still in progress, confidently expressed by the People In Charge, helps put minds at rest. Simply knowing that the authorities, agencies, utilities and emergency services are fully engaged in dealing with the situation is very calming, compared to either not being told, or worse still strongly suspecting that the response is inadequate. It's an assurance issue.]

  4. Communications in the immediate area of the quake were not so robust. Failure of landlines and cellphones, coupled with road and rail blockages, made it difficult to establish the situation and coordinate the response. While the telcos are fixing things, portable emergency radios were flown into the area by the military. Meanwhile, some people were unreachable (causing obvious concern for their families and friends) and it was difficult for the emergency services to assess and respond to the situation. [Lessons here concerning the need for emergency shortwave and satellite radios, I think, plus more generator backups for cell sites, and perhaps a tech facility to pass priority messages preferentially (if it isn't already in place). Also, on a personal note, we need to nominate a few contacts that we can inform following an incident so friends and family can confirm we're OK without going frantic.]

  5. The civil defence and emergency services are well planned, coordinated and practiced - e.g. tsunami experts have been meeting every 30 minutes since an hour after the midnight quake, providing a remarkably consistent if cautious series of tsunami warnings. [Excessive caution is a concern: beyond some point, people who feel the warnings 'cry wolf' tend to ignore them, perhaps placing themselves in danger. The frequency and nature of warnings is a delicate balancing act. Some adjustment is called for, I think, although I appreciate that an onshore quake gives little to no time to issue tsunami warnings.]

  6. The preparedness extends to a nation-wide resilience, a cultural aspect. People are genuinely concerned for each other and willing - in fact keen - to help out. The news reporting naturally and genuinely emphasizes the human angles as well as factually describing the situation. Today we've heard from farmers worried about damage to their stock water supplies and milking sheds, civil defence and insurance people talking about what to do now, and MPs talking about their families - a broad spectrum. We are still getting occasional stories about people patiently waiting for their quake-damaged Christchurch properties and services to be repaired, and there is genuine concern about the traumatic effects of the latest quake and aftershocks on survivors of the Christchurch quake in 2011.

  7. The period shortly after the incident, while everybody is still thinking and talking about it, is an opportune time for further awareness messages, intermingling warnings and preparedness messages (such as "A good time to check emergency kits this evening as aftershocks continue to roll on.") with news of the event. [Personally, I think more could be done on this. If your organization suffered a major privacy breach, ransomware attack, hack or whatever, would you be in a position to blend in related awareness messages with your planned incident/disaster comms, or would resources be stretched to breaking point already? If so, could you draft in additional helpers?]

  8. This was not a single point event: aftershocks are continuing (roughly every 3 minutes for the first few hours) and may continue for months or years yet. A small tidal wave of water on a river near Kaikoura this afternoon (released when a blockage cleared) was hot news a few minutes ago. There's also bad weather on the way, placing even more urgency on the emergency responses in the epicenter region since choppers may soon be grounded. [Infosec incidents also drag on and on, especially the big ones that hit the public news media. Managing the incident and the associated comms is therefore an issue well beyond the immediate period and aftermath.]


PS  Even Google is playing its part.  I've just noticed the red message at the top of a query I did to find links for this very blog piece.  Good work Google!




Monday 7 November 2016

Exploiting the privacy-infosec overlaps

We're working hard on the next awareness module concerning privacy, in particular we're exploring the changes coming with GDPR (the EU General Data Protection Regulation).  

Two concepts from Article 25 of the GDPR caught my beady eye this afternoon:
  • Privacy by design is the idea that privacy should be an integral or inherent part of the design of a new system, service or process, from the outset (as opposed to being tacked-on later, with all the compromises and drawbacks that normally entails); and
  • Privacy by default - where there are options or alternative paths, the ones offering the greatest privacy should be selected automatically unless the user or data subject explicitly chooses otherwise.  
It occurs to me that conceptually those are not a million miles from 'secure by design' and 'secure by default', two strategic approaches with substantial benefits for information security as a whole, including most of privacy ... which hints at the intriguing possibility of using the forthcoming GDPR implementation to drive improvements to both privacy and information security.
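To make 'privacy by default' concrete, here is a minimal sketch of what it can look like in practice. The class and field names are hypothetical, invented purely for illustration: every setting defaults to the most privacy-protective option, so any data sharing requires a deliberate opt-in by the user.

```python
from dataclasses import dataclass

# Hypothetical illustration of 'privacy by default': every field defaults
# to the most private option, so sharing only happens on explicit opt-in.
@dataclass
class ProfileSettings:
    profile_visible_to_public: bool = False   # hidden by default
    share_email_with_partners: bool = False   # no sharing unless opted in
    location_tracking_enabled: bool = False   # tracking off by default
    retain_history_days: int = 0              # retain nothing by default

# A new user automatically gets the most private configuration ...
settings = ProfileSettings()
assert settings.profile_visible_to_public is False

# ... and must make an explicit, affirmative choice to relax it.
settings.share_email_with_partners = True
```

The complementary 'privacy by design' idea is harder to show in a few lines, since it concerns the whole development lifecycle rather than any single setting, but the same principle applies: the safe state is the starting state, not an afterthought.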

Several other obligations in GDPR are also directly relevant to information security, such as the requirement for organizations to demonstrate or prove their compliance (implying an assurance element) and to ensure that all employees are aware of the privacy obligations.  In my opinion, privacy, information risk and security, and compliance substantially overlap, as illustrated by the scope blobs above: the overlaps are not complete, but the parts of privacy that do not involve information risk and security (e.g. 'personal space' and a person's right to determine how their personal information is used and disclosed), while important, are relatively minor.

Friday 28 October 2016

Which comes first, the awareness budget ... or awareness?


If your annual cycle matches the calendar year, you’re probably working hard on a 2017 budget proposal to secure the funding for all you need to do on information security, cybersecurity, information risk management, compliance, business continuity and so on - doing it yourself or maybe helping the boss.

Is security awareness and training part of the plan, hopefully not just a single line item but an integral part of virtually everything you are proposing to do?  If not, don't be surprised if, once again, you struggle to make much headway in information security in 2017. Security awareness is not just an optional extra but an essential prerequisite for success ... and the magic starts with senior management having sufficient knowledge, understanding and appreciation of what information security is all about to approve an adequate budget.

With that in mind, do you see the conundrum? Awareness is needed to gain funding for ... awareness and the rest of information security. How is that possible?

Here are five possible routes out of the paradox:
  1. Do nothing - the straw man option. Hope that your budget proposal is so self-evident, so stunningly elegant and convincing that management is right behind you all the way. Good luck with that.

  2. Rely on management's general awareness and appreciation of information risk, security and related matters, since they are business people and these are business issues, right? Errr, maybe, but if you're actually talking about IT security or cybersecurity exclusively, you should not be surprised to find that management believes this to be an IT issue, making it part of the IT budget, meaning that you are hostage to whatever is planned for IT spend. Good luck again squeezing some cash out of the CIO and the IT organization that has its own investment objectives and plans that may or may not truly encompass yours. Worse still, do you see the gaping hole that has opened up? What about all the rest of information risk and security that does NOT fall within IT or cybersecurity - including, yes, you guessed it, full-scope security awareness and training?

  3. Hope that previous awareness activities have achieved their aim, and that management is fully clued-up on this stuff. Perhaps you honestly believe it but, being a cynic, excuse me if I'm more than a little dubious. What makes you so certain that management gets it? Have you already been running an effective awareness program addressing the senior managers making those big budget decisions in strategic business terms that make sense (and if so, how was it funded)? Or will you concede that this is just another cop-out?

  4. Just do it, in other words run the corporate security awareness and training activities on a shoestring, eking out whatever funding you can beg, borrow or steal from other areas. Squeeze something out of IT, a bit more out of HR or training budgets. Code it as "risk management" or "compliance". Do the whole thing on the cheap, and yet overspend (then seek forgiveness). This is a surprisingly common approach in practice but it doesn't take a rocket scientist to spot the flaws and the missed opportunities. 'Just do it' implies a piecemeal, tactical approach with little forward planning or consistency throughout the year. You're unlikely to be able to employ an awareness specialist, but maybe you'll cross-charge an IT project for some awareness activities, perhaps stump up for the odd few hours of someone's time to prepare some materials, prioritizing that over funding someone's attendance at a professional security training course, conference or whatever - or not, as the case may be. 'Just do it' programs give security awareness, and information security, a bad name. We can do better than that, much better.

  5. Quickly plan and deliver security awareness activities specifically targeting senior management - in person - right now. The budget is a classic situation that benefits enormously from leg-work: some quality time spent one-on-one with senior managers, explaining and discussing your proposal over coffee, through email, on the phone or even snatched moments sharing the elevator to the exec suite, patiently listening to their queries and suggestions, and addressing their concerns, will pay off handsomely in due course when the budget proposals are duly considered. Start by thinking and talking seriously about how information security supports the achievement of business objectives. Look carefully at the corporate strategies and policies in this area for the security hooks. Go beyond the obvious compliance imperatives to find business opportunities that revolve around both protecting and exploiting information - BYOD, cloud and IoT security for three very topical if IT-centric examples. Find someone in Finance to explain the budgeting and forecasting process and help you craft an even better budget proposal, with clear objectives and measurable targets (yes, security metrics). Get the CIO, CLO, CRO and other key players on-board, for sure, and preferably others too. Identify the blockers and dig your secret tunnels under them. Build alliances and collaborate to accumulate. Sell sell sell.
By the way, option 5 is what your more politically-savvy 'competitors' in the budget race will be doing too. It's all part of the game, whether you call it awareness or schmoozing or persuading, even social engineering. For bonus points, find out what works for them and emulate the most successful ones. Why is it that Ed from Engineering always gets his way at budget time? What makes Engineering so special? Is it, perhaps, the way Ed puts it across as much as the literal words in the proposal ...?

Oh and keep notes for next year's budget rounds in the hope of making an earlier start and a better impression!


PS  In the unlikely event that you find yourself long on funds and short of time, we can help you spend whatever's left in your 2016 information security/awareness and training budgets to avoid the dreadful shame of handing it back ... with the risk of a corresponding budget cut next year. Seriously, let's talk. 

Saturday 22 October 2016

A little something for the weekend, sir?


The following bullet-points were inspired by another stimulating thread on the ISO27k Forum, this one stemming from a discussion about whether or not people qualify as "information assets", hence ought to be included in the information asset inventory and information risk management activities of an ISO27k ISMS. It's a crude list of people-related information risks:
  • Phishing, spear-phishing and whaling, and other social engineering attacks targeting trusted and privileged insiders;

  • ‘Insider threats’ of all sorts – bad apples on the payroll or at least on the premises, people who exploit information gained at work, and other opportunities, for personal or other reasons to the detriment of the organization;

  • ‘Victims’ – workers who are weak, withdrawn and easily (mis)led or coerced and exploited by other workers or outsiders;

  • Reliance on and loss of key people (especially “knowledge workers”, creatives and lynch-pins such as founders and execs) through various causes (resignation/retirement, accidents, sickness and disease, poaching by competitors, demotivation, redundancy, the sack, whatever);

  • Fraud, misappropriation etc., including malicious collaboration between groups of people (breaking divisions of responsibility);

  • Insufficient creativity, motivation, dynamism and buzz relative to competitors including start-ups (important for online businesses);

  • Excessive stress, fragility and lack of resilience, with people, teams, business units and organizations operating 'on a knife edge', suboptimally and at times irrationally;

  • Misinformation, propaganda etc. used to mislead and manipulate workers into behaving inappropriately, making bad decisions etc.;

  • Conservatism and (unreasonable) resistance to change, including stubbornness, political interference, lack of vision/foresight, unwillingness to learn and improve, and excessive/inappropriate risk-aversion;

  • Conversely, gung-ho attitudes, lack of stability, inability to focus and complete important things, lack of strategic thinking and planning, short-term-ism and excessive risk-taking;

  • Bad/unethical/oppressive/coercive/aggressive/dysfunctional corporate cultures, usually where the tone from the top is off-key;

  • Political players, Machiavellian types with secret agendas who scheme and manipulate systems and people to their personal advantage and engage in turf wars, regardless of the organization as a whole or other people;

  • Incompetence, ignorance, laziness, misguidedness and the like – people not earning their keep, including those who assume false identities, fabricate qualifications and conceal criminality etc., and incompetent managers making bad decisions;

  • Moles, sleepers, plants, industrial spies – people deliberately placed within the organization by an adversary for various nefarious purposes, or insiders ‘turned’ through bribery, coercion, radical idealism or whatever;

  • People whose personal objectives and values do not align with corporate objectives and values, especially if they are diametrically opposed;

  • Workers with 'personal problems' including addictions, debts, mental illness, relationship issues and other interests or pressures besides work;

  • Other ‘outsider threats’ including, these days, the offensive exploitation of social media and social networks to malign, manipulate or blackmail an organization.
It's just a brain-dump really, a creative outpouring with minimal structure. Some of the risks overlap and could probably be combined (e.g. there are several risks associated with the corporate culture) and the wording is a bit cryptic or ambiguous in places. I'm quite sure I've missed some. Maybe one day I will return to update and sort it out. Meanwhile, I'm publishing it here in its rough and ready form to inspire you, dear blog reader, to contemplate your organization's people-related information risks this weekend, and maybe post a comment below with your thoughts.

For the record, I believe it is worthwhile counting workers as information assets and explicitly addressing the associated information risks such as those listed above. You may or may not agree - your choice - but if you don't, that's maybe another people-related risk to add to my list: "Naivete, unawareness, potentially unrealistic or dismissive attitudes and unfounded confidence in the organization's capability to address information risks relating to people"!
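If you do decide to count workers as information assets, one minimal way to sketch the idea is to record people in the asset inventory alongside their associated risks, just like any other asset. The class and field names below are hypothetical, chosen for illustration only, and are not drawn from ISO27k or any other standard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an asset-inventory entry that treats people as
# information assets, each carrying its own list of information risks.
@dataclass
class InformationAsset:
    name: str
    asset_type: str                      # e.g. "person", "database", "document"
    owner: str                           # accountable manager or role
    risks: list = field(default_factory=list)

inventory = [
    InformationAsset(
        name="Lead database architect",
        asset_type="person",
        owner="CIO",
        risks=["loss of key person", "social engineering target"],
    ),
    InformationAsset(
        name="Customer database",
        asset_type="database",
        owner="Head of Sales",
        risks=["privacy breach", "ransomware"],
    ),
]

# People-related risks can then be filtered and reviewed in the same
# risk management process as everything else in the inventory.
people_risks = [r for a in inventory if a.asset_type == "person" for r in a.risks]
```

The point is not the code, of course, but that a single consistent inventory stops people-related risks such as those listed above from falling through the cracks between HR and information security.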

Have a good weekend,