Tuesday 2 August 2016

Another dubious survey

According to a Vanson Bourne survey conducted for McAfee (now part of Intel Security), specialist "cybersecurity"* professionals are in high demand.

No surprise there.

The report reveals that respondents feel their governments are not doing enough to close the skills gap:
"Respondents in all countries surveyed said cybersecurity education was deficient. Eighty-two percent of respondents report a shortage of cybersecurity skills. More than three out of four (76%) respondents believe their government is not investing enough in cybersecurity talent. "
No surprise there either. 

Apparently the shortage is worse for 'high-value skills' (isn't that simply the result of supply and demand - a shortage of supply increases the price people are willing to pay?) and worse in cybersecurity than in 'other IT professions' (implying that the report's authors consider cybersecurity to be an IT profession):
"High-value skills are in critically short supply, the most scarce being intrusion detection, secure software development, and attack mitigation. These skills are in greater demand than soft skills in communication and collaboration. A majority of respondents (53%) said that the cybersecurity skills shortage is worse than talent deficits in other IT professions." 
Hmmm: on that last point, 53% is barely above 50%, a 3-percentage-point difference that looks to me as if it might fall within the margin of error for this kind of survey. In the same vein, did you spot that comment above about 76% being "more than three out of four"? Unfortunately, the report doesn't state the margin of error, and in fact gives barely enough information about the 'materials and methods' to determine whether the results have any scientific value at all. Tucked away in a sidebar towards the end, the small print reads:
"Intel Security commissioned independent technology market research specialist Vanson Bourne to undertake the research upon which this report is based. A total of 775 IT decision makers who are involved in cybersecurity within their organization were interviewed in May 2016 across the US (200), the UK (100), France (100), Germany (100), Australia (75), Japan (75), Mexico (75) and Israel (50). The respondents were from organizations with at least 500 employees, and came from within both public and private sectors. Interviews were conducted online using a rigorous multi-level screening process to ensure that only suitable candidates had the opportunity to participate."  
OK, so the survey involved a stratified/selected sample of 775 "IT decision makers who are involved in cybersecurity", again indicating a bias towards IT. That Vanson Bourne describes itself as an "independent technology market research specialist", while McAfee/Intel is an IT company, is a further hint.
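For what it's worth, those sample sizes are enough to estimate the margins of error the report declines to state. Here's a minimal sketch (assuming, charitably, a simple random sample in each group, and taking the worst case of a 50/50 split):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion estimated
    from a simple random sample of size n (p=0.5 maximises it)."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes as quoted in the report's small print
samples = {"Overall": 775, "US": 200, "UK": 100, "France": 100,
           "Germany": 100, "Australia": 75, "Japan": 75,
           "Mexico": 75, "Israel": 50}

for group, n in samples.items():
    print(f"{group:>9}: n = {n:<3}  +/- {margin_of_error(n):.1%}")
```

That works out at roughly ±3.5 percentage points overall - comfortably big enough to swallow the 3-point gap between 53% and 50% - and anywhere from ±7 to ±14 points for the individual countries. And since this was a screened, selected panel rather than a true random sample, the real uncertainty is presumably worse still.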

Aside from the bald assertion, we are told nothing more about that "rigorous multi-level screening process to ensure that only suitable candidates had the opportunity to participate". On what basis were candidates deemed "suitable" or "unsuitable"? Who decided? At what point was this determination made: before they were surveyed, during the process, or afterwards (perhaps according to their responses to some qualification questions)? I can only guess what a "rigorous multi-level screening process" might be: possibly just a few simple filters (e.g. country, job title and organization size) on Vanson Bourne's database of tame respondents (which, if true, suggests yet another source of potentially significant bias: this was not a random sample).
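If my guess is anywhere near the mark, the whole "rigorous multi-level screening process" might boil down to something like this (a purely hypothetical sketch: the field names, job titles and panel records are my inventions, not Vanson Bourne's - only the 500-employee minimum comes from the report):

```python
# Hypothetical: "multi-level screening" as a few chained filters
# over a pre-existing panel database. Only the 500-employee
# threshold is stated in the report; everything else is guesswork.
SURVEYED_COUNTRIES = {"US", "UK", "France", "Germany",
                      "Australia", "Japan", "Mexico", "Israel"}
QUALIFYING_TITLES = {"CISO", "CIO", "IT Director", "Security Manager"}

def screen(respondent):
    return (respondent["country"] in SURVEYED_COUNTRIES
            and respondent["job_title"] in QUALIFYING_TITLES
            and respondent["org_size"] >= 500)

panel = [
    {"country": "UK", "job_title": "CISO", "org_size": 1200},
    {"country": "UK", "job_title": "Developer", "org_size": 1200},
    {"country": "Canada", "job_title": "CIO", "org_size": 5000},
]
qualified = [r for r in panel if screen(r)]  # only the first record passes
```

Rigorous? Multi-level, I suppose, in the sense that there is more than one filter.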

I have to ask: why did respondents respond? What incentives were offered? Yep, another possible bias, especially if they were required to answer certain questions in a certain way to qualify for the incentives. 


We are also told next to nothing about the survey method, other than that it was "online" (implying a web-based survey). In particular, we aren't told how the questions were framed and phrased, nor even how the online question-and-response process was designed. I guess it was probably a simple multiple-choice survey in which respondents were required to select a single option from the handful of choices on offer: such surveys are quick, easy and cheap to construct, perform and analyse ... but there are all sorts of potential sources of bias in there. For starters, the title of the survey immediately sets a frame of reference for potential respondents. I would be surprised if the survey was not introduced to potential respondents as something along the lines of "cybersecurity skills survey", perhaps even "cybersecurity skills shortage survey" or possibly "Hacking the Skills Shortage: A study of the international shortage in cybersecurity skills" (the title of the issued report).

Secondly, the specific wording of the question stems and answers matters, as do the number of options offered and whether respondents could select multiple or zero answers, indicate a preference for certain answers over others, or write in their own preferred answers. Consider the obvious difference between, for example, "Do you consider cybersecurity education to be deficient?" and "Do you consider cybersecurity education to be sufficient?". While they amount to the same thing, there are distinctly different implications in each case. There is no end of possibilities for phrasing survey questions and answers, many far more subtle than my example. Even the specific order and number of both questions and answers can affect the outcome.
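To make those design choices concrete, here is a hypothetical sketch of how such a question might be configured in a web survey - the stem, options and settings are all my inventions, since the actual instrument was never published:

```python
import random

# Hypothetical survey-question configuration illustrating the
# design choices discussed above; nothing here is from the actual
# Vanson Bourne instrument.
question = {
    "stem": "Do you consider cybersecurity education to be deficient?",
    "options": ["Strongly agree", "Agree", "Disagree",
                "Strongly disagree"],
    "allow_multiple": False,   # forced single choice
    "allow_write_in": False,   # no 'other' free-text answer
    "allow_skip": False,       # zero answers not permitted
    "randomise_order": True,   # mitigates option-order effects
}

def present(q):
    """Return the options in the order a given respondent sees them."""
    opts = list(q["options"])
    if q["randomise_order"]:
        random.shuffle(opts)   # a different order per respondent
    return opts
```

Notice that even this toy example bakes in the "deficient" framing, offers no neutral option, and forbids skipping - three silent decisions that shape the headline numbers.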

And then there are the questions that may have been asked and responded to, but whose data were later discarded for some more or less legitimate reason. The authors could easily have come clean on that: "The survey asked the following 25 questions ..." would have made a worthwhile annex to the report, along with the rationale for disregarding any of them (e.g. legitimate concerns about the construction of the questions, ambiguity in the wording, and so on).

Oh yes, then there are the statistics - the analysis that generated the reported results, and the raw data that were analysed. Aside from chucking in the odd term such as median, the report gives little indication of any statistical analysis. The more cynical of us may see that as a plus-point, but from a scientific perspective, sound statistical analysis can add value by drawing out the information and meaning lurking in any data set - such as whether 53% is or is not a statistically significant difference from 50% in the example I quoted earlier.
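Since the report won't do the sums, we can: a one-sample test of that 53% against an even 50/50 split, again under the charitable assumption of a simple random sample of all 775 respondents, looks like this:

```python
import math

n, p_hat, p0 = 775, 0.53, 0.50      # figures quoted in the report
se = math.sqrt(p0 * (1 - p0) / n)   # standard error under H0: p = 0.5
z = (p_hat - p0) / se               # z-statistic, approx. 1.67
# Two-sided p-value from the standard normal distribution
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")  # z = 1.67, p = 0.095
```

With p ≈ 0.095 the difference falls short of the conventional 5% significance level, so on the published numbers "a majority" may well be statistical noise - exactly the suspicion I raised above.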

OK, enough already. The take-home lesson from this survey, as with so many other marketing-led efforts of this nature, is that the report needs to be read and interpreted carefully, and largely discounted due to the inherent bias and uncertainty. I am repeatedly disappointed that such supposedly professional survey organizations seldom make much of an effort to explain their methods or convince us that the results are valid, beyond chucking in a few vague indications of sample size. It's an integrity issue and, yes, I realise that he who pays the piper calls the tune, so as far as I'm concerned both Vanson Bourne and McAfee/Intel Security join companies such as Ponemon on the 'dubious value' pile, at least for now. They can always change their ways with the next survey report ... but I'm not holding my breath.


* They never do explain exactly what they mean by "cybersecurity". Presumably the respondents each interpreted it in their own way too.
