A fraudulent fraud report?



Our awareness module on digital forensics is coming along nicely. Today, in the course of researching forensics practices within organizations, I came across an interesting report from the Association of Certified Fraud Examiners. As is my wont, I started out by evaluating the validity of the survey on which it is based, and found this:
"The 2018 Report to the Nations is based on the results of the 2017 Global Fraud Survey, an online survey opened to 41,573 Certified Fraud Examiners (CFEs) from July 2017 to October 2017. As part of the survey, respondents were asked to provide a narrative description of the single largest fraud case they had investigated since January 2016. Additionally, after completing the survey the first time, respondents were provided the option to submit information about a second case that they investigated.
Respondents were then presented with 76 questions to answer regarding the particular details of the fraud case, including information about the perpetrator, the victim organization, and the methods of fraud employed, as well as fraud trends in general. (Respondents were not asked to identify the perpetrator or the victim.) We received 7,232 total responses to the survey, 2,690 of which were usable for purposes of this report. The data contained herein is based solely on the information provided in these 2,690 survey responses."
"2018 Report to the Nations", ACFE (2018)
OK, so nearly two thirds of the submitted responses (4,542 of 7,232, roughly 63%) were deemed unusable. That's a far higher rejection rate than I would normally expect for a survey, and the reasons could be good, bad or indifferent:

  • It's good if they were excluded for legitimate reasons such as being patently incomplete, inaccurate, out of scope or late - like spoiled votes in an election; 
  • It's bad (surprising and disappointing) if they were excluded illegitimately, for instance because they failed to support or refute some working hypothesis or prejudice;
  • It's indifferent if they were excluded for purely practical reasons, e.g. the team ran out of time to complete the analysis. Hopefully they used an unbiased sampling technique to trim down the data, though (see the sketch below). Perhaps the unusable responses were simply lost or corrupted for some reason.
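
For that indifferent case, the textbook unbiased way to trim an over-large response set is simple random sampling: every response has an equal probability of being kept, so the trimmed set should stay representative of the whole. Here is a minimal Python sketch of the idea - the function, names and usage below are purely illustrative, since the ACFE hasn't said how, or even whether, it sampled:

```python
import random

def trim_responses(responses, target_size, seed=2018):
    """Unbiased trim: keep a simple random sample of the responses.

    Every response has the same chance of selection, and a fixed
    seed makes the trim reproducible and auditable.
    """
    if len(responses) <= target_size:
        return list(responses)
    rng = random.Random(seed)
    return rng.sample(responses, target_size)

# Illustrative numbers only: trim 7,232 submissions to 2,690.
responses = [f"case-{i:04d}" for i in range(7232)]
usable = trim_responses(responses, 2690)
print(len(usable))  # 2690
```

The point is simply that an unbiased, documented trim is easy to do and just as easy to disclose.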

Unfortunately, the reasons for exclusion aren't stated in the report, which to me is an unnecessary and avoidable flaw. We're reduced to guesswork. That they excluded so many responses could, for instance, indicate that the survey team was unusually cautious, excluding potentially dubious as well as patently dubious submissions. It could be that the survey method was changed for some reason partway through, and the team decided to exclude responses gathered before and/or after the chosen method was in use (raising further questions about what changed and how they chose the method(s)).

The fact that this report comes from the ACFE strongly suggests that both the analytical methods and the team are trustworthy: personal integrity is a fundamental requirement for a professional fraud examiner. Furthermore, they have at least disclosed the number of responses used, and the report provides additional details about the respondents. So, on balance, I'm willing to trust the report: to be clear, I do NOT think it is fraudulent! In fact, with 2,690 responses, the findings carry more weight than most vendor-sponsored "surveys" (advertisements) that I have criticised several times before.

Moving forward, I'm exploring the findings for tidbits relevant to security awareness programs, doing my level best to discount the ridiculous "infographics" they've used in the report - another unnecessary and avoidable source of bias, in my jaundiced opinion. Yes, the way metrics are reported does influence their interpretation and hence value. And no, I don't think it's necessary to resort to gaudy crayons to put key points across. Some of us aren't scared by lists, tables and graphs.