Epsilon and ISO27k
A report by Jeanette Fitzgerald, Epsilon Data Management's General Counsel, to the U.S. House of Representatives' Committee on Commerce, Manufacturing, and Trade outlines the sequence of events involved in the Epsilon data breach on March 30th that compromised names and email addresses on the mailing lists of about 50 Epsilon clients.
Epsilon's business is to provide the infrastructure enabling massive email marketing campaigns for its clients. While that may sound to some rather like legitimized spamming, Epsilon refers to it as "permission-based marketing" since recipients supposedly opt in to the campaigns (albeit perhaps by failing to deselect the relevant option hidden deep in some marketing materials or during an inquiry or sales transaction) and have the ability to opt out later. The hackers and scammers now in possession of the stolen personal information are unlikely to respect opt-ins or opt-outs, however. There have been gloomy predictions of spear phishing attacks over the coming weeks and months, perhaps using the branding of the 50 client companies - or indeed of Epsilon itself - to ensnare potentially vulnerable customers on the client mailing lists.
I find it interesting that the ISO27k standards featured heavily in their report. Epsilon's management, clearly under pressure to account for the security breach, must feel that their adoption of ISO27k demonstrates sound security and information governance. According to the report, Epsilon's Information Security Management System has been certified compliant with ISO/IEC 27001 for about 5 years, and they have implemented the generally-accepted good security practices recommended by ISO/IEC 27002, the code of practice standard.
This raises the obvious question "How come the good security practices promoted by the ISO27k standards didn't prevent the breach?" ... from which, in turn, some might infer that ISO27k is worthless.
A similar issue cropped up this week on CISSPForum, an email reflector for CISSPs and other information security professionals. In the context of an ongoing discussion about security awareness, a colleague told us:
At a conference the speaker made the statement "If awareness was going to work, it would have worked by now."
... the implication being clearly that awareness is so broken that it's just not worth doing.
There's a logical fallacy in both cases. The controls may not have been perfect, but without ISO27k and without security awareness (which happens to be one of the ISO27k-recommended controls), the Epsilon incident might have been far worse.
After the fact, there is actually some evidence of the value of both the ISO27k security controls and the management system. That Epsilon responded so rapidly to the incident, notifying their clients in short order and liaising with the authorities, forensics experts and others indicates that their security incident response and management activities, at least, worked smoothly and efficiently. Senior management was engaged, and must have been sufficiently aware of the significance of the incident to react appropriately. It was phrased thus in the report:
"In identifying the recent attack on Epsilon’s systems, the company’s security program detected unauthorized download activity and invoked Epsilon’s security incident response program. This led to an immediate move to investigate and remediate the unauthorized entry and to put in place additional safeguards based on the company’s findings."
Further details about the incident response were provided in the report, albeit in summary. This does not read to me like the typical uncoordinated/panic reactions that we sometimes see elsewhere, although to be fair this is a formal, public report to a committee. The internal incident investigation findings might have told a different story!
The 'if it was going to work, it would have worked by now' statement [I refuse to legitimize it by calling it an argument] could apply to many different things, such as information security as a whole, or anti-corruption laws, or CFC bans, or restrictions on whaling. The fact is that, in each case, we can't tell for certain what would have happened if we had not acted. However, before we did whatever it was, we presumably weighed up our options and thought it appropriate to go ahead. Afterwards, there may be some evidence to suggest that we did the right thing but it tends to be anecdotal or circumstantial, and so remains open to the challenge that it would probably have happened anyway. Short of conducting scientific trials under controlled conditions, the factual evidence is bound to be limited and disputable. Such is the nature of risk management.