Wednesday 31 August 2011

Securing people: the human side of information security

Information security involves far more than just computer security.  It's about protecting information in all its forms against all sorts of risks using whatever security controls are cost-effective.  Technology-based controls such as logins, firewalls and antivirus programs, plus physical controls such as padlocks, are merely parts of the information security space - important parts, maybe, but not sufficient in themselves to secure our information assets. 

This is where the modern approach to information security departs from traditional IT security in particular.  We need to secure not just the computer systems and networks but also the human beings - the people who design, develop, test, implement, use, manage and maintain the systems and networks, plus those who seem to get by perfectly well without IT ...

Information security is very much a human endeavor, which of course makes it an ideal security awareness topic, not least as security cannot be addressed through technology alone. So we have ... a new awareness module on people security ...

To be honest, it's actually the 102nd module: we released an extra module following the London Underground bombings in 2005, and module #101 is our security orientation module.  But please join us in celebrating our centenary anyway!

Monday 29 August 2011

Oh no! Several stormy rainfall!

Phishers are already using the US hurricanes as the pretext:

"... After several stormy rainfall occurred recently, We regret to inform you that a computer failure has affected some of the modules of our systems notament sending wire transfers and credit card payments online.  But our teams have set up a verification process and reactivate your account.  To complete verification, you will be taken through the following stages: 
 1. Input your Personal Information
 2. Input your Account Information
 3. Input your Online Banking Information 
 4. Click on Continue ..."

Anyone gullible enough to believe that 'several stormy rainfall' is enough to knock out a bank's computer systems and require them to 'verify' themselves probably shouldn't have a bank account.   :-)

Wednesday 10 August 2011

Spoon-fed security

I've been reading the recently-issued revised FFIEC guidance to US financial institutions on user authentication and related 'layered' controls, and puzzling over why such guidance is required.  Is it really necessary for the FFIEC to tell banks, for example, to use "enhanced customer education to increase awareness of the fraud risk and effective techniques customers can use to mitigate the risk"?  Is that not stating the bleedin' obvious?  Isn't it clearly in the banks' interest to make their valued customers aware of keylogging Trojans, phishing, 419s, money mules and a zillion other scams?

The financial institutions in which I have worked have all been hot on risk management, and have usually worked at or close to the cutting edge of new security technologies.  My risk, security and fraud colleagues definitely appreciated the risks of failing to identify and authenticate customers, not least for Internet banking systems, while on the whole management "gets" security.  After all, it is of course their core business.  Security is 'what banks do'.

Aside from generally-accepted good security practices and standards, plus industry norms shared informally through industry forums and employee migration, they experience and learn from information security and fraud incidents, in much the same way as they learnt the need for strong bank vaults from traditional stocking-masked bank heists.  For example, banks know that cheap low-resolution CCTV systems give woefully inadequate images, whereas good quality stills - or, better still, clear color video shots from multiple angles - substantially improve the probability of someone recognizing bank robbers caught in the act.  So too do they appreciate that strong forensic evidence makes it much more likely that network hacks can be pinned on the perpetrators.  I won't go into details about the controls, but suffice it to say that practice is good.

In Europe and Australasia, in my experience, the banking regulations are primarily concerned with corporate governance, accounting practices and systemic risk - areas in which banks' commercial interests might conceivably conflict with the wider interests of customers, tax authorities, shareholders and society.  There are of course laws and regulations about privacy, but compliance is a relatively minor burden for banks given their pervasive security culture.  The laws and regulations mandate privacy 101 for the witless and clueless, while on the whole banks are in a completely different class*.

So is there something materially different about financial services in the States that for some reason requires rather minimal security standards to be imposed on the industry by a government regulator?  Without the regulations, would US banks not be concerned about protecting their customers' assets?  I wonder whether, unless spoon-fed the appropriate security advice, they would casually leave the vault doors open.

That the FFIEC guidance even exists perhaps implies that (some) US financial institutions are incompetent, negligent and/or irresponsible regarding information security.  Coming hot on the heels of the 'sub-prime' fiasco, there does seem to be something of a mental block in the industry concerning risk and control.  Please tell me I'm wrong ...



* That's not to say that banks always get it right - take, for instance, the local branch that insisted on repeatedly FAXing confidential customer paperwork to my office phone, until I was annoyed enough to forward the call to our office FAX and discover the culprit.  It was a simple case of digital dyslexia - a wrong number stored in the FAX machine's memory.  The branch was of course embarrassed to discover the breach and the annoying calls stopped immediately.  Lesson over.  Move along.  No need for an industry regulation.

Friday 5 August 2011

Hard lessons

Distribute.IT, an ISP that suffered a devastating hacker attack on June 11th, was attempting disaster recovery by June 13th, was in serious trouble by June 17th, and finally admitted defeat on June 21st - just ten days after the hack - with the complete loss of several important customer-facing servers.  Some 4,800 domains and customer accounts were lost, with (it appears) no offsite data backups from which they might have been restored.

With 20/20 hindsight, someone in Distribute.IT's management presumably made some extremely unwise decisions regarding the risk that materialized. Whether they simply didn't consider or appreciate the risk, considered it too remote to address, or failed to treat the risk adequately, is now a moot point: whatever they did do was patently not good enough, and it looks like the business has failed. Controls that are meant to prevent hacks fail quite often in practice, so it would have been sensible to make suitable disaster recovery and business continuity arrangements on that basis. 
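
As an aside, even a rudimentary offsite backup arrangement would have left something to restore.  The Python sketch below is purely illustrative and assumes hypothetical paths and a hypothetical offsite host - it is emphatically not a description of anything Distribute.IT actually ran - but it shows the basic shape: bundle the nightly dumps, record a checksum so restores can be verified, and push the result somewhere an attacker on the primary servers cannot reach.

#!/usr/bin/env python3
"""Illustrative nightly offsite backup sketch (hypothetical paths and host).

The point is simply that backups must end up somewhere a compromise of the
primary servers cannot also destroy them.
"""
import datetime
import hashlib
import subprocess
from pathlib import Path

SOURCE_DIR = Path("/var/backups/nightly")        # hypothetical local dump directory
STAGING_DIR = Path("/var/backups/staging")       # hypothetical staging area for archives
OFFSITE = "backup@dr.example.net:/srv/offsite/"  # hypothetical offsite target


def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def main() -> None:
    STAGING_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    archive = STAGING_DIR / f"dump-{stamp}.tar.gz"

    # Bundle the night's dumps into a single dated archive.
    subprocess.run(["tar", "czf", str(archive), "-C", str(SOURCE_DIR), "."], check=True)

    # Record a checksum alongside the archive so restores can be verified later.
    checksum_file = archive.with_name(archive.name + ".sha256")
    checksum_file.write_text(f"{sha256(archive)}  {archive.name}\n")

    # Copy both files offsite; scp is used here for brevity.  A real arrangement
    # would favour pull-based or write-once storage so that a compromised server
    # cannot delete its own backups.
    subprocess.run(["scp", str(archive), str(checksum_file), OFFSITE], check=True)


if __name__ == "__main__":
    main()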

We know that now, and so do they and their customers - too late for this incident but hopefully not too late for the rest of us to learn the hard lessons.

Wednesday 3 August 2011

Hacking the Sun

The website of the Sun newspaper, sister paper to the now-defunct News of the World, has been hacked, compromising personal details of entrants to an online competition.  Whether this is linked to the Lulzsec and Anonymous hacks remains to be seen, but I'm glad I'm not an information security manager for the British tabloid press, or in fact any British news media.

RSA hack cost >$66m

EMC, which owns RSA, spent US$66m 'between April and June' as a result of the Trojan/hack incident in March that compromised its SecurID product.

$66m may be Information Week's headline figure, and that's a staggering amount of money for starters - but that's just it: it is only for starters.  We're told it "doesn't include post-breach expenses from the first quarter, when EMC began investigating the attack, hardening its systems, and working with customers to prevent their being exploited as a result of the attacks", so we know for sure it understates the full breach costs.

The wording of the disclosure also implies that it only covers the direct costs that are readily attributed to the breach.  Indirect costs such as brand/reputation damage, customer defections, lost sales prospects, damaged employee morale and more are hard even to estimate, let alone with sufficient accuracy to satisfy the bean-counters and marketing people who typically drive these "earnings calls".  Furthermore, the costs of the incident to RSA's customers are totally out of the picture.
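
To make the 'for starters' point concrete, here is a back-of-the-envelope tally in Python.  Every figure apart from the disclosed $66m is an invented placeholder - the point is the shape of the calculation, not the numbers: once the indirect line items are written down at all, they dwarf the headline figure.

#!/usr/bin/env python3
"""Back-of-the-envelope breach cost tally (all figures except the $66m are hypothetical)."""

DIRECT_COSTS_USD_M = {
    "disclosed remediation (Apr-Jun)": 66.0,   # the headline figure
    "Q1 investigation and hardening": 20.0,    # hypothetical placeholder
    "customer token replacement": 50.0,        # hypothetical placeholder
}

# Indirect costs are expressed as ranges because nobody can measure them
# precisely - which is exactly why they rarely appear in earnings calls.
INDIRECT_COST_RANGES_USD_M = {
    "brand/reputation damage": (50.0, 300.0),
    "customer defections and lost sales": (30.0, 200.0),
    "customers' own incident costs": (100.0, 1000.0),
}


def main() -> None:
    direct = sum(DIRECT_COSTS_USD_M.values())
    low = direct + sum(lo for lo, _ in INDIRECT_COST_RANGES_USD_M.values())
    high = direct + sum(hi for _, hi in INDIRECT_COST_RANGES_USD_M.values())
    print(f"Direct costs:         ${direct:,.0f}m")
    print(f"Plausible total:      ${low:,.0f}m - ${high:,.0f}m")
    print(f"Multiple of headline: {low / 66.0:.1f}x - {high / 66.0:.1f}x")


if __name__ == "__main__":
    main()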

The grand total may be orders of magnitude greater than $66m, all thanks to an employee retrieving an email from the spam folder and unwisely opening the attachment.

[Was that a Freudian slip?  I originally typed "attackment" which is not far from the mark.]