Wednesday 30 January 2008

Plan B


Despite our best intentions and investment in a range of preventive security controls, serious incidents and disasters may still interrupt IT systems and impact the business processes which they support. As some say, **it happens. Just when everything is running sweetly, something unanticipated occurs, revealing that Plan A is not quite so perfect after all.

Contingency planning (Plan B) puts us in a better position to survive any disaster by:
1) Managing the immediate crisis professionally and confidently;
2) Keeping the organization’s essential processes and systems running despite the event through resilience and continuity planning; and
3) Recovering non-essential processes and systems as soon as possible thereafter through disaster recovery planning.

The time to plan for a disaster is now, when things are going well: planning during a disaster will be too late.

As always, this month’s NoticeBored module provides a range of high quality security awareness materials aimed at staff, managers and IT pros. We found it relatively easy to write a detailed 9-page white paper on Disaster Recovery for IT and a 5½-page management briefing on Plan B. Crunching the key facts into one-page staff, management and technical briefings was harder, and doing so without losing the plot was quite tough. Our solution was to put the subject in context for each audience:
- We encourage ordinary employees to find out about their department’s contingency plans and draw up their own personal Plan B;
- For managers we point out their governance responsibilities and highlight the risk management advantages of thinking ahead and preparing for the worst;
- Technical aspects of high availability systems architecture and DR are of interest to IT people, and it doesn’t hurt to emphasize IT’s critical role in keeping the average corporation on the air.

Monday 28 January 2008

The social engineering threat

Having recently submitted an article for EDPACS on social engineering myself, I was interested to read a similar piece by Dan Timko in the latest ISSA Journal. Dan explores the psychological/human factors that make social engineering such a significant threat. His description of the controls is a bit light but covers the basics - policies and awareness, coupled with suitable technical controls where possible. Well worth a read.

The ISSA Journal is just one of the benefits enjoyed by ISSA members. The Information Systems Security Association is primarily an international social network that has brought information security professionals together at meetings for over 2 decades. Along with CISSPforum, ISSA neatly complements CISSP and similar qualifications, taking professional education well beyond the study guides, exam cramming and boot camps.

Saturday 26 January 2008

And yet another bad office day

A woman mistakenly thinking she was about to be fired allegedly took revenge on her employer by going into the office late one evening and deleting data files worth $2.5m. Although the deleted data were later retrieved (whether from backups or by 'undeleting' them is not stated), the potential remains for trusted insiders with access to corporate IT assets to cause enormously costly damage by sabotage.

Deliberate or accidental sabotage by backup operators is a tough threat to control. They have both physical and logical access to servers and their data, often work unsupervised out-of-hours, and are mostly relatively junior staff. Trust is the primary control, though many would argue that it is no control at all, merely blind faith in many cases. The risks can be reduced by various security control measures, such as:
- Alternating backup operators
- Combining on- and off-site backups
- Tightly controlling physical access to backup storage and especially archives
- Closer management supervision and/or physical monitoring of trusted employees working in the data center
- Better training and automation of backup processes, reducing the need to give backup operators unrestricted logical access to data (see the sketch after this list)
- Better HR processes for monitoring employees in such trusted positions and more respect for the valuable jobs they perform.
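
On that automation point, here is a minimal sketch of the sort of job I have in mind: a scheduled backup that runs under a dedicated service account and writes compressed, checksummed archives to a restricted location, so the operator who kicks it off never needs interactive access to the live data. The paths and names are hypothetical, of course - adapt to taste.

```python
"""Minimal sketch of an automated backup job (hypothetical paths and names)."""
import hashlib
import tarfile
from datetime import datetime, timezone
from pathlib import Path

SOURCE = Path("/srv/appdata")           # data to protect (assumed location)
ARCHIVE_DIR = Path("/backup/archives")  # write-only area for the backup service account
LOG = Path("/backup/backup.log")

def run_backup() -> Path:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = ARCHIVE_DIR / f"appdata-{stamp}.tar.gz"
    with tarfile.open(target, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)   # archive the whole tree
    # Record a checksum so later tampering with the archive is detectable
    digest = hashlib.sha256(target.read_bytes()).hexdigest()
    with LOG.open("a") as log:
        log.write(f"{stamp} {target.name} sha256={digest}\n")
    return target

if __name__ == "__main__":
    print(f"Backup written to {run_backup()}")
```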

New security standard for teleworkers

NIST security standard SP800-114 is a new User’s Guide to Securing External Devices for Telework and Remote Access.

"Many people telework (also known as telecommuting), which is the ability for an organization’s employees and contractors to conduct work from locations other than the organization’s facilities. Teleworkers use various devices, such as desktop and laptop computers, cell phones, and personal digital assistants (PDA), to read and send email, access Web sites, review and edit documents, and perform many other tasks. Most teleworkers use remote access, which is the ability of an organization’s users to access its nonpublic computing resources from locations other than the organization’s facilities. Organizations have many options for providing remote access, including virtual private networks, remote system control, and individual application access (e.g., Web-based email)."

The 14,000 customers of an ISP who lost their email accounts (see our previous blog entry) could have avoided disaster by taking the 46 pages of free but sound advice in SP800-114. Its scope is much broader than data backups, covering aspects such as securely configuring and maintaining operating systems, using VPNs for remote access etc.

Another bad day at the office

A software error during routine maintenance caused an ISP, Charter Communications, to delete the contents of 14,000 customer email accounts.

"Charter gives each new Internet user a free e-mail account, but some customers opt to use other accounts instead. So every three months the company deletes inactive accounts, Lamont said. "During this maintenance we erroneously deleted active accounts along with the others," Lamont said. "It's never happened before. They are taking steps to make sure it never happens again."


The news article doesn't mention whether the "software error" was an unfortunate and evidently untested change to the maintenance scripts (indicating a hole in their change management processes), a genuine bug in the code (possible, I guess), or a simple human error by an operator/systems manager (entirely possible). Since the lost email accounts disappeared forever in a puff of logic, it seems the ISP had no backups of customer data - not just 'no recent backups' but 'no backups whatsoever' (a gaping hole as far as their customers are concerned, but no doubt a legitimate money-saving measure from the ISP's perspective).
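
Whatever the real cause, the failure mode is depressingly familiar: a periodic clean-up job that deletes more than it should. Purely for illustration (the account model and field names are my own invention, not Charter's), a couple of cheap safeguards - an explicit inactivity test, a sanity threshold and a dry-run mode - would have turned this disaster into a log entry:

```python
"""Sketch of a defensive account-purge job (hypothetical data model)."""
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

INACTIVITY_CUTOFF = timedelta(days=90)  # "every three months"
MAX_PURGE_FRACTION = 0.05               # abort if more than 5% of accounts would go

@dataclass
class Account:
    address: str
    last_login: datetime

def select_inactive(accounts: List[Account], now: datetime) -> List[Account]:
    return [a for a in accounts if now - a.last_login > INACTIVITY_CUTOFF]

def purge(accounts: List[Account], now: datetime, dry_run: bool = True) -> List[str]:
    victims = select_inactive(accounts, now)
    if accounts and len(victims) / len(accounts) > MAX_PURGE_FRACTION:
        raise RuntimeError("Purge aborted: candidate list exceeds sanity threshold")
    if dry_run:
        return [f"WOULD DELETE {a.address}" for a in victims]
    return [f"DELETED {a.address}" for a in victims]  # the real deletion call would go here
```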

This incident cost the ISP a $50 credit to each affected customer - presumably rather less than 14,000 x $50 ($700k) in total, as some customers will defect before using up their credit. The reputational damage could be even costlier, although the truth is that such unfortunate incidents can, and indeed occasionally do, strike most organizations.

The Silicon Valley piece ends rather lamely with "Computer experts advise backing up all important e-mail", implying in effect that customers are to blame for losing their emails. In some ways that is true (presumably any small businesses or power users will have been using local email clients such as Outlook to download and read their emails, and so should have local backup copies) but I would advise Charter Comms to look long and hard at its information security arrangements.
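
For customers who keep everything in webmail, a periodic local copy is cheap insurance. Here is a minimal sketch using Python's standard imaplib, assuming the provider offers IMAP access (the server name and credentials below are placeholders):

```python
"""Minimal local e-mail backup over IMAP (placeholder server and credentials)."""
import imaplib
from pathlib import Path

SERVER = "imap.example.net"     # assumed: the provider exposes IMAP
USER = "customer@example.net"
PASSWORD = "change-me"
BACKUP_DIR = Path("mail-backup")

def backup_inbox() -> int:
    BACKUP_DIR.mkdir(exist_ok=True)
    conn = imaplib.IMAP4_SSL(SERVER)
    conn.login(USER, PASSWORD)
    conn.select("INBOX", readonly=True)        # never modify the mailbox
    _, data = conn.search(None, "ALL")
    ids = data[0].split()
    for msg_id in ids:
        _, msg_data = conn.fetch(msg_id, "(RFC822)")
        raw = msg_data[0][1]
        (BACKUP_DIR / f"{msg_id.decode()}.eml").write_bytes(raw)
    conn.logout()
    return len(ids)

if __name__ == "__main__":
    print(f"Saved {backup_inbox()} messages to {BACKUP_DIR}/")
```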

Thursday 24 January 2008

New IT security standards for US electricity industry

FERC, the Federal Energy Regulatory Commission, has approved eight new mandatory critical infrastructure protection (CIP) reliability standards developed by NERC, the North American Electric Reliability Corporation, covering:
- Critical cyber asset identification (NERC standard CIP-002) - essentially inventory and risk assessment of critical information assets;
- Security management controls (CIP-003) - security policy and management structure, exceptions process etc.;
- Personnel and training (CIP-004) - personnel risk assessment, training and, of course, security awareness;
- Electronic security perimeters (CIP-005) - a 'crunchy outer shell' for networks;
- Physical security of critical cyber assets (CIP-006) - physical perimeter controls, card locks, processes, visitor logs etc.;
- Systems security management (CIP-007) - security testing and patching, controlled network services, antivirus, security monitoring and various other IT security controls including, I note, passwords of at least six alphanumeric-plus-punctuation characters with a lifetime of up to one year (!) - see the sketch after this list;
- Incident reporting and response planning (CIP-008) - an annually-reviewed incident response plan; and
- Recovery plans for critical cyber assets (CIP-009) - DR plans with at least annual exercises.
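
For what it's worth, here is just how minimal that CIP-007 password floor is, on my reading of it - at least six characters mixing letters, numbers and punctuation, changed at least annually:

```python
"""Sketch of the CIP-007 password floor described above (my reading of the requirement)."""
import string
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)

def meets_cip007_floor(password: str, last_changed: date, today: date) -> bool:
    long_enough = len(password) >= 6
    has_alpha = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_punct = any(c in string.punctuation for c in password)
    fresh_enough = (today - last_changed) <= MAX_AGE
    return long_enough and has_alpha and has_digit and has_punct and fresh_enough

# e.g. meets_cip007_floor("Pa$sw1", date(2008, 1, 1), date(2008, 6, 1)) -> True
```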

For completeness, CIP-001 covers sabotage reporting, the critical infrastructure equivalent of SB-1386 and similar requirements to report unauthorized credit card or personal data disclosures.

FERC's IT security standards are stronger than mere recommendations and will probably become fully mandatory when get-out clauses relating to business judgement are removed. In-scope companies should all have started work on this by now and must be fully compliant by mid-2008 or mid-2009, depending on the type of company and the specific standards.

FERC did not, however, go as far as mandating NIST's SP800-series security standards, excellent though they are, nor indeed international standards such as ISO/IEC 27002. The stated reason was to avoid delaying implementation. While I applaud their haste to beef up infrastructure security, it's a shame to ignore the large existing body of work on information security from the likes of NIST, ANSI, BSI, ISO, IEC and others. Arguably there is a need for specific security standards covering SCADA (Supervisory Control And Data Acquisition) systems, but the electricity industry is not pure SCADA by a long shot: there are conventional systems, many running Microsoft Windows and various UNIX/Linux variants, and TCP/IP networks all over the place, and the security architecture, operations and management issues are basically the same as for any other industry. [I guess adopting existing standards would put a posse of electricity industry security consultants out of jobs but IMHO they are better deployed implementing security standards than creating new ones.]

Looking over the list of bullets above, it is not hard to align FERC's advice with ISO/IEC 27002 ... whereupon gaps such as compliance stand out. FERC evidently intends to assess or audit the utilities' security against the standards, but there's more to compliance than formal assessments/audits. Electricity companies should have suitable governance structures and processes in place to ensure compliance with their internal security requirements (policies, standards, guidelines and procedures) and with legal obligations unrelated to FERC (e.g. software license compliance plus other intellectual property issues, SOX and protection of Personally Identifiable Information), along with compliance by their suppliers and business partners. There are solid commercial drivers for information security in the electricity industry, quite separate from the critical infrastructure protection angle. Surely FERC could leverage this to their advantage?

The standard on DR is also notable for the absence of any advice on contingency planning and business continuity. I would have thought that 'keeping the lights on' is the absolute number one priority for the electricity industry, so resilience matters even more than recovery. Perhaps this is so ingrained that it is taken as read, but I'm surprised by the omission.

By the way, I also couldn't help but notice that "Facilities regulated by the U.S. Nuclear Regulatory Commission or the Canadian Nuclear Safety Commission" are explicitly excluded from the scope of the standards. I trust the nukes have their own, strong, rigorous, comprehensive cyber security standards ... they do, don't they?

Wednesday 23 January 2008

Social engineering for $$$$$$

Following an entry on the excellent Realtime Community Compliance Blog (hi Rebecca! Nice one!), I've been reading about social engineering attacks on US Credit Unions. The Credit Union Times reported that social engineers have successfully bypassed inadequate user authentication methods to authorize fraudulent transfers of large credit balances to other banks and, presumably, quickly moved on through unwitting money mules to lovely untraceable folding munny.

The Credit Unions appear to be using telephone call-backs as part of the authentication, but those naughty scammers have allegedly discovered how to get the phone companies to redirect calls and thus spoof the phone numbers. They can evidently also answer the pretty lame authentication questions typical of single-factor authentication schemes (you know - "What is your secret password? What is your mother's maiden name? What is your inside leg measurement?" - that kind of thing), perhaps through insider access to the Credit Union's systems, through phishing or spyware on the customers' systems (probably introduced using more social engineering techniques), or by directly socially engineering the genuine customers into revealing the very same secrets. Now that's one excellent reason to be extremely dubious when, out of the blue, you get a call "from your bank, just needing to check a few things, but first we need you to authenticate. What is your secret password? What is your mother's maiden name? What is your inside leg measurement? ...".

In the past, I have personally been on the receiving end of what were probably legitimate but unsolicited calls from my bank, yet the bankers invariably went all defensive or indignant when I insisted that THEY authenticate themselves to ME before I would authenticate myself to THEM. The irony of it was absolutely lost on them. "We're your bank: trust us" was basically their best 'response', lame though it is. Some of them get quite obnoxious but the harder they insist, the wider my smile. It's fun in fact and a good wind-up for other unsolicited sales callers too. Anyway I digress.

It's not too hard to think of simple methods by which the bank could authenticate to its customers - for example, asking the caller to reveal certain letters from your password or to confirm the amount of a specific transaction from your latest statement - but all such simple schemes are vulnerable to replay attacks. It's exactly the same problem that the bank has, only in reverse.
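
Replay resistance is not hard in principle: both parties hold a shared secret and answer fresh random challenges with a keyed hash, so overhearing one call gives a fraudster nothing reusable. A rough sketch (the pre-shared key and the protocol framing are entirely my own invention, not anything the banks actually offer):

```python
"""Sketch of challenge-response mutual authentication over a shared secret."""
import hashlib
import hmac
import secrets

SHARED_SECRET = b"agreed-at-account-opening"   # hypothetical pre-shared key

def new_challenge() -> str:
    return secrets.token_hex(16)               # unpredictable, never reused

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    return hmac.compare_digest(respond(challenge, secret), response)

# The customer challenges the caller claiming to be the bank:
challenge = new_challenge()
print(verify(challenge, respond(challenge)))   # True only for a holder of the secret
```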

I'm sorely tempted to take my own one-time-password bingo card into my bank branch - just like the ones that various cheapskate banks are using to implement the el cheapo form of two-factor authentication - and insist that they read out and scratch off the next number whenever we speak. You can be sure that the bingo codes will be horrendously complex 'cos I know about entropy. You can be equally sure that the bank won't fall for it.
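
Since I mentioned entropy, here's roughly what my home-made bingo card would look like - the alphabet, code length and card size are entirely my own choices:

```python
"""Sketch of a one-time-code 'bingo card' and its entropy (alphabet and sizes are my own choices)."""
import math
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits   # 36 possible symbols per position
CODE_LENGTH = 8
CARD_SIZE = 50

def make_card():
    return ["".join(secrets.choice(ALPHABET) for _ in range(CODE_LENGTH))
            for _ in range(CARD_SIZE)]

bits_per_code = CODE_LENGTH * math.log2(len(ALPHABET))   # roughly 41 bits per code
print(f"{bits_per_code:.1f} bits of entropy per code")
print(make_card()[:3])   # e.g. ['K7Q2ZP0M', ...]
```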

Of course all of this bank-authenticates-to-customer stuff is highly inconvenient for the bank, so we're left with "Trust us. We're your bank! No really! We are! We are we are we are! We are so your bank ...".

CUNA Mutual advised credit unions to:
- "establish a password system" (single-factor authentication - surely they have this already, no?);
- "have a written agreement with the member for the use of these passwords" (to limit their liabilities, of course - again, don't they do this by default?);
- remember that "If there is any doubt as to authenticity of the funds transfer request, credit unions are reminded they do not have to perform a wire transfer." (no, really? Golly!);
- "Limit the amount of wire transfer that can be completed by a call center employee. Managers should approve all wire-transfer requests." (division of responsibilities is good but does not address the basic problem of authenticating transfer requests);
- "Record conversations during the call-back and compare it to previously recorded conversations [and] listen to the caller. Does he or she have an accent that is inconsistent to your membership?" (an interesting idea but a rather weak and awkward control);
- "Perform an additional verification to the member’s work and/or cellular telephone number." (another weak control, but at least they are thinking along the right lines); and finally
- "send an e-mail to the member at home and/or work" (presumably confirming the transaction - a useful post-hoc activity that would make a stronger control if the transaction were put on hold pending final confirmation by digitally-signed email).

Come along, CUNA Mutual: US banks are grudgingly implementing the two-factor authentication that European and other banks have used for years. Anyone who lags the field is a sitting duck.

Do I look that stupid?

Look what just plopped into my inbox ...

Subject: Capital Investment and Management Request

Dear Friend,

I am a freelance, independent investment broker based here in Britain.

My client wishes to invest a part of his financial estate into productive ventures in your country under your direct supervision.

He looks to make this investment discreetly under discretionary asset Management arrangement, in the areas of agriculture, real estate, transport, oil and gas and other viable venture(s) which you might recommend. I have contacted you on the consideration that I could discuss with you on the possibility of my client placing this fund with you for management either in your existing establishment or other venture to be undertaken at your discretion under terms to be agreed upon. He Prefers that this investment be made in your country.

I would be expecting your response in order that we may discuss further in detail.

Please write through my email address so that we may work out modalities.

Yours faithfully,

Mr. William Smith


"Mr. William Smith" is clearly a pseudonym: no-one loves that word "modalities" quite as much as those kinky West African 419ers. What is it with "modalities"? Is it one of the standard English words taught in West African high schools? Or is it just a meme? I'll have to ask my Nigerian colleagues ...

Meanwhile, I reported the email to abuse@google.com with the original header and got a useful auto-reply:
Hello,

Thank you for your report. Your email has been provided to the Gmail Abuse team.

To help us process your request as quickly as possible, we recommend visiting the Gmail Privacy & Security topic at
https://mail.google.com/support/bin/topic.py?topic=12784

WHAT HAPPENS WHEN YOU REPORT ABUSE?

Your email has been provided to the Gmail Abuse team. Any additional information that you provide through the forms in the Gmail Security Center will be added to your original message, and will help us to more efficiently process your request.

Google takes abuse situations very seriously -- your claim will be given the highest priority. When submitting a claim through our Security Center, please include as much information as possible, so that the Gmail Abuse team can investigate thoroughly and work quickly to resolve your claim. As appropriate, we may warn users or discontinue Gmail service for the account(s) in question. For privacy and security reasons, we may not reveal the final outcome of an abuse case to the person who reported it.
To read the Gmail Terms of Use, please visit http://mail.google.com/gmail/help/terms_of_use.html.

If your issue is not related to abuse, you may want to visit our Help Center at http://mail.google.com/support/, or by clicking 'Help' at the top of any Gmail page within your account.

We appreciate the urgent nature of your message, and thank you for your cooperation.

Sincerely,

The Google Team

Monday 14 January 2008

Computer data more valuable than coins and equipment

An office break-in story (highlighted by InfoSec News) appears to indicate a targeted theft of computers for the valuable data they contained, rather than the hardware itself.
"PICKY thieves have led one private education centre to believe that industrial espionage might be the motive for a recent break-in. Early this week, three of the CES group's computers - containing the personal details and contacts of its 30,000 students - were stolen from its Eu Tong Sen Street office. Surprisingly, 10 other computers in the same location, some of them newer than the stolen items, and other expensive equipment like scanners were left untouched. The thieves' specific choices have led CES group chairman Desmond Lim, 35, to suspect that they could have been looking for the information stored in these computers for business reasons. ... And while the computer stolen from the administration room might have been the oldest, it was also the only one with all the students' data, said Camford Business School principal Indra Padmakumara, 30, whose school is part of the CES group. The other three computers in that room were not taken, she said. Nor were they tampered with. The door to Mr Lim's room was forced open, although a brand new projector, a digital camera and a box full of coins, all lying within plain view, were not taken."

Look around you and think: how much valuable data is stored on your office systems? Are the disks and offline storage media encrypted? Are there sufficiently strong access controls protecting the office itself?
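
On the encryption point, even without full-disk encryption there is little excuse for leaving sensitive files in the clear on office machines or offline media. A minimal sketch using the third-party Python 'cryptography' package (the filename is hypothetical, and key management - keeping the key well away from the media it protects - is the hard part this glosses over):

```python
"""Sketch: encrypt a file before it goes onto offline media.
Uses the third-party 'cryptography' package; the filename is hypothetical."""
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(plain: Path, key: bytes) -> Path:
    token = Fernet(key).encrypt(plain.read_bytes())   # authenticated symmetric encryption
    out = plain.parent / (plain.name + ".enc")
    out.write_bytes(token)
    return out

key = Fernet.generate_key()   # store the key separately from the media it protects
print(f"Encrypted copy written to {encrypt_file(Path('students.db'), key)}")
```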

Friday 11 January 2008

Barclays chairman ID stolen

An identity thief has stolen £10,000 from Barclays Bank by requesting a credit card in the name of the bank's chairman and withdrawing cash from a branch.

According to the bank, 'procedures have been tightened' as a result.
Barclaycard has repaid the £10,000 to Mr Agius after admitting 'human error' was to blame for the blunder.

Because the chairman is a company director, his personal details - such as his full name, date of birth and home address - are freely available from the public records at Companies House. It's not clear from the newspaper article what further information, if any, was required of the identity thief, nor what credentials, if any, he/she presented at the branch other than the fraudulently-obtained credit card. I would guess that anyone asking to take out ten grand in readies would be given the third degree at the desk and would most likely be seen on the branch CCTV system ...

Blogs trump piracy

An intriguing article in the Washington Post recounts a handful of copyright abuse cases in which corporations have used photographs taken by amateurs and published online, for example in their blogs or on social networking websites. There's a curiously ambiguous thread to the piece: on the one hand it says perhaps people shouldn't publish material online if they don't want it to be copied and used elsewhere, while on the other it notes that people are increasingly calling their lawyers to defend their rights. It is strongly implied that corporations should know better, in other words there's a David and Goliath element to it, especially if the self-same corporations are quick to defend their own copyright material against abuse by others.

Blogs and other online social interactions are credited with informing people that their images are being abused, and helping them defend their rights. Online communication between people is definitely changing the nature of human culture. How else could loose-knit communities spread across various countries collaborate with such ease?

Copyright law makes no distinction between original materials created/published/used by amateurs versus professionals. Anyone who uses images and other original materials in their own work either needs explicit permission from the copyright owner (for example through a license agreement or contract) or has to conform to the narrow "fair use" provisions (at least in countries that allow "fair use" - I gather Canada is a notable exception to the norm).

Several of the cases noted involved abuse of images by 'low level employees', corporate-speak for office juniors who are either unaware of, or choose to ignore, their copyright obligations. Clearly, corporate security awareness programs should cover copyright and other compliance obligations [as indeed NoticeBored recently did!].

Thursday 10 January 2008

Having a bad day at the office?

An IT systems administrator, fearing that he was about to be laid off, planted a logic bomb in his employer's systems. He survived the round of redundancies but detonated the logic bomb anyway. Fortunately for all concerned, bugs in the code prevented it from working properly. In court, he was found guilty, sentenced to 30 months in jail and ordered to pay $81,200 in restitution.

This story touches on quite a number of security topics:
- He was a trusted insider who went bad
- Logic bombs are a form of malware
- His office/day-job gave him privileged access to the company's IT assets
- Weak change management process controls did not prevent the bomb being installed (see the sketch after this list for one cheap detective control)
- The logic bomb had one or more bugs in the program/script
- Nevertheless it sparked a security incident
- He was called to account for the damage
- There was legal and presumably corporate policy noncompliance
- The risk of recurrence presumably remains
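
On the change management point, one cheap detective control is to compare what is actually deployed against an approved baseline of file hashes, so anything slipped in outside the process - a logic bomb included - shows up as a difference. A minimal sketch, with hypothetical paths and baseline format:

```python
"""Sketch of a baseline integrity check for deployed scripts (hypothetical paths and format)."""
import hashlib
import json
from pathlib import Path
from typing import Dict

DEPLOY_DIR = Path("/opt/scripts")          # assumed location of production scripts
BASELINE = Path("/var/lib/baseline.json")  # assumed format: {"relative/path": "sha256hex", ...}

def current_hashes(root: Path) -> Dict[str, str]:
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()}

def compare() -> None:
    approved = json.loads(BASELINE.read_text())
    actual = current_hashes(DEPLOY_DIR)
    for path in sorted(set(approved) | set(actual)):
        if approved.get(path) != actual.get(path):
            print(f"UNAUTHORIZED CHANGE: {path}")   # added, removed or altered outside the process

if __name__ == "__main__":
    compare()
```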

All in all, a nice multi-purpose security awareness case study.

PS The official US DOJ press release about the conviction is dated "Dec 8 2008", an integrity failure to boot.

Tuesday 8 January 2008

Clarkson eats humble pie

Arrogant British motoring journo Jeremy Clarkson, star of Top Gear, pooh-poohed the potential for identity theft after millions of benefit claimants' personal details were lost recently. He claimed personal information is freely available whenever people write cheques and the like, and even published his own bank details in a newspaper to press the point home.
Well, someone evidently took up the challenge and committed Clarkson to a Direct Debit payment of £500 to a charity. Clarkson has now done a swift U-turn, admitting he was wrong and deserved to be punished. The BBC reports him saying:
"Contrary to what I said at the time, we must go after the idiots who lost the discs and stick cocktail sticks in their eyes until they beg for mercy."

Whether that is the end of his troubles remains to be seen. He's probably got that nagging identity theft victim's feeling that someone is still spending his money, living his life, opening lines of credit in his name ...

Sunday 6 January 2008

When losing the office key codes makes headline news

When a vehicle maintenance contractor's car was stolen, thieves removed a clipboard with a sheet of paper listing access codes for pushbutton locks on 73 Police station yards in West London. The contractor disclosed the loss and all the numbers were changed within 11 hours, but this was yet another embarrassing security blunder for HM Government. Questions have been posed about why a civilian had access to such sensitive information and why he failed to secure it adequately. The relatively poor security afforded by mechanical pushbutton locks would be another concern, although thankfully Police stations have multiple overlapping layers of physical security.