Friday 29 June 2012

Security and privacy compliance awareness



Especially if you work in a heavily-regulated industry, you may not be the least bit surprised to discover that our latest awareness module on security compliance is weighty. Admittedly the high-res poster graphics account for much of its 100MB, but the annotated seminar presentations, briefing papers, mind maps and so forth ended up bigger than normal, even without going into detail on specific compliance requirements or resorting to the convoluted and archaic language heretofore favored by the legal profession.

So what makes compliance such a bulky awareness topic?

Part of the reason is of course that compliance obligations are many and varied. As a taster of the content in the module, here are 10 types of information security and privacy-related laws and regulations, taken from a list of 20 in one of the general employee awareness papers:
  1. IT and corporate governance - directors’ responsibilities to society and owners 

  2. Integrity, availability, accurate and complete reporting of financial data (mainly)
     
  3. Copyright, patents, trademarks and designs, laying down Intellectual Property Rights 
     
  4. Reporting and notification of those affected by information security incidents/breaches 

  5. Disclosure of information by public bodies or in the public interest (Freedom Of Information)
     
  6. Information security and privacy standards recommending good practices 

  7. Distance selling and tax laws (e.g. running businesses on eBay) 

  8. Restrictions on the import/export and use of strong cryptography (e.g. in France and Israel) 

  9. Contracts, agreements and warranties (e.g. the validity of electronic signatures) 

  10. Internet Service Providers, IP addresses, domain names (industry regulations). 
Since our customers are doing business all over the globe, we touched on the complexities of having to comply with laws and regs in the international context, again without going into specifics. Issues such as jurisdiction and the differing rules of evidence make this a significant challenge but there is an important rider to all our awareness content: we are not dispensing legal advice! We've done our level best to keep it generic, readable and most of all interesting and engaging.

We deliberately interpreted our scope widely, going beyond security/privacy laws and regs to discuss compliance with corporate security policies for instance. This gave us an opportunity to raise the ethical and cultural aspects of compliance - again, just a light touch to prompt managers and staff to think things through for themselves, perhaps reminding them of previous awareness materials on those topics. [One of the advantages of our monthly cycle is that we don't have to go into depth on everything right now: we can refer back to stuff we've raised before, and we will pick up various loose ends in future months, giving continuity and consistency over the course of the awareness campaign that more conventional approaches lack.] 

Security/privacy clauses in commercial contracts get a mention too, and with good reason: they are often quietly slipped in there by the legal and procurement people only to be forgotten ... until a security or privacy incident blows up and all of a sudden they pop out of the woodwork. One of the case studies picks up on exactly that issue, hopefully prompting the class to think about what perhaps ought to be done in the way of security compliance during the life of the contract, as part of routine relationship management.

It was tempting to bleat on about penalties and enforcement actions but aside from the odd mention (oh, and that poster image!) we consciously chose not to flog that particular horse. Enforcement is such a downer that we preferred instead to focus on the advantages of voluntary compliance, particularly the value of adopting good practice security standards and frameworks such as ISO27k and COBIT - a far more positive and upbeat awareness message, don't you think?

Wednesday 27 June 2012

A PRAGMATIC security/privacy compliance metric

In the course of considering how to measure an organization's compliance with security and privacy related obligations, the PRAGMATIC method has proven itself a valuable way to structure the analysis.  Today I want to discuss how taking the PRAGMATIC approach led me to design a better compliance metric by addressing the weaknesses in one of the candidate metrics.

I started by brainstorming possible ways to measure security/privacy compliance activities, focusing on the key factors or parameters that are most likely to be of interest to management for decision making purposes.  With a bit of Googling and creative thinking in odd spare moments over the course of a few days, I came up with a little collection of about 8 candidate compliance metrics:
  • The rate of occurrence of security/privacy-related compliance incidents, possibly just a simple timeline or trend, but ideally with some analysis of  the nature and significance of the incidents;

  • A 'compliance status' metric derived through reviews, audits or assessments across the organization;

  • Compliance process maturity using a maturity scale; 

  • 'Compliance burden'.  Management would presumably be quite keen to know how much compliance is really costing the organization, and could use this information to focus on areas where the costs are excessive;

  • Plus 4 other metrics I won't bother outlining right now, plus a further undetermined number of minor variants. 
In exploring the 'compliance burden' metric idea, it occurred to me that although it is technically possible for management to attempt to measure the time, effort and money spent on all security/privacy compliance-related activities such as compliance reviews/audits, disciplinary action, legal and other enforcement actions, it would be difficult and costly to measure all aspects accurately.  There is also the issue of 'double-counting', in other words categorizing costs under multiple accounting headings and so artificially inflating the total.

However, simply recording, tracking and periodically reporting security/privacy-related enforcement actions (i.e. penalties imposed, disciplinary actions taken, successful prosecutions etc.) would significantly reduce the Cost (and complexity) of the metric, and at the same time make it more Accurate, Meaningful and Relevant.  Focusing on enforcement improves the metric's Independence too, since enforcement actions are almost invariably formally recorded somewhere, making it much harder for someone to falsify or ignore them - which a manager might well be tempted to do if, say, the metric reflects badly on his/her department.

The icing on the cake is that the metric remains highly Actionable: it is patently obvious that a department with a bad record of enforcement (e.g. a string of costly noncompliance penalties) needs to up its game, significantly improving its compliance efforts to reduce the threat of  further enforcement actions.  Since most enforcement actions either have direct costs (fines and legal bills), or the costs can be quite easily calculated or at least estimated, the metric could be expressed in dollars, resulting in the usual galvanizing effect on management.  

Creative managers might even be prompted to initiate enforcement actions against third parties who fail to comply with the organization's security/privacy requirements imposed through contractual clauses, nondisclosure agreements etc., since successful actions might offset enforcement actions against the organization and so improve the metric in their areas of responsibility.

This, then, is an example of an indicator: measuring enforcement actions, specifically, does not account for the full costs of compliance but looks to be a reasonable analog.  Over time, I anticipate management improving compliance activities to bring negative enforcement costs down and positive enforcement actions up to acceptable levels - the metric should gradually level off and act as a natural restraint against excessive, overly-aggressive and counterproductive compliance actions.
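
To make the idea concrete, here is a minimal sketch of how such an enforcement-action register might be recorded and rolled up into a dollar figure. The structure, names and numbers are purely illustrative, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class EnforcementAction:
    department: str    # accountable business unit
    description: str
    cost: float        # direct or estimated cost in dollars
    against_us: bool   # True = penalty we incurred; False = action we won

def net_enforcement_cost(actions, department=None):
    """Dollar value of the indicator: costs incurred minus amounts
    recovered through successful actions against third parties."""
    selected = [a for a in actions
                if department is None or a.department == department]
    incurred = sum(a.cost for a in selected if a.against_us)
    recovered = sum(a.cost for a in selected if not a.against_us)
    return incurred - recovered

register = [
    EnforcementAction("Sales", "Privacy regulator fine", 250_000, True),
    EnforcementAction("Sales", "NDA breach settlement won", 40_000, False),
]
print(net_enforcement_cost(register, "Sales"))  # 210000.0
```
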

That's it for now.  I won't elaborate further on using the PRAGMATIC scores to rank the candidate metrics or to guide the design and selection of the best variants of the 8 metrics I started with, but if you have specific questions, please comment on this blog or raise them on the SecurityMetametrics forum.

Monday 25 June 2012

SMotW #12: Firewall rule changes

This is one of the lowest-ranked example metrics in our collection of 150, with a pathetic PRAGMATIC score of just 9%.  What makes this one so bad?

For starters, as described, it is expressed as a simple number, a count.  What are recipients of the metric expected to make of a value such as, say, 243?  Is 243 a good number or does it indicate a security issue?  What about 0 - is that good or bad?  Without additional context, the count is close to meaningless.

Additional context would involve knowing things such as:
  • The count from previous periods, giving trends (assuming a fixed period)
  • Expected value or ranges for the count, often expressed in practice by traffic-light color coding (see the sketch after this list)
  • Verbal explanation for values that are outside the expected range
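
For illustration, here is a minimal sketch of that traffic-light coding. The thresholds are invented; in practice they would have to be derived from historical data for each firewall and period:

```python
def rag_status(count, green_max=200, amber_max=300):
    """Crude traffic-light coding of a raw count against an expected range.
    The thresholds are purely illustrative."""
    if count <= green_max:
        return "GREEN"
    if count <= amber_max:
        return "AMBER"
    return "RED"

print(rag_status(243))  # AMBER ... but is that actually good or bad?
```
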
Even assuming we have such contextual information, sufficient to recognize that the latest value of the metric is high enough to take it into the red zone, what are we expected to do about it? Presumably the number of firewall rule changes in the next period should be reduced to bring the metric back into the green. Therefore tightening up the change management processes that review rule changes, so that a greater proportion of changes is rejected, would be a good thing, right? Errr, no, not necessarily. The very purpose of most firewall rule changes is to improve network security, in which case rejecting them purely because there are too many would harm security ... and this line of reasoning raises serious questions about the fundamental basis of the metric. We're going in circles at this point. If we move on to ask whether rule changes on different firewalls are summed or averaged in some way, and what happens if some firewalls are much more dynamic than others, we are fast losing the plot.

It must be obvious at this point that we have grave doubts about the metric's Relevance, Meaning and Actionability, which in turn mean it is not at all Predictive of security.  The Integrity (or Independence) rating is terrible and the Accuracy rating poor, since the person who measures and reports the metric is most likely the same person responsible for making firewall changes, and they are hardly going to recognize, let alone admit to others, that they might be harming security.  Unless the metric is much more carefully specified, they have plenty of leeway to determine whether a dozen new rules associated with, for instance, the introduction of IPv6 count as 12 or 1.

The PRAGMATIC scoring table sums it up: this is a rotten metric, a lemon, almost certainly beyond redemption unless we are so totally lacking in imagination and experience that we can't think up a better way of measuring network security! 

P     R     A     G     M     A     T     I     C     Score
2     1     1     10    2     33    14    4     17    9%

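Incidentally, the overall score in these tables appears to be the simple unweighted mean of the nine criteria percentages - an assumption on our part, but one that reproduces the published figures. A trivial sketch:

```python
def pragmatic_score(criteria):
    """Overall PRAGMATIC score as the unweighted mean of the nine criteria
    percentages (P, R, A, G, M, A, T, I, C) - assumed simple averaging,
    which matches the tables in these posts."""
    assert len(criteria) == 9
    return round(sum(criteria) / len(criteria))

print(pragmatic_score([2, 1, 1, 10, 2, 33, 14, 4, 17]))  # 9 (percent)
```
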
That's almost it for this week's metric, except to leave you with a parting thought: if someone such as the CEO seriously proposed this metric or something equally lame, how would you have dealt with the proposal before the PRAGMATIC approach became available?  Being PRAGMATIC about it gives you a rational, objective basis for the analysis but does this help, in fact?  We are convinced your discussion with the CEO will be much more robust and objective if you have taken the time to think through the issues and scores.  What's more, the ability to suggest other network security metrics with  substantially better PRAGMATIC scores means you are far less likely to be landed with a lemon by default.


Monday 18 June 2012

SMotW #11: Security budget

Security Metric of the Week #11: Security budget as a proportion of IT budget or turnover

Given how often this metric is mentioned, it was quite a surprise to find that it scores a measly 16% on the PRAGMATIC scale. Why is that?  What's so dreadful about this particular metric?

Our prime concern stems from the validity of comparing the 'security budget' with either the 'IT budget' or 'turnover' (the quotes are justified because those are somewhat ambiguous terms that would probably have to be clarified if we were actually going to use this metric).  First of all, comparing anything to the IT budget implies that we are talking about IT or technical security, whereas professional practice has expanded into the broader church of information security.  Information security is important for anyone using and relying on information.  It could be argued that it is even more important outside of the IT department, in the rest of the business, than within it.  Likewise, comparing the [information] security budget against the organization's turnover may be essentially meaningless as there are lots of factors determining each aspect independently of the other. 

<Cut to the chase>  Answer us this: what proportion should we be aiming for?  In other words, what's our target or ideal proportion?  If you can explain, rationally, how to determine that value, you are doing better than us!

The metric may have some value in enabling us to compare the security budgets over successive years, across a number of different organizations, or between several different operating units within one group structure, provided we compare them on an equal footing.  If, for example, a whole bunch of engineering companies belonging to a large conglomerate reported about 10% for this metric (making that the norm i.e. an implied target), apart from one company that stuck out with say 20% or 5%, management might be prompted to dig deeper to understand what makes that one so markedly different from the rest.  It's a fair bet that pressure would be brought to bear on the outlier to bring itself into line with the rest - such is the nature of metrics.  But would that necessarily be appropriate?  Who is to say that the majority are budgeting appropriately for security whereas the odd-man-out has got it wrong?  It is certainly conceivable that in fact it is taking the lead on security, or that there are perfectly valid and appropriate reasons that make it unique.  Perhaps the way it calculates its budgets is different, or maybe it is at a different state of security maturity.  It could be recovering from a major security incident or noncompliance, or its management may have a substantially different risk appetite than the others in the group.
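
To make that comparison concrete, here is a minimal sketch of the sort of mechanical outlier check described above. The unit names, figures and tolerance band are all invented:

```python
from statistics import median

def budget_outliers(units, tolerance=0.5):
    """Flag operating units whose security-budget proportion deviates from
    the group median by more than the stated fraction. Purely mechanical:
    spotting an outlier says nothing about who has actually got it right."""
    norm = median(units.values())
    return {name: pct for name, pct in units.items()
            if abs(pct - norm) > tolerance * norm}

group = {"AcmeEng1": 10, "AcmeEng2": 11, "AcmeEng3": 9, "AcmeEng4": 20}
print(budget_outliers(group))  # {'AcmeEng4': 20}
```
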

The point is that the metric could be distinctly misleading if considered in isolation.  Management might even be accused of being negligent if they were to act on it without a lot more information about the security and business situations that underpin it, in which case would we be any worse off if we didn't bother with it at all?

P     R     A     G     M     A     T     I     C     Score
13    3     16    2     2     0     4     18    88    16%

Single-digit scores for five of the nine PRAGMATIC criteria banish this candidate metric to the realm of soothsayers and astrologers - in respect of Acme Enterprises Inc, anyway.  Perhaps in your specific organizational context, this metric makes more sense, provides true value and justifies its slot on the security management dashboard - if so, we'd love to hear from you.  Feel free to comment below.  What are we missing here?  How do you make this one work?

Friday 15 June 2012

Rogue insiders

The kinds of insider incidents pulled by Nick Leeson at Barings Bank and Jerome Kerviel at Societe Generale demonstrate how much risk is associated with those in such powerful positions.  Both guys successfully bypassed sophisticated controls designed to limit their ability to take risky trading positions without proper authority, eventually causing eye-watering losses that nearly tipped over the global financial system's house of cards.  

Big risk-related questions remain about this type of massive internal threat: 
  • How many more rogue traders are still out there, doing much the same thing today?  

  • Is it even sensible, let alone possible, to draw the line between legitimate and illegitimate activities?  Given that, how can the really dangerous rogues (*) be distinguished from star performers?

  • How many people in other such powerful positions are rogues (*) working for themselves rather than their employers - ethically dubious at best, outright fraudsters at worst?
     
  • Which controls can truly be relied upon?

  • Where are the control gaps and vulnerabilities and which controls are needed?
I certainly don't have all the answers but I do know that multi-level security awareness is part of the solution. The corporate snitchline, for instance, is a powerful control that only works if a number of conditions are met, most importantly that people are aware that they have responsibilities to themselves, their employer and to society to report suspicious and inappropriate activities.

* "Rogue" is not the right word really.  It glamorizes fraud.  It has connotations of the cheeky chappy, the wide-boy, someone who is a bit of a trickster but is lovable and has a heart of gold.  In reality, their hearts aren't gold but their safety deposit boxes probably contain some.

Monday 11 June 2012

SMotW #10: Unsecured access points

Security Metric of the Week #10: Number of unsecured access points

As worded, this candidate metric potentially involves simply counting how many access points are unsecured.  In practice, we would have to define both "access points" and "unsecured" to avoid significant variations (errors) in the numbers depending on who was doing the counting.

Depending on how broadly or narrowly it is interpreted, "access points" might mean any of the following, if not something completely different:
  • WiFi Access Points, specifically; 
  • Legitimate/authorized points of access into/out of the corporate network e.g. routers, modems, gateways, WiFi Access Points, Bluetooth connections etc.;
  • Both legitimate/authorized and illegitimate/unauthorized points of access into/out of the corporate network - assuming we can find and identify them as such;
  • Designated security/access control points between network segments or networks e.g. firewalls and authentication/access control gateways;
  • Physical points of access to/from the organization's buildings or sites - again both legitimate/authorized and illegitimate/unauthorized (e.g. unlocked or vulnerable windows, service ducts, sewers), assuming we can identify these too;
  • Points of contact and communications between the organization's systems, processes and people and the outside world e.g. telephones, social media, email, face-to-face meetings, post ... 

Similarly, absolutely any access point might be deemed "unsecured" (more likely, "insecure") by a professionally-paranoid risk-averse security person who can envisage particular scenarios or modes of attack/compromise that would defeat whatever  controls are in place, or who knows through experience that controls sometimes fail in service.  Conversely, a non-security-professional might claim that every single access point is "secured" since he/she personally can't easily bypass/defeat it.  This kind of discrepancy could be resolved by some sort of rational decision process according to an assessment of the risks and the strength of the controls.  However, if the metric is used by management specifically to attempt to drive through security improvements at the access points, the people making the improvements tend to be the very same people who are assessing risks and controls, hence the metric would lose its objectivity and teeth.  Defining security standards for access points might help address the issue, and in fact that might be a useful spin-off benefit of using such a metric.


P     R     A     G     M     A     T     I     C     Score
95    80    90    70    85    77    45    75    55    75%

The PRAGMATIC score for this metric worked out at a very respectable 75% in the imaginary context of Acme Enterprises Inc.  It scored very well for Predictiveness (given that access control is a core part of security, so weaknesses in access control undermine most other controls) and Actionability (it is pretty obvious what needs to be done to improve the measurements: secure those vulnerable access points!).  The lowest-scoring element was Cost at 55% since defining security standards, locating potential access points and assessing them against the standards would undoubtedly be a labor-intensive process.

In the course of discussing the scoring, we considered possible variants of the metric itself and variations in the measurement process.  For instance, there might be advantages in reporting the proportion of access points that are unsecured: without more information about the total number of access points, recipients can't tell whether, say, 87 is a good number for the simple count version of this metric, whereas 87% is more Meaningful.  That straightforward change to the metric has a minor impact on the Cost since someone would have to count and/or estimate the total number of access points, and periodically revisit the calculation as things change.  We suspect Acme's management would like it too.

Furthermore, for some purposes it would be worthwhile knowing just how insecure the unsecured access points are, implying a rating scheme, perhaps something as crude as a red/amber/green rating for the security of each access point identified, maybe with a clear (uncolored) rating for those that have yet to be assessed.  Assessments that involve penetration testing, IT audits or professional security reviews might well generate the additional information anyway in order to prioritize the follow-up activities needed to secure the unsecured.  In short, although the metric's Cost would increase, so would its value, hence it might still rate 55% (the PRAGMATIC parameter we call Cost for short is in reality Cost-effectiveness).
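
A minimal sketch of how those two variants might combine - the proportion reported alongside per-point red/amber/green ratings, with unassessed points flagged. The access points and ratings are invented:

```python
# Invented access points; None means "not yet assessed" (the clear rating).
ratings = {
    "wifi-ap-01": "green",
    "vpn-gateway": "amber",
    "dialup-modem-07": "red",
    "loading-bay-door": None,
}

assessed = [r for r in ratings.values() if r is not None]
unsecured = [r for r in assessed if r in ("red", "amber")]
not_assessed = sum(1 for r in ratings.values() if r is None)
print(f"{len(unsecured)} of {len(assessed)} assessed points unsecured "
      f"({100 * len(unsecured) // len(assessed)}%); {not_assessed} unassessed")
```
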

The preceding discussion demonstrates how the PRAGMATIC approach is more than simply a static rating or ranking scheme for metrics: it facilitates and encourages creative discussion and improvement of metrics that are under consideration, focusing most attention on whichever aspects hold back the overall PRAGMATIC score.  Given the specific situation of this candidate metric, it would be feasible, for instance, to trade off Accuracy and precision to improve both the Cost and Timeliness scores by settling for rough but ideally informed and reasonable estimates of the proportions of secured versus unsecured access points instead of actual counts.  That might be a perfectly acceptable compromise for Acme's management.  The PRAGMATIC method provides a framework for exactly this kind of sensible discussion.
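
For instance, the proportion could be estimated from a random sample of access points rather than a full census. A minimal sketch, assuming simple random sampling and the usual normal approximation for the confidence interval:

```python
from math import sqrt

def estimate_unsecured(found, sample_size, z=1.96):
    """Estimated proportion of unsecured access points from a random sample,
    with an approximate 95% confidence interval (normal approximation) -
    trading Accuracy and precision for Cost and Timeliness."""
    p = found / sample_size
    margin = z * sqrt(p * (1 - p) / sample_size)
    return p, margin

p, m = estimate_unsecured(13, 50)
print(f"roughly {p:.0%} unsecured, give or take {m:.0%}")
```
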

NZ Cybersecurity Awareness week - woo hoo

The following sentence is quoted directly from the top of the first awareness leaflet I downloaded from the new website associated with a public information security awareness campaign, running in New Zealand this week:
"NetSafe has heard from hundreds of people who have has their account broken into because their passwords where weak - meaning they where easily acccessed by hackers." [sic]
Aside from the evident lack of competent proofreading, other concerns regarding the free security advice they are offering hardly inspire confidence in the campaign. For example, the same leaflet continues:
"PASSWORDS SHOULD BE:
STRONG:
Made up of a mix of 15 letters, characters and symbols. 
An example would be: Th1sI5a5tr0ngP@ssw0rd!"
Maybe the leaflet's author is not aware that:
  • Th1sI5a5tr0ngP@ssw0rd! is not 15 characters but 22 (it should have advised "at least 15 characters", or simply said "the longer the better").

  • Rather than "letters, characters and symbols" the author presumably meant "letters, numbers and punctuation".

  • Pass phrases in most modern systems can include spaces, so normal sentences, with conventional capitalization and punctuation, are OK. The short phrase "This is a strong password!", for instance, is 28 characters including the quotes, making it stronger than the convoluted example and much easier to recall and type accurately. [The convenient password tester at Rumkin.com tells us the leaflet's example password has 112 bits of entropy, whereas mine has 132 bits, and still has 122 bits even without the quotes - see the rough sketch after this list. My case rests m'lud.]

  • Complete lines from favourite songs, poems, books, quotations or  sayings make long, memorable passphrases, and better still suggest an obvious family of distinct passwords for different sites or when changing passwords (I won't lay into the dubious, outdated advice later in the same leaflet to change passwords every 90 days, at least not right now). 
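
On the entropy point above: Rumkin's tester uses character-frequency statistics, so the crude length-times-log2(character-set-size) estimate sketched below won't reproduce its exact figures, but it does illustrate why length beats l33t-speak substitutions:

```python
from math import log2
import string

def naive_entropy_bits(password):
    """Crude upper-bound estimate: length * log2(size of character set used).
    Deliberately simplistic - real strength estimators (including Rumkin's)
    use character-frequency statistics, so their figures will differ."""
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, string.punctuation + " "]
    space = sum(len(pool) for pool in pools
                if any(ch in pool for ch in password))
    return len(password) * log2(space)

print(round(naive_entropy_bits("Th1sI5a5tr0ngP@ssw0rd!")))       # 22 chars
print(round(naive_entropy_bits('"This is a strong password!"')))  # 28 chars, more bits
```
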
In summary, the leaflet is badly written, somewhat inaccurate and misleading, and doesn't bode well for the rest of the campaign.

Arguing that it is "better than nothing" is lame because they are missing a golden opportunity to give helpful information security advice to naive Kiwis, and no doubt spending my tax dollars to do it.


PS  Aside from ourselves (we weren't invited), notably absent from the list of corporate sponsors are the banks, and I can't say I blame them, despite their obvious reliance on customers to avoid phishing, malware and other nasties, most of which ultimately cost the banks $$$. Westpac's plain-speaking information security advice to its customers, for instance, would knock spots off the stuff in this campaign.

PPS  Please stop using "cyber" as a prefix. It reminds me of the terrifying cybermen from the iconic BBC series Doctor Who that I used to watch from behind the settee as a kid some 30-odd years ago. Security should be friendly, positive and welcoming, not scary and outdated. Computer security, IT security, network security, Internet security or information security are perfectly adequate and understandable terms without the connotations.

PPPS   Many other countries have run public security/privacy awareness campaigns, a few quite successfully over several years. I wonder if it even occurred to NetSafe to find out about them and apply the lessons from abroad, or was it "not invented here"?

PPPPS (June 18)  A classic spot-it-a-mile-off 419 scam story that led to a Christchurch man losing about $20k in advance fees for a nonexistent $600k prize from Ghana is yet another reminder of the importance of security awareness for Kiwis. Who knows: maybe the penny finally dropped for him when he saw the NetSafe campaign?

Saturday 9 June 2012

The California State Office of Information Security and Privacy Protection publishes a fair range of awareness materials of interest to State agencies and others. Their 4-page Hostile Takeover paper gives a decent outline of multiple controls against insider threats, including the need to cater for such incidents in incident response procedures.  Good point!

As with other forms of contingency planning, there are two common ways of preparing incident response procedures:
  1. Create a detailed manual explaining how to respond to a range of types of incident.  This is costly and tedious for the documentation team, since such detailed manuals are usually voluminous and complex to maintain. Keeping the manual updated, and ensuring that responders are adequately trained and aware of the latest procedures is an ongoing requirement. On the other hand, it is easier for responders to grab a manual, look up the type of incident, and follow the instructions - rather like a pilot might consult his flight manual to deal with unusual situations when flying.

  2. Create a simpler generic incident response process and multi-skilled team that can deal with practically anything that occurs. Train the team, emphasizing flexibility and thinking-on-your-feet. There is less documentation to prepare, agree and maintain, but a lot more depends on the skills and capabilities of the particular responders, hence responses to similar incidents are more likely to vary.
Approach 1 can run into trouble if the particular incident that unfolds is not covered by one of the scenarios in the manual, or (just as bad) is covered by several e.g. an insider attack involving malware and fraud might be covered by three response plans. Approach 2 can lead to confusion and errors in the process, particularly if different people are working on the same incident simultaneously, but separately.

A third way involves a combination i.e. a less-detailed manual covering a suite of common scenarios, with the responders being trained and skilled to cope with more complex situations on the fly. 

If your organization takes a different approach, I'd be fascinated to hear about it, and to find out how it works in practice. Please comment on this posting, or email me.

Friday 8 June 2012

Employer = Insidious insider?

A recent privacy case in New Zealand raises ethical and legal concerns in relation to whether an employer can legitimately snoop on its employees using keyloggers etc. on corporate IT equipment. Although I have absolutely no knowledge of this case other than that one newspaper report (which may be accurate but is certainly not complete), and I am definitely not a lawyer, forgive me if I consider the privacy, ethics and insider threat aspects that this kind of situation raises in more general terms.

From the employer's perspective, the IT equipment and network are its property, and of course it is likely that employees are using it during normal work hours when they are expected to be working for the employer. The employer would probably claim ownership of the information on its systems and network, hence using a keylogger to grab a password on an office PC and then rifling through the employee's emails could be deemed legitimate, particularly in a situation in which the employee is being investigated for some reason (i.e. the snooping was justified and legal because there was already probable cause to suspect serious wrongdoing, particularly some illegal act). The employer can potentially access the emails on its systems even without the employees' passwords, although the most direct way of gaining access (changing a user's password) would probably tip off the employee that they were being investigated.

From the employees' perspective, the content of emails, web sessions and phone calls at work inevitably include private matters that are of no direct concern to the employer. We all have a reasonable expectation of privacy, even while physically at work during working hours - in exactly the same way that society agrees that it is inappropriate to site CCTV surveillance in toilets, even if there are genuine security concerns. In such situations, privacy trumps security. We retain the right to control intimate knowledge of ourselves, forcing others to respect our dignity. 

Ethically, most reasonable people would agree that practices such as keylogging, secretive CCTV or telephone monitoring and bugging are distinctly dubious, rather devious if not wholly unacceptable, since they pry into areas that are considered private and personal. Information is unlikely to be admissible in court unless it has been properly and fairly obtained, for instance under a court order permitting surveillance as a result of prior evidence of illegality. Without controls of this nature, society would be firmly in the oppressive realm of 1984 and Big Brother.  

The employer evidently argued that its policies allowed it to snoop in this manner since employees had been informed that their use of the IT facilities was being monitored. Statements to this effect are commonplace, often repeated in several places such as employment contracts, employment manuals or codes of conduct, security policies, system banner notices, and related security awareness and training materials. The Privacy Commissioner argued that keylogging was not specifically mentioned and went beyond the implied access right in the corporate policy. Furthermore, the employer had rifled through old emails, going beyond what it needed to check for the particular situation at hand.   

Take-away lessons from the case include: 
  • The importance of having explicit policies and making sure employees are fully aware of them (the courts may reject or react badly to information obtained in ways that would generally be considered sneaky, underhand or otherwise unethical);

  • The need to make sure that employees investigating possible wrongdoing also respect the policies and laws of the land, for example gathering evidence in a legitimate, forensically sound manner, knowing when to stop probing, and respecting the privacy of people whose information they obtain;

  • Be careful - be very careful about what you say, type or do at work, and don't be surprised if your information is captured, reviewed and used against you, outside the original context.
The final bullet could be considered an insider threat for employees: most of us trust our employers as much as they trust us, but we all know that trust is a fragile control.