Thursday 20 June 2019

Conspicuous consumption

A short article set me thinking this morning about the interplay between rights, compliance, personal freedoms, ethics and culture. The article is about tax authorities picking up on conspicuous consumption by citizens, suggesting that they are 'living beyond their means' - a classic fraud indicator.

Although the article specifically concerns disclosures through social media, that's just one of many ways of voluntarily disclosing information. Furthermore, some disclosures are involuntary: the authorities can demand information from and about us, for example, and we inadvertently or incidentally disclose information about ourselves in the course of living our lives.

The tax authorities have to address tax fraud, of course, using relevant information legitimately obtained from anywhere ... but in this situation the information was not disclosed for that specific purpose. Tax fraudsters would happily prohibit the authorities from accessing and using the information if they could. So is it ethical for the authorities to use it? Hmmm, tricky! 

I would argue that, in choosing to consume so conspicuously and publicly, tax fraudsters have made the information available to third parties and, by implication, third parties are free to use it legitimately. Preventing tax fraud is a legitimate purpose, so that's that.

Even if tax fraudsters explicitly prohibited the authorities from using the information disclosed through social media, I believe the laws about investigating crime take precedence (although I'm not a lawyer). Small print on the fraudsters' Facebook pages or blogs along the lines of "The tax authorities are expressly prohibited from using this information" would also be a bit of a giveaway!

Tuesday 18 June 2019

Craftsmanship

Currently I'm getting things ready for the next consultancy gig. Figuratively speaking, having cleared a space on the workbench, I'm stocking up on raw materials and selecting tools for my toolbox. Literally, that's simply a new directory for the assignment, a few potentially useful templates and public information from the client, and a bunch of methods and techniques in mind.

My favourite tools are pre-loved and well-honed. They are familiar, comfortable and trustworthy. Some of them (such as the ISO27k standards) are off-the-shelf products. Others are either homebrewed or customized for particular purposes. They all have their advantages and disadvantages and, like any craftsman, I much prefer to use the right tool for the job, hence some specialist items are rarely used but invaluable for specific tasks. I make the effort to check and maintain my tools, from time to time investing in new ones or "improving" (well OK, refurbishing and adapting) old ones. Very rarely is a tool discarded, except for those that are plain worn out and are replaced, often by something shinier. My workshop is bulging, placing a premium on small/simple/multipurpose tools.

In many areas, ‘pragmatic’ approaches are the only tools available. It’s down to me to apply them to the tasks at hand with skill and passion, although it's hard to keep in mind their limitations. There's a tendency to press on regardless, leading to uncertain results and occasional accidents. I hate bodging things and yet that's an inevitable part of practicing and improving. 

A valuable routine at the end of any assignment is to look back and draw out the learning points. Those templates I mentioned are an example: having drafted, say, a standard form for describing security metrics, I use and gradually refine it on successive metrics until it stabilises. Every field on the form has a purpose. The structure, layout and sequence make sense and work ... so it's worth turning into a template, an MS Word template in fact. The next time I'm describing a security metric, I can simply grab the template and start filling it in, avoiding the time and effort of starting from scratch.
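Purely to illustrate the idea, the essence of such a template could be sketched as a simple data record. This is a hypothetical stand-in with invented field names, not my actual MS Word template:

```python
from dataclasses import dataclass

# Hypothetical sketch of the kind of fields a security-metric
# description template might carry - illustrative only.
@dataclass
class SecurityMetric:
    name: str         # short, meaningful title for the metric
    purpose: str      # the question the metric is meant to answer
    formula: str      # how the value is derived from source data
    data_source: str  # where the measurements come from
    frequency: str    # how often it is measured and reported
    target: str       # the goal or threshold indicating success
    audience: str     # who receives and acts on the metric
    owner: str        # who is accountable for the metric itself

# Filling in the 'form' then becomes a matter of instantiating it:
example = SecurityMetric(
    name="Policy compliance rate",
    purpose="Indicate how well staff comply with security policies",
    formula="Compliant checks / total checks, as a percentage",
    data_source="Internal compliance reviews",
    frequency="Quarterly",
    target="At least 95% for two consecutive quarters",
    audience="Information security management committee",
    owner="CISO",
)
print(example.name, "-", example.target)
```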

Alternatively, if the client already has a structured way of describing security metrics, I can probably use that instead, perhaps proposing changes based on my experience but more likely adapting my approach to suit the client. Who knows, I might even learn some new tricks along the way, leading to an updated template. That's what I mean by investing in the tools of my trade, best practices you could say - well good-enough practices anyway. The quest for perfection is never ending.

Friday 14 June 2019

The compliance burden

I've spent an enjoyable day exploring, thinking and writing about the enormous breadth of "compliance", our security awareness topic for July. You might be forgiven for thinking that compliance in this context is just about hacking and privacy laws, but no, oh no: important though those are, there's much more to it.

We have thus far compiled a list of 15 categories of law relevant in some way to information security - not just 15 actual laws but 15 types. There's a similar range of regulations, plus contracts and agreements. No wonder corporate lawyers, compliance teams and management as a whole complain about the compliance burden on businesses!

On top of that there are myriad internal corporate rules in the form of infosec-related policies, procedures and all that jazz - again, quite a variety when you think about it. 

And we all have self-imposed rules of behavior - our habits and conventions, codes of ethics, belief systems, rules for what's right and what's wrong.

Figuring out and talking about the different kinds of 'rules' will make an interesting awareness challenge for July. We've come up with more than 60 so far, and we're not done yet!

Aside from that, I've also been exploring the conceptual angle: what are rules for anyway? Why do we have rules in general, let alone infosec, privacy and all those other rules I've alluded to? Why is compliance necessary? What's wrong with noncompliance? That led me along a tangent into creativity - again relevant to information, if marginal to information security.

Wednesday 12 June 2019

Lack of control is not a vulnerability [LONG]

Another of those apparently simple but profound questions came up on ISO27k Forum this morning. Juan from Peru said:
"Well, I am pretty confused about how to correctly describe a vulnerability. I´ve seen many sheets/registers (even a topic in this group) where a vulnerability is described as a "LACK OF A CONTROL" For example if I say that a VIRUS is a threat agent, my vulnerability would be a "LACK OF A VACCINE FOR THAT SPECIFIC VIRUS", this is quite redundant, I think but in a certain way, has sense. Now, I´ve also read that a Vulnerability CAN NOT be described as "LACK OF A CONTROL" because a Vulnerability is AN INHERENT WEAKNESS OF THE ASSET, which I think has more sense than the vaccine´s example. But, there is a problem, I could not find any official literature (I mean and ISO 27k) that supports that definition. I searched in ISO 27000 and 27005, and those Standards just say that a vulnerability is a weakness that can be exploit (Nothing about INHERENT). Also in ISO 27005 I found many examples of vulnerabilities (In the annex, I think) and they are described as "LACK OF CONTROLS". This is really confusing for me."
“Inherent weakness” is my succinct working definition of vulnerability. I use the word “inherent” to refer to issues within or integral to the system of concern (not necessarily an "asset"), in contrast to threats which (again, as I use the term in practice) are outside the system and impinge upon it. Furthermore, by ‘system’ I mean a coherent collection of things acting in concert - not just, say, an IT system (the computer hardware, firmware and software plus the data) but also the associated processes involved in using and administering it, and the users and administrators, the managers overseeing it, its owners/stakeholders … and so forth. I use 'system' in as broad a sense as “information security management system".

So why does my working definition of 'vulnerability' differ from that in ISO/IEC 27000:2018? Why don't I just use the formal definition? Good point ... but my reasoning is complicated to explain. Bear with me.

I'll start with the formalities. Among other terms, ISO/IEC 27000:2018 defines:
  • Control as “measure that is modifying risk (3.61)” plus 2 notes: "controls include any process (3.54), policy (3.53), device, practice, or other actions which modify risk (3.61)" [and] "it is possible that controls not always exert the intended or assumed modifying effect." [sic*]

  • Vulnerability as “weakness of an asset or control (3.14) that can be exploited by one or more threats (3.74)”. That definition is a little ambiguous* but I understand it to mean that a weakness of a control would constitute a vulnerability if it might be exploited;

  • Threat as "potential cause of an unwanted incident, which can result in harm to a system or organization (3.50)"; and

  • Risk as "effect of uncertainty on objectives (3.49)" plus 6 notes: "an effect is a deviation from the expected — positive or negative; uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence, or likelihood; risk is often characterized by reference to potential “events” (as defined in ISO Guide 73:2009, 3.5.1.3) and “consequences” (as defined in ISO Guide 73:2009, 3.6.1.3), or a combination of these; risk is often expressed in terms of a combination of the consequences of an event (including changes in circumstances) and the associated “likelihood” (as defined in ISO Guide 73:2009, 3.6.1.1) of occurrence; in the context of information security management systems, information security risks can be expressed as effect of uncertainty on information security objectives; information security risk is associated with the potential that threats will exploit vulnerabilities of an information asset or group of information assets and thereby cause harm to an organization."
Unfortunately, there are numerous issues and ambiguities in those four definitions: 
  • Only a few of those words are explicitly defined in the standard - the ones that are used in a particular way within the context of the ISO27k standards ('terms of art', you could say). I believe we are meant to refer to the Oxford Dictionary definitions for the rest, although this is not actually stated anywhere in the published standard: it is merely a convention, possibly stemming from ISO's directives to the drafting committees;

  • As formally defined, control and risk are distinct concepts: controls ‘modify’ risks and hence (I would argue) are not part of them. You could consider them to be optional extras, add-ons that you may or may not want to use – at least that’s how I think of them, although the definition doesn't actually say so;

  • The definition of vulnerability is ambiguously worded in the first clause. Does it mean "a weakness of an asset, or a weakness of a control", or "a weakness of an asset, or a control"? I believe it is the former but that's just my interpretation - and there should ideally be no room for interpretation in a formal definition;

  • Threat seems quite straightforwardly defined (aside from referring to "system or organization", implying that they are distinct ... but one could argue that an organization is one type of system - a social system, often with a legal basis; 'system' is undefined). However, as Juan noted, even ISO/IEC 27005 misinterprets the term. A lack of control may modify the risk relative to its presence, but it does not actually cause an incident: it is simply an omission. Incidents are caused by circumstances or acts - commissions, not omissions;

  • The definition of risk is particularly awkward and unsatisfactory. It is the product of a committee of people holding differing views, hence its extraordinary length. The definition part is so vague as to be almost meaningless, while the notes compound matters by mashing up several separate concepts. What a mess! It might not be so bad if 'risk' were peripheral to ISO27k, but quite the opposite is true: risk (or rather, as I would prefer to put it, "information risk") is absolutely central.
I personally do not consider lack of a control to be a vulnerability for several reasons:

  1. It makes it easier to consider and evaluate the underlying risks in a situation, deliberately ignoring any current or proposed controls during the analysis. It helps us distinguish risks from controls (as per the definition), and simplifies the risk analysis.

  2. Some existing or proposed controls may be unnecessary and unhelpful (e.g. periodic password changes), but we are less likely to consider that if we always take them for granted and assume they are present and working as intended in our risk analyses. Periodic password changes, for example, are a costly control incorporated into many systems for years without good reason other than habit or convention. In any given system, there may well be other controls that serve little to no purpose, perhaps even some that are counterproductive (they actually weaken rather than strengthen the system, perhaps opening up new avenues for attack or failure: antivirus software and automated software updates are two possible examples of this).

  3. The list of potential controls is unbounded. Aside from the large variety of possible types of controls, each control has many variants, and there is a huge (possibly infinite) variety of possible combinations and sequences of controls. So how are you going to determine which controls to add to, or exclude from, the list of missing/ineffective/inadequate controls? Answering that question presupposes that you understand the risks, in other words it is a circular or self-referential issue. 

  4. It allows/encourages us to figure out which controls we require according to the risks we have identified and evaluated. It also suggests a natural priority or ranking of the controls, since those controls mitigating the most significant risks are clearly important (‘key’) controls. This has substantial implications that are not widely considered at present, e.g. resilience, effectiveness and assurance are likely to be strong requirements for key controls.

  5. Controls are not 100% reliable – in other words, there are risks associated with the controls themselves, as implied by the second note to the '27000 definition of control. This again complicates the risk analysis and (in my experience) is usually ignored … but that’s a mistake, particularly in the case of key controls. The possibility of key controls failing to operate as intended or as required means significant risks might be insufficiently mitigated in practice. Now you might say that 'control reliability' therefore ought to be part of the risk analysis, in addition perhaps to 'control suitability', 'control value' and maybe other considerations. Personally, I prefer to address this separately in the risk management process, particularly in the phase following the decisions about how to treat the identified and evaluated risks, plus in the ongoing management, measurement and assurance activities once the controls are in use. There's a toy sketch of this whole approach below.
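To make that concrete, here's the promised toy sketch in Python - entirely my own illustration, with invented scales and numbers rather than anything from the standards. It evaluates the underlying risk from threat, vulnerability and impact while deliberately ignoring controls (point 1), ranks the risks to identify where key controls belong (point 4), and then treats controls as separate risk-modifiers with their own reliability (point 5):

```python
# Toy model of the approach argued above - illustrative only,
# using made-up 0-to-1 scales, not an ISO27k method.

def underlying_risk(threat: float, vulnerability: float, impact: float) -> float:
    """Evaluate the risk inherent in the system, deliberately
    ignoring any current or proposed controls (point 1)."""
    return threat * vulnerability * impact

def residual_risk(risk: float, effect: float, reliability: float) -> float:
    """Controls modify risk rather than being part of it; since
    controls are not 100% reliable (point 5), only the reliable
    fraction of their effect is credited."""
    return risk * (1 - effect * reliability)

# Rank the underlying risks first, then pick controls for the
# biggest ones - those become the 'key' controls (point 4):
risks = {
    "malware infection": underlying_risk(0.9, 0.7, 0.8),
    "storage media theft": underlying_risk(0.3, 0.5, 0.6),
}
for name, r in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: underlying {r:.2f}, "
          f"residual {residual_risk(r, effect=0.8, reliability=0.9):.2f}")
```

The value of the separation is that the controls, their effects and their reliability can be varied or removed without redoing the underlying risk analysis.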
Another way to look at this is that a missing, weak, inadequate, failing or inappropriate control exposes or fails to compensate for a vulnerability ... but 'exposure' and 'compensating control' are ambiguous and confusing concepts too. Maybe I'll come back to that another day.

So, that's it for today. Sorry to be so anal about the words and definitions, but Juan is certainly not the only confused soul - even ISO/IEC JTC 1/SC 27 has trouble with this stuff! 

* The second note is missing a word. It should be "it is possible that controls do not always exert the intended or assumed modifying effect.", I think.

Tuesday 11 June 2019

Resistance is futile

Generally speaking, there's no point in complaining about applicable laws and regulations: like it or not, compliance is obligatory. That's not the end of the matter though: for starters, there are questions about precisely what the obligations are, their applicability, and the potential consequences of noncompliance.

Those questions are all the more interesting in respect of other kinds of rules, especially those that are not written formally by highly trained lawyers following strict drafting practices finely honed over hundreds of years - corporate security policies for instance. 

Positioning compliance as a business or risk management issue puts a different spin on things. One particularly worthwhile approach is to elaborate on and explore the objectives behind the wording of the rules. Why is it considered necessary to protect someone's privacy, for example? What might happen if personal information were unrestricted - a commodity that could be freely shared or traded? Such questions are trickier to answer than they might appear.

Consider the real-world effects of "major" privacy breaches such as the Target incident in 2013. Aside from the public outcry or outrage, the enforcement penalties and various other costs relating to the clean-up, the organizations concerned are mostly still operating ... but are they the same, or have the incidents changed things? And what, if any, are the effects on the rest of us?

One difference stems directly from the media coverage of major incidents: headline news raises awareness of the related issues among the general population and management, right up to executive level. But once the furor has died down, awareness tends to subside gradually back towards pre-incident levels - maybe settling a little higher due to residual memories and reminders such as this very piece! 'A little more awareness', then, is the net long-term effect of incidents on those not directly affected, and perhaps also on the individual and corporate victims who were involved.

'A little more awareness' is the least we can reasonably expect to achieve through security awareness and training activities - hopefully more than just 'a little', of course! Repeatedly topping up awareness levels is the approach we have taken for decades: regular refreshers work for us, in the same way that each subsequent privacy breach reminds us, yet again, that there are compliance obligations in that area. It's a ratchet or cumulative effect, each episode raising the level by some amount.
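For what it's worth, here's a back-of-the-envelope simulation of that ratchet effect in Python. The parameters are invented purely for illustration, not measured data: awareness decays toward a baseline between events, while each incident or refresher gives a temporary boost and nudges the baseline up slightly:

```python
# Toy simulation of the awareness ratchet described above -
# invented parameters, purely illustrative.

def simulate(months: int, events: set[int], baseline: float = 0.2,
             decay: float = 0.8, boost: float = 0.5,
             ratchet: float = 0.02) -> list[float]:
    level, history = baseline, []
    for month in range(months):
        if month in events:
            level = min(1.0, level + boost)  # incident or refresher
            baseline += ratchet              # residual memories persist
        # between events, awareness subsides back towards the baseline
        level = baseline + (level - baseline) * decay
        history.append(round(level, 2))
    return history

# Four awareness 'events' over two years: each spike fades, but the
# floor it fades back to creeps upwards - the ratchet.
print(simulate(24, events={3, 9, 15, 21}))
```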

Monday 10 June 2019

Playing by the rules

Compliance is our security awareness and training topic for July.  As usual, we'll be taking a deliberately broad perspective, finding angles of interest to staff, management and professionals.

'Playing by the rules' hints at how we're planning to address the staff awareness stream. People who enjoy quizzes, competitions, games and sports of all sorts appreciate that the rules are there to level the playing field, keeping things reasonably fair to all concerned. That leads on to the concept of rule-bending and -breaking, i.e. cheating to gain an unfair advantage over other players. 'The rules of the road' suggest another possible avenue to explore around safety and security, picking up on this month's awareness topic (physical infosec).

The management stream will also dip into rule-making, the process of defining rules, plus enforcement and reinforcement of the rules. In the information security context, the rules include laws, regulations, policies, directives, instructions, contractual terms and more, some very narrowly scoped and others much more general in nature. We might even take a tangent into actively exploiting lax rules for business advantage, raising ethical and risk questions worth pondering.

The pro stream will get into technological rules such as cybersecurity standards, tech protocols and firewall rulesets ...

... at least, that's our cunning plan at this point. Part of the fun of providing our security awareness and training service is to get creative with the messages, picking up on topical issues. We're on the lookout for interesting compliance-related news during June - incidents, changes, and different approaches to the age-old problems in this area. 
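For a flavour of those 'technological rules' destined for the pro stream, here's a minimal, hypothetical firewall ruleset sketched in Python - the addresses, ports and policy are invented for illustration, not a recommendation:

```python
# Hypothetical first-match-wins firewall ruleset expressed as data -
# invented addresses and policy, purely to illustrate rules-as-technology.
from ipaddress import ip_address, ip_network

RULES = [
    # (source network, destination port or None for any, action)
    (ip_network("10.0.0.0/8"), 22, "allow"),    # admin SSH from the LAN
    (ip_network("0.0.0.0/0"), 443, "allow"),    # HTTPS from anywhere
    (ip_network("0.0.0.0/0"), None, "deny"),    # default: deny everything else
]

def evaluate(src: str, port: int) -> str:
    """Apply the ruleset in order: the first matching rule decides,
    and the final catch-all sets the default stance."""
    for network, rule_port, action in RULES:
        if ip_address(src) in network and rule_port in (None, port):
            return action
    return "deny"  # belt and braces if no rule matches

print(evaluate("10.1.2.3", 22))     # allow - internal admin
print(evaluate("203.0.113.9", 22))  # deny - SSH from the internet
```

Like laws and policies, a ruleset only 'works' if the rules are well-ordered, current and actually enforced.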

Tuesday 4 June 2019

Physical information security

June’s security awareness and training topic is an interesting blend of traditional physical/site security and cybersecurity, with just a touch of health and safety to spice things up.
Hot on the heels of May’s module about working off-site, this month we’re exploring the risks and controls applicable to physical information assets such as:
  • ICT devices e.g. servers, laptops, phones, network cables, microwave dishes;
  • Hardware security devices and controls e.g. keys, staff passes, cryptographic key-fobs, walls, fences/barriers, turnstiles, locks/padlocks, smoke detectors, fire and flood alarms …;
  • Information storage media e.g. hard drives, USB sticks, tapes, papers;
  • Information communication and display devices e.g. screens, management panels, annunciators, modems;
  • People – particularly “knowledge workers” employed for their intellectual capacity, expertise and skills, implying a business need to ensure their health and safety.
Physically securing information assets is just as important as the logical security controls (cybersecurity) normally considered. Adversaries with physical access to ICT devices may be able to defeat/reset the logical security controls, power down or damage them, substitute or simply make off with them. 
Card skimmers fitted to bank ATMs are an example of a physical threat to information - namely the card data and PIN codes used to authenticate card holders.
Crime investigators sometimes employ physical techniques to obtain forensic evidence from devices and media recovered from the scenes of crime, so it’s not all bad news!
The physical harm that can impact information includes:
  • Theft or loss by insiders, intruders/burglars, thieves, industrial spies, vandals and saboteurs;
  • Tailgating or physical intrusion, allowing intruders to observe, copy, steal, replace or damage information assets (both physical and digital) on-site;
  • Damage, criminal or accidental, such as fires, floods, storms, lightning, static electricity, voltage surges and power cuts, electromagnetic disturbances and radio interference, mold;
  • Mechanical/electronic failure or obsolescence, ICT equipment prematurely becoming unreliable, intermittent or failing completely, especially if it has been stored or used under adverse physical conditions such as high temperatures, vibration or corrosive atmospheres;
  • Subversive hardware e.g. covert surveillance using microphones and cameras built into many IT devices, installation of bugs and wireless network taps;
  • Interception, compromise and failure of both wired and wireless networks;
  • Compromise of technological security controls e.g. resetting devices to factory defaults, replacing firmware, hacking the hardware, disabling security controls, and copying/cloning/counterfeiting inadequately secured authentication devices (such as credit cards and passports);
  • Illness, accident, death, coercion, bribery and corruption etc. of workers, including injuries and stress, depression and other potentially devastating forms of mental ill-health.
Physically securing information involves: physical access controls; fire, smoke and flood protection; redundant/spare equipment, supplies, communications routes and people; UPSs, generators, spare batteries; lightning conductors, surge arrestors etc.; health and safety plus welfare arrangements for workers; laws, policies, agreements and other rules and regulations; physical security-related processes and activities ... including security awareness of course!
Don't bother contacting us if your people are all fully up-to-speed on the physical side of information security as outlined here. If your management already understands the need and willingly invests in physical security controls, good on you. If you and your professional colleagues actively encourage and enable the implementation of physical controls, excellent! Otherwise, we're keen to help.