Friday 30 August 2019

Awareness module on hackers and hacking

We've just completed and delivered our security awareness and training module about hackers - a topic we haven't covered specifically for a few years, although most of the awareness modules at least touch on hacking, some more than others.

The hacking risks have changed perceptibly in that time. The rise of state-sponsored (spooky!) hacking is of great concern to those of us who care about critical national infrastructures, human society and world peace. The United Nations is due to meet in a couple of weeks to discuss the possibility of reaching agreement on the rules of cyberwarfare, mirroring those for conventional, nuclear and biological warfare. Let’s hope they manage to align the ~200 countries represented at the UN – a tough task for the diplomats, politicians and cyberwar experts. That aspect gives a distinctly sinister tinge to the awareness module, and yet I hope we’ve succeeded in keeping the materials reasonably light, interesting and engaging as ever, a delicate balance. 

Bug bounties merit a mention this time around as an innovative way to get hackers on-side that seems to be paying off for some organizations. Of course, not all hackers will be enticed by the filthy lucre, but those who are can help organizations address vulnerabilities that might otherwise have been exploited maliciously. Reducing information risks and earning legitimate income has to be A Good Thing, right?

Monday 26 August 2019

Hacking awareness module

September's security awareness module is rapidly falling into place with lots of juicy content for all three streams already:
  • For the general/staff audience, we'll be giving an overview, an outline of the main information risks and information security controls, and promoting ethics; 
  • For professionals, there's a bit more technical content, still without giving too much away (we're trying to encourage people to control against, not commit, hacking!);
  • For management, we've updated the anti-hacking policy template to mention the bug bounty idea;
  • All three streams emphasize the need for detective and corrective controls, supplementing the preventive controls because prevention alone is fallible. 
The sheer variety of risks and controls is overwhelming, so we'll pick out a few topical aspects to discuss, such as using bug bounties as a technique to both encourage (ethical) disclosure and improve information security, a nice combination. 

Hardware hacking will make an appearance too. Over the weekend I've been reading about a hobbyist reconstructing a DEC PDP-11 using modern programmable chips to replicate the original, and last month I was fascinated by a project to restore the lunar lander guidance system - not a replica but an original test system. Amazing stuff!


Sunday 25 August 2019

20 creative ways to use looping PowerPoint intros

Yesterday I promised to share some ideas for looping intros on your PowerPoint presentations, primarily but not exclusively for security awareness seminars and the like. 

Rather than wasting the time between opening the door and starting the session, treat it as a mini awareness opportunity you can exploit.

Here are 20 ways to use your loopy intros:
  1. Show short security awareness videos, maybe ‘talking heads’ clips of people talking about current threats, recent incidents, new policies etc.;
  2. Quotes from attendees at past awareness events, possibly again as video or audio clips or written quotations in their own words;
  3. A slide-show of still photos from previous awareness and training events, preferably showing people having a good time and enjoying a laugh;
  4. Awareness posters: you do have plenty of these, right?;
  5. Clips from your intranet Security Zone - just a few headline items, not whole pages, with the Zone's URL;
  6. Clips from your security policies and procedures – little snippets to interest, intrigue and remind rather than inform the audience;
  7. News headlines relating to recent infosec incidents in your industry or locale, new compliance obligations, surveys published etc.;
  8. Topical warnings – things people generally ought to know about – and tips to make things a bit easier (e.g. patch Tuesday, or the Windows+L key sequence to lock the screen on a Windows computer);
  9. Team photos or individual mugshots of your people, especially anyone new or doing unusual things – "team-building sessions" and hobbies are good for that, think 'human interest story';
  10. A [partial] diary of planned awareness events, courses etc. with brief how-to-book info;
  11. A [partial] list or tasters of the awareness content you’re making available this month – the main items at least;
  12. Short clips from security reviews, audits and management reports - again, less is more so be highly selective;
  13. Amusing content - jokes, speling errrors, fake news, cartoons and kids’ drawings, funny answers to quizzes and tests, spoof versions of original content, dire warnings about the zombie apocalypse, captioned photos etc.;
  14. [Recent] feedback comments and suggestions about the awareness program etc.;
  15. Requests for input and involvement on various [infosec related] projects, initiatives and activities within the organization;
  16. Contact details for people to get more info, raise concerns, report incidents and near misses, suggest further awareness activities etc.;
  17. A few security metrics – maybe just one or two, ideally simple, eye-catching designs, something people will think and chat about as they wait patiently for the start;
  18. Ice breaker suggestions e.g. "If you don't know someone nearby, take this opportunity to say hello, tell them a little about yourself and find out a bit about them, such as why they are here";
  19. Photos of physical security incidents and controls - security fails and protective hardware;
  20. Winners and maybe the results from recent security awareness quizzes, competitions, challenges etc.
What about you: what would you suggest? Comments are open. Feedback and bright ideas are very welcome ...

Friday 23 August 2019

Subversive metrics (surrogation)


"Don't let metrics undermine your business" by Harris and Tayler is a thought-provoking piece in the wonderful Harvard Business Review.

It concerns a tough old problem: metrics themselves becoming the focus of attention within the organization, rather than the objects of measurement and, more importantly still, the business activities whose improvement the metrics are meant to support.

"Every day, across almost every organization, strategy is being hijacked by numbers ... It turns out that the tendency to mentally replace strategy with metrics — called surrogation — is quite pervasive. And it can destroy company value."

According to Wikipedia, Charles Goodhart advanced the idea in 1975, although I suspect people have been manipulating metrics and duping each other pretty much since the dawn of measurement.

My eyes were opened to the issue by Hauser and Katz in "Metrics: you are what you measure!". Krag Brotby and I wrote about that in PRAGMATIC Security Metrics.

Surrogation is surprisingly common in practice. For example, "Thank you for your business. Please give five-star feedback after this transaction" is vaguely coercive, more so when appended with something along the lines of "My bonus depends on high scores" or "Visit our Facebook page to enter our prize draw".

Government officials and politicians do it all the time - it's a job requirement to know how to appear to be doing good things for the nation or society, regardless of reality. [The game cuts both ways: 'the opposition' is naturally expected to criticise the government's performance by challenging the results and/or the measures, perhaps simply casting doubt on their veracity.]

VW was famously caught doing it by having their engine management systems detect the conditions indicating that emissions testing was being performed, enabling the emission controls to ace those tests then disabling them to improve other aspects of performance (such as fuel economy) after the emissions tests were done. Sneaky - and a risky strategy, as VW discovered to its cost and shame. I would be astonished to discover that VW was the only, or indeed the worst, culprit though.
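Stripped to its essence, that kind of cheat is tiny. Here's a toy Python sketch - everything in it is invented for illustration; it shows the shape of the trick, not VW's actual logic:

```python
# Toy 'defeat device': game the metric by detecting measurement conditions.
# All signals, thresholds and modes below are invented for illustration.

def looks_like_emissions_test(speed_kph, steering_angle_deg):
    # Crude heuristic: on a dyno the wheels turn but nobody is steering
    return speed_kph > 0 and steering_angle_deg == 0

def engine_mode(speed_kph, steering_angle_deg):
    if looks_like_emissions_test(speed_kph, steering_angle_deg):
        return "clean"       # ace the test: emission controls fully on
    return "economical"      # optimise what's rewarded the rest of the time

print(engine_mode(50, 0))    # under test  -> clean
print(engine_mode(50, 15))   # on the road -> economical
```

The measured performance and the actual performance part company the moment the test conditions become detectable.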

If a process or system is measured by a metric, and if the metric governs bonuses or other benefits for those performing the process, then they have an incentive to optimize the process/system and/or the metric: both routes lead to reward. Creative if unethical thinkers can often find ways to drive up apparent performance without necessarily improving actual performance, and if the bonuses or benefits are substantial, the pressure to do so can be strong.

One unethical way to optimize a metric is to manipulate the measurement process, for example selectively discounting, blocking or simply ignoring bad values, creating a bias such that the metric no longer truly represents the process being measured - an integrity failure. Comparative metrics such as benchmarks can be optimized by decreasing the actual or apparent (measured) performance of peers or other comparators: that may not align with business objectives and would generally be considered unethical. Subjective metrics can be manipulated by coercion of the people doing the measurement, at any stage of the process (data collection, analysis, reporting/presentation and consumption ... perhaps even way back at the metrics specification and design phase, or during 'refinements' of an existing metric).
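To see how little it takes, here's a crude Python simulation of that first trick - quietly discarding the worst readings as 'outliers'. The numbers are made up, but the bias is real: the process hasn't changed one iota, yet the metric improves:

```python
# Crude demo: selectively ignoring bad values biases a metric upward.
import random

random.seed(1)

# Pretend these are 1,000 performance measurements of an unchanged process
scores = [random.gauss(70, 10) for _ in range(1000)]

honest_mean = sum(scores) / len(scores)

# The gamed version drops the worst 10% of readings as 'outliers'
kept = sorted(scores)[len(scores) // 10:]
gamed_mean = sum(kept) / len(kept)

print(f"honest mean: {honest_mean:.1f}")  # about 70
print(f"gamed mean:  {gamed_mean:.1f}")   # roughly 2 points higher - an integrity failure
```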

The same thing applies, by the way, if those 'bonuses or benefits for good performance' are in fact penalties or disincentives for poor performance. Manipulating the measurement, analysis and reporting activities to conceal actual performance issues may be easier than addressing underlying problems in whatever is being measured, especially if the measurement aspects are poorly designed and lack adequate controls ... 

The risk of someone gaming, subverting or hacking the measurement processes and systems is, of course, an information risk, one that ought to be identified, evaluated and treated just like any other. The classical risk management approach (caricatured in the code sketch after this list) involves:
  • Considering the probability of occurrence (threats exploiting vulnerabilities) and the impacts or consequences of incidents with an obvious emphasis on critical or key metrics, plus any that lead directly to cash or convertible assets, such as the stock options commonly used as performance incentives for executives;
  • Deciding what to do about the risks;
  • Doing it, generally by implementing suitable measurement process controls such as monitoring and managing the processes/systems to pick up on and address any issues in practice, including obvious or more subtle signs of manipulation/gaming/coercion - a step in the risk management process that (in my experience) is woefully neglected when it comes to metrics. Metrics aren't fire-and-forget weapons.
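The caricature promised above: a naive probability-times-impact triage in Python, with invented figures. Real risk analysis is far messier, but the point stands - metrics that lead directly to money deserve a place near the top of the list:

```python
# Naive probability-times-impact triage of metrics-related risks.
# Every description and figure below is invented for illustration.

risks = [
    # (description, probability per year, impact in $)
    ("Exec games stock-option metric", 0.3, 2_000_000),
    ("Team pads monthly KPI returns",  0.5,   100_000),
    ("Survey scores coerced upward",   0.6,    20_000),
]

# Step 1: evaluate exposure; steps 2 and 3: decide and treat, worst first
for desc, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    exposure = prob * impact  # crude annualised loss expectancy
    print(f"{desc:32} exposure ~ ${exposure:>9,.0f}")
```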
That's enough for today. I'll return to explore the management and other controls around metrics at some future point. 


Meanwhile, please vote for this blog.
My bonus depends on it.
 


Not really. It's a joke.
I don't get a bonus.
I'm lucky to get paid.

Thursday 22 August 2019

Policy and compliance

This morning, "PS" asked the ISO27k Forum for advice about reviewing access rights.
"I just got a minor NC for not showing compliance with review of user access rights control. At present, a report containing leavers [is] reviewed by servicedesk to ensure removal of access. This process supplements the leaver process owned by department managers. But [an] auditor has insisted that we should retrieve all access reports and review them. So question is how do demonstrate compliance with this control in your organisation? Appreciate your guidance"
Some respondents duly mentioned typical controls in this area, while some of us spotted an issue with the issue as described. Why did the auditor raise a minor non-compliance? On what basis did the auditor insist that they should ‘retrieve and review all access reports’ - if in fact he/she did?

With a little creative/lateral thinking, it turns out there are several intriguing possibilities in the situation described by PS aside from the obvious:
  • The organization had instituted and mandated a formal policy stating that ‘All access reports will be reviewed’ – a bad move unless they truly expected precisely that to happen. They are committed to doing whatever their policy says. If they don’t do so, it is a valid noncompliance finding;
  • The organization had [perhaps unwisely or inadvertently] instituted a formal policy stating something vaguely similar to ‘all access reports will be reviewed’, which the auditor interpreted to mean just that, whether correctly or incorrectly. This is always a possibility if policies are poorly/vaguely worded, or if the supporting procedures, guidelines, help text, advisories, course notes, management instructions etc. are similarly worded or simply missing (leaving it to workers to interpret things as they see fit … which may not be the same as the auditors, or management, or lawyers and judges if incidents escalate);
  • The organization had a procedure or guideline stating [something similar to] ‘all access reports will be reviewed’, in support of a formal policy on information access or whatever, and again the auditor was right to raise an issue;
  • The organization had a policy or whatever outside the information security arena (e.g. tucked away in an IT or HR policy, procedure, work instruction etc.) stating that ‘All access reports will be reviewed’ ... which in turn raises a bunch of questions about the scope of the Information Security Management System and the audit, plus the organization's policy management practices;
  • An old, deprecated, withdrawn, draft or proposed policy had the words ‘all access reports will be reviewed’, and somehow the auditor got hold of it and (due to flaws in the organization’s policy controls) believed it might be, or could not exclude the possibility that it was, current, valid and applicable in this situation - another valid finding;
  • A stakeholder such as a manager verbally informed the auditor that it was his/her belief or wish that ‘All access reports must be reviewed’, inventing policy on the spot. This kind of thing is more likely to happen if the actual policy is unclear or unwritten, or if individual workers don't know about and understand it. It could also have been a simple error by the manager, or a misunderstanding by the auditor ... a possibility that emphasizes the value of audit evidence and the process of systematically reviewing and confirming anything that ends up in the audit report (plus potentially reportable issues that are not, in fact, reported for various reasons);
  • The organization had formally stated that some or all of the controls summarized in section A.9 of ISO/IEC 27001:2013 were applicable without clarifying the details, which the auditor further [mis?]interpreted to mean that they were committed to ‘retrieve and review all access reports’;
  • For some reason, the auditor asserted that the organization ought to be ‘retrieving and reviewing all access reports’ without any formal basis in fact: he/she [perhaps unintentionally] imagined or misinterpreted a compliance obligation and hence inaccurately identified non-compliance where none existed;
  • The auditor may have sniffed out a genuine information risk, using the minor non-compliance as a mechanism to raise it with management in the hope of getting it addressed, whether by achieving compliance or by amending the control;
  • The auditor may have made the whole thing up, perhaps confusing matters that he/she didn't understand, or under pressure to generate findings in order to justify his/her existence and charges;
  • The auditor simply had a bad day and made a mistake (yes, even auditors are human beings!);
  • PS had a bad day e.g. the minor non-compliance was not actually reported as stated in his question to the forum, but was [mis]interpreted as such. Perhaps someone spuriously injected the word “all” into the finding (Chinese whispers?);
  • PS wasn't actually posing a genuine question, but invented the scenario to fish for more information on the way forum members tackle this issue, or was hoping for answers to a homework assignment;
  • The auditor was trying it on: was this a competent, experienced, qualified, independent, accredited compliance auditor, in fact? Was it someone pretending/claiming to be such - someone in a suit with an assertive manner maybe? Was it just someone with “auditor” scribbled on their business card? Was it a social engineer or fraudster at play?!;
  • It wasn’t a minor non-compliance, after all. Maybe I have misinterpreted “NC” in the original forum question;
  • etc. ...
... Compiling and discussing lists like this makes an excellent exercise in awareness sessions or courses – including auditor training by the way. In this particular case, the sheer variety of possibilities is a warning for information security and other professionals re policies, compliance, auditing etc. In practice, “policy” is a more nebulous, tricky, important and far-reaching concept than implied by the typical dictionary definition of the word. Just consider the myriad implications of "government policy" or speak to a tame lawyer for a glimpse into the complexities.

Wednesday 21 August 2019

Please be inform that our service to you will be terminated in a shortly time

Friends, Romans, customers, lend me your screens. I come to bury NoticeBored, not to praise it.

Sadly, the time has come to draw a lengthy chapter in our lives to a close.

Our monthly security awareness and training subscription service will cease to be early next year. As of April 2020, it will be no more. It will be pushing up the daisies. We'll be nailing it to the perch and sending it off to the choir invisibule.


The final straw and inspiration for the title of this piece was yet another exasperating phisher:


... and the realisation that suckers will inevitably fall for scams as ridiculous as that, no matter what we do. There will always be victims in this world. Some people are simply beyond help ... and so too, it seems, are organizations that evidently don't understand how much they need security awareness and training. "It's OK, we have technology" they say, or "Our IT people run a seminar once a year!" and sure enough the results are plain for all to see. Don't say we didn't warn them.

We tried, believe me we tried to establish a viable market for top-quality professionally written creative awareness and training content. Along the way we've had the pleasure of helping our fabulous customers deliver world-class programs with minimal cost and effort. But in the end we were exhausted by the overwhelming apathy of the majority. 

As we begin the research for our 200th security awareness module, it's time to move on, refocusing our resources and energies on more productive areas - consulting and auditing on information risk and security, ISO27k, security metrics and suchlike.

We're determined that the gigabytes of creative security awareness and training content we've created since 2003 will not end up on some virtual landfill so we'll continue to offer and occasionally update the security policies and other materials through SecAware.com. The regular monthly updates will have to go though as there simply aren't enough hours in the day. "She cannae take it, Cap'n!"

Meanwhile these bloggings will continue. We're still just as passionate as ever about this stuff (including the value of security awareness, despite everything). We've got articles, books and courses to write and deliver, standards to contribute to, global user communities to support, proposals to prepare. 

Must go, things to do.

Tuesday 20 August 2019

Cyber-insurance standard published


We are delighted to announce the birth of another ISO27k standard

ISO/IEC 27102:2019 — Information security management —

Guidelines for cyber-insurance

The newest, shiniest member of the ISO27k family nearly didn't make it into this world. Some in the insurance industry are concerned about this standard muscling-in on their territory. Apparently, no other ISO/IEC standards seek to define categories of insurance, especially one as volatile as this. Despite some pressure not to publish, this standard flew through the drafting process in record time thanks mostly to starting with an excellent ‘donor’ document and a project team tightly focused on producing a standard to support and guide this emerging business market. Well done I say! Blaze that trail! This is what standards are all about.

‘Cyber’ is not yet a clearly-, formally- and explicitly-defined prefix, despite being bandied about willy-nilly, a solid-gold buzzword. It is scattered like confetti throughout but unfortunately not defined in this standard, although some cyber-prefixed conventional common-or-garden information risk and security terms are defined by reference to “cyberspace” which is - of course - the “interconnected digital environment of networks, services, systems, and processes”. Ah, OK then. Got yer.

We each have our own interpretations and understandings of the meaning of cyber, some of which differ markedly. The information risks associated with cyberwarfare and critical national and international infrastructures (such as the Internet), for example, are much more substantial than those associated with the activities of hackers, VXers and script kiddies generally. Even a ‘massive’ privacy breach or ransomware incident is trivial compared to, say, all-out global cyberwar. The range is huge ... and yet people (including ISO/IEC JTC1/SC27) are using 'cyber' without clarifying which part or parts of the range they mean. Worse still, some (even within the profession) evidently don’t appreciate that there are materially different uses of the same term. It’s a recipe for confusion and misunderstanding.

The standard concerns what I would call everyday [cyber] incidents, not the kinds of incident we can expect to see in a cyberwar or state-sponsored full-on balls-out all-knobs-to-eleven cyber attack. I believe [some? most? all?] policies explicitly exclude cyberwarfare ... but defining that may be tricky for all concerned! No doubt the loss adjusters and lawyers will be heavily involved, especially in major claims. At the same time, the insurance industry as a whole is well aware that its business model depends on its integrity and credibility, as well as its ability to pay out on rare but severe events: if clients are dubious about being compensated for losses, why would they pay for insurance? Hopefully this standard provides the basis for mutual understanding and a full and frank discussion between cyber-insurers and their clients leading to contracts (confusingly termed “policies”!) that meet everyone’s needs and expectations.

There are legal and regulatory aspects to this too e.g. compensation for ransomware payments may be legally prohibited in some countries. Competent professional advice is highly recommended, if not essential.

Depending on how the term is (a) defined and (b) interpreted, ‘cyber incidents’ covers a subset of information security incidents. Incidents such as frauds, intellectual property theft and business interruption can also be covered by various types of insurance, and some such as loss of critical people may or may not be insurable. Whether these are included or excluded from cyber-insurance is uncertain and would again depend on the policy wording and interpretation. 

Likewise the standard offers sage advice on the categories or types of costs that may or may not be covered, depending on the policy wording. I heartily recommend breaking out the magnifying glasses and poring over the small-print carefully. Do it during the negotiation and agreement phase prior to signing on the dotted line, or argue it out later in court - your choice.

Personally, I’d like to see the business case for using cyber-insurance as a risk treatment option expanded further (beyond what the standard already covers), laying out the pros and cons, the costs and benefits of so doing, in business terms. It is a classic example of the risk treatment now known as ‘sharing’, formerly ‘transferral’. Maybe I will write a paper on that very topic. Watch this space.

Monday 19 August 2019

Vote for your favorite security blogs

Purely by chance, I discovered today that this blog has been nominated in the "Most entertaining security blog" 2019 category at Security Boulevard.

What a nice surprise! 

Regardless of the eventual outcome of the voting, it's humbling to make it onto the nominations list alongside several excellent blogs that I enjoy reading. Please visit the voting page to see what I mean, browse the nominated blogs and vote for your favorites [you can suggest blogs in addition to those nominated].

Meanwhile, the bloggings will continue ...



PS  If you're on the lookout for infosec blogs worthy of your attention, take a look at this excellent shortlist from VPNmentor.

Extending the CIS security controls

The Center for Internet Security has long provided helpful free advice on information (or cyber) security, including a "prioritized list of 20 best practice security controls" addressing commonplace risks.

In the 'organizational controls' group, best practice control 17 recommends "Implement a security awareness and training program". Sounds good, especially when we read what CIS actually means by that:
"It is tempting to think of cyber defense primarily as a technical challenge, but the actions of people also play a critical part in the success or failure of an enterprise. People fulfill important functions at every stage of system design, implementation, operation, use, and oversight. Examples include: system developers and programmers (who may not understand the opportunity to resolve root cause vulnerabilities early in the system life cycle); IT operations professionals (who may not recognize the security implications of IT artifacts and logs); end users (who may be susceptible to social engineering schemes such as phishing); security analysts (who struggle to keep up with an explosion of new information); and executives and system owners (who struggle to quantify the role that cybersecurity plays in overall operational/mission risk, and have no reasonable way to make relevant investment decisions)."
Recognising that security awareness and training programs should not merely address "end users" (meaning staff or workers in general who use IT) is one of the things that differentiates basic approaches from primitive ones, extending the program into a broader organizational cultural development approach. Well done CIS for pointing that out, although personally I would have offered more explicit guidance rather than emphasizing a "skills gap analysis". For example, having distinguished several audiences, I suggest preparing awareness and training materials on subjects and in formats that suit their respective perspectives and needs. Also, make the awareness and training activities ongoing - close to continuous rather than infrequent or occasional. Those two suggestions, taken together, lift basic security awareness and training programs to the next level: good practice at least, if not best practice.

Anyway, that's just 1 of 20. Similar considerations apply to the other 19 controls: no doubt they can all be embellished, refined or amplified upon by subject matter experts ... which hints at a 21st control: "Actively seek out and consider the advice of experts, ideally experts familiar with your situation", implying the use of consultants or, better still, employing your own information security specialists full-time or part-time as appropriate. 

While I'm at it, I'd like to suggest four further controls that are not immediately obvious among the present 20, all relating to management:
22. Information risk management - comprising a suite of activities, strategies, policies, skills, metrics etc. to identify, evaluate and address risks to information systematically and professionally;
23. Management system - a governance arrangement that envelops all aspects of information risk and security management under a coherent structure, ideally covering information risk, information security, governance, compliance, incident management, business continuity and more (e.g. health and safety, since "Our people are our greatest assets"!). Although I'm thinking of ISO27k here, there are in fact several such frameworks. Depending on the organizational or business context, any one of them might be perfect, or it may be better to draw on elements from several in order to assemble a custom arrangement with the help of those experts I mentioned a moment ago;
24. Information risk and security metrics - by focusing attention on and measuring key factors, metrics enable rational management, facilitate continuous improvement and help align information risk and security with business objectives. The advice might usefully expand on how to identify those key factors and how best to measure them, perhaps in the form of a 'measurement system';
25.  Information risk and security management strategy - I find it remarkable that strategy features so rarely in this field, given its relevance and importance to the organization. I guess this blind-spot stems partly from weaknesses in other areas, such as awareness, management systems and metrics: if management doesn't really understand this stuff, and lacks the tools to take charge and demonstrate leadership, it's left to flounder about on its own with predictable results.  If information risk and security managers, CISOs etc. aren't competent or aware of the value of strategy, maybe it never occurs to them to get into this, especially as standards such as ISO/IEC 27001 barely even hint at it, if at all. 
Maybe I should suggest these 5 additional controls to CIS? Their website doesn't exactly call out for suggestions so you, dear blog reader, are in the privileged position of advance notice. Take as long as you like to think this over and by all means comment below, email me or prompt CIS to get in touch. Let's talk!


PS  Seems I'm not alone in recommending the strategic route. I just spotted this in Ernst & Young's Global Information Security Survey 2018-19:
"More than half of the organizations don’t make the protection of the organization an integral part of their strategy and execution plans ... Cybersecurity needs to be in the DNA of the organization; start by making it an integral part of the business strategy ... Strategic oversight is on the rise. The executive management in 7 of 10 organizations has a comprehensive understanding of cybersecurity or has taken measures to make improvements. This is a huge step forward; put cybersecurity at the heart of corporate strategy ... Cybersecurity must be an ongoing agenda item for all executive and non-executive boards. Look to find ways to encourage the board to be more actively involved in cybersecurity."
Whether information risk and security is an integral part of business strategy, or business strategy is an integral part of information risk and security, is a moot point. Either way, they should be closely aligned, each driving and supporting the other. Strong information risk and security is both a business imperative and a business enabler. 

As to putting this on the board's agenda, we've been doing precisely that since, oooh, let me see, 2003 ...



Sunday 18 August 2019

About information assets ... and liabilities

Information security revolves around reducing unacceptable risks to information, in particular significant or serious risks which generally involve especially valuable, sensitive, critical, vital or irreplaceable information. Those are the ‘information assets’ most worth identifying, risk-assessing and securing. 

That seems straightforward but it is more complicated than it sounds for many reasons e.g.:
  • Information exists in many forms, often simultaneously e.g. computer data and metadata (information about information), knowledge, paperwork, hardware designs, molds, recipes, concepts and ideas, strategies, policies, understandings and agreements, experience and expertise, working practices, contacts, software, data structures, intellectual property (whether legally registered and protected or not) … any of which may need to be secured;
  • Information is generally dynamic, hence there is a timeliness aspect to its value (e.g. breaking vs old news, forthcoming vs published company accounts);
  • Information is usually context-dependent – its meaning and value arise partly from relationships to other supporting or related information (e.g. ’42’ may mean many things, even the product of six times nine);
  • Information is often diffuse and hard to identify, evaluate, contain/pin-down and secure – it’s cloudy and “it wants to be free”;
  • Information that is too tightly secured loses its value, since its value comes from its legitimate exploitation or use, timeliness, expression and communication/sharing i.e. its availability;
  • Some information has negative value (e.g. fake news, subterfuge, malware), which makes integrity important – and that’s another complex concept;
  • Severe threats, vulnerabilities or impacts increase the probability or impact of serious incidents, even if the information itself does not seem particularly special (e.g. a faulty 10 cent rivet can bring down a plane);
  • Some information risks are significant because of impacts primarily to third parties if the information is compromised. This includes valuable information belonging to third parties and entrusted to the organization (such as personal information and proprietary information/trade secrets/intellectual property) and various incidents with environmental or societal impacts (e.g. intelligence info about weapons capabilities). If incidents occur, there may be secondary impacts to the organization (such as noncompliance penalties and breakdowns in business relationships or brands) which can be hard to value (partly it depends on the third parties’ and other external reactions to incidents, partly on the accountability aspect).
There’s a lot there to take into account, and that’s not even an exhaustive list! In practice, though, there are some obvious shortcuts (e.g. a hospital is bound to need to address risks involving its health and business information, and “good practice” controls are applicable to most organizations) and the Keep It Simple, Stupid approach makes an excellent starting point – way better than putting all available resources into risk identification and analysis, leaving too little for risk treatment and management.

Friday 16 August 2019

The brilliance of control objectives

Way back in the 1990s, BS 7799 introduced to the world a brilliant yet deceptively simple concept, the "control objectives".

Control objectives are short, generic statements of the essential purpose or goal of various information security controls. At a high level, information security controls are intended to 'secure information' but what does that actually mean? The control objectives explain.

Here's an example:

7. System access control

 

7.1 Business requirement for system access

Objective: To control access to business information.

Access to computer services and data should be controlled on the basis of business requirements. 
This should take account of policies for information dissemination and entitlement.

At first glance, this control objective is self-evident in that the objective of an access control is obviously to control access but look again: the objective explicitly refers to 'business information' and the following notes emphasize business requirements and policies in this area. In other words, this security control has a business purpose. The reason for controlling access to IT systems is to secure business information for business reasons.

The standard didn't elaborate much on those business reasons, partly because they vary markedly between organizations. A bank, for instance, has different information facing different information risks than, say, a mining company or government department. They all have valuable information facing risks that need to be addressed, and system access control is likely to be applicable to each of them, but in different ways. There are subtleties here that the standard deftly sidestepped, leaving it to intelligent readers to interpret the standard according to their circumstances.

The standard went on to describe controls that would satisfy the objective, forming a strong link between the security measures employed and the business reasons for doing so. I've always treated the controls themselves as examples that illustrate possible approaches, reminders or hints of the kinds of things that might be useful to satisfy the control objectives. There are loads of different ways to secure access to IT systems, and as an experienced infosec pro I don't need a standard to list them all out for me in great detail, especially as those details depend on the situation and the business context (although the Germans have made a valiant attempt to do that!). Furthermore, there is a near-infinite set of possible controls if you consider all the combinations and permutations, parameters and variants, hence it is unrealistic to expect a standard to identify the one best way to do this. There isn't a unique solution to this puzzle.

So instead the succinct control objectives set us thinking about what we're trying to achieve for the business in each of the 30-odd areas covered. Brilliant!

The control objectives in BS 7799:1995 and the BSI/DTI Code of Practice that preceded it were well-written and remain relevant today. Unfortunately, they have been diluted over the years since BS 7799 became ISO/IEC 17799 then ISO/IEC 27002. I am disappointed to learn that the next release of '27002 may drop them altogether, severing a valuable link between business and information security ... but that doesn't mean they need be lost for good. Maybe I'll launch a collaborative project on the ISO27k Forum to elaborate on an updated set of control objectives, or maybe I'll just do it myself in my copious free time [not]. We'll see how it goes.

Sunday 11 August 2019

Loop back security

This is a classical step-wise view of the conventional ISO27k approach to managing information risks:
  1. Identify your information risks;
  2. Assess/analyze them and decide how to treat them (avoid, share, mitigate or accept);
  3. Treat them - apply the chosen forms of risk treatment;
  4. Monitor and manage, reviewing and taking account of changes as necessary.
As an example, most organizations have some form of user registration process to set up network computer accounts (login IDs) for workers. The controls outlined in ISO/IEC 27001 Annex A section 9.2.1, described in more detail in ISO/IEC 27002 section 9.2.1, are part of the suggested means of mitigating the risks associated with inappropriate user access to information and information systems, one of the four forms of risk treatment at step 3 in the risk management process.

Ah but what happened to steps 1 and 2? Oh oh.

Working backwards from step 3, management appear to have decided that the A.9.2.1 controls are required in step 2. So how did they reach that decision? Is there any evidence of that decision ever being taken, other than the fact that the controls are now in place? What drove the decision? What were they hoping to achieve? Were the alternative risk treatment options considered and rejected?

Prior to that, someone presumably identified the information risks in step 1. Great! Let's see them, then. Go ahead, make my day, show me the risks that are addressed by, say, your controls under A.9.2.1. Let's talk them over. What is the organization hoping to achieve with these controls? What would be the predicted business consequences if the controls weren't in place, didn't work as planned, or failed for some reason so the risks eventuated? How drastic would such an incident be to the organization: would it be terminal, very costly and disruptive, somewhat costly and disruptive, or merely annoying and of little real consequence? Relative to other information risks, are these risks high, medium or low? Are these risks a major concern for the organization, a clear problem area that has maybe led to a string of nasty incidents or near-misses in the recent past, or are they just theoretical concerns, perhaps things that might conceivably be a problem at some future point?

Posing such questions is not simply a matter of me being an awkward bugger, a stickler for process, trying to prove an hypothesis that you didn't get to where you are by following the route prescribed by ISO27k, instead assuming that “Of course we need access controls! Everyone needs access controls!”. The real reason is to explore your organization's understanding of its information risks, since that implies a level of care over designing, documenting, implementing, operating and managing the controls, relative to all the other controls and risk treatments in scope of the ISMS - and not just the user registration and access controls: there are loads of controls relating to loads of risks. Do you have a good grasp of them, or have you jumped directly to The Answer without understanding the Question? Show me your workings!
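One way to 'show your workings' is a crude traceability check: map every implemented control back to one or more entries in the risk register and flag the orphans. Here's a minimal Python sketch with a hypothetical register - the Annex A control names are real, but the risks and mappings are invented examples:

```python
# Toy traceability check: does each control trace back to a documented risk?
# The register entries are hypothetical; orphan controls demand explanation.

risk_register = {
    "R1": "Leaver retains system access after departure",
    "R2": "Weak passwords guessed or cracked by an attacker",
}

controls = {
    "A.9.2.1 User registration and de-registration": ["R1"],
    "A.9.4.3 Password management system":            ["R2"],
    "A.9.2.5 Review of user access rights":          [],   # no documented risk!
}

for control, risk_ids in controls.items():
    if risk_ids:
        print(f"{control}: addresses " +
              "; ".join(risk_register[r] for r in risk_ids))
    else:
        print(f"{control}: NO documented risk - The Answer without the Question?")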

There are implications for step 4 as well. If the A.9.2.1 controls are absolutely vital for sound business reasons relating to the associated risks, clearly management needs to be certain they are strong, which implies a lot of care and assurance, close monitoring and urgent action if they look likely to, or do, fail. If they are necessary, somewhat less care may be sufficient, with some assurance. If they are nice to have, then the amount of effort and assurance may be minimal … saving the resources for other more important matters. [Evidently, since they exist, someone has already decided they are not unnecessary!].

The approach I'm waffling on about here is an illustration of a far more general point about rational, systematic or even scientific management. We often do things 'just because'. We follow convention. We adopt 'good practices' and prefer not to 'buck the trend' or 'stand out' ... but that's not always the best approach, and with a bit of thought we may be able to do things better.

Looping back and forth through any sequential process gives us the opportunity to review and revise our understanding, deepening and extending it on each pass. Identifying and challenging our assumptions can lead to valuable insight.

Of course there is a near infinity of loops to loop and it's neither practical nor advisable to attempt to review everything, implying a process to decide which loops to loop, addressing the why, when, how and who questions. I'll tackle that aspect another time.

Saturday 10 August 2019

The formalities of certification

ISO/IEC JTC 1/SC 27 is currently getting itself all hot-under-the-collar about cloud security certificates, certifying compliance with standards that were neither intended nor written for certification purposes.

The ISO27k cloud security standards ISO/IEC 27017 and ISO/IEC 27018 are not written as formally as certifiable standards such as ISO/IEC 27001 ... and yet I gather at least one accredited certification body has been issuing compliance certificates anyway, implying that the auditors must have used their discretion in interpreting the standards and deciding whether the organizations fulfilled the requirements sufficiently well to 'deserve' certificates. The trustworthiness of those certificates, then, depends in part on the competence and judgement of the certification auditors, not just on the precise wording of the standards. In other words, there's an element of subjectivity about it.

The key issue is that, in this context, compliance certification is a formal process designed to ensure that every duly-issued certificate says something meaningful, trustworthy and hence valuable about the certified organization's status - specifically, that it has been independently and competently verified that the organization fulfills all the mandatory requirements of the respective standard. Certification is meant to be an objective test.

That's why certifiable standards such as ISO/IEC 27001 are so precisely worded and narrowly interpreted, for example distinguishing "shall" (= mandatory) from "should" (= discretionary) despite those being different forms of the exact same English verb. Standards that are not intended to be used for certification processes are not so precisely and narrowly worded, allowing more discretion on how they are applied. To avoid confusion (!), they are not supposed to use the word "shall", and there are drafting rules about similar words e.g. "must", "may" and "can" laid down in the formal ISO Directives.

The idea, obviously enough, is to leave as little room for subjective interpretation as possible in the certifiable standards, a tricky objective in a field as diverse and dynamic as information risk and security management, especially given the huge variety of applicable organizations. The context is markedly different to, say, the specification of nuts and bolts.

ISO 9001 was, I think, the first certifiable ISO standard to address this issue: rather than attempt to specify and certify an organization's product quality practices directly, the standard formally specifies an overarching "quality management system", which in turn should ensure that the products are of an appropriate quality. The "certifiable management system" approach has since spread to information security, environmental protection and so on.

It is more of an issue for the accreditation and certification bodies or ISO than for SC 27 but, hey, SC 27 has plenty of passionately-held opinions and, to be fair, it is an integrity issue. I suspect there will be a crack-down on non-management system certifications, or at least a rewording to distance them from the management systems compliance certificates. That in turn will increase pressure to develop certifiable [management system] standards for cloud security and other domains (such as IoT security) where there is clearly market demand.

Thursday 8 August 2019

Loopy intros


Normally in an awareness seminar or training course, we display a static title slide on the screen as people wander into the room, sipping coffee and chatting among themselves then settling down for the show. The title slide tells them they are in the right place at the right time but it's a boring notice.

So, how about instead showing something more interesting to catch their eyes (and ears?) as they arrive?

It's not too hard to set up a looping mini-presentation by following these instructions. Essentially, you add the loopy slides to the start of your conventional slide deck, set them to automatically advance every few seconds and 'repeat until escape'. The 'escape' can be achieved by adding an action button to the loopy slides which, when clicked, launches the main part of the presentation.
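If you have a whole library of decks to retrofit, you could even script the change. Below is a rough Python sketch that pokes the Office Open XML inside a .pptx directly: it stamps an auto-advance transition onto the first few slides and switches on the 'loop until Esc' show property. It's naive string surgery (it assumes, for instance, that each slide's XML contains a </p:clrMapOvr> tag and that ppt/presProps.xml exists), so treat it as a starting point and experiment on a copy of your deck:

```python
# Sketch: stamp an auto-advance transition on the first few slides of a
# .pptx and set the slide show to loop until Esc, by editing the underlying
# Office Open XML. Naive string surgery - run it on a copy and check the
# result in PowerPoint before trusting it.
import re
import zipfile

def make_loopy(src_pptx, dst_pptx, loopy_slides=5, advance_ms=5000):
    with zipfile.ZipFile(src_pptx) as zin, \
         zipfile.ZipFile(dst_pptx, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            m = re.fullmatch(r"ppt/slides/slide(\d+)\.xml", item.filename)
            if m and int(m.group(1)) <= loopy_slides:
                # In the slide schema p:transition follows p:clrMapOvr;
                # advTm is the auto-advance delay in milliseconds
                xml = data.decode("utf-8")
                if "<p:transition" not in xml:
                    xml = xml.replace(
                        "</p:clrMapOvr>",
                        f'</p:clrMapOvr><p:transition advTm="{advance_ms}"/>',
                        1)
                data = xml.encode("utf-8")
            elif item.filename == "ppt/presProps.xml":
                # loop="1" on p:showPr is PowerPoint's 'loop until Esc'
                xml = data.decode("utf-8")
                if "<p:showPr" in xml and "loop=" not in xml:
                    xml = xml.replace("<p:showPr", '<p:showPr loop="1"', 1)
                elif "<p:showPr" not in xml:
                    xml = xml.replace(
                        "</p:presentationPr>",
                        '<p:showPr loop="1"/></p:presentationPr>', 1)
                data = xml.encode("utf-8")
            zout.writestr(item, data)

make_loopy("awareness.pptx", "awareness-loopy.pptx")  # hypothetical filenames
```

That said, the separate-deck approach described next avoids most of this fiddling.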

An alternative approach is to keep the loopy intro separate from the main presentation. Run the loopy presentation as people arrive. When everyone is settled down, terminate it and launch the main presentation instead.

Although this involves a couple more clicks for the changeover, it has some advantages:
  • The main presentation is totally unchanged. You can add a loopy intro to any slide deck without changing those slide decks at all. The slide numbering is unchanged. You can still print the speaker-notes pages as handouts without worrying about or wasting paper on those loopy slides.
  • The loopy intro can be something generic, ideally eye-catching and perhaps amusing, perhaps customized to show the title of the main presentation (or if you can't even be bothered to do that, simply write the title on a sign!).
  • You might like to play some background muzak quietly as people arrive, partly to let people know that the show is on, partly to help things settle. Video clips are generally better with sound too.
  • The loopy intro can be re-used across numerous presentations, becoming part of your awareness and training program's branding. Before long, the audience will learn to recognize the style and content of the intro, as well as the main presentation ... hence it's worth investing a little of your valuable resources into preparing something appropriate and impressive. Please make it professional: remember you have an adult audience, not a bunch of pre-schoolers. Minions are probably not the best role models.
  • The loopy intro can be updated over time - for example, you might use it to promote your upcoming awareness and training activities, planned sessions, topics, current issues, new stuff, team members, policy snippets, major incidents, news headlines or whatever. Proudly display your best security metrics. Display embarrassing photos from past infosec events, or physical security incidents. Get creative! This is also part of your branding, and fits very nicely with the ongoing/rolling approach to security awareness and training that we heartily recommend.
Maybe we should prepare a generic loopy intro for Information Security 101, something customers can adapt to their needs? We're planning to update that module early next year so we have time to get our thinking caps on and try out the idea in our awareness slide decks between now and then.