Friday 21 August 2015

Lean security

Lean manufacturing - closely associated with kaizen, or continuous improvement - is a philosophy or framework comprising a variety of approaches designed to make manufacturing and production systems as efficient and effective as possible, approaches such as:
  • Design-for-life - taking account of the practical realities of production, usage and maintenance when products are designed, rather than locking-in later nightmares through the thoughtless inclusion of elements or features that prove unmanageable;
  • Just-in-time delivery of parts to the production line at the quantity, quality, time and place they are needed (kanban), instead of being stockpiled in a warehouse or parts store, collecting dust, depreciating, adding inertia and costs if product changes are needed;
  • Elimination of waste (muda) - processes are changed to avoid the production of waste, or at the very least waste materials become useful/valuable products, while wasted time and effort is eliminated by making production processes slick with smooth, continuous, even flows at a sensible pace rather than jerky stop-starts;
  • An obsessive, all-encompassing and continuous focus on quality assurance, to the extent that if someone spots an issue anywhere on the production line, the entire line may be stopped in order to fix the root cause rather than simply pressing ahead in the hope that the quality test and repair function (a.k.a. Final Inspection or Quality Control) will bodge things into shape later ... hopefully without the customer noticing latent defects;
  • Most of all, innovation - actively seeking creative ways to bypass/avoid roadblocks, make things better for all concerned, and deliver products that go above and beyond customer expectations, all without blowing the budget.
Service industries and processes/activities more generally can benefit from similar lean approaches ... so how might kaizen be applied to information risk management and security?
  • Design-for-security - products and processes should be designed from the outset to take due account of information security and privacy requirements throughout their life, implying that those requirements need to be elaborated-on, clarified/specified and understood by the designers;
  • Just-in-case - given that preventive security controls cannot be entirely relied-upon, detective and corrective controls are also necessary;
  • Elimination of doubt - identifying, characterizing and understanding the risks to information (even as they evolve and mutate) is key to ensuring that our risk treatments are necessary, appropriate and sufficient, hence high-quality, reliable, up-to-date information about information risk (including, of course, risk and security metrics) is itself an extremely valuable asset, worth investing in;
  • Quality assurance applies directly - information security serves the business needs of the organization, and should be driven by risks of concern to various stakeholders, not just 'because we say so';
  • Innovation also applies directly, as stated above.  It takes creative effort to secure things cost-effectively, without unduly restricting or constraining activities to the extent that value is destroyed rather than secured.

Tuesday 18 August 2015

Persistently painful piss-poor password params & processes

Let me start by acknowledging that passwords are a weak means of authenticating people, for all sorts of reasons. I know passwords suck ... and yet passwords are by far the most common user authentication method in use because of two factors (pun intended):
1) Passwords are conventional, well-understood, commonplace, and the natural default 'no-brain' option. People are used to them and [think they] understand them. Passwords or PIN codes are almost universally built-in to operating systems and many apps, websites etc. 
2) Compared to other methods, passwords are fairly cheap to implement, manage and use. There is no need to invest in biometric sensors, PKI, crypto-tokens or whatever unless you need multifactor authentication ... in which case you probably still need passwords. 
That said, there are many different ways of employing passwords for user authentication, and many design parameters, most of which affect the level of security achieved in practice. Designing and implementing relatively strong password authentication mechanisms is not nearly as trivial as it may appear to the untrained eye.
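To give a flavour of just how many knobs there are, here is a minimal sketch of a parameterized password check in Python. It is purely illustrative - an assumption-laden toy, not a claim about how eBay, PayPal or anyone else actually implements theirs:

```python
import re
from dataclasses import dataclass

@dataclass
class PasswordPolicy:
    # Every field is a design decision that affects real-world security and usability
    min_length: int = 8
    max_length: int = 64
    require_mixed_case: bool = True
    require_digit: bool = True
    require_symbol: bool = False
    case_sensitive: bool = True   # affects how the password is compared at login; often undocumented
    allow_paste: bool = True      # blocking paste penalizes password-manager users

    def validate(self, candidate: str) -> list[str]:
        """Return a list of problems; an empty list means the password is acceptable."""
        problems = []
        if len(candidate) < self.min_length:
            problems.append("too short")
        if len(candidate) > self.max_length:
            problems.append("too long")
        if self.require_mixed_case and candidate in (candidate.lower(), candidate.upper()):
            problems.append("needs both upper and lower case")
        if self.require_digit and not re.search(r"\d", candidate):
            problems.append("needs at least one digit")
        if self.require_symbol and not re.search(r"[^A-Za-z0-9]", candidate):
            problems.append("needs at least one symbol")
        return problems

# e.g. PasswordPolicy(max_length=20).validate("Tr0ub4dor&3") returns []
```

Even this toy version has seven parameters; add lockout thresholds, history checks, expiry rules and reset/recovery processes, and the design space gets big, fast.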

Take for example eBay and PayPal, formerly one company but now split. Given their common origin, one might have thought they would have similar approaches to passwords, and indeed they do. They both suck.

Both sites make it a mission just to find the 'change password' option in the first place. 
On eBay, there is nothing as obvious as a "Change password" menu option or button, oh no, that would be far too easy. After hunting around for a while, I eventually discovered the requisite option tucked away under 'Hi Gary!' --> 'Account settings' --> 'Personal information' --> 'Edit' the password line.
On PayPal, once again there is nothing as obvious as a "Change password" option/button. It is in fact tucked away under 'My account' --> 'Profile' --> 'My personal info' --> 'Change' the password line.
It is almost as if the eBay and PayPal IT teams have conspired to make their processes different. Are there good reasons, I wonder, why we have to 'edit' on eBay but 'change' on PayPal, or why it's 'account settings' on one but 'profile' on the other?  ... Or do you think perhaps nobody even bothered to check what the other was using?

The mission continues once we have found the password change function, since the password change mechanisms also differ: 
eBay first of all requires me to login again (since, I guess, the persistent eBay session may have been taken over by someone else), then to enter my old password, then the new password twice.
PayPal first of all requires me to enter my credit card number (in effect, a second password) then gives me the option to change either my password or my 'security questions', then to enter my old password, then the new password twice. 
Furthermore, the two sites define valid passwords differently.
The rules for valid eBay passwords are summarized in a tooltip ... 

... and separately, in more detail, in a pop-up help window:  

... which is fair enough.  There's plenty of advice there and the restrictions are sensible, although it is not clear whether the password is case-sensitive (I guess it is but it doesn't actually say so).
In contrast, valid PayPal passwords appear to be solely defined by a simple tooltip:
If there is any more detailed information on valid PayPal passwords, it is so well hidden that I can't find it, despite searching within help.
I don't know why PayPal restricts passwords to a maximum of 20 characters (quite long for a classic password yet too short for a decent passphrase) but perhaps it is a good thing since, most annoyingly of all, PayPal requires me to enter my new password, twice, manually: I am prevented from pasting in a very complex password generated by my password manager software. Consequently, I have two lame choices:
  1. I can think up a classic memorable password, type it in twice to the website then a third time to my password manager. This restricts the complexity of my password to one I can think up, remember and type easily, negating a large part of the value of using a password manager to generate long, complex passwords;
  2. I can generate a random complex password in the password manager, type it in twice to the website then paste it into my password manager. In practice, that means either messing around with window positions or writing the password down on paper, since the password-generator popup disappears when I switch to the website to enter it - and there's an even greater chance of me mistyping a complex password at least once out of the two times I have to enter it.
So far, I have only commented on the 'change password' function from my perspective as a user of these two related websites, pointing out arbitrary differences in the menu choices, terminology, process and password parameters, and factors that make it quite hard to use long complex passwords. Curiously, despite being a banking/financial services company, PayPal's password rules restrict the maximum length of a password to just 20 characters whereas eBay allows a maximum of 64, hence a lot more entropy [I can't be bothered to figure out exactly how much more: I'll leave it as an exercise for the attentive reader]. 
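For anyone who doesn't fancy the exercise, here's a rough back-of-the-envelope comparison. It assumes - and this is purely my assumption, since neither site says - that all 94 printable ASCII characters are allowed and that each character is chosen uniformly at random, as a password manager would do:

```python
import math

ALPHABET = 94  # printable ASCII characters - an assumption, not either site's documented character set

for length in (20, 64):
    bits = length * math.log2(ALPHABET)
    print(f"{length}-character password: ~{bits:.0f} bits of entropy")

# Roughly 131 bits at PayPal's 20-character cap versus roughly 420 bits at eBay's 64 -
# nearly 290 bits of headroom given up, for randomly-generated passwords at least.
```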

The 'forgotten password' processes are also different, and I strongly suspect the ways these two sites hash and store the passwords also differ, behind the scenes. Even the way the sites inform users that their passwords have been changed differs. There are still other password security aspects I haven't checked, for instance how many invalid password attempts are allowed, what happens once the limit is reached, what other information from the user's system/browser is used as part of the authentication, and whether either site blocks simple SQL injection attacks ... because I'm not a hacker and it's not my job. 
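For the record, 'hash and store' ought to look something like the following - a minimal sketch using Python's standard library, and emphatically not a claim about what eBay or PayPal actually do behind the scenes:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes, int]:
    """Derive a key from the password; store the salt, key and iteration count - never the password itself."""
    salt = os.urandom(16)  # a unique random salt per user defeats precomputed (rainbow-table) attacks
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, key, iterations

def verify_password(password: str, salt: bytes, key: bytes, iterations: int) -> bool:
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, key)
```

The point is not the particular algorithm (bcrypt, scrypt and Argon2 are all reasonable choices too) but that every site appears to make these decisions independently, invisibly and with varying degrees of competence.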

Aside from the specifics, the more general point is that despite these two sites coming from common stock, there are substantial but seemingly arbitrary differences in practically identical functions. Now consider all the other gazillion websites and apps Out There, each with their own password parameters, processes and constraints. There are no universal methods for users to manage our passwords, and limited consensus even on minimal password requirements (in my experience, few sites today accept passwords shorter than six characters or lacking both letters and digits ... but some do).

Given how commonplace they are, isn't it odd that there are no generally-accepted global standards regarding passwords? Perhaps I should suggest just such a standard to ISO/IEC JTC 1/SC 27 for inclusion in the ISO27k suite - what do you think? It's not hard to envisage a standard giving advice on aspects such as password parameters, password change functions, password storage etc., along with the risk- and business-driven design, testing and implementation of password authentication and related processes. It might even be possible to come up with a limited suite of worked examples demonstrating the main functions in conformance with the standard, providing the consistency so obviously lacking in practice today, perhaps with high/medium/low security variants for the corresponding risk levels. More than enough guts there for an ISO27k standard, I'd say, with further standards covering multifactor authentication, biometrics, PKI-based digital certificate approaches etc.
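To make the high/medium/low idea concrete, the heart of such a standard might be little more than an agreed set of parameter profiles. The values below are entirely illustrative - something for SC 27 to argue over, not a proposal:

```python
# Hypothetical password-policy profiles such a standard might define (illustrative values only)
PASSWORD_PROFILES = {
    "low":    {"min_length": 8,  "max_length": 64,  "attempts_before_lockout": 10, "mfa_required": False},
    "medium": {"min_length": 12, "max_length": 128, "attempts_before_lockout": 5,  "mfa_required": False},
    "high":   {"min_length": 16, "max_length": 256, "attempts_before_lockout": 3,  "mfa_required": True},
}
```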

Meanwhile, think about your own organization. Do you currently have policies, procedures, standards and guidelines laying out consistent methods of user authentication, password management etc. for your systems and apps? Do your systems re-use properly defined, designed, developed and proven parameterized password functions, or indeed security functions as a whole? Do you even consider these issues when selecting commercial apps? Or are you happy to continue compromising your security and make your users' lives a misery (not to mention the long-suffering Helpdesk)?

Wednesday 12 August 2015

Habitual security

Getting our work colleagues to behave more securely is a lot like breaking old habits and replacing them with new ones. 'Habit' implies several things, most notably that there is stasis, inertia or resistance to change - the very essence of habit - hence directed changes inevitably require both time and energy. Furthermore, old habits die hard: they are our well-practiced, comfortable, default behaviors, mostly performed subconsciously, autonomously, easily, without thinking or apparent effort. In contrast, changing to a different behavior requires conscious thought and deliberate effort, at least at first, until the new behavior itself becomes habitual. In between lies the 'change' phase of Kurt Lewin's classic unfreeze-change-refreeze model, the road-hump separating two distinct behaviors or clusters of activities.

Habitual behavior, including addiction, has been studied extensively for decades and is fairly well understood in terms of the psychology and physiology, so what can we learn from medical science and practice? 

Well, operant conditioning indicates that there are essentially two diametrically-opposed methods of dealing with behavioral changes:
  • Enforcement - penalizing, discouraging or otherwise making the old, undesirable behavior unattractive;
  • Reinforcement - rewarding and encouraging the new, desirable behavior until it becomes established.
So, one might condition a smoker to 'give up the filthy habit' by emphasizing either the health risks they face if they continue smoking (enforcement) or the health benefits if they cut down on the ciggies (reinforcement), or indeed both (e.g. use enforcement first to break the old smoking habit and then reinforcement to fix the new non-smoking habit in place for good). I blogged recently about applying this simple but powerful approach to information security awareness. Most organizations routinely penalize noncompliance with policies and procedures, but too few actively reward compliant behaviors. They are missing a trick. What a waste!

Focusing on the enforcement end of the scale, aversion therapy associates undesirable behaviors with actual or threatened pain or discomfort. It might be effective to zap someone's rear end with a bolt of static through their office chair when they do something insecure, but I doubt HR, Health and Safety or Legal would let us! 

Moving to the other end of the scale, many weight-loss/anti-obesity programs use the reinforcing effect of social recognition and peer-group respect. "Hey everyone, look at that! Joanne has lost an amazing 3 kilos since last week! Congratulations Jo - you're this week's Star Performer!" Boosting slimmers' low self-esteem (resulting largely from the incessant enforcement pressure of advertising and celebrity figures) is an important part of the therapy. The combination of metrics and social group pressure is another simple but powerful approach. I've used it from time to time in my audit work, compiling benchmark comparisons between departments or business units then deliberately highlighting and celebrating the most secure ones ... although admittedly it is hard to resist the urge to hammer those at the bottom of the league table! Again, most organizations can do more on this score, for example deliberately using good news stories in awareness and training materials (e.g. ranking departments by the quality and completeness of their business continuity plans, using positive, upbeat quotes from the leaders to illustrate the stories, and openly acknowledging and thanking or rewarding them for their efforts). 

In the same vein, socializing information security is a central feature of our approach, a key technique in establishing a widespread and deep-rooted corporate culture of security. My compelling suggestion is to spread the word about information security far and wide using social interactions, both formal and informal relationships within the corporation. A simple example is to build a network of 'security ambassadors' or 'sec-reps' embedded within and throughout the business, continually drip-feeding them with awareness content and (just as importantly) encouraging them to provide feedback regarding the program, such as new awareness topics or security pinch-points for the business. Another technique is to provide opportunities for social interaction, knowledge transfer and mutual reinforcement between layers of the organization (e.g. by addressing managers and staff) as well as crossing departmental stovepipes (e.g. drawing on specialists in information security, physical security, IT, risk, compliance, HR, quality, health-and-safety, audit, business continuity and other parts of the business to develop and deliver relevant security awareness messages). The concept goes well beyond social media, but why not make a start by using blogs and tweets and all that jazz to disseminate security awareness messages and gather that feedback I mentioned?

Yet another creative security awareness approach involves the use of social engineering - in a positive, white-hat, fully-sanctioned-by-management manner I hasten to add. Self-phishing (conducting mock phishing attacks against our esteemed colleagues) evidently piqued some imaginations but thankfully the fad has peaked-out. Thinking back to operant conditioning, there are two distinct approaches: either enforce the phishing-related policies and procedures by punishing those employees who are phoolish enough to phall for your phishing lures, or reinforce them by rewarding employees who resist the urge to open the attachments or follow dubious links, instead reporting them as security incidents. Which approach did you use? Score bonus points if you answered both, and go to the top of the class if you (a) used metrics from the mock-phishing assault to celebrate and reward the most phishing-aware departments, and (b) recognized that the security awareness value of social engineering methods goes way beyond mere phishing.

It's not hard to achieve effective security awareness if you actually care and think enough about it to be creative and energetic ... Sadly, however, the approaches I have just outlined remain uncommon in practice, largely I guess because we security awareness pros have our bad habits too! We rest on our laurels, resisting change, resenting the additional effort needed to figure out something different and put it into practice. There are a million and one excuses: I've heard loads and, I admit, I've used several myself. But hey, when I look back at where we were in security awareness back in the dark old days of the 1990s, some of us (at least) have broken the mold and come a long, long way. At least we no longer expect to 'do' security awareness through the dreaded annual-lecture-to-the-troops, scattering a few childish cartoon posters about the place, or duping our colleagues with self-phishing ... do we*?


* That was the royal 'we' of course. I meant you: what are you doing to make your security awareness program a roaring success? What bad habits are you willing to kick in order to make progress? Think on: the clock is ticking and there's no time like the present. Carpe diem. Every round-the-world journey starts with a step. Some clouds have a silver lining. Do not run with scissors.

Wednesday 5 August 2015

Lessons from the aviation industry

The ICAO Global Aviation Safety Plan 2014-16 (GASP) is an extremely impressive document on so many levels.

First off, how about this for an entrance (first paragraph):
"Ensuring safety remains paramount
Continuous improvement in global aviation safety is fundamental to ensuring air transport continues to play a major role in driving sustainable economic and social development around the world. For an industry that directly and indirectly supports the employment of 56.6 million people, contributes over $2 trillion to global gross domestic product (GDP), and carries over 2.5 billion passengers and $5.3 trillion worth of cargo annually, safety must be aviation’s first and overriding priority."
Given everything that's at stake here (and just in case it escaped your notice, those are BIG numbers), "safety must be aviation's first and overriding priority".  No ifs or buts, there's absolute clarity of vision for the entire industry.  

In other words the global aviation industry has both determined and aligned itself on safety as the overriding strategic objective, eclipsing or setting aside lesser objectives such as mere commercial success, profitability, compliance, efficiency, eco-friendliness or whatever. Obviously those are important in their own right, but there is no doubt about the industry's top priority being safety.

Gulp!

Secondly, GASP extends more than a decade ahead, to 2027, setting out substantive near-, mid- and long-term strategic objectives building on "previous targets to reduce the number of fatal accidents and fatalities, to significantly decrease the global and regional accident rates and to improve cooperation between regional groups and safety oversight organizations". The current version of GASP is no isolated example, but the product of a consistent strategic planning process.

Thirdly, the strategic objectives aren't hand-waving puffery. There are firm dates. The supporting text expands on the details with further explicit goals, particularly in the near term e.g. "Implementation of ICAO Standards and Recommended Practices (SARPs) related to the State’s approval, authorization, certification and licensing processes" and "four distinct Safety Performance Indicators" (standardization, collaboration, resources and safety information exchange) within a meaningful framework.

Here's a further illustration of the depth of thinking (page 6):
"Safety Information Exchange
The exchange of safety information is a fundamental part of the global plan and is required to achieve its objectives, enabling the detection of emerging safety issues and facilitating effective and timely action. To encourage and support the exchange of safety information, it is imperative to implement safeguards against the improper use of safety information. To this end, ICAO is cooperating with States and industry to develop provisions to ensure appropriate protection of safety information."
Sharing safety information within the industry is a key part of the strategy. Given the sensitivity of the information, ensuring that it is protected/secured is a prerequisite. Obvious, if you think about it. They have.

Lastly, as if that's not enough, the document itself is 80 pages long, professionally produced, a nice piece of marketing with plenty of meat behind the gloss concerning the global industry's explicit focus on safety.  

OK ... now consider all the above in relation to literally ANY OTHER GLOBAL INDUSTRY facing vaguely similar safety and security concerns: defense, power generation (including nuclear) and supply, mining, automobiles, shipping, farming, finance, IT ... When was the last time you saw anything even approximating that clarity and singularity of vision and alignment, on a worldwide scale?

'Nuff said.  Think on.

Tuesday 4 August 2015

Smoke-n-mirrors IBM style

I've just been reading the IBM 2015 Cyber Security Intelligence Index, trying to figure out their 'materials and methods', i.e. the basic parameters of the survey such as the size and nature of the population. All I can find are some oblique references in the first paragraph:
"IBM Managed Security Services continuously monitors billions of events per year, as reported by more than 8,000 client devices in over 100 countries. This report is based on data IBM collected between 1 January 2014 and 31 December 2014 in the course of monitoring client security devices as well as data derived from responding to and performing analysis on cyber attack incidents. Because our client profiles can differ significantly across industries and company size, we have normalized the data for this report to describe an average client organization as having between 1,000 and 5,000 employees, with approximately 500 security devices deployed within its network."
Reading between the lines, it appears that this is a report gleaned primarily from 'more than 8,000 client [network security?] devices' belonging to an unknown number of organizations around the world who are customers of IBM Managed Security Services ... which IBM has described as:
"24/7/365 monitoring and management of security technologies you house in your environment. IBM provides a single management console and view of your entire security infrastructure, allowing you to mix and match by device type, vendor and service level to meet your individual business needs while drastically reducing your security costs, simplifying security management and accelerating your speed to protection."
But, before you delve into the actual report, read that final sentence of the first paragraph again: they have 'normalized the data' (whatever that means) to an 'average client organization' with about 500 security devices ... so given the total of 8,000 devices, and on the assumption that 'average' means 'mean', it appears the survey covers just 16 organizations whose network security devices are managed by IBM. Oh boy oh boy. No wonder they are so reluctant to tell us about the analytical methods!  

The data are from 2014, yet the report was published in July 2015. Given the minuscule sample, I wonder why it took them seven months to do the analysis and reporting? Crafting the words to gloss over the glaring flaws, perhaps?

The remainder of the report is pretty humdrum - some superficially interesting graphics and four 'case studies' (three of which - that's 75% or a 'vast majority', IBM - are not actual cases as such but fictional accounts based on the collective experiences of an unknown number of clients). There's nothing particularly unusual or noteworthy in the report, despite the hyperbole (2014 was hardly "The year the Internet fell apart", IBM). The trends and other statistical information are worthless in scientific terms.

Remember this cynical blog piece whenever you see the report quoted. Better still, read the report for yourself and make up your own mind.

Saturday 1 August 2015

Are you cyber-prepped?


That deliberately dark, foreboding, dramatic image is just one of the awareness posters in August's brand new awareness module on cybersecurity. Its purpose is to catch people's eyes, intrigue them and make them think. What is a "cyber-prepper"? What are they doing? Are they friend or foe - something to be wary of, or to emulate?

The cyber-prepping concept came to me as the awareness materials were being written. While "preppers" are busy digging their underground bunkers, stockpiling water, food and small-arms to survive The Big One, the reality is that modern warfare is likely to be markedly different to the classic nuclear/biological/chemical holocaust scenarios they typically fear. Few cyberweapons make a bang or a flash, let alone a mushroom cloud - in fact, stealth is arguably their most valuable characteristic. If the enemy doesn't even know it has been infiltrated and attacked until it's already too late to respond, its IT systems, comms and networks having been silently destroyed, its critical infrastructure crippled by a covert virus or electromagnetic pulses without a shot being fired, there is an obvious strategic advantage in making the first strike devastating and decisive.

At the same time, the reason preppers hoard their arsenals of guns and bullets is notionally to be able to defend their dwindling stocks against marauding mobs in the aftermath of serious battle, a terrifying situation that may equally transpire after a serious cyber-battle. Whereas being physically overrun by foreign soldiers is just a possibility, violent social disorder is highly likely in both cases as a natural consequence of the infrastructural devastation. The theory goes that we humans will revert to our primitive animal instincts, hunting ruthlessly for shelter and supplies. On a disturbingly frequent basis, TV coverage of big-city riots graphically demonstrates the threat: it's not hard to imagine that kind of thing happening on a much wider scale, especially if the emergency services have been 'neutralized' or stretched paper-thin by huge, competing demands. It won't be the gadget shops being looted so much as supermarkets and camping suppliers!

I'm still idly mulling the idea over. Aside from food, water, weapons and so on, what kinds of things should cyber-preppers stockpile? What would be the most valuable supplies, tools or whatever for survivors of a cyberwar? Ironically enough, ICT and other electronic equipment that remains operational may be at a premium in the aftermath, although it's uncertain whether it would be of much use compared to, say, garden forks and vegetable seeds.