
Sunday 11 February 2024

Innovative approaches to ISO/IEC 27001 implementation


This week I've read an interesting, inspiring piece by Robin Long exploring the costs, benefits, approaches and strategic options for implementing ISO27k.  

I like Robin's idea of trying things out and banking some 'security wins' before committing to a full implementation. A full-scope ISMS is a major commitment requiring strong understanding and support from management, requiring a high degree of trust in the team and CISO/ISM/project leader as well as the [planned] ISMS. Demonstrating and celebrating security wins is a good way to build trust and sustain it, once the ISMS is running.

I'm also intrigued by the possibilities of unconventional, creative, less boring approaches to implementation project planning - for example, instead of plodding sequentially through ISO/IEC 27001, clause-by-clause, think about:

Tuesday 20 June 2023

Security control categories and attributes



On LinkedIn this morning, Morten Ingvard asked:

"As part of updating and reshaping some parts of our information security management system (ISMS), I'm not convinced that the new categorization of controls in ISO/IEC 27002:2022 (Organizational, people, physical and technical), is the best suit for our organization to rationally identify relevant controls for their work. I understand there is an increased focus on the use of attribution - so controls can be selected based on different perspectives, but I want to have a "default view" that the organization can read and understand, and currently, I'm strongly considering sticking with a categorization structure looking more like the older 2013-version in ISO/IEC 27001."

Here's my response to Morten:

"The categories are primarily a convenient way to sequence the controls in the standard. It was the 'default view' selected by ISO/IEC JTC1/SC27.

Saturday 28 January 2023

2 more topic-specific information security policies

We have just completed and released another two information security policy templates through SecAware.com.

The latest additions are security policy templates on:

The full SecAware policy suite now comprises 83 templates.

They were all researched and written by me, to a consistently high standard. They are designed to mesh together, complementing each other. I maintain them, updating individual policies as and when required and reviewing the entire suite every year or so.

We provide them as MS Word documents that you can easily customise. Get in touch for additional policies, procedures or guidelines, or if you need assistance to adapt them to your corporate style. 

Buy them individually at $20 each, or take the whole lot for $399, saving over $1200.

Monday 17 October 2022

Security awareness month


Since October is cybersecurity awareness month in the USA, we've seized the opportunity to update SecAware.com with additional information on our security awareness material. 

SecAware's information security awareness modules explore a deliberately wide variety of individual topics in some depth:

Friday 22 July 2022

Security in software development


Prompted by some valuable customer feedback earlier this week, I've been thinking about how best to update the SecAware policy template on software/systems development. The customer is apparently seeking guidance on integrating infosec into the development process, which raises the question "Which development process?". These days, we're spoilt for choice, with quite a variety of methods and approaches.

Reducing the problem to its fundamentals, there is a desire to end up with software/systems that are 'adequately secure', meaning no unacceptable information risks remain. That implies having systematically identified and evaluated the information risks at some earlier point, and treated them appropriately - but how?
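One conventional way to make 'no unacceptable information risks remain' operational is a simple likelihood-times-impact scoring against an acceptance threshold. Here's a minimal Python sketch of that idea; the scales, threshold and example risks are purely illustrative, not drawn from any standard or from the SecAware templates:

```python
# Illustrative risk evaluation: score each identified risk as
# likelihood x impact and compare against an acceptance threshold.
# The 1-5 scales and the threshold of 6 are arbitrary examples.

ACCEPTANCE_THRESHOLD = 6  # scores above this need treatment

risks = [
    # (description, likelihood 1-5, impact 1-5)
    ("SQL injection in public web form", 4, 4),
    ("Laptop theft exposing cached data", 2, 3),
    ("Typo in low-value report", 3, 1),
]

def evaluate(risks, threshold=ACCEPTANCE_THRESHOLD):
    """Partition risks into those needing treatment and those acceptable."""
    treat, accept = [], []
    for desc, likelihood, impact in risks:
        score = likelihood * impact
        (treat if score > threshold else accept).append((desc, score))
    return treat, accept

treat, accept = evaluate(risks)
for desc, score in treat:
    print(f"TREAT  ({score:2d}): {desc}")
for desc, score in accept:
    print(f"ACCEPT ({score:2d}): {desc}")
```

The point is not the arithmetic but the sequencing: however crude the scoring, it forces the risks to be identified and evaluated before (or alongside) the development, rather than afterwards.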

The traditional waterfall development method works sequentially from business analysis and requirements definition, through design and development, to testing and release - often many months later. Systems security ought to be an integral part of the requirements up-front, and I appreciate from experience just how hard it is to retro-fit security into a waterfall project that has been running for more than a few days or weeks without security involvement.

A significant issue with waterfall is that things can change substantially in the course of development: the organisation hopefully ends up with the system it originally planned, but that may no longer be the system it needs. If the planned security controls turn out to be inadequate in practice, too bad: the next release or version may be months or years away, if ever (assuming the same waterfall approach is used for maintenance, which is not necessarily so*). The quality of the security specification (which drives the security design, development and testing) depends on the identification and evaluation of information risks in advance, predicting threats, vulnerabilities and impacts likely to be of concern at the point of delivery some time hence.

In contrast, lean, agile or rapid application development methods cycle through smaller iterations more quickly, presenting more opportunities to update security ... but also more chances to break security due to the hectic pace of change. A key problem is to keep everyone focused on security throughout the process, ensuring that whatever else is going on, sufficient attention is paid to the security aspects. Rapid decision-making is part of the challenge here. It's not just the method that needs to be agile!

DevOps and scrum approaches use feedback from users on each mini-release to inform the ongoing development. Hopefully security is part of that feedback loop so that it improves incrementally at the same time, but 'hopefully' is a massive clue: if users and managers are not sufficiently security-aware to push for improvements or resist degradation, and if the development team is busy on other aspects, security can just as readily degrade incrementally as other changes take priority. 

Another issue is that security testing has to suit short process cycles, with a tendency towards quick/superficial tests and less opportunity for the thorough, in-depth testing needed to dig out troublesome little security issues lurking deep within. Personally, I would be very uncomfortable developing a cryptographic application too quickly, or for that matter anything business- or safety-critical.

So, there are some common factors there, regardless of the method:

  • The chosen development methods have risk and security implications;
  • Various dynamics are challenging, on top of the usual security concerns over complexity, and changes present both risks and opportunities;
  • Security is just one of several competing priorities, hence there is a need for sufficient, suitable resources to keep it moving along at the right pace;
  • Progress is critically reliant on the security awareness and capabilities of those involved i.e. the users, designers, developers, testers, project/team leaders and managers.
* Just one of those dynamics is that the processes may change in the course of development: a system initially developed and released through a classical waterfall project may be maintained by something resembling the rapid, iterative approaches. The cycle speed for iterations is likely to slow down as the system matures or resources are tight, or conversely speed up to react to an increased need for change from the business or technology. 
 
So, overall, it makes sense for a software/system development security policy to cover:
  • An engineering mindset, prioritising the work according to the organisation's information risks ('risk-first development'?), with a willingness to settle for 'adequate' (meaning fit-for-purpose) security rather than striving in vain for perfection;
  • Flexibility of approach - supporting/enabling whatever processes are in use at the time, integrating security with other aspects and collaborating with colleagues where possible;
  • Sufficient resourcing for the information risk and security tasks, justified according to their anticipated value (with implications for metrics, monitoring and reporting);
  • Monitoring and dynamically responding to changes, being driven by or driving priorities according to circumstances, seizing opportunities to improve security and resisting retrograde moves in order to ratchet-up security towards adequacy. 
The policy could get into general areas such as accountability (e.g. various process checkpoints with management authorisation/approval), and delve deeper into security architecture (to reduce design flaws), secure coding (to reduce bugs) and security testing (to find the remaining flaws and bugs), plus security functions (such as backups and user admin) ... but rather than bloat the SecAware policy template, we choose to leave the details to other policies and procedures. Customers are welcome to modify/supplement the template as they wish. 
 
Whether that suits the market remains to be seen. What do you think? Do your security policies cover software/system development? If so, do they at least address the issues I've noted? If not, $20 is a wise investment ...

Saturday 2 July 2022

Standards development - a tough, risky business

News emerged during June of likely further delays to the publication of the third edition of ISO/IEC 27001, this time due to the need to re-align the main body clauses with ISO's revised management systems template (specifically, the 2022 edition of the ISO/IEC Directives, Part 1 "Consolidated ISO Supplement — Procedure for the technical work — Procedures specific to ISO", Annex SL "Harmonized approach for management system standards").
 
Although we already have considerable discretion over which information security controls are being managed within our ISO/IEC 27001 Information Security Management Systems today, an unfortunate side-effect of standardisation, harmonisation, adoption, accreditation and certification is substantial inertia in the system as a whole. It’s a significant issue for our field where the threats, vulnerabilities, impacts and controls are constantly shifting and often moving rapidly ahead of us … but to be honest it’s equally problematic for other emerging and fast-moving fields. Infosec is hardly special in this regard. Just look at what's happening in microelectronics, IT, telecomms, robotics, environmental protection and globalisation generally for examples.

One possible route out of the tar-pit we've unfortunately slid into is to develop forward-thinking ‘future-proof’ standards and release them sooner, before things mature, but that’s a risky approach given uncertainties ahead. It would not be good for ill-conceived/premature standards to drive markets and users in inappropriate directions. It’s also tough for such a large, ponderous, conservative committee as ISO/IEC JTC 1/SC 27. However, the smart city privacy standard ISO/IEC TS 27570 is a shining beacon of light, with promising signs for the developing security standards on Artificial Intelligence and big data security too. I wish I could say the same of 'cyber', cloud and IoT security but (IMNSHO) the committee is struggling to keep pace with these fields, despite some fabulous inputs and proactive support from members plus the likes of the Cloud Security Alliance and NIST. 
 
The floggings will continue until morale improves.

Another tar-pit escape plan involves speeding-up the standards development process, perhaps also the promotion, accreditation and certification processes that follow each standard's publication – but again there are risks in moving ahead too fast, compromising the quality and value of the standards, damaging ISO/IEC’s established brands. 
 
SC 27 management appears to be working on just such an approach right now with ISO/IEC 27028, putting more time and effort into the informal drafting stages ahead of the formalities of Committee Drafts and voting. The idea is to smooth and speed up the formalities by drafting better standards in the first place, and gaining the committee’s implicit support/consensus ahead of explicit approval. Likewise with recent moves to separate subject matter expert involvement in the creative preliminary stages from national body involvement in the latter stages. We’ll see how that turns out!
 
Personally, I yearn for modern, collaborative, cloud-based methods, particularly for the early informal stages of each standard. I'm sure we could get a lot more done, relatively quickly and painlessly, by working together online as a group in near-realtime ahead of the necessary ISO formalities around proofreading and approval. At the very least, more productive social dialogue between the experts would help get us all to the same chapter, if not on the same page. Committee meetings, whether virtual or in person, are costly and ponderous compared to, say, Google Groups or Microsoft Teams. I see these as complementary not alternatives, not either-or but both-and.

Yet another tar-mitigation option would be for SC 27 leadership to clarify the strategy and (re)align the committee members accordingly, increasing their understanding and support for whatever it takes to optimise the processes. However, ‘leadership’, ‘strategy’, 'alignment’ and 'optimisation' are all difficult in the ISO context, given the importance of due process, ample consideration, cultural awareness, diplomacy and global consensus. Management has cats to herd, guide, persuade and convince rather than unilaterally pushing through or blocking changes (as happens occasionally). Governance is challenging at the best of times: in such a large, international, busy, largely voluntary and diverse organisation, it’s tough-as. 

Looking back, despite all the challenges and all that tar, SC 27 has been remarkably successful, generating and managing a sizeable portfolio of well-respected ISO27k, privacy and other infosec standards. Sure, it could have done better in some areas, but overall the world is in better shape today than it would otherwise have been without SC 27.

Meanwhile, aside from shouting a few choice phrases from the touchline, is there anything we can do to help?
  • As we gain knowledge and expertise, we can give something back. Volunteer as subject matter experts, actively engaging with SC 27 through our national standards bodies to help develop better, more forward-thinking standards, for example by contributing to, reviewing and commenting on draft standards, especially in the early, more creative stages of the drafting process;
  • Propose, draft and offer possible new ISO27k standards, as is currently happening with, say, security control attributes and professional services;
  • Collaborate more, pulling together as a supportive community to develop, understand, adopt and extract value from the standards;
  • Think beyond mere certification. Be more creative and innovative, treating the standards as foundational platforms, suggested good practices worth considering, adapting, adopting and building upon rather than targets, hurdles or constraints on progress (as is happening right now with ISO/IEC 27001);
  • Be open to novel approaches, such as integrated management systems, peer-group and collaborative working, and cherry-picking whichever approaches hold the most promise for achieving our organisations' business objectives (e.g. supplementing or completely replacing the current '27001 Annex A controls with a more contemporary mix);
  • Be more tolerant and considerate of each other, including ISO/IEC, SC 27's management and editorial teams, the accreditation and certification bodies, auditors plus our work and professional colleagues. Remember, we're all on the same side here!

Friday 4 September 2020

Standardising ISMS data/application program interfaces



We've been chatting on the ISO27k Forum lately about using various IT systems to support ISO27k ISMSs. This morning, in response to someone saying that a particular tool which had been recommended did not work for them, Simon Day made the point that "Each organisation trying to implement an ISMS will find its own way based on their requirements."

Having surveyed the market for ISMS products recently, I followed-up with my usual blurb about organisations having different information risks and business situations, hence their requirements in this area are bound to differ, and in fact vary dynamically (in part because organisations mature as they gain experience with their ISMS: their needs change). The need for flexibility is why the ISO27k standards are so vague (essentially: figure out your own requirements by identifying and evaluating your information risks using the defined governance structure - the ISMS itself), rather than explicitly demanding particular security controls (as happens with PCI-DSS). ISO27k is designed to apply to any organisation. 

That thought sparked a creative idea that I've been contemplating ever since: wouldn’t it be wonderful if there was a standard for the data formats allowing us to migrate easily between IT systems supporting ISO27k ISMSs?

I’m idly thinking about a standard file format with which to specify information risks (threats, vulnerabilities, impacts and probabilities), controls, policies, procedures, metrics, objectives etc. - maybe an XML schema with specified field names and (where applicable) enumerated lists of values.
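To make the idea a little more concrete, here's a sketch of what one risk record in such an interchange format might look like, built and re-parsed with Python's standard xml.etree module. The element names, attributes and enumerated values are entirely hypothetical - the point of the proposed standard would be to agree them formally:

```python
import xml.etree.ElementTree as ET

# Hypothetical interchange record for one information risk; every element
# name and enumerated value here is illustrative, not a real schema.
risk = ET.Element("risk", id="R-042")
ET.SubElement(risk, "threat").text = "phishing"
ET.SubElement(risk, "vulnerability").text = "weak user awareness"
ET.SubElement(risk, "impact").text = "credential compromise"
ET.SubElement(risk, "probability").text = "likely"  # e.g. rare|possible|likely|almost-certain
ET.SubElement(risk, "treatment").text = "mitigate"  # e.g. mitigate|avoid|share|accept

xml_text = ET.tostring(risk, encoding="unicode")
print(xml_text)

# A receiving ISMS tool could parse the same record straight back:
parsed = ET.fromstring(xml_text)
assert parsed.get("id") == "R-042"
assert parsed.findtext("probability") == "likely"
```

With field names and value lists pinned down in an agreed schema, exporting from one ISMS tool and importing into another becomes a mechanical transformation rather than a manual re-keying exercise.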

Aside from migrating between ISMS IT support systems and services, standard data formats would facilitate data sharing between application systems, services or sub-functions (e.g. for vulnerability management, incident management and information risk management), and between departments or even organisations (e.g. insurance companies, auditors and advisors and their clients and partners).

Perhaps we should develop an outline specification and propose such a standard to ISO/IEC JTC 1/SC 27. A New Work Item Proposal would need sufficient details to be clear about what is being proposed and why, expanding on the requirement. Researching the topic and generating a basic draft as a starting point would ease the process of developing an ISO27k standard, so that's something else to add to my to-do list. I wonder if there are already XML schemas in this general area?

Saturday 16 May 2020

Adjusting to the new normal


"The U.S. Government has reported that the following vulnerabilities are being routinely exploited by sophisticated foreign cyber actors in 2020:
  • Malicious cyber actors are increasingly targeting unpatched Virtual Private Network vulnerabilities.
    • An arbitrary code execution vulnerability in Citrix VPN appliances, known as CVE-2019-19781, has been detected in exploits in the wild.
    • An arbitrary file reading vulnerability in Pulse Secure VPN servers, known as CVE-2019-11510, continues to be an attractive target for malicious actors.

  • March 2020 brought an abrupt shift to work-from-home that necessitated, for many organizations, rapid deployment of cloud collaboration services, such as Microsoft Office 365 (O365). Malicious cyber actors are targeting organizations whose hasty deployment of Microsoft O365 may have led to oversights in security configurations and vulnerable to attack.

  • Cybersecurity weaknesses—such as poor employee education on social engineering attacks and a lack of system recovery and contingency plans—have continued to make organizations susceptible to ransomware attacks in 2020."

Well whadyaknow?

  • The US government blames "sophisticated foreign cyber actors" - the usual xenophobic, somewhat paranoid and conspiratorial stance towards those filthy rotten foreigners, desperately attacking little old US of A (today's version of reds under beds I guess);

  • "Unpatched" VPNs and insecurely configured Office 365 services are being targeted, implicitly blaming customers for failing to patch and configure the software correctly, blithely ignoring the fact that it was US-based software vendors behind the systems that required patching and configuring to address exploitable vulnerabilities;

  • And finally, uneducated users (the great unwashed) receive a further gratuitous poke, along with the lack of planning on system recovery and contingency ... which is whose fault, exactly? Hmmm, I'll pick up that point another day.
Accountability and QA issues aside, the sudden en masse adoption of Working From Home has undoubtedly changed corporate information risks for all organizations - even those of us who were already routinely WFH, since we depend on ISPs, CSPs, telecomms companies, electricity suppliers, professional services companies and other third parties who are, now, WFH. COVID is another obvious, dramatic change with further implications for information and other risks (e.g. mental and physical health; fragile self-sufficiency; global economic shock; political fallout ...), and it's far from over yet.

WFH is now A Thing (not in the IoT sense!) for some of us anyway, although it's not possible or suitable for everyone. As COVID gradually fades from the headlines, some WFH workers will drift back to regular office work, others may continue WFH and a good proportion will do a bit of both (hybrid working as it's now known) according to circumstances and workloads. If COVID returns with a vengeance, or when the next pandemic turns up, we'll presumably be WFH en masse once more. So, have you reviewed and updated your corporate risk profile lately? Have the incident management, business continuity, IT, HR, business relationship management and other controls, processes and arrangements coped brilliantly with the present situation, or are adjustments called for? Do you even know how things are going out there, the workforce now scattered, hunkered down in their caves?

Wednesday 27 March 2019

Break-in news


Kaspersky has released information on Operation ShadowHammer, a malware/APT infection targeting ASUS systems with particular MAC addresses on their network adapters.

According to a Motherboard report:
"The issue highlights the growing threat from so-called supply-chain attacks, where malicious software or components get installed on systems as they’re manufactured or assembled, or afterward via trusted vendor channels. Last year the US launched a supply chain task force to examine the issue after a number of supply-chain attacks were uncovered in recent years. Although most attention on supply-chain attacks focuses on the potential for malicious implants to be added to hardware or software during manufacturing, vendor software updates are an ideal way for attackers to deliver malware to systems after they’re sold, because customers trust vendor updates, especially if they’re signed with a vendor’s legitimate digital certificate."
And that, in a nutshell, is a concern with, say, the Microsoft Windows 10 patches, pushed out at Microsoft's whim to Windows 10 users who haven't figured out yet how to prevent or at least defer them until they have been checked out. Same thing with Android and other operating system and application auto-updates: aside from the inconvenience of downloading and installing the patches, and the aggravation caused by the need to patch up such shoddy software in the first place, the security issue is insidious ... and yet there is also a substantial risk of not patching at all, or of delaying patches.

Rock, meet hard place.

As we know from Stuxnet, bank ATM and other infections, even supposedly offline/isolated computer systems and private networks are not totally immune to online attacks. As for anything permanently connected to the Internet (IoT things, for instance ... plus virtually all other ICT devices), well that's like someone grabbing onto the exposed end of a high voltage power cable in the hope that it has been permanently disconnected.

The ultimate solution is to improve the quality of software substantially, in particular minimizing exploitable vulnerabilities, which implies simplifying and formalizing the design and coding. Unfortunately, that goal has eluded us so far and, to be frank, seems unattainable in practice. Therefore we're stuck with this mess of our own creation. Automation is wonderful but we can't trust the robots.

Monday 21 January 2019

Computer errors

Whereas "computer error" implies that the computer has made a mistake, that is hardly ever true. In reality, almost always it is us - the humans - who are mistaken:
  • Flaws are fundamental mistakes in the specification and design of systems such as 'the Internet' (a massive, distributed information system with seemingly no end of security and other flaws!). The specifiers and architects are in the frame, plus the people who hired them, directed them and accepted their work. Systems that are not sufficiently resilient for their intended purposes are an example of this: the issue is not that the computers fail to perform, but that they were designed to fail due to mistakes in the requirements specification;
  • Bugs are coding mistakes e.g. the Pentium FDIV bug affecting firmware deep within the chip. Fingers point towards the software developers but again various others are implicated;
  • Config and management errors are mistakes in the configuration and management of a system e.g. disabling controls such as antivirus, backups and firewalls, or neglecting to patch systems to fix known issues;
  • Typos are mistakes in the data entered by users, including those who program and administer the systems;
  • Further errors are associated with the use of computers, computer data and outputs e.g. misinterpreting reports, inappropriately disclosing, releasing or allowing access to sensitive data, misusing computers that are unsuited for the particular purposes, and failing to control IT changes;
  • 'Deliberate errors' include fraud e.g. submitting duplicate or false invoices, expenses claims, timesheets etc. using accidents, confusion or ineptitude as an excuse.
Set against that broad backdrop, do computers as such ever make mistakes? Here are some possible examples of true "computer errors":
  • Physical phenomena such as noise on communications links and power supplies frequently cause errors, the vast majority of which are automatically controlled (e.g. detected using Cyclic Redundancy Checks and corrected by retransmission) ... but some slip through due to limitations in the controls. These could also be categorized as physical incidents and inherent limitations of information theory, while limited controls are, again, largely the result of human errors;
  • Just like people, computers are subject to rounding errors, and the mathematical principles that underpin statistics apply equally to computers, calculators and people. Fully half of all computers make more than the median number of errors!;
  • Artificial intelligence systems can be misled by available information. They are almost as vulnerable to learning inappropriate rules and drawing false conclusions as we humans are. It could be argued that these are not even mistakes, however, since there are complex but mechanistic relationships between their inputs and outputs;
  • Computers are almost as vulnerable as us to errors in ill-defined areas such as language and subjectivity in general - but again it could be argued that these aren't even errors. Personally, I think people are wrong to use SMS/TXT shortcuts and homonyms in email, and by implication email systems are wrong in neither expanding nor correcting them for me. I no U may nt accpt tht.
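The first two of those "true computer errors" are easy to demonstrate: a CRC flags a single flipped bit in transit (detection only - correction normally means asking for a retransmission), and binary floating point quietly rounds decimal fractions. A quick Python illustration:

```python
import zlib

# CRC example: flipping a single bit changes the checksum, so the
# corruption is detected (CRC-32 catches all single-bit errors).
message = b"transfer $100 to account 12345"
crc_sent = zlib.crc32(message)

corrupted = bytearray(message)
corrupted[9] ^= 0x01  # flip one bit 'in transit'
crc_received = zlib.crc32(bytes(corrupted))

assert crc_received != crc_sent  # mismatch reveals the corruption

# Rounding example: 0.1 has no exact binary representation, so sums drift.
assert 0.1 + 0.2 != 0.3
print(f"0.1 + 0.2 = {0.1 + 0.2:.17f}")  # prints 0.1 + 0.2 = 0.30000000000000004
```

Neither is really the computer 'making a mistake', of course: both behaviours follow inevitably from the physics of noisy channels and the mathematics of binary fractions.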

Sunday 23 September 2018

What is the best development method for security?

In answer to someone on CISSPforum asking for advice about the impact of various software development lifecycles, methods or (as if we need another ology) methodologies, I asserted that the SDLC method affects the way or the manner in which infosec is achieved (spec'd, built, confirmed, delivered, used, managed, monitored, maintained ...) more than how effective it ends up being.

There are pros and cons to all the methods - different strengths and weaknesses, different purposes, opportunities, risks and constraints. Software or systems development involves a load of trade-offs and compromises. For example, if information risks absolutely must be minimized, formal methods are a good way to achieve that ... at huge cost in terms of both the investment of money and time for the development, and the functionality and rigidity of the developed system. However, an even better way to minimize the risk is to avoid using software, sidestepping the whole issue!

In most circumstances, I would argue that other factors are more significant in relation to the information security achieved in the developed system than the choice of development method e.g.:
  • Governance, management and compliance arrangements, especially around the extended dev team and the key stakeholders;
  • Strategies (e.g. business drivers for information security), priorities, resources available (including maturity, skills and competence on infosec matters - not just $$$);
  • Policies and standards, especially good security practices embedding sound principles such as:
    • Don't bolt it on - build security in;
    • Be information risk-driven;
    • Address CIA and other security, privacy, compliance and related matters;
    • Secure the whole system, not just the software;
    • Focus on important security requirements and controls, taking additional care, increasing both strength and assurance over those;
    • Layered security, in anticipation of layers being breached: make it harder and more costly for adversaries and incidents to occur;
    • Trust but verify;
    • Accept that perfect or absolute security is literally unachievable, and security maturity is more quest than goal, hence provide for resilience, recovery and contingency as well as incident management and continuous improvement.
  • Well-defined critical decision points, sometimes known as hurdles, stage gates etc., plus the associated criteria and assurance requirements, plus the associated management processes to measure progress, handle issues, re-prioritize ...;
  • Corporate culture, attitudes towards information risk, infosec, cybersec, IT, compliance etc., among management, the intended system users, IT and the dev team, plus awareness and training;
  • Documentation: more than simply red tape, good quality documentation on information risk and security indicates a mature, considered, rational approach, facilitates wider involvement plus review and authorization, captures good practices and helps those not closely involved with the project appreciate what is being developed, how and why;
  • Systems thinking: alongside people, hardware, networks and other system elements and dynamics, the software is just part of the bigger thing being developed;
  • Team working: high-performance teamwork can achieve more, better security and higher quality products with the same resources, especially if the extended team includes a wide range of experts, users, administrators, managers and more;
  • Suitable metrics, such that 'more, better security and higher quality products' is more than just a hand-waving notion, becoming criteria, measures and drivers;
  • Risk and change management practices and attitudes, maturity, support, drive etc.;
  • Most of all, the deep understanding that underpins sound requirements specs, planning and execution, and leadership: infosec is an integral part not a bolt-on, ideally to the point that it is taken for granted by all concerned that It Will Be Done Properly.
I would love an opportunity to try out dev-races, where two or more development teams set out in parallel to build and deliver whatever it is, in friendly competition with each other. They will all have the same fixed specs for some aspects of the delivery, but latitude to innovate in other respects e.g. methods/approaches. At the appropriate points during the project, the 'losers' admit defeat and either depart or join the 'winners', pushing through the final, toughest activities together on the home straight. At first glance, it sounds like it will double the costs ... but that's only for the early stages, and has the advantages of improving both motivation and the end product. Personally, from both the security and business perspectives, I see more investment in the early stages as an opportunity more than a cost!

    Wednesday 2 May 2018

    Taking a poke at ADDIE

    ADDIE is an acronym from the Instructional Systems Design field, standing for:
    • Analysis - examine the situation, determine the learning objectives;
    • Design - design an approach to satisfy the learning objectives;
    • Development - prepare the course materials etc.;
    • Implementation - deliver the course or whatever (some form of training or awareness or learning opportunity);
    • Evaluation - figure out how well it's going in terms of meeting the objectives.
    ADDIE was published back in the 1970s.  At first glance, it looks like a useful framework ... but look again. Isn't that just the core of the classic waterfall structured project management method? If so, consider this: what's missing?

    As commonly represented, it's an open-ended linear process, whereas in fact it should be iterative. In particular, the Evaluation activity generates metrics, information that can and should be used to guide the next round of awareness and training. It feeds into future Analysis and Design activities, for instance learning approaches that worked well may be worth repeating or boosting relative to those that flopped. Approaches that didn't go so well should be reviewed to determine whether they are worth revising for another go, parking, or consigning to the tip.
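    To make the loop concrete, here's a toy sketch in Python - the phase names come from ADDIE, but the functions, numbers and 'improvement' logic are entirely invented for illustration:

```python
# A toy sketch (mine, not part of ADDIE itself) of the cycle run iteratively,
# with Evaluation metrics feeding the next round's Analysis and Design.

def analyse(objective, lessons):
    # Analysis: start from whatever the last Evaluation achieved
    baseline = lessons[-1] if lessons else 0.0
    return {"objective": objective, "baseline": baseline}

def design_develop_implement(analysis):
    # Stand-in for Design, Development and Implementation: assume each
    # round improves somewhat on the previous baseline (illustrative only)
    return min(1.0, round(analysis["baseline"] + 0.3, 2))

def evaluate(score, target):
    # Evaluation: measure achievement against the learning objective
    return {"score": score, "met": score >= target}

lessons = []                 # metrics carried forward between cycles
target = 0.8                 # e.g. an 80% pass rate on an awareness quiz
for cycle in range(4):
    analysis = analyse("phishing awareness", lessons)
    score = design_develop_implement(analysis)
    result = evaluate(score, target)
    lessons.append(result["score"])

print(lessons)               # each iteration builds on the last
```

    The point is only the shape of the thing: Evaluation's output becomes the next Analysis's input, round after round, rather than the process simply falling off the end.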

    What about the traditional testing and authorization phase? What happened to that? When awareness and training materials are believed ready to use, shouldn't they be competently and thoroughly checked against the specifications*, revised, polished and finalized if appropriate, and only then authorized for release? 

    *Hmmm, thinking about it, documented specifications are merely implied by the Analysis step. Seems to me that's a crucial part of the process that shouldn't be glossed over. It's also another iterative activity, in that subsequent steps may involve revisiting and revising the specifications, reassessing various assumptions and exploiting creativity, such as novel approaches.

    Oh and 'revisiting specifications' hints at change management, in fact process management as a whole is lacking from ADDIE. If management is pouring (well OK, dribbling or drip-feeding) corporate resources into awareness and training, it is perfectly reasonable for them to anticipate an efficient and effective process. Management concerns all parts of the process e.g.:
    • Strategies - high level objectives or goals that knit training and awareness in with various other corporate activities and objectives (e.g. ensuring that information security is part of induction or orientation sessions for new starters and people being promoted into new positions with new responsibilities, and linking security knowledge and competences with ...);
    • Plans - things such as project plans with scope, priorities, timescales, milestones and decision points, deadlines, dependencies/critical paths, resources etc.;
    • Resources - estimating, budgeting, allocating and accounting for the resources needed, provided and consumed, including people, tools, methods and materials; also motivating and guiding the people involved i.e. man-management for the people designing, developing and delivering the training and awareness, supporting, encouraging and getting the most value from them;
    • Authorizing or approving things, considering options, making key decisions, juggling resources and priorities, dealing with resistance and show-stoppers, smoothing the way to achieve the best possible outcome under the circumstances;
    • Risks, incidents and changes - essentially preparing for and handling unanticipated events, things that "just come up" in the course of the activities, including opportunities (beneficial risks);
    • Quality e.g. process improvement, corporate learning and maturity, squeezing every last drop of value from the process (not least the time and effort invested by trainees) and systematically improving things with every iteration, drawing on both internal and external sources of inspiration and innovation (e.g. exploiting new approaches to training, and cutting-edge research on the psychology of learning).
    Tomorrow I'll be back with further thoughts on the parallels between ADDIE and [project] management methods.

    Thursday 28 September 2017

    Safe & secure


    The Coming Software Apocalypse is a long, well-written article about the growing difficulties of coding extremely complex modern software systems. With something in the order of 30 to 100 million lines of program code controlling fly-by-wire planes and cars, these are way too large and complicated for even gifted programmers to master single-handedly, while inadequate specifications, resource constraints, tight/unrealistic delivery deadlines, laziness/corner-cutting, bloat, cloud, teamwork, compliance assessments plus airtight change controls, and integrated development environments can make matters worse. 

    Author James Somers spins the article around a central point. The coding part of software development is a tough intellectual challenge: programmers write programs telling computers to do stuff, leaving them divorced from the stuff - the business end of their efforts - by several intervening, dynamic and interactive layers of complexity. Since there's only so much they can do to ensure everything goes to plan, they largely rely on the integrity and function of those other layers ... and yet despite being pieces of a bigger puzzle, they may be held to account for the end result in its entirety.

    As if that's not bad enough already, the human beings who actually use, manage, hack and secure IT systems present further challenges. We're even harder to predict and control than computers, some quite deliberately so! From the information risk and security perspective, complexity is our kryptonite, our Achilles heel.

    Somers brings up numerous safety-related software/system incidents, many of which I have seen discussed on the excellent RISKS List.  Design flaws and bugs in software controlling medical and transportation systems are recurrent topics on RISKS, due to the obvious (and not so obvious!) health and safety implications of, say, autonomous trains and cars.

    All of this has set me thinking about 'safety' as a future awareness topic, given the implications for all three of our target audiences:
    1. Workers in general increasingly rely on IT systems for safety-critical activities. It won't be hard to think up everyday examples - in fact it might be tough to focus on just a few!

    2. With a bit of prompting, managers should readily appreciate the information risks associated with safety- and business-critical IT systems, and would welcome pragmatic guidance on how to treat them;

    3. The professional audience includes the programmers and other IT specialists, business analysts, security architects, systems managers, testers and others at the sharp end, doing their best to prevent or at least minimize the adverse effects when (not if) things go wrong. By introducing the integration and operational aspects of complex IT systems in real-world situations, illustrated by examples drawn from James Somers' article and RISKS etc., we can hopefully get them thinking, researching and talking about this difficult subject, including ways to bring simplicity and order to the burgeoning chaos.
    Well that's the outline plan, today anyway. No doubt the scope will evolve as we continue researching and then drafting the materials, but at least we have a rough goal in mind: another awareness topic to add to our bulging portfolio.

    Tuesday 20 June 2017

    Workplace infosec policies


    Protecting information in the workplace is such a broad brief that we're working on 4 policy templates for the July awareness module:
    1. Workplace information security policy - concerns the need to identify and address information risks wherever work is performed, and wherever valuable information exists (not just at the office!).  This is an update to our 'office security policy'.

    2. Information retention policy - the timescales for retention and/or the criteria for disposal, of information should be specified when it is classified, along with the security requirements for safe storage, communications and access.

    3. Information disposal policy - when information is no longer required, it may need to be disposed of securely using forensically sound techniques.

    4. Information classification policy - updated to reflect the need to specify retention and destruction requirements where applicable (e.g. if mandated in laws, regulations or contracts).
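    By way of illustration, the linkage between classification, retention and secure disposal that those policies describe might be modelled along these lines - the labels, periods and disposal methods below are invented placeholders, not recommendations:

```python
from datetime import date, timedelta

# Hypothetical classification scheme tying each label to retention and
# disposal requirements - illustrative values only, not from any standard.
SCHEME = {
    "public":       {"retain_years": 1, "disposal": "ordinary deletion"},
    "internal":     {"retain_years": 3, "disposal": "ordinary deletion"},
    "confidential": {"retain_years": 7, "disposal": "forensically sound erasure"},
}

def disposal_due(classification, created):
    """When a record becomes eligible for disposal, and by what method."""
    rule = SCHEME[classification]
    due = created + timedelta(days=365 * rule["retain_years"])
    return due, rule["disposal"]

due, method = disposal_due("confidential", date(2017, 6, 20))
print(due, method)
```

    The useful property is that retention and disposal fall out of the classification decision automatically, rather than being argued afresh for every record.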
    Several other information security policies are also relevant - in fact virtually all of them - but if we attempted to promote them all, the key awareness messages would be diluted and lose their impact.  Even citing all the relevant policies from those 4 would become unwieldy, so instead we pick out those few that are most important in this context.

    This situation illustrates the value of a coherent and integrated suite of information security policies, designed, developed and managed as a whole. Having personally written all our policies, I appreciate not just what they say, but what they are intended to achieve and how they inter-relate. At the same time, I'm only human! Every time I review and revise the policies, I spot 'opportunities' ranging from minor readability improvements to more substantive changes e.g. responding to the effects of BYOD and IoT on information risks. Revising a policy is also an opportunity to refresh the accompanying security awareness materials, reminding everyone about the topic.

    Given that the landscape is constantly shifting around us, policy maintenance is inevitably an ongoing task. So when was the last time you checked and updated yours?

    Hinson tip: sort the policy files by the 'last updated' date, and set to work on at least checking the ones that haven't been touched in ages. It's surprising how quickly they become limp, lackluster and lifeless if not actually moldy like stale bread.
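    For instance, a few lines of Python can surface the policies most overdue for attention (the folder name and the two-year staleness threshold below are placeholders - adjust to taste):

```python
import time
from pathlib import Path

POLICY_DIR = Path("policies")          # placeholder: wherever your policies live
STALE_AFTER = 2 * 365 * 24 * 3600      # flag anything untouched for ~2 years

def stale_policies(folder, max_age_seconds, now=None):
    """Return (filename, is_stale) pairs, sorted oldest-first by last update."""
    now = now if now is not None else time.time()
    files = sorted(folder.glob("*"), key=lambda p: p.stat().st_mtime)
    return [(p.name, now - p.stat().st_mtime > max_age_seconds) for p in files]

if POLICY_DIR.is_dir():
    for name, is_stale in stale_policies(POLICY_DIR, STALE_AFTER):
        print(("STALE  " if is_stale else "       ") + name)
```

    Note the caveat: file modification dates are a crude proxy for review dates, so treat the output as a prompt to investigate, not proof of neglect.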


    PS  If you have to scrabble around just to find all the policies before sorting them, well the learning point is obvious, isn't it?

    PPS  No, I think it's a daft idea to have a policy on policy maintenance!

    Wednesday 14 June 2017

    The periodic table of atomic controls [updated]

    Many information security controls are multi-purpose, hence they could be specified in several places: several policies, plus procedures, standards, guidelines etc. That multiplicity creates a nightmare for the ISO/IEC JTC 1/SC 27 project team trying to generate a succinct version of ISO/IEC 27002 without duplications, gaps or discrepancies in the control catalog. It’s also a potential nightmare for anyone writing corporate policies - or an opportunity, depending on how you deal with it.

    My current pragmatic approach is to mention [hopefully] all the important controls in each topic-specific policy template, with a reference section that mentions other related policies, creating a kind of policy matrix. I’m still wary of gaps and discrepancies though: with 60+ policies in our matrix so far, it’s fast approaching the limit of my intellectual abilities and memory to keep them all aligned! It’s an ongoing task to review and revise/update the policy templates, without breaking links, creating discrepancies, or missing anything important.

    My mention of ‘control catalog’ hints at a more rigorous approach: a database where every control is listed once, definitively, and then referenced from all the places that need to describe or mandate or recommend the controls. That in turn requires us to be crystal-clear about what constitutes a control. User authentication, for instance, is in fact a complex of several controls such as identification, challenge-response, cryptography, biometrics, enrolment, awareness, logging, compliance, passwords/PINs and more. Some of those are themselves complex controls that could be broken down further … leading to the ultimate level of ‘atomic controls’ or ‘control elements’. The control catalog, then, would be built around a kind of periodic table of all known atomic information security controls, which can be used individually or assembled into 'compound controls' mitigating various information risks.  
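    To make the idea tangible, here is a minimal sketch of such a catalog - the identifiers, names and groupings are mine, invented purely for illustration, not anything SC27 has defined:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AtomicControl:
    """An indivisible 'control element' - the periodic-table analogue."""
    control_id: str
    name: str
    group: str                  # rough analogue of a periodic-table group

@dataclass
class CompoundControl:
    """A practical control assembled from atomic constituents, by reference."""
    name: str
    constituents: list = field(default_factory=list)   # atomic control_ids only

# Each atomic control is defined exactly once in the catalog ...
CATALOG = {c.control_id: c for c in [
    AtomicControl("ID-01", "identification", "identity"),
    AtomicControl("AU-01", "challenge-response", "authentication"),
    AtomicControl("CR-01", "cryptography", "technical"),
    AtomicControl("LG-01", "logging", "assurance"),
]}

# ... and compound controls merely reference it, avoiding duplication
user_authentication = CompoundControl(
    "user authentication", ["ID-01", "AU-01", "CR-01", "LG-01"])

parts = [CATALOG[cid].name for cid in user_authentication.constituents]
print(parts)   # the atomic constituents of 'user authentication'
```

    The key property is that each atomic control exists in exactly one place, so updating its definition flows through automatically to every compound control - and every policy - that references it.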
    Extending the analogy, it would be helpful if our periodic table (or 'information security elemental control catalog' or whatever we end up calling it) had a rational structure, some sort of logical sequence with groupings of related atomic controls in much the same way that, say, the 'noble gases' are clustered together on the real periodic table, giving the colored regions. Also, the atomic controls would need to be rigorously specified, with equivalents for the atomic number and other chemical parameters. Right now, though, I can only guess at some of the parameters that might be used to group related atomic controls: I suspect a structure might emerge once the complex controls are decomposed, the constituent atomic controls are identified, and they start piling up in a big unsightly heap. These are just some of the complexities that SC27 is currently grappling with in the ongoing revision of ISO/IEC 27002.

    It’s also, by the way, something where we might help out SC27 by compiling our periodic table. At the SC27 meeting in Hamilton, I tried unsuccessfully to persuade one of the project groups to set to work on that, instead of what they were proposing to do (yet another revamp of the glossary). It’s really a sizable research project, an idea for some enterprising academic, MSc/PhD student or research team maybe. It's entirely possible that someone out there is already on to it. If so, I'd love to hear about or from them. Do please get in touch.
    UPDATE June 20: I published this blog item on LinkedIn to reach a wider spectrum of readers. Michala Liavaag kindly pointed out that NIST SP800-53 has a controls catalog ... but the controls listed in Appendix F are compound or complex controls, not elemental. I'm proposing to take the analysis down to the lowest level, to the building blocks from which practical controls are assembled.

    UPDATE June 27: in an opinion piece in CSO Magazine asserting that ROI is the wrong metric for cybersecurity, Rick Howard says:
    "The idea of first principles has been around since the early Greek philosopher days. To paraphrase Aristotle, first principles in a designated problem space are atomic. They cannot be broken down any further. They are the building blocks for everything else. They drive every decision you make."

    Monday 12 June 2017

    Nothing small about business


    As a small business, we have to do and manage much the same stuff that any business has to do, such as:
    • Marketing, promoting and selling our products e.g. maintaining and updating our websites, preparing advertising copy etc.
    • Procurement and sales administration - licensing, invoicing etc.
    • Customer and supplier relations
    • Financial administration: budgeting, accounting, tax, expenses, pay & rations
    • HR & personal development
    • IT - hardware, software, firmware, wetware and - yes - IoT
    • Information risk and security, including awareness (golly!)
    • Strategy, governance, compliance 
    • Planning, resource allocation, prioritization
    • Market and competitor analysis
    • Research and development
    • Operations/production - working hard to make the products we sell
    • Quality assurance and quality control
    • Packaging, delivery and logistics
    • Elf'n-safety
    • Blogging and other social marketing/social media stuff
    In our case these are on a smaller, simpler scale compared to, say, a multinational megacorporation, but they are no less important to the business. The key difference is that (with some exceptions, namely our elite band of trusted advisors and specialist service providers) we rely on ourselves - our capabilities, expertise and skills across all of those areas, rather than calling on departments, teams and individuals who specialize. That necessarily makes us generalists, Jacks-and-Jills-of-all-trades with the attendant practical constraints and risks. We are constantly juggling priorities to meet deadlines.

    On the other hand, being personally involved with virtually everything going on means we don't have the regimented hierarchy, internal communications issues, corporate politics and so forth of larger organizations. We are glad not to suffer the enormous inertia and conservatism that plague large, mature organizations, nor the attendant overheads. We don't need to consult the rule books, check the policies and refer to the procedures to get stuff done. We can make substantial changes almost the very moment we decide to do something different, provided we have the resources - the knowledge and time mostly but also the motivation which stems from doing a good job, being respected and most of all being commercially successful. Minimal overheads help but still we need income.

    One of my tasks for the past week has been to prepare bids for a couple of prospective customers against their formal Requests For Tenders (RFTs), no doubt prepared by vast teams of procurement and legal specialists over the preceding weeks or months. Whereas they were able to spread the efforts and costs of planning, preparing, reviewing, approving, issuing and administering the RFTs across several people and functions, representing a tiny fraction of their organizations' total activities and costs, we have no option but to dedicate almost all of our available resources to bidding. It's disproportionately costly for us, yet we have little option if we want the business.

    We're used to squeezing a quart from the pint pot, but going for the whole gallon? Well, something has to give. With deadlines approaching and assorted jobs piling up on the side, I may be blogging less often for a while. Normal service will be resumed as soon as possible.

    On the upside, the more bids we prepare, the more efficient and effective we become at doing so. At least, I tell myself that's the cunning plan that stops me becoming totally snowed-under, buried in the drift.

    Friday 9 June 2017

    Weaving the Web

    One of the pleasures of my job is continual learning, doing my best to keep up with the field. I read loads, mostly on the Web but I also maintain a physical bookshelf well-stocked with books ... including:

    [Book cover: Weaving the Web by Tim Berners-Lee]

    Sir Tim Berners-Lee recounts the original design and development of the World Wide Web in the 1980s and 90s. This is more than merely an authoritative historical account, however valuable that may be. Tim elaborates on his big dreams and deep personal philosophy that drove him to conceive and gift to humanity the most powerful information technology invented - so far. 

    62 years ago when Tim was born (happy birthday!), ENIAC was in the final few months of its life and the 5,000-tube UNIVAC was just 2 years into commercial production. Computers were monstrous beasts with (by today's standards) minimal processing, storage and communications capabilities, yet ironically they were known as 'electronic brains'. Networking was virtually nonexistent, and email wasn't even invented until Tim was 16.

    Tim's early fascination with the 'power in arranging ideas in an unconstrained, weblike way' led him to create technologies to support that aim. This was true innovation: not merely coming up with bright ideas, wouldn't-it-be-nice pipe-dreams and theories, but putting them into practice and exploring them hands-on. He has remained hands-on ever since, and is the Director of the World Wide Web Consortium.

    Tim's vision extends way beyond what we have right now, into the realm of artificial intelligence, machine learning and real-time global collaboration on a massive scale, the 'semantic web' as he calls it. But in the sense of a proud parent watching their progeny make their way in the world, I suspect he is keen to see the Web develop and mature without the shackles of his own mental framework. The free Web ideal is closer to free speech than free beer.

    Bottom line: a fascinating insight into modern life.  Highly recommended and a steal at just $13 from Amazon.