Security in software development


Prompted by some valuable customer feedback earlier this week, I've been thinking about how best to update the SecAware policy template on software/systems development. The customer is apparently seeking guidance on integrating infosec into the development process, which raises the question "Which development process?". These days, we're spoilt for choice, with quite a variety of methods and approaches.

Reduced to its fundamentals, the aim is to end up with software/systems that are 'adequately secure', meaning no unacceptable information risks remain. That implies having systematically identified and evaluated the information risks at some earlier point, and having treated them appropriately - but how?
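To make that identify-and-evaluate step concrete, here is a deliberately trivial sketch (the risks, scales and scores are invented for illustration - they are not part of the SecAware template): score each information risk by likelihood and impact, then rank them so that treatment effort goes to the biggest exposures first.

    # Illustrative only: rank hypothetical information risks by
    # likelihood x impact so treatment effort follows the biggest first.
    risks = [
        # (description, likelihood 1-5, impact 1-5)
        ("Injection flaw in customer portal", 4, 5),
        ("Backup media lost in transit", 2, 4),
        ("Stale user accounts left enabled", 3, 2),
    ]

    for name, likelihood, impact in sorted(
            risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"exposure {likelihood * impact:>2}: {name}")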

The traditional waterfall development method works sequentially from business analysis and requirements definition, through design and development, to testing and release - often many months later. Systems security ought to be an integral part of the requirements up-front, and I know from experience just how hard it is to retrofit security into a waterfall project that has been running for more than a few days or weeks without security involvement.

A significant issue with waterfall is that things can change substantially in the course of development: the organisation hopefully ends up with the system it originally planned, but that may no longer be the system it needs. If the planned security controls turn out to be inadequate in practice, too bad: the next release or version may be months or years away, if ever (assuming the same waterfall approach is used for maintenance, which is not necessarily so*). The quality of the security specification (which drives the security design, development and testing) depends on identifying and evaluating the information risks in advance, predicting the threats, vulnerabilities and impacts likely to be of concern at the point of delivery some time hence.

In contrast, lean, agile or rapid application development methods cycle through smaller iterations more quickly, presenting more opportunities to update security ... but also more chances to break it, given the hectic pace of change. A key problem is keeping everyone focused on security throughout the process, ensuring that, whatever else is going on, sufficient attention is paid to the security aspects. Rapid decision-making is part of the challenge here. It's not just the method that needs to be agile!

DevOps and scrum approaches use feedback from users on each mini-release to inform the ongoing development. Hopefully security is part of that feedback loop so that it improves incrementally at the same time, but 'hopefully' is a massive clue: if users and managers are not sufficiently security-aware to push for improvements or resist degradation, and if the development team is busy on other aspects, security can just as readily degrade incrementally as other changes take priority. 

Another issue is that security testing has to fit within short process cycles, tending towards quick, superficial tests with less opportunity for the thorough, in-depth testing needed to dig out troublesome little security issues lurking deep within. Personally, I would be very uncomfortable developing a cryptographic application too quickly - or, for that matter, anything business- or safety-critical.
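As a minimal sketch of what 'quick' can mean in practice - assuming a Python codebase and the bandit and pip-audit scanners, neither of which the policy template mandates - a per-iteration gate might simply run the automated checks and fail the build, leaving the thorough manual testing for scheduled releases:

    # Illustrative per-iteration security gate. Assumes the 'bandit'
    # static analyser and 'pip-audit' dependency checker are installed;
    # both exit non-zero when they find problems.
    import subprocess
    import sys

    checks = [
        ["bandit", "-r", "src/", "-ll"],  # static analysis, medium+ severity
        ["pip-audit"],                    # known-vulnerable dependencies
    ]

    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            sys.exit(f"Security gate failed: {' '.join(cmd)}")

    print("Quick checks passed - in-depth testing still due before release.")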

So, regardless of the method, there are some common factors:

  • The chosen development methods have risk and security implications;
  • Development dynamics are challenging, on top of the usual security concerns over complexity: change presents both risks and opportunities;
  • Security is just one of several competing priorities, hence there is a need for sufficient, suitable resources to keep it moving along at the right pace;
  • Progress is critically reliant on the security awareness and capabilities of those involved, i.e. the users, designers, developers, testers, project/team leaders and managers.

* Just one of those dynamics is that the processes themselves may change in the course of development: a system initially developed and released through a classical waterfall project may be maintained by something resembling the rapid, iterative approaches. The iteration cycle is likely to slow as the system matures or resources tighten, or conversely to speed up in response to an increased need for change from the business or the technology.
 
So, overall, it makes sense for a software/system development security policy to cover:
  • An engineering mindset, prioritising the work according to the organisation's information risks ('risk-first development'?), with a willingness to settle for 'adequate' (meaning fit-for-purpose) security rather than striving in vain for perfection;
  • Flexibility of approach - supporting/enabling whatever processes are in use at the time, integrating security with other aspects and collaborating with colleagues where possible;
  • Sufficient resourcing for the information risk and security tasks, justified according to their anticipated value (with implications for metrics, monitoring and reporting);
  • Monitoring and dynamically responding to changes, being driven by or driving priorities according to circumstances, seizing opportunities to improve security and resisting retrograde moves in order to ratchet up security towards adequacy (a crude 'ratchet' is sketched below).
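A hypothetical sketch of that ratchet (the file names and JSON format are invented for illustration): compare each release's open security findings against the previous baseline, refuse to regress, and tighten the baseline whenever things improve.

    # Hypothetical ratchet: open security findings must never exceed
    # the previous release's baseline; improvements reset the baseline.
    import json
    import sys
    from pathlib import Path

    baseline_file = Path("security_findings_baseline.json")  # invented name
    baseline = json.loads(baseline_file.read_text())
    current = json.loads(Path("security_findings_current.json").read_text())

    if len(current) > len(baseline):
        sys.exit(f"Retrograde move: {len(current)} findings vs baseline {len(baseline)}")

    if len(current) < len(baseline):
        baseline_file.write_text(json.dumps(current))  # the ratchet tightens

    print("Security posture held or improved.")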
The policy could get into general areas such as accountability (e.g. various process checkpoints with management authorisation/approval), and delve deeper into security architecture (to reduce design flaws), secure coding (to reduce bugs), security testing (to find the remaining flaws and bugs) and security functions (such as backups and user admin) ... but rather than bloat the SecAware policy template, we choose to leave those details to other policies and procedures. Customers are welcome to modify or supplement the template as they wish.
 
Whether that suits the market remains to be seen. What do you think? Do your security policies cover software/system development? If so, do they at least address the issues I've noted? If not, $20 is a wise investment ...
