Tuesday 26 July 2022

Half-a-dozen learning points from an ISO 27001 certification announcement

This morning I bumped into a marketing/promotional piece announcing PageProof’s certified “compliance” (conformity!) with “ISO 27001” (ISO/IEC 27001!). Naturally, they take the opportunity to mention that information security is an integral part of their products. The promo contrasts SOC 2 with ’27001 certification, explaining why they chose ’27001 to gain specific advantages such as GDPR compliance - and fair enough. In the US, compliance is A Big Thing. I get that.

It occurs to me, though, that there are other, broader advantages to ‘27001 which the promo could also have mentioned, further valuable benefits of their newly-certified ISMS.

Monday 25 July 2022

Resilience is ...


... "the ability for systems, networks, processes, people, functions, departments, business units, business operations, organisations, business relationships, even entire nations to continue operating more-or-less unaffected by security incidents, thereby ensuring availability and hence business continuity"
[source: SecAware glossary]

... depending on others and being there for them when they need us most

... "robustness, stability, dependability" [source: SecAware glossary]

... the rod bending alarmingly ... while landing a whopper

... an oak tree growing roots against the prevailing wind

... taking the punches, reeling but not out for the count

... demonstrating, time after time, personal integrity

... willingness to seize opportunities, taking chances

... coping with social distancing, masks and all that

... accumulating reserves for the bad times ahead

... the bloody-minded determination to press on

... disregarding trivia, focusing on what matters

... a society for whom this piece resonates

... deep resolve founded on inner strength

... knowing it'll work out alright in the end

... a word, a rich concept, a way of life

... more than 'putting on a brave face'

... knowing when and how to concede

... a prerequisite for ultimate success

... facing up to adversity: bring it on

... self-belief and trust in the team

... taking the knocks and learning

... communities pulling together

... being prepared for the worst

... standing out from the crowd

... being fit enough to survive

... pressing ahead, regardless

... standing up to be counted

... disproving the naysayers

... finding creative solutions

... having fallback options

... keeping on keeping on

... wiping away the tears

... always bouncing back

... built layer-upon-layer

... thriving on adversity

... having what it takes

... steadfast insistence

... picking your fights

... sheer doggedness

... an admirable trait

... justified optimism

... retaining options

... quiet confidence

... daring to differ

... plugging away

... core strength

... beyond hope

... getting even

... rerouting ...

... suppleness

... valuable

... true grit

... faith

... us

... 

 

Sunday 24 July 2022

Risk management trumps checklist security

While arguably better than nothing at all, an unstructured approach to managing information security leaves organisations with a jumbled mix of controls with no clear focus or priorities and - often - glaring holes in the arrangements. The lack of structure indicates the absence of the genuine management understanding, commitment and support necessary to give information risk and security due attention - and sufficient resourcing - throughout the business.
 
It's hard to imagine anyone considering such a crude, messy approach adequate, even those who coyly admit to using it!  I'm not even sure it qualifies as 'an approach'.
 
Anyway, the next rung up the ladder is the adoption of a checklist approach: essentially, someone says 'Just adopt these N controls and you'll be secure'! It may be true that some information security controls are more-or-less universal, so any organisation lacking them all might be missing out. Maybe it is a step up from the previous approach, and yet there are significant issues with checklists, which tend to be:

Friday 22 July 2022

Security in software development


Prompted by some valuable customer feedback earlier this week, I've been thinking about how best to update the SecAware policy template on software/systems development. The customer is apparently seeking guidance on integrating infosec into the development process, which raises the question "Which development process?". These days, we're spoilt for choice, with quite a variety of methods and approaches.

Reducing the problem to its fundamentals, there is a desire to end up with software/systems that are 'adequately secure', meaning no unacceptable information risks remain. That implies having systematically identified and evaluated the information risks at some earlier point, and treated them appropriately - but how?
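To make that concrete, here's a minimal sketch of the identify-evaluate-treat sequence, assuming a simple likelihood-times-impact scoring scheme (the risks, scales and threshold are invented for illustration, not prescriptive):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in a toy information risk register."""
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def treatment(risk: Risk, appetite: int = 8) -> str:
    """Crude treatment decision: scores above the organisation's risk
    appetite must be mitigated before release; the rest are accepted."""
    return "mitigate before release" if risk.score > appetite else "accept"

register = [
    Risk("SQL injection via the customer search form", likelihood=4, impact=5),
    Risk("Verbose error pages leak stack traces", likelihood=3, impact=2),
    Risk("Stale test accounts left enabled in production", likelihood=2, impact=4),
]

# Tackle the biggest risks first - 'risk-first development'
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description}: {treatment(risk)}")
```

However crude the scoring, the point is the sequence - identify, evaluate, then treat - and revisiting it as things change, which is where the choice of development method starts to matter.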

The traditional waterfall development method works sequentially from business analysis and requirements definition, through design and development, to testing and release - often many months later. Systems security ought to be an integral part of the requirements up-front, and I appreciate from experience just how hard it is to retro-fit security into a waterfall project that has been running for more than a few days or weeks without security involvement.

A significant issue with waterfall is that things can change substantially in the course of development: the organisation hopefully ends up with the system it originally planned, but that may no longer be the system it needs. If the planned security controls turn out to be inadequate in practice, too bad: the next release or version may be months or years away, if ever (assuming the same waterfall approach is used for maintenance, which is not necessarily so*). The quality of the security specification (which drives the security design, development and testing) depends on the identification and evaluation of information risks in advance, predicting the threats, vulnerabilities and impacts likely to be of concern at the point of delivery some time hence.

In contrast, lean, agile or rapid application development methods cycle through smaller iterations more quickly, presenting more opportunities to update security ... but also more chances to break security due to the hectic pace of change. A key problem is to keep everyone focused on security throughout the process, ensuring that whatever else is going on, sufficient attention is paid to the security aspects. Rapid decision-making is part of the challenge here. It's not just the method that needs to be agile!

DevOps and scrum approaches use feedback from users on each mini-release to inform the ongoing development. Hopefully security is part of that feedback loop so that it improves incrementally at the same time, but 'hopefully' is a massive clue: if users and managers are not sufficiently security-aware to push for improvements or resist degradation, and if the development team is busy on other aspects, security can just as readily degrade incrementally as other changes take priority. 

Another issue is that security testing has to suit short process cycles, with a tendency towards quick/superficial tests and less opportunity for the thorough, in-depth testing needed to dig out troublesome little security issues lurking deep within. Personally, I would be very uncomfortable developing a cryptographic application too quickly, or for that matter anything business- or safety-critical.
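One way to square that circle is to automate the quick checks so they run on every iteration, reserving the deep-dive testing for milestones. Here's a sketch of the quick end, using plain pytest-style assertions (the config file name is hypothetical, and treat the tool invocation as illustrative):

```python
import subprocess

def test_no_debug_mode_in_config():
    """Cheap guard that fits any iteration: debug mode must be off."""
    with open("app.cfg") as f:  # hypothetical config file
        assert "debug = true" not in f.read().lower()

def test_dependencies_have_no_known_vulnerabilities():
    """Scan installed dependencies; pip-audit exits non-zero on findings."""
    result = subprocess.run(["pip-audit"], capture_output=True, text=True)
    assert result.returncode == 0, result.stdout
```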

So, there are some common factors there, regardless of the method:

  • The chosen development methods have risk and security implications;
  • Various dynamics are challenging, on top of the usual security concerns over complexity, and changes present both risks and opportunities;
  • Security is just one of several competing priorities, hence there is a need for sufficient, suitable resources to keep it moving along at the right pace;
  • Progress is critically reliant on the security awareness and capabilities of those involved i.e. the users, designers, developers, testers, project/team leaders and managers.
* Just one of those dynamics is that the processes may change in the course of development: a system initially developed and released through a classical waterfall project may be maintained by something resembling the rapid, iterative approaches. The cycle speed for iterations is likely to slow down as the system matures or resources are tight, or conversely speed up to react to an increased need for change from the business or technology. 
 
So, overall, it makes sense for a software/system development security policy to cover:
  • An engineering mindset, prioritising the work according to the organisation's information risks ('risk-first development'?), with a willingness to settle for 'adequate' (meaning fit-for-purpose) security rather than striving in vain for perfection;
  • Flexibility of approach - supporting/enabling whatever processes are in use at the time, integrating security with other aspects and collaborating with colleagues where possible;
  • Sufficient resourcing for the information risk and security tasks, justified according to their anticipated value (with implications for metrics, monitoring and reporting);
  • Monitoring and dynamically responding to changes, being driven by or driving priorities according to circumstances, seizing opportunities to improve security and resisting retrograde moves in order to ratchet-up security towards adequacy. 
The policy could get into general areas such as accountability (e.g. various process checkpoints with management authorisation/approval), and delve deeper into security architecture (to reduce design flaws), secure coding (to reduce bugs) and security testing (to find the remaining flaws and bugs), plus security functions (such as backups and user admin) ... but rather than bloat the SecAware policy template, we choose to leave the details to other policies and procedures. Customers are welcome to modify/supplement the template as they wish. 
 
Whether that suits the market remains to be seen. What do you think? Do your security policies cover software/system development? If so, do they at least address the issues I've noted? If not, $20 is a wise investment ...

Thursday 21 July 2022

ISO management systems assurance

In the context of the ISO management systems standards, the internal audit process, and the accredited certification system as a whole, are assurance controls primarily intended to confirm that organisations' management systems conform to the explicit requirements formally expressed in the respective ISO standards.

A conformant management system, in turn, is expected to manage (design, direct, control, monitor, maintain …) something: for ISO/IEC 27001, that 'something-being-managed' is the suite of information security controls and other means of addressing the organisation’s information risks (called 'information security risks' or 'cybersecurity risks' in the standards). For ISO 9001, it is the quality assurance activities designed to ensure that the organisation's products (goods and services) are fit for purpose. For ISO 14001, it is the controls and activities necessary to minimise environmental damage.

My point is that the somethings-being-managed are conceptually distinct from the 'management systems' through which managers exert their direction and control. This is a fundamental part of the ISO management systems approach, allowing ISO to specify the systems required to manage a wide variety of somethings in a similar way - a governance approach, in fact.
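If it helps, here's the distinction in programming terms - a deliberately simplistic sketch (class names invented) of one generic management-system 'wrapper' applied to different somethings-being-managed:

```python
from typing import Protocol

class Managed(Protocol):
    """The 'something-being-managed' - deliberately generic."""
    def status(self) -> str: ...

class InfoSecControls:
    """cf. ISO/IEC 27001: information security controls and risk treatments."""
    def status(self) -> str:
        return "information security controls operating as intended"

class QualityAssurance:
    """cf. ISO 9001: activities keeping products fit for purpose."""
    def status(self) -> str:
        return "product defect rate within tolerance"

class ManagementSystem:
    """The common ISO wrapper: the same plan-operate-monitor-review
    shape, whatever is being managed."""
    def __init__(self, managed: Managed) -> None:
        self.managed = managed

    def management_review(self) -> None:
        print("Management review:", self.managed.status())

# Identical management-system machinery, different somethings-being-managed
ManagementSystem(InfoSecControls()).management_review()
ManagementSystem(QualityAssurance()).management_review()
```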

Management system certification auditors, whose sole purpose is to audit clients' management systems' conformity with the requirements expressed in the standards, have only a passing interest in those somethings-being-managed, essentially checking that they are indeed being actively managed through the management system, thereby proving that the management system is in fact operational and not just a nice neat set of policies and procedures on paper.

Management system internal auditors, in contrast, may be given a wider brief by management, which may include probing further into the somethings-being-managed ... but that’s down to management’s decision about the scope and purpose of the internal audits, not a formal requirement of the standards. Management may just as easily decide to have the internal auditors stick to conformity with the management system standard, just the same as the certification auditors.

Likewise with management reviews of the management systems: the ISO standards stop well short of specifying all the things management might conceivably want to be reviewed. Reviewing conformity with the respective ISO management systems standards is just one of several possible review objectives, alongside all the things hopefully being measured through the management system metrics.

Monday 18 July 2022

Skyscraper of cards


Having put it off for far too long, I'm belatedly trying to catch up with some standards work in the area of Root of Trust, which for me meant starting with the basics, studying simple introductory articles about RoT.

As far as I can tell so far, RoT is a concept - the logical basis, the foundation on which secure IT systems are built.

'Secure IT systems' covers a huge range. At the high end are those used for national security and defence purposes, plus safety- and business-critical systems facing enormous risks (substantial threats and impacts). At the low end are systems where the threats are mostly accidental and the impacts negligible - perhaps mildly annoying. Not being able to tell precisely how many steps you've taken today, or being unable to read this blog, is hardly going to stop the Earth spinning on its axis. In fact, 'mildly' may be overstating it.

'Systems' may be servers, desktops, portables and wearables, plus IoT things and all manner of embedded devices - such as the computers in any modern car or plane controlling the engine, fuel, comms, passenger entertainment, navigation and more, or the smart controller for a pacemaker.

Trust me, you don't want your emotionally disturbed ex-partner gaining anonymous remote control of your brakes, altimeter or pacemaker.

In terms of the layers, we the people using IT are tottering precariously on top of a house of cards. We interact with application software, which interacts with the operating system and, via drivers and microcode, the underlying hardware. A 'secure system' is a load of software running on a bunch of hardware, where the software has been designed to distrust the users and administrators, other software and the hardware, all the way down to, typically, a Hardware Security Module, Trusted Platform Module or similar dedicated security device, subsystem or chip.

Ironically in relation to RoT, distrust is the default, particularly for the lower layers unless/until they have been authenticated - but there's the rub: towards the bottom of the stack, how can low-level software be sure it is interacting with and authenticating the anticipated security hardware if all it can do is send and receive signals or messages? Likewise, how can the module be sure it is interacting with the appropriate low-level software? What prevents a naughty bit of software acting as a middleman between the two, faking the expected commands and manipulating the responses in order to subvert the authentication controls? What prevents a nerdy hacker connecting logic and scope probes to the module's ports in order to monitor and maybe inject signals - or just noise, to see how well the system copes? How about a well-appointed team of crooks faking a bank ATM's crypto-module, or a cluster of spooks figuring out the nuclear missile abort codes?
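For a flavour of how the authentication part can work, here is a toy challenge-response sketch: the module proves possession of a shared secret by keying a MAC over a fresh challenge, so a fake module without the key cannot answer correctly. This is purely illustrative - real RoT designs rest on dedicated hardware, attested key storage and usually asymmetric crypto, not a Python object:

```python
import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)  # imagine this provisioned at manufacture

class SecurityModule:
    """Stand-in for a TPM/HSM holding a secret key it never discloses."""
    def __init__(self, key: bytes) -> None:
        self._key = key

    def respond(self, challenge: bytes) -> bytes:
        # Prove possession of the key without revealing it
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def authenticate(module: SecurityModule, expected_key: bytes) -> bool:
    challenge = secrets.token_bytes(16)  # fresh nonce, so replayed answers fail
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(module.respond(challenge), expected)

genuine = SecurityModule(DEVICE_KEY)
imposter = SecurityModule(secrets.token_bytes(32))  # wrong key
print(authenticate(genuine, DEVICE_KEY))   # True
print(authenticate(imposter, DEVICE_KEY))  # False
```

Note that this only defeats an imposter lacking the key: a middleman relaying messages verbatim between the genuine parties needs further controls (channel binding, physical protection), which is exactly the layered problem described above.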

Physically securing the hardware is a start, such that if someone tries to - say - open ('decapsulate') the TPM chip to analyse the silicon wafer under an electron microscope in the hope of finding some secret key coded within, the chip somehow destroys itself in the process - perhaps also the warhead for good measure. 

Other hardware/electronic controls can make it virtually impossible for hardware hackers to mount side-channel attacks, painstakingly monitoring and manipulating the module's power supply and ambient temperature in an attempt to reveal its inner secrets.

Cryptography is the primary control, coupled with appropriate use of authentication and encryption processes in both hardware and software (e.g. 'microcode' physically built into the TPM chip's crypto-processor), plus other inscrutable controls (e.g. rate-limiting brute-force attacks and, ultimately again, sacrificing itself, taking its secrets with it).
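The rate-limiting idea amounts to something like this sketch - count consecutive failures, back off exponentially, and zeroise the secrets past a hard threshold (loosely modelled on TPM dictionary-attack lockout; all the numbers are invented):

```python
import time

class LockoutGuard:
    """Toy anti-brute-force guard for a security module."""
    def __init__(self, max_failures: int = 10) -> None:
        self.failures = 0
        self.max_failures = max_failures
        self.secrets_erased = False

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures >= self.max_failures:
            self.secrets_erased = True  # 'sacrificing itself'
            raise RuntimeError("secrets zeroised after repeated failures")
        time.sleep(min(2 ** self.failures, 60))  # exponential back-off

    def record_success(self) -> None:
        self.failures = 0  # legitimate use resets the counter
```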

Developing, producing and testing secure systems is tough, even with access to low-level debugging mechanisms such as JTAG ports and insider-knowledge about the design. There must be a temptation to install hard-coded backdoors (cheat codes), despite the possibility of 'some idiot' further down the line failing to disable them before products start shipping. There is surely a fascination with attempting to locate and open the backdoors without tripping the tripwires that spring open the trapdoors to oblivion.

OK, so now imagine all of that in relation to cloud computing, where 'the system' is not just a physical computer but a fairly loose and dynamic assembly of virtual systems running on servers who-knows-where, under the control of who-knows-who, sharing the global Internet who-knows-how.

Having added several extra floors to our house of cards, what could possibly go wrong? 

That's what ISO/IEC 27070:2021 addresses. 

At least, I think so. My head hurts. I may be coming down with vertigo.

Sunday 10 July 2022

Complexity, simplified

Following its exit from the EU, the UK is having to pick up on various important matters that were previously covered by EU laws and regulations. One such issue is to be addressed through a new law on online safety.

"Online safety: what's that?" I hear you ask.  "Thank you for asking, lady in the blue top! I shall elaborate ... errrr ..."

'Online safety' sounds vaguely on-topic for us and our clients, so having tripped over a mention of this, I went Googling for more information. 

First stop: the latest amended version of the Online Safety Bill. It is written in extreme legalese, peppered with strange terms defined in excruciating detail, and littered with internal and external cross-references, hardly any of which are hyperlinked.

Having somewhat more attractive things to do on a Sunday than study the bill, a quick skim was barely enough to pick up the general thrust. It appears to relate to social media and search engines serving up distasteful, antisocial, harmful and plain dangerous content, including ("but not limited to") pornographic, racist, sexist and terrorist materials. Explaining that previous sentence in the formal language more becoming of law evidently takes 230 pages, of the order of 100,000 words.

Luckily for us ordinary mortals, there are also explanatory notes - a brief, high-level summary of the bill, explaining what it is all about, succinctly and yet eloquently expressed in plain English with pictures (not). The explanatory notes are a mere 126 pages long - half the length of the original - with another 40-odd thousand words.

Simply explaining the explanatory notes takes half a page for starters:

 

So, the third bullet suggests that we read the 126 pages of notes PLUS the 230-page bill. My Sunday is definitely under threat. At this point, I'm glad I'm not an MP, nor a lawyer or judge, nor a manager of any of the organisations this bill seems likely to impact once enacted. I'm not even clear which organisations those might be. Defining the applicability of the law - including explicit exclusions to cater for legitimate journalism and free speech - takes a fair proportion of those 356 pages.

Despite not clearly expressing the risk, the bill specifies mitigating controls - well, sort of. In part it specifies that OFCOM is responsible for drawing up relevant guidance that will, in turn, specify control requirements on applicable organisations (to be listed and categorised on an official register, naturally), with the backing of the law including penalties. Since drafting, promoting and enforcing the guidance is likely to be costly, the bill even allows for OFCOM to pass (some of) its costs on to the regulated organisations, who will, in turn, pass them on to users. A veritable cost-cascade.

As to the actual controls, well the bill takes a classical risk-management approach involving impact assessments and responses such as taking down unsafe content and banning users who published it. There are arrangements for users to report unsafe content to service providers, plus automated content-scanning technologies, setting the incident management process in motion.
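In outline, then, the mechanism looks like routine incident management. A purely hypothetical sketch (the classifier, names and threshold are mine, not the bill's):

```python
from dataclasses import dataclass

@dataclass
class ContentReport:
    """A user's report of potentially unsafe content."""
    content_id: str
    reporter_id: str
    category: str  # e.g. "terrorism", "harassment"

def automated_scan(content_id: str) -> float:
    """Stand-in for an automated content classifier: harm score 0..1."""
    return 0.9  # pretend the scanner flagged this item

def triage(report: ContentReport, takedown_threshold: float = 0.8) -> str:
    score = automated_scan(report.content_id)
    if score >= takedown_threshold:
        return f"take down {report.content_id} and review the poster's account"
    return f"queue {report.content_id} for human moderation"

print(triage(ContentReport("post-123", "user-456", "harassment")))
```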

The overall governance structure looks roughly like this:

No wonder it takes >100,000 words to specify that little lot in law ... but, hey, maybe my diagram will save a thousand, a few dozen anyway.

You're welcome.

The reason I'm blabbering on about this here is that I'm still quietly mulling over a client's casual but insightful comment on Thursday.

“I was wondering whether [the information security policies we have been customising for them] might be a little too in depth for our little start-up.”

Fair comment! Infosec is quite involved and - as you'll surely appreciate from this very blog - I tend to focus and elaborate on the complexities, writing profusely on topics that I enjoy. I find it quite hard to explain stuff simply and clearly without first delving deep, particularly if the end product doesn't suit my own reading preferences.

Looking at the policies already prepared, I had cut down our policy templates from about 3 or 4 pages each to about 2, adjusting the wording to reflect the client's business, technology and people, and removing bits that were irrelevant or unhelpful in the context of a small tech business. But, yes, I could see how they might be considered in-depth, especially since, even after combining a few, there were 19 policies in the suite covering all the topics necessary.

So, I responded to the client's point by preparing a custom set of Acceptable Use Policies to supplement the more traditional topic-based policies already prepared. I started with our AUP templates - single-sided A4 leaflets in (for me!) a succinct style - laying out the organisation's rules for acceptable and unacceptable behaviours in topic areas such as malware, cloud and IoT. The writing style is direct and action-oriented, straight down-to-business.

Modifying the AUP templates for the client involved trivial changes such as incorporating their company name in place of 'the organisation', and swapping out the SecAware logo for theirs. A little trimming and adaptation of the bullet points to fit half a side per topic took a bit more time but, overall, starting with our templates was much quicker and easier than designing and preparing the AUPs from scratch.

I took the opportunity to incorporate some eye-catching yet relevant images to break up the text and lead the reader from topic-to-topic in a natural flow.

I merged the AUP templates into one consolidated document for ease of use, and prepared additional AUPs on areas that weren't originally covered (security of email/electronic messaging and social media), ending up with a neat product that sums things up nicely in 11 topic areas. It can be colour printed double-sided on just 3 sheets of glossy A4 paper to circulate to everyone (including joiners), or published on the corporate network for use on regular desktop PCs, laptops or tablets.

So far, so good ... but then it occurred to me yesterday that if the AUPs are to be readily available and accessible by all, the client could do with a 'mobile' version for workers' smartphones. Figuring out the page size, margins and formatting for mobiles, and further simplifying/trimming the content to suit small, narrow smartphone screens with very limited navigation took me another hour or two, ending up with a handy little document that looks professional, is engaging and reads well, makes sense and provides useful guidance on important information security matters. Reeeeesult!


In recognition of the client's valuable suggestion that sparked this, we won't be charging them for the AUP work - it's a bonus. The client gets a nice set of policies well suited to their business and people, while we have new products gracing the virtual shelves of our online store, a win-win. Happy days.

A bargain at just $20!

Now, about that Online Safety Bill: would anyone like to commission a glossy leaflet version in plain English, complete with pretty pictures?