Skyscraper of cards


Having put it off for far too long, I'm belatedly trying to catch up with some standards work in the area of Root of Trust, which for me meant starting with the basics, studying simple introductory articles about RoT.

As far as I can tell, RoT is a concept - the logical basis, the foundation on which secure IT systems are built.

'Secure IT systems' covers a huge range. At the high end are those used for national security and defence purposes, plus safety- and business-critical systems facing enormous risks (substantial threats and impacts). At the low end are systems where the threats are mostly accidental and the impacts negligible - perhaps mildly annoying. Not being able to tell precisely how many steps you've taken today, or being unable to read this blog, is hardly going to stop the Earth spinning on its axis. In fact, 'mildly' may be overstating it.

'Systems' may be servers, desktops, portables and wearables, plus IoT things and all manner of embedded devices - such as the computers in any modern car or plane controlling the engine, fuel, comms, passenger entertainment, navigation and more, or the smart controller for a pacemaker.

Trust me, you don't want your emotionally disturbed ex-partner gaining anonymous remote control of your brakes, altimeter or pacemaker.

In terms of the layers, we the people using IT are tottering precariously on top of a house of cards. We interact with application software, which interacts with the operating system and, via drivers and microcode, with the underlying hardware. A 'secure system' is a load of software running on a bunch of hardware, where the software has been designed to distrust the users and administrators, other software and the hardware, all the way down to, typically, a Hardware Security Module, Trusted Platform Module or similar dedicated security device, subsystem or chip.

Ironically in relation to RoT, distrust is the default, particularly for the lower layers unless/until they have been authenticated - but there's the rub: towards the bottom of the stack, how can low-level software be sure it is interacting with and authenticating the anticipated security hardware if all it can do is send and receive signals or messages? Likewise, how can the module be sure it is interacting with the appropriate low-level software? What prevents a naughty bit of software acting as a middleman between the two, faking the expected commands and manipulating the responses in order to subvert the authentication controls? What prevents a nerdy hacker connecting logic and scope probes to the module's ports in order to monitor and maybe inject signals - or just noise, to see how well the system copes? How about a well-appointed team of crooks faking a bank ATM's crypto-module, or a cluster of spooks figuring out the nuclear missile abort codes?
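The classic answer to that mutual-suspicion problem is challenge-response. Here's a minimal sketch in Python - purely illustrative, with a made-up key; a real TPM does this in silicon, with a secret provisioned at manufacture that never leaves the chip. The host sends a fresh random nonce, and only a module holding the right secret can compute the expected answer:

```python
import hashlib
import hmac
import secrets

# Illustrative only: in a real device this key is burned in at manufacture
# and is never readable by software.
SHARED_KEY = b"provisioned-at-manufacture"

def module_respond(challenge: bytes) -> bytes:
    """What the security module computes: an HMAC over the host's nonce."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def host_verify() -> bool:
    """Host side: send a fresh random nonce, check the module's response."""
    nonce = secrets.token_bytes(32)      # fresh per attempt, so replaying
    response = module_respond(nonce)     # a recorded old response fails
    expected = hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

print(host_verify())  # True, unless something meddled with the exchange
```

The fresh nonce defeats straightforward replay of captured responses; a middleman can still relay messages back and forth, but it cannot forge an answer without the key - which is exactly why the secret has to live in, and never leave, the hardware.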

Physically securing the hardware is a start, such that if someone tries to - say - open ('decapsulate') the TPM chip to analyse the silicon die under an electron microscope in the hope of finding some secret key coded within, the chip somehow destroys itself in the process - perhaps also the warhead for good measure.

Other hardware/electronic controls can make it virtually impossible for hardware hackers to mount side-channel attacks, painstakingly monitoring and manipulating the module's power supply and ambient temperature in an attempt to reveal its inner secrets.
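Timing is a software-visible cousin of those power and temperature channels, and makes the principle easy to demonstrate. A toy Python example (nobody's actual product code): a naive comparison that bails out at the first wrong byte leaks, through its running time, how much of a secret an attacker has guessed correctly, whereas a constant-time comparison gives nothing away.

```python
import hmac

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    # Returns at the first mismatching byte, so the elapsed time reveals
    # how many leading bytes of the guess were right - a timing side channel.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def safe_compare(secret: bytes, guess: bytes) -> bool:
    # Takes the same time whether the first byte or the last byte is wrong;
    # Python's standard library provides this as hmac.compare_digest().
    return hmac.compare_digest(secret, guess)

print(safe_compare(b"s3cret!!", b"s3cret!!"))  # True
print(safe_compare(b"s3cret!!", b"guess123"))  # False
```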

Cryptography is the primary control, coupled with appropriate use of authentication and encryption processes in both hardware and software (e.g. 'microcode' physically built into the TPM chip's crypto-processor), plus other inscrutable controls (e.g. rate-limiting brute force attacks and, as a last resort, once again sacrificing itself, taking its secrets with it).
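To make the rate-limiting idea concrete, here's a toy Python model of TPM-style 'dictionary attack' lockout - the class name, limits and PIN interface are all invented for illustration. After a handful of bad guesses the module simply stops answering for a while, turning a brute-force attack from hours into centuries:

```python
import time

class AntiHammeringModule:
    """Toy model of TPM-style dictionary-attack lockout: after too many
    failed attempts the module refuses to respond for a cooling-off period."""

    MAX_FAILURES = 5          # illustrative values only - real devices vary
    LOCKOUT_SECONDS = 600

    def __init__(self, correct_pin: str):
        self._pin = correct_pin
        self._failures = 0
        self._locked_until = 0.0

    def try_pin(self, guess: str) -> bool:
        now = time.monotonic()
        if now < self._locked_until:
            raise RuntimeError("module locked out - come back later")
        if guess == self._pin:
            self._failures = 0            # success resets the counter
            return True
        self._failures += 1
        if self._failures >= self.MAX_FAILURES:
            self._locked_until = now + self.LOCKOUT_SECONDS
        return False

module = AntiHammeringModule("4-8-15-16-23-42")
for attempt in ("0000", "1234", "9999", "1111", "2222", "3333"):
    try:
        module.try_pin(attempt)
    except RuntimeError as oops:
        print(oops)                       # the sixth guess hits the lockout
```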

Developing, producing and testing secure systems is tough, even with access to low-level debugging mechanisms such as JTAG ports and insider knowledge about the design. There must be a temptation to install hard-coded backdoors (cheat codes), despite the possibility of 'some idiot' further down the line failing to disable them before products start shipping. There is surely a fascination with attempting to locate and open the backdoors without tripping the tripwires that spring open the trapdoors to oblivion.

OK, so now imagine all of that in relation to cloud computing, where 'the system' is not just a physical computer but a fairly loose and dynamic assembly of virtual systems running on servers who-knows-where under the control of who-knows-who, sharing the global Internet who-knows-how.

Having added several extra floors to our house of cards, what could possibly go wrong? 

That's what ISO/IEC 27070:2021 addresses. 

At least, I think so. My head hurts. I may be coming down with vertigo.