Safe & secure
The Coming Software Apocalypse is a long, well-written article about the growing difficulty of coding extremely complex modern software systems. With something on the order of 30 to 100 million lines of program code controlling fly-by-wire planes and cars, these systems are far too large and complicated for even gifted programmers to master single-handedly. Inadequate specifications, resource constraints, tight or unrealistic delivery deadlines, laziness and corner-cutting, bloat, the cloud, teamwork, compliance assessments, airtight change controls, and integrated development environments can all make matters worse.
Author James Somers spins the article around a central point: the coding part of software development is a tough intellectual challenge. Programmers write programs telling computers to do stuff, leaving them divorced from the stuff itself - the business end of their efforts - by several intervening, dynamic and interactive layers of complexity. Since there's only so much they can do to ensure everything goes to plan, they largely rely on the integrity and function of those other layers ... and yet, despite being pieces of a bigger puzzle, they may be held to account for the end result in its entirety.
As if that's not bad enough already, the human beings who actually use, manage, hack and secure IT systems present further challenges. We're even harder to predict and control than computers, some quite deliberately so! From the information risk and security perspective, complexity is our kryptonite, our Achilles heel.
Somers brings up numerous safety-related software/system incidents, many of which I have seen discussed on the excellent RISKS List. Design flaws and bugs in software controlling medical and transportation systems are recurrent topics on RISKS, due to the obvious (and not so obvious!) health and safety implications of, say, autonomous trains and cars.
All of this has set me thinking about 'safety' as a future awareness topic, given the implications for all three of our target audiences:
- Workers in general increasingly rely on IT systems for safety-critical activities. It won't be hard to think up everyday examples - in fact it might be tough to focus on just a few!
- With a bit of prompting, managers should readily appreciate the information risks associated with safety- and business-critical IT systems, and would welcome pragmatic guidance on how to treat them.
- The professional audience includes the programmers and other IT specialists, business analysts, security architects, systems managers, testers and others at the sharp end, doing their best to prevent, or at least minimize, the adverse effects when (not if) things go wrong. By introducing the integration and operational aspects of complex IT systems in real-world situations, illustrated by examples drawn from James Somers' article, RISKS and elsewhere, we can hopefully get them thinking, researching and talking about this difficult subject, including ways to bring simplicity and order to the burgeoning chaos.
Well, that's the outline plan - today, anyway. No doubt the scope will evolve as we continue researching and then drafting the materials, but at least we have a rough goal in mind: another awareness topic to add to our bulging portfolio.