Thursday 29 March 2018

Smart assurance

With just days to go until the delivery deadline, April's security awareness module on assurance is rounding the final corner and fast approaching the finishing line.

I've just finished updating our 300+ page hyperlinked glossary, defining 2,000+ terms of art across information risk management, security, privacy, compliance and governance. Plus assurance, naturally.

As I compiled a new entry for Dieselgate, it occurred to me that since things are getting smarter all the time, our security controls and assurance measures need to smarten up at the same rate or risk being left for particulates. Emissions and other type-testing and compliance verification for vehicles need to go up a level, while the associated safety and technical standards, requirements, laws and regulations should also be updated to reflect the new smart threats. In-service monitoring and testing become more important if we can no longer rely on lab tests, but that creates further issues and risks in a less-well-controlled environment, such as inconsistencies and calibration problems, plus the practical difficulty of testing products while they are in use. Somehow I doubt in-service testing will prove cheaper or quicker than lab tests!
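To make the mechanism concrete, here is a minimal, purely hypothetical sketch of the 'defeat device' pattern in Python - the sensor names, thresholds and calibration maps are all invented for illustration, not drawn from any real engine controller:

```python
# Hypothetical illustration of the 'defeat device' pattern: the controller
# infers from its ordinary sensor inputs that it is probably on a test rig,
# then silently switches to a cleaner (but less lively) calibration.
# All names and thresholds below are invented for this sketch.

def looks_like_a_test_cycle(speed_kmh: float, steering_angle_deg: float,
                            elapsed_s: float) -> bool:
    """Crude heuristic: wheels turning but the steering dead straight for
    a long stretch is characteristic of a dynamometer, not a real road."""
    return speed_kmh > 0 and abs(steering_angle_deg) < 0.5 and elapsed_s > 120

def select_engine_map(speed_kmh: float, steering_angle_deg: float,
                      elapsed_s: float) -> str:
    if looks_like_a_test_cycle(speed_kmh, steering_angle_deg, elapsed_s):
        return "low_emissions_map"   # clean enough to pass the lab test
    return "performance_map"         # what the customer actually gets

# On the rig: straight steering for 20 minutes -> the clean calibration
print(select_engine_map(50.0, 0.0, 1200.0))   # low_emissions_map
# On the road: normal steering input -> the dirty calibration
print(select_engine_map(50.0, 12.0, 1200.0))  # performance_map
```

The controller needs nothing more exotic than its normal sensor feed to guess it is being tested, which is precisely why moving the testing out of the lab and onto the road closes off this particular trick.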

Product testing is a very wide field. Take medical products, for instance: there are huge commercial pressures associated with accredited testing and certification, with implications for both safety and profitability. Presumably smart pacemakers or prosthetics could be programmed to behave differently in the lab and in the field, in much the same way as those VW diesel engines. The same goes for smart weapons, smart locks, smart white goods and more. I'm not entirely sure what might be gained by beating the system, although it's not unreasonable to assume that 'production samples' provided for approval testing and product reviews will have thicker gold plating than the stuff that makes it to market.

The more things are software-defined, the greater the possibility of diversity and unanticipated situations in the field. The thing that passed the test may be materially different to the one on the shelf, and it could easily change again with nothing more than a software update or different mode of operation.

At the same time, testing itself is being smartened up. For decades, lab test gear has been increasingly computerized, networked and generalized, allowing more sophisticated, reliable and comprehensive tests. I guess the next logical step is for the test gear to communicate with the equipment under test, interrogating its programming and configuration to supplement more conventional tests ... and running straight into the assurance issue of how far the information offered can be trusted.
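As a sketch of that trust problem - again entirely hypothetical, with invented class and method names - consider a test rig that simply asks the device under test to report a hash of its own firmware:

```python
# Hypothetical sketch of why self-reported configuration can't simply be
# trusted: the device controls its own answers.
import hashlib

APPROVED_HASH = hashlib.sha256(b"certified firmware v1.0").hexdigest()

class HonestDevice:
    def __init__(self, firmware: bytes):
        self.firmware = firmware

    def report_firmware_hash(self) -> str:
        # Honest implementation: hash whatever is actually installed
        return hashlib.sha256(self.firmware).hexdigest()

class DeceptiveDevice(HonestDevice):
    def report_firmware_hash(self) -> str:
        # Lying implementation: replay the hash the tester wants to see,
        # regardless of the firmware actually installed
        return APPROVED_HASH

def interrogate(device: HonestDevice) -> bool:
    """The test rig's naive check: ask the device what it is running."""
    return device.report_firmware_hash() == APPROVED_HASH

print(interrogate(HonestDevice(b"certified firmware v1.0")))  # True
print(interrogate(HonestDevice(b"hacked firmware")))          # False
print(interrogate(DeceptiveDevice(b"hacked firmware")))       # True - fooled!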

The various types of assurance required by owners/investors, authorities and regulators can be made smarter too, through more sophisticated data collection and analysis - with the same caveat: fraudsters and other unethical players are increasingly likely to use smarts of their own to beat the tests and conceal their nefarious activities. Remember Enron and Barings Bank? There are significant implications here for auditors, inspectors and other forms of oversight and rule-checking.

"At what point would you like your product to comply with the regulations, sir?"

The Iraqi/US WMD fiasco is another strong hint that deadly games are being played in the defense domain, while fake news and reputational engineering are further examples of the information/cyberwars already raging around us. Detecting and hopefully preventing election fraud gets tougher as election fraudsters get smarter. The same goes for bribery and corruption, plus regular crimes.

Despite being "weird" (I would say unconventional, creative or novel), assurance has turned out to be a fascinating topic for security awareness purposes, with implications that only occurred to me in the course of researching and preparing the materials. I hope they inspire at least some of our customers' people in the same way, and get them thinking more broadly about information risk ... because risk identification is what launches the risk management sequence. If you don't even recognize a risk as such, you're hardly going to analyze and treat it, except by accident - and, strangely, that does not qualify as best practice.
