Showing posts with label SCADA. Show all posts

Friday 15 September 2023

Checklust security


"
Seventy Questions to Assess Cybersecurity Risk on a Rapidly Changing Threat Landscapeis an ISACA 'industry news' article by Patrick Barnett. 

Whereas normally I give 'industry news' and checklists a wide berth, Patrick is (according to the article) highly qualified and experienced in the field, so I took a closer look at this one. The prospect of condensing such a broad topic to a series of questions intrigued me. I'm not totally immune to the gleaming allure of well-conceived checklists.

Patrick says:

"There are 70 questions that can be asked to determine whether an enterprise has most defensive principles covered and has taken steps to reduce risk (and entropy) associated with cybersecurity. If you can answer “Yes” to the following 70 questions, then you have significantly reduced your cybersecurity risk. Even so, risk still exists, and entropy must be continuously monitored and mitigated. There is no specific number of layers that can remove all risk, just as there is nothing in the physical universe that does not experience entropy."
Hmmm. OK. Despite the definitive opening statement, I take that introduction as an implicit acknowledgement that there may be more than 70 questions ... and indeed many of the 70 are compound/complex questions, such as "35. Do you prevent the disclosure of internal IP address and routing information on the Internet?" Most of us would instinctively answer "Yes" to that ... but look more closely: the question concerns both "IP address" and "routing information", not either alone. What qualifies as "routing information" anyway? What about network traffic other than IP? What counts as 'disclosure'? What does Patrick mean by 'prevent'? And are we only concerned about 'the Internet'? If you are serious about addressing the information risks relating to NAT and all that, you surely appreciate the naivety of question 35. If this is all Greek to you, maybe not.
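To make the point concrete, here is a minimal sketch (mine, not Patrick's) of what verifying just one narrow reading of question 35 might involve: checking whether internal IPv4 addresses leak in HTTP response headers. Routing information, IPv6, DNS records, email headers and the rest would all need separate checks, which is rather the point.

```python
import ipaddress
import re

# Naive four-octet matcher; ipaddress does the real validation below.
IPV4_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_private_ips(text: str) -> list[str]:
    """Return private/reserved IPv4 addresses found in the text."""
    leaks = []
    for candidate in IPV4_PATTERN.findall(text):
        try:
            addr = ipaddress.ip_address(candidate)
        except ValueError:
            continue  # e.g. 999.1.2.3 matches the regex but is not an address
        if addr.is_private:
            leaks.append(candidate)
    return leaks

# A Via header disclosing an internal proxy address, alongside a public IP
headers = "Via: 1.1 10.0.3.17\r\nServer: nginx\r\nX-Real-IP: 8.8.8.8"
print(find_private_ips(headers))  # ['10.0.3.17']
```

Even this toy answers only a sliver of the question as written, and it already involves judgement calls (which address ranges count as 'internal'? which channels count as 'the Internet'?).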

Thursday 29 March 2018

Smart assurance

With just days to go to the delivery deadline, April's security awareness module on assurance is rounding the final corner and fast approaching the finishing line.

I've just completed updating our 300+ page hyperlinked glossary defining 2,000+ terms of art in the general area of information risk management, security, privacy, compliance and governance. Plus assurance, naturally.

As I compiled a new entry for Dieselgate, it occurred to me that since things are getting smarter all the time, our security controls and assurance measures need to smarten up at the same rate or risk being left for particulates. Emissions and other type-testing and compliance verification for vehicles need to go up a level, while the associated safety and technical standards, requirements, laws and regulations should be updated to reflect the new smart threats. In-service monitoring and testing becomes more important if we can no longer rely on lab tests, but that creates further issues and risks relating to the less-well-controlled environment, such as inconsistencies and calibration problems, plus the practical difficulties of testing products while they are being used. Somehow I doubt in-service testing will prove cheaper and quicker than lab tests!

Product testing is a very wide field. Take medical products, for instance: there are huge commercial pressures associated with accredited testing and certification, with implications for safety and profitability. Presumably smart pacemakers or prosthetics could be programmed to behave differently in the lab and in the field, in much the same way as those VW diesel engines. Same with smart weapons, smart locks, smart white goods and more. I'm not entirely sure what might be gained by beating the system, although it's not unreasonable to assume that 'production samples' provided for approval testing and product reviews will have thicker gold plating than the stuff that makes it to market.

The more things are software-defined, the greater the possibility of diversity and unanticipated situations in the field. The thing that passed the test may be materially different to the one on the shelf, and it could easily change again with nothing more than a software update or different mode of operation.

At the same time, testing itself is being smartened up. For decades, lab test gear has been increasingly computerized, networked and generalized, allowing more sophisticated, reliable and comprehensive tests. I guess the next logical step is for the test gear to communicate with the equipment being tested, interrogating its programming and configuration to supplement more conventional tests ... and running straight into the assurance issue of how far the information offered can be trusted.

The various types of assurance required by owners/investors, authorities and regulators can be made smarter too, through the use of more sophisticated data collection and analysis - with the same issue that fraudsters and other unethical players are increasingly likely to try to beat the tests and conceal their nefarious activities through smarts. Remember Enron and Barings Bank? There are significant implications here for auditors, inspectors and other forms of oversight and rule-checking.

"At what point would you like your product to comply with the regulations, sir?"

The Iraqi/US WMD fiasco is another strong hint that deadly games are being played in the defense domain, while fake news and reputational-engineering are further examples of the information/cyberwars already raging around us. Detecting and hopefully preventing election fraud gets tougher as election fraudsters become smarter. Same with bribery and corruption, plus regular crimes.

Despite being "weird" (I would say unconventional, creative or novel), assurance has turned out to be a fascinating topic for security awareness purposes, with implications that only occurred to me in the course of researching and preparing the materials. I hope they inspire at least some of our customers' people in the same way, and get them thinking more broadly about information risk ... because risk identification is what launches the risk management sequence. If you don't even recognize a risk as such, you're hardly going to analyze and treat it, except by accident - and, strangely, that does not qualify as best practice.

Thursday 4 January 2018

IoT and BYOD security awareness module released

The Internet of Things and Bring Your Own Device typically involve the use of small, portable, wireless networked computer systems, big on convenience and utility but small on security. Striking the right balance between those and other factors is tricky, especially if people don’t understand or willfully ignore the issues – hence education through security awareness on this topic makes a lot of sense.

From the average employee’s perspective, BYOD is simply a matter of working on their favorite IT devices rather than being lumbered with the clunky corporate stuff provided by most organizations. In practice, there are substantial implications for information risk and security, e.g.:
  • Ownership and control of the BYOD device is distinct from ownership and control of the corporate data and IT services;
  • The lines between business use and personal life, and data, are blurred;
  • The organization and workers may have differing, perhaps even conflicting expectations and requirements concerning security and privacy (particularly the workers' private and personal information on their devices);
  • Granting access to the corporate network, systems, applications and data by assorted devices, most of which are portable and often physically remote, markedly changes the organization’s cyber-risk profile compared to everything being contained on the facilities and wired LANs;
  • Increasing technical diversity and complexity leads to concerns over supportability, management, monitoring etc., and security of course.  Complexity is the information security manager's kryptonite.
IoT is more than just allowing assorted things to be connected to and accessed through the Internet and/or corporate or home networks. Securing things is distinctly challenging when the devices are technically and physically diverse, often physically inaccessible, and limited in storage, processing and other capabilities (cybersecurity in particular). If they deliver business- or safety-critical functions, the associated risks may be serious or grave.

It strikes me as odd that risks to the critical national infrastructure resulting from the proliferation of IoT things are not higher up the public agendas of various governments. I have the uneasy feeling that maybe the authorities are wary of drawing attention to the issue, except (hopefully!) in private dealings with the utilities plus the defense, finance and healthcare industries. Conversely, I could be mistaken in believing that IoT is substantially increasing information risks in industrial situations: perhaps the risks are all fully under control. Perhaps pigs have wings.

Visit SecAware.com to boost your security awareness program and catch imaginations with creative content.

Wednesday 20 July 2016

In the full glare

Here's a neat illustration of the challenges facing those protecting critical national infrastructures.

Take a look at this map of the UK's fuel pipelines - a massive mesh of pipes criss-crossing the country, linking refineries and fuel stores with power stations and airports. Many of the pipes are buried, carrying large volumes of volatile and energetic fuel under substantial pressure for hundreds of miles across open country, along roads, over canals and under cities, hence the need for the map, the website and the organization behind it: trust me, you don't want people accidentally digging them up, or driving piles through them. For health and safety reasons, let alone the risk of serious economic and physical fallout, people driving big yellow mechanical diggers and pile-drivers need to know if they are within striking range of the pipes. Planners, architects and builders need to know where they lie, plus the operators who use and maintain them, oh and the emergency services just in case.

Now imagine you've been tasked with protecting those same pipes against deliberate attacks by, well, anyone with a big yellow digger and a grudge for starters. The list of potential adversaries and their possible reasons is long and changeable. Some of them have serious resources and capabilities behind them, and no particular rush.

The reality of protecting critical infrastructures is rather different than the Popular Mechanics perspective.

Saturday 30 April 2016

Industrial information security awareness

Having dusted off an old security awareness module on SCADA/ICS, we reviewed it to see what needed updating for May. It soon became clear that things have changed significantly in this area over the past seven years, hence we ended up re-scoping and re-writing the entire module. This time around we’ve broadened our perspective to cover all sorts of industrial IT systems and networks (including but going well beyond SCADA/ICS) and picked up on the issues relating to protecting critical national and corporate infrastructures.

There are important lessons to be learned from industrial incidents such as Fukushima, including the cascading failures that turned a Japanese disaster in 2011 into a global incident lasting much longer.

[I’m currently enjoying “The Power of Resilience: How the Best Companies Manage the Unexpected”, a fascinating book by Yossi Sheffi that uses the Sendai tsunami and other examples to illustrate business supply chain resilience.  Recommended reading.]

We also touch on the health and safety implications of industrial IT, acknowledging that shop-floor workers are valuable yet vulnerable information assets too and deserve every bit as much protection as do the robots, machine tools and pump controllers around them.

Tuesday 1 September 2015

IoT security awareness

The Internet of Things is a novel and rapidly evolving field making IoT security highly topical and yet, as with cybersecurity last month, it was something of a challenge to prepare a coherent, concise and valuable set of security awareness materials. 
In researching the topic, we discovered surprisingly little: a few companies marketing smart and mostly geeky things, a few news articles and lightweight gee-whizz journalistic pieces, and some almost impenetrable academic and technical papers about the technologies. Enterprising hackers are already exploring IoT, discovering and exploiting security vulnerabilities, ostensibly for education and demonstration purposes - at least for now. Shiny new things are appearing on the market every week to be snapped up by an eager if naïve public.
IoT presents a heady mix of risks and opportunities, with substantial commercial, safety, privacy, compliance and information security challenges ahead, and sociological implications for good measure. In a few years’ time when both things and IoT incidents have become commonplace (despite our very best efforts!), we may look back in amazement at the things we are doing today … but we are where we are, things are spreading fast and the risks are multiplying like salmonella on a Petri dish.

An IoT security awareness module is timely.

To prepare the materials, we took a back-to-basics approach, identifying and describing a wide range of risks associated with or arising from IoT as a starting point. For the staff stream, we focused on consumer things including smart home and wearables. For management, we discussed the commercial, strategic and policy concerns with IoT and IIoT (Industrial IoT). While it would have been easy just to highlight the security and privacy angle, we also discussed the business opportunities that arise from innovative things. Finding the right balance between risk and opportunity, or security and creativity, is the key to exploiting the amazing possibilities of these exciting new technologies.

The latest awareness module addresses the following generic learning objectives: 
  • Introduce IoT, an emerging and rapidly evolving field, explaining things, ubiquitous computing, mesh networks, IIoT and so forth; 
  • Outline the personal and business benefits driving IoT and IIoT adoption, touching on commercial opportunities, industry pressures and technology constraints plus wider societal issues, privacy concerns and so on; 
  • Explain the information risks arising from or relating to IoT & things, illustrating the threats, vulnerabilities and impacts with news of real-world IoT incidents, attacks and malware; 
  • Emphasize the four possible means of treating the risks (more than just security controls!);
  • Encourage the workforce to consider and ideally address the information risks, security and privacy aspects of IoT and things, going beyond mere ‘awareness’. 
IoT security is the 56th topic in our steadily growing portfolio of information security awareness materials. We're already working on another new topic for next month: 'rights and privileges' are core to IT security, crucial to logical access management, and important concepts in a much broader sense.

Could your security awareness program do with a kick up the wotsits? Wish you had the time and energy to research and write about emerging information security challenges? Get in touch!

Monday 10 March 2014

Rejected ISO/IEC 27002 control for SCADA

The ISO27k standards are written and maintained by a sizable committee of international experts, working through their national standards bodies and following formal processes, with most of the business conducted at just two face-to-face meetings per year.  As such, the committee sometimes struggles to accept changes and reflect emerging information security issues, particularly in the case of ISO/IEC 27002.

Back in 2011, I suggested the text below as a new control for SCADA/ICS in 27002 but was unable to persuade the project team of its merits, perhaps because they were hoping that ISO/IEC TR 27019:2013 would cover it.

---------------------------------

Security requirements for specialist IT systems


Control objective 

To identify and satisfy the particular information security control requirements of specialist IT systems such as industrial control systems.


Implementation guidance
The particular information security risks associated with specialist IT systems such as Industrial Control Systems (ICS), Supervisory Control And Data Acquisition (SCADA), embedded systems, safety-critical systems, building management systems, physical access control systems etc. should be assessed as part of the implementation and maintenance activities, and the appropriate information security controls should be in place. Specialist advice is recommended to assess the information security risks and specify, implement, use and maintain suitable information security controls. Competent information security analysts familiar with the information security risks and controls commonly applied to such systems (including the vendors of commercial systems) should be consulted or involved in the process. Applicable standards should be applied, and all relevant legal and regulatory compliance obligations satisfied.

Other information
Industrial control systems and related specialist systems are commonly managed outside of the IT department. Many are delivered as “turnkey” installations, with limited or no access for maintenance and management by the organization’s employees. For various reasons such as differing risks, priorities and objectives, the information security controls associated with such specialist systems often differ substantially from the general IT infrastructure and ordinary commercial and office IT systems. At the same time, such systems may have to address serious information security risks through unique information security control requirements, for example arising from their rôle in ensuring the health and safety of plant operators.


PS  I also proposed new or changed security controls for:
  • The computer suite (mostly physical controls)
  • SDLC (software development life cycle)
  • BCM (business continuity management) and
  • Cloud computing

Saturday 2 June 2012

Cyberwar Trojans - updated again

Are you surprised by the news that the US, in conjunction with Israel, was indeed responsible for attacking Iran's nuclear program using the Stuxnet worm/Trojan? Reports on Stuxnet and Duqu have previously pointed the finger at the US and Israel as the likely culprits, given the obvious political connotations, so confirmation from the White House is hardly a shock on that score.

What is surprising is that this was officially disclosed, right now.

Possibly the US government had reached the point where its position of continued denial and silence on the matter was simply untenable. Perhaps the impending release of a book about the Stuxnet affair meant that incriminating evidence was about to hit the streets, so releasing the story (via the NY Times, no less!) was a way for the White House to retain some control over the 'official' version of events.

Or perhaps this is all propaganda - the Stuxnet reports, the book, the official denials and pronouncements, the lot. Are we being fed the not-exactly-subtle line that the US has a proven, offensive, cyberwar capability, so foreign powers should be quaking in their cyberboots?

Doubtless a huge amount of work is going on behind the scenes in the US and elsewhere to bolster cyber defenses for Critical National Infrastructures, but realistically what has been achieved so far? I wonder if confirming Stuxnet may in fact be a calculated move to prompt those responsible for CNI security to up their game substantially. The specter of retaliatory cyberattacks by Iran or some other hostile foreign power should focus the minds of those in charge of CNI security on improving defenses, with the added benefit that they would also be guarding against cyberattacks from other quarters (terrorists, criminals and hacktivists, for example). And those attacks, frankly, are every bit as credible and likely as all-out cyberwar.

Still, one of the most fascinating aspects of the Stuxnet attack was that it involved jumping an air-gap to penetrate the Iranians' internal ICS/SCADA network, which was (supposedly) totally isolated from the big bad Interweb. Air-gapping networks is an obvious defense mechanism. According to public reports, Stuxnet jumped over by dint of an infected USB stick with which someone naively bridged the gap.

An outstanding keynote presentation by Mark Fabro at AusCERT on the forensic analysis of an ICS/SCADA malware infection suggests another possibility - namely that the ICS/SCADA systems may have been pre-infected before they were even delivered and installed. The air-gap between the Internet and internal networks, even coupled with rigorous controls over anything that might cross the gap, is moot if the internal network is already compromised. Suddenly, the fuzzy background chatter we've heard for years about possible backdoors in compilers, CPUs and cryptosystems comes into sharp focus: states with the resources to design and produce such high-tech kit patently have the wherewithal to insert secret backdoors, giving them control over anyone using their trusted equipment. Is this the ultimate Trojan horse, the most insidious of insider threats?
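One partial mitigation for the pre-infected-delivery scenario is to verify delivered firmware or software images against reference digests obtained through an independent channel before installation. A minimal sketch (function and file names are my own, purely illustrative):

```python
import hashlib

def firmware_matches(path: str, expected_sha256: str) -> bool:
    """Hash the delivered image in chunks and compare with the reference digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 64 KiB chunks so large images don't need to fit in memory
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()
```

Of course, if the vendor itself is compromised then the reference digest may be too, which is precisely the insider-threat problem: integrity checks only push the trust question one step up the supply chain.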


PS  Given that the recently-discovered Flame malware, dubbed "the most sophisticated cyber weapon yet unleashed", appears to be stealing 'technical information from the Middle East', it doesn't take a rocket surgeon to figure out a possible link to Stuxnet and hence the US Government - though that's mere conjecture, of course. As Kaspersky's blogger put it:
"Currently there are three known classes of players who develop malware and spyware: hacktivists, cybercriminals and nation states. Flame is not designed to steal money from bank accounts. It is also different from rather simple hack tools and malware used by the hacktivists. So by excluding cybercriminals and hacktivists, we come to conclusion that it most likely belongs to the third group. In addition, the geography of the targets (certain states are in the Middle East) and also the complexity of the threat leaves no doubt about it being a nation state that sponsored the research that went into it."
Perhaps the disclosure of Flame was the reason behind those revelations in the NY Times?

PPS (June 9th)  Seems disclosure of the US government's role in Stuxnet is being used for political gain, or at least for media exposure.

Thursday 26 March 2009

Pop Mechanics does infrastructure security

Popular Mechanics gives the US national infrastructure a once-over from the perspective of its resilience to cyberwarfare, asking "How Vulnerable is U.S. Infrastructure to a Major Cyber Attack? Could hackers take down key parts of our infrastructure? Experts say yes. They could use the very computer systems that keep America's infrastructure running to bring down key utilities and industries, from railroads to natural gas pipelines. How worried should we be about hacking, the new weapon of mass disruption?"

It starts with a pop culture doomsday scenario to grab the readers' attention: "The next world war might not start with a bang, but with a blackout. An enemy could send a few lines of code to control computers at key power plants, causing equipment to overheat and melt down, plunging sectors of the U.S. and Canadian grid into darkness. Trains could roll to a stop on their tracks, while airport landing lights wink out and the few traffic lights that remain active blink at random."

Referring to the "hodgepodge" of Industrial Control Systems controlling elements of the critical infrastructure such as power and water supplies, the author at one point claims that "a good rule of thumb is that any device that is computer-controlled and networked is vulnerable to hacking". That's true I guess, for undefined values of 'vulnerable'. But SCADA/ICS devices that are connected to wireless/microwave control links or use phone lines and modems are also vulnerable to hacking: are these 'networked' I wonder?

I would disagree with the author on one point. He says "Infrastructure is meant to last a long time, so upgrades to existing systems tend to occur at a glacial pace." The glacial pace is not because infrastructure is meant to last a long time, but because changing such complex, safety-critical systems in any way (even to implement security patches) creates additional risks that may outweigh the need to make the change. It's a risk management decision, of course, and a delicate one given that leaving the systems open to cyberwarfare attackers does not necessarily lead to cyberwarfare, whereas creating a power cut or safety incident is bound to hit the headlines.

The article covers the usual range of headline incidents and scare stories with a little expert commentary, and as such is fine as a general security awareness piece. There's nothing of much use here, though, for security or general management at critical infrastructure organizations.

Tuesday 24 March 2009

How to fix SCADA security [not]

In "A cautionary tale about nuclear change management", ComputerWorld blogger Scott McPherson discusses a few security incidents that have been linked to SCADA systems, picking out two causes: poor change management and problems with the IT architectures. If only things were so simple in Real Life.

According to Scott, the change management problem can be solved by adequate pre-release testing of patches. Mmm. OK, let's assume a SCADA-using organization has the resources to invest in an IT test jig comprehensive enough to model the live SCADA/ICS systems, complete with real-time data feed simulators and control panels, or at least a sufficient part of the complete live system to allow representative and realistic testing. Presumably they could test patches and software upgrades thoroughly enough to reduce the possibility of unintended consequences, but how far can, or indeed should, they go? Anyone who has actually tried exhaustive software testing, even in a very simple laboratory setting, knows that testing everything is impossible in practice. With the best will in the world, the fanciest test jig that money can buy and the most competent, skilled and diligent professional testers on the job, there is always residual risk at the declared end of testing. In real life, the end of testing is almost always declared by management well before the testers are truly happy, not least because the issues and risks that the planned software changes are supposed to fix inevitably persist at least until the fix is applied, so there are clearly competing pressures. Damned if we do, damned if we don't.
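To put a rough number on 'impossible in practice', here is a back-of-envelope calculation for a deliberately tiny, hypothetical controller. The sensor counts are invented for illustration, and the sketch ignores state and sequence effects entirely, which make matters far worse:

```python
# Input-space size for a toy controller: a handful of inputs,
# nothing remotely like a real SCADA installation.
inputs = {
    "temperature": 1000,     # 0.1-degree steps over a 100-degree range
    "pressure": 500,
    "valve_flags": 2 ** 16,  # 16 binary valve-position flags
    "mode": 8,
}

combinations = 1
for values in inputs.values():
    combinations *= values

print(f"{combinations:,} distinct input states")  # 262,144,000,000
seconds = combinations / 1000                     # at 1,000 tests per second
print(f"roughly {seconds / (60 * 60 * 24 * 365):.0f} years of testing")
```

Four inputs and a quarter of a trillion states: add a few more sensors, or any dependence on the order of events, and exhaustive testing is off the table for good.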

OK, I'm certainly not arguing that pre-release software testing is a waste of time on SCADA or any other IT systems, far from it. But the reality is that no matter how much testing and fixing is done, the eventual decision to implement implicitly if not explicitly accepts the residual risk. In my experience, the operational, safety and commercial risks associated with system failures on SCADA systems are so significant that the opposite situation is more of a problem, namely that SCADA systems are not patched at all, or at least not promptly, due to the extreme risk aversion. Legacy systems are the norm not the exception in SCADA/ICS-land. In the case of safety-relevant and certified systems, plus the highly specialized bespoke systems typical of controllers for complex machinery (such as, oh er, a nuclear power station), the inertia problem is even worse.

Scott's second point about IT architectural issues also seems rather glib to me. "The fact that some utilities -- including nuclear utilities -- are stupid enough to attach the servers that control and manage SCADA systems to the same Internet that runs porn and Nigerian scams and MySpace is ludicrous. It is also dangerous." That statement seriously denigrates the highly competent IT and business managers in the utilities, manufacturing and engineering companies where I have worked. Such people are far from stupid. As I said, they are highly risk-averse and do not take such decisions lightly. But again there are competing priorities. The Internet is a convenient, cheap way to access SCADA/ICS systems, networks and devices for remote diagnostics and support, for example, and often glues together critical business processes throughout the supply chain. Connecting the SCADA/ICS network to any other network (even the internal corporate LAN) is clearly fraught with danger, so security is always a concern.

The main beef I have with you, Scott, is that you have over-simplified the problems and provided trivial solutions, as if simply saying these things will make a difference. Calling the people who are actually dealing with the risks "stupid" is hardly going to make friends and influence people.

Thursday 19 March 2009

SCADA stories of 2008

SCADA security specialists Digital Bond run an annual summary of the top SCADA security stories of the year before. Here are their lists for 2008, 2007 and 2006.

In 2007, the story about successfully hacking and taking control of an electricity generating plant was hot news, along with NERC's moves to improve information security for the US electricity industry. In 2008, the US water industry seems to have followed NERC's lead with their own security roadmap.

Wednesday 4 March 2009

Scared of SCADA?

Our latest product is a brand new security awareness module on SCADA, ICS, DCS and related acronyms - essentially industrial process control systems. I suspect few employees outside IT will have heard of SCADA, and hardly any will have considered the security requirements associated with keeping the lights on, both literally (SCADA systems are heavily used by electricity generators and the grid) and figuratively (modern factories are packed with all manner of computerized industrial machinery). For those who work in ordinary offices rather than manufacturing, we point out that elevators and other facilities are typically managed by a Building Management System, itself a form of SCADA. For those who don't even work in an office, the Engine Management System in their car is another example.

In addition to the potential for unplanned production outages and disruption to critical infrastructures, the health and safety plus environmental protection aspects make SCADA security impacts potentially horrific. Simply being obscure is no defence against some hackers and, potentially, their terrorist masters. Governments and managers at major utilities are worried about SCADA security risks, so all in all this is an important awareness topic.

Tuesday 3 February 2009

Website content integrity failure

While researching for our next awareness module on SCADA security, I came across the Omron PLC website and couldn't help laughing when I read their news items. They haven't been well translated from the original - at least I doubt anyone would seriously have meant to write "The reverend converts the broadcasting waves echolike backwards from the RFID attach into digital aggregation that crapper then be passed on to computers that crapper attain ingest of it.". Let's hope we make more sense of SCADA security in our awareness briefings!