Friday 28 August 2020

NZ Stock Exchange DDoS continues
[Screenshot: the web site's "Host Error" page]

The New Zealand Stock Exchange is having a rough week.  Under assault from a sustained DDoS attack, its web servers have crumpled and fallen in an untidy heap again today, the fourth day of embarrassing and costly disruption.

DDoS attacks are generally not sophisticated hacks but crude overloads caused by sending vast volumes of data to overwhelm the servers.  

The Host Error message above shows "RedShield", which appears to be a security service remarkably similar to a Web Application Firewall (although the company claims to be producing something far better) ...

If so, RedShield appears to be passing DDoS traffic through to the stock exchange web servers, which can't cope. Presumably this particular attack does not fit the profile of the attacks RedShield is designed to block; in other words, RedShield is patently not preventing the DDoS.

I don't know whether RedShield is supposed to block DDoS traffic and is failing to do so, or if DDoS protection is simply not part of the RedShield service. Either way, it appears a DDoS attack is causing business impacts.

Whether RedShield is still working as designed to block application-level attacks is a moot point if the web servers are down ... but it is possible that the DDoS attack is an attempt to over-stress the security systems, allowing more sophisticated hacks to leak past the weakened defences.  Hopefully, RedShield is still faithfully blocking all of them.

More likely, I suspect, this is a classic DDoS extortion: the attackers are repeatedly demonstrating their power to disrupt the Stock Exchange's business, despite the defensive measures in place, to force the Exchange to pay a ransom (probably: they are understandably reluctant to reveal the details with the spooks at GCSB actively investigating the incident).

Defences against DDoS attacks start with the basics: network and server security, plus the policies and procedures to make sure those controls are effective in practice. Routine security monitoring and incident response should include characterising the attack in progress, leading to active responses ranging from 'simply' disconnecting the network feeds (perhaps literally pulling the cables out) to filtering, diverting or slowing down the network traffic, ideally blocking the malicious traffic while allowing legitimate traffic to flow as normal. I'm talking about fairly conventional network security controls (mostly firewalls), albeit with sufficient throughput to cope with the onslaught.
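To illustrate the 'slowing down the network traffic' idea, here is a minimal sketch of a per-source token-bucket rate limiter, one conventional filtering technique. The class and parameter names are my own invention for illustration; this is not anything RedShield, the Exchange or any particular firewall vendor actually uses:

```python
import time
from collections import defaultdict

class TokenBucketLimiter:
    """Per-source rate limiter: each client IP has a bucket of tokens
    that refills at `rate` tokens per second, up to `burst`. A request
    is allowed only if the bucket still holds a token, so a sustained
    flood from one source drains its bucket and gets dropped, while
    ordinary clients are barely affected."""

    def __init__(self, rate=10.0, burst=20.0):
        self.rate = rate                            # refill rate (requests/second)
        self.burst = burst                          # maximum bucket size
        self.tokens = defaultdict(lambda: burst)    # current tokens per source
        self.last = {}                              # timestamp of last check per source

    def allow(self, source_ip, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.last.get(source_ip, now)
        self.last[source_ip] = now
        # Refill according to elapsed time, capped at the burst size
        self.tokens[source_ip] = min(self.burst,
                                     self.tokens[source_ip] + elapsed * self.rate)
        if self.tokens[source_ip] >= 1.0:
            self.tokens[source_ip] -= 1.0
            return True
        return False    # over the limit: drop, delay or tarpit this request
```

A flood source exhausts its burst allowance almost immediately and is then throttled to the refill rate, while a browser making occasional requests never hits the limit. In practice this runs in the firewall or load balancer, not the application, and a volumetric attack from thousands of spoofed or botnet sources needs upstream (ISP or cloud) filtering as well.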

Almost certainly the responses would need to be coordinated with Internet service providers, internal IT service providers and the authorities. Given the clearly disruptive impacts on the business, a crisis team would be liaising with all involved while keeping senior management and other stakeholders informed. From personal experience, this is an extremely stressful time for everyone, all the more so if there was inadequate preparation i.e. business continuity management, crisis planning, incident management exercises and so forth, with lashings of security awareness and training.  [If it turns out the Exchange was not, in fact, adequately prepared for this, there are governance and accountability implications for senior management. DDoS is just one of several 'real and present dangers' for any Internet-connected business.]

From there, the sky's the limit in terms of potential investment in increased server and network capacity, resilience, flexibility and redundancy, including cloud-based DDoS mitigation services such as Cloudflare and Akamai, plus other business continuity arrangements designed to guarantee at least a minimal level of service for essential business activities. Quite possibly these are in effect and working just fine right now, despite the apparent disruption to the Exchange's website: I have no inside track here but I'll be watching the news with interest as the incident unfolds.
