Following the WannaCry ransomware attack, there's been no shortage of finger-pointing over who enabled these attacks to occur in the first place. Microsoft blames the National Security Agency (NSA) for developing the exploit in the first place. The NSA says it told Microsoft about the vulnerability months ago, which is roughly the same time the Shadow Brokers group published the code the NSA had developed to exploit the vulnerability, the same code the cybercriminals employed to such great effect.
At the same time, organizations are being taken to task for continuing to run legacy versions of Windows that are no longer supported. But even with support, it's not at all clear that the outcome of the WannaCry attacks would have been much different.
A new survey of 400 cybersecurity professionals conducted by Enterprise Management Associates on behalf of Bay Dynamics, a provider of security analytics, finds that 74 percent of them report being overwhelmed by the volume of vulnerability maintenance work they face. A full 79 percent of those cybersecurity professionals report that the patching approval process their organization relies on is mostly manual.
Most of them also admit they have little real capability to deal with zero-day threats. The survey finds that 64 percent admit that not all threat alerts are addressed each day, and another 52 percent report that threat alerts are improperly prioritized by their systems and therefore must be manually reprioritized.
The whole patching process has been deeply flawed for years. Software vendors issue patches all the time, but before those patches can be implemented, most organizations need to test the impact they might have on their applications. There are ways to automate the patching process, but for the most part patching applications remains an inefficient manual exercise. The result is that even when a vulnerability gets discovered and patched, it still takes weeks or even months for that patch to be deployed. In the meantime, cybercriminals go to work exploiting that vulnerability on the assumption that they have a few months before most of their potential victims will have implemented an effective defense. The truth is that many organizations, for one reason or another, never get around to implementing a patch at all. That's why well over 90 percent of successful cybersecurity attacks make use of a known exploit. Cybercriminals are not investing in software engineering; they are finding ingenious ways to deliver payloads that exploit a known vulnerability.
For years now there's been a lot of debate over how vendors handle the vulnerability disclosure process. Most vendors make a patch available at the same time they disclose the existence of the vulnerability. But it's clear that patch management processes that are still largely manual simply cannot keep pace with the volume of vulnerabilities being discovered.
Of course, all those vulnerabilities highlight just how flawed the software development process is. The good news is that modern applications built using technologies such as Docker containers reduce the need to patch applications in place. New functionality and fixes can be rolled out simply by replacing one container with a new one. But legacy applications that rely on patches will be with us for decades to come. Unfortunately, until the processes employed to manage those patches improve, the overall state of IT security is not likely to improve anytime soon.
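To make the container approach concrete, here is a minimal deployment sketch of swapping a running container for one built from an already-patched image, rather than patching software inside it. The image name (example/webapp), container name, tags, and port are all hypothetical, used purely for illustration.

```shell
# Pull a rebuilt image whose base layers already include the vendor's fix
# (example/webapp and the version tags are hypothetical names).
docker pull example/webapp:1.2.1

# Stop and remove the container running the old, vulnerable image.
docker stop webapp
docker rm webapp

# Start a replacement container from the patched image; the old image
# is never modified in place, so rollback is just rerunning with :1.2.0.
docker run -d --name webapp -p 8080:8080 example/webapp:1.2.1
```

In practice an orchestrator such as Kubernetes automates this swap as a rolling update, but the underlying model is the same: images are immutable, and "patching" means replacing them.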
Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, ProgrammableWeb and Slashdot. Mike also blogs about emerging cloud technology for Intronis MSP Solutions by Barracuda.