One of the great ironies of cybersecurity is that two of the attacks that have gained the most global notoriety have inflicted comparatively little economic damage. The entities behind WannaCry and the latest variant of the Petya ransomware demanded $300 in Bitcoin to decrypt data. In the case of WannaCry, the attack was blunted when cybersecurity experts implemented a kill switch. Meanwhile, an Internet service provider in Germany has blocked the email address to which victims of the latest global ransomware attack were supposed to direct their payments. Speculation runs rampant as to whether the global attention these attacks generated caused cybercriminals to reconsider whether the risk was worth it, or whether the latest attack was a cyber espionage operation that spun out of control.
When it comes to IT security, most small-to-medium businesses (SMBs) have a herd mentality. They know there are predators, but they assume that, given the sheer number of SMBs, the odds are good some company other than them will fall victim. What that thinking fails to account for is how efficient the predators are becoming at hunting.
A new report published this week by Malwarebytes, a provider of malware removal tools, makes plain the size and scope of the problem. The report finds that the amount of malware discovered within U.S. SMB organizations with fewer than 1,000 employees increased a startling 165 percent between the first quarter of 2016 and the same period in 2017. Even accounting for the possibility that organizations are getting better at discovering malware, that increase in volume suggests that either there are a lot more predators or they have become much more efficient at launching attacks. The truth is a little of both. Cybercriminals have developed an elaborate marketplace through which they sell and share exploits, which makes it simpler for cybercriminals with limited skills to employ malware. At the same time, cybercriminals are taking advantage of bots and adware to spread malware more broadly than ever.
At the Gartner Security and Risk Management Summit 2017 conference held this week, Gartner unfurled what it describes as the top security technologies IT organizations should be employing in 2017. They include:
- Cloud Workload Protection Platforms: Hybrid cloud workload protection platforms (CWPP) that provide an integrated way to protect workloads via a single management console as well as a single way to express security policy, regardless of where the workload runs.
- Remote Browser: By isolating the browsing function, Gartner says malware is kept off the end-user system. That, in turn, reduces the surface area for attack by shifting the risk of attack to the server sessions, which can be reset to a known good state on every new browsing session, tab opened or URL accessed.
- Deception: Deception technologies are defined by Gartner as the use of deceits, decoys and/or tricks designed to thwart, or throw off, an attacker's cognitive processes, disrupt an attacker's automation tools, delay an attacker's activities or detect an attack. By using deception technology behind the enterprise firewall, enterprises can better detect attackers that have penetrated their defenses with a high level of confidence in the events detected. Gartner says deception technology implementations now span multiple layers within the stack, including endpoint, network, application, and data.
The cybersecurity elephant in the room that most organizations don’t really want to address is that a huge percentage of the systems still being used every day are indefensible.
A survey of 500 chief information security officers (CISOs) conducted by Vanson Bourne on behalf of Bromium, a provider of container software that provides higher levels of isolation between applications and the platforms they run on, finds organizations are now issuing patches five times per month on average. The report estimates that each patch requires 13 staff-hours to implement, and that more than half of the organizations surveyed had to pay staff overtime or hire a third-party services firm at an average cost of $19,908 per patch. It’s little wonder that 53 percent of the CISOs describe crisis patch management as a major disruption to their IT and security teams.
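Taking the survey’s averages at face value, a back-of-the-envelope calculation shows how quickly those numbers compound over a year. This is only a sketch: the `emergency_fraction` parameter is an assumption of mine (the survey says "more than half" of organizations incurred emergency costs, not a per-patch rate).

```python
# Back-of-the-envelope patching burden using the Vanson Bourne/Bromium
# survey figures quoted above. These are averages; real costs vary widely.

PATCHES_PER_MONTH = 5               # patches issued per month (survey average)
HOURS_PER_PATCH = 13                # staff-hours to implement one patch
EMERGENCY_COST_PER_PATCH = 19_908   # overtime / third-party cost in USD

def annual_patching_burden(emergency_fraction=0.5):
    """Estimate yearly patch count, staff-hours, and emergency spend.

    emergency_fraction is an illustrative assumption: the share of
    patches that trigger overtime or outside help.
    """
    patches_per_year = PATCHES_PER_MONTH * 12
    hours = patches_per_year * HOURS_PER_PATCH
    emergency_spend = patches_per_year * emergency_fraction * EMERGENCY_COST_PER_PATCH
    return patches_per_year, hours, emergency_spend

patches, hours, spend = annual_patching_burden()
print(f"{patches} patches/year, {hours} staff-hours, ~${spend:,.0f} emergency spend")
```

Even under these rough assumptions, a single organization is looking at hundreds of staff-hours and six figures of emergency spend per year just to keep up with patching.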
There’s not much IT security professionals can do to prevent cybercriminals from launching attacks in the first place. That means the name of the IT security game is to reduce the mean time to detection (MTTD) as part of a larger effort to reduce the mean time to response (MTTR).
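Both metrics boil down to simple averages over incident timelines. The sketch below shows one way to compute them; the record layout and sample timestamps are hypothetical, not drawn from any particular SIEM product.

```python
# Minimal sketch: computing mean time to detection (MTTD) and mean time
# to response (MTTR) from incident records. Record format is illustrative.
from datetime import datetime
from statistics import mean

# Each record: (initial compromise, detection, resolution) -- sample data
incidents = [
    (datetime(2017, 6, 1, 9, 0), datetime(2017, 6, 1, 21, 0), datetime(2017, 6, 2, 9, 0)),
    (datetime(2017, 6, 3, 8, 0), datetime(2017, 6, 4, 8, 0), datetime(2017, 6, 4, 20, 0)),
]

def mttd_hours(records):
    """Average hours from initial compromise to detection."""
    return mean((det - comp).total_seconds() / 3600 for comp, det, _ in records)

def mttr_hours(records):
    """Average hours from initial compromise to resolution."""
    return mean((res - comp).total_seconds() / 3600 for comp, _, res in records)

print(f"MTTD: {mttd_hours(incidents):.1f}h, MTTR: {mttr_hours(incidents):.1f}h")
```

The point of tracking both numbers is that shrinking MTTD is what drags MTTR down: you cannot respond to what you have not yet detected.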
Unfortunately, too many IT organizations are still thinking about MTTD in terms of detection of malware at some point after it lands on their systems. IT security would be a whole lot better for all concerned if organizations started thinking about MTTD in terms of detecting malware when it first appears in the wild. The good news is that threat intelligence services are getting better at detecting these threats. Most IT security vendors subscribe to multiple IT threat intelligence services. Most of them also responsibly share research about potential threats with each other. But IT organizations would be well advised to develop their own threat intelligence capabilities. After all, to be forewarned is to be forearmed.
When you look at the vulnerabilities cybercriminals exploit most routinely, they generally fall into two broad categories. The first usually involves some sort of phishing attack in which an end user is essentially tricked into downloading a document or clicking on a link that infects their system with malware. The second class of attack is usually aimed at well-known vulnerabilities in an operating system, database or application.
Beyond continually educating users about how to recognize these threats, there’s not much the average cybersecurity professional can do about phishing attacks except try to contain the damage. But in the case of software that contains known vulnerabilities, there is reason for optimism on two fronts. The first is that, thanks to advances in technologies such as machine learning algorithms, it should become a lot easier to discover those vulnerabilities. The second is the rise of a DevSecOps movement among application developers.
As an extension of the DevOps movement, DevSecOps is starting to gain credence because more developers are being held accountable for the quality of their code. Previously, application developers pretty much wrote code that, once they deemed it finished, they threw over the proverbial wall to the IT operations team to deploy. After several contentious meetings with those developers, the IT operations team would eventually get the code to a place where it could be deployed in a production environment. At no time, however, was anybody testing that software for anything more than basic compliance with a vague set of security policies. Because of that flawed process, there’s more software that can be easily exploited by cybercriminals than anyone cares to admit. The Verizon Data Breach Report for 2017 makes that point abundantly clear: of the 1,935 breaches analyzed, 88 percent were accomplished using an all-too-common list of nine attack vectors. How this state of affairs came about continues to boggle the minds of IT security professionals everywhere.
But now that more developers are being held to account, there is, not surprisingly, a lot more interest in including security testing within the larger application testing process. In fact, instead of waiting until the end of the application development process to do that testing, there’s a concerted effort now to test applications at each stage of the build process. None of this means that all the code that’s already been deployed is suddenly going to be magically fixed. But it does mean that as legacy applications get updated or replaced, the inherent security of those applications should substantially improve.
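The stage-by-stage testing idea can be sketched as a pipeline that fails fast: each build stage runs a security check, and a failure blocks everything downstream. The stage names and check functions below are hypothetical stand-ins, not the API of any real CI product or scanner.

```python
# Illustrative DevSecOps-style pipeline: every stage gates on a security
# check before the next stage runs. Checks here are toy stand-ins for
# real tools (a SAST scanner, a dependency vulnerability audit).

def static_analysis(artifact):
    # Stand-in for static analysis flagging a known-dangerous pattern.
    return "eval(" not in artifact["source"]

def dependency_audit(artifact):
    # Stand-in for checking dependencies against a vulnerability feed.
    return not set(artifact["deps"]) & {"vulnerable-lib-1.0"}

STAGES = [("build", static_analysis), ("package", dependency_audit)]

def run_pipeline(artifact):
    """Fail fast: stop at the first stage whose security check fails."""
    for stage, check in STAGES:
        if not check(artifact):
            return f"blocked at {stage}"
    return "released"

print(run_pipeline({"source": "print('hi')", "deps": ["requests"]}))
```

The design choice worth noting is the fail-fast ordering: the earlier a vulnerability is caught, the cheaper it is to fix, which is exactly the argument for testing at each stage rather than only at the end.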
Of course, there will never be such a thing as perfect security. But security professionals know that most organizations are their own worst enemies. The exploits routinely used to compromise their security are not all that sophisticated. There are no legions of hackers with awesome programming skills stealing organizations blind. There are, however, thousands of programmers with just enough skill to launch an attack using code somebody else wrote. Most of those programmers are not getting rich; most of them do it only because the criminals that hire them pay more for their skills than anybody else. If they could be gainfully employed doing something more interesting and rewarding, they probably wouldn’t be involved in cybercrime in the first place. There will always be cybercriminals. But right now, there are so many of them because the current state of IT security makes it too easy. Thanks to the rise of DevSecOps, however, there may soon come a day when that’s no longer the case.
Get more information on DevSecOps at their website at http://www.devsecops.org/
Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb and Slashdot. Mike also blogs about emerging cloud technology for Intronis MSP Solutions by Barracuda.
Following the WannaCry ransomware attack, there’s been no shortage of finger pointing when it comes to laying blame for who enabled these attacks in the first place. Microsoft blames the National Security Agency (NSA) for developing the exploit in the first place. The NSA says it told Microsoft about the vulnerability months ago, roughly the same time that Wikileaks published the code the NSA had developed to exploit the vulnerability that the cybercriminals employed to such great effect.
At the same time, organizations are being held to account for deciding to continue to run legacy versions of Windows that are unsupported. But even with support, it’s not all that clear that the outcome of the WannaCry attacks would have been that much different.