The fact that the Central Intelligence Agency (CIA) has a collection of exploits for hacking into anything connected to the Internet should not come as much of a surprise to IT security professionals. After all, intelligence agencies around the world have been developing and aggregating such exploits for years, and many of them were acquired, one way or another, from cybercriminals. Most of these exploits are especially sophisticated and, in many cases, have already been addressed by IT technology vendors.
What is disturbing is the fact that Wikileaks opted to reveal the CIA exploits in one large batch that amounts to a gift to cybercriminals. While vendors such as Apple claim many of the exploits described in the leaked information have already been addressed, there is still enough information in those leaks to send IT security teams scrambling to patch any number of applications and systems all at once. Even when a vulnerability is known, that doesn't mean IT organizations have gotten around to implementing the appropriate patch. At a bare minimum, IT security teams now need to conduct an impromptu security audit.
Worse yet, Wikileaks is threatening to release even more batches of exploits. Wikileaks now says it will at least share that information with technology companies before publicly releasing any additional exploits. But it takes far less time for cybercriminals to figure out how to employ those exploits than it does for the average IT security team to fix the vulnerabilities described. That means it's only a matter of hours before some of that leaked information starts to do material harm.

Because of that, it also shouldn't come as much of a surprise that technology companies are now expressing an interest in getting a better look at the information Wikileaks has. At a bare minimum, those efforts fall under the heading of proper due diligence. But if any cash changes hands during those conversations, all kinds of ethical red flags should be raised. Disclosing government secrets is still a crime, no matter how high-minded the motivation. The shaky moral ground Wikileaks is disregarding is that the way it goes about disclosing that information might be doing the people it is theoretically trying to protect a lot more harm than good.
Vulnerability disclosures have always been a troublesome issue. IT security researchers and vendors have, for the most part, put in place systems and protocols for sharing this type of information as responsibly as possible. Wikileaks, for better or worse, has now blindly inserted itself into that process without fully understanding the potential consequences. That, in turn, leaves IT security professionals to clean up the mess.
At the same time, however, the episode creates an opportunity for the technology industry to provide more transparency into how the vulnerability disclosure process works. Far too many technology companies want this process to occur as discreetly as possible, for fear that such disclosures might make people and organizations think twice about buying their products in the first place. That discretion may not be the better part of valor, because all too often the patches provided to address those issues don't get applied for months on end; the sense of urgency surrounding them is largely confined to the IT security community. The Wikileaks disclosure may not pass any test of whether the ends justify the means. But at the very least it should serve as a catalyst for a much wider conversation about how vulnerability disclosures can best be handled.
Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb and Slashdot. Mike also blogs about emerging cloud technology for Intronis MSP Solutions by Barracuda.