When artificial intelligence (AI) first arrived on the cybersecurity scene, there was understandably a lot of skepticism. Grandiose claims about the ability of AI platforms to eliminate the need for security analysts have fortunately given way to more rational assertions that focus on the role machine learning algorithms will play in augmenting cybersecurity professionals.
Now the conversation isn’t so much about whether AI will replace cybersecurity professionals as it is about how quickly AI models based on machine learning algorithms can be applied. A survey of 350 security analysts conducted by International Data Corp. (IDC) on behalf of FireEye, a managed security services provider (MSSP), suggests the answer is: not soon enough.
The survey notes that, depending on whether a security analyst works for an internal IT team or an MSSP, somewhere between 45% and 53% of security alerts are false positives. As the number of false positives increases, there’s a tendency to start ignoring more alerts. The survey finds three in four analysts are worried about missing incidents.
In theory, machine learning algorithms should, over time, reduce the number of false-positive alerts. Initially, AI may even increase the number of false positives as machine learning algorithms learn the environment. In terms of AI adoption among security analysts, the glass appears to be half full: a total of 43% of respondents said they are using AI and machine learning technologies.
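The core idea behind that claim can be sketched in a few lines. The example below is a deliberately minimal, hypothetical illustration (not any vendor's actual product logic): it learns per-rule false-positive rates from analyst feedback and stops escalating rules that have mostly cried wolf, while escalating anything it hasn't seen enough of yet.

```python
from collections import defaultdict

class AlertTriage:
    """Toy alert triage: learns per-rule false-positive rates from
    analyst feedback and deprioritizes rules that mostly cry wolf.
    Thresholds here are illustrative assumptions, not recommendations."""

    def __init__(self, fp_threshold=0.8, min_samples=5):
        self.fp_threshold = fp_threshold   # suppress rules above this FP rate
        self.min_samples = min_samples     # require evidence before suppressing
        self.stats = defaultdict(lambda: {"fp": 0, "total": 0})

    def record_feedback(self, rule_id, was_false_positive):
        """Analyst marks a triaged alert as a false positive (or not)."""
        s = self.stats[rule_id]
        s["total"] += 1
        if was_false_positive:
            s["fp"] += 1

    def should_escalate(self, rule_id):
        """Escalate unless the rule has a well-established record of noise."""
        s = self.stats[rule_id]
        if s["total"] < self.min_samples:
            return True  # not enough history yet: escalate by default
        return (s["fp"] / s["total"]) < self.fp_threshold
```

Note the trade-off the article alludes to: while the model is still gathering history, every alert is escalated, so the analyst's load briefly looks worse before the noisy rules are learned and filtered out.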
Clearly, security analysts are moving past the fear-and-loathing stage when it comes to AI. Many of them are likely to conclude they won’t want to work for organizations that lack AI capabilities. As IT environments become more complex, the stress level will simply be too high. The only way security analysts will be able to keep pace is by relying more on AI to preserve their sanity.
Cybersecurity analysts should always insist on a level of transparency and explainability rather than blindly trusting an AI black box. As is always the case with AI, it’s one thing to be wrong; it’s quite another to be wrong at scale. AI may never be perfectly understood by all, but at the end of the day, even the most advanced forms of math can be explained.
It’s not clear to what degree security analysts might one day verbally engage with AI models. Most of the tasks that machine learning algorithms will automate will run unseen in the background. However, as security analysts gain confidence in AI models that relationship may simply devolve into a simple request such as “show me the three things today that are most likely to get me fired.”
In the meantime, it’s not likely AI in its current form will ever replace security analysts. However, many of the low-level tasks that conspire to make cybersecurity more aggravating than it needs to be will gradually disappear. In fact, with a little AI help, many cybersecurity professionals might rediscover their enthusiasm for a job that all too frequently burns out even the best and the brightest.
Mike Vizard has covered IT for more than 25 years and has edited or contributed to a number of tech publications including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet, and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, Programmableweb, and Slashdot. Mike also blogs about emerging cloud technology for SmarterMSP.