While cybersecurity is often cited as one of the first use cases for artificial intelligence (AI), a new survey finds there is much work to be done before most cybersecurity AI comes anywhere close to living up to the initial hype. The survey of 400 security analysts working at U.S. organizations with more than 1,000 employees was conducted by Osterman Research on behalf of ProtectWise, a provider of a cloud-based application for analyzing network packets.
The survey finds 73 percent of respondents reporting they have implemented security products that incorporate at least some aspect of AI. But more than half of respondents (54 percent) say cybersecurity AI delivers inaccurate results, and 61 percent say they don't believe AI has yet stopped zero-days and advanced threats. Conversely, the survey suggests approximately half of respondents either see some value in cybersecurity AI or have not yet formed an opinion. Either way, it's clear there's plenty of room for improvement in what is still fairly described as bleeding-edge technology.
In fact, 42 percent of survey respondents say they find cybersecurity AI products difficult to use, while 46 percent say they find the process of creating and implementing rules for cybersecurity AI technologies burdensome.
Mike Vizard has covered IT for more than 25 years, and has edited or contributed to a number of tech publications, including InfoWorld, eWeek, CRN, Baseline, ComputerWorld, TMCNet and Digital Review. He currently blogs for IT Business Edge and contributes to CIOinsight, The Channel Insider, ProgrammableWeb and Slashdot. Mike also blogs about emerging cloud technology for SmarterMSP.