Time to Come to Terms with the Impact AI Will Have on Cybersecurity

It’s now only a matter of time before cybercriminals start to leverage artificial intelligence (AI) to create and launch far more targeted attacks. Cybercriminals already have the bots needed to collect the data required to build AI models, and they can afford to recruit the expertise to build them. It is also safe to assume that any number of nation states are already using AI to further their cyberespionage aims.

A new report from Neustar, a provider of real-time information services, finds that 82 percent of the cybersecurity professionals the company surveys to create its International Cyber Benchmarks Index are concerned about the possibility of hackers using AI against their organization. At the most recent Black Hat USA conference, IBM researchers demonstrated the feasibility of just such an attack by developing DeepLocker, a tool that combines multiple AI models to create highly evasive malware.

Defending against attacks launched using AI models is, of course, going to require organizations to have access to AI models of their own. The trouble is that building these models not only takes a lot of time and effort, it also requires access to massive amounts of data to teach the machine learning and deep learning algorithms behind them to recognize cybersecurity attacks. More challenging still, as attacks evolve, those AI models need to be constantly updated. For all practical purposes, most organizations will need to rely much more on cybersecurity services provided by vendors that have the resources required to invest in building those AI models.
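For illustration only, the sketch below shows the kind of pipeline that paragraph describes: a supervised detection model trained on labeled security telemetry and then retrained on fresh data as attacks evolve. It assumes Python with scikit-learn; the dataset, features, and function names are synthetic placeholders, not anything drawn from the Neustar report or a particular vendor's product.

```python
# Minimal sketch (assumed setup): train a detection model on labeled
# security telemetry, then rerun the same pipeline on fresh data so the
# model keeps pace with evolving attacks. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

def load_telemetry(n_events=10_000, n_features=20):
    """Stand-in for the massive labeled dataset the text refers to:
    each row is an event (e.g., flow/log features), labels mark attacks."""
    X = rng.normal(size=(n_events, n_features))
    y = (X[:, 0] + 0.5 * X[:, 1]
         + rng.normal(scale=0.5, size=n_events) > 1.5).astype(int)
    return X, y

def train_detector(X, y):
    """Fit a classifier and report how well it separates attack events."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))
    return model

# Initial training run on historical, labeled telemetry...
model = train_detector(*load_telemetry())

# ...and a scheduled retraining run on newly collected, relabeled
# telemetry, since a model that is never refreshed falls behind attackers.
model = train_detector(*load_telemetry())
```

The point of the sketch is less the specific algorithm than the operational burden it implies: collecting and labeling the telemetry, and rerunning the training loop continuously, is exactly the work most organizations will end up delegating to vendors.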

As these AI models become more adept at identifying cybersecurity attacks, the role of individual cybersecurity professionals will naturally change as well. AI is not going to eliminate the need for human cybersecurity experts any time soon. Rather, cybersecurity experts augmented with AI will do battle against criminals augmented with AI models of their own. Those contests will intensify both in scale and in what is at risk, because it might take only a few seconds for massive amounts of data to be lost. As part of that shift, most lower-level cybersecurity tasks will be automated, which means the bar for entering the cybersecurity field is about to become higher.

In theory, at least, the overall state of cybersecurity should improve. Most breaches these days can be attributed, at least in part, to human error. AI should make it possible to discover and eliminate many more vulnerabilities before any application ever finds its way into a production environment. That means even as the number of endpoints in an IT environment increases exponentially, the level of security per endpoint should be substantially higher than it is today.

There is, of course, no magic AI silver bullet when it comes to cybersecurity. But machines are about to play a much bigger role, and cybersecurity teams would be well-advised not to depend on them too much. Overall, though, thanks to the rise of AI in the months and years ahead, cybersecurity promises to be, at the very least, a whole lot less frustrating than it is today.
