
Rapid GenAI adoption creates security challenges
Cybersecurity teams are falling behind the pace of generative artificial intelligence (GenAI) tool adoption in a way that makes it probable the number of data breach incidents will rise sharply in the weeks and months ahead, especially as usage of shadow AI services continues to increase.
A survey from ManageEngine finds that 70% of IT decision makers (ITDMs) have identified unauthorized AI use within their organizations and 60% of employees are using unapproved AI tools more than they were a year ago. A full 91% have implemented policies, but only 54% have implemented clear, enforced AI governance policies and actively monitor for unauthorized use of generative AI tools.
85% also report that employees are adopting AI tools faster than their IT teams can assess them, with 32% of employees having entered confidential client data into AI tools without confirming company approval. Well over two-thirds (37%) have entered private, internal company data. More than half (53%) said usage of personal devices for work-related AI tasks is creating a blind spot in their organization’s security posture.
A separate report from Harmonic Security finds that 8.5% of employee prompts to generative tools include sensitive corporate data. Nearly half of the data entered (48%) was customer data, compared to 27% that entered sensitive employee data.
The list of potential cybersecurity issues beyond data leakage that should be addressed include prompt injection attacks that generate malicious outputs or extract confidential data; deliberate poisoning of the data used to train AI models; cyberattacks aimed specifically at AI infrastructure and the software supply chain used to build AI models; and the theft of AI models themselves.
At this point, it’s not feasible to ban usage of generative AI tools, so rather than waiting for the inevitable breach, many cybersecurity teams are proactively moving to detect, audit and monitor generative AI usage. Armed with those insights, it then becomes feasible to define a set of governance policies around a set of sanctioned generative AI tools and workflows that cybersecurity teams have vetted.
It may not be possible to prevent every incident that will arise as generative AI tools and platforms become more widely used, but hopefully with a little additional training, the number of major breaches can be dramatically contained.
Of course, this isn’t the first time cybersecurity professionals have found themselves chasing after an emerging technology after the proverbial barn door has been closed. In many ways, the adoption of generative AI tools and platforms is simply the latest instance of shadow cloud computing services. The only difference is the amount of sensitive data being shared, which suggests that far too many employees have yet to learn any lessons from previous cloud security incidents involving, for example, any number of software-as-a-service (SaaS) applications.
Cybersecurity teams will need to once again exercise some forbearance as they deal with generative AI security incidents, but there may also be teachable moments. After all, as Winston Churchill once noted, a good crisis should never be allowed to go to waste.

The Ransomware Insights Report 2025
Key findings about the experience and impact of ransomware on organizations worldwide
Subscribe to the Barracuda Blog.
Sign up to receive threat spotlights, industry commentary, and more.

Managed Vulnerability Security: Faster remediation, fewer risks, easier compliance
See how easy it can be to find the vulnerabilities cybercriminals want to exploit