
Time for that overdue AI chat is now
The adult conversation that cybersecurity professionals need to have with business leaders about putting controls and processes in place to secure artificial intelligence (AI) platforms and services is now officially overdue.
Once again, cybersecurity teams find themselves responding to an emerging technology that is being widely adopted with little regard for its cybersecurity implications. The challenge is that platforms such as ChatGPT are readily accessible to end users who are not always aware of the potential risks. Some organizations have gone so far as to ban these services until they can put controls in place, but, as always, cloud services continue to exacerbate shadow IT challenges.
The most immediate issue is making sure that sensitive data doesn't find its way into a prompt used to generate text, code and, soon, video. The providers of generative AI services will, if the right settings are enabled, guarantee that data used in a prompt is not used to train the next iteration of their AI models, but most end users don't read the terms of service all that closely. In fact, the Federal Trade Commission just warned providers of AI services not to change those terms and conditions at some future date to gain access to that data.
Of course, savvy cybersecurity teams will not leave compliance with those terms and conditions to chance. There are already multiple variations of data loss prevention (DLP) tools specifically designed to make sure sensitive data, such as Social Security numbers or, worse yet, proprietary intellectual property, doesn't find its way into a prompt.
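To make the idea concrete, here is a minimal sketch of what such a prompt filter might look like. The pattern names, regular expressions, and the scan_prompt() and redact() helpers are all illustrative assumptions, not the API of any real DLP product:

```python
import re

# Hypothetical sketch of a DLP-style prompt filter; patterns are illustrative.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # U.S. Social Security number
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # likely payment card number
    "api_key": re.compile(r"\bsk[-_][A-Za-z0-9]{16,}\b"),  # secret-key style token
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

def redact(prompt: str) -> str:
    """Replace each sensitive match with a placeholder before the prompt leaves the network."""
    for name, rx in SENSITIVE_PATTERNS.items():
        prompt = rx.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt

prompt = "Summarize the claim filed for SSN 123-45-6789."
if scan_prompt(prompt):      # real tools would also log the event and alert
    prompt = redact(prompt)
print(prompt)  # Summarize the claim filed for SSN [REDACTED-SSN].
```

Commercial DLP tools go well beyond simple pattern matching, but the control point is the same: inspect, redact, or block outbound prompts before they ever reach the service.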
Where things become more challenging is when organizations start to either customize or build large language models (LLMs) using their own data. The resulting AI models will rank among the most valuable intellectual property an organization has, so it's only a matter of time before cybercriminals launch phishing attacks to steal the credentials that provide access to them. Once they have those credentials, they may content themselves with launching a ransomware attack to encrypt valuable data, or they may poison the pool of data being used to train an AI model so that it randomly hallucinates.
Worse yet, they may simply steal the entire LLM and sell it to the highest bidder they can find on the Dark Web.
Unfortunately, the data science teams that build AI models typically don't have much cybersecurity expertise, so they are prone to mistakes that can prove devastating when an AI model turns out to be compromised or is being used for malicious purposes. Much like application developers who lack the cybersecurity expertise needed to build and deploy secure applications, data scientists will need to rely on cybersecurity professionals to help them secure the machine learning operations (MLOps) workflows used to build models.
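One example of the kind of control cybersecurity teams can bring to an MLOps workflow is verifying that training data hasn't been tampered with between approval and the training run. The sketch below is a hypothetical illustration; the file path and the recorded digest are placeholders, not part of any specific pipeline:

```python
import hashlib
from pathlib import Path

# Hypothetical MLOps control: fail the training run if the dataset's hash
# no longer matches the digest recorded when the data was approved.
APPROVED_SHA256 = "replace-with-digest-recorded-at-approval"  # placeholder

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MB chunks so large datasets don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

dataset = Path("data/train.parquet")  # illustrative path
if sha256_of(dataset) != APPROVED_SHA256:
    raise RuntimeError(f"{dataset} changed since approval: possible data poisoning")
```

Pinning digests in a signed manifest, restricting write access to training data, and logging every change are exactly the sort of basic hygiene that data science teams rarely apply on their own.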
The United Kingdom's National Cyber Security Centre (NCSC) this month published a guide that cybersecurity teams can use to frame conversations with business leaders about the threats an organization is likely to encounter when embracing AI. That guide builds on a set of best practices for securing AI environments that was created in collaboration with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and published at the end of last year.
Of course, many business leaders are caught up in a wave of irrational AI exuberance, so getting them to take the time to appreciate the cybersecurity implications isn't always going to be easy. However, as always, being the adult in the room is still part of the cybersecurity job description.
