
5 Ways cybercriminals are using AI: Malware generation
Over the last few months, we’ve taken a fresh look at artificial intelligence (AI) and its many subsets, like machine learning (ML) and generative AI (GenAI). Today, we’re continuing this topic with a look at how cybercriminals are using GenAI to create malware. For the most part, we’ll be talking about GenAI employed in large language models (LLMs) like ChatGPT or Google Gemini.
If you’ve ever played with one of these LLMs, you may have run into their programmed limitations. Ask ChatGPT to write malware for you, and you will get a polite “no” with a few words about using your skills responsibly and within the bounds of the law. Digging a bit deeper, we can see that ChatGPT has several mechanisms in place to prevent malicious use of the technology. Gemini also has a few mechanisms in place, but the first thing it tells you is that it’s not responsible for what users do. And sure, we can agree on that, but a few more questions like “Why can’t you create malware?” result in low-value answers like “It’s hurtful” and “It’s illegal.” Ultimately, Gemini will assure you, “My response is guided by a more general set of principles designed to promote safety and responsible use.”
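To make those guardrails a bit more concrete, here is a minimal, purely hypothetical sketch of the kind of prompt screening an LLM service might run before a request ever reaches the model. The blocked categories, phrases, and refusal wording below are placeholders for illustration; they are not ChatGPT’s or Gemini’s actual implementation, which relies on trained safety classifiers and layered policies rather than simple keyword lists.

```python
# Hypothetical illustration of an LLM guardrail: screen an incoming prompt against a
# deny-list of request categories before it reaches the model. Real services use
# trained classifiers and layered policies; these categories and phrases are
# placeholders for illustration only.
BLOCKED_INTENTS = {
    "malware": ["write malware", "create ransomware", "build a keylogger"],
    "phishing": ["write a phishing email", "spoof a login page"],
}

REFUSAL = ("I can't help with that. Please use your skills responsibly "
           "and within the bounds of the law.")

def screen_prompt(prompt: str) -> str | None:
    """Return a refusal message if the prompt matches a blocked intent, else None."""
    lowered = prompt.lower()
    for category, phrases in BLOCKED_INTENTS.items():
        if any(phrase in lowered for phrase in phrases):
            return f"{REFUSAL} (blocked category: {category})"
    return None  # the prompt passes the guardrail and goes on to the model

print(screen_prompt("Please write malware that steals passwords"))
```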
Most of us don’t need to go beyond the limits of these LLMs. If an LLM or other GenAI application doesn’t work for us, we can find another or create our own. Cybercriminals can do the same thing, though they operate in a different marketplace with fewer restrictions.
How criminals use GenAI
AI opens new opportunities and capabilities for cybercriminals. Remember, AI systems are designed to learn. Criminals who train their own AI systems on malware and other malicious software can significantly ‘level up’ their attacks. For example:
- Automated code generation: Criminals can create new variants of malware quickly and automatically. This lets them launch many different attacks with different characteristics but similar functionality (see the sketch after this list for why that defeats signature-based detection).
- Evasion techniques: Running malware and security software against each other can teach AI systems how malware is detected. The AI can then modify the malware to avoid detection.
- Exploit development: AI can scan target systems and discover vulnerabilities. These vulnerabilities are then analyzed and used to create exploits and attack sequences.
- Adaptation and learning: GenAI adapts to security systems and can learn from the results of other attacks. AI can allow malware to dynamically adjust its tactics during an attack based on real-time analysis of the target’s defenses.
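To see why automated variant generation is such a problem for defenders, here is a small sketch of our own (not taken from any attacker tooling) showing that a one-byte change to a payload produces a completely different SHA-256 hash. Exact-match signatures and hash blocklists built for one variant simply never fire on the next one.

```python
# Why mass-produced variants defeat exact-match signatures: two payloads that differ
# by a single byte yield entirely different SHA-256 hashes, so a hash-based blocklist
# built for one variant will not match the other. The payload strings are stand-ins,
# not real malware.
import hashlib

variant_a = b"do_the_same_malicious_thing(version=1)"
variant_b = b"do_the_same_malicious_thing(version=2)"  # functionally identical, one byte changed

for label, payload in (("variant A", variant_a), ("variant B", variant_b)):
    print(label, hashlib.sha256(payload).hexdigest())
```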
You may still be wondering how LLMs can be used to create malware or aid in other attacks. Threat actors commonly take two approaches with malicious AI. The first is the use of ‘adversarial attacks,’ which is an umbrella term for the different techniques used to cause AI technologies to malfunction. Poisoning, evasion, and extraction attacks are a few examples of this. These attacks can create malware or be used in conjunction with a malware attack. For example:
- Vulnerabilities found in AI systems can help threat actors develop more effective attacks against the target. See Gabe’s blog here for examples.
- Malfunctioning AI systems can create confusion and hide other attacks on financial systems, critical infrastructure, and business operations. Instead of looking for intruders or malware, IT is distracted by the AI system.
- Exploiting LLM vulnerabilities can let a threat actor create a phishing email through a restricted system like ChatGPT. This post on using GenAI for phishing attacks provides an example.
Adversarial attacks that bypass an LLM’s built-in restrictions are also known as ‘jailbreaks,’ and many are shared or sold in criminal communities.
A second and more common approach to generating malware through GenAI is to build or buy ‘dark LLMs’ that were made for threat actors. These LLMs do not have the restrictions you saw earlier in ChatGPT and Gemini, and some are built for specific types of attacks. For example, FraudGPT was designed to create phishing emails, cracking tools, and carding schemes. DarkBart (or DarkBard) is used for phishing, social engineering, exploiting system vulnerabilities, and distributing malware. DarkBart was based on Google Bard (now Google Gemini) and integrates with other Google applications to facilitate the use of images and other components in an attack. Researchers suspect that CanadianKingpin12 is the primary threat actor behind most of these dark LLMs because he is the most prolific promoter and seller of this software on the crime forums.
These tools are advertised openly on criminal forums. One such ad priced the software at $200 per month or $1,700 per year, with a couple of other tiers in between, and claimed over 3,000 confirmed sales. Advanced threat groups are more likely to build their own tools than to buy through an ad like this.
Types of AI-crafted malware attacks
Now that we’ve discussed how threat actors might use LLMs, let’s examine some of the malware they produce with those tools.
Adaptive malware
Adaptive malware can change its code, execution patterns, or communication methods based on what it encounters during an attack. This is primarily to avoid detection, but it can also adapt to take advantage of new attack opportunities. Adaptive malware predates GenAI and dark LLMs, but AI and ML have improved its evasion techniques and overall effectiveness.
Dynamic malware payloads
A malware payload is the part of the malware that performs the actual malicious activity. In Cactus ransomware, for example, the encryption binary is the payload. A dynamic payload can modify its actions or load additional malware during the attack. It can adapt to conditions after it is deployed to evade detection or increase effectiveness. Like adaptive malware, dynamic payloads can be created without AI enhancement. Using AI capabilities improves the malware by making it more responsive to the environment.
Zero-day and one-day attacks
These are attacks against unknown or recently discovered vulnerabilities. Zero-day attacks exploit vulnerabilities that are unknown to the vendor, so the vendor has had “zero days” to patch the flaw before it is attacked. One-day attacks occur in the short span of time between the release of a vendor patch and the installation of that patch by the customer; the “one day” refers to the limited window of opportunity for the attackers. GenAI can accelerate both the discovery of zero-day vulnerabilities and the development of exploits for them. That window of opportunity shrinks each time a patch is released or installed, so threat actors want to launch their attacks as soon as possible, and GenAI reduces the time it takes them to do so.
Content obfuscation
Just like it sounds, content obfuscation refers to the act of hiding or disguising the true intent of malicious code through encryption, encoding, polymorphism, or metamorphism. These evasion techniques are most successful against security measures that rely on identifying known patterns of malicious activity. GenAI can increase the complexity and effectiveness of all these methods. AI has also been used to blend irrelevant code into malware so that security systems do not recognize the malware as a threat.
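As a rough illustration of how defenders respond when known-pattern matching fails, here is a minimal sketch (a common heuristic, not a specific product feature) that scores the Shannon entropy of a byte string. Encrypted or heavily encoded content tends to approach 8 bits per byte, so unusually high entropy is one signal that a file or section may be obfuscated; the 7.2 threshold below is an arbitrary value chosen for illustration.

```python
# Heuristic sketch: flag content whose byte entropy is unusually high, a common hint
# that data is packed, encrypted, or encoded rather than plain code or text.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: 0.0 for empty input, up to 8.0 for random bytes."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in Counter(data).values())

def looks_obfuscated(data: bytes, threshold: float = 7.2) -> bool:
    # Threshold is illustrative; real scanners combine entropy with many other signals.
    return shannon_entropy(data) > threshold

print(looks_obfuscated(b"print('hello world')" * 50))  # repetitive plain text -> False
print(looks_obfuscated(os.urandom(4096)))              # random bytes -> True
```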
AI-powered botnets
Botnets enhanced with AI capabilities can modify their own code to evade detection, propagate to other devices without human intervention, select the best among multiple targets, and optimize their attacks based on the security response. AI can also manage botnet resources for load balancing and improve the communication between devices and networks. AI-powered botnets run more effective distributed denial-of-service (DDoS) attacks and spam campaigns. They are also more resilient because the AI can decide to run self-healing and obfuscation/evasion capabilities as needed.
And there’s more
This is just a partial list of how and why threat actors are using GenAI to create and improve malware. There’s no way we can list them all here, but there are some other resources you might find interesting. Microsoft and OpenAI are tracking threat actors who are using LLMs in their operations. Here are some examples:
- Forest Blizzard (Russia) is generating scripts to perform tasks like file manipulation and data selection. This is likely part of the effort to automate their threat operations.
- Emerald Sleet (North Korea) is scripting tasks that accelerate attacks, like identifying certain user events on a system. The group also uses LLMs to create spear phishing and other social engineering attacks against governments and other organizations that focus on defense against North Korea.
- Crimson Sandstorm (Iran) is generating code to evade detection and attempting to disable security tools through the Windows Registry or Group Policy.
If you are looking for more information on these threat actors, keep in mind that the above list follows Microsoft’s naming convention. Most threat actors have been assigned multiple names. Forest Blizzard, for example, is also known as Fancy Bear and APT28.
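If you are correlating reports from multiple vendors, even a simple alias table helps keep the names straight. The sketch below is hypothetical; only the Forest Blizzard aliases come from this post, and you would populate the rest from your own threat intelligence sources.

```python
# Minimal sketch: normalize vendor-specific threat actor names to one canonical label.
# Only the Forest Blizzard aliases are taken from this post; fill in the rest from
# your own threat intelligence sources.
ALIASES = {
    "forest blizzard": "Forest Blizzard",
    "fancy bear": "Forest Blizzard",
    "apt28": "Forest Blizzard",
    # "some other alias": "Emerald Sleet",  # add mappings from your own intel feeds
}

def canonical_name(name: str) -> str:
    """Return the canonical name if the alias is known, otherwise the name as given."""
    return ALIASES.get(name.strip().lower(), name)

print(canonical_name("APT28"))  # -> Forest Blizzard
```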
Microsoft is also working with MITRE to add the following tactics, techniques, and procedures (TTPs) into the MITRE ATT&CK® framework or the MITRE ATLAS™ (Adversarial Threat Landscape for Artificial-Intelligence Systems) knowledge base:
- LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
- LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system, and for assistance with troubleshooting and understanding various web technologies.
- LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
- LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
Microsoft has several more LLM-themed TTPs listed on their site here.
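If you want to explore these TTPs programmatically rather than browse the website, the ATT&CK data is published as a public STIX bundle. Here is a minimal sketch, assuming network access and the `requests` package, that downloads MITRE’s public enterprise-attack bundle and searches technique names for a keyword; it is our own illustration, not Microsoft or MITRE tooling.

```python
# Minimal sketch: pull the public MITRE ATT&CK enterprise STIX bundle and list
# techniques whose names contain a keyword. The bundle is large (tens of MB), so
# expect the download to take a moment.
import requests

ATTACK_BUNDLE = (
    "https://raw.githubusercontent.com/mitre/cti/master/"
    "enterprise-attack/enterprise-attack.json"
)

def find_techniques(keyword: str) -> list[tuple[str, str]]:
    """Return (technique ID, name) pairs whose name contains the keyword."""
    bundle = requests.get(ATTACK_BUNDLE, timeout=60).json()
    hits = []
    for obj in bundle.get("objects", []):
        if obj.get("type") != "attack-pattern" or obj.get("revoked"):
            continue
        if keyword.lower() not in obj.get("name", "").lower():
            continue
        ext_id = next(
            (ref["external_id"] for ref in obj.get("external_references", [])
             if ref.get("source_name") == "mitre-attack"),
            "?",
        )
        hits.append((ext_id, obj["name"]))
    return sorted(hits)

if __name__ == "__main__":
    for technique_id, name in find_techniques("script"):
        print(technique_id, name)
```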
Another interesting article is this Harvard publication on a “zero-click” worm that hijacks AI systems for spam campaigns, data theft, or other malicious activity. Harvard researchers developed the worm to demonstrate the need for defensive countermeasures in AI systems.
Barracuda has recently published an e-book, Securing tomorrow: A CISO’s guide to the role of AI in cybersecurity. This e-book explores security risks and exposes the vulnerabilities that cybercriminals exploit with the aid of AI to scale up their attacks and improve their success rates. Get your free copy of the e-book right now and see all the latest threats, data, analysis, and solutions for yourself.
