
OWASP security guidance on deepfakes
The public release of ChatGPT in late 2022 introduced radical changes to how businesses and individuals use artificial intelligence (AI) technologies. The benefits were obvious for businesses: streamlining numerous business processes and reducing costs. Individuals began using it to boost their productivity and for leisure, for example, adjusting their photos with AI tools to change their hair color or body shape.
As with everything in life, modern technology comes with risks. In the AI domain, one of the most prominent risks is the use of AI to create fabricated images and video content depicting events that never happened. This practice is known as creating deepfakes.
To mitigate the increasing risks of leveraging AI technologies in different business arenas, OWASP introduced the OWASP Top 10 for Large Language Model (LLM) Applications in 2023. This list highlights and addresses security issues specific to deploying and managing LLMs and generative AI applications. However, following the broad adoption of AI technologies and their increased use by threat actors to create fabricated content, OWASP has issued a new guide specifically for addressing and mitigating deepfake security risks by applying fundamental security principles. In this article, we will discuss the main elements of this guide and how businesses can leverage it to boost their defenses against different deepfake attacks. But before we start, let us define what a deepfake is.
Deepfakes and synthetic media
Deepfakes are a type of synthetic media created using AI. This technology employs machine learning (ML) algorithms to generate realistic, human-like content, such as images, videos, audio, and text.
There are different types of synthetic media:
- Deepfake videos: These alter existing video footage, for example, by replacing one person's face with another's, to create a convincing fake video.
- AI-generated images: AI tools can generate images from user text prompts or modify existing ones.
- Synthetic text: These systems generate text content, such as articles, blog posts, poetry, e-books, user guides, or any other written content, based on the large datasets they were trained on. ChatGPT and Claude are examples of generative AI text systems.
- Synthetic speech: This type of media uses AI and deep learning to generate sound that resembles human speech.
- Virtual assistants: These programs leverage natural language processing (NLP) and ML algorithms to understand and respond naturally to human voice or text commands.
Now that we have a fair understanding of synthetic media and their types, let’s talk about the recent OWASP guide on mitigating deepfake-based attacks.
Deepfake incident management
The OWASP guide presents a comprehensive framework for addressing and responding to deepfake-related security challenges across various organizational contexts. While the preparatory phase remains consistent, the subsequent detection, containment, and response stages are tailored to specific deepfake incident types.
Preparation
Organizations must evaluate their vulnerability to deepfake threats through various attack vectors, such as:
- Digital identity manipulation: Leveraging AI-generated voice, video, or image technologies to circumvent security protocols. For example, intruders may use deepfakes to impersonate a specific user to gain unauthorized access to sensitive resources.
- Executive impersonation: Executing fraudulent schemes by mimicking high-level executives, such as the CFO, to authorize unauthorized financial transactions. Deepfakes can be leveraged to mimic the target's speech or even fabricate a video call.
- Brand reputation compromise: Creating synthetic media depicting a high-level employee, such as the CEO, making controversial statements that could damage the company's reputation.
- Recruitment infiltration: Threat actors use advanced deepfake technologies and stolen personal information to manipulate online hiring processes, for example, by impersonating other people during an online job interview. The aim is to convince the HR representative to hire them so that they can ultimately gain insider access to protected enterprise resources.
- Strategic disinformation: Generating and spreading fabricated multimedia content (video, news articles, and images) designed to influence market perception of a particular company. This tactic commonly targets key stakeholders, such as investors, partners, or customers, to undermine trust and disrupt key business relationships. The ultimate aim is to damage the target company's reputation and market position.
Assessment of defenses
The guide suggests that organizations execute a security assessment that reviews their security policies, procedures, and auditing methods for the following four areas:
- Sensitive data disclosure
- Help desk
- Financial transactions
- Event response
Human-based authentication best practices
When an organization implements human-based authentication, at least two of the following best practices should be enforced:
- Keep a directory of approved communication methods, such as an alternative email address or phone number, to further authenticate a specific person.
- Require alternative communication verification, such as calling the person back or sending a separate email to verify the request.
- Use the "code of the day" method. Financial institutions often use this practice to generate a daily unique code that can be used in conjunction with other verbal identification to execute important tasks.
- Use security questions to verify identity in addition to existing authentication factors. Avoid easy-to-research questions, such as your mother's maiden name.
- Ask the requester’s manager or supervisor to verify the request.
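To make the "code of the day" idea concrete, here is a minimal sketch of how a daily code could be derived from a shared secret with an HMAC, so every authorized party independently computes the same code for a given date. The secret value, code length, and function name are illustrative assumptions, not part of the OWASP guide.

```python
import hmac
import hashlib
from datetime import date, datetime, timezone

def code_of_the_day(shared_secret: bytes, on: date | None = None, digits: int = 6) -> str:
    """Derive a deterministic daily code from a shared secret (illustrative sketch)."""
    on = on or datetime.now(timezone.utc).date()
    digest = hmac.new(shared_secret, on.isoformat().encode(), hashlib.sha256).digest()
    # Reduce the HMAC output to a short numeric code, similar in spirit to HOTP/TOTP.
    return str(int.from_bytes(digest[:8], "big") % 10**digits).zfill(digits)

# Both the verifier and the requester compute today's code from the shared secret.
print(code_of_the_day(b"example-shared-secret"))
```

Because the code depends only on the secret and the UTC date, both parties can compute it independently and exchange it verbally without contacting each other in advance.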
Financial transactions
When dealing with financial transactions, the OWASP guide suggests the following best practices:
- Establish clear policies regarding how to execute financial transactions within your organization.
- Implement the concept of separation of duties so that no single individual has full control over executing a transaction.
- Request authorization from two employees to execute each transaction. For high-value transactions, request approval from more than two employees (see the sketch after this list).
- Use the "code of the day" method to verify individuals executing financial transactions.
- Leverage multifactor authentication (MFA) to secure financial transactions.
- Use two communication methods to approve a financial transaction, for example, via email and phone.
- Regularly audit financial transaction procedures and ensure compliance with enforced procedures.
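One way to picture the separation-of-duties and dual-approval rules in software is the sketch below: a transaction cannot be released until it collects approvals from the required number of distinct employees, and the initiator can never approve their own request. The class names and the high-value threshold are hypothetical, not taken from the guide.

```python
from dataclasses import dataclass, field

HIGH_VALUE_THRESHOLD = 100_000  # assumed cutoff; set according to your own policy

@dataclass
class Transaction:
    initiator: str
    amount: float
    approvers: set[str] = field(default_factory=set)

    def required_approvals(self) -> int:
        # Two approvers for normal transactions, three for high-value ones.
        return 3 if self.amount >= HIGH_VALUE_THRESHOLD else 2

    def approve(self, employee: str) -> None:
        # Separation of duties: the initiator cannot approve their own transaction.
        if employee == self.initiator:
            raise PermissionError("initiator cannot approve their own transaction")
        self.approvers.add(employee)

    def can_execute(self) -> bool:
        return len(self.approvers) >= self.required_approvals()

tx = Transaction(initiator="alice", amount=250_000)
tx.approve("bob")
tx.approve("carol")
tx.approve("dave")
assert tx.can_execute()
```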
Help desk
The OWASP guide suggests the following best practices to mitigate deepfake attacks for employees working in the help desk department:
- Review password reset procedures and ensure MFA is in place for all employee accounts (a minimal verification sketch follows this list).
- Test all work processes related to the help desk and identify gaps that could be vulnerable to deepfake attacks.
- Document all processes that do not require MFA, and ensure human-based authentication adheres to security best practices.
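As an illustration of the MFA requirement, a help desk workflow could verify a time-based one-time password (TOTP) before honoring a password reset request. The sketch below uses the third-party pyotp library; the enrollment flow and the helper name are assumptions for illustration only.

```python
import pyotp  # pip install pyotp

# Enrollment: generate a per-user secret once and store it with the user record.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="jdoe", issuer_name="ExampleCorp")
print("Scan this URI with an authenticator app:", uri)

def verify_reset_request(user_secret: str, submitted_code: str) -> bool:
    """Hypothetical help desk check: proceed with a reset only if the TOTP code matches."""
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)
```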
Hiring
In the recruitment area, OWASP suggests the following best practices:
- Establish a process for reporting suspicious candidates (those suspected of using AI-generated identities) to the relevant department.
- Use automated solutions to detect forged documents, such as fake passports and IDs, and inform candidates that their identity documents will be checked for deepfake-generated forgeries.
- Include a note in all job postings stating that no audio or video manipulation methods will be allowed during the interview process.
- Audit all hiring practices and ensure all HR department employees follow best practices for background checks, references, resume reviews, and candidate interviews.
Sensitive data disclosure
When dealing with sensitive data, such as customers’ personal data, the OWASP guide suggests the following best practices:
- Review current policies and procedures for sensitive data disclosure across all departments, and interview employees in those departments to identify the workflows actually in use, which may differ from the documented ones.
- Identify gaps in current procedures.
- Identify which processes are allowed to be executed without MFA.
- Ensure human-based authentication methods follow security best practices.
Brand monitoring
For brand monitoring, the following best practices are recommended:
- Review current brand monitoring tools and services and ensure they can recognize deepfake content.
- Ensure all employees across different departments know about deepfake content types and how to report such content to the appropriate department.
Event response
In the event response area, ensure the following:
- You have an established process in place to report deepfake content.
- Your current service level agreement (SLA) with digital forensic companies includes a section for dealing with deepfake incidents.
- You have an established process to take down deepfake content, such as copyright infringements, lookalike domains, and other fabricated content.
Deepfake incident response plan
The OWASP guide suggests a general incident response plan to identify and respond to deepfake content. It proposes the following general steps:
- Create a governance structure to respond to deepfake incidents.
- Define the escalation procedures to follow when a deepfake is identified.
- Identify how to take down deepfake content and establish the legal avenues for pursuing such cases officially.
- For each deepfake incident type, define the relevant crisis communication plan. The deepfake scenarios are:
  - Financial gain through fraud by impersonation
  - Impersonation for cyberattacks
  - Job interview fraud
  - Mis-, dis-, and malinformation
- Categorize the deepfake incident: determine whether it is part of a larger campaign or an isolated incident (a sketch of how this categorization might be recorded appears after this list). The OWASP guide suggests incident response plans should account for the following implications:
  - Reputational damage
  - Extortion pressure following a ransomware or data exfiltration event
  - Hacktivism / corporate activism
  - Financial fraud
  - Sensitive information disclosure
  - Industrial espionage
  - Computer or network breaches
  - Misleading stakeholders
  - Stock price manipulation
- Determine whether your organization has the required deepfake identification technology; if not, ask your digital forensics provider to supply this capability.
- Define when to request help from law enforcement.
- Ensure that the incident response plan is audited regularly and updated continually.
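To illustrate how the categorization step might be recorded in an incident tracking system, here is a minimal sketch that encodes the scenarios and implications listed above as Python enums; the type and field names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Scenario(Enum):
    FINANCIAL_FRAUD_IMPERSONATION = auto()
    CYBERATTACK_IMPERSONATION = auto()
    JOB_INTERVIEW_FRAUD = auto()
    MIS_DIS_MAL_INFORMATION = auto()

class Implication(Enum):
    REPUTATIONAL_DAMAGE = auto()
    EXTORTION_PRESSURE = auto()
    HACKTIVISM = auto()
    FINANCIAL_FRAUD = auto()
    SENSITIVE_DATA_DISCLOSURE = auto()
    INDUSTRIAL_ESPIONAGE = auto()
    NETWORK_BREACH = auto()
    MISLEADING_STAKEHOLDERS = auto()
    STOCK_PRICE_MANIPULATION = auto()

@dataclass
class DeepfakeIncident:
    scenario: Scenario
    part_of_campaign: bool  # larger campaign vs. isolated incident
    implications: set[Implication] = field(default_factory=set)
    escalated_to_law_enforcement: bool = False

incident = DeepfakeIncident(
    scenario=Scenario.FINANCIAL_FRAUD_IMPERSONATION,
    part_of_campaign=False,
    implications={Implication.FINANCIAL_FRAUD, Implication.REPUTATIONAL_DAMAGE},
)
```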
Awareness training
Ensure all employees receive adequate training on how to identify deepfake content. The OWASP guide proposes that employee awareness training should, at a minimum, cover the following points:
- What deepfakes are
- What to do if you think a deepfake is targeting you
- What to do if you are the subject of a deepfake
- Where to report deepfakes
OWASP provides comprehensive guidance to mitigate deepfake risks. Organizations must prepare by assessing their current vulnerabilities, implementing MFA, establishing robust verification processes, and creating incident response plans for deepfake incidents. Employee awareness training is critical for recognizing and reporting synthetic media threats that could compromise digital identity, financial security, and brand reputation.
