The rise of artificial intelligence (AI) is a growing concern for organisations. As outlined in our Cyber Awareness for AI adoption article, although AI drives operational efficiency, it also provides more sophisticated means of launching cyber attacks and spreading false information.
Driven by advances in AI tooling, deepfake content has spread rapidly in recent years, growing 550% between 2019 and 2023 (Deloitte, 2024).
But what are deepfakes? They are:
📹 Media content created by AI technologies
🎭 Designed to be deceptive
🧑💻 Often leveraged for digital impersonation
Deepfakes are typically generated by machine-learning algorithms combined with voice or facial-mapping software that can mimic a text, video or audio clip of a person.
The shift from Generative AI to, more recently, Agentic AI has significant implications for cyber security. Agentic AI refers to AI systems that exhibit autonomous, goal-directed behaviour. Think of it like a self-driving car – you state your destination, and it drives you there, making decisions on the best route. In contrast, traditional Generative AI responds to prompts more passively – you are in the driving seat, adjusting the gears and speed as you go.
Agentic AI significantly amplifies security risks by enabling the autonomous creation and distribution of highly realistic fake content, potentially without human oversight. This growth has made deepfakes a top cyber security challenge, with 92% of executives reporting ‘significant concerns’ about the misuse of AI (Sentinel, 2023).
Organisations need to start treating deepfake threats as a very real, tangible risk, and to address it by ensuring their people are adequately trained to identify a deepfake and the tactics threat actors use when deploying them.
This blog will delve into how best to position deepfake training at your organisation and practical steps you can take to protect against them.
🧑🧑🧒🧒 Democratising Deepfakes
Deepfakes rely on advanced machine-learning techniques whose power and reach have grown with the adoption of AI in recent years. For over two decades, increasingly capable photo and video editing software has been available for legitimate uses such as film production and national security.
However, recent advances and greater accessibility mean these capabilities are no longer reserved for international security organisations; they are now in the hands of the public and, by extension, bad actors. For example, a variety of consumer tools enable novices to create face swaps from a phone or PC with limited effort.
These include open-source tools such as DeepFaceLab and FaceSwap. Users simply upload two videos (one source, one target) and the software automates the training and blending, seriously democratising the technology and its potential for misuse.

🔊 Two types of deepfakes
Deepfakes have become more widespread, particularly in the media, with high-profile video fakes circulating online, notably during the US election. However, they come in a range of forms. Two key categories are:
- Audio Cloning
- Face Swapping (Image and Video)

1. Audio Cloning
Audio deepfakes use AI to replicate a person’s voice without any visuals. They are often used by threat actors and fraudsters in phone scams – for example, impersonating a senior stakeholder in a corporate setting to direct urgent wire transfers.
In September 2024, fraudsters mimicked the voice of American politician Jay Shooster using a 15-second sample taken from a television appearance, and used it to scam his parents into paying $30,000 to bail him out of prison after a supposed accident.
Audio cloning is recognised as the first evolution of deepfakes. It is becoming more sophisticated as AI technology improves sound quality and effectiveness, so detection now relies more on contextual audio clues than on poor audio quality.
2. Face Swapping Images and Videos
The most notable type of deepfake attracting media attention is the face swap. This type of deepfake overlays an individual’s face onto someone else’s body in motion. You may have seen this in the media or on social platforms such as X (formerly Twitter), where a celebrity or political figure has been ‘deepfaked’, appearing to say or do something out of character.
Historically, these have been largely pornographic in nature. For example, deepfake pornographic images of Taylor Swift spread rapidly through X; one image accumulated 47 million views before it was taken down.
The technology behind these hinges on neural networks, a type of machine-learning model inspired by the structure and function of the human brain. They excel at tracking expressions and matching them frame by frame with high-fidelity detail, producing realistic illusions. These models continue to grow more advanced and sophisticated – causing alarm for organisations and key figures.
The democratisation of AI technologies represents a shift in how deepfakes are used. They are no longer reserved for spreading political misinformation or for pornographic purposes; they are being deployed against organisations for large financial gain. As such, organisations need to be prepared to respond to these threats as they continue to emerge.
🛡️ Your Defence – Cyber Security Training

As with phishing prevention and mitigation, organisations should invest in equipping employees to act as “first responders” who detect and report disinformation and deepfake-related threats in the workplace. Training should be tailored to empower employees to identify deepfakes and provide them with clear reporting channels. This supports a cultural shift in which people act as champions and feel safe to report a suspected threat without fear of judgment if they turn out to be wrong.
Firstly, the training needs to provide a high-level overview of deepfakes and the threat they pose to the organisation, pitched at the right level to be engaging and understandable for all employees. If it is too high-level, the importance may be lost and it can become a click-through exercise. Conversely, if it is too low-level and technical, it loses relevance for most employees and fails to cut through.
Beyond explaining the two key deepfake categories described above, training should outline the following signs to empower employees to be on the lookout, paired with real-life examples that contextualise the threats for the organisation.
🔍 How To Spot a Deepfake
Audio Clues
Training should outline the following signs to support individuals in determining if an audio clip is fake.
- Choppy sentences
- Varying tone or inflection in speech
- Unusual phrasing – is this how the speaker would typically say it?
- The context of the message – is it relevant to a recent discussion, and can the speaker answer related questions?
- Contextual clues – are the background sounds consistent with the presumed location of the speaker?


Visual Clues
An employee should look for the following signs when trying to determine whether an image or video is fake (a minimal automated version of the first check is sketched after this list):
- Blurring evident in the face but not elsewhere in the image or video (or vice-versa)
- A change of skin tone near the edge of the face
- Double chins, double eyebrows, or double edges around the face
- Whether the face gets blurry when it is partially obscured by a hand or another object
- Lower-quality sections throughout the same video
- Box-like shapes and cropped effects around the mouth, eyes and neck
- Blinking (or lack thereof), movements that are not natural
- Changes in the background and/or lighting
- Contextual clues – Is the background scene consistent with the foreground and subject?
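Some of these checks can be partially automated. Below is a minimal sketch of the first clue – comparing the sharpness of the detected face region against the frame as a whole – assuming the opencv-python package is installed; the threshold, function names and approach are illustrative only, not a production deepfake detector.

```python
# Minimal sketch of one automated visual check: is the face region noticeably
# blurrier (or sharper) than the frame as a whole? Assumes opencv-python.
# The threshold and helper names are illustrative, not a tested detector.
import cv2

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def sharpness(gray_region) -> float:
    """Variance of the Laplacian – a common, rough proxy for sharpness."""
    return cv2.Laplacian(gray_region, cv2.CV_64F).var()

def face_blur_mismatch(image_path: str, ratio_threshold: float = 3.0) -> bool:
    """Return True if the face's sharpness differs from the whole frame's
    sharpness by more than ratio_threshold in either direction."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False  # no face detected, nothing to compare
    x, y, w, h = faces[0]
    face_sharpness = sharpness(gray[y:y + h, x:x + w])
    frame_sharpness = sharpness(gray)  # crude baseline: the whole frame
    ratio = max(face_sharpness, frame_sharpness) / (min(face_sharpness, frame_sharpness) + 1e-6)
    return ratio > ratio_threshold
```

A mismatch flagged by a crude check like this is a prompt for human review, not proof of manipulation – compression, depth of field and lighting can all produce similar effects.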
In July 2024, an executive at Ferrari received WhatsApp messages that appeared to have been sent by the CEO about an urgent, confidential financial transaction.
Despite the convincing nature of the messages and voice cloning that mimicked the CEO’s accent, the executive noticed slight inconsistencies in tone during a follow-up call and asked the caller a question only the CEO would know – the title of a book he had recommended days earlier. Unable to answer, the fraudster hung up.
🧠 Where Does Current Training Fall Short?
Existing training often fails to engage employees and, in turn, to equip them with the understanding required to combat threats such as deepfakes. This can be attributed to organisations viewing training as a compliance checkbox rather than a vehicle to drive a security-first culture. Four key shortfalls in training are:
- Training is delivered in unengaging formats – Most cyber training is delivered through static slide decks or monotonous e-learning modules, with little to no interactivity or real-world simulation, making it hard to engage employees.
- Content is rarely updated – Much of the material is outdated, with annual training referencing old threats and technologies. This fails to prepare employees for emerging risks like AI-driven deepfakes and lacks the context needed to be engaging.
- People hate doing the training – Motivation is low and completion is rushed. Messaging is framed around compliance rather than creating cyber champions within the organisation, and poor UX design and outdated content add to the frustration.
- Knowledge retention is typically terrible – One-off annual sessions do little to reinforce long-term memory or behaviour change. Without varied channels and practical reinforcement, most of the information is forgotten within days.
🚀 Three Practical Steps Which Can Help Protect Against Deepfakes

Here are three practical steps leaders can implement quickly to protect against deepfakes.
1. Develop a cyber-security-first culture: “Cyber is Everyone’s Responsibility”
Employees need to be empowered to lead as cyber champions, driving their own cyber hygiene and acting as the organisation’s first line of defence. It’s often said that culture “comes from the top”, meaning leaders and executives play an important role in shaping the cyber culture of an organisation. To drive this, it is important they contextualise cyber through top-down communication and training.
- Senior management should issue three email bulletins (one per month) explaining AI-driven threats, including deepfakes. The first bulletin should introduce the topic of deepfakes, the second should build on this with a case study, and the final bulletin should cover the contextual clues used to detect and report them. For the case study and final bulletin, reinforce the importance of having a ‘security question’ in mind, as demonstrated in the Ferrari example earlier.
- Annual e-learning training modules should be updated to ground cyber training in the organisation’s context and sector, drawing on recent events and including AI and deepfake use cases. Onboarding materials and training for new joiners should also be updated and co-created with the Human Resources department.
- Face-to-face drop-in sessions should also be leveraged with curated content to reinforce the modularised training and supporting leadership communications. Whether it is in a team’s existing rituals or a town hall, discussing the topic of deepfakes in a face-to-face format will reinforce the message by being engaging and highlighting the importance from the leadership perspective.
In January 2024, the UK-based engineering firm Arup fell victim to an advanced deepfake fraud in which threat actors used cloned voice and video on a conference call to impersonate the Chief Financial Officer. The targeted employee transferred HK$200 million (around £20 million) to the criminals – the highest sum known to date.
2. Layer controls, adopt a defence-in-depth approach
Organisations can adopt several quick technical controls to mitigate the risk of deepfakes. It is important to adopt a layered approach to these controls to ensure there are multiple lines of defence. For example, enabling Multi-Factor Authentication (MFA) across the organisation, watermarking communications and lastly, exploring AI-based deepfake detection tools.
- Using MFA, such as second-device confirmation or biometrics, for sensitive communications ensures the identity of participants is verified beyond voice or appearance alone, both of which can be manipulated. This can stop social engineering attempts that rely solely on manipulated voice or video (a minimal sketch of second-factor verification appears in the first example after this list).
- Apply digital watermarks or cryptographic hashes to video and audio assets to verify that media hasn’t been altered since creation. This can be done using embedded metadata or visible branding elements. The impact lies in creating a traceable, tamper-evident trail to combat deepfakes (the hashing side of this is shown in the second example after this list).
- Explore integrating off-the-shelf deepfake detection tools such as Microsoft Video Authenticator or Deepware Scanner to analyse videos for manipulation signatures, like inconsistent facial movements or voice mismatches. These tools flag or block fake content before it reaches employees or customers. This is a more advanced measure and is likely only worthwhile for larger organisations in heavily targeted sectors.
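To make the MFA point above concrete, here is a minimal sketch of verifying a time-based one-time code as a second factor before a sensitive request is actioned. It assumes the open-source pyotp library; the enrolment flow and function names are illustrative rather than a specific product integration.

```python
# Minimal sketch: verify a caller with a time-based one-time code (TOTP) before
# acting on a sensitive request. Assumes the pyotp package; each employee's
# secret would be provisioned out of band (e.g. in an authenticator app).
import pyotp

def provision_secret() -> str:
    """Generate a new base32 TOTP secret to enrol in an authenticator app."""
    return pyotp.random_base32()

def verify_caller(totp_secret: str, code_from_caller: str) -> bool:
    """Return True only if the caller supplies the current one-time code."""
    totp = pyotp.TOTP(totp_secret)
    # valid_window=1 tolerates one 30-second step of clock drift
    return totp.verify(code_from_caller, valid_window=1)

if __name__ == "__main__":
    secret = provision_secret()
    print("Enrol this secret in an authenticator app:", secret)
    # In a real workflow the code comes from the person on the call, read from
    # their own device - never from the same channel the request arrived on.
    current_code = pyotp.TOTP(secret).now()
    print("Caller verified:", verify_caller(secret, current_code))
```

The key design point is that the one-time code travels over a second channel the attacker does not control, so a cloned voice or face alone is not enough to authorise the request.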
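For the watermarking and hashing control, the sketch below records SHA-256 digests when a media asset is created and re-checks them later to confirm it has not been altered. The manifest location, JSON layout and function names are assumptions for illustration; in practice the manifest itself must be stored somewhere tamper-resistant.

```python
# Minimal sketch: detect tampering of media assets by comparing SHA-256 digests
# against a manifest recorded at creation time. Manifest format is illustrative.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large video files never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record(manifest: Path, asset: Path) -> None:
    """Add (or update) the asset's digest in the manifest at creation time."""
    entries = json.loads(manifest.read_text()) if manifest.exists() else {}
    entries[asset.name] = sha256_of(asset)
    manifest.write_text(json.dumps(entries, indent=2))

def verify(manifest: Path, asset: Path) -> bool:
    """Return True if the asset still matches the digest recorded at creation."""
    entries = json.loads(manifest.read_text())
    return entries.get(asset.name) == sha256_of(asset)
```

A hash only proves the file is unchanged since it was recorded; pairing it with visible watermarks or signed metadata helps viewers trust content that circulates outside your own systems.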

3. Implement a Deepfake-Specific Incident Response Playbook
It is often not a question of ‘if’ but ‘when’ an incident will occur, which is why it is important to focus on how to respond. Organisations should develop or update their existing incident response playbooks to specifically address the emerging threat of AI and deepfakes. This involves creating structured procedures for identifying, reporting, escalating and containing suspected synthetic media attacks, whether they involve impersonated executives or fraudulent communications.
- The playbook should clearly define reporting paths, including who employees should contact if they suspect a deepfake, how to preserve the evidence, and what not to do (e.g. engaging with the content).
- It must have clearly defined roles and responsibilities as well as outline escalation protocols for legal, PR, and executive teams in case the threat becomes public or involves sensitive assets. Teams and individuals must be accountable and equipped to respond to such threats using the outlined template – there is no point in having a structured response plan that no one knows about.
- The response plan should be tried and tested. A tabletop exercise should be conducted with the accountable stakeholders to ensure that the response plan is fit for purpose and actionable in the event of a deepfake incident.

Conclusion
Protecting your organisation from AI-enabled attacks and deepfakes does not need to be complicated. Focus on quick wins across people, technology, and response.
Start by empowering staff to think critically – they are your first and most adaptive line of defence. Secondly, layer in fundamental controls like MFA on sensitive communications – this ensures that even if someone mimics a voice or face, they cannot bypass identity checks. Finally, prepare for incidents with a dedicated response playbook – include clear reporting lines and escalation steps so your teams know exactly what to do when an event occurs.
Don’t fall into the trap of thinking, ‘It won’t happen to us.’ Organisations that take proactive steps today will be equipped to respond swiftly and with confidence when deepfake challenges arise. Being proactive against emerging threats is essential to safeguarding your organisation and maintaining customer trust – inaction creates a breeding ground for vulnerabilities. Contact CyPro today for expert guidance.