
Written by Scottie Cole
At RSA this year, a demo showed a spoofed executive on a video call, convincing enough that you'd approve a wire transfer without hesitation. Then the same executive, voice only, delivered the same message over the phone. Both were fabricated. Neither looked fake. The people watching weren't laughing.
That demo was a proof of concept. The attacks it depicted are not.
Deepfake technology has crossed a threshold. The compute cost dropped, the quality went up, and the tools are accessible to anyone with a mid-range GPU and an afternoon to spend. What used to take a film studio now takes a laptop. Security teams that haven’t updated their threat models to account for this are already behind.
How attackers are using this now
The most direct use is CEO fraud. An attacker clones audio from LinkedIn videos or earnings calls, then calls a finance employee impersonating the CFO and requests an urgent wire transfer. In 2024, a multinational firm in Hong Kong lost $25 million after employees joined a video call with what they believed were company executives. Every person on that call was fake.
Hiring pipelines are getting hit too. Attackers create synthetic identities backed by deepfake video interviews to get placed inside target organizations. Once hired, they have credentialed access from day one. The FBI issued a formal warning about this in 2022, and the volume has grown since.
In social engineering engagements, we’ve seen how far a convincing voice clone gets you. Call the help desk as the CTO, say you’re locked out and need a password reset before a board meeting: the human instinct to help someone in authority kicks in hard. Add a face to that call and the barrier drops further.
Phishing lures are changing too. Personalized video messages from a “known contact” asking you to click a link or confirm credentials bypass the skepticism most users have for text-based phishing. The tell-tale signs people learned to spot – weird phrasing, generic salutations, pressure tactics – disappear when the message comes from a face they recognize.
What good detection actually looks like
No single signal defeats a well-made deepfake. You’re looking for inconsistencies that compound. One oddity is noise; three is a pattern.
- Watch the edges of the face during movement. Deepfake artifacts cluster at hairlines, ears, and the jaw, and motion makes them visible. Ask the caller to turn their head.
- Lighting that doesn’t change with movement is a red flag. Real faces catch ambient light differently as they shift; deepfake renders are often static in how they handle it.
- For voice calls, ask something the real person would know cold – not a password or PIN, but something contextual and recent. A spoofed voice can deliver a scripted message; it can’t answer an unscripted question about last week’s project review.
- Any unexpected financial request, regardless of who appears to be asking, should require out-of-band confirmation. Call back on a number you already have. Send a Slack message to the person directly.
- Blinking patterns and micro-expressions are still hard for generative models to reproduce naturally. Unnaturally smooth facial movement or eyes that don’t quite sync with speech are worth noting.
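The "one oddity is noise; three is a pattern" logic above can be sketched as a simple triage helper. This is an illustrative sketch only, not a product or a detection algorithm: the signal names and the three-signal threshold are assumptions made up for the example, and the real judgment call stays with a human.

```python
# Illustrative sketch: treating verification cues as compounding signals.
# Signal names and the escalation threshold are hypothetical.
SIGNALS = {
    "edge_artifacts": "artifacts at hairline, ears, or jaw during head movement",
    "static_lighting": "lighting on the face doesn't change as the caller moves",
    "failed_context_check": "couldn't answer an unscripted question about recent work",
    "unexpected_financial_request": "money, credentials, or access requested unexpectedly",
    "unnatural_motion": "overly smooth facial movement or eyes out of sync with speech",
}

def assess_call(observed: set[str]) -> str:
    """Count observed anomalies: one is noise, three is a pattern."""
    hits = observed & SIGNALS.keys()
    if len(hits) >= 3:
        return "escalate"   # treat as likely fabricated; confirm out-of-band
    if len(hits) >= 1:
        return "verify"     # ask for a head turn or an unscripted question
    return "proceed"
```

A single flag (`assess_call({"static_lighting"})`) prompts an extra check; three or more compound into an escalation, which mirrors how these cues should be weighed in practice.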
Training users to be skeptical without being paranoid
The goal isn’t to make employees distrust every video call. That isn’t workable. The goal is to build specific habits for specific situations, focusing on the ones that matter most: requests for money, credentials, or access.
Run deepfake simulations the same way you run phishing simulations. Show employees what a good fake looks like, not a bad one. If they only ever see the obvious artifacts, they’ll fail when they see a convincing one. The RSA demo worked because it looked real, and that’s the version your users need to train against.
Establish a verbal codeword protocol for high-stakes requests. If your CFO calls asking for an emergency wire, there should be a word or phrase that gets exchanged – something that isn’t in any recording, email, or public source. This is low-friction and breaks the attack entirely.
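If the codeword has to live somewhere for help-desk verification, it shouldn't sit in plaintext where an attacker who compromises email or chat could scrape it. A minimal sketch of one way to do that, assuming a salted-hash store (the function names and parameters here are illustrative, not a prescribed implementation):

```python
# Illustrative sketch: store only a salted hash of the verbal codeword,
# so the plaintext never appears in email, tickets, or config files.
import hashlib
import hmac
import os

def _normalize(codeword: str) -> bytes:
    # Tolerate casing and spacing differences in a spoken phrase.
    return " ".join(codeword.casefold().split()).encode()

def enroll(codeword: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store; the plaintext is never persisted."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", _normalize(codeword), salt, 100_000)
    return salt, digest

def verify(codeword: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", _normalize(codeword), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The point isn't the crypto; it's that the protocol stays low-friction for the humans while the secret itself never lands anywhere a recording, inbox, or public source could leak it from.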
Teach the habit of friction. When a request feels urgent and the channel is unusual, that’s exactly when to slow down. Urgency is a social engineering mechanic. Deepfakes add a visual layer to it, but the underlying pressure pattern is the same one that’s worked for decades. Recognizing the pressure is a skill; the medium changes, but the structure doesn’t.
