In March 2025, a finance director in Singapore joined what seemed like a routine Zoom call with senior leadership. The CFO was there. Other executives too. They discussed an urgent fund transfer.
The finance director authorized a $499,000 payment.
None of those executives were real. Every face on that video call was a deepfake.
This isn't science fiction. It's happening every day. And the losses are staggering.
The most infamous case? A Hong Kong finance worker transferred $25 million after a video call with what they believed was their CFO and colleagues. The attackers had created deepfakes using publicly available footage from earnings calls and company videos.
Unlike other departments, finance teams can move money directly. They have authority to approve wire transfers. They handle urgent transactions regularly.
Attackers know this.
And here's the terrifying part: a convincing voice clone can be built from as little as three seconds of audio. Three seconds. That's less than a typical greeting.
Where do criminals get that audio?
Earnings calls. Conference keynotes. Company videos. Every CFO who has ever spoken publicly has handed attackers everything they need.
Hong Kong police investigating the Arup case discovered just how sophisticated the attack was. The perpetrators built deepfake likenesses of multiple executives from publicly available footage and staged an entire multi-person video conference around a single target.
The finance worker who approved the transfer had no reason to doubt what they saw. The faces matched. The voices matched. The request seemed legitimate.
This is what trust collapse looks like.
Some companies think AI detection tools will solve this. They won't.
Here's the problem: in controlled studies, human accuracy at identifying high-quality deepfake videos drops to just 24.5%. Yet 60% of people believe they could spot a fake.
This confidence is completely unfounded.
And AI detection tools? They're in an arms race they keep losing: as soon as detection improves, deepfake generation adapts. At best, it's a technological stalemate.
The solution isn't better detection. It's better verification.
Forward-thinking companies are implementing what we call "cryptographic trust infrastructure":
1. Kill the single point of authority. No single person should be able to authorize large transfers based on a video call. Period. (The second sketch below shows one way to enforce this.)
2. Out-of-band verification. If someone requests a transfer via video, verify through a completely separate channel. Call them back on a known number. Send a verification code through internal systems. (A minimal sketch of this flow follows the list.)
3. Proof-based identity verification. This is where not.bot comes in. Instead of trusting what you see and hear, verify identity through cryptographic proof that can't be faked.
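What does point 2 look like in practice? Here's a minimal sketch in Python. Everything in it is illustrative: `send_via_internal_channel` is a stand-in for whatever trusted second channel your organization already has (a callback to a known phone number, an internal chat system), not a real product API.

```python
import hmac
import secrets
import time

# Hypothetical out-of-band flow: the transfer request arrives over video,
# but the confirmation code travels over a separate, pre-established channel.
CODE_TTL_SECONDS = 300  # codes expire after five minutes
_pending: dict[str, tuple[str, float]] = {}  # request_id -> (code, issued_at)

def issue_challenge(request_id: str, send_via_internal_channel) -> None:
    """Generate a one-time code and deliver it outside the video call."""
    code = f"{secrets.randbelow(10**6):06d}"  # 6-digit random code
    _pending[request_id] = (code, time.monotonic())
    send_via_internal_channel(code)

def confirm_challenge(request_id: str, submitted_code: str) -> bool:
    """Approve only if the code matches, is unexpired, and unused."""
    entry = _pending.pop(request_id, None)  # pop makes the code single-use
    if entry is None:
        return False
    code, issued_at = entry
    if time.monotonic() - issued_at > CODE_TTL_SECONDS:
        return False
    # Constant-time comparison avoids leaking how many digits matched.
    return hmac.compare_digest(code, submitted_code)
```

The structural point: the person on the video call can only complete the transfer by proving control of a channel the attacker doesn't have.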
When someone claims to be your CFO, you don't trust the pixels on your screen. You verify their identity mathematically.
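not.bot's actual protocol isn't spelled out here, so treat the following as a generic sketch of the underlying idea rather than its real API: each executive enrolls a signing key, and a transfer is honored only when a quorum of enrolled keys has signed the exact request. This also enforces point 1, because with a quorum of two, no single credential, and no single deepfake, can move money. The sketch uses Ed25519 from the widely deployed `cryptography` package; key enrollment is assumed to happen out of band.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Illustrative enrollment. In reality the public keys would come from a
# managed enrollment process, not be generated alongside the private keys.
def enroll(names):
    private_keys = {name: Ed25519PrivateKey.generate() for name in names}
    directory = {name: k.public_key() for name, k in private_keys.items()}
    return private_keys, directory

def approve_transfer(request: bytes,
                     signatures: dict[str, bytes],
                     directory: dict[str, Ed25519PublicKey],
                     quorum: int = 2) -> bool:
    """Honor a transfer only if `quorum` distinct enrolled signers
    produced valid signatures over the exact request bytes.

    A deepfake can imitate a face and a voice, but it cannot produce
    a valid signature without the signer's private key.
    """
    valid = 0
    for name, sig in signatures.items():
        public_key = directory.get(name)
        if public_key is None:
            continue  # unknown signer: ignore
        try:
            public_key.verify(sig, request)
            valid += 1
        except InvalidSignature:
            continue  # forged or corrupted signature
    return valid >= quorum

# Example: a $499,000 transfer needs at least two real signatures.
private_keys, directory = enroll(["cfo", "controller", "treasurer"])
request = b"PAY 499000 USD to account 12-345-678"
sigs = {
    "cfo": private_keys["cfo"].sign(request),
    "controller": private_keys["controller"].sign(request),
}
assert approve_transfer(request, sigs, directory)                       # two valid signers: approved
assert not approve_transfer(request, {"cfo": sigs["cfo"]}, directory)   # one is not enough
```

Note what the quorum buys you: even if an attacker somehow steals one executive's key, the pixels on the screen still can't conjure the second signature.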
The deepfake arms race will only intensify. By 2026, deepfake-as-a-service platforms will make these attacks available to anyone with a credit card.
The question isn't whether your organization will face a deepfake attack. It's whether you'll be ready.
Detection assumes you can spot the fake. That's a losing bet.
Verification assumes nothing you see is real until proven otherwise. That's the only winning strategy.
Your voice is now a credential that can be stolen. It's time to treat identity verification with the same rigor you treat your financial controls.
Because if your CFO can be cloned in 3 seconds, the question isn't if you'll be targeted—it's when.