Here's a truth that most security vendors won't tell you: detection-based security is mathematically doomed.
Not eventually doomed. Not doomed if AI gets smarter. Structurally doomed from day one.
There's a fundamental asymmetry built into every detection system that guarantees generators will win over time. Understanding this asymmetry is the key to understanding why verification is the only path forward.
Let's start with where we are today. Fraud researcher David Birch has been documenting the scale of synthetic fraud for years, and the numbers are staggering. They're getting worse every year.
Why? Because detection can't keep up with generation.
Here's the mathematical truth at the heart of every detection system:
Generators get feedback. Detectors don't.
When a generator (a bot, a deepfake creator, an AI agent) tries to pass detection and fails, it learns exactly why. The rejection message, the error code, the behavior that triggered detection—all of this becomes training data for the next attempt.
When a generator tries to pass detection and succeeds, the detector has no idea it failed. It can't learn from mistakes it doesn't know it made.
Think about what this means:
Over time, attackers learn from every failure while defenders are blind to their failures. The system improves on one side only.
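To make the asymmetry concrete, here's a toy simulation. It isn't any real detector or attack; every name and number (the `caught` check, the fixed threshold, the "tell" score) is invented for the sketch. The detector flags anything whose tell score crosses its threshold, and the generator shaves its tell a little every time it gets rejected.

```python
import random

THRESHOLD = 0.5  # the detector flags anything scoring above this

def caught(tell_score: float) -> bool:
    """The detector's entire decision: does this sample look fake?"""
    return tell_score > THRESHOLD

def arms_race(rounds: int = 50, seed: int = 1) -> float:
    rng = random.Random(seed)
    tell = 0.9  # the generator starts out easy to spot
    for _ in range(rounds):
        sample = tell + rng.uniform(-0.05, 0.05)  # noisy output
        if caught(sample):
            tell *= 0.9  # rejection is feedback: the generator adapts
        # On success the detector receives no signal at all, so its
        # threshold never moves. There is nothing for it to update.
    return tell

final_tell = arms_race()
print(f"final tell score: {final_tell:.3f} (still caught: {caught(final_tell)})")
```

A handful of rejections is enough for the generator to settle below the threshold and stay there. The detector's threshold never moves, because the only samples it can learn from are the ones it already caught.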
This isn't a fixable bug. It's a structural feature of how detection works.
This explains a pattern that frustrates security researchers: why deepfake detection tools plateau at 70-80% accuracy while generation quality keeps improving.
Detection tools are trained on known deepfakes. They get good at catching those deepfakes. Then generators learn what detection looks for and avoid it. New deepfakes pass detection. But researchers don't know which ones passed—so they can't train on them.
The generator-detector arms race is inherently asymmetric. Generators iterate on every failure. Detectors can only iterate on the fakes they have already caught.
The same asymmetry explains the endless spam filter arms race.
Spam filters catch known spam patterns. Spammers learn which messages get through. They generate more messages like the successful ones. But the filter doesn't know which spam got through—only which it caught.
After decades, spam remains a massive problem. Not because filter developers are incompetent, but because the asymmetry can't be engineered away.
CAPTCHAs are the purest example of detection failure.
CAPTCHAs got harder. Users got frustrated. Bots got smarter. Eventually AI could pass CAPTCHAs more reliably than tired humans.
The arms race was unwinnable for the same structural reasons: every failed challenge taught bots exactly what tripped them, every harder challenge cost real humans more than it cost bots, and the defenders never learned which bots slipped through.
In July 2025, OpenAI's ChatGPT agent casually clicked through "I am not a robot" verification. The detection era is officially over.
Asymmetry creates another problem: false positives punish real people.
To catch more bad actors, you make detection stricter. Stricter detection means more false positives—real humans flagged as bots, legitimate transactions blocked as fraud.
These false positives have real costs: a human flagged as a bot is a locked-out customer, and a legitimate transaction blocked as fraud is lost revenue.
So you loosen detection. Now more bad actors get through.
You're trapped between false positives (hurting real users) and false negatives (missing bad actors). There's no detection threshold that solves this—because the fundamental asymmetry remains.
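You can see the trap in a few lines. The score distributions below are invented for illustration (real risk scores look messier), but the shape of the problem is the same: human and bot scores overlap, so every threshold trades blocked users against missed bots.

```python
import random

rng = random.Random(0)

# Invented risk scores for illustration only. Humans and bots overlap,
# which is exactly what makes the threshold choice a trap.
humans = [rng.gauss(0.35, 0.15) for _ in range(10_000)]
bots   = [rng.gauss(0.65, 0.15) for _ in range(10_000)]

for threshold in (0.40, 0.50, 0.60, 0.70):
    blocked_humans = sum(score > threshold for score in humans) / len(humans)
    missed_bots    = sum(score <= threshold for score in bots) / len(bots)
    print(f"threshold {threshold:.2f}: "
          f"{blocked_humans:6.1%} real users blocked, "
          f"{missed_bots:6.1%} bots missed")
```

Tighten the threshold and blocked humans climb. Loosen it and missed bots climb. No value in the sweep, or anywhere else, drives both to zero, because the distributions overlap.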
When people see AI-generated fakes, they often ask: "Can't we just use AI to detect AI?"
No. Here's why: an AI detector inherits the same blindness as every other detector. It never sees the fakes it missed, so it has nothing to train on, while the generator AI learns from every rejection. "Better detection" is still detection. The asymmetry doesn't care how sophisticated your detector is.
If detection is structurally doomed, what's the alternative?
Verification.
Instead of asking "does this look real?" we ask "can you prove it's real?"
Verification doesn't face the asymmetry problem because it reverses the burden of proof. There is no classifier to probe and no feedback loop to exploit: the question is no longer whether the defender can spot a flaw, but whether the claimant can produce a proof. A forger can learn what a detector looks for; it cannot learn its way around a proof it is unable to produce.
A deepfake video can look perfectly real. But the person depicted can't cryptographically prove they recorded it.
An AI agent can pass a CAPTCHA. But it can't cryptographically prove it's tied to a verified human identity.
A synthetic identity can look authentic. But it can't cryptographically prove it was created by a real person.
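Here's what that kind of proof looks like in miniature. This is a sketch using an Ed25519 signature from the open-source `cryptography` package, with placeholder payloads standing in for real media; how a key gets bound to a real person's identity is a separate, hard problem that the sketch deliberately skips.

```python
# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()
public_key = creator_key.public_key()

video_bytes = b"...recorded video content..."  # placeholder payload
signature = creator_key.sign(video_bytes)      # the creator's proof of origin

# Verification is binary: it either proves origin or it fails.
try:
    public_key.verify(signature, video_bytes)
    print("verified: signed by the holder of this key")
except InvalidSignature:
    print("rejected: no valid proof of origin")

# A forger can make the pixels look right, but cannot produce a valid
# signature over different content without the private key.
forged = b"...deepfake of the same person..."
try:
    public_key.verify(signature, forged)
except InvalidSignature:
    print("forgery rejected: signature does not match content")
```

Note the contrast with detection: there is no confidence score and nothing for an attacker to iterate against. The check either passes or it doesn't, and failing it a thousand times teaches the forger nothing about how to pass it.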
The shift from detection to verification is a shift from probabilistic guessing to mathematical certainty.
Detection says: "This is probably real (73% confidence)." Verification says: "This is cryptographically proven real."
In a world where AI can generate anything convincingly, probability isn't enough. Only mathematical proof provides reliable ground truth.
The arms race between generation and detection is over. Generation won—not because generators got smarter, but because the asymmetry was always in their favor.
The future belongs to verification. To cryptographic proof. To systems that don't guess whether something is real, but know.
That's what we're building.