This week we've been diving deep into personhood credentials—what they are, what makes a good one, and how not.bot implements them. Today we're stepping back to look at the broader academic landscape.
Researchers at MIT, Stanford, Microsoft, OpenAI, and dozens of other institutions are actively working on this problem. There's also been criticism, including some sharp concerns from privacy advocates like the Electronic Frontier Foundation.
Let's examine both sides.
The comprehensive paper "Personhood Credentials," published in August 2024, is a collaboration between researchers from:

- MIT
- Stanford
- Microsoft
- OpenAI
- Dozens of other universities, companies, and civil-society institutions
Their core argument: As AI becomes capable of impersonating humans at scale, we need a new category of credential that proves humanness without compromising privacy.
1. The AI Impersonation Problem Is Real and Growing
The paper documents how AI systems can now:

- Generate text, images, audio, and video that pass as human-made
- Carry on real-time conversations that are hard to distinguish from a person's
- Operate accounts autonomously, at a scale no human operation could match

This isn't theoretical. It's happening every day, right now.
2. Traditional Identity Systems Are Insufficient
Government IDs weren't designed for the digital age. They either:

- Reveal far more than humanness (name, birthdate, address), sacrificing the very privacy a credential should protect, or
- Don't translate online at all, leaving no way to tie a real person to a digital account
3. Detection-Based Approaches Are Failing
The researchers note the fundamental asymmetry we discussed earlier: generators learn from every fake that gets caught, while detectors learn nothing from the fakes that slip past them. This makes detection a losing battle.
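To see why this asymmetry compounds, here's a toy simulation (all numbers are invented for illustration): each time the detector catches a fake, the generator gets a training signal and improves; each time a fake slips through, the detector gets nothing to retrain on.

```python
import random

random.seed(0)

gen_quality = 0.10     # how human-like the generator's output is (0 to 1)
det_threshold = 0.50   # detector flags anything that looks less human than this

for round_no in range(1, 11):
    sample = gen_quality + random.uniform(-0.05, 0.05)
    if sample < det_threshold:
        # Caught: the generator learns from its failure and improves.
        gen_quality = min(1.0, gen_quality + 0.10)
    else:
        # Evaded: the detector never finds out this sample was fake,
        # so it gains no training data and its threshold stays put.
        pass
    print(f"round {round_no}: generator quality = {gen_quality:.2f}")
```

Within a handful of rounds the generator clears the threshold, and from then on the detector never receives another labeled failure to learn from.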
4. Personhood Credentials Offer a Solution
PHCs can:

- Prove there's a real human behind an account without revealing who that human is
- Enforce a limit of one credential per person, so bots can't stockpile them at scale
- Keep usage unlinkable, so activity can't be tracked across services or traced back to an identity

(A minimal code sketch of these properties appears below.)
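To make these properties concrete, here's a minimal sketch. Everything in it is illustrative: the names (`PhcIssuer`, `alias_for`) are invented for this post, and a production system would use blind signatures or zero-knowledge proofs rather than the bare set-membership check and HMAC shown here.

```python
import hashlib
import hmac
import secrets

class PhcIssuer:
    """Hypothetical issuer sketch: enforces one credential per verified person."""

    def __init__(self):
        self._issued = set()  # de-duplication index, not an identity log

    def issue(self, dedup_fingerprint: str) -> bytes | None:
        # The credential limit: a second request from the same person fails.
        if dedup_fingerprint in self._issued:
            return None
        self._issued.add(dedup_fingerprint)
        # The credential itself is a random secret with no identity inside it.
        return secrets.token_bytes(32)

def alias_for(credential: bytes, service_id: str) -> str:
    """Per-service pseudonym: stable within a service, unlinkable across services."""
    return hmac.new(credential, service_id.encode(), hashlib.sha256).hexdigest()

issuer = PhcIssuer()
cred = issuer.issue("person-fingerprint-123")
assert cred is not None
assert issuer.issue("person-fingerprint-123") is None  # limit enforced

print(alias_for(cred, "forum.example"))    # one alias on this service...
print(alias_for(cred, "market.example"))   # ...a different, uncorrelatable one here
```

The point of the sketch is the shape of the guarantees: the issuer can refuse duplicates without keeping an identity log, and one credential yields a different alias for every service.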
The academic consensus is clear: we need proof of personhood, and privacy-preserving approaches are preferable to surveillance-based alternatives.
Not everyone agrees. The Electronic Frontier Foundation published a critical response describing personhood credentials as a "dystopian tech concept."
Their concerns are worth taking seriously:
"What happens when governments mandate these credentials?"
This is a legitimate worry. If personhood credentials become mandatory government-issued IDs, they could enable surveillance rather than prevent it.
Our response: The academic paper specifically calls for a "marketplace of issuers"—multiple competing providers, not government monopoly. not.bot is designed as a voluntary tool, not a government mandate. The architecture prevents government control because we don't centralize data that governments could subpoena.
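To illustrate what a "marketplace of issuers" means architecturally, here's a hedged sketch (the issuer names are invented, and it assumes the third-party `cryptography` package): a verifier accepts a credential signed by any issuer on a user-chosen trust list, so no single issuer, governmental or otherwise, becomes a chokepoint.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical competing issuers; in practice each publishes its own public key.
issuers = {name: Ed25519PrivateKey.generate()
           for name in ("notbot", "university-coop", "civil-society-org")}
trusted_keys = [k.public_key() for k in issuers.values()]

def is_valid_phc(credential: bytes, signature: bytes) -> bool:
    """Accept a credential signed by ANY trusted issuer; no single gatekeeper."""
    for key in trusted_keys:
        try:
            key.verify(signature, credential)
            return True
        except InvalidSignature:
            continue
    return False

cred = b"humanness-credential"
sig = issuers["university-coop"].sign(cred)
print(is_valid_phc(cred, sig))  # True: issued by one of several trusted parties
```

The trust list lives with the verifier, not with any government, so adding or dropping an issuer is a local configuration change rather than a policy negotiation.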
"People without smartphones, without stable addresses, without traditional ID—they'd be locked out."
Valid concern. Any credential system could create new forms of exclusion.
Our response: not.bot works on any smartphone and doesn't require a permanent address or government ID after initial verification. We're actively working on accessibility for underserved populations. The alternative—a world where bots dominate and no human verification exists—also harms vulnerable groups who are disproportionately targeted by fraud.
"Authoritarian governments could use this to suppress anonymous speech."
A serious concern with historical precedent.
Our response: This is precisely why architecture matters. not.bot's design:

- Splits verification data across multiple parties, so no single party (including us) holds enough to identify anyone
- Issues unlinkable aliases, so activity on one service can't be correlated with activity on another
- Means we couldn't unmask a user even under legal compulsion, because no unified record exists to hand over

(A simplified illustration of the splitting idea follows below.)
A well-designed PHC system protects dissidents. A poorly designed one endangers them. The implementation details are everything.
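Here's a toy illustration of why splitting matters. This is not our production protocol; it's a bare n-of-n XOR secret sharing in Python, the simplest instance of the idea that data divided across parties reveals nothing until all of them cooperate.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int = 3) -> list[bytes]:
    # n-of-n sharing: the first n-1 shares are pure randomness,
    # the last one folds the secret in. All n are needed to recover it.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

record = b"verification-session-data"
shares = split(record)
assert reconstruct(shares) == record       # all parties together: recoverable
assert reconstruct(shares[:2]) != record   # any proper subset: random noise
```

Real multiparty computation goes further, computing over shares without ever reconstructing them, but the subpoena logic is the same: a demand served on any one operator yields only random-looking bytes.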
"It starts as 'prove you're human' and ends as 'prove your worthiness to participate.'"
This is the slippery slope argument, and it has some validity.
Our response: This is why the governance requirements matter. A PHC system should only prove humanness—not worthiness, not identity, not anything beyond the binary question "is there a real human behind this?" Any system that expands beyond this is no longer a personhood credential—it's an identity system or a reputation system, which are different things with different tradeoffs.
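As a sketch of what "binary proof only" looks like at the API level (the names here are hypothetical, and the MAC check stands in for a real signature or zero-knowledge verification), the entire response schema is one bit:

```python
import hashlib
import hmac
from dataclasses import dataclass

ISSUER_KEY = b"demo-issuer-key"  # stand-in for a real issuer's verification key

@dataclass(frozen=True)
class HumannessCheck:
    """The whole response: one bit. No name, no age, no reputation score."""
    human: bool

def verify(token: bytes, tag: bytes) -> HumannessCheck:
    # Validate the issuer's MAC over the token, then deliberately return
    # nothing beyond the binary answer.
    expected = hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()
    return HumannessCheck(human=hmac.compare_digest(expected, tag))

token = b"opaque-credential"
tag = hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()
print(verify(token, tag))           # HumannessCheck(human=True)
print(verify(token, b"\x00" * 32))  # HumannessCheck(human=False)
```

If the schema ever grows a second field, the system has stopped being a personhood credential.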
We take these criticisms seriously because they're serious. Here's how our design addresses each concern:
| Concern | not.bot's Architectural Response |
|---|---|
| Government control | Decentralized; no government partnership; user-controlled credentials |
| Exclusion | Works on any smartphone; no permanent address required; accessibility work ongoing |
| Weaponization | Multiparty computation; unlinkable aliases; we can't unmask users |
| Mission creep | Binary proof of humanness only; no reputation, no worthiness tests |
We're not building a surveillance system that happens to prove humanness. We're building a privacy system that happens to verify humanness.
The order matters.
Both the advocates and critics are responding to the same underlying reality: the internet is facing a trust crisis, and current solutions aren't working.
The researchers are right that:

- AI impersonation is a real, growing problem
- Detection-based defenses are losing the arms race
- Privacy-preserving verification beats surveillance-based alternatives

The critics are right that:

- Government mandates would turn a safety tool into a surveillance tool
- Careless designs would exclude people without smartphones or traditional ID
- In authoritarian hands, a badly built system endangers anonymous speech
- Mission creep from "prove you're human" to "prove your worthiness" is a real risk
The question isn't whether to build personhood credentials. It's how to build them correctly.
The academic community is converging on a set of principles for responsible PHC development:

- Privacy by architecture, not by policy promise
- Voluntary adoption, never government mandate
- A marketplace of competing issuers instead of a single gatekeeper
- Scope limited to the binary question of humanness
- Accessibility for people without traditional ID or stable addresses
These principles distinguish helpful verification systems from harmful surveillance systems.
We designed not.bot around these principles from the start. Not because we read the papers first, but because building for privacy and user control led us to the same conclusions the researchers reached.
The academic debate over personhood credentials isn't about whether they're needed. It's about how to build them responsibly.
The critics raise valid concerns. The researchers provide valid solutions. The key is implementation that takes both seriously.
We built not.bot to be that implementation: one that answers the critics' concerns and delivers the researchers' benefits.