Personhood Credentials (PHC) Academic Research

Published: 3 February 2026

This week we've been diving deep into personhood credentials—what they are, what makes a good one, and how not.bot implements them. Today we're stepping back to look at the broader academic landscape.

Researchers at MIT, Stanford, Microsoft, OpenAI, and dozens of other institutions are actively working on this problem. There's also been criticism, including some sharp concerns from privacy advocates like the Electronic Frontier Foundation.

Let's examine both sides.

The Academic Case for Personhood Credentials

The comprehensive paper "Personhood Credentials," published in August 2024, represents a collaboration between researchers from:

  • MIT
  • Microsoft Research
  • OpenAI
  • Stanford University
  • UC Berkeley
  • And several other leading institutions

Their core argument: As AI becomes capable of impersonating humans at scale, we need a new category of credential that proves humanness without compromising privacy.

Key Findings from the Research

1. The AI Impersonation Problem Is Real and Growing

The paper documents how AI systems can now:

  • Generate realistic text that passes human evaluation
  • Create convincing synthetic faces and voices
  • Automate social engineering at scale
  • Pass traditional verification methods (including CAPTCHAs)

This isn't theoretical. It's happening every day, right now.

2. Traditional Identity Systems Are Insufficient

Government IDs weren't designed for the digital age. They fail in at least one of three ways:

  • Require full identity disclosure (no privacy)
  • Are easily forged or stolen (no security)
  • Don't prove humanness (just identity)

3. Detection-Based Approaches Are Failing

The researchers note the fundamental asymmetry we discussed earlier: generators learn from failures while detectors can't learn from successes. This makes detection a losing battle.

4. Personhood Credentials Offer a Solution

PHCs can (as sketched below):

  • Prove humanness without revealing identity
  • Be cryptographically verified
  • Scale to internet-wide usage
  • Preserve user autonomy and privacy
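
To make the first two properties concrete, here's a minimal sketch of credential presentation in Python, using the third-party cryptography package. Everything here is an illustrative assumption, not any specific system's protocol: real PHC designs go further, using blind signatures or zero-knowledge proofs so that even the issuer can't link a presented credential back to enrollment. The sketch shows only the shape of the idea: the service verifies an issuer signature over an identity-free token.

```python
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: runs once, after the issuer has verified the person is human.
issuer_key = Ed25519PrivateKey.generate()
issuer_public = issuer_key.public_key()

credential_token = secrets.token_bytes(32)  # random bytes; no identity inside
credential_sig = issuer_key.sign(credential_token)

# Service side: checks that *some* verified human is behind the request,
# while learning nothing about who that human is.
def is_verified_human(token: bytes, sig: bytes) -> bool:
    try:
        issuer_public.verify(sig, token)
        return True
    except InvalidSignature:
        return False

assert is_verified_human(credential_token, credential_sig)
assert not is_verified_human(secrets.token_bytes(32), credential_sig)  # forgery fails
```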

The academic consensus is clear: we need proof of personhood, and privacy-preserving approaches are preferable to surveillance-based alternatives.

The Critics: EFF's "Dystopian" Concerns

Not everyone agrees. The Electronic Frontier Foundation published a critical response to personhood credentials titled "Dystopian Tech Concept."

Their concerns are worth taking seriously:

Concern 1: Government Control

"What happens when governments mandate these credentials?"

This is a legitimate worry. If personhood credentials become mandatory government-issued IDs, they could enable surveillance rather than prevent it.

Our response: The academic paper specifically calls for a "marketplace of issuers"—multiple competing providers, not government monopoly. not.bot is designed as a voluntary tool, not a government mandate. The architecture prevents government control because we don't centralize data that governments could subpoena.

Concern 2: Exclusion of Vulnerable Groups

"People without smartphones, without stable addresses, without traditional ID—they'd be locked out."

Valid concern. Any credential system could create new forms of exclusion.

Our response: not.bot works on any smartphone and doesn't require a permanent address or government ID after initial verification. We're actively working on accessibility for underserved populations. The alternative—a world where bots dominate and no human verification exists—also harms vulnerable groups who are disproportionately targeted by fraud.

Concern 3: Weaponization Against Dissidents

"Authoritarian governments could use this to suppress anonymous speech."

A serious concern with historical precedent.

Our response: This is precisely why architecture matters. not.bot's design (illustrated in the sketch after this list):

  • Doesn't require linking aliases to real identities
  • Allows anonymous verified participation
  • Can't be used to unmask users (even by us)
  • Operates independently of government systems
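
Here is a minimal sketch of how unlinkable aliases can work, assuming a design where a secret held only on the user's device derives a different stable pseudonym for each service. The construction (a plain HMAC) and all names are illustrative assumptions, not not.bot's actual protocol.

```python
import hashlib
import hmac

DEVICE_SECRET = b"held only on the user's device"  # in practice: random, never uploaded

def alias_for(service_id: str) -> str:
    """Derive a stable, service-specific pseudonym from the device secret."""
    return hmac.new(DEVICE_SECRET, service_id.encode(), hashlib.sha256).hexdigest()

# Each service sees a stable pseudonym...
assert alias_for("forum.example") == alias_for("forum.example")
# ...but without the device secret, no one (including the credential
# provider) can tell that two aliases belong to the same person.
assert alias_for("forum.example") != alias_for("market.example")
```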

A well-designed PHC system protects dissidents. A poorly-designed one endangers them. The implementation details are everything.

Concern 4: Mission Creep

"It starts as 'prove you're human' and ends as 'prove your worthiness to participate.'"

This is the slippery slope argument, and it has some validity.

Our response: This is why the governance requirements matter. A PHC system should only prove humanness—not worthiness, not identity, not anything beyond the binary question "is there a real human behind this?" Any system that expands beyond this is no longer a personhood credential—it's an identity system or a reputation system, which are different things with different tradeoffs.
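
One way to make that boundary concrete is to bake it into the interface. The Python sketch below uses hypothetical names (PresentedCredential, verify_personhood, a stubbed signature check), not a real API; the point is that the return type is a single boolean, so there is no field for identity, scores, or worthiness to creep into.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PresentedCredential:
    token: bytes      # identity-free, issuer-signed token
    signature: bytes  # issuer signature over the token

def check_issuer_signature(token: bytes, signature: bytes) -> bool:
    # Stub standing in for real signature verification
    # (see the Ed25519 sketch earlier in the post).
    return len(signature) == 64

def verify_personhood(cred: PresentedCredential) -> bool:
    """Answer exactly one question: is there a real human behind this?

    The return type is a bare bool on purpose. A richer return type
    (scores, tiers, attributes) is the first step of the mission creep
    the critics warn about.
    """
    return check_issuer_signature(cred.token, cred.signature)
```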

Where not.bot Stands

We take these criticisms seriously because they're serious. Here's how our design addresses each concern:

  • Government control: decentralized; no government partnership; user-controlled credentials
  • Exclusion: requires only a smartphone; no permanent address needed; ongoing accessibility work
  • Weaponization: multiparty computation; unlinkable aliases; we can't unmask users
  • Mission creep: binary proof of humanness only; no reputation, no worthiness tests

We're not building a surveillance system that happens to prove humanness. We're building a privacy system that happens to verify humanness.

The order matters.

What the Research Gets Right

Both the advocates and critics are responding to the same underlying reality: the internet is facing a trust crisis, and current solutions aren't working.

The researchers are right that:

  • AI impersonation is a real and growing threat
  • Detection-based approaches are failing
  • We need new categories of verification
  • Privacy-preserving approaches are preferable

The critics are right that:

  • Implementation details matter enormously
  • Poorly-designed systems could enable surveillance
  • Governance and accountability are critical
  • Vulnerable populations must be considered

The question isn't whether to build personhood credentials. It's how to build them correctly.

The Path Forward

The academic community is converging on a set of principles for responsible PHC development:

  1. Privacy by default — Zero-knowledge proofs, not identity exposure
  2. User control — Credentials controlled by users, not issuers
  3. Multiple issuers — No single point of control or failure (sketched below)
  4. Voluntary adoption — Tools, not mandates
  5. Accessibility — Available to all, not just the privileged
  6. Transparency — Open standards, auditable implementations
  7. Accountability — Clear governance and recourse mechanisms
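
As an illustration of principle 3, here is a minimal sketch (reusing the illustrative Ed25519-signed tokens from the earlier sketch, again via the third-party cryptography package) of a verifier that trusts several independent issuers, so no single issuer is a point of control or failure.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Three independent issuers in the "marketplace of issuers".
issuers = [Ed25519PrivateKey.generate() for _ in range(3)]
trusted_keys: list[Ed25519PublicKey] = [k.public_key() for k in issuers]

def verify_any_issuer(token: bytes, sig: bytes) -> bool:
    """Accept a credential signed by ANY currently trusted issuer."""
    for key in trusted_keys:
        try:
            key.verify(sig, token)
            return True
        except InvalidSignature:
            continue
    return False

# A token from issuer #2 verifies; if issuer #2 is later compromised,
# removing its key revokes that issuer without breaking the others.
token = b"identity-free token"
assert verify_any_issuer(token, issuers[1].sign(token))
```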

These principles distinguish helpful verification systems from harmful surveillance systems.

We designed not.bot around these principles from the start, not because we read the papers first, but because building for privacy and user control led us to the same conclusions the researchers reached.

The academic debate over personhood credentials isn't about whether they're needed. It's about how to build them responsibly.

The critics raise real concerns. The researchers offer workable solutions. The key is an implementation that takes both seriously.

We built not.bot to be the answer that addresses the concerns and delivers the benefits.

Sources