What 81,000 People Want (According to the People Who Asked)

MARCH 21, 2026

Imagine a restaurant surveys its diners. The restaurant designs the questions. The restaurant's own waitstaff conducts the interviews. The restaurant's kitchen analyzes the responses. The restaurant publishes the results as a report titled "What 81,000 People Want From Food."

Would you read that as research? Or would you read it as marketing with a sample size?

On March 18, 2026, the Anthropic Institute published its first major research output: "What 81,000 People Want from AI." In December, Anthropic had invited Claude users to sit down with an AI interviewer — a version of Claude, prompted to conduct conversational interviews — and talk about how they use AI, what they hope it becomes, and what they're afraid it might do. Over 80,000 people participated across 159 countries and 70 languages. Anthropic is calling it the largest and most multilingual qualitative study ever conducted.

The findings are genuinely interesting. People want professional excellence, not just productivity. The number one concern isn't job loss or existential risk — it's reliability. Hope and alarm coexist within the same person, not across opposing camps. Somebody said Claude helped them get properly diagnosed after nine years of misdiagnosis. Somebody else said they got laid off because their company wanted to replace them with AI. Both of those things are real.

But the methodology is a closed loop.

The interviewer was Claude. A system designed to be warm, to build rapport, to keep conversations flowing. That's what makes Claude a good product. It's also what makes Claude a terrible research instrument. An interviewer's tone shapes what respondents say. Anyone who's taken a research methods course knows this. When the interviewer is specifically engineered to be agreeable and to maintain engagement, the responses will skew toward depth, openness, and emotional disclosure — which looks great in a report but tells you nothing about what a neutral instrument would have captured.

The classifiers were Claude. The same system that conducted the interviews also categorized the responses. There is no published external validation of the classifier methodology. No inter-rater reliability check against human coders. No independent audit of how categories were defined or how edge cases were resolved.
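
To be concrete about what's missing: an inter-rater reliability check is neither exotic nor expensive. Below is a minimal sketch in Python of the simplest version, Cohen's kappa between the model's category labels and an independent human coder's labels on the same excerpts. Everything in it is hypothetical, category names included; it illustrates the kind of number a methodology section could report, not anything about Anthropic's actual pipeline.

    # Minimal sketch of an inter-rater reliability check: Cohen's kappa between
    # a model's category labels and an independent human coder's labels on the
    # same subset of responses. All labels below are hypothetical placeholders.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Agreement between two raters, corrected for agreement expected by chance."""
        assert rater_a and len(rater_a) == len(rater_b), "need paired labels"
        n = len(rater_a)

        # Observed agreement: fraction of items given the same label by both raters.
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Chance agreement, from each rater's marginal label frequencies.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)

        if expected == 1.0:  # degenerate case: both raters used one identical label
            return 1.0
        return (observed - expected) / (1.0 - expected)

    if __name__ == "__main__":
        # Ten hypothetical interview excerpts, coded independently by model and human.
        model_labels = ["hope", "concern", "hope", "hope", "concern",
                        "hope", "hope", "concern", "hope", "hope"]
        human_labels = ["hope", "concern", "hope", "concern", "concern",
                        "hope", "hope", "hope", "hope", "hope"]
        print(f"kappa = {cohens_kappa(model_labels, human_labels):.2f}")

A few hundred human-coded responses, one statistic, one sentence in the methods. By the usual reading, values above roughly 0.6 suggest substantial agreement; reporting a number like this is the baseline for classification work of this kind, not an extraordinary burden.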

The sample was self-selected. These are Claude users who chose to sit for a voluntary interview about AI. The person who uses Claude once a month to check a recipe didn't take this interview. The person who tried Claude and went back to ChatGPT didn't take it either. 80,000 sounds enormous, but against what's probably tens of millions of accounts it works out to a response rate under one percent, drawn from the most engaged, most enthusiastic users. The study touts 159 countries and 70 languages, which sounds impressive, but a country count says nothing about distribution. If the majority of responses came from English-speaking tech workers in the US and Europe, the multilingual framing is decorative.

The publisher was the Anthropic Institute. Which was announced seven days earlier. Which has no independent board, no published charter, no editorial independence guarantee. Which is led by an Anthropic co-founder and funded entirely from Anthropic's operating budget.

So at every stage of the pipeline — design, instrument, data collection, classification, analysis, publication — the same entity controlled the process. At no point did an independent researcher, an external IRB, a third-party analysis firm, or a peer reviewer touch this study.

That's not how research works. That's how marketing works.

For comparison: when Microsoft publishes its Work Trend Index, it hires external survey firms to collect and analyze the data. When Pew runs AI attitude surveys, it is an independent organization with no product stake. When academic researchers study user attitudes toward technology, they go through IRB approval, use validated instruments, and submit to peer review before publication. These aren't arbitrary hoops. They exist because the entity studying something should not also be the entity that profits from the results.

None of this means the findings are fabricated. The quotes are probably real. The patterns probably exist. But "probably real" is not the standard for research — it's the standard for a blog post. And Anthropic is presenting this as research, through an institute it created to study AI's impact on society, one week after launching that institute in the middle of a legal war with the Pentagon.

The timing matters. The Anthropic Institute was announced March 11, two days after Anthropic sued the Department of Defense and one day after Microsoft filed a brief supporting them. The Institute's stated purpose is to study how AI reshapes society. Its first publication is a study showing that people love AI, are thoughtfully conflicted about it, and want more of it in their lives. Published during the week Anthropic most needs the public to see them as the thoughtful, user-centered AI company.

I'm not saying the study was designed to be a PR move. I'm saying there's no structural mechanism that would prevent it from being one. An independent research institution would have external review, methodological transparency, and published editorial independence. The Anthropic Institute has none of those things. Which means we have no way to distinguish between "this is what we found" and "this is what we wanted to find" — because the structures that would let us tell the difference don't exist.

Back to the restaurant. The diner survey might contain real feedback. The food might actually be good. Some customers might genuinely love the place. But when the restaurant publishes "What 81,000 People Want From Food" based on interviews conducted by its own waitstaff, analyzed by its own kitchen, and released by its own newly created research division — during the same week it's being sued for health code violations — you'd read it differently.

You'd read it as a restaurant trying to become the authority on what people want to eat.

That's what this is. Anthropic isn't just building AI. It's trying to become the definitive voice on what people want from AI, how people feel about AI, and how society should think about AI — using its own product as both the research tool and the subject of study, through an institute it controls entirely, published without external review.

It's not that the answers are wrong. It's that there's no one independent enough to check.