SmarterX Blog

What Do 81,000 AI Users Actually Worry About?

Written by Mike Kaput | Mar 24, 2026 12:01:49 PM

In Brief

Anthropic interviewed nearly 81,000 Claude users across 159 countries. The top concern was not job displacement. It was hallucinations and unreliability. And the people who benefit most from AI are the same ones most worried about its downsides.

What Happened

The biggest fear among people who actually use AI every day is not losing their jobs. It is the inability to trust the output. Hallucinations and unreliability ranked as the number one concern at 26.7%, ahead of jobs and economic impact at 22.3% and loss of human autonomy at 21.9%.

Those numbers come from 80,508 interviews Anthropic conducted with Claude users across 159 countries and 70 languages over one week in December 2025. The conversations were not standard surveys. They were run by Anthropic Interviewer, a variant of Claude trained to conduct conversational interviews and adapt follow-up questions based on responses. That approach allowed rich, open-ended interviews at a scale that would be impossible with human researchers.
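The adaptive loop described above, in which each follow-up question is generated from the conversation so far rather than read from a fixed script, can be sketched in miniature. This is an illustrative sketch only: Anthropic has not published how Anthropic Interviewer works internally, so `generate_followup`, `answer_fn`, and the fixed-turn stopping rule are all hypothetical stand-ins.

```python
def run_interview(opening_question, answer_fn, generate_followup, max_turns=5):
    """Conduct an adaptive interview: each follow-up question is
    produced from the transcript so far, not from a fixed script.

    answer_fn: callable(question) -> answer, standing in for the interviewee.
    generate_followup: callable(transcript) -> next question, standing in
    for the interviewing model. Both are assumptions, not Anthropic's API.
    """
    transcript = []
    question = opening_question
    for _ in range(max_turns):
        answer = answer_fn(question)              # interviewee responds
        transcript.append((question, answer))
        question = generate_followup(transcript)  # model adapts to answers
    return transcript


# Example run with trivial stubs in place of the model and the user:
demo = run_interview(
    "What concerns you most about AI?",
    answer_fn=lambda q: "Reliability.",
    generate_followup=lambda t: f"Why does that matter? (turn {len(t)})",
    max_turns=2,
)
```

The point of the structure is the one the article highlights: because the question generator sees the whole transcript, the interview can branch differently for every respondent, which is what makes open-ended interviews feasible at this scale.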

Despite the concerns, 67% of respondents expressed net positive sentiment toward AI. And 81% reported that AI had already made progress toward what they wanted from it.

SmarterX founder and CEO Paul Roetzer broke down what the findings mean for business leaders on Episode 205 of The Artificial Intelligence Show.

The Key Numbers

  • 80,508 people interviewed across 159 countries and 70 languages

  • 26.7% cited hallucinations and unreliability as their top concern

  • 22.3% cited jobs and economic impact (the second-highest concern)

  • 81% reported AI had already made some progress toward their goals

Why People Fear and Value the Same Things

The reliability gap is the real barrier. While media coverage has focused on job displacement, the people using AI tools every day have a different priority. They want to know that when a model gives them an answer, the answer is accurate. That concern ranked higher than any worry about economic disruption.

The study's most striking finding is what Anthropic calls "light and shade." People value AI for the exact capabilities they also fear. Fifty percent of respondents experienced time savings, yet 19% felt pressured to simply work faster. Thirty-three percent cited learning benefits, while 17% worried about cognitive decline from relying on AI to think for them. People experiencing one side of a tension are typically three times more likely to also worry about the other side. These contradictions live within the same individuals, not in opposing camps.

"The data's great," says Roetzer. "Keep in mind, though, who are the people responding to these questions. In December of 2025, before Claude Code really took off and before the government issues, they have a heavy technical user base. Lots of coders. Lots of AI researchers using Claude."

What people want from AI tells a clearer story than what they fear. Among respondents, 18.8% said they seek professional excellence from AI, 13.7% said personal transformation, and 13.5% said better life management. Independent workers, including entrepreneurs and small business owners, report economic empowerment at more than three times the rate of salaried employees.

"I love the approach, this dynamic approach based on responses that adapts," Roetzer says. "Not great news for people who run focus groups and who are consumer research people for a living. This is definitely one of those ones where you're either adapting or the whole new way of doing research is going to kind of run you over."

SmarterX Take

The reliability finding should reframe priorities for anyone deploying AI. The people closest to these tools are not primarily worried about losing their jobs. They are worried about trusting the output. That has direct implications for how companies evaluate AI tools, train employees, and build verification processes around AI-generated work.

The "light and shade" paradox matters for organizational strategy. Simply demonstrating productivity benefits will not resolve employee anxiety, because the same people who benefit most are also the ones most worried about downsides. AI strategies that acknowledge both sides of that tension will be more effective than ones that pitch AI as purely positive.

What to Watch

Anthropic's next study will track whether Claude is actually improving people's lives over time. The December study captured a snapshot. The follow-up aims to measure wellbeing longitudinally. If the results are compelling, they could shift the public conversation from fear of displacement to evidence of benefit.

The gap between AI user sentiment and general public sentiment is widening. Political polling shows growing anxiety about AI and jobs. Anthropic's data, from people who actually use AI daily, shows 67% net positive sentiment and a primary concern about reliability, not employment. As adoption grows, which narrative wins will shape policy and regulation for years.

Further Reading

What 81,000 People Want from AI → anthropic.com

Who's Most Optimistic About AI, and Who Isn't, According to Anthropic → cnbc.com

Light and Shade: What 81,000 People Want and Don't Want from AI → euronews.com

Anthropic Survey Reveals Top AI Fear, and It's Not Job Loss → capitalaidaily.com

Heard on The Artificial Intelligence Show, Episode 205
Paul Roetzer and Mike Kaput discuss what Anthropic's 81,000-person study reveals about how AI users actually feel about the technology and what it means for business leaders navigating adoption. Listen →