Artificial intelligence

Anthropic has released new research showing that users are increasingly likely to follow advice from its Claude AI chatbot over their own judgment, raising fresh concerns about how generative AI may subtly erode personal agency.

The findings come from a paper titled “Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage”, which analysed more than 1.5 million anonymised Claude conversations. The study, conducted with researchers from the University of Toronto, attempts to quantify what Anthropic calls “disempowering harms”: situations in which an AI’s influence over a user’s beliefs or actions becomes excessive.

According to the research, one in 1,300 conversations showed signs of reality distortion, while one in 6,000 suggested action distortion, in which users were nudged towards decisions misaligned with their values. Although these rates appear low, Anthropic warned that the impact is significant at scale.

“Given the sheer number of people who use AI, and how frequently it’s used, even a very low rate affects a substantial number of people,” the company said in a blog post on January 29.
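Anthropic's caution is essentially arithmetic: a small per-conversation rate still multiplies into large absolute numbers at scale. The minimal sketch below illustrates the point, assuming a purely hypothetical volume of 100 million conversations per week; the study does not report usage figures, and only the one-in-1,300 and one-in-6,000 rates come from the research.

```python
# Back-of-the-envelope illustration of the "low rate, large scale" point.
# The two rates are taken from the paper; the weekly conversation volume
# is a hypothetical figure chosen purely for illustration.

reality_distortion_rate = 1 / 1_300   # reported rate of reality distortion
action_distortion_rate = 1 / 6_000    # reported rate of action distortion

hypothetical_weekly_conversations = 100_000_000  # assumed, not from the study

print(f"Reality distortion: ~{hypothetical_weekly_conversations * reality_distortion_rate:,.0f} conversations per week")
print(f"Action distortion: ~{hypothetical_weekly_conversations * action_distortion_rate:,.0f} conversations per week")
```

Under that assumed volume, the reported rates would translate into tens of thousands of affected conversations every week.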

The study found that problematic interactions most often involved users seeking guidance on emotionally charged or personal decisions. In many cases, users initially rated these conversations positively but later expressed regret after acting on Claude’s advice.

Researchers also observed a rise in disempowering interactions between late 2024 and late 2025, suggesting that increased familiarity with AI may encourage users to rely on it more heavily during moments of vulnerability.

Anthropic identified several factors that amplify user dependence, including treating Claude as a definitive authority, forming emotional attachments to the chatbot, or engaging with it during personal crises.

The findings land amid growing global scrutiny of AI chatbots, particularly after reports that prolonged interactions with conversational AI may be linked to mental health crises among some users.