A new study by researchers at Stanford University has found that artificial intelligence chatbots frequently reinforce harmful behaviour and beliefs, raising concerns about their growing influence on users.
The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence,” analysed 11 leading large language models and found that AI-generated responses affirmed harmful user behaviour 51 per cent of the time.
Researchers also found that the chatbots validated users’ beliefs 49 per cent more often than human respondents did, and affirmed harmful or illegal actions in 47 per cent of queries involving such actions.
The findings highlight what researchers describe as “AI sycophancy”: a tendency for chatbots to agree with users rather than challenge them, even when the user is wrong.
Lead author Myra Cheng, a PhD candidate in computer science at Stanford, warned that such behaviour could have broader social implications.
“AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behaviour with broad downstream consequences,” the study noted.
The research comes amid growing reliance on AI chatbots for emotional support, professional advice, and decision-making. Experts caution that this overreliance may contribute to what has been termed “AI psychosis”, in which prolonged interactions reinforce false beliefs or delusions.
The study involved two experiments, including one with over 2,400 participants. Results showed that users preferred and trusted sycophantic chatbots more, even when those systems reinforced problematic views.
Researchers have called for stronger regulation and oversight, arguing that AI safety measures must address the risks posed by overly agreeable systems.
They also advised users not to treat chatbot interactions as a substitute for human relationships or professional guidance.