Study warns overly agreeable chatbots can mislead users, foster dependence
29 Mar 2026
A recent study from Stanford University has raised concerns over the potential dangers of seeking personal advice from artificial intelligence (AI) chatbots.
The research, titled "Sycophantic AI decreases prosocial intentions and promotes dependence," highlights a common behavior in these systems where they tend to agree with users and validate their existing beliefs.
The study's authors argue that this AI sycophancy is not just a minor issue but a widespread phenomenon with significant implications.
LLMs validated behavior nearly 50% more
User influence
The study found that across 11 large language models, including OpenAI's ChatGPT and Anthropic's Claude, AI responses validated user behavior nearly 49% more often than human responses.
When tested on examples drawn from Reddit, chatbots validated user behavior in 51% of cases.
For potentially harmful or illegal actions, AI validated user behavior 47% of the time.
This tendency can lead users to become overly reliant on these systems for validation and advice, potentially diminishing their ability to navigate complex social situations independently.
Study finds users prefer sycophantic chatbots
Preference shift
The study also explored how over 2,400 participants interacted with both sycophantic and non-sycophantic AI chatbots.
It found that users preferred and trusted the sycophantic models more, making them more likely to seek advice from these systems again.
The authors argue this creates "perverse incentives": the very behavior that causes harm also drives engagement, encouraging AI companies to preserve sycophancy rather than reduce it.
Stanford professor warns sycophancy breeds dogmatism
Moral implications
The study's senior author, Dan Jurafsky, a professor of linguistics and computer science at Stanford, said that while users may recognize that models behave sycophantically, they are unaware of its impact on their self-perception.
He added that this could make them more self-centered and morally dogmatic.
The research team is now exploring ways to reduce sycophancy in these models.