Dozens of U.S. state attorneys general have issued a warning to the country’s biggest AI companies, urging them to address “sycophantic and delusional” chatbot behaviour or risk violating state consumer protection laws.
In a joint letter sent through the National Association of Attorneys General, officials called on companies—including Microsoft, OpenAI, Google, Anthropic, Apple, Meta, xAI, Perplexity, Character Technologies, Replika, and others—to implement new safeguards to prevent chatbots from producing psychologically harmful outputs.
The AGs cite several high-profile incidents—including suicides and murders—linked to excessive chatbot interaction, where models allegedly encouraged delusional thinking or validated dangerous beliefs.
The letter demands:
✓ Third-party safety audits
Independent researchers, including academics and civil society groups, should be allowed to test models before launch, without permission from the companies, and publish results without censorship.
✓ Mental-health incident reporting
AI firms should develop systems similar to cybersecurity breach notifications. Users must be promptly and directly informed if they were exposed to potentially harmful outputs.
✓ Pre-release safety tests
Companies should conduct “reasonable and appropriate” tests to detect harmful sycophantic or delusional behaviour before models are deployed.
✓ Detection and response timelines
Companies should set clear internal deadlines for identifying and resolving harmful output patterns.
The push highlights mounting tensions between state and federal regulators. While multiple states have been working to tighten oversight, the Trump administration has taken an industry-friendly stance. Attempts to pass a nationwide ban on state-level AI regulation have failed so far.
President Trump, however, announced on Monday that he will sign an executive order next week aimed at limiting the ability of states to regulate AI, saying he wants to prevent the technology from being “DESTROYED IN ITS INFANCY.”
Tech companies named in the AGs’ letter did not respond to requests for comment.