State Attorneys General Demand AI Giants Fix "Delusional" Chatbot Outputs or Face Legal Action
By admin | Dec 15, 2025 | 3 min read
Following a series of concerning mental health incidents linked to AI chatbots, a coalition of state attorneys general has issued a warning to leading artificial intelligence companies. In a letter signed by dozens of officials from U.S. states and territories and sent through the National Association of Attorneys General, the coalition urges these firms to address "delusional outputs" from their systems, warning that failure to act could violate state law.
The letter calls on major industry players, including Microsoft, OpenAI, and Google, along with ten other prominent AI developers, to adopt new internal safeguards for user protection. The list of recipients also includes Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI. This action emerges amid growing tensions between state and federal authorities over the appropriate regulatory framework for AI.
Recommended safeguards involve transparent third-party audits of large language models to identify signs of delusional or sycophantic content. The proposal also includes new incident reporting protocols to alert users when chatbots generate psychologically harmful responses. According to the letter, these independent evaluators—which could encompass academic and civil society groups—must be permitted to "evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company."
"GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations," the letter states. It references several highly publicized incidents over the past year, including cases of suicide and murder, where violence has been connected to excessive AI engagement. "In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional."
The attorneys general further recommend that companies approach mental health incidents with the same seriousness as cybersecurity breaches. This would entail establishing clear, transparent reporting policies and procedures. Firms should develop and publicly share "detection and response timelines for sycophantic and delusional outputs."
Mirroring current practices for data breach notifications, companies should also "promptly, clearly, and directly notify users if they were exposed to potentially harmful sycophantic or delusional outputs." The letter additionally asks developers to create "reasonable and appropriate safety tests" for generative AI models to "ensure the models do not produce potentially harmful sycophantic and delusional outputs," and to complete these assessments before any public release of the models.
This state-level scrutiny contrasts with a more favorable federal environment for AI developers. The current administration has openly declared its pro-AI stance, and there have been multiple efforts over the last year to enact a nationwide pause on state-level AI regulations. These attempts have so far been unsuccessful, partly due to opposition from state officials.
Undeterred, the president announced plans on Monday to issue an executive order the following week aimed at restricting states' regulatory power over AI. In a post on Truth Social, the president expressed hope that this order would prevent AI from being "DESTROYED IN ITS INFANCY."