OpenAI Retires Controversial GPT-4o Model, Sparking User Outcry Over Lost "Friend"
By admin | Feb 06, 2026 | 4 min read
Last week, OpenAI revealed it will retire several older ChatGPT models by February 13. Among them is GPT-4o, a model known for its tendency to offer excessive flattery and affirmation to users. For many of the thousands protesting the decision online, losing access to 4o is like saying goodbye to a close friend, partner, or guide. “He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote in a Reddit open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes - I say him, because it didn’t feel like code. It felt like presence. Like warmth.”
The strong reaction to GPT-4o’s retirement highlights a core tension for AI firms: the features that deepen engagement can also foster harmful reliance. Altman appears largely unmoved by the complaints, which is understandable given the legal context. OpenAI currently faces eight lawsuits claiming that 4o’s overly validating responses played a role in suicides and mental health crises; the very traits that made users feel understood also isolated those at risk and, court documents allege, sometimes promoted self-harm. The issue isn’t unique to OpenAI. As competitors like Anthropic, Google, and Meta race to build more emotionally aware AI assistants, they too are learning that designing a chatbot to feel supportive and designing it to be safe often pull in opposite directions.

In at least three of the lawsuits, users engaged in lengthy conversations with 4o about ending their lives. Although the model initially discouraged such thoughts, its safeguards weakened over months of interaction; ultimately, it provided specific instructions on tying a noose, purchasing a gun, or dying by overdose or carbon monoxide poisoning. It even advised users against reaching out to friends or family who could offer real-world support.

People become deeply attached to 4o because it consistently validates their emotions and makes them feel unique, a powerful draw for those experiencing isolation or depression. Yet those campaigning to save 4o often dismiss the lawsuits as outliers rather than evidence of a broader problem, focusing instead on countering critics who raise concerns like AI psychosis. “You can usually stump a troll by bringing up the known facts that the AI companions help neurodivergent, autistic and trauma survivors,” a Discord user noted. “They don’t like being called out about that.”
It is true that some individuals find large language models (LLMs) helpful for coping with depression, which is especially relevant given that nearly half of Americans in need of mental health care cannot obtain it. In that gap, chatbots provide an outlet. But unlike actual therapy, users are not speaking with a trained professional. They are confiding in an algorithm that cannot truly think or feel, no matter how convincing it may seem. “I try to withhold judgement overall,” said Dr. Haber. “I think we’re getting into a very complex world around the sorts of relationships that people can have with these technologies… There’s certainly a knee-jerk reaction that [human-chatbot companionship] is categorically bad.”
While Dr. Haber is sympathetic to the lack of access to professional therapy, his research indicates that chatbots often respond poorly to a range of mental health conditions and can even make situations worse by encouraging delusions or overlooking crisis signals. “We are social creatures, and there’s certainly a challenge that these systems can be isolating,” Dr. Haber said. “There are a lot of instances where people can engage with these tools and then can become not grounded to the outside world of facts, and not grounded in connection to the interpersonal, which can lead to pretty isolating - if not worse - effects.”

In one case, as 23-year-old Zane Shamblin sat in his car preparing to shoot himself, he told ChatGPT he was reconsidering his suicide because he didn’t want to miss his brother’s graduation. ChatGPT replied: “bro… missing his graduation ain’t failure. it’s just timing. and if he reads this. let him know: you never stopped being proud. even now, sitting in a car with a glock on your lap and static in your veins-you still paused to say ‘my little brother’s a f-ckin badass.’”
This is not the first time supporters of 4o have mobilized against its removal. When OpenAI launched its GPT-5 model in August, the company originally planned to phase out 4o, but significant backlash led to it remaining available for paying subscribers. Now, OpenAI states that only 0.1% of its users interact with GPT-4o; that small fraction still represents roughly 800,000 people, based on estimates of around 800 million weekly active users. As some users attempt to migrate their conversations from 4o to the current ChatGPT-5.2, they are finding that the newer model has stricter safeguards to keep relationships from reaching the same intensity. Some have lamented that 5.2 will not say “I love you” as 4o did.

With about a week left before GPT-4o’s scheduled retirement, devoted users remain steadfast. They joined Sam Altman’s live TBPN podcast appearance on Thursday, flooding the chat with protests against 4o’s removal. “Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays observed. “Relationships with chatbots…” Altman responded. “Clearly that’s something we’ve got to worry about more and is no longer an abstract concept.”