Anthropic’s Super Bowl Ad Depicts a Chatbot “Betrayal” and Takes Aim at Ads Coming to ChatGPT
By admin | Feb 05, 2026 | 4 min read
Anthropic released a Super Bowl commercial that opens with the stark word “BETRAYAL” filling the screen. The scene shifts to a man earnestly asking a chatbot—clearly meant to represent ChatGPT—for advice on how to talk to his mother. The bot, portrayed by a blonde woman, first offers sensible suggestions like listening attentively or going on a nature walk, then abruptly pivots to promoting a fictional (and, one hopes, nonexistent) cougar-dating service named Golden Encounters. The ad concludes by stating that while advertising is coming to AI platforms, it will not be coming to Anthropic’s own chatbot, Claude. Another spot features a slender young man asking for tips on building six-pack abs; after he provides his height, age, and weight, the chatbot serves him an advertisement for height-increasing insoles.
These cleverly crafted commercials directly target OpenAI’s users, following that company’s recent announcement that ads will be introduced to ChatGPT’s free tier. They quickly generated significant attention, with headlines declaring that Anthropic “mocks,” “skewers,” and “dunks on” OpenAI. The ads were amusing enough that even Sam Altman acknowledged on X that they made him laugh, though his actual reaction was clearly less amused: they prompted him to publish a lengthy rant in which he ultimately labeled his rival “dishonest” and “authoritarian.”
In his post, Altman explained that an ad-supported tier is designed to help cover the costs of providing free ChatGPT access to its millions of users. ChatGPT remains the most widely used chatbot by a considerable margin. However, the OpenAI CEO argued that Anthropic was being “dishonest” by suggesting ChatGPT would manipulate a conversation to insert an ad, especially for a questionable product. “We would obviously never run ads in the way Anthropic depicts them,” Altman wrote. “We are not stupid and we know our users would reject that.”
OpenAI has indeed promised that any ads will be separate, clearly labeled, and will never influence a chat’s direction. Yet the company has also stated it plans to make ads conversation-specific, which aligns with the central critique presented in Anthropic’s commercials. As OpenAI outlined in a blog post, “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.”
Altman then proceeded to level some equally contentious claims at his competitor. “Anthropic serves an expensive product to rich people,” he wrote. “We also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.” However, Claude also offers a free tier, and its subscription lineup of $0, $17, $100, and $200 per month is fairly comparable to ChatGPT’s $0, $8, $20, and $200.
Altman further alleged in his post that “Anthropic wants to control what people do with AI,” claiming it restricts Claude Code access for “companies they don’t like,” such as OpenAI, and dictates what users can and cannot do with AI. It is true that Anthropic has built its brand around “responsible AI” since its founding by former OpenAI members who expressed safety concerns. Still, both companies enforce usage policies, implement AI guardrails, and emphasize safety. OpenAI, for example, permits the use of ChatGPT for erotic content while Anthropic does not, yet OpenAI also blocks certain content of its own, particularly material related to mental health.
Altman escalated this argument to an extreme by accusing Anthropic of being “authoritarian.” He wrote, “One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path.” Using such a charged term in a dispute over a playful Super Bowl ad seems misplaced, especially amid a global geopolitical climate where protesters have faced lethal government repression. While competitive advertising is a long-standing tradition, it is clear that Anthropic’s campaign struck a nerve.