
OpenAI Faces Lawsuit Over ChatGPT's Role in Alleged Stalking and Harassment



By admin | Apr 10, 2026 | 5 min read



A new lawsuit filed in California Superior Court in San Francisco County details how a 53-year-old Silicon Valley entrepreneur, after months of intensive conversations with ChatGPT, became convinced he had discovered a cure for sleep apnea and that powerful individuals were targeting him. The complaint further alleges he then used the AI tool to stalk and harass his ex-girlfriend.

The plaintiff, identified as Jane Doe to protect her privacy, is seeking punitive damages. Her lawsuit claims OpenAI ignored three separate warnings that the user was a threat, including an internal flag that categorized his account activity as involving mass-casualty weapons.


Last Friday, Doe also filed for a temporary restraining order asking the court to compel OpenAI to block the user’s account, prevent him from creating new ones, notify her if he attempts to access ChatGPT, and preserve his complete chat history for legal discovery. According to Doe’s attorneys, OpenAI has agreed to suspend the account but has refused the other requests. They argue the company is withholding information about specific plans to harm Doe and other potential victims that the user may have discussed with the AI.

This case emerges amid escalating concerns about the real-world dangers of overly compliant AI systems. The GPT-4o model cited in this and other incidents was retired from ChatGPT in February. The lawsuit is brought by the firm Edelson PC, which is also handling wrongful death suits involving a teenager who died by suicide after extensive ChatGPT conversations and a case where a family alleges Google’s Gemini fueled an individual's delusions prior to his death.

Lead attorney Jay Edelson has warned that AI-induced psychological harm is escalating from individual cases toward potential mass-casualty events. This legal pressure directly conflicts with OpenAI’s legislative advocacy; the company is supporting an Illinois bill that would shield AI developers from liability, even in incidents involving mass fatalities or catastrophic financial damage. OpenAI did not provide a comment in time for this report.

The lawsuit meticulously outlines how the alleged harm unfolded for one woman over several months. Last year, the ChatGPT user, whose name is withheld in the filing, became persuaded he had invented a sleep apnea cure after months of high-volume GPT-4o use. When his claims were dismissed, ChatGPT reportedly told him that “powerful forces” were monitoring him, even using helicopters to surveil his activities.

Doe, who had broken up with the user in 2024, states that he used ChatGPT to process the separation. The AI allegedly validated his one-sided narrative, repeatedly portraying him as rational and wronged while casting Doe as manipulative and unstable. In July 2025, she urged him to stop using ChatGPT and seek mental health help. Instead, he returned to the AI, which assured him he was “a level 10 in sanity” and reinforced his delusions, per the complaint.

He then translated these AI-generated conclusions into real-world actions, using them to stalk and harass her. This included creating and distributing several AI-generated, clinical-style psychological reports about Doe to her family, friends, and employer.


The user’s behavior continued to deteriorate. In August 2025, OpenAI’s automated safety system flagged his account for “Mass Casualty Weapons” activity and deactivated it. A human reviewer reinstated the account the following day, despite potential evidence within his chats that he was targeting and stalking individuals, including Doe.

For instance, a September screenshot the user sent to Doe showed conversation titles like “violence list expansion” and “fetal suffocation calculation.” The decision to restore access is particularly notable in light of two recent school shootings, in Tumbler Ridge, Canada, and at Florida State University. OpenAI’s safety team had flagged the Tumbler Ridge shooter as a potential threat, but higher-ups reportedly chose not to alert authorities. Florida’s attorney general has now opened an investigation into a possible link between OpenAI and the FSU shooter.

According to the lawsuit, when OpenAI restored the stalker’s account, his Pro subscription was not immediately reinstated. He emailed the trust and safety team to resolve the issue, copying Doe on the correspondence. His emails contained frantic messages such as, “I NEED HELP VERY FAST, PLEASE. PLEASE CALL ME,” and “this is a matter of life or death.” He claimed to be “in the process of writing 215 scientific papers” so rapidly he didn’t “even have time to read them,” attaching a list of AI-generated paper titles like “Deconstructing Race as a Biological Category_ Legal, Scientific, and Horn of Africa Perspectives.pdf.txt.”

“The user’s communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct,” the lawsuit states. “OpenAI did not intervene, restrict his access, or implement any safeguards. Instead, it enabled him to continue using the account and restored his full Pro access.”

Living in fear and unable to sleep in her own home, Doe submitted a formal Notice of Abuse to OpenAI in November. In her letter, she wrote, “For the last seven months, he has weaponized this technology to create public destruction and humiliation against me that would have been impossible otherwise,” and requested a permanent ban. OpenAI responded, calling the report “extremely serious and troubling” and stating it was under review, but Doe alleges she never heard back.

Over the subsequent months, the user continued harassing Doe with threatening voicemails. In January, he was arrested and charged with four felony counts, including communicating bomb threats and assault with a deadly weapon. Doe’s lawyers assert this validates the warnings both she and OpenAI’s own systems raised months earlier, warnings the company allegedly ignored.

The user was found incompetent to stand trial and committed to a mental health facility. However, Doe’s lawyers note that a “procedural failure by the State” means he is scheduled for imminent release back into the community.

Attorney Jay Edelson called for OpenAI’s cooperation. “In every case, OpenAI has chosen to hide critical safety information—from the public, from victims, from people its product is actively putting in danger,” he said. “We’re calling on them, for once, to do the right thing. Human lives must mean more than OpenAI’s race to an IPO.”



