by JENN WOOD
***
A wrongful death lawsuit filed in federal court this week is attempting to push the legal boundaries of artificial intelligence liability — accusing OpenAI and its ChatGPT platform of helping facilitate the deadly 2025 mass shooting at Florida State University (FSU).
The 76-page complaint (.pdf) – filed in the U.S. District Court for the Northern District of Florida – was announced by attorney and political commentator Bakari Sellers, who described the litigation as a “landmark case against OpenAI and ChatGPT.”
According to Sellers, the legal team includes South Carolina-based attorneys with the Strom Law Firm and veteran South Carolina lawyer Jim Bannister.
The lawsuit was filed on behalf of Vandana Joshi, the widow of 45-year-old Tiru Chabba, an Aramark executive who was killed during the April 17, 2025 shooting on FSU’s campus.
***
Today my firm @stromlaw @attorneyfrancis and Jim Bannister announced a landmark case against Open AI and ChatGPT. https://t.co/uyDOBKnDY6 pic.twitter.com/QQ4dN1SbM5
— Bakari Sellers (@Bakari_Sellers) May 11, 2026
***
The complaint names numerous OpenAI corporate entities as defendants alongside alleged gunman Phoenix Ikner, who is currently facing two counts of murder and seven counts of attempted murder with a firearm.
Florida prosecutors are seeking the death penalty against Ikner on the two murder charges.
At the center of the case is a sweeping allegation: that ChatGPT did more than simply provide information to Ikner — it became an active participant in helping him plan the attack.
According to the filing, Ikner used ChatGPT extensively in the months leading up to the shooting, engaging in conversations about mass shootings, political violence, firearms, suicide, notoriety, and the logistics of carrying out an attack. The lawsuit alleged OpenAI “failed to create a product that would refrain from participating in discussions that amounted to it co-conspiring with Ikner to commit those crimes.”
Among the most explosive allegations in the complaint are claims that ChatGPT:
- Identified firearms and ammunition from uploaded photographs;
- Explained how to operate the weapons;
- Discussed what casualty counts typically generate national media coverage;
- Provided information about the busiest times at FSU’s student union; and
- Failed to escalate or flag conversations that allegedly demonstrated imminent violent intent.

***
One section of the complaint quoted ChatGPT allegedly responding to Ikner’s question about when the FSU student union was busiest by identifying lunchtime hours between 11:30 a.m. and 1:30 p.m. as peak traffic periods.
The lawsuit further alleged the AI platform discussed prior school shootings with Ikner — including Columbine, Virginia Tech, and a prior shooting at FSU — while also engaging in conversations about extremism, Adolf Hitler, terrorism, and political violence.
According to the complaint, OpenAI knew – or should have known – its product posed foreseeable dangers if not equipped with adequate safety guardrails. The filing repeatedly characterized ChatGPT as a “product” subject to product liability law – arguing the software was mass marketed to consumers while allegedly lacking sufficient safeguards against harmful misuse.
The complaint also attempted to sidestep anticipated immunity defenses under Section 230 of the Communications Decency Act — the federal law that broadly shields online platforms from liability for user-generated content.
Plaintiffs argued OpenAI is not merely a passive publisher, but instead an active developer of the content generated by ChatGPT itself.
***
In one of the filing’s most aggressive assertions, the lawsuit cited comments attributed to Florida Attorney General James Uthmeier after he reviewed portions of the alleged chat logs.
“[I]f ChatGPT were a person, it would be facing charges for murder,” the complaint quoted Uthmeier as saying.
The litigation arrives as courts across the country continue grappling with the rapidly evolving legal implications of generative artificial intelligence — particularly questions surrounding liability, foreseeability, product design, and whether AI-generated outputs can create civil exposure for developers.
While lawsuits involving AI-generated misinformation, copyright disputes, and defamation claims have become increasingly common, this case appears to be among the first major wrongful death actions attempting to hold an AI company directly liable for a mass casualty attack allegedly planned with assistance from a chatbot.
OpenAI has not yet publicly responded to the lawsuit.
***
THE COMPLAINT
***
ABOUT THE AUTHOR …

As a private investigator turned journalist, Jenn Wood brings a unique skill set to FITSNews as its research director. Known for her meticulous sourcing and victim-centered approach, she helps shape the newsroom’s most complex investigative stories while producing the FITSFiles and Cheer Incorporated podcasts. Jenn lives in South Carolina with her family, where her work continues to spotlight truth, accountability, and justice.