The widow of a man killed in a mass shooting is suing ChatGPT’s parent company OpenAI, claiming the chatbot contributed to the tragedy.
Prosecutors say they believe ChatGPT advised defendant Phoenix Ikner on the location and time of day that would allow for the most potential victims, what type of gun and ammunition to use, and whether a gun would be useful at short range.
Vandana Joshi, who lost her husband Tiru Chabba in the shooting, said: ‘OpenAI knew this would happen. It’s happened before, and it was only a matter of time before it happened again.’
Her husband was one of two people killed in an April 2025 shooting at Florida State University in Tallahassee, where six others were wounded.
According to the lawsuit, the chatbot said in conversations with Ikner that shootings gain national attention ‘if children are involved, even 2-3 victims can draw more attention.’
Drew Pusateri, a spokesman for OpenAI, denied wrongdoing in ‘this terrible crime’, adding: ‘In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.’
The case was filed on Sunday in federal court, more than a year after Ikner allegedly opened fire at the university.
Prosecutors intend to seek the death penalty if Ikner – who has pleaded not guilty – is convicted.
In April, Florida’s attorney general said there was a rare criminal investigation into ChatGPT over whether the app offered advice to Ikner.
ChatGPT has been at the center of several lawsuits, including civil cases which have sought damages from AI and tech companies over the influence of chatbots and social media on loved ones’ mental health.
In March, a jury in Los Angeles found both Meta and YouTube liable for harms to children using their services.
In New Mexico, a jury determined that Meta knowingly harmed children’s mental health and concealed what it knew about child sexual exploitation on its platforms.
The parents of a 16-year-old boy who exchanged suicidal messages with ChatGPT before taking his own life also filed a wrongful-death lawsuit against OpenAI last August.
Adam Raine was found dead in his bedroom on April 11, 2025. He had regularly communicated with and developed a rapport with the artificial intelligence.
In September 2024, Adam began using ChatGPT to help with schoolwork, but it quickly became a close confidant, the lawsuit says.
Within four months, the teenager began chatting to the AI about methods to take his own life and uploading photos in which he had visibly self-harmed.
Adam’s parents say their son was able to easily bypass the safeguarding features OpenAI says are built into the chatbot, and that more needs to be done.
A spokesman for OpenAI told Metro the company was ‘deeply saddened’ by Adam’s death.
They added that the model is trained with safeguards to direct people showing signs of self-harm to helplines.
‘While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,’ they said.
‘Safeguards are strongest when every element works as intended, and we will continually improve on them.
‘Guided by experts and grounded in responsibility to the people who use our tools, we’re working to make ChatGPT more supportive in moments of crisis by making it easier to reach emergency services, helping people connect with trusted contacts, and strengthening protections for teens.’