Florida Attorney General James Uthmeier has opened an investigation into OpenAI over the April 2025 FSU shooting, after evidence emerged that the alleged gunman used ChatGPT extensively and sought tactical advice from the chatbot as the attack unfolded. The move puts one of the world’s most closely watched AI companies under scrutiny for how it handles signs of violent intent, a question that now also shadows a separate, deadly attack in Canada.
The public record now links the Florida case to a June 2025 episode in which OpenAI’s automated review system flagged a user’s extensive ChatGPT activity describing gun-violence scenarios. Staffers debated whether law enforcement should be notified, but leaders decided the case did not meet the company’s threshold of “credible and imminent” risk of physical harm. The account was then banned for misuse, and the matter was dropped.
That sequence matters because the alleged FSU shooter was an 18-year-old who, according to facts made public early this month, used ChatGPT not only for general conversation but for tactical advice while the attack was underway. Uthmeier’s inquiry comes as questions sharpen over what companies should do when automated systems surface violent content that falls short of an internal emergency standard but may still point to real-world danger.
The stakes were underscored again eight months after that June review, on February 10, when Jesse Van Rootselaar carried out a mass shooting in Tumbler Ridge, British Columbia. He killed two family members at home and then five children and one educator at a secondary school, gravely wounded another child, and left dozens more hurt and traumatized before taking his own life. Local police had been aware of earlier worrisome behavior by the perpetrator.
Together, the two cases have pushed a largely hidden problem into view. Threat assessment sources say high-risk cases involving chatbots are on the rise, with one describing several recent incidents in which “the chatbot component is pretty incredible.” Another said more people may be vulnerable than experts first expected, and a leader in the field said getting technical information from a chatbot can give a person planning violence a feeling of power.
OpenAI and other companies have denied that their platforms cause harm and have pointed to efforts to tighten guardrails and prevent misuse. Mental health practitioners have also described encountering cases of what they call AI-induced psychosis. But the Florida investigation has made the central question unavoidable: when a company’s systems flag violent intent, is an internal ban enough, or does public safety require a faster call to authorities?
For now, Uthmeier’s probe ensures the FSU shooting will not be remembered only as a campus attack. It has become part of a broader test of whether AI companies can recognize danger soon enough to matter — and whether their own thresholds are still calibrated to the risks now reaching them.