OpenAI ignored employee concerns over violent user: Report
What's the story
A recent report by The Wall Street Journal has shed light on internal disagreements at OpenAI over reporting violent users to law enforcement. According to the report, some employees have raised alarms about potentially dangerous chatbot users, but the company has often prioritized user privacy over public safety. This reluctance to intervene has already landed OpenAI in legal trouble.
Meeting details
Disagreements highlighted during an OpenAI meeting last summer
The report further reveals that these disagreements came to a head during an OpenAI meeting last summer, attended by employees from the investigations, operations, product policy, and legal teams. The group reportedly reviewed around 10 cases to help set criteria for referring users to law enforcement.
Policy debate
Legal team emphasized user privacy and warned against over-enforcement
During the meeting, employees from the investigations team pushed for more frequent reporting to authorities than the 15 to 30 cases typically referred each year. However, OpenAI's legal team and CEO Sam Altman emphasized user privacy and warned that over-enforcement could cause unintended harm, citing instances where police intervention could distress a young person and their family.
Employee concerns
Frustration among employees over reluctance to share cases with authorities
Some OpenAI employees have expressed their frustration over the company's apparent reluctance to share cases with authorities. One such case involved a high-school student in Tennessee suspected of using ChatGPT to plan a school shooting. While OpenAI did report this case, it did not do so for another teenager from Texas who was role-playing school shooting scenarios with the chatbot.
Legal issues
Tragic mass shooting case highlights the potential consequences
The report also highlights the case of Jesse Van Rootselaar, a user whose detailed descriptions of gun violence over several days made OpenAI employees uncomfortable. They interpreted his writings as a sign of potential real-world violence and advocated alerting law enforcement. However, OpenAI leaders chose not to contact authorities. Months later, in February 2026, Van Rootselaar allegedly carried out a mass shooting in Tumbler Ridge, British Columbia, killing eight people.
Lawsuits filed
Families of victims have since filed lawsuits against OpenAI
The families of the victims have since filed seven lawsuits against OpenAI, alleging wrongful death, negligence, and aiding and abetting the shooting. In response, Altman issued a formal apology for not alerting law enforcement earlier. He acknowledged that while words could never make up for the harm caused, an apology was necessary to recognize the irreversible loss suffered by the community.