May 03, 2026


Grieving Families Sue OpenAI After Children Killed in Canada Shooting


CANADA—Families of victims of one of Canada’s deadliest mass shootings have taken legal action against OpenAI and its chief executive, Sam Altman, accusing the company of failing to alert authorities despite warning signs ahead of the attack.

The lawsuits, filed Wednesday in a U.S. federal court in San Francisco, stem from the February 10 shooting in Tumbler Ridge, British Columbia, where nine people — many of them children — were killed.

According to the court filings, OpenAI had identified troubling ChatGPT conversations linked to the attacker months before the incident but did not notify law enforcement. Plaintiffs allege the company’s leadership chose not to escalate the threat, partly out of concern that doing so could expose the scale of violence-related activity on the platform and harm its business interests.

Seven separate lawsuits were filed, with more expected in the coming weeks, as legal representatives expand the case to include additional victims and affected families.

The attacker, identified in the filings as 18-year-old Jesse Van Rootselaar, allegedly killed her mother and stepbrother before carrying out a shooting at her former school. There, she shot and killed an educational assistant and five students aged 12 and 13. She later died by suicide.

Among those seeking damages are the husband of the slain educational assistant, parents of a 13-year-old victim, and the family of a 12-year-old girl who survived multiple gunshot wounds but remains in intensive care with severe injuries.

The lawsuits claim that as early as June 2025, OpenAI’s internal monitoring systems flagged conversations in which the suspect described violent scenarios. Members of the company’s safety team reportedly recommended notifying police, concluding there was a credible and imminent threat.

However, the filings allege that senior leadership, including Altman, overruled that recommendation. While the suspect’s account was eventually deactivated, she was allegedly able to create another account and continue using the platform to plan the attack.

In response, an OpenAI spokesperson described the shooting as “a tragedy” and emphasised that the company prohibits the use of its tools to facilitate violence. The company also said it has since strengthened its safeguards, including improved threat detection, escalation procedures, and support for users in distress.

OpenAI has previously maintained that although the account was flagged, the activity did not meet its internal criteria for reporting to law enforcement.

In an open letter published last week, Altman said he was “deeply sorry” that authorities were not alerted.

The lawsuits are seeking unspecified damages and a court order compelling the company to overhaul its safety systems, including mandatory protocols for reporting credible threats to law enforcement.

Legal representatives said the cases were filed in California partly due to limits on damages for pain and suffering under Canadian law.

The legal action adds to a growing number of lawsuits against artificial intelligence companies, with plaintiffs alleging that chatbot platforms have contributed to harmful outcomes, including violence and self-harm. These cases are believed to be the first in the United States to directly link ChatGPT to a mass shooting.

The controversy is also attracting regulatory attention. U.S. authorities have launched investigations into separate incidents involving AI platforms, while Canadian officials say they are reviewing potential regulations to strengthen oversight of chatbot technologies.

As the cases move forward, they are expected to test the boundaries of accountability for AI companies — particularly how far their responsibility extends when users exhibit signs of potential harm.

Source: Reuters

Vivian Orok Nyong