Canada Issues Stern Ultimatum on OpenAI safety measures, Threatens Law After B.C. School Shooting

OTTAWA, Ontario — Canadian ministers warned OpenAI on Wednesday, Feb. 25, 2026, that its safety measures must be strengthened quickly or the federal government will legislate new requirements for AI chatbots, after a mass shooting at a British Columbia secondary school raised questions about how the company handles warning signs of violence. The warning followed talks with OpenAI’s safety team after the company said it did not alert police when it banned the suspected shooter’s ChatGPT account months before the Feb. 10 attack in Tumbler Ridge, British Columbia, federal officials said.

The Royal Canadian Mounted Police said nine people, including the shooter, died in the Tumbler Ridge shootings, and two victims remained in serious condition after being airlifted to hospital.

What Canada wants from OpenAI’s safety measures

Federal officials say voluntary guardrails are no longer enough. In comments reported by Reuters, Justice Minister Sean Fraser said Ottawa expects changes “very quickly,” adding that if they are not delivered, the government will “be making changes” through legislation.

Prime Minister Mark Carney also weighed in, saying the government will explore options “to the full lengths of the law” to prevent future tragedies.

Artificial Intelligence Minister Evan Solomon has demanded a clearer explanation of how OpenAI decides when troubling content becomes a reportable public-safety risk. Ministers have said they want to see measurable safeguards — not just assurances — when credible warning signs point to serious violence.

Officials have framed the issue as broader than one company, signaling that any new requirements would likely apply across major AI platforms operating in Canada.

How the B.C. school shooting put OpenAI’s safety measures under scrutiny

OpenAI has said the suspect’s account was banned in June 2025 after being flagged by automated systems for activity tied to violence. The company has said it considered contacting law enforcement but decided the information did not meet its internal threshold for referral at the time.

In a Feb. 11 update, the RCMP added that about 25 people were assessed for possible injuries during the evacuation.

That timeline — a ban months before a deadly attack, followed by a post-shooting notification — has sharpened debate over whether OpenAI’s safety measures are designed to detect and escalate early warning signs of real-world harm, or whether the bar for intervention is set too high.

What OpenAI, Ottawa and British Columbia have said

According to The Associated Press, Solomon summoned OpenAI representatives to Ottawa to explain the company’s protocols and how it decides whether to forward cases to law enforcement. British Columbia Premier David Eby has argued that earlier reporting might have prevented the deaths.

For additional context on how the case unfolded inside the company, The Verge reported that the suspect’s 2025 chats triggered an internal review and were debated by staff, but OpenAI ultimately concluded it had not identified credible or imminent planning.

Ministers said they expected OpenAI to arrive with “concrete solutions.” But coverage by CityNews, which cited The Canadian Press, said federal officials expressed disappointment that no substantial new plan was presented and that OpenAI said it would return with updated safety measures.

Regulation has been building for years

The Tumbler Ridge tragedy has made the debate more urgent, but the push to tighten rules around online harms predates this case. In 2024, the federal government introduced the Online Harms Act — a plan to create a new framework for how platforms reduce exposure to certain categories of harmful content — according to a Government of Canada backgrounder.

Internationally, lawmakers have also moved toward binding rules. The European Parliament approved the European Union’s Artificial Intelligence Act in 2024, describing a risk-based approach with added obligations for powerful general-purpose systems in the Parliament’s AI Act press release.

Canadian officials now say the question is not whether OpenAI’s safety measures exist, but whether they are effective enough — and whether they should be backed by enforceable standards that apply consistently across AI companies operating in Canada.

What comes next for OpenAI’s safety measures in Canada

OpenAI has said it will provide Ottawa with an update on additional steps it is taking. Ministers have not released draft legislative language, but they have framed the issue as a public-safety expectation: stronger safety measures, faster escalation of credible warning signs and clearer accountability when threats involve schools or children.

The RCMP has said the investigation remains active and that investigators are collecting and reviewing digital evidence as they work to determine the full circumstances behind the killings.

Whether the outcome is a negotiated safety plan, new federal legislation or both, Ottawa has made clear it wants changes that can be measured. In the weeks ahead, the spotlight will remain on what stronger safeguards look like — and how quickly they can be implemented before another warning sign is missed.