OpenAI outlines steps to boost safety measures in response to Canada school shooting


Workers install a fence around a makeshift memorial for the victims two days after a deadly mass shooting took place at a school in the town of Tumbler Ridge, British Columbia, Canada, February 12, 2026. REUTERS/Jennifer Gauthier

TORONTO, Feb 26 (Reuters) - OpenAI said on Thursday it will set up a direct point of contact with Canadian law enforcement and improve detection of repeat violators of its "violent activities" policy to boost safety protocols in the wake of a recent school shooting.

The ChatGPT maker detailed the steps in a letter to Canada's minister in charge of artificial intelligence, Evan Solomon.

Ann O’Leary, OpenAI’s vice president of global policy, wrote the letter after Canadian ministers this week urged the ChatGPT maker to boost its safety protocols quickly and warned Ottawa would effect change through legislation if the company did not.

"We remain committed to cooperating with law enforcement authorities on the investigation into the Tumbler Ridge tragedy, and we are committed to an ongoing partnership with federal and provincial governments," O’Leary said, referring to the town in British Columbia where the shooting occurred.

Ottawa is reviewing OpenAI's letter and will comment in coming days, a spokesperson for Minister Solomon said.

Canadian ministers summoned OpenAI's safety team for talks this week after the company said it had not contacted police about a banned account belonging to the alleged shooter, Jesse Van Rootselaar.

Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life in Tumbler Ridge. OpenAI said it banned her ChatGPT account last year for policy violations.

The company said the account was flagged by systems that identify "misuses of our models in furtherance of violent activities" but did not provide further details. OpenAI said the issues did not meet its internal criteria for reporting to law enforcement.

O'Leary said on Thursday that under the company's "enhanced law enforcement referral protocol," it would have referred the initial account ban in June to police if it were discovered now.

She also said the company had discovered that Van Rootselaar had used a second account, details of which it shared with law enforcement.

"We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest-risk offenders," O'Leary said.

The company also committed to periodically assessing the thresholds used by its automated systems for identifying potential violent activities by users.

Crime experts have noted that while greater scrutiny of AI platforms and social media is necessary, police or other authorities may have missed additional chances to avert one of Canada's worst mass killings.

Police said Van Rootselaar had a history of mental health problems and that they had removed guns from her home and later returned them.

(Reporting by Ryan Patrick Jones, Bhargav Acharya and Ismail Shakil; Editing by Caroline Stauffer, Cynthia Osterman and Tom Hogue)
