TORONTO, Feb 26 (Reuters) - OpenAI said on Thursday it will set up a direct point of contact with Canadian law enforcement and improve detection of repeat violators of its "violent activities" policy to boost safety protocols in the wake of a recent school shooting.
The ChatGPT maker detailed the steps in a letter to Canada's minister in charge of artificial intelligence, Evan Solomon.
Ann O’Leary, OpenAI’s vice president of global policy, wrote the letter after Canadian ministers this week urged the ChatGPT maker to boost its safety protocols quickly and warned Ottawa would effect change through legislation if the company did not.
"We remain committed to cooperating with law enforcement authorities on the investigation into the Tumbler Ridge tragedy, and we are committed to an ongoing partnership with federal and provincial governments," O’Leary said, referring to the town in British Columbia where the shooting occurred.
Ottawa is reviewing OpenAI's letter and will comment in coming days, a spokesperson for Minister Solomon said.
Canadian ministers summoned OpenAI's safety team for talks this week after the company said it had not contacted police about an account belonging to the alleged shooter, Jesse Van Rootselaar, that it had banned.
Van Rootselaar, 18, is suspected of killing eight people on February 10 before taking her own life in Tumbler Ridge. OpenAI said it banned her ChatGPT account last year for policy violations.
The company said the account was flagged by systems that identify "misuses of our models in furtherance of violent activities" but did not provide further details. OpenAI said the issues did not meet its internal criteria for reporting to law enforcement.
O'Leary said on Thursday that under the company's "enhanced law enforcement referral protocol," it would have referred the initial account ban in June to police had the protocol been in place at the time.
She also said the company had discovered that Van Rootselaar had used a second account, which it shared with law enforcement.
"We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest-risk offenders," O'Leary said.
The company also committed to periodically assessing the thresholds used by its automated systems for identifying potential violent activities by users.
Crime experts have noted that while greater scrutiny of AI platforms and social media is necessary, police or other authorities may also have missed chances to avert one of Canada's worst mass killings.
Police said Van Rootselaar had a history of mental health problems and that they had removed guns from her home and later returned them.
(Reporting by Ryan Patrick Jones, Bhargav Acharya and Ismail Shakil; Editing by Caroline Stauffer, Cynthia Osterman and Tom Hogue)
