Microsoft staffer warns regulators about harmful AI content
A Microsoft engineer sounded an alarm on March 6, 2024, about offensive and harmful imagery that he says is too easily created by the company's artificial intelligence image-generator tool. — AP

A Microsoft Corp software engineer has sent letters to the company's board, lawmakers and the US Federal Trade Commission warning that the tech giant is not doing enough to safeguard its AI image-generation tool, Copilot Designer, from creating abusive and violent content.

Shane Jones said he discovered a security vulnerability in OpenAI’s latest DALL-E image generator model that allowed him to bypass guardrails that prevent the tool from creating harmful images. The DALL-E model is embedded in many of Microsoft’s AI tools, including Copilot Designer.

