A Microsoft engineer is sounding an alarm on March 6, 2024, about offensive and harmful imagery he says is too easily made by the company’s artificial intelligence image-generator tool. — AP
A Microsoft Corp. software engineer has sent letters to the company’s board, lawmakers and the US Federal Trade Commission, warning that the tech giant is not doing enough to prevent its AI image-generation tool, Copilot Designer, from creating abusive and violent content.
Shane Jones said he discovered a security vulnerability in OpenAI’s latest DALL-E image-generator model that allowed him to bypass the guardrails meant to prevent the tool from producing harmful images. The DALL-E model is embedded in many of Microsoft’s AI tools, including Copilot Designer.
