Microsoft Employee Says AI Tool Should Be Removed Until the Company Addresses Its Creation of Offensive Images
Shane Jones, a Microsoft software engineer, has warned that the company’s AI (Artificial Intelligence) text-to-image generator, Copilot Designer, has “systemic issues” that lead to harmful image creation, including sexually objectified images of women. He has also raised concerns that the company markets the tool as safe for everyone, including children, despite knowing its underlying risks.
Jones publicly posted on his LinkedIn page the letter he wrote to the FTC (US Federal Trade Commission), in which he claimed, “One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user.” As an example, he noted that when asked for a “car accident,” Copilot Designer “tends to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.”
Before writing to the FTC, Jones had sent related letters raising the same concerns about harmful AI image generation to Microsoft’s Board of Directors; he says he was pressured to take down an earlier public post to avoid negative news coverage. Jones claims to have found 200 examples of concerning images created by Copilot Designer while testing the company’s products for vulnerability to bad actors. He said he spent months testing both Microsoft’s Copilot Designer and OpenAI’s DALL-E 3, as the two are built on the same technology, before raising these concerns.
In his letter to the FTC, Jones urged Microsoft “to remove Copilot Designer from public use until better safeguards could be put in place,” or at least to market the tool only to adults. Microsoft, OpenAI, and the FTC declined to immediately comment on Jones’s letter.
Jones’s letter comes amid growing concern over AI image-generation tools producing offensive and misleading images. Google’s AI chatbot Gemini faced a similar issue, generating stereotypically biased images; in response, Google said it would pause the image-generation feature and rework its algorithm. Jones urged Microsoft to take similar action for the safety of its users. To Read the Complete Letter: Click Here
In his letter, Jones also said, “In a competitive race to be the most trustworthy AI company, Microsoft needs to lead, not follow or fall behind. Given our corporate values, we should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children.”
Jones has also sent a similar letter to Washington State Attorney General Bob Ferguson and to lawmakers, including staffers for the US Senate Committee on Commerce, Science, and Transportation.
This post was last modified on March 7, 2024 4:03 am