Microsoft Engineer Raises Concerns
Late-night testing by Shane Jones, a Microsoft artificial-intelligence engineer, has surfaced disturbing images generated by Microsoft’s Copilot Designer, the AI image generator powered by OpenAI’s technology. Jones, who actively red-teams the product, found content that contradicts Microsoft’s responsible-AI principles, including demons, monsters, and explicit scenes involving sensitive topics such as abortion rights, underage drinking, and drug use.
Internal Reports Ignored
Alarmed by his findings, Jones began reporting his concerns internally in December. Although Microsoft acknowledged the reports, the company declined to withdraw the product from the market. Feeling compelled to address the issue publicly, Jones published an open letter on LinkedIn urging OpenAI to investigate. Microsoft’s legal department requested that he remove the post, prompting him to escalate the matter to U.S. senators and later to the Federal Trade Commission.
Calls for Action
In recent letters to Federal Trade Commission Chair Lina Khan and Microsoft’s board of directors, Jones demands that Copilot Designer be withdrawn from public use until stronger safeguards are in place. He also calls for clearer product disclosures and a change to the app’s rating on Google’s Android store, emphasizing the risks the AI model poses to general audiences.
Public Outcry Amidst Industry Concerns
Jones joins a growing debate over generative AI, which has fueled a surge in deepfakes and concerns about misinformation, especially ahead of crucial elections around the world. Although Microsoft’s Copilot team receives more than 1,000 product-feedback messages daily, Jones asserts that the team addresses only the most severe issues, leaving many potential risks unexplored.
Controversial Content and Copyright Concerns
Microsoft’s Copilot Designer, which carries an “E for Everyone” app rating, continues to generate images deemed inappropriate and harmful. Jones highlights concerns about political bias, religious stereotypes, and conspiracy theories. Copyright is also at issue: Copilot generates images featuring Disney characters, potentially breaching both the law and Microsoft’s own policies.
Addressing the Crisis
In response, Microsoft emphasizes its commitment to addressing employee concerns and improving the safety of its technology. Jones counters that employees lack effective channels for reporting the widespread dissemination of harmful images. As debate over generative AI’s impact intensifies, the industry faces the challenge of establishing robust safeguards and guardrails against misuse.
The controversy surrounding Microsoft’s Copilot Designer underscores the pressing need for comprehensive AI model oversight, with potential implications for the broader deployment of generative AI in various industries.