Microsoft thwarted attempt to expose DALL-E 3 vulnerabilities, says whistleblower
Issues were reported in December 2023


Jan 31, 2024
03:01 pm

What's the story

A Microsoft engineer named Shane Jones has raised concerns about security flaws in OpenAI's DALL-E 3 that he says could let users create violent or explicit images, like the recent Taylor Swift deepfakes. Jones says Microsoft's legal team stopped him from warning the public, so he is reaching out to US political figures, including Senators Patty Murray and Maria Cantwell, Representative Adam Smith, and Washington state Attorney General Bob Ferguson.

Timeline

Discovery of exploit and attempts to go public

In early December 2023, Jones found a way to bypass DALL-E 3's security measures. When he informed his bosses at Microsoft, they told him to report it to OpenAI. He then tried to spread the word on LinkedIn, asking OpenAI's non-profit board to suspend DALL-E 3 access, but Microsoft's legal team allegedly ordered him to delete the post "immediately without waiting for the email from legal." He says he never received an official explanation.

Response

OpenAI and Microsoft respond to whistleblower's claims

OpenAI says it looked into Jones' claims and found that his method doesn't actually get around its safety systems, pointing to its multi-layered approach to security, which includes filtering explicit content and adding extra safeguards. A Microsoft spokesperson encouraged Jones to use OpenAI's standard reporting channels and confirmed that his techniques didn't bypass the safety filters of the company's AI-powered image generation tools.

Usage

Microsoft Designer used in Taylor Swift's explicit deepfake

Earlier this week, 404 Media reported that Microsoft Designer, which uses DALL-E 3 as a backend, was part of the toolset used to make the explicit Taylor Swift deepfakes. The publication believes Microsoft patched that loophole after being notified. However, it is unclear whether the exploits reported by Jones are related to the one used to make the Swift deepfakes.

Safety

Whistleblower calls for government action on AI vulnerabilities

Jones thinks the Taylor Swift deepfakes show what could happen if similar security issues aren't addressed. He is asking lawmakers in Washington, DC, to create a system for reporting and tracking AI vulnerabilities while protecting employees who speak up. Jones stresses the importance of holding companies accountable for product safety and being transparent about known risks, saying that concerned employees shouldn't be silenced.