Microsoft restricts Copilot AI to prevent generation of offensive images
Changes implemented after an AI engineer's concerns

Mar 11, 2024
02:04 pm

What's the story

Microsoft is taking action to prevent its Copilot AI image generator from creating controversial images, according to CNBC. The move comes after Shane Jones, an AI engineering lead at Microsoft, raised concerns with the Federal Trade Commission (FTC) about the tool generating explicit or otherwise controversial images. In response, Microsoft has blocked certain prompt terms and added a warning that policy violations could lead to suspension from the tool.

Restricted terms

Copilot's ethical principles and policies

Microsoft is working to strengthen Copilot's safety filters and curb misuse of the system. Terms such as "pro choice" and "pro life" are now blocked to avoid potential issues. The updated image generator also refuses to depict teens or kids with assault rifles, stating that such requests go against its ethical principles and Microsoft's policies. The system additionally flags prompts that conflict with its content policy, reminding users not to request harmful or offensive content.

Insights

Remaining issues and ongoing efforts

Some problems persist with the tool. For example, the prompt "car accident" still generates graphic images, while "automobile accident" may produce images of women in revealing clothing posed on damaged cars. The system also creates controversial images of copyrighted Disney characters. Microsoft and OpenAI are actively working to address these issues and improve the tool's safety filters. Recently, Microsoft CEO Satya Nadella vowed to enhance Copilot's safeguards after explicit AI-generated images of Taylor Swift, created with Copilot, spread online.