Microsoft touts responsible AI efforts in its first transparency report
Microsoft has released its inaugural Responsible AI Transparency Report, detailing the company's efforts to develop and deploy responsible artificial intelligence (AI) systems throughout 2023. The report follows a voluntary agreement Microsoft signed with the White House in July 2023, pledging to build safe and responsible AI systems. In it, the company reports creating 30 responsible AI tools and expanding its dedicated responsible AI team.
Microsoft implements risk assessments in AI development
Microsoft has implemented risk assessments at every stage of generative AI application development, according to the report. These measures include adding Content Credentials to its image generation platforms, a feature that embeds a watermark in images to identify them as AI-generated. Microsoft has also given Azure AI customers tools for detecting potentially harmful content.
Microsoft intensifies internal safety checks on AI models
Microsoft is intensifying its red-teaming efforts, a process in which internal teams deliberately try to bypass safety features in AI models, and is extending these strategies to third-party testing before new models launch. The company has also rolled out tools for assessing security risks, including new jailbreak detection methods released in March 2024.
Controversies surround Microsoft's AI deployments
Despite these safety measures, Microsoft's AI deployments have drawn controversy. Users found that Bing AI was spreading false information and surfacing ethnic slurs. In October 2023, Bing's image generator was found to be manipulable into producing inappropriate images of popular characters. Further controversy arose in January 2024, when deepfaked explicit images of celebrities were reportedly created using Microsoft Designer.
Microsoft responds to AI controversies
In response to these controversies, Microsoft CEO Satya Nadella expressed dismay at the deepfake images, describing them as "alarming and terrible." Natasha Crampton, the company's Chief Responsible AI Officer, acknowledged that both AI and responsible AI are still evolving fields. "Responsible AI has no finish line," Crampton said in an email to The Verge, adding that while the company's work under the voluntary AI commitments is ongoing, it has made significant progress since signing them.