UK, Microsoft partner to build world-first deepfake detection system
Framework will set clear expectations for industries on detection standards


Feb 05, 2026, 03:03 pm

What's the story

The UK government has announced a partnership with Microsoft, academic institutions, and industry experts to develop a system for detecting deepfake content online. The move comes as part of the country's efforts to set standards for combating harmful and misleading AI-generated content. The initiative is particularly relevant given the rise of generative AI chatbots like ChatGPT and Grok, which have raised concerns over the scale and realism of deepfakes.

Detection framework

Deepfakes being weaponized by criminals, says Kendall

The UK government is working on a deepfake detection evaluation framework to set consistent standards for assessing detection tools and technologies. Technology Minister Liz Kendall said in a statement, "Deepfakes are being weaponized by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear." The framework will assess how technology can be used to evaluate, understand, and detect harmful deepfake materials regardless of their source.

Testing methods

Framework will set clear expectations for industries on detection standards

The framework will be used to test deepfake detection technologies against real-world threats such as sexual abuse, fraud, and impersonation. This will give the government and law enforcement a clearer picture of existing gaps in detection capabilities. The framework will also set clear expectations for industries on deepfake detection standards.


Rising concern

Deepfake numbers surged in 2025, raising alarm

According to government figures, an estimated 8 million deepfakes were shared in 2025, a massive jump from 500,000 in 2023. The rapid spread of such manipulated media has prompted governments and regulators worldwide to take action. The urgency was further heightened by the discovery that Elon Musk's Grok chatbot could generate non-consensual sexualized images of people, including children.
