
OpenAI flags Chinese operatives misusing ChatGPT for mass surveillance
What's the story
OpenAI has flagged the misuse of its AI chatbot, ChatGPT, by suspected Chinese government operatives. The company said these users were trying to build tools for large-scale monitoring of data collected from social media platforms. One such user was banned after trying to use ChatGPT to create promotional materials and project plans for an AI-powered social media listening tool meant for a government client.
Surveillance technology
'Probe' to monitor extremist speech, political content
The tool, dubbed a social media "probe," was designed to scour platforms like X, Facebook, Instagram, Reddit, TikTok, and YouTube for specific extremist speech as well as ethnic, religious, and political content. Another account, also suspected of links to a government entity, was banned after using ChatGPT to draft a proposal for a "High-Risk Uyghur-Related Inflow Warning Model." The model would cross-reference transport bookings with police records to flag travel by members of the Uyghur community.
Company stance
OpenAI's models not officially available in China
OpenAI observed that some of these activities appeared aimed at enabling large-scale monitoring of online or offline traffic, and stressed the need for continued vigilance against possible authoritarian abuses of AI. Notably, OpenAI's models aren't officially available in China; the company suspects these users accessed its services through a VPN.
Cybersecurity breach
Russian hackers using ChatGPT to create malware
OpenAI also reported that Russian hackers have misused its AI models to create and refine malware, including a remote access trojan and credential stealers. The company noted that persistent threat actors appear to have adapted their workflows to mask some of the more recognizable indicators of AI-generated content. However, it found no evidence of genuinely new tactics, or that its models gave threat actors novel offensive capabilities.
Usage statistics
ChatGPT used to identify scams more than to create them
Despite these misuse cases, OpenAI found that ChatGPT is used to help people identify scams far more often than threat actors misuse it to create them, by an estimated factor of up to three. Since it began public threat reporting in February 2024, OpenAI has disrupted and reported over 40 networks that violated its usage policies.