Malicious actors from U.S. foreign adversaries used ChatGPT jointly with other AI models to conduct various cyber operations, according to a new OpenAI report. Users linked to China and Russia relied on OpenAI’s technology in conjunction with models such as China’s DeepSeek to execute phishing campaigns and covert influence operations, the report found.
China-Linked Accounts Target Multiple Languages
A cluster of accounts that showed signs consistent with Chinese government intelligence efforts used the AI model to generate content for campaigns in multiple languages, in addition to developing tools and malware. This group also looked at using DeepSeek to automate this process, analyzing online content to produce a list of email targets and create material that would likely appeal to them. OpenAI banned the accounts but noted it could not confirm whether they ultimately used automation with other systems to achieve their aims.
The threat actors appeared to use AI models strategically, combining ChatGPT with other platforms to extend their reach. OpenAI said it has increasingly disrupted actors who use multiple AI systems in tandem, a pattern that complicates detection. The Chinese cluster's ability to generate content across several languages also raised concerns about the potential scale of its operations.
Russian Influence Operation Expands Digital Footprint
Another cluster of accounts based in Russia used ChatGPT to develop scripts, SEO-optimized descriptions and hashtags, translations and prompts for generating news-style videos with other AI models. The activity appears to be part of a Russian influence operation previously identified, which posted AI-generated content across websites and social media platforms, the report noted. Its latest content criticized France and the U.S. for their role in Africa while praising Russia.
The accounts, now banned by OpenAI, also produced content critical of Ukraine and its supporters. Despite the operation's reliance on a combination of automated tools to produce and spread disinformation, OpenAI found that the content gained little traction, with engagement remaining minimal across digital channels.
Surveillance Proposals Reveal Authoritarian Ambitions
OpenAI separately noted in the report that it banned several accounts seemingly linked to the Chinese government that sought to use ChatGPT to develop proposals for large-scale monitoring, such as tracking social media or movements. “While these uses appear to have been individual rather than institutional, they provide a rare snapshot into the broader world of authoritarian abuses of AI,” the company wrote.
The uses offer a concerning glimpse into how foreign powers might deploy AI for surveillance. Though the proposals appear exploratory rather than implemented at scale, they provide intelligence about how some state-linked actors view AI as a tool for monitoring citizens and suppressing dissent.
OpenAI Disrupts Evolving Threat Landscape
OpenAI’s findings indicate that threat actors are adapting their strategies, moving beyond reliance on a single platform. Malicious entities increasingly use multiple AI systems in conjunction to evade detection and maximize impact, which makes identifying and disrupting these networks more complex.
OpenAI continues to monitor for signs of coordinated abuse across its systems. The company says its ability to detect consistent patterns of malicious behavior has improved, though actors constantly adapt their tactics. As AI models become more accessible, preventing misuse will demand sustained vigilance and collaboration between tech companies and intelligence agencies to counter cyber threats orchestrated by state-backed entities.