CRC Weekly: Cyber-based hostile influence campaigns 6th - 12th October 2025
- CRC
- Oct 17
- 8 min read

[Introduction]
Cyber-based hostile influence campaigns aim to sway target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.
During the past week we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Review highlights]
Russia is weaponizing grief by using AI to create deepfake "resurrections" of fallen soldiers, turning personal tragedy into state propaganda. – CyberNews
A Russian influence campaign generated 200,000 social media mentions overnight, creating "informational chaos" to deflect blame for a drone incursion. – Le Monde
Chinese chatbots are being used for espionage, harvesting user data for microtargeted propaganda aimed at sensitive groups such as military personnel. – Politico
A Chinese influence campaign using fake social media accounts and a pseudo-local media outlet to undermine the US-Philippine alliance was uncovered. – Reuters
The UK’s new national security adviser met with a group that the U.S. State Department has labeled a "malign" part of Beijing’s foreign influence network. – The Telegraph
An AI-enabled influence operation, synchronized with military strikes, used deepfake videos and impersonated media outlets to incite revolt in Iran. – Citizen Lab
Chinese and Russian state media launched coordinated campaigns to frame Taiwan's president as a provocateur, distorting his calls for deterrence. – DisinfoWatch
The U.S. has dismantled key defenses, such as the Foreign Malign Influence Center, creating a vacuum exploited by adversaries. – The Washington Post
TikTok’s algorithm has enabled manipulated videos and propaganda to spread rapidly across Africa, fueling pro-junta sentiment during recent coups. – LSE
[Week in Review]
AI-Generated "Ghosts": Russia's New Front in Digital Propaganda
The use of artificial intelligence in Russia to create propaganda from private grief is examined in an article from CyberNews. For a fee ranging from $35 to $60, families of deceased soldiers can commission AI-generated videos in which their loved ones appear to speak, embrace them, or ascend to heaven. These services, some of which reportedly handle hundreds of orders daily, produce deepfake clips that are then rapidly disseminated across Russian social media platforms, including Telegram and VKontakte. While these videos may provide a "balm effect" for grieving families, especially those unable to recover the bodies of soldiers, Ukrainian outlets like StopFake.org have warned against the manipulation of emotions inherent in such content. The practice represents a novel form of digital propaganda, turning personal mourning into a tool for reinforcing state narratives by creating a sanitized depiction of wartime loss.
Source: CyberNews ‘Russian AI resurrection videos turn grief into propaganda’ Available Online
How Russian Bot Networks Assaulted Czech Democracy Online
During the October parliamentary elections in the Czech Republic, Russia engaged in coordinated disinformation campaigns aimed at interfering with the democratic process. A report by EUvsDisinfo details how networks of TikTok bot accounts and pro-Russian websites saturated Czech online spaces with propaganda. These operations sought to portray Vladimir Putin in a positive light, legitimize the war in Ukraine, and amplify anti-Western and anti-establishment narratives. Investigations by Czech media found that these propaganda sites published more articles daily than the country’s most established news outlets. After the election, Russian state-controlled media continued to push misleading narratives, falsely claiming the results indicated a rejection of the EU. This digital interference campaign also included accusations from Kremlin-linked sources that the European Union was itself guilty of election interference, a common tactic of projecting blame onto adversaries.
Source: EUvsDisinfo ‘For the Kremlin, elections are golden opportunities for interference’ Available Online
A Digital Blitz: Russia's Combined Drone and Information Attack on Poland
Following a Russian drone incursion into Polish airspace, the country was targeted by an unprecedented and coordinated disinformation attack, as detailed in an article published by Le Monde. The operation aimed to generate "informational chaos" by saturating social media algorithms with false narratives at a massive scale, resulting in up to 200,000 mentions in one night. Primarily driven by coordinated Russian and Belarusian accounts on platforms like X and Facebook, the campaign sought to divert blame by portraying the incident as a Ukrainian provocation designed to draw NATO into the conflict. Simultaneously, it characterized the Polish military and NATO as "ineffective and powerless." Experts view this incident as a significant escalation in Russia’s hybrid war, demonstrating a new phase of information warfare. The influence operation's reach extended to France, Germany, and Romania, highlighting its regional scope and its strategic goal of eroding European support for Ukraine.
Source: Le Monde, ‘Poland hit by unprecedented disinformation attack following Russian drone incursion’ Available Online
Chinese-Developed Chatbots Leave User Information Vulnerable to Exploitation
China's substantial investment in artificial intelligence is fueling concerns that extend beyond economic competition into the realms of cyberwarfare, espionage, and disinformation. According to an article from Politico, Beijing’s integration of AI into state-linked hacking groups could amplify the scale and sophistication of cyberattacks on U.S. infrastructure. In parallel, Chinese-made chatbots present espionage risks by harvesting user data, which could be weaponized for tailored disinformation campaigns targeting sensitive sectors such as first responders or military personnel. Research indicates that leading Chinese chatbots, including DeepSeek, Baidu’s Ernie, and Alibaba’s Qwen, consistently produce content that aligns with Beijing’s political narratives, subtly reinforcing state messaging. Such platforms pose a risk of shaping public opinion, particularly as affordable Chinese AI services become more widespread in developing nations, creating new vectors for digital influence.
Source: Politico ‘Inside the Chinese AI threat to security’ Available Online
Beijing's Shadow Campaign to Fracture US-Philippine Alliance
A Chinese-funded Foreign Information Manipulation & Interference (FIMI) campaign in the Philippines was orchestrated to undermine local support for the country’s alliance with the United States. A Reuters investigation uncovered that the operation was managed by the marketing firm InfinitUs Marketing Solutions, which received direct funding from China’s embassy in Manila to "guide public opinion." The campaign utilized fake social media accounts posing as Filipinos to amplify pro-China and anti-American content, as well as a fabricated media outlet named Ni Hao Manila. These accounts spread misinformation regarding U.S. military cooperation, attacked Philippine lawmakers critical of China, and disseminated false narratives on other geopolitical issues. Philippine officials warned that such digital influence operations aim to make Manila "compliant" with Beijing’s strategic interests, highlighting the information war playing out in a region of significant geopolitical importance.
Source: Reuters ‘How China waged an infowar against U.S. interests in the Philippines’ Available Online
UK Security Adviser’s Past Meetings with China Influence Group Raise Concerns
Sir Keir Starmer’s new national security adviser, Jonathan Powell, is facing scrutiny over past meetings with a Chinese organization identified by U.S. intelligence as part of Beijing’s foreign influence network. A report in The Telegraph revealed that in March 2024, Powell met with the Chinese People’s Association for Friendship with Foreign Countries (CPAFFC), an organization the U.S. State Department has described as "malign." This group is linked to Chinese Communist Party efforts to co-opt global institutions and shape international narratives. U.S. officials have warned that CPAFFC and associated think tanks like the Grandview Institution are instrumental to China's "people-to-people" diplomacy, a strategy used to promote pro-Beijing messaging. Powell’s repeated visits to China and speaking engagements have fueled concerns that these exchanges may inadvertently legitimize entities associated with disinformation and political manipulation campaigns, coming at a time of heightened sensitivity over Chinese interference in the UK.
Source: The Telegraph ‘Powell met ‘malign’ Chinese group before joining Starmer’s team’ Available Online
AI-Augmented Influence Operation Targets Regime Change in Iran
A covert network known as PRISONBREAK has been executing an AI-enabled influence operation targeting Iranian audiences with calls for revolt and fabricated media. An analysis from Citizen Lab details how the campaign utilized over 50 inauthentic profiles on X to distribute deepfake video content and impersonate media outlets, aiming to stoke domestic unrest. The operation's digital activities appear to have been tightly synchronized with kinetic military actions, such as the June 2025 Evin Prison bombing, employing tactics of narrative seeding and amplification in real-time. While definitive attribution is challenging, Citizen Lab assesses that the operator is most likely an Israeli government agency or a contractor, citing the advanced knowledge of military operations and coordinated narrative timing. This case highlights the evolving threat of AI-augmented disinformation in geopolitical conflicts, demonstrating how digital influence campaigns now operate alongside traditional warfare.
Source: Citizen Lab ‘We Say You Want a Revolution: PRISONBREAK – An AI-Enabled Influence Operation Aimed at Overthrowing the Iranian Regime’ Available Online
China and Russia Coordinate False Narratives Against Taiwan
Chinese and Russian state media outlets have engaged in coordinated campaigns to distort the statements of Taiwanese President Lai Ching-te and portray Taiwan as a source of regional instability. According to DisinfoWatch, recent analysis shows that on October 8, 2025, China’s Global Times accused President Lai of "seeking independence through military means," a claim echoed by Russian state media. This narrative directly contradicted Lai’s actual remarks, which stressed deterrence and called on Beijing to renounce the use of force. The disinformation campaign also framed the People’s Liberation Army’s coercive military drills as a stabilizing measure. Furthermore, Beijing has manipulated international law by falsely equating its "One China" principle with UN Resolution 2758, which pertains to China’s UN seat but does not determine Taiwan’s sovereignty. These coordinated digital narratives represent a joint effort to isolate Taiwan and legitimize aggressive actions in the region.
Source: DisinfoWatch ‘Converging False PRC–Russian Narratives Target Taiwan and President Lai’ Available Online
United States Cedes Ground in the Global Information War
The United States has effectively "disarmed" in the information war, leaving it vulnerable to foreign disinformation from Russia, China, and Iran. According to The Washington Post, the dismantling of key defenses, such as the Foreign Malign Influence Center, has created a vacuum that adversaries have exploited by spreading fabricated content, including AI-generated images and videos. Analysts at NewsGuard identified thousands of social media posts from state-backed media that aimed to deepen polarization by circulating conflicting lies. The impact is measurable, with surveys showing that a third of Americans believe at least one significant Russian falsehood about Ukraine. The article notes that Russian disinformation networks, like the Pravda Network, have seeded millions of false stories, some of which are now being used to "infect" large AI models that subsequently repeat these lies as fact, amplifying their reach and perceived credibility.
Source: The Washington Post ‘How foreign nations are gaslighting Americans’ Available Online
TikTok's Ascendance in Africa Reshapes Media with Misinformation Risks
TikTok has rapidly become one of Africa’s most influential platforms for news consumption, bringing with it a significant surge in misinformation and political propaganda. A news piece by LSE describes how millions across the continent now rely on TikTok for information, while trust in traditional media outlets declines. The platform’s algorithms, designed to maximize engagement, enable manipulated videos and misleading content to achieve viral reach before they can be verified. This digital environment has had tangible real-world consequences, such as bolstering pro-junta sentiment during coups in Niger and Mali and fueling political division during elections in South Africa and Nigeria. While countermeasures are emerging, such as South Africa's partnership with TikTok’s election center and Ghana's fact-checking networks, the report concludes that combating disinformation on the platform will require stronger digital literacy, transparent moderation, and renewed investment in credible journalism.
Source: LSE ‘TikTok is becoming Africa’s newsroom’ Available Online
[Glossary]
The modern information environment is expected to keep growing in complexity and sophistication.
Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.