CRC Weekly: Cyber-Based Hostile Influence Campaigns, 20th–26th October 2025
- CRC

- Oct 29
Updated: Nov 12

[Introduction]
Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect.
During the last week we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.
[Contents]
[Report Highlights]
An American fugitive in Moscow is behind a network of 141 fake news sites powered by an AI programmed to insert bizarre and irrelevant praise for Vladimir Putin into unrelated articles. - NewsGuard
A strategy dubbed "LLM grooming" aims to manipulate AI chatbots by flooding the internet with pro-Kremlin content, effectively weaponizing the models to reproduce false narratives. - EUvsDisinfo
Posing as legitimate news agencies, covert Russian entities are expanding hybrid warfare in Africa by training local journalists and influencers to spread pro-Kremlin narratives. - European Council on Foreign Relations
To mask its own aggressive military expansion, a Russian information operation inverts reality by accusing Canada and NATO of militarizing the Arctic. - DisinfoWatch
China's hybrid influence campaigns in Europe combine soft-power tactics through cultural and academic channels with advanced AI-driven digital operations. - Taipei Times
Recognizing that information manipulation by fossil fuel interests is a primary obstacle to progress, the COP30 climate summit will make public trust a central issue for the first time. - Global Witness
An EU-funded "Digital Detectives" project is building a nationwide network in Uzbekistan by training local experts to equip journalists and fact-checkers with advanced verification skills. - EEAS
[The Week In Review]
Matryoshka Campaign Deploys Synthetic Media to Attack Journalism Credibility
The Russian Matryoshka network is impersonating reputable media organizations to spread fabricated stories and undermine trust in Western journalism. A report from NewsGuard details how the hostile influence campaign uses AI-generated videos and fake social media accounts to circulate false claims about political scandals in Germany and France. The videos have falsely attributed quotes to NewsGuard executives and presented entirely invented events, such as Germany suing the organization for exposing war preparations. Matryoshka’s strategy mirrors the very information manipulation tactics it accuses others of employing. Its content relies on AI voice-overs, manipulated footage, and fictitious experts, all designed to exploit real-world controversies, like France's 2023 bedbug panic, to insert Russian narratives into public discourse. The operation highlights a sophisticated use of synthetic media to attack the credibility of established news and research entities.
Source: NewsGuard, Why Russia Puts Words in NewsGuard’s Mouth, Available Online: https://www.newsguardrealitycheck.com/p/why-russia-puts-words-in-newsguards
Russia Trains Local Journalists to Spread Pro-Kremlin Narratives in Africa
Russia has intensified its hybrid warfare tactics in Africa, employing information operations to influence public opinion and destabilize regional politics. The Kremlin established entities like the Africa Corps and the Africa Initiative to bolster its presence and spread pro-Russian narratives across the continent. These operations involve training local journalists, influencers, and activists to disseminate content in multiple languages, including English, French, Arabic, and regional languages like Hausa and Swahili. A report by the European Council on Foreign Relations (ECFR) notes that the Africa Initiative operates covertly, posing as a news agency while engaging in information manipulation. The ECFR highlights the need for a coordinated European response, suggesting current anti-disinformation policies are ineffective. Recommendations include investing in local media and using platforms like WhatsApp to counteract hostile narratives, as Europe risks ceding influence to Russia in Africa's information ecosystem.
Source: European Council on Foreign Relations, The bear and the bot farm: Countering Russian hybrid warfare in Africa, Available Online: https://ecfr.eu/publication/the-bear-and-the-bot-farm-countering-russian-hybrid-warfare-in-africa/#recommendations
Russia Pushes False Arctic Narrative to Mask Its Own Military Expansion
Russian state media is amplifying a narrative that Canada and NATO are promoting "war rhetoric" in the Arctic, while portraying Russia as a peaceful actor. This information operation inverts reality, as Russia has aggressively expanded its military infrastructure in the region since 2021, whereas recent Canadian measures are defensive. The Kremlin uses tactics including selective omission, projection, and euphemism laundering to present its maximalist Arctic claims as benign while framing allied defensive actions as provocative. The campaign is amplified through Russian diplomatic channels, Telegram, and pro-Kremlin outlets, reflecting a broader strategic goal of weakening allied cohesion and chilling Canadian Arctic policy. A DisinfoWatch report notes that by framing Russia as restrained, the campaign seeks to normalize its jurisdictional ambitions and discourage deterrence investments, following a recurring Kremlin pattern of "peaceful Russia/militarizing NATO."
Source: DisinfoWatch, Russian MFA Accuses West and Canada of Militarizing The Arctic, Available Online: https://disinfowatch.org/disinfo/russian-mfa-accuses-west-and-canada-of-militarizing-the-arctic/
Pro-Kremlin Actors Use AI and Data Collection to Target Ukraine-EU Relations
Pro-Kremlin propagandists have intensified information operations aimed at undermining Ukraine-EU relations and demoralizing Ukrainians. According to a report by the Delegation of the European Union to Ukraine and the DARE Project, these campaigns use Telegram channels, Facebook groups, and fake news websites to spread false narratives. The fabricated stories include claims that the EU is "prolonging the war," accusations of aggressive policies toward Russia, and false stories about refugee conditions and child trade schemes. The report highlights that pro-Kremlin actors are using sophisticated strategies, including emotional manipulation, AI-generated visuals, and fake media outlets. Regional patterns revealed tailored falsehoods in Kherson, Donetsk, and Odesa, with claims about "combat moths" imported from the EU and the sale of cities to foreign interests. Some campaigns also collected personal data, illustrating a dual strategy of psychological influence and opportunistic exploitation.
Source: EEAS, Results of pro-Russian information manipulation and disinformation monitoring targeting Ukraine-EU relations during June – August, 2025, Available Online: https://www.eeas.europa.eu/delegations/ukraine/results-pro-russian-information-manipulation-and-disinformation-monitoring-targeting-ukraine-eu_en
Beijing Combines Cultural Diplomacy with AI-Driven Influence in Europe
Concerns are growing over Beijing's disinformation and hybrid influence campaigns across Europe, even as some nations distance themselves diplomatically. A recent Italian Senate conference highlighted how China continues to exert pressure through psychological manipulation, propaganda, and economic coercion, despite Italy’s 2023 withdrawal from the Belt and Road Initiative. As published by the Taipei Times, Chinese influence persists through academic and cultural channels, including Confucius Institutes and the suppression of performances by groups critical of the Chinese Communist Party. The digital dimension of these operations leverages platforms like DeepSeek and AI-driven tools to manipulate public perception and amplify state-controlled messaging. This technological aspect has raised alarms among European governments, which now view China's use of AI and data tracking as a severe national security threat, prompting new measures to strengthen democratic resilience and curb foreign manipulation.
Source: Taipei Times, EU facing increased interference from China, Available Online: https://www.taipeitimes.com/News/editorials/archives/2025/10/26/2003787875
American Fugitive in Moscow Runs AI-Powered Pro-Kremlin Fake News Network
John Mark Dougan, a former Florida deputy now based in Moscow, has become a key figure in Russia's digital influence operations, using a self-trained generative AI system to create large volumes of fake news. An investigation from NewsGuard identifies Dougan as part of the pro-Kremlin influence group Storm-1516. His recent campaign involves 141 French-language websites spreading Russian propaganda and false claims aimed at undermining Western democracies. A notable feature of the AI-generated articles is the consistent insertion of exaggerated and irrelevant praise for Russian President Vladimir Putin, regardless of the topic. Evidence from cybersecurity researchers suggests Dougan's AI is programmed with a pro-Russia, anti-West bias, even leaving behind visible AI prompts that instruct it on how to frame content. While Dougan denies responsibility, he has publicly boasted about receiving a Russian state honor for his "work in the information sphere."
Source: NewsGuard, Russian AI Sites Can’t Stop Gushing About Putin, Available Online: https://www.newsguardtech.com/special-reports/ai-driven-john-mark-dougan-pro-kremlin-disinformation-campaign/
Russia Engages in 'LLM Grooming' to Manipulate AI Chatbots
Russia has shifted its information warfare tactics to target artificial intelligence, deliberately manipulating large language models (LLMs) through a strategy known as "LLM grooming." This involves flooding the internet with millions of low-quality articles and content from pro-Kremlin websites, including the Pravda network, to ensure AI chatbots reproduce false narratives. The goal is to weaponize AI to spread misleading information, such as fabricated claims about Ukraine's President Zelenskyy. According to analysis by EUvsDisinfo, the campaigns involve multiple actors, including Russian state media, pro-Kremlin influencers, and offshoots of the Internet Research Agency. The broader significance lies in the Kremlin's ability to shape digital information ecosystems, erode trust in AI-generated knowledge, and amplify global security risks as automated disinformation becomes harder to detect and counter, threatening the integrity of online fact-finding.
Source: EUvsDisinfo, Large language models: the new battlefield of Russian information warfare, Available Online: https://euvsdisinfo.eu/large-language-models-the-new-battlefield-of-russian-information-warfare/
Climate Action Hindered by Coordinated Disinformation and Greenwashing Campaigns
Information manipulation has become one of the most significant obstacles to meaningful climate action, as fossil fuel companies and their allies use influence campaigns to cast doubt on climate science and delay policy responses. These tactics range from outright denial to more insidious strategies like greenwashing, where polluters portray themselves as environmentally responsible while expanding fossil fuel production. Social media algorithms amplify such content, rewarding polarization over accuracy. The growing recognition of this threat has pushed information integrity into the spotlight, with COP30 set to make public trust a central issue for the first time. A Global Witness article states that while informing people of the fossil fuel industry's deception can increase support for accountability, Big Tech's failure to curb falsehoods continues to erode public understanding. Experts now call for stronger oversight and education, arguing that defending information integrity is inseparable from defending the planet.
Source: Global Witness, What does information integrity have to do with climate?, Available Online: https://globalwitness.org/en/campaigns/digital-threats/what-does-information-integrity-have-to-do-with-climate/
EU-Funded 'Digital Detectives' Initiative Trains Uzbek Journalists to Counter Falsehoods
A new initiative in Uzbekistan, the "Digital Detectives" project, aims to strengthen the country's defenses against disinformation and promote media literacy. Funded by the European Union and implemented by the Modern Journalism Development Centre, the project has launched its first Training of Trainers session in Tashkent to establish a nationwide network of experts. These trainers will assist journalists and fact-checkers across Uzbekistan in identifying and countering false information more effectively. As published by the EEAS, participants explored key fact-checking strategies, including promise tracking, detecting fake news, and utilizing digital verification tools such as the Wayback Machine. They also discussed the importance of storytelling as a method for strengthening credibility and public trust. By empowering local media professionals, the project represents a proactive effort to create a more resilient information environment and safeguard the public sphere against manipulation.
Source: EEAS, “Digital Detectives” Project Launches First Training of Trainers on Fact-Checking in Uzbekistan, Available Online: https://www.eeas.europa.eu/delegations/uzbekistan/%E2%80%9Cdigital-detectives%E2%80%9D-project-launches-first-training-trainers-fact-checking-uzbekistan_en
Europe's Counter-Disinformation Efforts Face External Threats and Internal Resistance
Europe's battle against information manipulation has reached a critical turning point, as new and complex challenges undermine progress. Foreign Information Manipulation and Interference (FIMI), fueled by geopolitical conflicts and hybrid warfare, continues to expand, while generative AI has lowered the barriers for malicious actors to produce large-scale propaganda. At the same time, the fight against disinformation is facing growing internal resistance, with some nationalist movements portraying counter-disinformation efforts as censorship, thereby weakening institutional trust. A recent article from the EU Disinfo Lab notes that major digital platforms have also reversed some commitments to content moderation, allowing false narratives to spread more easily. This has created a dual threat from external state-backed propaganda and domestic disengagement. The report concludes that Europe's resilience depends on enforcing regulations, empowering civil society, and achieving strategic digital autonomy.
Source: EU Disinfo Lab, Documenting the setbacks: The new environment for counter-disinformation in Europe and Germany, Available Online: https://www.disinfo.eu/publications/documenting-the-setbacks-the-new-environment-for-counter-disinformation-in-europe-and-germany/
[CRC Glossary]
The modern information environment is projected only to grow in sophistication and complexity. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.