CRC Weekly: Cyber-based hostile influence campaigns 17th-23rd November 2025
- CRC

[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.
During the last week we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.
[Contents]
Russia Shifts Strategic Budget to Prioritize Digital Information Operations
Pro-Russian Cyberattacks Target Danish Political Websites During Election Period
Russia Incorporating Cartoons and Animation in African Narrative Warfare Strategy
Generative AI Utilized to Fabricate Ukrainian Surrender Narratives
Multiplying Inauthentic Networks Signal Rising Chinese Interference Risk to South Korea
Generative AI Facilitates High-Volume, Low-Quality Influence Operations
Artificial Intelligence Behind Trend for Personalized Influence Campaigns
Political Instability in Bulgaria Amplifies Vulnerability to Hostile Narratives
Strategic Use of AI Suggested to Counter Venezuelan Regime Narratives
Crowdsourced Context on Platforms Significantly Reduces Virality of Misleading Content
LLMs Accelerate Convergence of Cybercrime and Information Threats
[Report Highlights]
Moscow shifts strategy, boosting state media funding by 54 percent to prioritize information warfare over conventional military spending. - Jamestown Foundation
Pro-Russian hackers target Danish political party websites with DDoS attacks to manufacture confusion and disrupt the ongoing election period. - Danish Agency for Social Security
Moscow weaponizes animated cartoons and historical grievances to bypass literacy barriers and reshape political alliances across the African continent. - EU vs. Disinfo
Kremlin-linked actors exploit OpenAI’s Sora 2 to generate realistic but fabricated footage of Ukrainian soldiers surrendering on the battlefield. - NewsGuard
The multiplication of Chinese-linked sockpuppet accounts marks early signs of interference operations targeting South Korean elections. - Doublethink Lab
High-volume "AI slop" floods social platforms, prioritizing quantity over sophistication in modern influence operations. - Graphika
Artificial intelligence transforms information warfare into a precision weapon, enabling actors to deploy millions of personalized, microtargeted falsehoods simultaneously. - Foreign Affairs
Prolonged political paralysis and high news avoidance in Bulgaria create a fertile environment for foreign and domestic hostile influence campaigns. - Disinfo Lab
Experts suggest leveraging agentic AI to penetrate Venezuela’s closed information environment and deliver precision-targeted narratives to counter the regime. - CSIS
Study finds crowd-sourced corrections on X significantly reduce misinformation by leveraging reputational pressure to prompt deletions. - University of Rochester
The AI arms race enables malicious actors to deploy automated "swarms" and sophisticated cybercrime with minimal expertise. - Oxford Internet Institute
[Weekly Review]
1. Russia Shifts Strategic Budget to Prioritize Digital Information Operations
According to an article by the Jamestown Foundation, the Russian state is significantly reallocating resources from conventional military spending toward a comprehensive information warfare strategy, evidenced by a 54 percent increase in state media funding within its 2026 draft budget. The Kremlin is operationalizing an expansive propaganda ecosystem that integrates traditional state media and diplomatic channels with advanced digital tactics, including AI-generated content and coordinated online influence operations. These efforts are designed to shape the perceptions of domestic, Ukrainian, and Western audiences simultaneously. Domestically, the narrative warfare focuses on projecting military success and economic stability while denying war crimes.
Externally, Kremlin-backed platforms are engaged in a hostile influence campaign (HIC) aimed at intimidating Western governments and undermining support for Ukraine. The analysis notes that these hybrid warfare tactics are becoming increasingly sophisticated, utilizing AI-generated fake videos and cloned social media accounts to erode Ukrainian morale. This strategic pivot highlights Russia's reliance on non-kinetic methods to offset conventional military stagnation, aiming to destabilize Western unity and manipulate public opinion through a synchronized combination of kinetic strikes and digital information operations.
Source: Jamestown Foundation, 'Kremlin Shifts Focus to Information Warfare'
Available Online: https://jamestown.org/kremlin-shifts-focus-to-information-warfare/
2. Pro-Russian Cyberattacks Target Danish Political Websites During Election Period
Several political party websites in Denmark were temporarily disabled following a series of Distributed Denial-of-Service (DDoS) attacks on November 17. An official statement by the Danish Agency for Social Security links this disruptive activity to a pro-Russian hacker group, noting that while the attacks did not compromise sensitive data or election systems, they serve as a tactic to generate confusion.
Authorities warn that these Cyfluence operations aim to amplify misleading narratives about stability during the municipal and regional election period. The timing of the disruption aligns with elevated threat assessments predicting that foreign actors would attempt to shape public perception by hindering access to political information. Although DDoS attacks are a recurring feature of the national cyber landscape, this specific campaign underscores how cyber threats are integrated into broader hostile influence campaigns.
Officials emphasize that attack methods are continuously evolving, necessitating ongoing adaptations in cyber defenses to protect democratic processes and counter attempts to manipulate public confidence through digital interference.
Source: The Danish Agency for Social Security, 'Several political party websites hit by DDoS attacks'
Available Online: https://samsik.dk/artikler/2025/11/flere-partiers-hjemmesider-ramt-af-ddos-angreb/
3. Russia Incorporating Cartoons and Animation in African Narrative Warfare Strategy
Russian influence actors are increasingly exploiting historical grievances and anti-Western sentiment to reshape public perception in Africa, as detailed in an article by EU vs. Disinfo. Through projects such as TruthAfrica, analysts have documented a coordinated effort to disseminate narratives that position Moscow as a defender of traditional values while framing Western nations as neo-colonial aggressors. A central component of this hostile influence campaign is the deployment of memetic warfare tactics, specifically the use of animation and easily shareable media, to circumvent literacy barriers and linguistic differences across the continent.
Cartoons that glorify Russian-linked paramilitary groups and vilify Western actors have emerged as potent vehicles for this propaganda. This content is designed to shift blame for regional instability away from jihadist groups and toward Western partners, thereby increasing local acceptance of Russian private military companies. The article highlights that these techniques are not limited to the African information environment; similar animated propaganda targeting Ukraine demonstrates that these digital methods are part of a global strategy. By leveraging these accessible media formats, Russian operators aim to effectively manipulate the information ecosystem and secure long-term geopolitical advantages.
Source: EU vs. Disinfo, 'A Battle for Hearts and Minds: How Russian Propaganda Takes Over Africa'
Available Online: https://euvsdisinfo.eu/a-battle-for-hearts-and-minds-how-russian-propaganda-takes-over-africa/
4. Generative AI Utilized to Fabricate Ukrainian Surrender Narratives
Kremlin-linked propagandists are increasingly leveraging advanced generative AI tools to produce fabricated video content depicting Ukrainian soldiers surrendering. As revealed in a NewsGuard report, actors have utilized OpenAI's Sora 2 text-to-video technology to create viral clips intended to erode trust in the Ukrainian Armed Forces during critical combat phases. Despite the realistic visual quality of this synthetic propaganda, experts have identified clear inconsistencies, including incorrect uniforms, distorted facial features, and mispronounced geographic names, confirming the footage is inauthentic.
The investigation identified a specific cluster of videos circulating in November 2025, many of which retained traces of the AI tool’s watermarks. These clips were disseminated by anonymous accounts previously associated with pro-Kremlin influence operations, indicating a structured effort to amplify false narratives. Ukraine’s Center for Countering Disinformation has similarly flagged this activity as part of a coordinated network targeting foreign audiences. The campaign highlights the growing reliance on AI-generated media to manipulate perceptions of battlefield reality and demoralize target populations.
Source: NewsGuard, 'Kremlin Propagandists Weaponize OpenAI's Sora 2'
5. Multiplying Inauthentic Networks Signal Rising Chinese Interference Risk to South Korea
As published by Doublethink Lab, rising concerns regarding Chinese interference have triggered right-wing protests in South Korea, despite court rulings debunking claims of direct election manipulation. However, the publication confirms that Foreign Information Manipulation and Interference (FIMI) by actors linked to the People’s Republic of China is occurring, albeit on a limited scale. Investigations have uncovered evidence of fake Korean-language news websites and of coordinated comment manipulation on major digital platforms, both designed to influence online discourse. A specific tactic involves the use of fabricated social media personas, often sockpuppet accounts posing as young Korean women, which engage with pro-China content and amplify anti-U.S. narratives.
While these networks have not yet executed large-scale coordinated campaigns, their presence represents a "canary in the coal mine" for future operations. The report suggests that while current reach is low, these subtle digital activities fit established patterns of PRC-aligned influence. Strengthening information literacy and monitoring for inauthentic behavior are cited as critical measures to prevent these nascent operations from escalating into broader societal destabilization.
Source: Doublethink Lab, 'Canaries in the Coal Mine: Cases of Foreign Information Manipulation in South Korea'
Available Online: https://medium.com/doublethinklab/canaries-in-the-coal-mine-cases-of-foreign-information-manipulation-in-south-korea-163217242330
6. Generative AI Facilitates High-Volume, Low-Quality Influence Operations
Recent research by Graphika indicates that while generative AI has increased the speed and volume of influence operations, it has not yet resulted in the sophisticated, high-quality campaigns initially feared. Instead, the technology is enabling state and non-state actors to flood social platforms with "AI slop"—high-volume, low-quality content. Operations such as CopyCop, Doppelgänger, and Spamouflage are utilizing these tools to automate the creation of personas and the generation of text, effectively scaling their ability to mislead targeted audiences and amplify polarizing narratives without requiring significant human resources.
Applying the Actor-Behavior-Content-Degree-Effect (ABCDE) framework, analysts observed that propagandists are delegating core tasks to AI, blurring the lines between professional and amateur operators. This shift allows for the saturation of the information environment with fabricated content, even if individual pieces remain relatively crude. The report notes that despite the surge in volume, engagement metrics suggest these AI-driven campaigns struggle to achieve organic reach. The primary threat identified is not the persuasiveness of the synthetic propaganda, but the sheer quantity of misleading material, which can distort public debate and overwhelm fact-checking capabilities within the digital information ecosystem.
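For illustration, the minimal Python sketch below shows one way an analyst might record an observed operation along the five ABCDE dimensions. The field names and example values are our own illustrative assumptions, not Graphika's schema.

```python
from dataclasses import dataclass

@dataclass
class InfluenceOperation:
    """Illustrative record for coding an operation along the ABCDE dimensions.

    Field names are assumptions for demonstration, not Graphika's schema.
    """
    actor: str           # A: who is behind the activity (attributed or suspected)
    behavior: list[str]  # B: observable tactics, e.g. automated persona creation
    content: str         # C: what is being pushed, e.g. polarizing narratives
    degree: str          # D: distribution and reach of the activity
    effect: str          # E: measurable impact, e.g. organic engagement

# Hypothetical coding of the report's observation that AI-driven campaigns
# achieve high volume but little organic reach.
example = InfluenceOperation(
    actor="pro-Kremlin network (suspected)",
    behavior=["AI-generated personas", "automated text generation"],
    content="high-volume, low-quality 'AI slop'",
    degree="cross-platform, thousands of posts",
    effect="low organic engagement despite volume",
)
print(example)
```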
Source: Graphika, 'Cheap Tricks'
Available Online: https://public-assets.graphika.com/reports/Graphika_Report_Cheap_Tricks.pdf
7. Artificial Intelligence Behind Trend for Personalized Influence Campaigns
According to a report by Foreign Affairs, the integration of artificial intelligence is transforming information operations from crude mass persuasion into precise, adaptive campaigns capable of undermining democratic processes. Recent incidents, such as deepfake audio impersonating the U.S. Secretary of State and the large-scale psychological profiling conducted by China's GoLaxy, demonstrate how AI enables hostile actors to deploy microtargeted propaganda. This technological shift allows for the creation of convincing forgeries and the simultaneous dissemination of millions of targeted falsehoods, effectively supercharging the toolkit available for cognitive warfare.
Case studies highlight the global reach of these tactics, from AI-generated hate speech in India to Russian-linked operations that disrupted Romania’s 2024 election. The report cites China’s "Uncle Spam" campaign as an example of how adversaries harvest vast amounts of personal data to refine their targeting strategies. Despite the escalation of these AI-enabled threats, U.S. defenses have been compromised by budget cuts to agencies responsible for countering malign information. Experts warn that without renewed investment and collaboration with tech platforms, the information environment will remain vulnerable to these sophisticated, automated forms of strategic deception and foreign interference.
Source: Foreign Affairs, 'Artificial Intelligence Supercharging Disinformation Warfare'
Available Online: https://www.foreignaffairs.com/united-states/artificial-intelligence-supercharging-disinformation-warfare
8. Political Instability in Bulgaria Amplifies Vulnerability to Hostile Narratives
Bulgaria’s prolonged political instability, characterized by fragmented parties and repeated no-confidence votes, has created an information environment highly susceptible to malign influence. A report by Disinfo Lab outlines how the split within the Movement for Rights and Freedoms (MRF) and the rise of nationalist factions have fueled public mistrust in democratic institutions. This institutional paralysis, combined with multiple snap elections, has weakened public confidence, making citizens more vulnerable to narrative warfare that exploits political fatigue. The situation is further aggravated by a deteriorating media landscape, where fragile press freedom and high rates of news avoidance drive users toward unverified content on social platforms.
These conditions allow hostile influence campaigns (HICs), whether domestic or foreign, to effectively manipulate public sentiment regarding governance and corruption. The report notes that Bulgaria currently has the highest rate of news avoidance in Europe, creating a vacuum that is easily filled by information disorder. While the country has continued its integration into the EU, including Schengen entry, the prevalence of misleading narratives remains a destabilizing force. This environment of polarization serves to undermine the legitimacy of institutions and complicates Bulgaria's strategic communications regarding its European orientation.
Source: Disinfo Lab, 'Disinfo landscape in Bulgaria'
Available Online: https://www.disinfo.eu/wp-content/uploads/2025/11/20251117_Disinfo-landscape-in-Bulgaria-V2-1.pdf
9. Strategic Use of AI Suggested to Counter Venezuelan Regime Narratives
The deployment of a U.S. carrier strike group near Venezuela signals increased attention on the region, yet a CSIS commentary argues that the strategic outcome will likely be determined by information operations rather than kinetic force. The author suggests that Venezuela's tightly controlled information environment, characterized by digital surveillance and censorship, presents unique opportunities for soft warfare. With the Maduro regime facing deep unpopularity, U.S. actors could leverage agentic AI to monitor sentiment and deploy tailored strategic communications that align with local grievances, effectively bypassing the state’s narrative control while minimizing physical risks.
The commentary emphasizes that AI-enabled tools, such as automated source monitoring and anomaly detection, can facilitate precision targeting in areas like Puerto Cabello. By analyzing AI-curated datasets from social platforms, analysts can detect subtle shifts in public mood that might otherwise go unnoticed. This approach posits that information warfare is becoming central to modern statecraft. The ethical application of AI-assisted messaging offers a method to pressure repressive regimes and shape public perception, advancing strategic objectives through the digital domain without the need to escalate into traditional armed conflict.
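As a concrete illustration of the anomaly-detection approach the commentary gestures at, the sketch below flags sharp shifts in a daily sentiment series using a rolling z-score. The data, window, and threshold are hypothetical; this is a minimal sketch of the generic technique, not any tool CSIS names.

```python
import statistics

def flag_sentiment_shifts(daily_scores, window=7, z_threshold=2.0):
    """Flag days whose sentiment deviates sharply from the trailing window.

    daily_scores: list of floats, e.g. mean sentiment per day in [-1, 1].
    Returns (index, z-score) pairs where the rolling z-score exceeds the
    threshold. Real monitoring pipelines would also handle seasonality,
    volume weighting, and bot filtering.
    """
    anomalies = []
    for i in range(window, len(daily_scores)):
        trailing = daily_scores[i - window:i]
        mean = statistics.mean(trailing)
        stdev = statistics.stdev(trailing)
        if stdev == 0:
            continue
        z = (daily_scores[i] - mean) / stdev
        if abs(z) >= z_threshold:
            anomalies.append((i, round(z, 2)))
    return anomalies

# Hypothetical daily sentiment: a stable baseline followed by a sharp drop.
series = [0.1, 0.12, 0.09, 0.11, 0.1, 0.13, 0.1, 0.11, 0.1, -0.4]
print(flag_sentiment_shifts(series))  # flags the final day's sharp negative shift
```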
Source: CSIS, 'How to Win the Information Game in Venezuela'
Available Online: https://www.csis.org/analysis/how-win-information-game-venezuela
10. Crowdsourced Context on Platforms Significantly Reduces Virality of Misleading Content
New research published by the University of Rochester indicates that X's Community Notes system is an effective tool for Digital Impact on Discourse (DID), significantly curbing the spread of misleading content. The study found that posts receiving public notes, which add corrective context visible to all users, were 32 percent more likely to be deleted by their authors compared to posts receiving only private notes. This suggests that peer-generated corrections can outperform traditional algorithmic enforcement or expert fact-checking in influencing user behavior within the social media information ecosystem.
The researchers identify reputational pressure as the primary mechanism driving these retractions, particularly among high-visibility or verified users who wish to maintain credibility. These findings have broader implications for countering information disorder, demonstrating that transparency-driven, community-based tools can improve information integrity without resorting to direct censorship. By leveraging users' collective action, platforms can create a scalable model to mitigate the impact of viral false narratives and foster a more resilient digital environment amid polarized online discourse.
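To make the headline figure concrete, the short sketch below shows how a relative increase in deletion likelihood of this kind is computed from two groups' deletion rates. The counts are invented for illustration and are not the study's data.

```python
def relative_increase(deleted_a, total_a, deleted_b, total_b):
    """Relative increase in deletion rate of group A over group B.

    A result of 0.32 corresponds to "32 percent more likely to be deleted."
    """
    rate_a = deleted_a / total_a  # e.g. posts that received public notes
    rate_b = deleted_b / total_b  # e.g. posts that received only private notes
    return rate_a / rate_b - 1

# Invented counts for illustration only (not the study's data):
# 660 of 5,000 publicly noted posts deleted vs. 500 of 5,000 privately noted.
print(f"{relative_increase(660, 5_000, 500, 5_000):.0%} more likely to be deleted")
# -> 32% more likely to be deleted
```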
Source: University of Rochester, 'Effective Online Fact Checkers Are Peers'
11. LLMs Accelerate Convergence of Cybercrime and Information Threats
The rapid acceleration of the AI "arms race," driven by major developers, is facilitating a new wave of sophisticated operations that blur the line between cybercrime and information warfare. According to an article by the Oxford Internet Institute, Large Language Models (LLMs) now enable malicious actors to execute personalized phishing campaigns and "perfect scams" with minimal technical expertise. Furthermore, the availability of custom malicious models, such as WormGPT, lowers the barrier to entry for conducting offensive cyber activities. Simultaneously, multi-agent AI systems can infiltrate online communities to coordinate inauthentic behavior and manufacture synthetic consensus, challenging traditional verification systems.
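One common defensive heuristic against manufactured consensus of this kind is to look for near-duplicate messages across ostensibly unrelated accounts. The sketch below is a minimal illustration using standard-library string similarity; production systems rely on far heavier methods (embeddings, temporal correlation), and all account names and messages here are invented.

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicate_pairs(posts, threshold=0.85):
    """Return pairs of accounts posting highly similar text.

    posts: dict mapping account name -> message text.
    Many distinct accounts posting near-identical text is one classic
    signal of coordinated inauthentic behavior, not proof by itself.
    """
    pairs = []
    for (acct_a, text_a), (acct_b, text_b) in combinations(posts.items(), 2):
        similarity = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if similarity >= threshold:
            pairs.append((acct_a, acct_b, round(similarity, 2)))
    return pairs

# Invented example posts: two accounts push near-identical phrasing.
posts = {
    "user_a": "Everyone I know agrees the new policy is a disaster.",
    "user_b": "Everyone I know agrees that the new policy is a disaster!",
    "user_c": "Looking forward to the weekend hike.",
}
print(near_duplicate_pairs(posts))  # [('user_a', 'user_b', ...)]
```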
The article notes that this environment is spurring rapid innovation in defensive measures, including AI-detection tools and provenance systems like C2PA, which aim to enhance transparency. The evolving landscape represents a paradigm shift where the focus must move from restricting access to AI tools—which are now widely available—to building a resilient information ecosystem capable of withstanding AI-driven manipulation. The analysis underscores the urgent need for coordinated action between policymakers and industry to counter these scalable, automated threats that combine the disruptive potential of cyber warfare with the deceptive power of synthetic propaganda.
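As a small illustration of the provenance-checking direction, the sketch below shells out to the Content Authenticity Initiative's open-source c2patool CLI (assumed to be installed and on PATH) to report whether a media file carries a C2PA manifest. The invocation reflects the tool's documented basic usage, but treat the details as assumptions rather than a definitive integration.

```python
import json
import subprocess

def read_c2pa_manifest(asset_path):
    """Return the asset's C2PA manifest as a dict, or None if absent.

    Assumes the `c2patool` CLI is installed; invoked with just an asset
    path it prints the manifest store as JSON and exits non-zero when no
    manifest is found. Note: absence of a manifest does not prove content
    is synthetic, and presence does not prove it is authentic --
    provenance is one signal among many.
    """
    result = subprocess.run(
        ["c2patool", asset_path],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found, or tool error (see result.stderr)
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

manifest = read_c2pa_manifest("suspect_clip.mp4")  # hypothetical file name
print("C2PA manifest present" if manifest else "No provenance data found")
```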
Source: Oxford Internet Institute, 'The LLM Arms Race'
Available Online: https://www.oii.ox.ac.uk/news-events/the-llm-arms-race/
[CRC Glossary]
The modern information environment is projected to continue growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.