
Weekly Report: Cyber-based Influence Campaigns, 31st March – 6th April 2025

  • Writer: CRC
  • Apr 15
  • 12 min read

INTRODUCTION


Cyber-based hostile influence campaigns aim to influence target audiences by disseminating information and/or disinformation over the internet, sometimes in conjunction with cyberattacks that amplify their impact (hence the term 'Cyfluence', as opposed to cyberattacks that seek to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered a branch of Information Operations (IO) or Information Warfare (IW).


Typically, and as has been customary over the last decade, information is disseminated across various internet platforms, which form the individual elements of a hostile influence campaign. The connectivity and repetitiveness of content between these elements are the primary characteristics that define influence campaigns. Much like cyberattacks, hostile influence campaigns have become a tool for rival nations and corporations to damage reputations or achieve business, political, or ideological goals. And much as in the cybersecurity arena, PR professionals and government agencies respond to negative publicity and disinformation spread through news and social media.

We use the term 'cyber-based hostile influence campaigns' because we also include in this definition cyberattacks aimed at influencing (such as hacking and leaking during elections), while excluding more traditional kinds of influence, such as diplomatic, economic, and military.


Between March 31 and April 6, 2025, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns, including Cyfluence attacks. The following report summarizes the key events we consider most significant. Some campaigns involve social media and news outlets, while others utilize cyber-attack capabilities.



 





HOSTILE INFLUENCE CAMPAIGNS



STATE ACTORS


Russia

Russia's Disinformation Strategy


EUvsDisinfo reports that the 3rd EEAS (European External Action Service) threat report describes Russia's disinformation apparatus as a structured, four-layered system.


The first layer includes official state-controlled channels like RT and Sputnik, which openly represent the Kremlin's voice. The second layer consists of state-linked platforms such as NewsFront, which try to hide their ties to the Russian state but follow its messaging closely. The third layer includes anonymous websites and accounts that are hard to trace but show technical and behavioral signs of coordination with known pro-Kremlin sources.


The fourth and deepest layer involves state-aligned actors who cannot be directly linked to the Russian state but regularly repeat Kremlin narratives and use the same infrastructure and tactics as confirmed actors. Together, all layers serve Russia's goal of shaping public opinion and spreading confusion. Researchers use technical clues, such as domain and hosting data, and behavioral patterns, such as AI-generated or automated posting, to detect and track these operations.
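
To make the layered attribution logic concrete, here is a minimal sketch, not the EEAS methodology, of how such technical and behavioral indicators could be combined to suggest which layer a channel most resembles. All field names and thresholds are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class ChannelIndicators:
    openly_state_branded: bool        # self-identifies as a state outlet, like RT or Sputnik
    registration_tied_to_state: bool  # domain/hosting records point to state infrastructure
    shares_infrastructure: bool       # same hosting or analytics IDs as known pro-Kremlin sites
    automated_posting: bool           # bot-like timing or AI-generated text markers
    narrative_overlap: float          # share of content matching known Kremlin narratives (0-1)

def suggest_layer(c: ChannelIndicators) -> int:
    """Return 1-4 (official to state-aligned), or 0 if evidence is insufficient."""
    if c.openly_state_branded:
        return 1  # layer 1: official state-controlled channels
    if c.registration_tied_to_state:
        return 2  # layer 2: state-linked platforms hiding their ties
    if c.shares_infrastructure or c.automated_posting:
        return 3  # layer 3: anonymous sites with technical signs of coordination
    if c.narrative_overlap > 0.5:
        return 4  # layer 4: state-aligned actors repeating Kremlin narratives
    return 0

# Example: an anonymous site sharing infrastructure with known outlets -> layer 3
print(suggest_layer(ChannelIndicators(False, False, True, True, 0.8)))
```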


Source: 

EUvsDisinfo (2025). 3rd EEAS Report on the Architecture of Russia's FIMI Operations. [online] Available at: https://euvsdisinfo.eu/the-architecture-of-russias-fimi-operations/

 

Russian Disinformation and Hostile Campaigns in Georgia


EUvsDisinfo highlights that Russian media outlets such as Sputnik and Russia Today (RT) in Georgia played a central role in spreading disinformation to undermine the protests against the Georgian government’s decision to suspend EU accession talks until 2028. The protesters were labeled as "liberal-fascist" traitors and "puppets of the West," while the police’s use of force was portrayed as necessary, lawful, and proportionate. These outlets promoted the narrative of a Western-backed "color revolution" and accused the US and EU of destabilizing Georgia. Additionally, disinformation claimed that Western organizations like USAID and NED were funding the protests to undermine Georgia’s sovereignty. Russian media also sought to link the unrest to the war in Ukraine, framing Georgia as a “second front” against Russia. These campaigns aimed to depict the West as hostile while casting Russia as the stabilizing force in Georgia. These narratives align with Russia’s broader strategy of hostile influence.


Source: 

EUvsDisinfo, "The war on truth: Russian disinformation and Georgia’s path to EU discord," (2025), [online] Available at: https://euvsdisinfo.eu/the-war-on-truth-russian-disinformation-and-georgias-path-to-eu-discord/

 

War in Ukraine

Conclusions on AI’s Influence in State-Sponsored Disinformation Campaigns


PNAS Nexus, published by Oxford University Press, features a recent report that examines the impact of generative AI on disinformation campaigns, focusing on DCWeekly.org, a propaganda site. This site, identified as part of a Russian influence operation, spread pro-Russian narratives to a global audience, particularly in West Africa and in countries such as Turkey, India, and the U.S. The report shows that the use of generative AI, particularly OpenAI's GPT-3, significantly increased the production of disinformation. Before AI adoption, the content was mainly copied and lightly edited from other sources. After the site integrated AI in September 2023, its articles appeared more original, though they often retained the same source base. Notably, many of these disinformation campaigns focused on anti-Ukrainian narratives, including fabricated stories about Ukrainian President Volodymyr Zelenskyy. A survey showed that the AI-generated articles were perceived as just as persuasive and credible as the earlier ones, highlighting the growing threat of AI-supported disinformation.
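
The "more original, same sources" finding suggests one simple way such a shift could be quantified. Below is a minimal sketch, assuming TF-IDF cosine similarity as a stand-in for the authors' actual measures; the sample sentences are invented placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity(article: str, source: str) -> float:
    """Cosine similarity of TF-IDF vectors; values near 1.0 indicate near-verbatim copying."""
    tfidf = TfidfVectorizer().fit_transform([article, source])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

source = "Officials reported the minister said sanctions failed."
pre_ai = similarity("The minister said sanctions failed, officials reported.", source)
post_ai = similarity("Western sanctions have quietly unravelled, a senior figure conceded.", source)

# A drop in similarity after AI adoption would indicate more 'original' rewrites
# of the same underlying source material.
print(f"pre-AI copy: {pre_ai:.2f}, post-AI rewrite: {post_ai:.2f}")
```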


Source:

Wack, M., Ehrett, C., Linvill, D., & Warren, P. (2025). Generative propaganda: Evidence of AI's impact from a state-backed disinformation campaign. PNAS Nexus, Oxford University Press. [online]

 

Russia Escalates Disinformation War to Undermine Ukraine’s Global Support


A report from RBC-Ukraine reveals that Russia has launched a new disinformation campaign against Ukraine. The campaign involves key Russian media outlets such as Gazeta.ru, Sputnik, Vedomosti, and Voennoye Obozreniye, as well as foreign proxy platforms like Reseau International and Magyar Hírlap, which present themselves as independent sources.

Dissemination occurs through anonymous Telegram channels, manipulated TikTok accounts, and YouTube videos impersonating Ukrainian soldiers or "independent experts." These materials are spread in various formats, including opinion pieces, fake interviews, analyses, and infographics.

The primary objective of this operation is to blame Kyiv for the failure of peace talks, depict Russia as a "peacemaker," and portray Ukraine as unwilling to negotiate. Simultaneously, the campaign seeks to undermine trust in the Ukrainian government and weaken Western support for Ukraine.


Source: 

Babaiev, B. (2025). Russia blames Ukraine for stalled talks in new disinformation campaign – Ukraine's intelligence. RBC-Ukraine. [online]

 

GENERAL REPORTS


The Complexities of Disinformation Attribution


Oxford Academic recently published a study examining the role of attribution in countering disinformation campaigns and its use as a deterrence strategy. The study highlights the political risks and uncertainties in attribution decisions, particularly in liberal democracies. By analyzing cases such as the 2016 US presidential election and the 2021 German Bundestag election, the research argues that both technical capabilities and domestic political contexts shape attribution. The study introduces the concept of the "uncertainty loop," which describes how varying levels of political, social, and technical uncertainty influence the timing and manner of attribution decisions. The findings suggest that while technical advancements have made attribution more feasible, political considerations, such as the risk of domestic backlash and the impact on international relations, often dictate whether attribution is publicly pursued. Thus, disinformation attribution serves as both a deterrence measure and a politically sensitive tool in modern international relations.


Source:

Hedling, E., & Ördén, H. (2025). Disinformation, deterrence and the politics of attribution. Oxford University Press. [online]

 

Understanding FIMI: Key Findings and Trends in Digital Warfare


The 3rd EEAS Threat Report examines the growing threat of disinformation and foreign information manipulation and interference (FIMI) in the digital age. Key actors include Russia and China, which use disinformation to deepen political divisions and undermine trust in democratic institutions. The report introduces the FIMI Exposure Matrix, a tool for identifying media channels connected to FIMI operations; the matrix categorizes channels based on technical and behavioral indicators of their ties to manipulative actors. The report also highlights the increasing use of AI-generated content and fake accounts to spread disinformation. It identifies digital platforms as the primary vector for these threats and calls for enhanced institutional collaboration and improved detection methods, with the goal of strengthening societal resilience to FIMI and increasing transparency on digital platforms.
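
As an illustration of the kind of evidence such a matrix captures, here is a minimal sketch, not the actual EEAS FIMI Exposure Matrix, that tabulates per channel which classes of evidence (technical vs. behavioral) tie it to a manipulative actor. Channel names and indicator values are invented.

```python
# Each channel maps to the evidence classes observed for it.
channels = {
    "example-news.net": {"technical": ["shared hosting", "same analytics ID"],
                         "behavioral": ["synchronized posting"]},
    "anon-blog.example": {"technical": [],
                          "behavioral": ["verbatim narrative reuse"]},
}

def exposure_row(name: str, indicators: dict) -> str:
    """Summarize one matrix row: indicator counts and a rough attribution label."""
    t, b = len(indicators["technical"]), len(indicators["behavioral"])
    tie = "state-linked" if t else ("state-aligned" if b else "unclear")
    return f"{name:20} technical={t} behavioral={b} -> {tie}"

for name, indicators in channels.items():
    print(exposure_row(name, indicators))
```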


Source:

European External Action Service (EEAS) (2025). 3rd EEAS Report on Foreign Information Manipulation and Interference Threats. [online] Available at: https://www.eeas.europa.eu/eeas/3rd-eeas-report-foreign-information-manipulation-and-interference-threats-0_en

 

Appendix - Frameworks to Counter Disinformation


Early Detection of Disinformation Campaigns Using AI


In its report, RAND examines the use of large language models (LLMs) to detect disinformation and propaganda. Unlike traditional methods, LLMs can analyze broader context and identify subtle propaganda patterns, recognizing classical techniques such as exaggeration and deception. The report finds that fine-tuned LLMs detect disinformation effectively, especially when trained on propaganda data, and recommends including non-English sources and building a larger corpus of propaganda instances. In conclusion, LLMs are a promising tool for detecting foreign malign information operations.
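
As a rough illustration of the approach, the sketch below scores a text against propaganda-technique labels. RAND's report evaluated fine-tuned models trained on propaganda data; this stand-in instead uses an off-the-shelf zero-shot classifier, and the labels and sample sentence are illustrative rather than RAND's taxonomy.

```python
from transformers import pipeline

# Zero-shot classification with a public NLI model as a stand-in for a
# fine-tuned propaganda detector.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = ("Every single expert agrees the regime's enemies have already lost, "
        "and only traitors would claim otherwise.")
labels = ["exaggeration", "loaded language", "neutral reporting"]

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:18} {score:.2f}")
```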


Source:

Mouton, C. A., Lucas, C., & Ee, S. (2025). Defending American Interests Abroad: Early Detection of Foreign Malign Information Operations. RAND Corporation. [online] Available at: https://www.rand.org/pubs/research_reports/RRA2853-1.html

 

UK’s Foreign Influence Registration Scheme to Counter Disinformation


The UK government announced the launch of the Foreign Influence Registration Scheme (FIRS) during an oral statement to Parliament. This scheme is part of the National Security Act 2023 and addresses the increasing risk of covert foreign interference. Its main goal is to enhance transparency regarding foreign influence, particularly from countries like Russia and Iran, and to safeguard national security, democratic institutions, and the UK’s political system.


Under FIRS, individuals and organisations must register if they carry out activities on behalf of foreign powers within the UK. The scheme operates on two levels: the political tier, which applies to all foreign states, and the enhanced tier, which focuses on hostile actors that pose a more significant threat. This includes foreign governments, authorities, and state-controlled political parties.


FIRS will take effect on 1 July 2025, following a three-month transition period. Failing to register will be considered a criminal offence.


Source:

UK Government, Home Office and Dan Jarvis MBE MP (2025). Foreign Influence Registration Scheme implementation. [online]

 

Disinformation as a Geopolitical Weapon


A study published in the Journal of Complex Networks explores how disinformation spread through social media can disrupt the operation of critical infrastructure. Using a case study from New York City, the researchers show how false reports about supposedly closed subway stations can influence passenger behavior and lead to overcrowding and delays.


Many people rely on social media to plan their routes. When targeted disinformation is circulated on these platforms, it can cause detours, congestion, and inefficient system use, resulting in real-world disruptions in urban transportation.


The researchers developed a mathematical model to identify the most influential users in social networks; these users can then be targeted with accurate information to prevent the spread of false narratives, as sketched below.
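
A minimal sketch of that idea, not the paper's actual model: rank accounts in a reposting network by PageRank (via networkx) and prioritize the top-ranked accounts as recipients of corrective information. The toy network is invented.

```python
import networkx as nx

# Directed edges: u -> v means account v reposts content from account u.
G = nx.DiGraph([("a", "b"), ("a", "c"), ("c", "d"), ("b", "d"), ("e", "a")])

# Reverse the graph so PageRank score flows from reposters back to the
# sources they amplify; high scores mark influential spreaders.
influence = nx.pagerank(G.reverse())

# Deliver accurate information to the top-ranked accounts first.
targets = sorted(influence, key=influence.get, reverse=True)[:2]
print("Prioritize accurate-information delivery to:", targets)
```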


The study clarifies that protecting critical infrastructure also means tackling digital disinformation campaigns. What begins online can quickly have tangible consequences in everyday life.


Source:

Jamalzadeh, S., Barker, K., González, A. D., Radhakrishnan, S., Bessarabova, E., & Sansavini, G. (2025). Disinformation interdiction: protecting infrastructure networks from weaponized disinformation campaigns. Journal of Complex Networks. [online]


 

GLOSSARY


Information Operations

Hybrid Warfare

Cyber Warfare

Cyfluence Attack

Soft Warfare

CIB

FIMI

Hostile Influence Campaign (HIC)

Digital Impact on Discourse (DID)

Misinformation

Disinformation

Inauthentic Behavior

Fake users

Unidentified users

Sockpuppet accounts

Bots

Repurposed accounts

Fake website

Deep Assets

Real platforms

Astroturfing

Cyberbullying

 

DISCLAIMER


Copyright and License of Product 

This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.


Disclaimer of Warranties

The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.


Accuracy of Information 

The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.


Limitation of Liability

To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.


Indemnification

The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.


Third-Party Rights

The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.


Governing Law and Jurisdiction 

This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.


 
