- Meta Takes Down Global CIB Operations from Iran, China, and Romania
I. INTRODUCTION

Meta’s latest Adversarial Threat Report details the detection and dismantling of three global Coordinated Inauthentic Behaviour (CIB) campaigns attributed to actors based in China, Iran, and Romania. [1] Notably, the report is only twelve pages long, making it the shortest instalment of Meta’s Adversarial Threat series to date. This trend toward brevity may suggest a deprioritisation of transparency or a strategic shift in how Meta communicates threat data.

Figure: Quarterly Adversarial Threat Report page count over time

While concise, the report still illustrates that Meta remains actively involved in addressing hostile digital influence efforts.

II. CHINA

Meta attributed a network conducting hostile influence operations targeting Myanmar, Taiwan, and Japan to actors based in China. The operation leveraged fake accounts (often featuring AI-generated profile photos) to pose as local users, manage pages, disseminate content, and simulate organic engagement. The campaign sought to shape public opinion in favour of Chinese political interests. In Myanmar, it criticized civil resistance movements and voiced support for the ruling junta, while content targeting Japan attacked the government and condemned its military alliance with the United States. Anonymous posts in Taiwan spread allegations of corruption among political and military leaders, creating the false impression that these narratives emerged organically from local users. [2] Meta links this campaign to two earlier China-based operations, previously dismantled in September 2022 [3] and February 2024 [4].

III. IRAN

Meta reports that it disrupted an Iranian CIB network at an early stage, aided by automated detection systems and intelligence shared by Google’s Threat Intelligence Group. Operators often impersonated female journalists and spammed popular hashtags (e.g., #starbuck, #instagram) in an attempt to blend into or overwhelm trending conversations. Posts in Azeri covered current events such as the Paris Olympics, calls to boycott American brands, and criticisms of U.S. foreign policy, President Biden, and Israel’s military actions in Gaza. The campaign also referenced events such as Israel’s 2024 attacks, possibly a reference to the ongoing escalation of the conflict. [5]

IV. ROMANIA

The most extensive campaign by scale originated in Romania, involving approximately 20,000 accounts and $177,000 in ad spending across multiple platforms. Fake accounts posed as local Romanian users, posting about sports, travel, and regional news. Cross-platform activity on YouTube, X, and TikTok reinforced these personas. According to the report, the operators demonstrated strong operational security (OpSec), using proxy IPs and other measures to obscure coordination and origin. Meta reports dismantling the operation before it attracted a substantial authentic audience. [6] While Meta did not directly link this campaign to electoral interference, its timing coincides with a broader surge in hostile digital influence activity surrounding the Romanian elections. See our report here for a deep dive into the efforts to influence the Romanian election. [7]

Footnotes:
(1) Meta, Adversarial Threat Report: May – First Quarter 2025, 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting. [Accessed June 09, 2025].
(2) Meta, Adversarial Threat Report: May – First Quarter 2025, 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting. [Accessed June 09, 2025].
(3) Meta, 2022. Removing Coordinated Inauthentic Behavior From China and Russia. [online] Available at: https://about.fb.com/news/2022/09/removing-coordinated-inauthentic-behavior-from-china-and-russia/. [Accessed June 09, 2025].
(4) Meta, Adversarial Threat Report: Feb – Fourth Quarter 2023, 2024. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting. [Accessed June 09, 2025].
(5) Meta, Adversarial Threat Report: May – First Quarter 2025, 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting. [Accessed June 09, 2025].
(6) Meta, Adversarial Threat Report: May – First Quarter 2025, 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting. [Accessed June 09, 2025].
(7) Cyfluence Research Center (CRC), The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts, 2025. [online] Available at: https://www.cyfluence-research.org/post/the-romanian-presidential-elections-2024-analysis-of-information-operations-and-long-term-inf [Accessed June 09, 2025].

DISCLAIMER

Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.

Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.
- Visibility as Victory: The Strategic Logic of Doppelgänger
I. INTRODUCTION

Doppelgänger is considered one of the most prominent Russian digital influence operations since the start of the war against Ukraine. The campaign gained notoriety for using deceptively realistic clones of Western news and government websites, so-called mirror sites, with current estimates ranging from 70 to 700. These sites were used to spread disinformation, primarily pro-Russian, anti-Ukrainian, and anti-Western narratives. Dissemination occurred through a multi-tiered system involving fake accounts, automated bot comments, and paid online advertising. The operation was coordinated primarily by the Social Design Agency (SDA) in cooperation with entities such as Structura and Dialog. According to U.S. authorities, between January and April 2024 alone, the network disseminated roughly 40,000 content items and 33.9 million comments globally. Strategic targets included the 2024 European Parliamentary elections and the Paris Olympic Games. [1]

In this week’s blog post, we look at key takeaways from a major new report that challenges what we thought we knew about the so-called Doppelgänger campaign. "Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency" (2025) digs into over 3,100 leaked internal files from the SDA, a major player in Russia’s influence operations, and offers a fresh perspective on how this organisation works. Given the report's breadth, we do not aim to cover it comprehensively, but instead highlight a few key findings that shed new light on current developments in digital hostile influence. The full report is highly recommended for anyone seeking to understand the shifting strategies, infrastructures, and evaluation metrics that define the contemporary landscape of state-linked information warfare.

II. KEY CONCLUSIONS

The actors involved, above all the Social Design Agency (SDA) and its network, understand the information operations analysed in the report as part of a state-sanctioned information war. Their stated objective is the expansion of a far-reaching digital propaganda ecosystem that has operated under Kremlin direction since 2022. With an estimated annual budget of €600 million, this system targets informational environments in Russia, Ukraine, the West, and increasingly, the global public sphere. [2]

Figure 1: Image used by SDA to demonstrate its campaign goals; courtesy of the Psychological Defence Research Institute at Lund University [3]

SDA is overseen by Sergei Kiriyenko and his team within the Presidential Administration. Together with related entities, SDA does not function as a conventional state media organ but as an outsourced actor in a state-funded, semi-private model of digital propaganda. Within this system, it competes with legacy structures like RT or Sputnik for funding and political relevance. [4]

Figure 2: Network of the Russian propaganda machine [red boxes added by the authors]; courtesy of Audinet and Gérard [5]

Figure 3: A representation of regular participants in Presidential Administration meetings at which strategic decisions about SDA operations were made; courtesy of the Psychological Defence Research Institute at Lund University [6]

Strategic directives come directly from Kremlin officials, most notably Alexander Kharichev, and are implemented with little discussion or internal review. Campaigns are often deployed within 48 hours. Sofia Zakharova handles operational management, including performance metrics and budget oversight.
Leaked documents show that media visibility, regardless of tone, is treated internally as evidence of effectiveness and is used to justify additional funding. [7] Although SDA and Structura are separate entities, they operate in practice as a single, integrated unit. [8] Crucially, neither Western sanctions nor takedown efforts have disrupted this model. Instead, these external pressures have accelerated its evolution and sharpened its competitive edge within Russia’s influence ecosystem.

SDA played a central role in the Doppelgänger campaign. The report draws on a trove of leaked documents supplemented by publicly available data on affiliated entities such as Structura, the Internet Development Institute (IRI), Dialogue, and Dialogue Regions. The authors also examine a broader set of related operations, including Matryoshka, Overload, and Undercut. Several other campaigns with similar architecture and techniques appear to originate from the same organisational environment. While not all were operated directly by SDA, they likely relied on shared infrastructure or sub-licensed services, and can be understood as part of the same systemic counter-campaign. [9]

The authors issue an explicit methodological critique of prior analysis, stating: “Doppelgänger has been over-hyped and under-analysed, leading to fundamental misunderstandings about its purpose and scope… Our approach has focused on capabilities, foregrounding SDA's activities' strategic and operational components.” [10]

One of the report’s central arguments is that Western interpretations of Doppelgänger have been fundamentally flawed. What has often been described in media and policy discourse as a coherent Russian disinformation campaign is, from the authors’ perspective, only a technical delivery tactic within a much larger strategic framework. Mirror sites for content dissemination, often quickly detected and taken down, are not considered a standalone operation but one among many interchangeable methods. [11]

The SDA’s activities, the report argues, should be understood as strategic counter-campaigns: a hybrid model that blends advertising logic, political campaigning, intelligence tradecraft, and military strategy. Within this framework, the exposure of Doppelgänger was not considered a failure, but a success. With over 350 international media reports referencing the campaign, SDA recorded a high return on investment. Visibility became the metric: attention from Western media, politicians, and fact-checkers was used internally to demonstrate effectiveness, ultimately justifying additional funding and responsibility from the Kremlin. [12]

Doppelgänger illustrates a fundamental insight: Russian influence operations do not measure success by persuasion or approval, but by narrative penetration. The core objective is to ensure the presence of Russian thematic lines within Western discourse, regardless of how they are framed, received, or challenged. [13]

Figure 4: Core objective of the SDA in short; courtesy of the Psychological Defence Research Institute at Lund University [14]

What works is amplified; what fails to generate resonance is discarded. The Social Design Agency (SDA) draws no qualitative distinctions between disinformation narratives, distribution channels, or actors, whether through bots, fake profiles, paid advertising, or influencers. Everything serves the goal of penetrating the Western information environment.
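The report documents this evaluation logic only descriptively; the toy Python sketch below is our own illustration (not code, data, or formulas from the SDA or from the report) of how a volume-only visibility tally diverges from a conventional stance-weighted one. All outlet names and stances are invented.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class Mention:
    outlet: str   # hypothetical outlet identifier
    stance: str   # "supportive", "neutral", or "debunking"


def visibility_score(mentions: list[Mention]) -> int:
    """Volume-only tally: every mention counts, regardless of stance.

    This mirrors the evaluation logic the report attributes to SDA,
    where exposés and debunkings are booked as reach, not as losses.
    """
    return len(mentions)


def stance_weighted_score(mentions: list[Mention]) -> int:
    """Conventional tally: debunking coverage counts against the campaign."""
    weights = {"supportive": 1, "neutral": 0, "debunking": -1}
    return sum(weights[m.stance] for m in mentions)


if __name__ == "__main__":
    # Invented coverage of a hypothetical campaign, mostly critical.
    coverage = [
        Mention("outlet_a", "debunking"),
        Mention("outlet_b", "debunking"),
        Mention("outlet_c", "neutral"),
        Mention("outlet_d", "supportive"),
    ]
    print("by stance:", dict(Counter(m.stance for m in coverage)))
    print("visibility (volume only):", visibility_score(coverage))   # 4
    print("stance-weighted:", stance_weighted_score(coverage))       # -1
```

Under the volume-only tally, a wave of debunkings still registers as success; under the stance-weighted one, the same coverage reads as failure. That inversion is the gap the report argues Western responses have tended to overlook.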
A central objective, therefore, is not merely to circumvent Western fact-checkers, analysts, and investigative journalists but to strategically reframe their efforts. Within SDA, exposés and debunkings are interpreted not as setbacks, but as mechanisms that enhance the campaign’s reach. To this end, the agency has developed internal evaluation models and success metrics prioritising visibility in public discourse, regardless of content, as the primary benchmark. [15]

Figure 4: Data from reports analysing the degree of penetration of "Russian propaganda" in the information environments of the leading European countries; courtesy of the Psychological Defence Research Institute at Lund University [16]

The Doppelgänger campaign should not be viewed as an isolated incident, but rather as a symptom of a broader structural shift within the Russian influence apparatus. Sanctions and restrictions targeting traditional Russian state media such as RT and Sputnik have accelerated this transition by forcing the Kremlin to rely more heavily on informal, decentralised channels. However, this shift is not unique to Russia. It reflects a wider transformation across the global information operations landscape, where non-state and commercial actors are increasingly central. At the heart of this model are professionally organised networks of private and formally independent actors, equipped with significant resources and high-level technical expertise. The Social Design Agency (SDA) exemplifies this new generation of digitally trained propagandists, who systematically leverage social and digital media to manipulate societal dynamics at scale.

III. LESSONS FROM THE DOPPELGÄNGER CASE

The report’s authors believe that the response to the “Doppelgänger” campaign exposed substantial structural and analytical deficiencies, highlighting the need for urgent corrective action. A central issue lies in the conceptual orientation of many current analyses. The prevailing focus on so-called TTPs (tactics, techniques, and procedures) has led many investigations to become bogged down in technical detail, without grasping the threat actors' overarching strategic capabilities and objectives. The report therefore advocates for a capabilities-based analytical framework that enables a holistic understanding of adversarial strategies and allows for targeted disruption. TTP-based methods remain essential, for example, in the forensic examination of specific campaign components, but they are insufficient for identifying systemic patterns or long-term decision-making processes. [17]

Another key criticism concerns the lack of coordination within the counter-FIMI community. Responses to “Doppelgänger” were largely uncoordinated: more than 60 technical reports were produced independently, without a shared analytical baseline or strategic alignment. This fragmentation, the authors argue, ultimately made it easier for the adversarial network to maintain its operations. The case of Meta exemplifies this: despite the platform playing a key role in uncovering the network and publishing lists of mirror sites, internal SDA documents show that the agency continued to use it systematically for advertising, fully aware of its exploitability. The report references an estimated $338,000 in ad revenue generated within the EU alone since sanctions were imposed in July 2023, a figure the authors call scandalous. [18]

The report also critically examines the common practice of assigning catchy names to influence operations, whether by SDA or counter-influence actors.
Labels such as “Doppelgänger,” “Undercut,” or “Stormkiller” may capture attention but risk reducing complex structures to seemingly isolated operations and suggesting false distinctions. In reality, the authors stress, these are interlinked activities embedded within an adaptive and networked propaganda model. Branding, also on the side of investigators, may inadvertently contribute to strategic oversimplification. [19]

The report finds that the Social Design Agency (SDA) benefited from the public exposure of the “Doppelgänger” campaign. Media and political attention were internally assessed as evidence of effectiveness and used to justify increased funding from Kremlin sources. According to the authors, this visibility was not interpreted as a setback, but as confirmation of the campaign’s strategic success. The report also raises the possibility that parts of the leaked material may have been released deliberately, potentially even by SDA itself, to validate its impact and position within Russia’s influence ecosystem. In this context, public exposure becomes a calculated part of the operation, potentially contributing to its consolidation rather than disruption. [20]

The report clarifies that this is not a blanket critique of investigative work, fact-checking, or digital counterstrategies. On the contrary, the effectiveness of the counter-FIMI community has made it a growing target for strategic exploitation. Therefore, it is all the more urgent for this community to reassess its role critically, act more strategically, coordinate more coherently, and develop more robust models for analysis and intervention. Only then can the next generation of Russian influence operations be effectively detected, understood, and countered. [21]

Footnotes:
(1) Psychological Defence Agency; Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.14. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency. [Accessed May 30, 2025].
(2) Pamment & Tsurtsumia, 2025, p.14.
(3) Pamment & Tsurtsumia, 2025, p.94.
(4) Pamment & Tsurtsumia, 2025, p.41.
(5) Pamment & Tsurtsumia, 2025, p.40.
Image source for Figure 2: Audinet, M., & Gérard, C., 2024. Under the Radar: Crisis, Reorganization, and Clandestinization in Russia’s Ecosystem of Information Influence after the Invasion of Ukraine. Academic Policy Paper Series, no. 2, November 2024. [online] Available at: https://therussiaprogram.org/russia_information_influence. [Accessed May 30, 2025].
(6) Pamment & Tsurtsumia, 2025, p.41.
(7) Pamment & Tsurtsumia, 2025, pp.42-43.
(8) Pamment & Tsurtsumia, 2025, p.20.
(9) Pamment & Tsurtsumia, 2025, p.20.
(10) Pamment & Tsurtsumia, 2025, p.14.
(11) Pamment & Tsurtsumia, 2025, p.14.
(12) Pamment & Tsurtsumia, 2025, p.15.
(13) Pamment & Tsurtsumia, 2025, p.77.
(14) Pamment & Tsurtsumia, 2025, p.16.
(15) Pamment & Tsurtsumia, 2025, p.21.
(16) Pamment & Tsurtsumia, 2025, p.76.
(17) Pamment & Tsurtsumia, 2025, pp.189-190.
(18) Pamment & Tsurtsumia, 2025, p.190.
(19) Pamment & Tsurtsumia, 2025, p.190.
(20) Pamment & Tsurtsumia, 2025, pp.36-37.
(21) Pamment & Tsurtsumia, 2025, p.191.
- Weekly Report: Cyber based influence campaigns 26th May - 1st of June 2025
[Listen to the Podcast]

[Report Highlights]

Global Witness's investigation ahead of Poland’s June 2025 presidential runoff revealed that TikTok’s algorithm disproportionately amplifies far-right content, posing a disinformation risk to electoral integrity.

Meta’s Q1 2025 Adversarial Threat Report reveals that coordinated disinformation networks from China, Iran, and Romania used fake accounts and cross-platform tactics to manipulate public discourse and promote state-aligned narratives.

As published by EU vs. Disinfo, Georgia prepares to implement its new Foreign Agents Registration Act (FARA) on June 1, 2025. The government, led by the Georgian Dream party, has launched a wide-reaching campaign to justify the law.

Reporters Without Borders (RSF) reports that Oleg Nesterenko, a Russian national based in France, has been spreading Kremlin propaganda through media outlets, social platforms, and conspiracy blogs, particularly targeting audiences in French-speaking Africa.

> TABLE OF CONTENTS <

HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[TikTok] Far-right content amplification threatens Polish election integrity
[Meta] Meta Uncovers Global Disinformation Networks from China, Iran, and Romania

STATE ACTORS
[Russia] Russian Disinformation Targets the Baltic States
France-Based Russian Propagandist Amplifies Kremlin Narratives in Africa
[The War in Ukraine] Lessons from Ukraine About Disinformation in the Cyber War

AI-RELATED ARTICLES
AI Disinformation Is Polluting the Internet and Undermining Public Trust
The Role of Narrative in Disinformation and Its Defense

GENERAL REPORTS
Belarusian Disinformation Targets Poland Through Social Media
Georgia's FARA Law Promoted Through Kremlin-Backed Disinformation

FRAMEWORKS TO COUNTER DISINFORMATION
Meta’s 2025 First Quarter Report

GLOSSARY

DOWNLOAD SECTION

REPORT

HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS

[TikTok] Far-right content amplification threatens Polish election integrity

Global Witness has investigated TikTok’s recommendation algorithm and found that far-right content is disproportionately amplified in the lead-up to Poland’s June 2025 presidential runoff. Researchers created test profiles that engaged equally with both candidates' official accounts. Despite this, the algorithm recommended over five times more pro-Nawrocki (far-right) content than pro-Trzaskowski (centrist) content. Overall, two-thirds of the political content pushed by TikTok favored far-right views, including antisemitic material. The apparent skew is especially notable given that Trzaskowski led the first-round vote and had a significantly stronger TikTok presence than Nawrocki.

TikTok’s recommendation algorithm, which prioritizes engaging content, appears to favor the sensationalist narratives more commonly associated with the far right. This dynamic, which emphasises and reinforces particular political perspectives over others, raises concerns about systematic bias and its implications for electoral integrity.

TikTok disputes the findings, calling the methodology flawed and denying deliberate bias. However, similar amplification patterns were observed during past elections in Germany and Romania, suggesting a broader issue in how political content is surfaced on the platform. The European Commission is investigating TikTok’s election-related conduct under the Digital Services Act.
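Global Witness has not published the scripts behind the audit; the sketch below is a minimal, hypothetical illustration in Python (our own construction, with invented profile names, labels, and counts) of the kind of tally the methodology implies: recommended political posts collected by test profiles that engaged equally with both candidates are labelled by leaning, and an amplification ratio is computed.

```python
from collections import Counter

# Hypothetical labelled data: one record per political post recommended to a
# test profile that engaged equally with both candidates' official accounts.
recommended_posts = [
    {"profile": "test_1", "leaning": "pro_nawrocki"},
    {"profile": "test_1", "leaning": "pro_nawrocki"},
    {"profile": "test_1", "leaning": "pro_trzaskowski"},
    {"profile": "test_2", "leaning": "pro_nawrocki"},
    {"profile": "test_2", "leaning": "pro_nawrocki"},
    {"profile": "test_2", "leaning": "pro_trzaskowski"},
    {"profile": "test_2", "leaning": "other"},
]


def amplification_ratio(posts, side_a="pro_nawrocki", side_b="pro_trzaskowski"):
    """Ratio of recommended posts labelled side_a to those labelled side_b.

    With balanced engagement by the test profiles, a ratio persistently
    above 1.0 indicates the kind of skew the audit describes.
    """
    counts = Counter(p["leaning"] for p in posts)
    if counts[side_b] == 0:
        return float("inf")
    return counts[side_a] / counts[side_b]


if __name__ == "__main__":
    print("recommendations by leaning:",
          dict(Counter(p["leaning"] for p in recommended_posts)))
    print("amplification ratio:", amplification_ratio(recommended_posts))  # 2.0 here
```

On real data, a ratio consistently above 1.0 across balanced test profiles would point to the skew Global Witness reports, although distinguishing deliberate bias from engagement-driven ranking would require further evidence.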
Critics call for increased transparency, urging the platform to grant external researchers meaningful access to data and to disclose how engagement signals are weighted, so that the algorithm's impact can be better assessed.

Sources:
Global Witness, 2025. TikTok algorithm recommends twice as much hard right content to users ahead of Polish election. [online] Available at: https://globalwitness.org/en/campaigns/digital-threats/tiktok-algorithm-recommends-twice-as-much-hard-right-content-to-users-ahead-of-polish-election/

[Back to Table of Contents]

[Meta] Meta Uncovers Global Disinformation Networks from China, Iran, and Romania

Meta’s Q1 2025 Adversarial Threat Report reveals the takedown of coordinated inauthentic behavior (CIB) networks originating in China, Iran, and Romania that aimed to manipulate public opinion through fake accounts and deceptive online content. The Chinese network targeted Myanmar, Taiwan, and Japan using AI-generated profile photos and multilingual content to pose as locals, spreading pro-regime and anti-Western narratives. The Iranian operation focused on Azeri-speaking audiences in Azerbaijan and Turkey, promoting anti-U.S. and pro-Palestinian sentiments across multiple platforms, often using fake female journalist personas and popular hashtags to blend into trending conversations. The Romanian campaign stood out due to its massive scale, with 658 Facebook accounts and $177,000 in ad spend. It sought to influence Romanian public discourse through false personas and cross-platform coordination, including YouTube, X, and TikTok. Meta disrupted these networks before they gained substantial, authentic followings.

These findings highlight the persistent and evolving threat of state-linked and opportunistic actors weaponizing social platforms to sow misinformation, undermine trust, and manipulate global political narratives.

Sources:
Meta, 2025. Quarterly Adversarial Threat Report Q1 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting

[Back to Table of Contents]

STATE ACTORS

[Russia] Russian Disinformation Targets the Baltic States

The Jamestown Foundation identifies hostile influence and disinformation as central elements of Russia’s hybrid strategy targeting the Baltic states. These efforts aim to erode democratic cohesion, weaken public trust, and disrupt alignment with Western institutions, without crossing the threshold of conventional conflict.

At the core is a sustained campaign of information manipulation. Disinformation circulates through platforms like Telegram and TikTok, often supported by AI-generated content and deepfakes. Messaging is tailored to divide societies, discredit NATO and EU engagement, and diminish support for Ukraine. Russian-speaking minorities and economically vulnerable groups are frequent targets of these narratives, which are designed to exploit identity, fear, and historical grievances.

These influence operations work in tandem with other hybrid tactics, such as cyberattacks, sabotage, and covert intimidation, which are not the focus here but reinforce the overall pressure. Incidents like GPS interference, infrastructure attacks, or state-backed migration flows contribute to instability and uncertainty, creating fertile ground for manipulative messaging.

Electoral influence is pursued primarily through perception management rather than direct interference. Polarizing content, the amplification of distrust, and reputational attacks on political actors are used to undermine institutional legitimacy and democratic participation.
Together, these methods form a layered strategy of hostile influence that targets societal cohesion rather than territorial control. The approach is calibrated to remain deniable, cumulative, and difficult to attribute, placing democracies under sustained psychological and informational stress.

Source:
The Record, Antoniuk, D., 2025. Russia-linked disinformation floods Poland, Romania as voters cast ballots. [online] Available at: https://therecord.media/russia-disinformation-poland-presidential-election

[Back to Table of Contents]

France-Based Russian Propagandist Amplifies Kremlin Narratives in Africa

Reporters Without Borders (RSF) provides a detailed report on Oleg Nesterenko, a self-styled Russian propagandist who has operated from France for nearly 30 years, spreading disinformation in around 30 countries, especially French-speaking African nations. Nesterenko disseminates pro-Russian narratives using classic propaganda techniques such as victim-perpetrator reversal, conspiracy theories, and fabricated images and videos. He denies documented war crimes like the Bucha massacre and portrays Russia as a peacemaker. Highly active on social media, he posts hundreds of messages daily, often without clear financial backing, making him a “small-scale entrepreneur” of disinformation.

The report also examines his questionable background, including involvement in Nazi memorabilia trading and his radical teaching activities at a Paris business school, highlighting his ideological shift. His family’s participation in propaganda efforts is also documented. This case exemplifies how complex and multifaceted hostile influence operations by individuals can be, illustrating the challenges of identifying and combating modern hybrid warfare and propaganda.

Source:
Reporters Without Borders (RSF), Froger, A., Mzalouat, H., 2025. Conquering minds: A deep dive into the media offensive of a France-based Russian propagandist. [online] Available at: https://rsf.org/en/conquering-minds-deep-dive-media-offensive-france-based-russian-propagandist

[Back to Table of Contents]

[The War in Ukraine] Lessons from Ukraine About Disinformation in the Cyber War

A report by the State Service of Special Communications and Information Protection of Ukraine (SSSCIP) underscores how influence and disinformation campaigns have become integral components of modern warfare, as evidenced during the first two years of the conflict in Ukraine. Russian cyber actors have systematically employed cyber operations to undermine media outlets and societal structures. In 2022 alone, over 200 attacks targeted Ukrainian news agencies, state media, and online platforms with the intent to disseminate propaganda, spread disinformation, and manipulate public opinion.

Beyond conventional tactics such as DDoS attacks and data destruction via wiper malware, fabricated media content was increasingly used to disrupt IT systems and to influence and intimidate the population. A notable example is the circulation of a deepfake video purporting to show the Ukrainian president calling for surrender, intended to sow panic and erode trust in public institutions. In 2023, these tactics evolved, with adversaries intensifying targeted “hack and leak” operations. Stolen documents and sensitive technical materials were primarily disseminated through Telegram channels to amplify psychological pressure and demoralize society.
These actions exemplify hybrid warfare, combining conventional military efforts with cyber and informational campaigns to destabilize physical and societal-political domains. The report emphasizes that cyberattacks and hostile influence operations are deeply interconnected within the framework of hybrid warfare, collectively seeking to destabilize Ukraine’s physical infrastructure alongside its social and political fabric.

Source:
RAND Corporation, Treyger, E., Williams, H. J., & D'Arrigo, A., 2025. Measuring the Reach of Russia’s Propaganda in the Russia-Ukraine War. [online] Available at: https://www.rand.org/pubs/research_briefs/RBA3450-2.html

[Back to Table of Contents]

AI-RELATED ARTICLES

AI Disinformation Is Polluting the Internet and Undermining Public Trust

TechXplore publishes an article from The Conversation explaining the “AI slop” phenomenon: low-quality, AI-generated content increasingly flooding social media platforms. AI slop includes automatically created texts, images, or videos that are often repetitive and emotionally charged to maximize attention, regardless of accuracy or quality. This content is commonly found on Reddit, LinkedIn, Facebook, and Twitter. Estimates suggest that over half of longer English-language posts on LinkedIn are AI-generated. Approximately 25% of all internet traffic is attributed to “bad bots”: automated programs that mimic human behavior to distribute this content widely.

Bad bots are used to share, like, and comment on AI slop, boosting the visibility and accelerating the spread of these posts. These automated spreaders pretend to be real users, helping to disseminate disinformation (deliberately false or misleading information) on a large scale. Disinformation driven by AI slop and bad bots is highly relevant to hostile influence because it manipulates political opinions, interferes with elections, and deepens societal divisions. The sheer volume and sophisticated distribution of AI slop make it harder for people to discern truth and hinder open, democratic discourse. Combating this problem is challenging as AI technology and bot networks grow increasingly complex. Without coordinated technological, legal, and societal measures, this manipulative content risks undermining democratic processes worldwide.

Source:
TechXplore, provided by The Conversation; Roozenbeek, J., van der Linden, S., Kyrychenko, Y., 2025. What is AI slop? Why you are seeing more fake photos and videos in your social media feeds. [online] Available at: https://techxplore.com/news/2025-05-ai-slop-fake-photos-videos.html

[Back to Table of Contents]

The Role of Narrative in Disinformation and Its Defense

TechXplore, with content provided by The Conversation, explores how storytelling shapes beliefs and how AI can be used to detect disinformation campaigns exploiting narrative techniques. Stories engage people emotionally and can be manipulated to influence public opinion, especially on social media. Cultural understanding is essential because symbols and meanings vary between cultures; without it, AI risks misinterpreting narratives. For instance, a phrase that is positive in one culture may be offensive in another. Training AI with diverse cultural knowledge improves its ability to spot manipulative content aimed at specific audiences. A significant challenge for AI is processing non-linear narratives: social media posts that jump back and forth in time or omit details.
While humans handle fragmented storytelling naturally, AI struggles to extract accurate event sequences and relationships. To tackle this, AI tools can analyze narrative structures, user personas (including social media handles), and timelines to identify coordinated disinformation efforts. These systems detect repeated storylines and suspicious posting patterns, assisting intelligence agencies, platforms, researchers, and users in flagging manipulative content.

Source:
TechXplore, provided by The Conversation; Finlayson, M., Islam, A.A., 2025. Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns. [online] Available at: https://techxplore.com/news/2025-05-weaponized-storytelling-ai-disinformation-campaigns.html

[Back to Table of Contents]

GENERAL REPORTS

Belarusian Disinformation Targets Poland Through Social Media

The Digital Forensic Research Lab (DFRLab) reports that Belarusian state media, specifically the Polish-language edition of Radio Belarus linked to the sanctioned entity Beltelradio, is spreading disinformation targeting Polish audiences on platforms such as TikTok, YouTube, X, and Facebook, actively circumventing European Union (EU) sanctions. As of May 7, 2025, these accounts had published over 7,790 posts and videos, garnering 16 million views and at least 542,000 engagements. This digital campaign aims to influence Poland’s 2025 presidential elections by undermining trust in democratic institutions, amplifying polarizing narratives, and either discrediting or supporting specific candidates.

Although platforms are legally obligated under the EU Digital Services Act (DSA) to block sanctioned content, these channels were reported in December 2024 but remained broadly accessible throughout the election campaign. Only TikTok acted shortly before the election by applying geofencing. Platforms have ongoing obligations under the DSA to identify and mitigate systemic election risks and illegal content, but enforcement has been insufficient in this case.

Source:
DFRLab, Gigitashvili, G., Hoffman, M., Nazari, S., 2025. Banned, yet broadcasting: Sanctioned Belarusian state media influencing the Polish elections. [online] Available at: https://dfrlab.org/2025/05/29/banned-yet-broadcasting-how-sanctioned-belarusian-state-media-is-influencing-the-polish-elections-on-social-media/

[Back to Table of Contents]

Georgia's FARA Law Promoted Through Kremlin-Backed Disinformation

As published by EU vs Disinfo, Georgia plans to implement its new Foreign Agents Registration Act (FARA) on June 1, 2025. The government, led by the Georgian Dream party, has launched a campaign to justify the law. This campaign includes elements of disinformation aligned with pro-Kremlin narratives. Officials claim the law aims to protect national sovereignty and prevent foreign-funded unrest. Critics argue the legislation is intended to restrict civil society and independent media, drawing comparisons to Russia’s foreign agent law, despite government statements that it is modeled after the U.S. FARA.

The disinformation campaign portrays NGOs and critical media as agents of foreign interference. Leaders of Georgian Dream and affiliated media promote theories of a “Global War Party” and allege that Western governments are paying 120 euros per protester to fuel unrest. These claims, including allegations that NGOs are plotting a revolution, are echoed by pro-Kremlin and Azerbaijani outlets, framing the law as a defense against Western destabilization.
Although the government emphasizes similarities with the U.S. FARA, observers note significant differences in legal protections and intent. More than 100 civic and media organizations warn that the law could be used to suppress dissent. Unlike the U.S. version, which targets espionage-like activities with judicial oversight, Georgia’s law applies more broadly. It lacks these safeguards, raising concerns that it may serve as an authoritarian tool under the pretext of transparency and national security.

Source:
EU vs Disinfo, 2025. Georgia’s Foreign Agents law: protecting sovereignty or attacking civil society? [online] Available at: https://euvsdisinfo.eu/georgias-foreign-agents-law-protecting-sovereignty-or-attacking-civil-society/

[Back to Table of Contents]

FRAMEWORKS TO COUNTER DISINFORMATION

Meta’s 2025 First Quarter Report

Meta's Integrity Report for Q1 2025 reveals that the company identified and disrupted three covert influence operations from Iran, China, and Romania, halting these campaigns before they reached significant authentic audiences. At the same time, Meta acknowledges scaling back proactive content moderation efforts, aiming to reduce enforcement errors by raising thresholds for automated content removal and relying more heavily on user reports for less severe policy violations. Despite this general reduction in proactive moderation, Meta emphasizes that targeted enforcement against serious threats, such as coordinated manipulation ("hostile influence"), remains a high priority. Additionally, the company continues enhancing its content moderation capabilities through advanced AI tools, notably large language models (LLMs), which it reports surpass traditional AI systems and human reviewers in accuracy and efficiency.

Source:
Meta, 2025. Integrity Reports, First Quarter 2025. [online] Available at: https://transparency.meta.com/de-de/integrity-reports-q1-2025/

[Back to Table of Contents]

[Download Report]

[Back to Table of Contents]

GLOSSARY

Information Operations
The employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered a part of Soft Warfare.

Hybrid Warfare
A known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.

Cyber Warfare
Commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.

Cyfluence Attack
A cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.

Soft Warfare
All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.
CIB
Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.

FIMI
The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity.

Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.

Digital Impact on Discourse (DID)
A non-hostile effort to influence discourse. Usually used in marketing articles; here, it is used to illustrate the opposite of the HIC.

Misinformation
False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.

Disinformation
Misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".

Inauthentic Behavior
Defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis.

Fake users
Also known as avatars: a generic term describing all types of users who are not legitimate social media users, i.e., they are bots, or are operated by humans but not under their real identity, or are operated by humans under their real identity but for the sole purpose of promoting an agenda that is not theirs.

Unidentified users
A generic term used to describe users on social networks who are allowed to keep their real identity undisclosed (as on Twitter, for example).

Sockpuppet accounts
A sock puppet or sockpuppet is an online identity used for deception.

Bots
Autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.

Repurposed accounts
Social media accounts that were hacked or purchased, then used for purposes different from the original ones.

Fake website
A website designed for fraudulent or scam activity, hiding its real purpose.

Deep Assets
Non-human deep cover assets, divided into two sub-categories. Deep avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily articles and videos and representation on social media platforms by users who identify as the website's representatives.

Real platforms
An actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally.

Astroturfing
Takes place when a coordinating actor creates a false impression of grassroots support.

Cyberbullying
When someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.

[Back to Table of Contents]
- Weekly Report: Cyber-based influence campaigns 12th - 18th of May 2025
[Listen to the Podcast]
[Report Highlights] According to Google's Threat Analysis Group (TAG) Q1 2025 bulletin, state-linked actors from Russia, Iran, and Nigeria conducted coordinated disinformation campaigns across YouTube, Google Ads, and other platforms to spread propaganda and manipulate public opinion. A small-scale experiment published by Global Witness concludes that TikTok's algorithm disproportionately promotes far-right political content in Romania before the presidential election. The DFRLab reports that pro-Kremlin networks appear to have supported Romanian far-right candidate George Simion in the 2025 presidential election by amplifying nationalist and anti-Western narratives across social media platforms. An article published by ABC highlights accusations of foreign interference and disinformation tactics in the Solomon Islands. China is implicated following the resignation of newly appointed Rural Development Minister Daniel Waneoroa from the Inter-Parliamentary Alliance on China (IPAC). As a CSIS (Center for Strategic & International Studies) report detailed, widespread disinformation campaigns significantly affected the 2025 Philippine midterm elections. According to EU vs. Disinfo, the Kremlin is using state-driven propaganda and religious narratives to glorify its war in Ukraine and obstruct peace efforts through disinformation.
> TABLE OF CONTENTS <
HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[Google] Google TAG Uncovers Global Disinformation Campaigns in Q1 2025
[TikTok] TikTok’s algorithm may lean right in Romania
STATE ACTORS
[Russia] White Lies, No Lines on the Kyiv Express
Operation Overload: Experts You Know, Voices You Don’t
VIGINUM: Russian Influence Campaign Storm-1516
Kremlin-Aligned Networks Target Romania’s 2025 Presidential Election
[The War in Ukraine] Kremlin Glorifies War Through Religion and Disinformation Campaigns
[China] China Accused of Disinformation and Political Pressure in Solomon Islands
CYFLUENCE ATTACKS
Cyber Attacks Persist After India-Pakistan Ceasefire
GENERAL REPORTS
The Impact of Disinformation on the Philippine Midterm Elections
CSIS Futures: The Collapse of Trust in a Connected World
GLOSSARY
DOWNLOAD SECTION
REPORT
HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[Google] Google TAG Uncovers Global Disinformation Campaigns in Q1 2025
Google’s Threat Analysis Group (TAG) identified and disrupted several coordinated influence operations (CIOs) across its platforms in the first quarter of 2025. These campaigns were primarily linked to state-sponsored actors from Russia, Iran, and Nigeria, who used YouTube channels, Google Ads accounts, and websites to spread content supportive of their respective governments and critical of opposing parties. Russia’s operations stood out, with over 1,300 YouTube channels taken down and numerous domains blocked. These were connected to the actor “Portal Kombat,” which pushed pro-Russian narratives in multiple languages. The TAG report reveals a shift in disinformation tactics, including the growing use of multilingual content and the focus on regional issues to sway public opinion. The involvement of platforms such as YouTube and Google News highlights the broad scope of these efforts. The findings reflect the ongoing threat posed by state-backed disinformation and the need for constant vigilance and action to protect the integrity of information ecosystems. Sources: GOOGLE Threat Analysis Group, Leonard, B., 2025. TAG Bulletin: Q1 2025. 
[online] Available at: https://blog.google/threat-analysis-group/tag-bulletin-q1-2025/ [Back to Table of Contents] [TikTok] TikTok’s algorithm may lean right in Romania Global Witness conducted a small-scale experiment in early May 2025 in Bucharest to assess TikTok’s political content recommendations ahead of Romania’s presidential election. Over two days, researchers created three new TikTok accounts on factory-reset phones to simulate users without prior history. Each account followed the official pages of both presidential candidates and watched around ten posts per candidate. Then, the For You feed was browsed for ten minutes—political posts were watched, while non-political posts were skipped. All political content shown was manually reviewed. The results suggest that approximately three-quarters of the political content promoted by TikTok favored far-right narratives or personalities. While the methodology was limited in scope and more exploratory than empirical, it raises concerns about the role of recommendation algorithms in amplifying extremist views. TikTok’s algorithm is designed to surface content based on user interaction and interests, a model that can inadvertently prioritize polarizing or provocative material. TikTok has rejected the findings, calling the study unscientific and misleading. However, the results echo similar concerns in other countries, including Germany. Under the EU’s Digital Services Act, platforms like TikTok are legally required to assess and mitigate risks to electoral integrity—including those posed by their algorithms. The European Commission is already investigating TikTok’s influence in the Romanian electoral context. Source: GLOBAL WITNESS, 2025. TikTok algorithm continues to push multiple times more far-right content to users ahead of Romanian election. [online] Available at: https://globalwitness.org/en/campaigns/digital-threats/tiktok-algorithm-continues-to-push-multiple-times-more-far-right-content-to-users-ahead-of-romanian-election/ [Back to Table of Contents] STATE ACTORS [Russia] White Lies, No Lines on the Kyiv Express NewsGuard’s Reality Check report reveals a targeted disinformation campaign launched in early May 2025 against several European leaders. The campaign centered on a blurry, low-resolution video that falsely claimed French President Emmanuel Macron, German Chancellor Friedrich Merz, and UK Prime Minister Keir Starmer were seen with cocaine during a train trip to Kyiv on May 9. The video in question shows a white napkin and a cocktail pick. Despite its harmless content, the footage spread rapidly across Russian state media, far-right platforms, and conspiracy websites. Kremlin-linked figures amplified the false claim, including Foreign Ministry spokesperson Maria Zakharova and RT editor-in-chief Margarita Simonyan. More than 100 articles pushing the hoax appeared across Russian media networks. The effort aimed to discredit key Western supporters of Ukraine, with a particular focus on Macron, who has emerged as one of the most vocal backers of Kyiv. This incident follows a broader Kremlin strategy of spreading fabricated drug-related claims, a tactic previously used against Ukrainian President Volodymyr Zelensky. High-resolution footage from reliable outlets like AFP and AP disproved the allegation, showing that the supposed “evidence” was entirely misleading. 
In addition, according to a Le Monde article , the Elysée broke from France’s traditional diplomatic communication by responding to the cocaine rumor with sarcasm and meme-style messaging on its official X account. This marked the first time the French presidency used such a tone, aiming to counter disinformation in real time and mirror Ukraine’s online tactics. Source: NEWSGUARD Reality Check, SADEGHI, M., & MAITLAND, E., 2025. It’s All a Blur: A Fuzzy Video Is Cited to Falsely Claim that European Leaders Snorted Cocaine on Their Way to Kyiv. [online] Available at: https://www.newsguardrealitycheck.com/p/its-all-a-blur-a-fuzzy-video-is-cited LE MONDE, AUDUREAU, W., 2025. How the Elysée adapted its communication style to tackle Macron cocaine rumor. [online] Available at: https://www.lemonde.fr/en/les-decodeurs/article/2025/05/16/how-the-elysee-adapted-its-communication-style-to-tackle-macron-cocaine-rumor_6741335_8.html [Back to Table of Contents] Operation Overload: Experts You Know, Voices You Don’t Findings from the Institute for Strategic Dialogue (ISD) reveal that the Russian disinformation campaign Operation Overload, first identified in 2023, has continued into 2025 with new tactics and targets. Between January and March, the campaign published at least 135 deceptive posts across platforms like X, Telegram, and Bluesky, focusing on Germany, France, and Ukraine. The operation aims to undermine democratic trust and weaken support for Ukraine by impersonating trusted sources. It uses AI-generated voices, fake headlines, and forged logos to mimic media outlets, academics, and law enforcement. In the first quarter of 2025, over 80 organisations were impersonated, with more than three-quarters linked to public institutions. While most posts saw little engagement, one video falsely claiming USAID paid celebrities to visit Ukraine reached over 4 million views, boosted by conspiracy accounts. The rest relied on bot networks to simulate visibility. Though its direct reach is limited, the campaign creates real-world harm, confusing users, damaging reputations, and draining fact-checkers' resources. Around 80% of identified posts remained online during analysis, increasing long-term risk. Earlier reports by groups like CheckFirst, Recorded Future, and the Digital Forensics Research Lab have also documented how Operation Overload floods social media with hoaxes to overwhelm journalists and institutions. The 2025 phase shows a continuation of this strategy—more refined, but equally focused on destabilisation through deception. Source: INSTITUTE FOR STRATEGIC DIALOGUE, 2025. Stolen voices: Russia-aligned operation manipulates audio and images to impersonate experts. [online] Available at: https://www.isdglobal.org/digital_dispatches/stolen-voices-russia-aligned-operation-manipulates-audio-and-images-to-impersonate-experts/ [Back to Table of Contents] VIGINUM: Russian Influence Campaign Storm-1516 The recent technical report by VIGINUM [For more background information, please find our recent blog post here ] provides an in-depth analysis of 77 documented influence operations attributed to the Russian campaign Storm-1516. The report outlines the campaign’s overarching goals—chief among them, discrediting the Ukrainian government to weaken Western aid—while highlighting its systematic targeting of political figures in France, Germany, and the United States, especially during election periods. 
Storm-1516 leverages deepfakes, staged videos, and a sophisticated distribution network involving burner and paid accounts. These narratives are amplified through pro-Russian networks and linked influence operations, including Project Lakhta and CopyCop. The investigation draws direct lines to individuals and groups aligned with the Russian state, including exiled former U.S. law enforcement officer John Mark Dougan and figures from the Prigozhin and Dugin networks. It further implicates Yury Khoroshenky, a suspected GRU Unit 29155 operative, as a coordinator and financier. VIGINUM concludes that Storm-1516 constitutes a clear case of foreign digital interference, posing an escalating threat to the integrity of European public discourse. Source: SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Back to Table of Contents] Kremlin-Aligned Networks Target Romania’s 2025 Presidential Election A report by the Digital Forensic Research Lab (DFRLab) reveals that pro-Kremlin networks in Romania and Moldova actively supported far-right presidential candidate George Simion during Romania’s 2025 election. These networks, which had previously criticized Simion, shifted to amplifying his nationalist and Eurosceptic messaging. Their efforts relied heavily on platforms such as Telegram and TikTok. The DFRLab notes that Simion’s rise was enabled by the same digital infrastructure that had earlier promoted Călin Georgescu, a candidate later disqualified by Romania’s electoral bureau and Constitutional Court. Georgescu was removed from the race due to violations of campaign regulations, including opaque financing and suspected Russian-backed online operations. Moscow has denied the allegations. The findings highlight the ongoing threat of foreign interference in democratic elections. State-aligned influence networks can manipulate public opinion and disrupt electoral processes. This interference is transnational, as shown by the involvement of Moldovan networks linked to oligarch Ilan Shor, which promoted anti-Western narratives. Source: DFRLab, OLARI, V., 2025. From Bucharest to Chisinau: How pro-Kremlin networks shaped Romania’s 2025 election. [online] Available at: https://dfrlab.org/2025/05/16/pro-kremlin-networks-shaping-romania-2025-election/ [Back to Table of Contents] [The War in Ukraine] Kremlin Glorifies War Through Religion and Disinformation Campaigns An article published by EUvsDisinfo on May 17, 2025, describes how, in the fourth year of its full-scale invasion of Ukraine, Russia continues to pursue long-term political and territorial goals. According to the article, this includes seeking control over large parts of Ukrainian territory and weakening the country’s statehood. Russian forces reportedly target Ukraine’s infrastructure, economy, agriculture, and industry, often without distinguishing between civilian and military objectives. EUvsDisinfo argues that the war is increasingly presented as a central element of Russia’s national identity. President Putin’s public speeches—such as the one delivered on May 9—are described as framing the war as a moral or even quasi-religious duty. The Russian Orthodox Church is said to support this framing. State-affiliated media reportedly portray war widows honoring fallen soldiers, framing their deaths as heroic sacrifices. 
According to the article, this narrative also marginalizes critical civil society voices, such as the Committee of Soldiers’ Mothers of Russia, which has faced increasing legal pressure. Regarding diplomatic efforts, the article notes that Russia has responded cautiously to initiatives such as a proposed 30-day ceasefire or direct talks between Presidents Zelenskyy and Putin. Russian state media instead emphasize the so-called “root causes” of the conflict. EUvsDisinfo interprets this as a strategy to delay or deflect negotiations. The listed demands, such as a NATO rollback, Ukrainian neutrality, and recognition of Russian claims over occupied territories, are seen not as realistic negotiation points but as political pretexts to prolong the conflict. Source: EUvsDisinfo, 2025. Celebrating a new war. [online] Available at: https://euvsdisinfo.eu/celebrating-a-new-war/ [Back to Table of Contents] [China] China Accused of Disinformation and Political Pressure in Solomon Islands According to a report by ABC News Australia, the resignation of Solomon Islands' newly appointed Minister for Rural Development, Daniel Waneoroa, from the Inter-Parliamentary Alliance on China (IPAC) has sparked political controversy and renewed concerns about foreign interference. Waneoroa stated that he stepped down to promote political stability and align with Prime Minister Jeremiah Manele's national direction. The Inter-Parliamentary Alliance on China (IPAC) is an international, cross-party coalition of legislators from over thirty countries. It advocates for a coordinated and values-based approach to China policy, focusing on human rights, democracy, and global security. IPAC has been outspoken on China’s policies in Hong Kong, Xinjiang, and Taiwan, and maintains ties with Taiwanese institutions. Civil society groups and IPAC suspect that Waneoroa’s resignation followed pressure from the Chinese embassy in Honiara. According to reports, embassy officials demanded a meeting and suggested his continued affiliation with IPAC could affect development funding. The Transparency Solomon Islands group condemned the alleged interference, warning that it could undermine national sovereignty and political stability. Waneoroa’s role in IPAC was particularly sensitive, given that the Solomon Islands established formal ties with China in 2019, ending diplomatic relations with Taiwan. The Chinese embassy denied any wrongdoing, calling the allegations baseless and reaffirming its support for Solomon Islands’ sovereignty. Source: ABC NEWS, DZIEDZIC, S. & AUMANU-LEONG, C., 2025. China is accused of foreign interference in Solomon Islands after minister quits international group. [online] Available at: https://www.abc.net.au/news/2025-05-12/china-embassy-solomon-islands-embroiled-foreign-interference/105280538 [Back to Table of Contents] CYFLUENCE ATTACKS Cyber Attacks Persist After India-Pakistan Ceasefire A recent post by CyberKnow states that despite the ceasefire announced several days ago between India and Pakistan, hacktivist groups remain active and continue to manipulate the information space. As noted in our last Weekly Review (more information available here), the primary threat now lies less in technical attacks and more in spreading misleading or exaggerated claims. One example is the widely circulated report that cyberattacks took 70% of India’s power grid offline. According to CyberKnow, this is part of a disinformation campaign to generate public uncertainty and confusion. 
DDoS attacks and website defacements remain the most commonly used methods. In addition, new groups emerge, either announcing their intent to get involved or already carrying out attacks. Several alleged data breaches have also been reported recently. However, CyberKnow clarifies that many cases are fake or based on publicly available information misrepresented as sensitive or compromised data. These developments highlight how information manipulation has become a key element of modern cyber conflict—shaping public perception. Source: CyberKnow (@Cyberknow20), 2025. India-Pakistan Cybertracker #2. [online] Available at: https://x.com/Cyberknow20/status/1922269417137942839 [Back to Table of Contents] GENERAL REPORTS The Impact of Disinformation on the Philippine Midterm Elections In an article , the Center for Strategic and International Studies (CSIS) outlines key developments from the Philippine midterm elections held on May 12, 2025. Voters elected officials at all levels, with particular attention on the 12 contested Senate seats—widely seen as a preview of the 2028 presidential race. Three major political blocs shaped the vote: the Marcos administration, the Duterte camp, and a resurgent liberal opposition. President Marcos’s slate secured six Senate seats, fewer than expected. Duterte allies won four seats, and former president Rodrigo Duterte was elected mayor of Davao City despite being detained by the International Criminal Court. The liberal camp exceeded expectations, with figures like Bam Aquino and Kiko Pangilinan returning to national office. Millennials and Gen Z comprised over 60% of registered voters and generated more than 70% of political engagement online. Astroturfing played a significant role, as the Philippine Center for Investigative Journalism found over 100 Facebook pages posing as news outlets, which spent over $860,000 on political advertising. The Duterte camp revived its disinformation networks to portray Duterte as a political victim. On platform X, over 30% of pro-Duterte accounts were reportedly fake. Meanwhile, the Marcos camp promotes anti-disinformation measures—though critics argue these also serve partisan interests. Source: CSIS, Quitzon, J., 2025. Philippines Votes 2025: A Power Shift in the Senate. [online] Available at: https://www.csis.org/analysis/philippines-votes-2025-power-shift-senate [Back to Table of Contents] CSIS Futures: The Collapse of Trust in a Connected World “ Trust Fails ” is part of the Scenarios That Could Define 2035 series by the Center for Strategic and International Studies (CSIS) . The format combines future-oriented narrative scenario development with expert commentary to explore potential global trends. This one, written by Jon B. Alterman with contributions from cybersecurity and AI experts, imagines a world where trust between individuals, institutions, and nations collapses by 2035 due to technological misuse. The authors highlight how technology has historically supported global trust through secure transactions, verified identities, and reliable communication. However, they warn that the same digital infrastructure is now being used to undermine confidence. Deepfakes, synthetic media, and AI-generated disinformation allow bad actors to falsify events, damage reputations, and disrupt public understanding. A key focus is the role of social media algorithms, which amplify emotionally charged misinformation while downplaying corrections. 
Malicious actors can generate large volumes of disinformation and optimize it through AI-driven testing. Once public confidence in digital information systems erodes, everything from journalism to government records becomes suspect. In terms of cyberspace, the scenario anticipates a future where identity verification becomes more difficult, cyberattacks become more disruptive, and digital platforms lose legitimacy. Economic systems slow as verification costs rise, and political polarization deepens. States with weaker digital infrastructure suffer most, facing exclusion from investment, cooperation, and secure information flows. The scenario is a stark warning: trust may become a casualty of technological advancement. [Click here to explore other scenarios]. Source: Center for Strategic and International Studies (CSIS), Alterman, J., Allen, G., Carter, W., Byrd, C., & Spaulding, S., 2025. Trust Fails. [online] Available at: https://features.csis.org/scenarios2035/trust-fails/ [Back to Table of Contents] [Download Report] [Back to Table of Contents] GLOSSARY Information Operations Is "the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of any sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation that seeks to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. 
The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents] DISCLAIMER Copyright and License of Product This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. 
Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect. [Back to Table of Contents]
- The Attack on Luxury Brands: A case study of the weaponization of the online ecosystem by China
The Attack on Luxury Brands: A case study of the weaponization of the online ecosystem by China At the CRC, we primarily study digital hostile influence in three realms: military, sociopolitical, and economic. A clear distinction is usually tricky to draw in our field since these areas frequently intersect and overlap. While the methodologies and even the perpetrators may be similar, the stakeholders and, therefore, defensive strategies can differ. A ministry of defense monitoring an enemy’s propaganda will have different objectives than a government body entrusted with protecting an election, a corporation, or an entire industry under attack. Hostile influence campaigns (HICs) have predominantly been associated with the military and socio-political realms, but a massive shift has occurred in recent years. Using the same mixed bag of fake users, sock puppets, fake websites, cyber-attacks, and real influencers – to name but a few – attackers now target companies and even entire industries. Adding to this phenomenon is the “offensive ecosystem”, which has led to a privatization of influence operations. At the forefront of this trend was the Internet Research Agency, a private company that ran influence operations on behalf of the Russian government. Many more companies like the SDA, which is linked to the ongoing Doppelganger campaign (For more information, please see our article: Doppelganger in Germany), followed suit. These influence mercenaries apply methodologies that in the past were predominantly in the domain of governments and readily sell their services to anyone for the right price. This leads to an explosive mix of malign governments, influence mercenaries, and unscrupulous influencers, which can easily set their crosshairs on anyone. China is steadily moving to the forefront in using the online ecosystem to strengthen its position and attack its perceived enemies. Our latest research examines a nefarious attack on the French luxury industry that should be a stark warning to all industries and corporations of how the threat landscape in the economic realm is rapidly evolving. This attack needs to be seen within the context of the ongoing geopolitical turmoil and how China is increasingly positioning itself against “the West” in general and the US more specifically. The weaponization of platforms such as TikTok and networks of influencers, paired with an increasing willingness of China to put them to use, is a deeply disconcerting trend. This time, “only” the luxury industry was targeted. It would be naïve not to see this as a warning shot and a flexing of the muscles of an increasingly unscrupulous adversary. I would also like to thank Intercept9500, which graciously took on a pivotal role in this joint research project and whose team stood by our side every step of the way. While we admittedly felt glamorous working on this project, it left us with a feeling of unease, but see for yourself. Florian Frank, Director of the CRC [Find the Report here]
- Codebreakers Hack Sepah Bank: Financial Motive or Influence Operation?
This week, we examine the recent developments involving the hacker group "Codebreakers" and the Iranian Sepah Bank, which have surprisingly not made international headlines yet. A Wikipedia entry addresses the incident in detail and compiles relevant sources. See this article for comprehensive coverage of how the leaks have affected discourse in Iran. Figure 1 - Codebreaker Announcement on WhatsApp (Courtesy of Meta) In the past, the United States Office of Foreign Assets Control (OFAC) has referred to Sepah Bank as the "financial backbone" of Iran’s ballistic missile program. (1) The bank is also believed to have close ties to the country’s military and security apparatus. It has been listed on the U.S. sanctions list for several years. (2) On March 25, 2025, the hacker group "Codebreakers" announced it had successfully breached Sepah Bank. The group then exfiltrated data and publicized the information in a "hack and leak" operation. One particularly notable aspect of this case is the group’s choice of distribution channels. In addition to using conventional leak platforms, they also turned to Instagram, offering the data to the highest bidder. Choosing Instagram appears to have been a deliberate decision, especially considering it is one of Iran's most widely used platforms. This suggests an effort to reach a broad Iranian audience. The previously unknown group presents itself through a simple, stereotypical profile image of a hooded hacker, most likely generated by AI. This kind of visual branding, combined with the choice of platform, seems inconsistent with a genuine ransom-driven motive. The fact that Codebreakers used Instagram to promote and share data from the breach is highly unusual, as is the large volume of information in the sample. Typically, financially motivated cybercriminals avoid obvious public platforms to conceal their tracks. Threat actors commonly use the deliberate public disclosure of stolen data online to increase pressure on the targeted organization by threatening to release sensitive information unless a ransom is paid. What is unusual in this case is using a WhatsApp group to auction off the information. Threat actors rarely use this messaging service as a platform for communication or data publication. This is partly because WhatsApp is linked to a phone number and because its parent company, Meta, is known to cooperate with law enforcement requests. Another notable aspect is that the initial posts were written exclusively in Persian. In such cases, threat actors usually communicate in English to reach a broader audience and maximize pressure on the victim. The choice of language in this instance may therefore suggest a regional focus or a politically motivated intent. Figure 1 - At first, mainly in Persian, WhatsApp (Courtesy of Meta) This strongly suggests that the goal was to generate public attention primarily within Iran. It indicates that this was not a typical financially motivated cyberattack but an influence campaign. The choice of target, the method of disclosure, and the language used in communication all suggest an intention to undermine trust in state institutions and publicly discredit Bank Sepah. The identity of the group behind the attack remains unknown. Possible actors include state-sponsored entities pursuing geopolitical interests, or exiled Iranian opposition groups with the technical capabilities to carry out such an operation and a desire to influence public perception inside Iran. 
However, it seems unlikely that an Iranian opposition group is responsible. These groups generally lack the resources to breach a high-security institution like Bank Sepah successfully. Even if they did, they would likely release the material in English to attract international attention and exert political pressure. The fact that the communication was conducted entirely in Persian indicates an operation explicitly directed at an Iranian audience. The financial motive also appears to be secondary. While a ransom of $42 million was demanded, the amount was so excessive that payment was never a realistic expectation. This suggests that serious negotiations were not the actual goal. Furthermore, the sample data released was huge. In typical extortion cases, only small data sets are published to demonstrate the validity of the breach. The decision to publish hundreds of thousands of records indicates that the primary aim was to expose as much information as possible. Figure 2 - Ransom Offer via WhatsApp (Courtesy of Meta) Some time after the sample files were published, the data was sold on Exploit[.]in at a much more realistic price. The platform is a Russian-language cybercrime forum that has been active since 2005 and operates on both the dark and the clear web. It is considered one of the oldest and most established platforms, used by professional cybercriminals to trade illegal services such as hacking, fraud, and ransomware-as-a-service (RaaS). The data has likely already been purchased multiple times. Figure 3 - Offer on Exploit[.]in A few days later, the Codebreakers announced that they had also compromised additional Iranian government systems and databases. Figure 4 - Announcement of more data (Courtesy of Meta) The group also released personal information of account holders from the database. This included the disclosure of individual banking details. Figure 5 - Disclosure of Personal Information (Courtesy of Meta) The release specifically highlighted account holders with a military background. This, too, may indicate that the operation could be part of an information campaign. Figure 6 - Disclosure of Personal Information (Courtesy of Meta) Notably, on April 10, 2025, the Codebreakers launched a video competition inviting participants to create short videos highlighting the hack, its potential implications, and the group’s Telegram channel. Prize money in cryptocurrency was offered as an incentive, suggesting an effort to maximize reach and impact. This, too, could be seen as a possible indication of state-backed involvement. Figure 7 - Announcement for Video Competition (Courtesy of Meta) Taken as a whole, the evidence suggests that this operation is less likely to be a traditional financially motivated attack and more likely a targeted influence campaign. The choice of language, the method of disclosure, and the nature of the demands all point to objectives that are not monetary. Instead, the operation appears aimed at reaching an Iranian audience, undermining trust in state institutions, and potentially stirring domestic unrest. The operation is ongoing and will continue to be monitored closely. [Footnotes] (1) U.S. Department of the Treasury, 2007. Iran’s Bank Sepah Designated by Treasury for Facilitating Iran’s Weapons Program. [online] Available at: https://home.treasury.gov/news/press-releases/hp219 [Accessed 16 Apr. 2025]. (2) U.S. Department of the Treasury, 2007. Bank Sepah – Sanctions Listing. 
[online] Available at: https://sanctionssearch.ofac.treas.gov/Details.aspx?id=25580 [Accessed 16 Apr. 2025]. DISCLAIMER Copyright and License of Product This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.
- Years of Deception: ClearSkySec’s Report on a Houthi-Yemeni Influence Network
[Introduction] This week on our blog, we are presenting a new report by ClearSkySec that highlights a compelling case of a long-term influence operation. (1) This operation has demonstrated remarkable flexibility in its strategy while maintaining a consistent underlying infrastructure. As always, our focus remains on analyzing hostile influence activities and their mechanisms of operation. [Background and Development of the Operation] ClearSkySec first uncovered the campaign in 2019, when it was initially directed against Israeli audiences. Between 2019 and 2022, the focus shifted towards the Gulf states, particularly Saudi Arabia and the United Arab Emirates. During this period, no targeted activities against Israel were detected. However, since late 2024, the campaign has again pivoted toward Israel. While the technical methods employed have changed little over time, a shift in content strategy has been observed: instead of disseminating fabricated news, the operators now copy and redistribute authentic content from reputable sources. [Key Findings] The campaign has remained continuously active since 2019, demonstrating consistency in its operational techniques. Thematically, it still revolves around gossip and entertainment content. While fabricated news dominated the early phase, the current strategy relies on real content from Israeli news outlets and social media. Content distribution is mainly done through fake profiles embedded in open Israeli Facebook community groups, seamlessly blending campaign messages into regular user activity. Many of the domains created in 2019 are still operational, pointing to a long-term strategy. The campaign offers content in both Hebrew and Arabic, addressing different regional audiences. However, while the Hebrew content appears relatively professional, the Arabic material often lacks authenticity. No indications of malware deployment or watering hole attacks have been observed to date. Although the campaign’s objectives remain unclear, the infrastructure can be repurposed at any time for spreading disinformation, inducing fear, or delivering malware. [Tactics, Techniques, and Procedures of the Campaign] The operation's fundamental tactic seems to be to blend invisibly into existing social media ecosystems, gaining users' trust over time. The operators likely do not aim for immediate disruption but rather for sustained, low-visibility presence, maximizing influence potential while minimizing the risk of detection. Campaign Methodology – Flowchart, page 3 (Courtesy of ClearSky) To achieve this, a series of well-established techniques is employed. Fake news websites, such as gool-live[.]com, were created to publish copied content sourced from legitimate Israeli news outlets and social media posts. In parallel, fake Facebook pages like “Celebrity News” were set up to post teaser texts and clickbait links leading back to these fake websites. Numerous fake Facebook profiles support this infrastructure, typically with generic English or transliterated Arabic names. These profiles show minimal personal activity and are primarily used to disseminate campaign materials across various groups. The procedures underpinning the operation are systematic. First, infrastructure is established: domains are registered using privacy protection services and frequently moved between Yemeni hosting providers to obfuscate ownership. Authentic content is then systematically harvested from reliable sources and published on fake websites. 
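This harvesting-and-republishing step lends itself to a simple detection heuristic. The following is a minimal sketch, not part of ClearSkySec's methodology, with all thresholds and file names chosen arbitrarily for illustration: an analyst could compare article text from a suspected copycat site against the original outlet using word-shingle overlap.

import itertools

def shingles(text: str, n: int = 5) -> set:
    # Build the set of n-word shingles for a piece of article text.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 0))}

def reuse_score(original: str, candidate: str, n: int = 5) -> float:
    # Jaccard similarity of word shingles; values close to 1.0 suggest
    # near-verbatim reuse of the original article.
    a, b = shingles(original, n), shingles(candidate, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical usage with placeholder strings; in practice the texts would be
# scraped from the legitimate outlet and from the suspected fake site.
original_article = "example celebrity story text as published by the legitimate israeli outlet"
suspect_article = "example celebrity story text as published by the legitimate israeli outlet"
if reuse_score(original_article, suspect_article) > 0.6:
    print("Likely verbatim content reuse - flag for manual review")

A real investigation would add crawling, deduplication across many candidate sites, and manual review; the point of the sketch is only that near-verbatim reuse is cheap to flag once suspect domains are known.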
The dissemination process begins through dedicated Facebook pages and continues via fake profiles infiltrating open Israeli Facebook groups. Once inside the groups, the operators share identical posts to maximize visibility and simulate organic engagement. Content is tailored linguistically, with Hebrew posts targeting Israeli users and Arabic content directed toward audiences in the Gulf states. Over time, the fake profiles are maintained with minimal but carefully timed activity to give the impression of organic growth. [Conclusion] The Houthi-Yemeni influence campaign demonstrates how long-term operations can adapt tactics while preserving stable infrastructure and objectives. It offers a concrete example of how influential activities are embedded within target communities over extended periods with minimal visibility. ClearSkySec’s current report provides insights for understanding the dynamics and persistence of such operations. We also recommend reviewing ClearSky’s earlier reports on related campaigns to gain a broader perspective on the tactics and methods used in hostile influence efforts. (2) Footnotes (1) ClearSky Security Ltd, 2025. Houthi-Yemeni Influence Campaign. [online] Available at: https://www.clearskysec.com/wp-content/uploads/2025/04/Houthi-Influence-Campaign-april-2025.pdf (2) ClearSky Security Ltd, 2019. Yemen-Based Disinformation Campaign Distributing Fake News in Israel and the Arab World. [online] Available at: https://www.clearskysec.com/yemen-disinformation-campaign/, and ClearSky points out similarities to an Iranian operation in 2019: ClearSky Cyber Security, 2018. Global Iranian Disinformation Operation. [online] Available at: https://www.clearskysec.com/global-iranian-disinformation-operation/ DISCLAIMER Copyright and License of Product This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. 
Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.
- Commercial Hostile Influence Networks: Anthropic’s Findings on Multi-Client Operations
[Introduction] In April 2025, the company Anthropic published an analysis of four coordinated influence operations, which it claims to have disrupted. They were carried out by a commercial provider ("Influence-as-a-Service") and could not be attributed to any specific state actor. (1) [Technical Implementation of the Campaigns] Claude, a large language model (LLM) developed by Anthropic, was used for content generation and managing digital identities. Based on predefined parameters, the model determined how controlled online personas should respond to posts—whether to comment, share, like, or ignore them. Operational Architecture, Courtesy of Anthropic (2) The coordination was managed through a JSON-based framework that defined each persona’s political alignment, language style, and behavioral patterns (a purely hypothetical sketch of what such a configuration might look like is given below). Claude also created multilingual content, generated prompts for image-generation models, and evaluated image outputs for narrative alignment. This enabled the automated, cross-platform, and consistent management of over 100 social media accounts. (3) [Campaign Targets and Regional Focus] The four documented operations were directed at different regions. In Europe, the focus was on energy-related narratives; in Iran, on cultural identity. In the case of the United Arab Emirates, the content emphasized economic strength while criticizing EU regulations. In Kenya, political development initiatives were positively portrayed. One of the operations targeted opposition figures in an EU member state while promoting Albanian political personalities. Despite differing objectives, the technical execution was uniform across all campaigns. (4) [Communication Strategy and Impact] According to Anthropic, only a limited viral impact was achieved. The goal was not rapid viral spread, but the long-term embedding of credible-appearing personas within ongoing digital conversations. The accounts were designed to appear authentic. To simulate human behavior, the personas responded to bot accusations from real users with pre-scripted sarcasm or deflection, reinforcing the impression of genuine interaction. (5) [Conclusion] LLMs will remain structurally vulnerable. Conventional filters are insufficient to detect long-term, coordinated manipulation. Preventing misuse remains one of the central challenges in managing the risks associated with scalable AI systems. The report underscores the growing role of private actors in political manipulation. Unlike state entities, these providers operate without ideological alignment or institutional oversight. They act opportunistically, adapt to client needs, and deploy general-purpose AI systems with standardized infrastructure to carry out large-scale influence operations, even with limited resources. The rising demand has given rise to an "Influence-as-a-Service" sector that offers modular, low-cost tools for outsourced political messaging while minimizing clients' attribution risks. This development has turned influence into a technically optimized, professionalized, scalable service detached from ideology. Anthropic’s findings show that individual operators can simultaneously serve conflicting geopolitical interests, accelerating an evolution in such campaigns' reach, precision, and impact. It seems that tactics are shifting: rather than chasing virality, actors increasingly prioritize the stealthy construction of long-lasting, embedded networks that gradually shape discourse through persistent and believable engagement. 
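To make the JSON-based persona framework and the comment/share/like/ignore decision step more concrete, here is a purely hypothetical sketch. Anthropic's report does not publish the operators' schema, so every field name, value, and function below is an assumption, and the LLM-driven decision is reduced to a static lookup for illustration.

import json

# Hypothetical persona definition; the real schema used by the operators is not public.
persona_config = json.loads("""
{
  "persona_id": "example-001",
  "language": "en",
  "political_alignment": "pro-energy-independence",
  "tone": "measured, occasionally sarcastic",
  "platforms": ["x", "facebook"],
  "engagement_policy": {
    "aligned_content": ["comment", "share", "like"],
    "opposed_content": ["comment"],
    "unrelated_content": ["ignore"]
  }
}
""")

def decide_action(persona: dict, post_stance: str) -> str:
    # Pick one allowed action for a post, given its stance relative to the
    # persona's alignment ("aligned", "opposed" or "unrelated"). In the
    # operation Anthropic describes, this choice was delegated to an LLM;
    # here it is a simple lookup purely for illustration.
    options = persona["engagement_policy"].get(f"{post_stance}_content", ["ignore"])
    return options[0]

print(decide_action(persona_config, "aligned"))    # -> "comment"
print(decide_action(persona_config, "unrelated"))  # -> "ignore"

Even a toy version like this shows how little bespoke engineering the orchestration layer requires once content generation is delegated to a general-purpose model, which is part of why the report treats "Influence-as-a-Service" as a low-cost, scalable offering. It is precisely this combination of low cost and stealth that the report's conclusion warns about.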
This strategic pivot makes influence operations harder to detect and disrupt. Addressing this challenge will require strong national policy frameworks and active cooperation from developers and providers of large language models, who play a key role in identifying misuse and enforcing safeguards at scale. Footnotes (1) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p.1 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf (2) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p.3 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf (3) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p.2 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf (4) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p.2 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf (5) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p.2 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf DISCLAIMER Copyright and License of Product This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. 
- Storm-1516: VIGINUM Report Insights
[INTRODUCTION]
This week, we examine the report "Analysis of the Russian Information Manipulation Campaign Storm-1516," published in May by VIGINUM. The timing of its release is unlikely to be coincidental. At the end of April 2025, the French government publicly confirmed what had long been suspected: repeated cyberattacks on French institutions can be directly attributed to Russian intelligence services.[1] What stands out is not the substance of the accusation (such activities have been documented since 2017) but the fact that France has, for the first time, made the attribution explicitly and publicly.[2] It seems likely that this attribution is part of a broader political communication strategy. The VIGINUM report should therefore be seen not only as a technical analysis but also as a political signal. It forms part of France's effort to publicly expose Russian influence operations and foster coordinated international awareness and response.

[SUMMARY]
The report documents 77 information operations attributed to the actor cluster Storm-1516. The comprehensive list of individual cases in the annex is particularly noteworthy.[3] This level of transparency sets a benchmark for structured and verifiable threat analysis. We reproduce the report's summary below before turning to selected highlights for a more in-depth examination:

Figure 1 – Summary, p. 3 of the report, Courtesy of VIGINUM

[INSIGHTS]
The role of "CopyCop" remains open to interpretation. VIGINUM draws a clear analytical line between Storm-1516 and CopyCop, two closely intertwined operations that are nonetheless treated as distinct entities. This distinction is significant, as other organizations, such as Recorded Future, have previously treated both clusters as identical or inseparably linked.[4] VIGINUM, by contrast, emphasizes that Storm-1516 is an autonomous Russian information manipulation set (IMS) responsible for strategic disinformation operations, while CopyCop functions as a technical dissemination network used by multiple Russian actors. This differentiation is evident in the report's executive summary, which refers to "close coordination and occasional overlap between Storm-1516 and other Russian IMS, including Project Lakhta and CopyCop."[5] The latter is not considered an internal component of Storm-1516 but a separate actor within the broader Russian influence ecosystem.

Another example of this operational overlap is the registration of the domain ensemble-24[.]fr in June 2024 by operators affiliated with CopyCop. The site impersonated the official campaign website of the French political party "Ensemble" and was used as part of a Storm-1516 election interference operation. Technical indicators, such as shared IP infrastructure, demonstrate the close connection.[6]

Figure 2 – Screenshot of websites impersonating the coalition "Ensemble", p. 6 of the report, Courtesy of VIGINUM

VIGINUM notes that the CopyCop network "is now used by several actors in the Russian information influence ecosystem"[7], an indication that it functions not as a proprietary tool of Storm-1516 but as shared technical infrastructure. The report further states that "Storm-1516 narratives are almost systematically amplified by fake news websites from the CopyCop network"[8], positioning the network less as an originator of content and more as a key amplifier in coordinated influence campaigns.
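Such infrastructure overlaps are among the simpler signals investigators can check. The sketch below is a rough illustration of the idea: it resolves a set of suspect domains and groups those that share a hosting IP. The domain names are placeholders invented for this example, and real analyses rely on passive DNS history, TLS certificates, and registration data rather than a single live lookup.

```python
import socket
from collections import defaultdict

# Placeholder domains for illustration only; real investigations use passive
# DNS and registration records, not a one-off resolution of live domains.
SUSPECT_DOMAINS = [
    "example-fake-news-site.org",
    "example-campaign-clone.fr",
    "example-regional-outlet.net",
]

def group_by_hosting_ip(domains):
    """Resolve each domain and group domains that share a hosting IP."""
    by_ip = defaultdict(list)
    for domain in domains:
        try:
            ip = socket.gethostbyname(domain)
        except socket.gaierror:
            continue  # unresolvable domains are skipped
        by_ip[ip].append(domain)
    # Only IPs hosting more than one suspect domain are of interest here.
    return {ip: hosts for ip, hosts in by_ip.items() if len(hosts) > 1}

if __name__ == "__main__":
    for ip, hosts in group_by_hosting_ip(SUSPECT_DOMAINS).items():
        print(f"{ip}: possible shared infrastructure across {hosts}")
```

Shared hosting on its own proves little; in the report it is one technical indicator among several pointing to coordination rather than attribution in itself.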
Although a clear chain of command between the actors is not defined, there are technical overlaps in domain infrastructure and indications of possible links to the Russian Federal Security Service (FSB).[9] These ambiguities point to an underlying structure that warrants further investigation into how collaboration within the network is organized.

Figure 3 – Evolution of the IMS Storm-1516, p. 8 of the report, Courtesy of VIGINUM

Storm-1516 strategically uses artificial intelligence, deepfakes, and forged content: standard tools in hybrid disinformation operations, but especially well documented in this case. While the campaign initially focused on discrediting Ukraine to erode Western support, it later shifted toward direct interference in democratic elections. One example is a fabricated audio recording falsely implicating Barack Obama in the attempted assassination of Donald Trump. In Germany, political figures such as Friedrich Merz, Robert Habeck, and Annalena Baerbock were targeted with fabricated allegations ranging from corruption to conspiracy theories involving sexual abuse and immigration.[10]

Figure 4 – Distribution Chain, p. 11 of the report, Courtesy of VIGINUM

Storm-1516's disinformation architecture unfolds across five tightly connected phases: Preparation, Distribution, Laundering, Amplification, and Relays.[11] This structured process reveals a methodical, scalable operation designed to obscure attribution while maximizing narrative reach and political effect.

In the Preparation phase, operators create and stage false content. They register domain names, fabricate personas, open burner social media accounts, and develop visual and textual materials. These setups, such as fake whistleblower identities, lay the technical and narrative foundation for later phases.

Distribution follows via three key vectors. First, disposable social media accounts post videos or claims under the guise of spontaneous leaks. Second, paid third-party actors, such as fringe influencers, carry the narratives to broader audiences. Third, the CopyCop network, comprising over 290 fake news sites operated by John Mark Dougan, serves as a direct publishing platform for core disinformation content.

Laundering is the third stage. Content originally seeded by Storm-1516 is republished in foreign media outlets, primarily in Africa and the Middle East. These reprints, often labeled as "sponsored" or "branded content," are meant to obscure the Russian origin and present the stories as independent, locally sourced reporting.

Amplification escalates the visibility of these narratives. CopyCop sites recycle the content across multiple domains, while paid social media accounts repeat and boost messaging. Comment sections on Western media outlets, particularly tabloids and far-right platforms, are manipulated to insert links and echo narratives. Telegram channels serve as distribution hubs, frequently replicating CopyCop material. X accounts affiliated with Project Lakhta, the BRICS Journalists Association, and Russia's Federal Security Service strategically broadcast the stories across languages and regions, reinforcing their legitimacy and volume.

In the final phase, Relays, other actors take up the narratives, amplifying their reach even further. This includes Russian disinformation networks such as Portal Kombat, RRN/Doppelgänger, and Mriya. Russian embassies, state broadcasters (RT, TASS, RIA), and media linked to the FSB, GRU, and SVR continue the distribution.
Belarusian outlets and Western pro-Russian actors, including influencers, fringe websites, and at times even elected officials, also help circulate these narratives.

We will skip further details and turn straight to attribution, where a detailed chart lays out the complex web of actors driving the operation.

Figure 5 – p. 21 of the report, Courtesy of VIGINUM

Section 4 of the report focuses on the key actors behind the Storm-1516 campaign. Among them are John Mark Dougan, who runs the CopyCop network, and ideological figures such as Aleksandr Dugin, Valery Korovin, and Leonid Savin, individuals long embedded in Russian think tanks and influence networks. Particularly noteworthy is Section 4.4, which introduces Yury Khoroshenky, a figure allegedly linked to GRU Unit 29155. According to VIGINUM, Khoroshenky was involved early on in the organization and financing of the campaign and maintained close ties to other core actors. While the report stops short of formal attribution, it emphasizes that the structure and behavior of Storm-1516 are consistent with state-directed influence operations. The mention of a potentially GRU-affiliated individual offers a crucial lead for further investigation.[12]

[CONCLUSION]
VIGINUM concludes that Storm-1516 meets the criteria of targeted foreign digital interference. Due to its technical sophistication, high adaptability, and strategic focus, the campaign represents a serious threat to the integrity of the digital public sphere in France and Europe. Especially during election periods and international crises, Storm-1516 has the potential to distort public discourse, undermine trust in democratic institutions, and deliberately fuel social polarization.[13]

Footnotes
(1) Agence nationale de la sécurité des systèmes d'information (ANSSI), 2025. Targeting and compromise of French entities using the APT28 intrusion set. CERTFR-2025-CTI-007. [online] Available at: https://www.cert.ssi.gouv.fr/cti/CERTFR-2025-CTI-007/ [Accessed 18 May 2025].
(2) Reuters, Irish, J., 2025. France accuses Russian intelligence of repeated cyberattacks since 2021. [online] Available at: https://www.reuters.com/world/europe/first-france-accuses-russian-intelligence-repeated-cyber-attacks-2025-04-29/ [Accessed 18 May 2025].
(3) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, pp. 29-33. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(4) Insikt Group®, 2024. Russia-Linked CopyCop Uses LLMs to Weaponize Influence Content at Scale. Recorded Future. [online] Available at: https://www.recordedfuture.com/research/russia-linked-copycop-uses-llms-to-weaponize-influence-content-at-scale [Accessed 18 May 2025].
(5) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 3. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(6) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 6. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(7) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 15. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(8) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 17. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(9) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 23. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(10) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 7. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(11) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, pp. 11-20. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(12) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 27. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(13) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 28. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
- Weekly Report: Cyber-based Influence Campaigns, 28 April – 4 May 2025
[Listen to the Podcast]

[Report Highlights]

As published by the Jamestown Foundation, Russian hockey star Alexander Ovechkin's record-breaking goal in the NHL was swiftly politicized by the Kremlin as part of its broader disinformation strategy.

An investigation by The Guardian, in collaboration with the anti-racism group "Hope Not Hate", has uncovered a campaign targeting Hong Kong pro-democracy activists in the UK.

According to an article by DFRLab, the rapid advancement of generative AI (GAI) has revolutionized how disinformation is created and distributed. Malign actors now use sophisticated tools like GANs, diffusion models, and multi-modal AI to fabricate highly realistic synthetic media.

BIRN Kosovo's comprehensive monitoring report of over 20 broadcast outlets and 4,000 online publications during the 2025 parliamentary elections revealed a troubling landscape dominated by disinformation, hate speech, and foreign influence.

According to an article published by The Conversation, disinformation has emerged as a significant concern as Australia approaches its federal election.

As published by EU vs. Disinfo, as Russia prepares for its Victory Day parade on May 9, the Kremlin is expected to ramp up disinformation targeting European nations by labeling them "Nazis" and "neo-Nazis."

> TABLE OF CONTENTS <

HOSTILE INFLUENCE CAMPAIGNS - STATE ACTORS
[Russia]
Russia Expanding Disinformation Tactics in Africa
Kremlin Uses Ovechkin's Record for Influence Campaigns
Russian Disinformation Campaign Targets Moldova's Pro-EU President
Historical Revisionism at the Core of Kremlin Disinformation
[China]
Suspected China-Linked Influence Campaign Targets Exile Activists in UK
AI RELATED ARTICLES
AI Media Advances and Disinformation Tactics
GENERAL REPORTS
Polling Misinformation May Mislead Australian Voters
Hostile Influence Threatening the Integrity of Kosovo's 2025 Election
Viral Disinformation Marks First 100 Days of Trump's Second Term
Social Media's Influence on the 2025 Canadian Election
Impact of Misinformation on the 2025 Australian Election
FRAMEWORKS TO COUNTER DISINFORMATION
EU Funds €5 Million to Strengthen Resilience
GLOSSARY
DOWNLOAD SECTION

REPORT

STATE ACTORS

[Russia]

Russia Expanding Disinformation Tactics in Africa

A recent EU vs. Disinfo article analyzes Russia's FIMI approach in Africa, described as a hybrid disinformation strategy that combines overt state media, covert networks, and local actors. Global outlets like TASS and RIA Novosti provide baseline narratives, while regional branches such as RT Africa and Sputnik Afrique adapt content specifically for African audiences. Russian embassies amplify official messaging, especially in South Africa and Kenya. Covert actors like the African Initiative, linked to Russian intelligence services, and the "Portal Kombat" (Pravda ecosystem) network are key in disseminating pro-Kremlin narratives. These entities operate through seemingly local websites and use automated republication to saturate regional information spaces with synchronized messaging.

Russia uses a two-way information laundering strategy in Africa: narratives are first localized through the African Initiative and its amplifiers, then recycled into Russian state media to create an illusion of independent validation. In addition, offline tools, such as events, media training, and partnerships with local groups (e.g., the African Initiative Association in Burkina Faso), reinforce these efforts on the ground.
The narratives focus on portraying the West, particularly France and the U.S., as exploitative and destabilizing, while framing Russia as a trustworthy partner and defender of African values. Familiar anti-Western narratives are tailored to local contexts, casting Russia as a natural ally. This strategy is not short-term propaganda but a persistent effort to reshape Africa's information ecosystem in Russia's favor.

Source: EUvsDisinfo, 2025. Echoes of influence: inside Russia's FIMI activities in Africa. [online] Available at: https://euvsdisinfo.eu/echoes-of-influence-inside-russias-fimi-activities-in-africa/

[Back to Table of Contents]

Kremlin Uses Ovechkin's Record for Influence Campaigns

As published by the Jamestown Foundation, Russian ice hockey star Alexander Ovechkin, forward for the Washington Capitals and the NHL's all-time top scorer, is strategically used by the Kremlin as part of its digital influence efforts. While his athletic success is celebrated globally, the Russian state capitalizes on his popularity through online platforms such as the Kremlin's official website, Telegram, and state-run media to advance nationalist messaging. Ovechkin's public support for President Vladimir Putin, including his creation of the "Team Putin" campaign, and his refusal to denounce Russia's invasion of Ukraine make him useful in shaping pro-regime narratives. His curated image, shared through Instagram and YouTube, and appearances on Kremlin-affiliated TV depict him as a symbol of patriotic strength and resilience. These digital channels allow the regime to export influence beyond its borders, especially targeting Russian-speaking communities abroad. The Kremlin's framing of Ovechkin's record as a "triumph of sports soft power", a term cited from domestic sources, illustrates how individual athletic success is woven into broader digital influence efforts designed to reinforce loyalty, distract from international isolation, and promote a unified national identity.

Source: The Jamestown Foundation, R. Arnold, 2025. Kremlin Propaganda Uses Alexander Ovechkin's Hockey Record to Promote Russian Nationalism. [online] Available at: https://jamestown.org/program/kremlin-propaganda-uses-alexander-ovechkins-hockey-record-to-promote-russian-nationalism/

[Back to Table of Contents]

Russian Disinformation Campaign Targets Moldova's Pro-EU President

NewsGuard reports that Maia Sandu, Moldova's pro-European President, has once again become the target of Russian disinformation efforts, this time in the context of the upcoming parliamentary elections in September 2025. During her re-election in 2024, Moldovan authorities reported a Russian hostile influence campaign (HIC) in support of a pro-Kremlin candidate. The current campaign is driven by the Russian operation "Matryoshka", known for producing seemingly authentic video forgeries and imitating credible media to spread false content. Since the election announcement, pro-Russian channels have circulated fake materials accusing Sandu of corruption and personal misconduct. One prominent example is a fabricated Vogue feature claiming Sandu is the most expensively dressed president in the world. Authentic images were used but falsely attributed to luxury brands with invented prices. According to NewsGuard, the clothing shown does not belong to the luxury segment but comes from a mid-range brand. Other forgeries include a manipulated Economist cover and a supposed BBC video accusing Sandu of embezzling public funds to support an alleged mistress.
So far, seven such fabricated posts have been identified on pro-Kremlin Telegram and X channels. Unlike previous campaigns focused on election manipulation, the current wave targets personal defamation, aiming to undermine Sandu's integrity, delegitimize her pro-Western stance, and deepen political polarization in Moldova.

Source: NewsGuard Reality Check, E. Maitland, 2025. Russian Disinformation Campaign Targets Moldova's Pro-EU President Ahead of Parliamentary Elections. [online] Available at: https://www.newsguardrealitycheck.com/p/russian-disinformation-campaign-targets

[Back to Table of Contents]

Historical Revisionism at the Core of Kremlin Disinformation

EU vs. Disinfo reports that the Kremlin is expected to intensify disinformation campaigns targeting European countries in the lead-up to Russia's Victory Day parade on May 9. These efforts increasingly rely on labeling critics as "Nazis" or "neo-Nazis." The overuse of the term "Nazi" has become a rhetorical tool to discredit opponents and rewrite history. Isolated incidents are exaggerated and framed as alleged state-level glorification of Nazi criminals. At the same time, Ukrainian commemorative initiatives related to World War II are portrayed as disrespectful or revisionist. This tactic serves as a political pretext to justify Russia's war of aggression against Ukraine and to delegitimize Western support for Kyiv.

Russian state media have long been distorting the history of World War II. They promote the narrative that Russia single-handedly defeated Nazism, while portraying Western democracies and former Soviet republics, including Ukraine, as Nazi collaborators. Historical facts, such as the participation of six million Ukrainian fighters against Hitler, are deliberately omitted. These narratives are repeatedly circulated through influence campaigns. In TV broadcasts, AI-generated images, and social media posts, EU and NATO leaders are depicted in Nazi uniforms or as grotesque caricatures. Kremlin-aligned figures like Vladimir Solovyov, Margarita Simonyan, and former President Dmitry Medvedev use radical rhetoric and extremist language to amplify these messages. Outlets like RT and Sputnik spread this content globally and in multiple languages.

Source: EUvsDisinfo, 2025. Russian revisionism ahead of 9 May: 'Modern Europe is reincarnated Nazism'. [online] Available at: https://euvsdisinfo.eu/russian-revisionism-ahead-of-9-may-modern-europe-is-reincarnated-nazism/

[Back to Table of Contents]

[China]

Suspected China-Linked Influence Campaign Targets Exile Activists in UK

An investigation by The Guardian, in collaboration with the anti-racism organization Hope Not Hate, has uncovered evidence of a suspected disinformation campaign targeting pro-democracy Hong Kong activists living in the United Kingdom. Following violent unrest in the UK in 2024, far-right channels on social media began inciting violence against asylum seekers. The focus soon shifted to prominent Hong Kong activists in exile. More than 150 posts from 29 accounts on platforms such as X and Telegram called for acts of violence against these individuals. The posts used derogatory language and spread false claims, accusing the activists of unlawful support for refugees and anti-national activity. These events may be part of an online influence operation linked to Chinese state actors. Many posts were written in broken English and appeared during Chinese working hours.
Some contained Chinese characters, typography, or references to figures associated with the Chinese government. In many cases, the activists were also doxxed: their home addresses and schedules were published. While no direct link to the well-known Spamouflage Dragon network could be confirmed, the patterns and methods closely resemble established state-backed disinformation efforts from China. The campaign may form part of the so-called "transnational repression strategy," aimed at silencing dissent beyond China's borders. Many of the targeted exiles had already faced arrest warrants, intimidation of family members, and bounties in Hong Kong. Chinese officials deny the allegations, but international cybersecurity analysts and law enforcement agencies warn that Beijing's strategy of fear and manipulation is advancing.

Source: The Guardian, T. Burgis & M. McClenaghan, 2025. Revealed: online campaign urged far right to attack China's opponents in UK. [online] Available at: https://www.theguardian.com/world/2025/apr/28/revealed-online-campaign-urged-far-right-to-attack-chinas-opponents-in-uk?CMP=Share_iOSApp_Other

[Back to Table of Contents]

AI RELATED ARTICLES

AI Media Advances and Disinformation Tactics

DFRLab takes an in-depth look at how generative AI is reshaping the dynamics of disinformation campaigns. Modern tools such as diffusion models, GANs, and multimodal systems enable the creation of highly realistic synthetic media. While GANs typically output finished images with limited user control, newer systems allow users to define image content with far greater precision. Telltale signs like asymmetrical jewelry or warped backgrounds are increasingly rare. Modern generative systems can accurately render symmetrical objects, such as glasses, or remove them entirely. With text-based prompts, users can adjust even the smallest visual elements. Additionally, many of these systems tend to favor idealized aesthetics and cinematic visuals, which enhance the believability of the output. As a result, traditional detection methods are becoming less effective.

These technologies are already being used in global disinformation efforts. DFRLab highlights the use of AI-generated profile images in pro-UAE campaigns during the COP28 summit and deepfake attacks in Brazil's 2024 elections. Multimodal AI generates content with realistic settings, branding, and contextual cues that strengthen credibility. The shift from crude forgeries to refined AI-generated visuals accelerates the reach and effectiveness of hostile influence campaigns, making them significantly more challenging for researchers and the public to detect.

Source: DFRLab, A. Chenrose & M. Rizzuto, 2025. The evolving role of AI-generated media in shaping disinformation campaigns. [online] Available at: https://dfrlab.org/2025/05/01/the-evolving-role-of-ai-generated-media-in-shaping-disinformation-campaigns/

[Back to Table of Contents]

GENERAL REPORTS

Polling Misinformation May Mislead Australian Voters

ABC News examines various types of surveys and how they may contribute to the spread of disinformation. One example involves Labor's claim that Peter Dutton was voted the worst health minister. It is based on an online survey conducted a decade ago with only 1,077 respondents from a medical publication's readership. The survey lacked scientific methodology and excluded subsequent health ministers, yet the results are being presented as representative of the entire medical community. Some surveys may also selectively present or omit results.
For instance, Clive Palmer, founder and leader of the United Australia Party (UAP), claimed Australians supported "Trump-like" policies, yet the actual results showed more people opposed them than supported them. Additionally, some polls use leading questions to influence public opinion, as seen in a survey praising an independent MP before asking respondents for their voting preference. The most manipulative form of polling is "push polling," where questions are designed to plant negative impressions rather than gather unbiased data. Some voters reported receiving biased surveys about independent candidates funded by "Climate 200," with the survey being abruptly cut off if they supported other parties.

Source: ABC News, M. Workman, 2025. From unscientific surveys to omitted results, polling can mislead voters. Here's how not to get fooled. [online] Available at: https://www.abc.net.au/news/2025-04-29/how-polling-can-mislead/105214486

[Back to Table of Contents]

Hostile Influence Threatening the Integrity of Kosovo's 2025 Election

The BIRN Kosovo report on the 2025 parliamentary election highlights significant hostile external influence, especially from Russia and Serbia. Russian state-funded media outlets such as Sputnik Serbia and RT Balkan played an essential role in disseminating disinformation that undermined the credibility of the election and Prime Minister Albin Kurti. These narratives falsely suggested that the West, particularly the US, was supporting Kurti to instigate an inter-ethnic conflict in Kosovo. Serbian media outlets echoed these messages, further inflaming tensions. Chinese state-controlled media also contributed, though to a lesser extent, spreading narratives critical of NATO and the West. These efforts aimed to destabilize Kosovo's political landscape and its relations with Western institutions.

Social media played a central role in amplifying these disinformation efforts. Political actors, sometimes with external backing, used platforms like Facebook, Instagram, and Twitter to bypass traditional media and directly influence public opinion. Anonymous accounts and bots were widely used to amplify misleading content. Ultimately, the lack of substantive policy debate, the unregulated use of disinformation, and opaque campaign financing severely compromised the election's informational integrity. BIRN's findings stressed the urgent need for more vigorous enforcement of media laws, the regulation of AI in political discourse, and strategic institutional responses to foreign and domestic disinformation.

Source: Balkan Investigative Reporting Network (BIRN), Gashi, K., Prebreza, V., Ahmeti, A., Jakupi, R., Kelmendi, V. & Nikolić, I., 2025. Hate speech and disinformation during the 2025 election in Kosovo. [online] Available at: https://www.eeas.europa.eu/sites/default/files/documents/2025/Hate%20Speech%20and%20Disinformation%20during%20the%202025%20Election%20in%20Kosovo%20EN.pdf

[Back to Table of Contents]

Viral Disinformation Marks First 100 Days of Trump's Second Term

The NewsGuard report on Donald Trump's second presidency identifies 20 viral false claims that garnered over 134 million views and nearly 4 million likes in the first 100 days of his administration. Many of these falsehoods reflect hostile influence, particularly through Russian disinformation campaigns. The Kremlin's "Matryoshka" campaign spread fake videos linking Trump to unfounded claims, such as imposing tariffs on uninhabited islands and banning pro-Ukrainian slogans.
These narratives were designed to undermine trust in Trump and the U.S. while stoking geopolitical tensions. Additionally, pro-Trump users spread false claims presenting misinformation as proof of Trump's support for the middle class, such as the claim that he abolished taxes on tips and overtime. This disinformation was primarily spread on platforms like TikTok and X to influence political perceptions. Satirical websites like "America's Last Line of Defense" also deliberately spread false information that was not recognized as parody, further exacerbating political polarization. These campaigns illustrate how hostile influence from foreign and domestic sources was used to manipulate public opinion and distort political narratives during the election period.

Source: NewsGuard Reality Check, Rubinson, S., 2025. 100 Days, 134 Million Views of False Claims About Trump. [online] Available at: https://www.newsguardrealitycheck.com/p/100-days-134-million-views-of-false

[Back to Table of Contents]

Social Media's Influence on the 2025 Canadian Election

The DFRLab report on the 2025 Canadian federal election shows that it was marked by disinformation and hostile foreign influence. Meta's news blackout under Bill C-18 created an information vacuum, contributing to the spread of hyperpartisan and misleading content. With Meta's fact-checking programs ended, false narratives flourished on platforms like Facebook and Instagram. AI-generated content and deepfakes, particularly targeting candidates such as Liberal leader Mark Carney, were also widespread. China was identified as a significant foreign actor, using platforms like WeChat to spread manipulated content aimed at the Chinese-Canadian community. Although domestic actors were dominant in the information landscape, these foreign influence campaigns contributed to distorting the political discourse. The Conservative Party further politicized the issue of foreign interference during the campaign. The lack of transparency on platforms hindered efforts to trace the origins of these disinformation campaigns. While the disinformation did not directly alter the election outcome, it contributed to political polarization and eroded public trust in the electoral process. The combination of domestic and foreign disinformation presents a growing challenge to the integrity of democratic processes, particularly in the digital age.

Source: DFRLab, Mashkoor, L., 2025. How social media shaped the 2025 Canadian election. [online] Available at: https://dfrlab.org/2025/04/29/how-social-media-shaped-the-2025-canadian-election/

[Back to Table of Contents]

Study Highlights the Impact of Misinformation on the 2025 Australian Election

According to a report by The Conversation, a national study in Australia found that at least two-thirds of Australians had already encountered false or misleading election content during the campaign's early stages. This content included distorted claims about candidates and policies, misinformation about voting procedures, and baseless allegations of election rigging. Some respondents had difficulty distinguishing fact from fiction, which led to increased uncertainty and impaired informed decision-making. The most common misinformation related to core policy issues, such as Medicare, housing, and climate change, and was often associated with figures like Donald Trump, Clive Palmer, and major political parties. Both social media and established news sources were flagged as sources of this misleading content.
While exposure to disinformation does not always result in a change of opinion, it can erode trust in democratic processes, especially among voters more susceptible to such misinformation. The study also drew parallels with global trends, including false narratives seen after the 2020 US elections, which can undermine trust in democratic institutions. The Australian Electoral Commission (AEC) has taken steps to combat disinformation, including a disinformation register, media partnerships, and public education initiatives. Most Australians recognize the severity of the issue and support proactive measures to counter disinformation. The study emphasizes that maintaining trust in democracy requires clear information and cooperation between institutions, the media, and voters.

Source: The Conversation, Caballero, L., 2025. Fake news and the election campaign – how worried should voters be? [online] Available at: https://theconversation.com/fake-news-and-the-election-campaign-how-worried-should-voters-be-255514

[Back to Table of Contents]

Appendix - Frameworks to Counter Disinformation

EU Funds €5 Million to Strengthen Resilience to Disinformation

The European Commission has announced the allocation of nearly €5 million in funding to combat disinformation and strengthen media literacy across the EU. The initiative includes two calls for proposals. The first call, worth €3.15 million, focuses on detecting harmful information manipulation and understanding how disinformation affects citizens, while developing strategies to strengthen societal resilience. The second call, valued at €1.6 million, supports independent fact-checkers by promoting verified content using creative media formats and partnerships with influencers, podcasters, and media outlets.

Source: European Commission, 2025. EU funding of €5 million to strengthen media literacy and resilience to disinformation. [online] Available at: https://digital-strategy.ec.europa.eu/en/news/eu-funding-eu5-million-strengthen-media-literacy-and-resilience-disinformation

[Back to Table of Contents]

[Download Report]

[Back to Table of Contents]

GLOSSARY

Information Operations: The employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.

Hybrid Warfare: A strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.

Cyber Warfare: Commonly understood as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.

Cyfluence Attack: A cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.
Soft Warfare: All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.

CIB: Meta's terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.

FIMI: The EU's terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity.

Hostile Influence Campaign (HIC): An information operation that seeks to influence a targeted audience for a hostile cause.

Digital Impact on Discourse (DID): A non-hostile effort to influence discourse. Usually used in marketing articles; here, it is used to illustrate the opposite of an HIC.

Misinformation: False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.

Disinformation: Misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".

Inauthentic Behavior: Defined by Facebook as "the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards". We have broadened this term to encompass all social media platforms, mutatis mutandis.

Fake users: Also known as avatars; a generic term describing all types of users who are not legitimate social media users, i.e., they are bots, or they are operated by humans but not under their real identity, or they are operated by humans under their real identity but solely to promote an agenda that is not their own.

Unidentified users: A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example).

Sockpuppet accounts: A sock puppet or sockpuppet is an online identity used for deception.

Bots: Autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.

Repurposed accounts: Social media accounts that were hacked or purchased and then used for purposes other than the original ones.

Fake website: A website designed for fraudulent or scam activity, hiding its real purpose.

Deep Assets: Non-human deep-cover assets, divided into two sub-categories. Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.).
Deep Platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily articles and videos, represented on social media platforms by users who identify as the website's representatives.

Real platforms: An actual entity (company, NGO, website, etc.) based on real, attributed people doing real work. For example, a private-sector influence research center that publishes research on influence operations, either globally or locally.

Astroturfing: Takes place when a coordinating actor creates a false impression of grassroots support.

Cyberbullying: When someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.

[Back to Table of Contents]

DISCLAIMER

Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User's use of the Product or violation of these terms.

Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.