- Countermeasures and Defense Against AI-Driven Disinformation
Artificial intelligence (AI) is transforming the world at a rapid pace, increasingly shaping various aspects of life. With its advancements come new possibilities and challenges that have far-reaching implications for society, the economy, and communication. This technology's evolution raises fundamental questions beyond technical innovation, redefining our understanding of information, interaction, and control. In an ever-changing digital landscape, it remains uncertain how AI will continue to develop and what role it will ultimately play.
- Office of the Director of National Intelligence (ODNI) Report on Post-Election Day Threats to U.S. Elections
Authors: Ari Ben Em, Florian Frank; Editor: Leiv Schink
The Office of the Director of National Intelligence (ODNI) has released a report focused specifically on post-Election Day threats. This report is especially relevant given the slow vote-counting process in the U.S., which creates opportunities for adversaries to spread disinformation and undermine confidence in the electoral system.
- Foundation for Defense of Democracies (FDD) Report on Spamouflage Infrastructure
The FDD report uncovered over 25,000 inauthentic Facebook pages, many of which displayed characteristics associated with the Chinese influence campaign known as “Spamouflage”. The operation leverages a vast network of inauthentic social media entities to conduct influence campaigns, often targeting U.S. domestic issues and aiming to manipulate political perceptions. These pages often mirrored the names or identities of previously flagged Spamouflage accounts, suggesting continuity in strategy and branding for influence operations. While most fake pages are currently inactive, some are involved in financial scams.
- Parsian Afzar Rayan Borna, a Key Instrument of Iran’s Disinformation Campaigns
A new report by Iran International raises allegations against the Iranian IT company Parsian Afzar Rayan Borna. According to the report, the company plays a key role in Iran's cyber strategy – engaging in disinformation campaigns, surveillance of regime critics, and international cyberattacks.
- CIB Operation Targeting Western Automotive Brands
The Brinker platform has detected a YouTube-focused campaign targeting the automotive industry. The campaign includes dozens of English-language, car-themed YouTube channels that claim to share objective car reviews and industry news. A deeper analysis of the videos promoted across these channels reveals a sophisticated strategy. The videos, which ostensibly focus on car reviews and industry-related content, subtly foster negative sentiment toward Western automotive brands. [Download Full Report here]
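The report does not disclose Brinker's detection methodology. As a purely illustrative sketch, the snippet below shows one way persistent per-brand sentiment skew across many channel transcripts might be quantified; the transcripts, brand names, and keyword lexicon are invented for the example.

```python
# Illustrative sketch only; not the Brinker platform's actual method.
# Hypothetical transcripts and a toy keyword lexicon are used to show how
# a systematic negative skew toward certain brands could be surfaced.
from collections import defaultdict

POSITIVE = {"reliable", "impressive", "excellent", "innovative"}
NEGATIVE = {"recall", "breakdown", "overpriced", "outdated", "unsafe"}

def sentiment_score(text: str) -> int:
    """Crude lexicon score: +1 per positive keyword, -1 per negative keyword."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Hypothetical (video transcript snippet, brands mentioned) pairs.
videos = [
    ("constant recall issues make this model feel outdated and overpriced", ["BrandA"]),
    ("an impressive and reliable drivetrain with innovative features", ["BrandB"]),
    ("another breakdown report, owners say it is unsafe at speed", ["BrandA"]),
]

totals, counts = defaultdict(int), defaultdict(int)
for transcript, brands in videos:
    score = sentiment_score(transcript)
    for brand in brands:
        totals[brand] += score
        counts[brand] += 1

for brand in totals:
    # Persistently negative averages for one group of brands across many
    # channels would be the signal an analyst follows up on manually.
    print(brand, totals[brand] / counts[brand])
```

In practice an analyst would swap the toy lexicon for a proper sentiment model and compare score distributions for Western versus non-Western brands over time, but the basic aggregation logic is the same.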
- Meta Takes Down Global CIB Operations from Iran, China, and Romania
I. INTRODUCTION
Meta’s latest Adversarial Threat Report details the detection and dismantling of three global Coordinated Inauthentic Behaviour (CIB) campaigns attributed to actors based in China, Iran, and Romania. [1] Notably, the report is only twelve pages long, making it the shortest instalment of Meta’s Adversarial Threat series to date. This trend toward brevity may suggest a deprioritisation of transparency or a strategic shift in how Meta communicates threat data.
Figure: Quarterly adversarial threat report page count over time
While concise, the report still illustrates that Meta remains actively involved in addressing hostile digital influence efforts.
II. CHINA
Meta identified a network originating in China that conducted hostile influence operations targeting Myanmar, Taiwan, and Japan. The operation leveraged fake accounts (often featuring AI-generated profile photos) to pose as local users, manage pages, disseminate content, and simulate organic engagement. The campaign sought to shape public opinion in favour of Chinese political interests. While it criticized civil resistance movements and voiced support for the ruling junta in Myanmar, content targeting Japan attacked the government and condemned its military alliance with the United States. Anonymous posts in Taiwan spread allegations of corruption among political and military leaders, creating the false impression that these narratives emerged organically from local users. [2] Meta links this campaign to two earlier China-based operations, previously dismantled in September 2022 [3] and February 2024 [4].
III. IRAN
Meta reports that it disrupted an Iranian CIB network at an early stage, aided by automated detection systems and intelligence shared by Google’s Threat Intelligence Group. Operators often impersonated female journalists and spammed popular hashtags (e.g., #starbuck, #instagram) in an attempt to blend into or overwhelm trending conversations. Posts in Azeri covered current events such as the Paris Olympics, calls to boycott American brands, and criticisms of U.S. foreign policy, President Biden, and Israel’s military actions in Gaza. The campaign also referenced events such as Israel’s 2024 attacks, possibly referring to the ongoing conflict escalation. [5]
IV. ROMANIA
The most extensive campaign by scale originated from Romania, involving approximately 20,000 accounts and $177,000 in ad spending across multiple platforms. Fake accounts posed as local Romanian users, posting about sports, travel, and regional news. Cross-platform activity on YouTube, X, and TikTok reinforced these personas. According to the report, the operators demonstrated strong operational security (OpSec), using proxy IPs and other measures to obscure coordination and origin. Meta reports dismantling the operation before it attracted a substantial, authentic audience. [6] While Meta did not directly link this campaign to electoral interference, its timing coincides with a broader surge in hostile digital influence activity surrounding the Romanian elections. See our report here for a deep dive into the efforts to influence the Romanian election. [7]
Footnotes: (1) Meta, Adversarial Threat Report: May – First Quarter 2025, 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting . [Accessed June 09, 2025]. (2) Meta, Adversarial Threat Report: May – First Quarter 2025, 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting . [Accessed June 09, 2025].
(3) Meta, 2022. Removing Coordinated Inauthentic Behavior From China and Russia. [online] Available at: https://about.fb.com/news/2022/09/removing-coordinated-inauthentic-behavior-from-china-and-russia/ . [Accessed June 09, 2025]. (4) Meta, Adversarial Threat Report: Feb – Fourth Quarter 2023, 2024. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting . [Accessed June 09, 2025]. (5) Meta, Adversarial Threat Report: May – First Quarter 2025, 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting . [Accessed June 09, 2025]. (6) Meta, Adversarial Threat Report: May – First Quarter 2025, 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting . [Accessed June 09, 2025]. (7) Cyfluence Research Center (CRC), The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts, 2025. [online] Available at: https://www.cyfluence-research.org/post/the-romanian-presidential-elections-2024-analysis-of-information-operations-and-long-term-inf [Accessed June 09, 2025].
- Visibility as Victory: The Strategic Logic of Doppelgänger
I. INTRODUCTION
Doppelgänger is considered one of the most prominent Russian digital influence operations since the start of the war against Ukraine. The campaign gained notoriety for using deceptively realistic clones of Western news and government websites—so-called mirror sites—with current estimates ranging from 70 to 700. These sites were used to spread disinformation, primarily pro-Russian, anti-Ukrainian, and anti-Western narratives. Dissemination occurred through a multi-tiered system involving fake accounts, automated bot comments, and paid online advertising. The operation was coordinated primarily by the Social Design Agency (SDA) in cooperation with entities such as Structura and Dialog. According to U.S. authorities, between January and April 2024 alone, the network disseminated roughly 40,000 content items and 33.9 million comments globally. Strategic targets included the 2024 European Parliamentary elections and the Paris Olympic Games. [1]
In this week’s blog post, we look at key takeaways from a major new report that challenges what we thought we knew about the so-called Doppelgänger campaign. "Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency" (2025) digs into over 3,100 leaked internal files from the SDA, a major player in Russia’s influence operations, and offers a fresh perspective on how this organisation works. Given the report's breadth, we do not aim to cover it comprehensively, but instead highlight a few key findings that shed new light on current developments in digital hostile influence. The full report is highly recommended for anyone seeking to understand the shifting strategies, infrastructures, and evaluation metrics that define the contemporary landscape of state-linked information warfare.
II. KEY CONCLUSIONS
The actors involved—above all the Social Design Agency (SDA) and its network—understand the information operations analysed in the report as part of a state-sanctioned information war. Their stated objective is the expansion of a far-reaching digital propaganda ecosystem that has operated under Kremlin direction since 2022. With an estimated annual budget of €600 million, this system targets informational environments in Russia, Ukraine, the West, and increasingly, the global public sphere. [2]
Figure 1: Image used by SDA to demonstrate its campaign goals; Courtesy of Psychological Defence Research Institute at Lund University [3]
SDA is overseen by Sergei Kiriyenko and his team within the Presidential Administration. Together with related entities, SDA does not function as a conventional state media organ but as an outsourced actor in a state-funded, semi-private model of digital propaganda. Within this system, it competes with legacy structures like RT or Sputnik for funding and political relevance. [4]
Figure 2: Network of the Russian propaganda machine [red boxes added by the authors]; Courtesy of Audinet and Gérard [5]
Figure 3: A representation of regular participants in Presidential Administration meetings at which strategic decisions about SDA operations were made; Courtesy of Psychological Defence Research Institute at Lund University [6]
Strategic directives come directly from Kremlin officials—most notably Alexander Kharichev—and are implemented with little discussion or internal review. Campaigns are often deployed within 48 hours. Sofia Zakharova handles operational management, including performance metrics and budget oversight.
Leaked documents show that media visibility, regardless of tone, is treated internally as evidence of effectiveness and is used to justify additional funding. [7] Although SDA and Structura are separate entities, they operate in practice as a single, integrated unit. [8] Crucially, neither Western sanctions nor takedown efforts have disrupted this model. Instead, these external pressures have accelerated its evolution and sharpened its competitive edge within Russia’s influence ecosystem.
SDA played a central role in the Doppelgänger campaign. The report draws on a trove of leaked documents supplemented by publicly available data on affiliated entities such as Structura, the Internet Development Institute (IRI), Dialogue, and Dialogue Regions. The authors also examine a broader set of related operations, including Matryoshka, Overload, and Undercut. Several other campaigns with similar architecture and techniques appear to originate from the same organisational environment. While not all were operated directly by SDA, they likely relied on shared infrastructure or sub-licensed services, and can be understood as part of the same systemic counter-campaign. [9]
The authors issue an explicit methodological critique of prior analysis, stating: “Doppelgänger has been over-hyped and under-analysed, leading to fundamental misunderstandings about its purpose and scope… Our approach has focused on capabilities, foregrounding SDA's activities' strategic and operational components.” [10]
One of the report’s central arguments is that Western interpretations of Doppelgänger have been fundamentally flawed. What has often been described in media and policy discourse as a coherent Russian disinformation campaign is, from the authors’ perspective, only a technical delivery tactic within a much larger strategic framework. Mirror sites for content dissemination—often quickly detected and taken down—are not considered a standalone operation but one among many interchangeable methods. [11]
The SDA’s activities, the report argues, should be understood as strategic counter-campaigns: a hybrid model that blends advertising logic, political campaigning, intelligence tradecraft, and military strategy. Within this framework, the exposure of Doppelgänger was not considered a failure, but a success. With over 350 international media reports referencing the campaign, SDA recorded a high return on investment. Visibility became the metric: attention from Western media, politicians, and fact-checkers was used internally to demonstrate effectiveness, ultimately justifying additional funding and responsibility from the Kremlin. [12]
Doppelgänger illustrates a fundamental insight: Russian influence operations do not measure success by persuasion or approval, but by narrative penetration. The core objective is to ensure the presence of Russian thematic lines within Western discourse, regardless of how they are framed, received, or challenged. [13]
Figure 4: Core objective of the SDA in short; Courtesy of Psychological Defence Research Institute at Lund University [14]
What works is amplified; what fails to generate resonance is discarded. The Social Design Agency (SDA) draws no qualitative distinctions between disinformation narratives, distribution channels, or actors, whether through bots, fake profiles, paid advertising, or influencers. Everything serves the goal of penetrating the Western information environment.
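To make the "visibility as victory" logic concrete, here is a minimal, hypothetical illustration. It is not the SDA's actual internal model, which the report does not reproduce in full; it simply contrasts a persuasion-style metric, under which hostile coverage counts for nothing, with a visibility-style metric, under which every mention counts, including debunks.

```python
# Illustrative sketch only, with invented outlets and reach figures.
coverage = [
    {"outlet": "OutletA", "reach": 2_000_000, "stance": "debunk"},
    {"outlet": "OutletB", "reach": 500_000,   "stance": "neutral"},
    {"outlet": "OutletC", "reach": 1_200_000, "stance": "debunk"},
]

# Persuasion-style metric: only favourable coverage counts.
persuasion = sum(item["reach"] for item in coverage if item["stance"] == "supportive")

# Visibility-style metric: any mention counts, regardless of tone.
visibility = sum(item["reach"] for item in coverage)

print(persuasion)   # 0 -> by a persuasion yardstick the campaign "failed"
print(visibility)   # 3700000 -> by a visibility yardstick it "succeeded"
```

Scored the first way, a campaign that attracts only fact-checks and exposés is worthless; scored the second way, the same campaign looks like a high-return investment, which is the internal logic the leaked documents describe.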
A central objective, therefore, is not merely to circumvent Western fact-checkers, analysts, and investigative journalists but to strategically reframe their efforts. Within SDA, exposés and debunkings are interpreted not as setbacks, but as mechanisms that enhance the campaign’s reach. To this end, the agency has developed internal evaluation models and success metrics prioritising visibility in public discourse, regardless of content, as the primary benchmark. [15]
Figure 5: Data from reports analysing the degree of penetration of "Russian propaganda" in the information environments of the leading European countries; Courtesy of Psychological Defence Research Institute at Lund University [16]
The Doppelgänger campaign should not be viewed as an isolated incident, but rather as a symptom of a broader structural shift within the Russian influence apparatus. Sanctions and restrictions targeting traditional Russian state media such as RT and Sputnik have accelerated this transition by forcing the Kremlin to rely more heavily on informal, decentralised channels. However, this shift is not unique to Russia. It reflects a wider transformation across the global information operations landscape, where non-state and commercial actors are increasingly central. At the heart of this model are professionally organised networks of private and formally independent actors, equipped with significant resources and high-level technical expertise. The Social Design Agency (SDA) exemplifies this new generation of digitally trained propagandists, who systematically leverage social and digital media to manipulate societal dynamics at scale.
III. LESSONS FROM THE DOPPELGÄNGER CASE
The report’s authors believe that the response to the “Doppelgänger” campaign exposed substantial structural and analytical deficiencies, highlighting the need for urgent corrective action. A central issue lies in the conceptual orientation of many current analyses. The prevailing focus on so-called TTPs—tactics, techniques, and procedures—has led many investigations to become bogged down in technical detail, without grasping the threat actors' overarching strategic capabilities and objectives. The report therefore advocates for a capabilities-based analytical framework that enables a holistic understanding of adversarial strategies and allows for targeted disruption. TTP-based methods remain essential, for example, in the forensic examination of specific campaign components, but they are insufficient for identifying systemic patterns or long-term decision-making processes. [17]
Another key criticism concerns the lack of coordination within the counter-FIMI community. Responses to “Doppelgänger” were largely uncoordinated: more than 60 technical reports were produced independently, without a shared analytical baseline or strategic alignment. This fragmentation, the authors argue, ultimately made it easier for the adversarial network to maintain its operations. The case of Meta exemplifies this: despite playing a key role in uncovering the network and publishing lists of mirror sites, internal SDA documents show that the platform continued to be systematically used for advertising, fully aware of its exploitability. The report references an estimated $338,000 in ad revenue generated within the EU alone since sanctions were imposed in July 2023, which the authors call scandalous. [18]
The report also critically examines the common practice of assigning catchy names to influence operations, whether by SDA or counter-influence actors.
Labels such as “Doppelgänger,” “Undercut,” or “Stormkiller” may capture attention but risk reducing complex structures to seemingly isolated operations and suggesting false distinctions. In reality, the authors stress, these are interlinked activities embedded within an adaptive and networked propaganda model. Branding—also on the side of investigators—may inadvertently contribute to strategic oversimplification. [19] The report finds that the Social Design Agency (SDA) benefited from the public exposure of the “Doppelgänger” campaign. Media and political attention were internally assessed as evidence of effectiveness and used to justify increased funding from Kremlin sources. According to the authors, this visibility was not interpreted as a setback, but as confirmation of the campaign’s strategic success. The report also raises the possibility that parts of the leaked material may have been released deliberately—potentially even by SDA itself—to validate its impact and position within Russia’s influence ecosystem. In this context, public exposure becomes a calculated part of the operation, potentially contributing to its consolidation rather than disruption. [20] The report clarifies that this is not a blanket critique of investigative work, fact-checking, or digital counterstrategies. On the contrary, the effectiveness of the counter-FIMI community has made it a growing target for strategic exploitation. Therefore, it is all the more urgent for this community to reassess its role critically, act more strategically, coordinate more coherently, and develop more robust models for analysis and intervention. Only then can the next generation of Russian influence operations be effectively detected, understood, and countered. [21] Footnotes: (1) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.14. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (2) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.14. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (3) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.94. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (4) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.41. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (5) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.40. 
[online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . Source of Image: Audinet, M., & Gérard, C., 2024. Under the Radar: Crisis, Reorganization, and Clandestinization in Russia’s Ecosystem of Information Influence after the Invasion of Ukraine. Academic Policy Paper Series, no. 2, November 2024. [online] Available at: https://therussiaprogram.org/russia_information_influence . [Accessed May 30 2025]. (6) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.41. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (7) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, pp.42-43. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (8) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.20. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (9) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.20. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (10) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.14 [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (11) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.14. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (12) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.15. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (13) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.77. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (14) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. 
Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.16. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (15) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.21. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (16) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.76. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (17) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, pp.189-190. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (18) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.190. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (19) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.190. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (20) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, pp.36-37. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025]. (21) Psychological Defence Agency, Pamment, J. & Tsurtsumia, D., 2025. Beyond Operation Doppelgänger: A Capability Assessment of the Social Design Agency, p.191. [online] Available at: https://mpf.se/psychological-defence-agency/publications/archive/2025-05-15-beyond-operation-doppelganger-a-capability-assessment-of-the-social-design-agency . [Accessed May 30 2025].
- Weekly Report: Cyber-based influence campaigns 26th of May - 1st of June 2025
[Listen to the Podcast] [Report Highlights] Global Witness's investigation ahead of Poland’s June 2025 presidential runoff revealed that TikTok’s algorithm disproportionately amplifies far-right content, posing a disinformation risk to electoral integrity. Meta’s Q1 2025 Adversarial Threat Report reveals that coordinated disinformation networks from China, Iran, and Romania used fake accounts and cross-platform tactics to manipulate public discourse and promote state-aligned narratives. As published by EU vs. Disinfo , Georgia prepares to implement its new Foreign Agents Registration Act (FARA) on June 01st, 2025. The government, led by the Georgian Dream party, has launched a wide-reaching campaign to justify the law. Reporters Without Borders (RSF) reports that Oleg Nesterenko, a Russian national based in France, has been spreading Kremlin propaganda through media outlets, social platforms, and conspiracy blogs, particularly targeting audiences in French-speaking Africa. > TABLE OF CONTENTS < HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS [TikTok ] Far-right content amplification threatens Polish election integrity [Meta ] Meta Uncovers Global Disinformation Networks from China, Iran, and Romania STATE ACTORS [ Russia] Russian Disinformation Targets the Baltic States France-Based Russian Propagandist Amplifies Kremlin Narratives in Africa [ The War in Ukraine] Lessons from Ukraine About Disinformation in the Cyber War AI-RELATED ARTICLES AI Disinformation Is Polluting the Internet and Undermining Public Trust The Role of Narrative in Disinformation and Its Defense GENERAL REPORTS Belarusian Disinformation Targets Poland Through Social Media Georgia's FARA Law Promoted Through Kremlin-Backed Disinformation FRAMEWORKS TO COUNTER DISINFORMATION Meta’s 2025 First Quarter Report GLOSSARY DOWNLOAD SECTION REPORT HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS [TikTok] Far-right content amplification threatens Polish election integrity Global Witness has investigated TikTok’s recommendation algorithm and found that far-right content is disproportionately amplified in the lead-up to Poland’s June 2025 presidential runoff. Researchers created test profiles that engaged equally with both candidates' official accounts. Despite this, the algorithm recommended over five times more pro-Nawrocki (far right) content than pro-Trzaskowski (centrist) content. Overall, two-thirds of the political content pushed by TikTok favored far-right views, including antisemitic material. The apparent skew is especially notable given that Trzaskowski led the first round vote and had a significantly stronger TikTok presence than Nawrocki. TikTok’s recommendation algorithm, which prioritizes engaging content, appears to favor the sensationalist narratives more commonly associated with the far right. This dynamic which emphasises and reinforces particular political perspectives over others, raises concerns of systematic bias and its implications for electoral integrity. TikTok disputes the findings, calling the methodology flawed and denying deliberate bias. However, similar amplification patterns were observed during past elections in Germany and Romania, suggesting a broader issue in how political content is surfaced on the platform. The European Commission is investigating TikTok’s election-related conduct under the Digital Services Act. 
Critics call for increased transparency, urging the platform to grant external researchers meaningful access to data and disclose how engagement signals are weighted to assess its impact better. Sources: Global Witness, 2025. TikTok algorithm recommends twice as much hard right content to users ahead of Polish election. [online] Available at: https://globalwitness.org/en/campaigns/digital-threats/tiktok-algorithm-recommends-twice-as-much-hard-right-content-to-users-ahead-of-polish-election/ [Back to Table of Contents] [Meta] Meta Uncovers Global Disinformation Networks from China, Iran, and Romania Meta’s Q1 2025 Adversarial Threat Report reveals the takedown of coordinated inauthentic behavior (CIB) networks originating in China, Iran, and Romania that aimed to manipulate public opinion through fake accounts and deceptive online content. The Chinese network targeted Myanmar, Taiwan, and Japan using AI-generated profile photos and multilingual content to pose as locals, spreading pro-regime and anti-Western narratives. The Iranian operation focused on Azeri-speaking audiences in Azerbaijan and Turkey, promoting anti-U.S. and pro-Palestinian sentiments across multiple platforms, often using fake female journalist personas and popular hashtags to blend into trending conversations. The Romanian campaign stood out due to its massive scale, 658 Facebook accounts, and $177,000 in ad spend. It sought to influence Romanian public discourse through false personas and cross-platform coordination, including YouTube, X, and TikTok. Meta disrupted these networks before they gained substantial, authentic followings. These findings highlight the persistent and evolving threat of state-linked and opportunistic actors weaponizing social platforms to sow misinformation, undermine trust, and manipulate global political narratives. Sources: Meta, 2025. Quarterly Adversarial Threat Report Q1 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting [Back to Table of Contents] STATE ACTORS [Russia] Russian Disinformation Targets the Baltic States The Jamestown Foundation identifies hostile influence and disinformation as central elements of Russia’s hybrid strategy targeting the Baltic states. These efforts aim to erode democratic cohesion, weaken public trust, and disrupt alignment with Western institutions, without crossing the threshold of conventional conflict. At the core is a sustained campaign of information manipulation. Disinformation circulates through platforms like Telegram and TikTok, often supported by AI-generated content and deepfakes. Messaging is tailored to divide societies, discredit NATO and EU engagement, and diminish support for Ukraine. Russian-speaking minorities and economically vulnerable groups are frequent targets of these narratives, which are designed to exploit identity, fear, and historical grievances. These influence operations operate in tandem with other hybrid tactics, such as cyberattacks, sabotage, and covert intimidation, which are not the focus here but reinforce the overall pressure. Incidents like GPS interference, infrastructure attacks, or state-backed migration flows contribute to instability and uncertainty, creating fertile ground for manipulative messaging. Electoral influence is pursued primarily through perception management rather than direct interference. Polarizing content, the amplification of distrust, and reputational attacks on political actors are used to undermine institutional legitimacy and democratic participation. 
Together, these methods form a layered strategy of hostile influence that targets societal cohesion rather than territorial control. The approach is calibrated to remain deniable, cumulative, and difficult to attribute, placing democracies under sustained psychological and informational stress. Source: The Record, Antoniuk, D., 2025. Russia-linked disinformation floods Poland, Romania as voters cast ballots . [online] Available at: https://therecord.media/russia-disinformation-poland-presidential-election [Back to Table of Contents] France-Based Russian Propagandist Amplifies Kremlin Narratives in Africa Reporters Without Borders (RSF) provides a detailed report on Oleg Nesterenko, a self-styled Russian propagandist who has operated from France for nearly 30 years, spreading disinformation around 30 countries, especially in French-speaking African nations. Nesterenko disseminates pro-Russian narratives using classic propaganda techniques such as victim-perpetrator reversal, conspiracy theories, and fabricated images and videos. He denies documented war crimes like the Bucha massacre and portrays Russia as a peacemaker. Highly active on social media, he posts hundreds of messages daily, often without clear financial backing, making him a “small-scale entrepreneur” of disinformation. The report also examines his questionable background, including involvement in Nazi memorabilia trading and his radical teaching activities at a Paris business school, highlighting his ideological shift. His family’s participation in propaganda efforts is also documented. This case exemplifies how complex and multifaceted hostile influence operations by individuals can be, illustrating the challenges of identifying and combating modern hybrid warfare and propaganda. Source: Reporters Without Borders (RSF), Froger, A., Mzalouat, H., 2025. Conquering minds: A deep dive into the media offensive of a France-based Russian propagandist. [online] Available at: https://rsf.org/en/conquering-minds-deep-dive-media-offensive-france-based-russian-propagandist [Back to Table of Contents] [The War in Ukraine] Lessons from Ukraine About Disinformation in the Cyber War A report by the State Service of Special Communications and Information Protection of Ukraine (SSSCIP) underscores how influence and disinformation campaigns have become integral components of modern warfare, as evidenced during the first two years of the conflict in Ukraine. Russian cyber actors have systematically employed cyber operations to undermine media outlets and societal structures. In 2022 alone, over 200 attacks targeted Ukrainian news agencies, state media, and online platforms with the intent to disseminate propaganda, spread disinformation, and manipulate public opinion. Beyond conventional tactics such as DDoS attacks and data destruction via wiper malware, fabricated media content was increasingly used to disrupt IT systems and influence and intimidate the population. A notable example is the circulation of a deepfake video purporting to show the Ukrainian president calling for surrender, intended to sow panic and erode trust in public institutions. In 2023, these tactics evolved, with adversaries intensifying targeted “hack and leak” operations. Stolen documents and sensitive technical materials were primarily disseminated through Telegram channels to amplify psychological pressure and demoralize society. 
These actions exemplify hybrid warfare, combining conventional military efforts with cyber and informational campaigns to destabilize physical and societal-political domains. The report emphasizes that cyberattacks and hostile influence operations are deeply interconnected within the framework of hybrid warfare, collectively seeking to destabilize Ukraine’s physical infrastructure alongside its social and political fabric. Source: RAND Corporation, Treyger, E., Williams, H. J., & D'Arrigo, A., 2025. Measuring the Reach of Russia’s Propaganda in the Russia-Ukraine War. [online] Available at: https://www.rand.org/pubs/research_briefs/RBA3450-2.html [Back to Table of Contents] AI-RELATED ARTICLES AI Disinformation Is Polluting the Internet and Undermining Public Trust TechXplore publishes an article from The Conversation explaining the “AI slop” phenomenon: low-quality, AI-generated content increasingly flooding social media platforms. AI slop includes automatically created texts, images, or videos that are often repetitive and emotionally charged to maximize attention, regardless of accuracy or quality. This content is commonly found on Reddit, LinkedIn, Facebook, and Twitter. Estimates suggest that over half of longer English-language posts on LinkedIn are AI-generated. Approximately 25% of all internet traffic is attributed to “bad bots”—automated programs that mimic human behavior to distribute this content widely. Bad bots are used to share, like, and comment on AI slop, boosting the visibility and accelerating the spread of these posts. These automated spreaders pretend to be real users, helping to disseminate disinformation — deliberately false or misleading information — on a large scale. Disinformation driven by AI slop and bad bots is highly relevant to hostile influence because it manipulates political opinions, interferes with elections, and deepens societal divisions. The sheer volume and sophisticated distribution of AI slop make it harder for people to discern truth and hinder open, democratic discourse. Combating this problem is challenging as AI technology and bot networks grow increasingly complex. Without coordinated technological, legal, and societal measures, this manipulative content risks undermining democratic processes worldwide. Source: TechXplore, Provided by The Conversation, Roozenbeek, J., van der Linden, S., Kyrychenko, Y., 2025. What is AI slop? Why you are seeing more fake photos and videos in your social media feeds. [online] Available at: https://techxplore.com/news/2025-05-ai-slop-fake-photos-videos.html [Back to Table of Contents] The Role of Narrative in Disinformation and Its Defense TechXplore, with content provided by The Conversation, explores how storytelling shapes beliefs and how AI can be used to detect disinformation campaigns exploiting narrative techniques. Stories engage people emotionally and can be manipulated to influence public opinion, especially on social media. Cultural understanding is essential because symbols and meanings vary between cultures; AI risks misinterpreting narratives without it. For instance, a phrase positive in one culture may be offensive in another. Training AI with diverse cultural knowledge improves its ability to spot manipulative content for specific audiences. A significant challenge for AI is processing non-linear narratives—social media posts that jump back and forth in time or omit details.
While humans handle fragmented storytelling naturally, AI struggles to extract accurate event sequences and relationships. To tackle this, AI tools can analyze narrative structures, user personas (including social media handles), and timelines to identify coordinated disinformation efforts. These systems detect repeated storylines and suspicious posting patterns, assisting intelligence agencies, platforms, researchers, and users in flagging manipulative content. Source: TechXplore, Provided by The Conversation, Finlayson, M., Islam, A.A., 2025. Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns. [online] Available at: https://techxplore.com/news/2025-05-weaponized-storytelling-ai-disinformation-campaigns.html [Back to Table of Contents] GENERAL REPORTS Belarusian Disinformation Targets Poland Through Social Media The Digital Forensic Research Lab (DFRLab) reports that Belarusian state media, specifically the Polish-language edition of Radio Belarus linked to the sanctioned entity Beltelradio, is spreading disinformation targeting Polish audiences on platforms such as TikTok, YouTube, X, and Facebook, actively circumventing European Union (EU) sanctions. As of May 7, 2025, these accounts had published over 7,790 posts and videos, garnering 16 million views and at least 542,000 engagements. This digital campaign aims to influence Poland’s 2025 presidential elections by undermining trust in democratic institutions, amplifying polarizing narratives, and either discrediting or supporting specific candidates. Although platforms are legally obligated under the EU Digital Services Act (DSA) to block sanctioned content, these channels were reported in December 2024 but remained broadly accessible throughout the election campaign. Only TikTok acted shortly before the election by applying geofencing. Platforms have ongoing obligations under the DSA to identify and mitigate systemic election risks and illegal content, but enforcement has been insufficient in this case. Source: DFRLAB, Gigitashvili, G., Hoffman, M., Nazari, S., 2025. Banned, yet broadcasting: Sanctioned Belarusian state media influencing the Polish elections. [online] Available at: https://dfrlab.org/2025/05/29/banned-yet-broadcasting-how-sanctioned-belarusian-state-media-is-influencing-the-polish-elections-on-social-media/ [Back to Table of Contents] Georgia's FARA Law Promoted Through Kremlin-Backed Disinformation As published by EU vs Disinfo , Georgia plans to implement its new Foreign Agents Registration Act (FARA) on June 1, 2025. The government, led by the Georgian Dream party, has launched a campaign to justify the law. This campaign includes elements of disinformation aligned with pro-Kremlin narratives. Officials claim the law aims to protect national sovereignty and prevent foreign-funded unrest. Critics argue the legislation is intended to restrict civil society and independent media, drawing comparisons to Russia’s foreign agent law, despite government statements that it is modeled after the U.S. FARA. The disinformation campaign portrays NGOs and critical media as agents of foreign interference. Leaders of Georgian Dream and affiliated media promote theories of a “Global War Party” and allege that Western governments are paying 120 euros per protester to fuel unrest. These claims, including allegations that NGOs are plotting a revolution, are echoed by pro-Kremlin and Azerbaijani outlets, framing the law as a defense against Western destabilization. 
Although the government emphasizes similarities with the U.S. FARA, observers note significant legal protections and intent differences. More than 100 civic and media organizations warn that the law could be used to suppress dissent. Unlike the U.S. version, which targets espionage-like activities with judicial oversight, Georgia’s law applies more broadly. It lacks these safeguards, raising concerns it may serve as an authoritarian tool under the pretext of transparency and national security. Source: EUvs.Disinfo, 2025. Georgia’s Foreign Agents law: protecting sovereignty or attacking civil society? [online] Available at: https://euvsdisinfo.eu/georgias-foreign-agents-law-protecting-sovereignty-or-attacking-civil-society/ [Back to Table of Contents] FRAMEWORKS TO COUNTER DISINFORMATION Meta’s 2025 First Quarter Report Meta's Integrity Report for Q1 2025 reveals that the company identified and disrupted three covert influence operations from Iran, China, and Romania, halting these campaigns before they reached significant authentic audiences. At the same time, Meta acknowledges scaling back proactive content moderation efforts, aiming to reduce enforcement errors by raising thresholds for automated content removal and relying more heavily on user reports for less severe policy violations. Despite this general reduction in proactive moderation, Meta emphasizes that targeted enforcement against serious threats, such as coordinated manipulation ("hostile influence"), remains a high priority. Additionally, the company continues enhancing its content moderation capabilities through advanced AI tools, notably large language models (LLMs), which surpass traditional AI systems and human reviewers in accuracy and efficiency. Source: Meta, 2025. Integrity Reports, First Quarter 2025. [online] Available at: https://transparency.meta.com/de-de/integrity-reports-q1-2025/ [Back to Table of Contents] [Download Report] [Back to Table of Contents] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. 
CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation that seeks to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation False, inaccurate, or misleading information communicated regardless of the intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.)
based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents]
- Weekly Report: Cyber-based influence campaigns, 19th - 25th of May 2025
[Listen to the Podcast] [Report Highlights] Cybernews reports that Telegram founder Pavel Durov has accused France’s foreign intelligence chief of pressuring him to block conservative voices in Romania ahead of national elections—a request he rejected. The DGSE denies any attempt at political interference. According to a report published by Graphika, a covert influence network aligned with Chinese interests has been uncovered on X. The network involves over 1,000 fake accounts designed to manipulate online discourse about U.S. tariffs and trade policies. As described by the media outlets The Record, NewsGuard's Reality Check, and DFRLab, Romania's recent presidential election has become entangled in a wave of disinformation and unproven allegations, as defeated far-right candidate George Simion calls for the results to be annulled. DFRLab reports that a disinformation network linked to the French company DirectWay promoted false claims of election interference in Romania's 2025 presidential race to support a nationalist candidate and undermine democratic trust. According to a report by ABC News, during the recent conflict sparked by a deadly attack in Pahalgam, Indian-administered Kashmir, disinformation surged online with alarming speed and sophistication. In a recent report, The Record reveals that the European Union has introduced a new sanctions package targeting individuals and organizations involved in Russia’s hybrid warfare operations, including disinformation, sabotage, and espionage activities across Europe and Africa. > TABLE OF CONTENTS < HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS [X] Telegram’s Durov Accuses France of Political Censorship Attempt STATE ACTORS [Russia] Russia-Linked Disinformation Targets Elections in Romania and Poland Russia-Aligned TAG-110 Evolves Tactics in Ongoing Disinformation Campaign Targeting Tajikistan [The War in Ukraine] RAND Study Finds Limited Impact of Russian Propaganda Narratives [China] Graphika Exposes Chinese-Aligned Hostile Influence Campaign on X GENERAL REPORTS The Potential and Risks of Meta’s Community Notes Program Far-Right Candidate Challenges Romania Election Results French-Linked Network Amplifies False Romanian Election Claims Europe at a Crossroads: Balancing AI Innovation and Regulation Disinformation Arises in India-Pakistan Conflict FRAMEWORKS TO COUNTER DISINFORMATION EU Renews Mission in Moldova to Combat Disinformation and Hybrid Threats EU Sanctions Target Russia's Hybrid Warfare Threats GLOSSARY DOWNLOAD SECTION REPORT HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS [X] Telegram’s Durov Accuses France of Political Censorship Attempt Cybernews reports that Pavel Durov, founder of the messaging app Telegram, accused Nicolas Lerner, head of France’s foreign intelligence agency (DGSE), of asking him to block conservative voices in Romania ahead of national elections. The meeting allegedly occurred this spring at the Hôtel de Crillon in Paris, where Durov is under judicial supervision. He said he refused, stating that Telegram does not censor protest movements in any country, including Russia, Belarus, and Iran. The DGSE denied the accusation, stating that meetings with Durov were strictly to remind him of his responsibilities in combating terrorism and child exploitation. It firmly rejected any involvement in electoral interference.
Elon Musk reacted by reposting Durov’s statement on X with the comment: “Wow.” Musk has repeatedly criticized European governments for alleged suppression of right-wing political voices. Source: CyberNews, 2025. France asked the Telegram founder to ban conservative Romanian voices, he says. [online] Available at: https://cybernews.com/news/france-asked-the-telegram-founder-to-ban-conservative-romanian-voices-he-says/ [Back to Table of Contents] STATE ACTORS [Russia] Russia-Linked Disinformation Targets Elections in Romania and Poland A report by The Record highlights increased Russian disinformation in Romania and Poland during their presidential elections. The Kremlin-backed campaign, known as Doppelgänger, used known tactics such as cloning official websites of institutions and media outlets to spread false narratives. In Romania, the campaign aimed to erode trust in democratic institutions. It pushed claims of government abuse, large-scale electoral fraud, and false reports that the election had been cancelled. Authorities had warned of such activity before the first round. Despite the efforts, centrist candidate Nicușor Dan won the May 18 runoff, defeating far-right nationalist George Simion with 53.6% of the vote. His victory reaffirmed Romania’s pro-EU and pro-NATO course. In Poland, the campaign sought to undermine support for pro-European and pro-Ukrainian policies. Authorities detected foreign-funded disinformation on Facebook ahead of the first round. According to Ukraine’s military intelligence (HUR), the Doppelgänger campaign intensified in Poland, using fake accounts and bots on platforms like X to impersonate voters and amplify false messages. Key narratives included opposition to support for Ukraine, calls to exit the EU, and attacks on government policy. The election now heads to a runoff on June 1 between centrist Rafał Trzaskowski and nationalist Karol Nawrocki. The outcome will shape Poland’s EU role and stance on Ukraine. Source: The Record, Antoniuk, D., 2025. Russia-linked disinformation floods Poland, Romania as voters cast ballots. [online] Available at: https://therecord.media/russia-disinformation-poland-presidential-election [Back to Table of Contents] Russia-Aligned TAG-110 Evolves Tactics in Ongoing Disinformation Campaign Targeting Tajikistan In a May 2025 assessment, Recorded Future’s Insikt Group analyzed a cyber operation by the Russia-aligned actor TAG-110 targeting Tajikistan’s public sector. While the primary method was technical—phishing emails and macro-enabled Word documents—the campaign had a hostile influence dimension. The attackers embedded malware into files disguised as official communications, such as election schedules or defense-related notices. When opened, these files installed persistent code that granted long-term access to government, research, and educational systems. This access allowed for more than surveillance. The timing and content of the attacks indicate an intent to influence internal decision-making during politically sensitive periods, including elections and military activity. The operation blurred the line between information and manipulation by impersonating trusted documents. The goal was not simply to gather data, but to shape perception and disrupt institutional integrity, subtly guiding outcomes in ways favorable to Russian strategic interests. TAG-110’s campaign demonstrates how cyber capabilities can serve broader geopolitical objectives. The hostile influence aspect lies in using digital tools not just for intrusion, but to quietly steer political processes from within, without overt interference, yet with significant impact.
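To make the delivery method concrete, the following is a minimal triage sketch of our own (it is not drawn from the Insikt Group report): it flags e-mail attachments in macro-enabled Office formats, such as the Word templates described above. The file paths and the exact format list are assumptions for the example only.

```python
# Illustrative triage only: flags e-mail attachments that use macro-enabled
# Office formats (.docm, .dotm, etc.). Not Recorded Future tooling; the
# command-line paths are placeholders.
import sys
from email import message_from_bytes, policy
from pathlib import Path

MACRO_EXTENSIONS = {".docm", ".dotm", ".xlsm", ".pptm"}
MACRO_MIME_TYPES = {
    "application/vnd.ms-word.document.macroenabled.12",
    "application/vnd.ms-word.template.macroenabled.12",
}

def flag_macro_attachments(eml_path: Path) -> list[str]:
    """Return the names of attachments in an .eml file that look macro-enabled."""
    msg = message_from_bytes(eml_path.read_bytes(), policy=policy.default)
    flagged = []
    for part in msg.iter_attachments():
        name = part.get_filename() or ""
        if (Path(name).suffix.lower() in MACRO_EXTENSIONS
                or part.get_content_type().lower() in MACRO_MIME_TYPES):
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    for path in sys.argv[1:]:  # e.g. python triage.py suspicious_mail.eml
        hits = flag_macro_attachments(Path(path))
        if hits:
            print(f"{path}: review macro-enabled attachment(s): {hits}")
```

Such a filter only surfaces candidates for human review; it is no substitute for sandboxing or full malware analysis.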
Source: Recorded Future, 2025. Russia-Aligned TAG-110 Targets Tajikistan with Macro-Enabled Word Templates. [online] Available at: https://go.recordedfuture.com/hubfs/reports/cta-2025-0522.pdf [Back to Table of Contents] [The War in Ukraine] RAND Study Finds Limited Impact of Russian Propaganda Narratives In a May 2025 study, researchers at the RAND Corporation analyzed the spread and impact of Russia’s most extreme propaganda narratives related to the war in Ukraine. The focus was on four core themes: claims of “denazification,” the dehumanization of Ukrainians through slurs and hate speech, antisemitic narratives targeting President Zelenskyy, and anti-Western rhetoric portraying Ukraine as a puppet of liberal or LGBTQ agendas. While these narratives have circulated widely on platforms like X and Telegram, RAND’s analysis reveals that their influence is more limited than often assumed. The study examined over 43 million posts from 3.8 million users across 30 languages. The most virulent content, especially dehumanizing language, was primarily concentrated in Russian-language communities and struggled to gain traction internationally. On X, most users posting such content did not engage in dialogue; most extreme posts came from anonymous, unpopular accounts with little reach. On Telegram, similar dynamics were observed: Russian-language channels were active and often widely forwarded, but the most toxic narratives failed to break out of niche audiences. In contrast, many pro-Ukrainian voices had larger followings and greater visibility, effectively challenging Russian messaging in digital spaces. Overall, RAND concludes that while Russian propaganda is aggressive in scale, its resonance beyond Russian-speaking networks remains shallow. Source: RAND Corporation, Treyger, E., Williams, H. J., & D'Arrigo, A., 2025. Measuring the Reach of Russia’s Propaganda in the Russia-Ukraine War. [online] Available at: https://www.rand.org/pubs/research_briefs/RBA3450-2.html [Back to Table of Contents] [China] Graphika Exposes Chinese-Aligned Hostile Influence Campaign on X A new report by Graphika has identified a covert, pro-Chinese influence network operating on X (formerly Twitter). The network appears to have been designed to shape international discourse around U.S. tariffs and trade policy. According to Graphika, over 1,000 fake accounts were identified. Using stolen content and carefully constructed counterfeit personas, the operators posed as authentic users from the United States, Canada, the United Kingdom, and Japan. At the heart of the campaign was an effort to undermine the United States' trade policies under President Donald Trump. The fake profiles also promoted the narrative that Japan, Canada, and the United Kingdom were resisting political pressure from Washington—a portrayal deliberately framed as grassroots discourse. Beyond trade-related topics, the network disseminated content aligned with China’s broader geopolitical agenda. This included critical narratives about the U.S. military presence in Japan and promotional content for Chinese government-backed tourism initiatives.
While the network could not be definitively linked to a specific state actor, Graphika concluded that the combination of tactics, content, and behavioral patterns strongly suggests a pro-Chinese influence operation, with notable similarities to previously documented activities attributed to Chinese state actors. Despite a temporary easing of U.S.–China trade tensions, the report warns that covert efforts to sway Western public opinion will likely persist. Source: Graphika, le Roux, J., 2025. Tariff Tirade: China-Aligned Network Poses as Grassroots Voices in Effort to Covertly Boost Online Narratives Critical of US Tariffs and Trade Policies. [online] Available at: https://public-assets.graphika.com/reports/graphika_report_tariff_tirade.pdf [Back to Table of Contents] GENERAL REPORTS The Potential and Risks of Meta’s Community Notes Program According to an article by The Conversation, Meta is preparing to launch its Community Notes program in Canada following its rollout in the U.S. in March 2025. The initiative allows users to add context to misleading posts. Notes are only made public if they receive approval from users with differing perspectives, reflecting a decentralized, consensus-based approach. Key insights come from X (formerly Twitter), which has operated a similar system, launched initially as “Birdwatch”, since 2021. Studies indicate that Community Notes on X can lead to the voluntary deletion of flagged posts and encourage contributors to use more moderate, fact-based language. One of the program’s most widely praised features is transparency: X has made its data and algorithms publicly accessible, allowing independent researchers to monitor and evaluate the system. However, significant weaknesses have also emerged. Fewer than 9% of submitted notes are published due to the high threshold for cross-perspective agreement. In politically polarized environments, this model often fails. Moreover, there is a real risk of manipulation by coordinated groups aiming to discredit accurate content through mass reporting. Another critical limitation is that neither X nor Meta penalizes users who spread misinformation. Platforms avoid direct intervention, shifting responsibility to users. Critics argue that without consequences, Community Notes risks becoming a symbolic gesture rather than a meaningful tool against disinformation. For Community Notes to be effective in Canada, Meta must address these structural flaws, learning from failures seen on X.
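To illustrate how such a cross-perspective threshold works in principle, the short toy model below is a hedged sketch of ours; it is not the bridging-based scoring X actually uses, nor anything Meta has published. The rater leanings and thresholds are assumptions for the example only.

```python
# A deliberately simplified toy model of the cross-perspective requirement.
# NOT X's or Meta's actual algorithm; leanings and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Rating:
    rater_leaning: float  # assumed label in [-1, 1]; the sign marks the "side"
    helpful: bool

def note_is_published(ratings: list[Rating],
                      min_raters_per_side: int = 5,
                      min_helpful_share: float = 0.7) -> bool:
    """Publish only if raters on BOTH sides independently find the note helpful."""
    for side in ([r for r in ratings if r.rater_leaning < 0],
                 [r for r in ratings if r.rater_leaning > 0]):
        if len(side) < min_raters_per_side:
            return False  # not enough cross-perspective input yet
        if sum(r.helpful for r in side) / len(side) < min_helpful_share:
            return False  # one side does not find the note helpful
    return True

# Example: a note helpful to one side only is never published under this rule.
one_sided = [Rating(-0.8, True)] * 10 + [Rating(0.6, False)] * 10
print(note_is_published(one_sided))  # False
```

The toy rule makes the bottleneck visible: a note that convinces only one side is never shown, which is also one reason so few submitted notes clear the bar.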
Source: The Conversation, Borwankar, S., 2025. Meta’s Community Notes program is promising, but needs to prioritize transparency. [online] Available at: https://theconversation.com/metas-community-notes-program-is-promising-but-needs-to-prioritize-transparency-248324 [Back to Table of Contents] Far-Right Candidate Challenges Romania Election Results A recent Reality Check from NewsGuard details that after Romania’s presidential election on May 18, 2025, pro-Kremlin and nationalist X users spread false claims of electoral fraud in favor of Nicușor Dan, who defeated pro-Russian candidate George Simion in the runoff. Baseless allegations of election fraud were amplified by French conspiracy influencers and by the Pravda network, a known Russian-controlled disinformation outlet. These claims accused France, Moldova, and the European Union of orchestrating the alleged manipulation. Romanian authorities and OSCE observers confirmed the election was free, fair, and held under democratic conditions. The Record further reports that Simion called for the election to be annulled in response to the result. He cited claims by Telegram founder Pavel Durov, who alleged that French authorities had pressured the platform to silence “conservative voices in Romania.” French officials dismissed the accusation. Earlier this year, Romanian authorities annulled the first round of voting due to confirmed Russian interference, including a coordinated disinformation campaign on TikTok and other platforms. Another report by DFRLab outlines how George Simion, once known for his anti-Russian unionist activism in support of Romanian-Moldovan reunification, has increasingly echoed Kremlin-aligned narratives. He has opposed military aid to Ukraine and has framed the Russia-Ukraine war as a “fraternal conflict.” Simion’s shift toward pro-Russian messaging has drawn praise from Kremlin-aligned Moldovan figures such as Igor Dodon and support from disinformation networks linked to fugitive oligarch Ilan Shor, known for spreading false narratives online (see our Weekly Report, W20, May 2025, for further details). Source: NewsGuard's Reality Check, Badilini, S., 2025. After Romania Elects Pro-EU Candidate, Pro-Russian Accounts Claim Election Was Stolen. [online] Available at: https://www.newsguardrealitycheck.com/p/after-romania-elects-pro-eu-candidate The Record, Antoniuk, D., 2025. Defeated Romanian far-right candidate calls for court to annul election over alleged interference. [online] Available at: https://therecord.media/romania-election-annul-simion-george DFRLab, Olari, V., 2025. From Bucharest to Chisinau: How pro-Kremlin networks shaped Romania’s 2025 election. [online] Available at: https://dfrlab.org/2025/05/16/pro-kremlin-networks-shaping-romania-2025-election/ [Back to Table of Contents] French-Linked Network Amplifies False Romanian Election Claims A 2025 Digital Forensic Research Lab (DFRLab) investigation uncovered a network of 15 websites linked to the France-based company DirectWay, three of which actively repost Romanian-language content from far-right and fringe sources. The company operates the news aggregator Ziar[.]com, identified as a source of disinformation, and the X account @_Direct_News, which promoted claims of election interference by the European Union and France during Romania’s presidential elections. During the 2025 vote, both platforms circulated false claims declaring nationalist candidate George Simion the winner. Official results confirmed the victory of pro-European candidate Nicușor Dan. The campaign relied on material from outlets such as Realitatea Plus, which was fined for partisan election-day coverage. Technical analyses showed that the network shares Google Analytics and AdSense codes, indicating centralized control. Historical data revealed that the network had targeted up to 13 African countries. Most of these domains were later redirected to direct[.]news, which publishes region-specific content for 55 African and 47 Asian countries. An AdSense reverse lookup traced the network's administration to a Romanian national based in Lyon, France, listed as DirectWay's director in the French corporate registry. The case study demonstrates how a modular, transnational digital infrastructure can be repurposed to spread digital hostile influence campaigns across regions, challenging electoral integrity and democratic stability.
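The tracker-based attribution technique mentioned above can be illustrated with a minimal sketch of our own (it is not DFRLab's actual tooling, and the domains used below are placeholders, not sites named in the report): sites that embed the same Google Analytics or AdSense identifier are grouped together as likely sharing an operator.

```python
# Minimal sketch: cluster sites that embed the same Google Analytics or
# AdSense identifier. Not DFRLab tooling; domains below are placeholders.
import re
from collections import defaultdict

import requests  # third-party: pip install requests

GA_IDS = re.compile(r"\b(UA-\d{4,10}-\d{1,4}|G-[A-Z0-9]{6,12})\b")
ADSENSE_IDS = re.compile(r"\b(ca-pub-\d{10,20})\b")

def tracker_ids(html: str) -> set[str]:
    """Extract Analytics / AdSense identifiers embedded in a page."""
    return set(GA_IDS.findall(html)) | set(ADSENSE_IDS.findall(html))

def cluster_by_shared_ids(domains: list[str]) -> dict[str, set[str]]:
    """Map each tracker ID found on more than one domain to those domains."""
    seen: dict[str, set[str]] = defaultdict(set)
    for domain in domains:
        try:
            html = requests.get(f"https://{domain}", timeout=10).text
        except requests.RequestException:
            continue  # unreachable sites are simply skipped
        for tid in tracker_ids(html):
            seen[tid].add(domain)
    return {tid: doms for tid, doms in seen.items() if len(doms) > 1}

if __name__ == "__main__":
    suspects = ["example-news-one.com", "example-news-two.com"]  # placeholders
    for tid, doms in cluster_by_shared_ids(suspects).items():
        print(tid, "->", sorted(doms))
```

In practice, analysts combine such signals with historical snapshots and registry data, as DFRLab did here, since shared identifiers alone can occasionally reflect a shared service provider rather than shared control.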
Source: DFRLab, Châtelet, V., 2025. Online network with French ties promotes election interference claims in Romania. [online] Available at: https://dfrlab.org/2025/05/23/online-network-with-french-ties-promotes-election-interference-claims-in-romania/ [Back to Table of Contents] Europe at a Crossroads: Balancing AI Innovation and Regulation In a recent analysis, Carnegie Europe characterizes the European Union as a global pioneer in AI governance, citing the AI Act as the first comprehensive legal framework for artificial intelligence. Complemented by instruments like the Digital Services Act and the GDPR, this approach reflects the EU’s commitment to ethics and fundamental rights. However, the Union is increasingly shifting toward innovation, with initiatives such as AI factories and the EuroStack project to enhance technological sovereignty. The deregulatory turn has been framed as a necessary response to geopolitical pressure and technological competition from the U.S. and China. Yet the report suggests that Europe’s key barriers to innovation may lie more in structural weaknesses—such as limited access to venture capital, fragmented markets, and reliance on foreign infrastructure—than in regulation itself. Recent policy changes, including the withdrawal of the proposed AI liability directive and the inclusion of national security exemptions in the AI Act, may risk weakening oversight and fundamental rights protections. The EU now faces a strategic dilemma: balancing its role as a global standard-setter in ethical AI with the need to remain technologically competitive. The regulation of dual-use AI, applicable in civilian and military contexts, remains particularly unresolved. According to Carnegie Europe, a viable path forward would involve greater investment, sovereign digital infrastructure, and a binding framework for dual-use AI. A balanced approach linking innovation with responsible regulation may be key to preserving Europe’s autonomy and democratic values. Source: Carnegie Endowment for International Peace, Csernatoni, R., 2025. The EU’s AI Power Play: Between Deregulation and Innovation. [online] Available at: https://carnegieendowment.org/research/2025/05/the-eus-ai-power-play-between-deregulation-and-innovation?lang=en [Back to Table of Contents] Disinformation Arises in India-Pakistan Conflict ABC News reports that following the deadly attack in Pahalgam and the subsequent military escalation between India and Pakistan, a parallel wave of disinformation spread rapidly across platforms like X, WhatsApp, Facebook, and YouTube. AI-generated deepfakes, recycled footage, and fabricated stories distorted public perception and fueled nationalist sentiment (we previously covered the developments between India and Pakistan in our Weekly Reviews 19 and 20). Prominent examples included doctored images of Rawalpindi Stadium in ruins, a deepfake video of a Pakistani general appearing to admit defeat, and video game clips shared as real airstrikes. A fake Daily Telegraph front page praising Pakistan’s air force was also widely circulated. ABC highlights how even mainstream media broadcast unverified content. One video showed a couple dancing in Kashmir, falsely framed as their final moments before death. Despite the couple confirming they were alive, the footage continued to spread. The Digital Rights Foundation recorded a surge in hate speech, while India’s blocking of 17 Pakistani YouTube channels and several X accounts, including those of journalists, raised censorship concerns. Fact-checkers like BOOM Live and AFP were overwhelmed.
Of 437 X posts reviewed, 179 were from verified accounts, yet only 73 included community notes. Experts warn that disinformation will remain a powerful weapon in digital-age conflicts without stronger moderation and verification tools. Source: ABC News, Hogan, L., 2025. Misinformation war rages online amid India-Pakistan tensions. [online] Available at: https://www.abc.net.au/news/2025-05-24/misinformation-online-war-kashmir-conflict-india-pakistan/105318696 [Back to Table of Contents] FRAMEWORKS TO COUNTER DISINFORMATION EU Renews Mission in Moldova to Combat Disinformation and Hybrid Threats The European Union has extended the mandate of the EU Partnership Mission in Moldova (EUPM Moldova) until May 2027. The mission aims to strengthen Moldova’s resilience against hybrid threats, with a strong focus on foreign disinformation and information manipulation. Launched in 2023 at the request of the Moldovan government, EUPM Moldova is the EU’s first civilian mission explicitly designed to counter such threats. It provides strategic advice and operational support in crisis management, cybersecurity, and communication integrity. One key achievement is the support for establishing Moldova’s Centre for Strategic Communication and Countering Disinformation (StratCom Centre). The mission provided training, tools, and EU best practices to help identify and respond to false or manipulative narratives. Since its launch, EUPM Moldova has carried out over 60 capacity-building activities. Specialized teams work closely with Moldovan authorities to improve institutional responses. The mission has become a strategic partner in building sustainable security and protecting democratic processes from foreign interference. Source: European External Action Service, 2025. EUPM Moldova: Moving forward towards sustainable security resilience in Moldova. [online] Available at: https://www.eeas.europa.eu/eupm-moldova/eupm-moldova-moving-forward-towards-sustainable-security-resilience-moldova_en EU Sanctions Target Russia's Hybrid Warfare Threats According to a report by The Record , the European Union has introduced a new sanctions package targeting individuals and entities involved in Russia’s hybrid warfare operations. The measures focus on actors linked to disinformation, sabotage, and espionage activities across Europe and Africa. Those sanctioned include members of Russia’s military intelligence agency (GRU), individuals spreading pro-Kremlin narratives on social media, and companies providing technical infrastructure, such as web hosting and GPS jamming technologies, that support these efforts. A key target of the sanctions is Voice of Europe, a media outlet reportedly secretly funded by pro-Russian Ukrainian politician Viktor Medvedchuk. The platform allegedly ran influence operations across the continent, including attempts to covertly finance candidates in the 2024 European Parliament elections. The sanctions list also includes media figures involved in disseminating Russian disinformation in African countries and the operators of Stark Industries, a U.K.-based hosting provider. The company is believed to have supported Kremlin-aligned cyber and influence campaigns, including the well-known Doppelgänger operation to manipulate public opinion in the West. Sources: The Record, Antoniuk, D., 2025. EU sanctions target individuals, organizations behind Russia’s disinformation and sabotage operations. 
[online] Available at: https://therecord.media/eu-sanctions-orgs-individuals-tied-to-russia-disinformation [Back to Table of Contents] [Download Report] [Back to Table of Contents] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare Is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of any sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation that seeks to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation False, inaccurate, or misleading information communicated regardless of the intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis.
Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents] DISCLAIMER Copyright and License of Product This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. 
The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect. [Back to Table of Contents]
- Weekly Report: Cyber-based influence campaigns, 12th - 18th of May 2025
[Listen to the Podcast] [Report Highlights] According to Google's Threat Analysis Group (TAG) Q1 2025 bulletin, state-linked actors from Russia, Iran, and Nigeria conducted coordinated disinformation campaigns across YouTube, Google Ads, and other platforms to spread propaganda and manipulate public opinion. A small-scale experiment published by Global Witness concludes that TikTok's algorithm disproportionately promotes far-right political content in Romania ahead of the presidential election. The DFRLab reports that pro-Kremlin networks appear to have supported Romanian far-right candidate George Simion in the 2025 presidential election by amplifying nationalist and anti-Western narratives across social media platforms. An article published by ABC highlights accusations of foreign interference and disinformation tactics in the Solomon Islands. China is implicated following the resignation of newly appointed Rural Development Minister Daniel Waneoroa from the Inter-Parliamentary Alliance on China (IPAC). As a report by the Center for Strategic & International Studies (CSIS) detailed, widespread disinformation campaigns significantly affected the 2025 Philippine midterm elections. According to EU vs. Disinfo, the Kremlin is using state-driven propaganda and religious narratives to glorify its war in Ukraine and obstruct peace efforts through disinformation. > TABLE OF CONTENTS < HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS [Google] Google TAG Uncovers Global Disinformation Campaigns in Q1 2025 [TikTok] TikTok’s algorithm may lean right in Romania STATE ACTORS [Russia] White Lies, No Lines on the Kyiv Express Operation Overload: Experts You Know, Voices You Don’t VIGINUM: Russian Influence Campaign Storm-1516 Kremlin-Aligned Networks Target Romania’s 2025 Presidential Election [The War in Ukraine] Kremlin Glorifies War Through Religion and Disinformation Campaigns [China] China Accused of Disinformation and Political Pressure in Solomon Islands CYFLUENCE ATTACKS Cyber Attacks Persist After India-Pakistan Ceasefire GENERAL REPORTS The Impact of Disinformation on the Philippine Midterm Elections CSIS Futures: The Collapse of Trust in a Connected World GLOSSARY DOWNLOAD SECTION REPORT HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS [Google] Google TAG Uncovers Global Disinformation Campaigns in Q1 2025 Google’s Threat Analysis Group (TAG) identified and disrupted several coordinated influence operations (CIOs) across its platforms in the first quarter of 2025. These campaigns were primarily linked to state-sponsored actors from Russia, Iran, and Nigeria, who used YouTube channels, Google Ads accounts, and websites to spread content supportive of their respective governments and critical of opposing parties. Russia’s operations stood out, with over 1,300 YouTube channels taken down and numerous domains blocked. These were connected to the actor “Portal Kombat,” which pushed pro-Russian narratives in multiple languages. The TAG report reveals a shift in disinformation tactics, including the growing use of multilingual content and the focus on regional issues to sway public opinion. The involvement of platforms like YouTube and Google News highlights the broad scope of these efforts. The findings reflect the ongoing threat posed by state-backed disinformation and the need for constant vigilance and action to protect the integrity of information ecosystems. Source: Google Threat Analysis Group, Leonard, B., 2025. TAG Bulletin: Q1 2025.
[online] Available at: https://blog.google/threat-analysis-group/tag-bulletin-q1-2025/ [Back to Table of Contents] [TikTok] TikTok’s algorithm may lean right in Romania Global Witness conducted a small-scale experiment in early May 2025 in Bucharest to assess TikTok’s political content recommendations ahead of Romania’s presidential election. Over two days, researchers created three new TikTok accounts on factory-reset phones to simulate users without prior history. Each account followed the official pages of both presidential candidates and watched around ten posts per candidate. Then, the For You feed was browsed for ten minutes—political posts were watched, while non-political posts were skipped. All political content shown was manually reviewed. The results suggest that approximately three-quarters of the political content promoted by TikTok favored far-right narratives or personalities. While the methodology was limited in scope and more exploratory than empirical, it raises concerns about the role of recommendation algorithms in amplifying extremist views. TikTok’s algorithm is designed to surface content based on user interaction and interests, a model that can inadvertently prioritize polarizing or provocative material. TikTok has rejected the findings, calling the study unscientific and misleading. However, the results echo similar concerns in other countries, including Germany. Under the EU’s Digital Services Act, platforms like TikTok are legally required to assess and mitigate risks to electoral integrity—including those posed by their algorithms. The European Commission is already investigating TikTok’s influence in the Romanian electoral context. Source: GLOBAL WITNESS, 2025. TikTok algorithm continues to push multiple times more far-right content to users ahead of Romanian election. [online] Available at: https://globalwitness.org/en/campaigns/digital-threats/tiktok-algorithm-continues-to-push-multiple-times-more-far-right-content-to-users-ahead-of-romanian-election/ [Back to Table of Contents] STATE ACTORS [Russia] White Lies, No Lines on the Kyiv Express NewsGuard’s Reality Check report reveals a targeted disinformation campaign launched in early May 2025 against several European leaders. The campaign centered on a blurry, low-resolution video that falsely claimed French President Emmanuel Macron, German Chancellor Friedrich Merz, and UK Prime Minister Keir Starmer were seen with cocaine during a train trip to Kyiv on May 9. The video in question shows a white napkin and a cocktail pick. Despite its harmless content, the footage spread rapidly across Russian state media, far-right platforms, and conspiracy websites. Kremlin-linked figures amplified the false claim, including Foreign Ministry spokesperson Maria Zakharova and RT editor-in-chief Margarita Simonyan. More than 100 articles pushing the hoax appeared across Russian media networks. The effort aimed to discredit key Western supporters of Ukraine, with a particular focus on Macron, who has emerged as one of the most vocal backers of Kyiv. This incident follows a broader Kremlin strategy of spreading fabricated drug-related claims, a tactic previously used against Ukrainian President Volodymyr Zelensky. High-resolution footage from reliable outlets like AFP and AP disproved the allegation, showing that the supposed “evidence” was entirely misleading.
In addition, according to a Le Monde article , the Elysée broke from France’s traditional diplomatic communication by responding to the cocaine rumor with sarcasm and meme-style messaging on its official X account. This marked the first time the French presidency used such a tone, aiming to counter disinformation in real time and mirror Ukraine’s online tactics. Source: NEWSGUARD Reality Check, SADEGHI, M., & MAITLAND, E., 2025. It’s All a Blur: A Fuzzy Video Is Cited to Falsely Claim that European Leaders Snorted Cocaine on Their Way to Kyiv. [online] Available at: https://www.newsguardrealitycheck.com/p/its-all-a-blur-a-fuzzy-video-is-cited LE MONDE, AUDUREAU, W., 2025. How the Elysée adapted its communication style to tackle Macron cocaine rumor. [online] Available at: https://www.lemonde.fr/en/les-decodeurs/article/2025/05/16/how-the-elysee-adapted-its-communication-style-to-tackle-macron-cocaine-rumor_6741335_8.html [Back to Table of Contents] Operation Overload: Experts You Know, Voices You Don’t Findings from the Institute for Strategic Dialogue (ISD) reveal that the Russian disinformation campaign Operation Overload, first identified in 2023, has continued into 2025 with new tactics and targets. Between January and March, the campaign published at least 135 deceptive posts across platforms like X, Telegram, and Bluesky, focusing on Germany, France, and Ukraine. The operation aims to undermine democratic trust and weaken support for Ukraine by impersonating trusted sources. It uses AI-generated voices, fake headlines, and forged logos to mimic media outlets, academics, and law enforcement. In the first quarter of 2025, over 80 organisations were impersonated, with more than three-quarters linked to public institutions. While most posts saw little engagement, one video falsely claiming USAID paid celebrities to visit Ukraine reached over 4 million views, boosted by conspiracy accounts. The rest relied on bot networks to simulate visibility. Though its direct reach is limited, the campaign creates real-world harm, confusing users, damaging reputations, and draining fact-checkers' resources. Around 80% of identified posts remained online during analysis, increasing long-term risk. Earlier reports by groups like CheckFirst, Recorded Future, and the Digital Forensics Research Lab have also documented how Operation Overload floods social media with hoaxes to overwhelm journalists and institutions. The 2025 phase shows a continuation of this strategy—more refined, but equally focused on destabilisation through deception. Source: INSTITUTE FOR STRATEGIC DIALOGUE, 2025. Stolen voices: Russia-aligned operation manipulates audio and images to impersonate experts. [online] Available at: https://www.isdglobal.org/digital_dispatches/stolen-voices-russia-aligned-operation-manipulates-audio-and-images-to-impersonate-experts/ [Back to Table of Contents] VIGINUM: Russian Influence Campaign Storm-1516 The recent technical report by VIGINUM [For more background information, please find our recent blog post here ] provides an in-depth analysis of 77 documented influence operations attributed to the Russian campaign Storm-1516. The report outlines the campaign’s overarching goals—chief among them, discrediting the Ukrainian government to weaken Western aid—while highlighting its systematic targeting of political figures in France, Germany, and the United States, especially during election periods. 
Storm-1516 leverages deepfakes, staged videos, and a sophisticated distribution network involving burner and paid accounts. These narratives are amplified through pro-Russian networks and linked influence operations, including Project Lakhta and CopyCop. The investigation draws direct lines to individuals and groups aligned with the Russian state, including exiled former U.S. law enforcement officer John Mark Dougan and figures from the Prigozhin and Dugin networks. It further implicates Yury Khoroshenky, a suspected GRU Unit 29155 operative, as a coordinator and financier. VIGINUM concludes that Storm-1516 constitutes a clear case of foreign digital interference, posing an escalating threat to the integrity of European public discourse. Source: SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Back to Table of Contents] Kremlin-Aligned Networks Target Romania’s 2025 Presidential Election A report by the Digital Forensic Research Lab (DFRLab) reveals that pro-Kremlin networks in Romania and Moldova actively supported far-right presidential candidate George Simion during Romania’s 2025 election. These networks, which had previously criticized Simion, shifted to amplifying his nationalist and Eurosceptic messaging. Their efforts relied heavily on platforms such as Telegram and TikTok. The DFRLab notes that Simion’s rise was enabled by the same digital infrastructure that had earlier promoted Călin Georgescu, a candidate later disqualified by Romania’s electoral bureau and Constitutional Court. Georgescu was removed from the race due to violations of campaign regulations, including opaque financing and suspected Russian-backed online operations. Moscow has denied the allegations. The findings highlight the ongoing threat of foreign interference in democratic elections. State-aligned influence networks can manipulate public opinion and disrupt electoral processes. This interference is transnational, as shown by the involvement of Moldovan networks linked to oligarch Ilan Shor, which promoted anti-Western narratives. Source: DFRLab, OLARI, V., 2025. From Bucharest to Chisinau: How pro-Kremlin networks shaped Romania’s 2025 election. [online] Available at: https://dfrlab.org/2025/05/16/pro-kremlin-networks-shaping-romania-2025-election/ [Back to Table of Contents] [The War in Ukraine] Kremlin Glorifies War Through Religion and Disinformation Campaigns An article published by EUvsDisinfo on May 17, 2025, describes how, in the fourth year of its full-scale invasion of Ukraine, Russia continues to pursue long-term political and territorial goals. According to the article, this includes seeking control over large parts of Ukrainian territory and weakening the country’s statehood. Russian forces reportedly target Ukraine’s infrastructure, economy, agriculture, and industry, often without distinguishing between civilian and military objectives. EUvsDisinfo argues that the war is increasingly presented as a central element of Russia’s national identity. President Putin’s public speeches—such as the one delivered on May 9—are described as framing the war as a moral or even quasi-religious duty. The Russian Orthodox Church is said to support this framing. State-affiliated media reportedly portray war widows honoring fallen soldiers as heroic sacrifices.
According to the article, this narrative also marginalizes critical civil society voices, such as the Committee of Soldiers’ Mothers of Russia, which has faced increasing legal pressure. Regarding diplomatic efforts, the article notes that Russia has responded cautiously to initiatives such as a proposed 30-day ceasefire or direct talks between Presidents Zelenskyy and Putin. Russian state media instead emphasize the so-called “root causes” of the conflict. EUvsDisinfo interprets this as a strategy to delay or deflect negotiations. The listed demands, such as a NATO rollback, Ukrainian neutrality, and recognition of Russian claims over occupied territories, are seen not as realistic negotiation points but as political pretexts to prolong the conflict. Source: EUvsDisinfo, 2025. Celebrating a new war. [online] Available at: https://euvsdisinfo.eu/celebrating-a-new-war/ [Back to Table of Contents] [China] China Accused of Disinformation and Political Pressure in Solomon Islands According to a report by ABC News Australia, the resignation of Solomon Islands' newly appointed Minister for Rural Development, Daniel Waneoroa, from the Inter-Parliamentary Alliance on China (IPAC) has sparked political controversy and renewed concerns about foreign interference. Waneoroa stated that he stepped down to promote political stability and align with Prime Minister Jeremiah Manele's national direction. IPAC is an international, cross-party coalition of legislators from over thirty countries. It advocates for a coordinated and values-based approach to China policy, focusing on human rights, democracy, and global security. IPAC has been outspoken on China’s policies in Hong Kong, Xinjiang, and Taiwan, and maintains ties with Taiwanese institutions. Civil society groups and IPAC suspect that Waneoroa’s resignation followed pressure from the Chinese embassy in Honiara. According to reports, embassy officials demanded a meeting and suggested his continued affiliation with IPAC could affect development funding. The Transparency Solomon Islands group condemned the alleged interference, warning that it could undermine national sovereignty and political stability. Waneoroa’s role in IPAC was particularly sensitive, given that the Solomon Islands established formal ties with China in 2019, ending diplomatic relations with Taiwan. The Chinese embassy denied any wrongdoing, calling the allegations baseless and reaffirming its support for Solomon Islands’ sovereignty. Source: ABC NEWS, DZIEDZIC, S. & AUMANU-LEONG, C., 2025. China is accused of foreign interference in Solomon Islands after minister quits international group. [online] Available at: https://www.abc.net.au/news/2025-05-12/china-embassy-solomon-islands-embroiled-foreign-interference/105280538 [Back to Table of Contents] CYFLUENCE ATTACKS Cyber Attacks Persist After India-Pakistan Ceasefire A recent post by CyberKnow states that despite the ceasefire announced several days ago between India and Pakistan, hacktivist groups remain active and continue to manipulate the information space. As noted in our last Weekly Review (more information available here), the primary threat now lies less in technical attacks and more in spreading misleading or exaggerated claims. One example is the widely circulated report that cyberattacks took 70% of India’s power grid offline. According to CyberKnow, this is part of a disinformation campaign to generate public uncertainty and confusion.
DDoS attacks and website defacements remain the most commonly used methods. In addition, new groups emerge, either announcing their intent to get involved or already carrying out attacks. Several alleged data breaches have also been reported recently. However, CyberKnow clarifies that many cases are fake or based on publicly available information misrepresented as sensitive or compromised data. These developments highlight how information manipulation has become a key element of modern cyber conflict—shaping public perception. Source: CyberKnow (@Cyberknow20), 2025. India-Pakistan Cybertracker #2. [online] Available at: https://x.com/Cyberknow20/status/1922269417137942839 [Back to Table of Contents] GENERAL REPORTS The Impact of Disinformation on the Philippine Midterm Elections In an article , the Center for Strategic and International Studies (CSIS) outlines key developments from the Philippine midterm elections held on May 12, 2025. Voters elected officials at all levels, with particular attention on the 12 contested Senate seats—widely seen as a preview of the 2028 presidential race. Three major political blocs shaped the vote: the Marcos administration, the Duterte camp, and a resurgent liberal opposition. President Marcos’s slate secured six Senate seats, fewer than expected. Duterte allies won four seats, and former president Rodrigo Duterte was elected mayor of Davao City despite being detained by the International Criminal Court. The liberal camp exceeded expectations, with figures like Bam Aquino and Kiko Pangilinan returning to national office. Millennials and Gen Z comprised over 60% of registered voters and generated more than 70% of political engagement online. Astroturfing played a significant role, as the Philippine Center for Investigative Journalism found over 100 Facebook pages posing as news outlets, which spent over $860,000 on political advertising. The Duterte camp revived its disinformation networks to portray Duterte as a political victim. On platform X, over 30% of pro-Duterte accounts were reportedly fake. Meanwhile, the Marcos camp promotes anti-disinformation measures—though critics argue these also serve partisan interests. Source: CSIS, Quitzon, J., 2025. Philippines Votes 2025: A Power Shift in the Senate. [online] Available at: https://www.csis.org/analysis/philippines-votes-2025-power-shift-senate [Back to Table of Contents] CSIS Futures: The Collapse of Trust in a Connected World “ Trust Fails ” is part of the Scenarios That Could Define 2035 series by the Center for Strategic and International Studies (CSIS) . The format combines future-oriented narrative scenario development with expert commentary to explore potential global trends. This one, written by Jon B. Alterman with contributions from cybersecurity and AI experts, imagines a world where trust between individuals, institutions, and nations collapses by 2035 due to technological misuse. The authors highlight how technology has historically supported global trust through secure transactions, verified identities, and reliable communication. However, they warn that the same digital infrastructure is now being used to undermine confidence. Deepfakes, synthetic media, and AI-generated disinformation allow bad actors to falsify events, damage reputations, and disrupt public understanding. A key focus is the role of social media algorithms, which amplify emotionally charged misinformation while downplaying corrections. 
Malicious actors can generate large volumes of disinformation and optimize it through AI-driven testing. Once public confidence in digital information systems erodes, everything from journalism to government records becomes suspect. In terms of cyberspace, the scenario anticipates a future where identity verification becomes more difficult, cyberattacks become more disruptive, and digital platforms lose legitimacy. Economic systems slow as verification costs rise, and political polarization deepens. States with weaker digital infrastructure suffer most, facing exclusion from investment, cooperation, and secure information flows. The scenario is a stark warning: trust may become a casualty of technological advancement. [Click here to explore other scenarios]. Source: Center for Strategic and International Studies (CSIS), Alterman, J., Allen, G., Carter, W., Byrd, C., & Spaulding, S., 2025. Trust Fails. [online] Available at: https://features.csis.org/scenarios2035/trust-fails/ [Back to Table of Contents] [Download Report] [Back to Table of Contents] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare Is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of any sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation that seeks to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation False, inaccurate, or misleading information communicated regardless of the intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation.
The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards“. We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents] DISCLAIMER Copyright and License of Product This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. 
Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User's use of the Product or violation of these terms.

Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

[Back to Table of Contents]
- The Attack on Luxury Brands: A case study of the weaponization of the online ecosystem by China
At the CRC, we primarily study digital hostile influence in three realms: military, sociopolitical, and economic. A clear distinction is usually tricky to draw in our field, since these areas frequently intersect and overlap. While the methodologies and even the perpetrators may be similar, the stakeholders, and therefore the defensive strategies, can differ. A ministry of defense monitoring an enemy's propaganda will have different objectives than a government body entrusted with protecting an election, a corporation, or an entire industry under attack.

Hostile influence campaigns (HICs) have predominantly been associated with the military and socio-political realms, but a massive shift has occurred in recent years. Using the same mixed bag of fake users, sock puppets, fake websites, cyberattacks, and real influencers – to name but a few – attackers now target companies and even entire industries. Adding to this phenomenon is the "offensive ecosystem", which has led to a privatization of influence operations. At the forefront of this trend was the Internet Research Agency, a private company that ran influence operations on behalf of the Russian government. Many more companies followed suit, such as the SDA, which is linked to the ongoing Doppelganger campaign (for more information, please see our article: Doppelganger in Germany). These influence mercenaries apply methodologies that were once predominantly the domain of governments and readily sell their services to anyone for the right price. The result is an explosive mix of malign governments, influence mercenaries, and unscrupulous influencers that can set their crosshairs on anyone with ease.

China is steadily moving to the forefront in using the online ecosystem to strengthen its position and attack its perceived enemies. Our latest research examines a nefarious attack on the French luxury industry that should be a stark warning to all industries and corporations of how rapidly the threat landscape in the economic realm is evolving. This attack needs to be seen within the context of the ongoing geopolitical turmoil and of how China is increasingly positioning itself against "the West" in general and the US in particular. The weaponization of platforms such as TikTok and of networks of influencers, paired with China's increasing willingness to put them to use, is a deeply disconcerting trend. This time, "only" the luxury industry was targeted. It would be naïve not to see this as a warning shot and a flexing of the muscles of an increasingly unscrupulous adversary.

I would also like to thank Intercept9500, which graciously took on a pivotal role in this joint research project and whose team stood by our side every step of the way. While we admittedly felt glamorous working on this project, it left us with a feeling of unease – but see for yourself.

Florian Frank, Director of the CRC

[Find the Report here]
- Codebreakers Hack Sepah Bank: Financial Motive or Influence Operation?
This week, we examine the recent developments involving the hacker group "Codebreakers" and the Iranian Sepah Bank, which have surprisingly not made international headlines yet. A Wikipedia entry addresses the incident in detail and compiles relevant sources. See this article for comprehensive coverage of how the leaks have affected discourse in Iran.

Figure 1 - Codebreakers announcement on WhatsApp (Courtesy of Meta)

In the past, the United States Office of Foreign Assets Control (OFAC) has referred to Sepah Bank as the "financial backbone" of Iran's ballistic missile program. (1) The bank is also believed to have close ties to the country's military and security apparatus and has been on the U.S. sanctions list for several years. (2)

On March 25, 2025, the hacker group "Codebreakers" announced it had successfully breached Sepah Bank. The group exfiltrated data and publicized the information in a "hack and leak" operation. One particularly notable aspect of this case is the group's choice of distribution channels: in addition to conventional leak platforms, they also turned to Instagram, offering the data to the highest bidder. Choosing Instagram appears to have been a deliberate decision, especially considering it is one of Iran's most widely used platforms. This suggests an effort to reach a broad Iranian audience.

The previously unknown group presents itself through a simple, stereotypical profile image of a hooded hacker, most likely generated by AI. This kind of visual branding, combined with the choice of platform, seems inconsistent with a genuine ransom-driven motive.

The fact that Codebreakers used Instagram to promote and share data from the breach is highly unusual, as is the large volume of information in the sample. Typically, financially motivated cybercriminals avoid obvious public platforms to conceal their tracks. Threat actors do commonly use the deliberate public disclosure of stolen data online to increase pressure on the targeted organization, threatening to release sensitive information unless a ransom is paid. What is unusual in this case is the use of a WhatsApp group to auction off the information. Threat actors rarely use this messaging service as a platform for communication or data publication, partly because WhatsApp is linked to a phone number and because its parent company, Meta, is known to cooperate with law enforcement requests. Another notable aspect is that the initial posts were written exclusively in Persian. In such cases, threat actors usually communicate in English to reach a broader audience and maximize pressure on the victim. The choice of language in this instance may therefore suggest a regional focus or a politically motivated intent.

Figure 2 - Initial posts, written mainly in Persian, on WhatsApp (Courtesy of Meta)

This strongly suggests that the goal was to generate public attention primarily within Iran, and it indicates that this was not a typical financially motivated cyberattack but an influence campaign. The choice of target, the method of disclosure, and the language used in communication all suggest an intention to undermine trust in state institutions and publicly discredit Bank Sepah. The identity of the group behind the attack remains unknown. Possible actors include state-sponsored entities pursuing geopolitical interests, or exiled Iranian opposition groups with the technical capabilities to carry out such an operation and a desire to influence public perception inside Iran.
However, it seems unlikely that an Iranian opposition group is responsible. These groups generally lack the resources to successfully breach a high-security institution like Bank Sepah. Even if they did, they would likely release the material in English to attract international attention and exert political pressure. The fact that the communication was conducted entirely in Persian indicates an operation explicitly directed at an Iranian audience.

The financial motive also appears to be secondary. While a ransom of $42 million was demanded, the amount was so excessive that payment was never a realistic expectation, which suggests that serious negotiations were not the actual goal. Furthermore, the sample data released was unusually large. In typical extortion cases, only small data sets are published to demonstrate the validity of the breach. The decision to publish hundreds of thousands of records indicates that the primary aim was to expose as much information as possible.

Figure 3 - Ransom offer via WhatsApp (Courtesy of Meta)

Some time after the sample files were published, the data was put up for sale on Exploit[.]in at a much more realistic price. The platform is a Russian-language cybercrime forum that has been active since 2005 and operates on both the dark and the clear web. It is considered one of the oldest and most established platforms used by professional cybercriminals to trade illegal services such as hacking, fraud, and ransomware-as-a-service (RaaS). The data has likely already been purchased multiple times.

Figure 4 - Offer on Exploit[.]in

A few days later, the Codebreakers announced that they had also compromised additional Iranian government systems and databases.

Figure 5 - Announcement of additional data (Courtesy of Meta)

The group also released personal information of account holders from the database, including individual banking details.

Figure 6 - Disclosure of personal information (Courtesy of Meta)

The release specifically highlighted account holders with a military background. This, too, may indicate that the operation is part of an information campaign.

Figure 7 - Disclosure of personal information (Courtesy of Meta)

Notably, on April 10, 2025, the Codebreakers launched a video competition inviting participants to create short videos highlighting the hack, its potential implications, and the group's Telegram channel. Prize money in cryptocurrency was offered as an incentive, suggesting an effort to maximize reach and impact. This, too, could be seen as a possible indication of state-backed involvement.

Figure 8 - Announcement of the video competition (Courtesy of Meta)

Taken as a whole, the evidence suggests that this operation is less likely to be a traditional financially motivated attack and more likely a targeted influence campaign. The choice of language, the method of disclosure, and the nature of the demands all point to objectives that are not monetary. Instead, the operation appears aimed at reaching an Iranian audience, undermining trust in state institutions, and potentially stirring domestic unrest. The operation is ongoing and will continue to be monitored closely.

[Footnotes]
(1) U.S. Department of the Treasury, 2007. Iran's Bank Sepah Designated by Treasury for Facilitating Iran's Weapons Program. [online] Available at: https://home.treasury.gov/news/press-releases/hp219 [Accessed 16 Apr. 2025].
(2) U.S. Department of the Treasury, 2007. Bank Sepah – Sanctions Listing.
[online] Available at: https://sanctionssearch.ofac.treas.gov/Details.aspx?id=25580 [Accessed 16 Apr. 2025].

[Back to Table of Contents]