
  • Years of Deception: ClearSkySec's Report on a Houthi-Yemeni Influence Network

    [Introduction] This week on our blog, we are presenting a new report by ClearSkySec that highlights a compelling case of a long-term influence operation. (1) This operation has demonstrated remarkable flexibility in its strategy while maintaining a consistent underlying infrastructure. As always, our focus remains on analyzing hostile influence activities and their mechanisms of operation.

[Background and Development of the Operation] ClearSkySec first uncovered the campaign in 2019, when it was initially directed against Israeli audiences. Between 2019 and 2022, the focus shifted towards the Gulf states, particularly Saudi Arabia and the United Arab Emirates. During this period, no targeted activities against Israel were detected. However, since late 2024, the campaign has again pivoted toward Israel. While the technical methods employed have changed little over time, a shift in content strategy has been observed: instead of disseminating fabricated news, the operators now copy and redistribute authentic content from reputable sources.

[Key Findings] The campaign has remained continuously active since 2019, demonstrating consistency in its operational techniques. Thematically, it still revolves around gossip and entertainment content. While fabricated news dominated early on, the current strategy relies on real content from Israeli news outlets and social media. Content is distributed mainly through fake profiles embedded in open Israeli Facebook community groups, seamlessly blending campaign messages into regular user activity. Many of the domains created in 2019 are still operational, pointing to a long-term strategy. The campaign offers content in both Hebrew and Arabic, addressing different regional audiences. However, while the Hebrew content appears relatively professional, the Arabic material often lacks authenticity. No indications of malware deployment or watering hole attacks have been observed to date. Although the campaign's objectives remain unclear, the infrastructure can be repurposed at any time for spreading disinformation, inducing fear, or delivering malware.

[Tactics, Techniques, and Procedures of the Campaign] The operation's fundamental tactic appears to be blending invisibly into existing social media ecosystems and gaining users' trust over time. The operators likely do not aim for immediate disruption but rather for a sustained, low-visibility presence, maximizing influence potential while minimizing the risk of detection.

Campaign Methodology – Flowchart, page 3 (Courtesy of ClearSky)

To achieve this, a series of well-established techniques is employed. Fake news websites, such as gool-live[.]com, were created to publish copied content sourced from legitimate Israeli news outlets and social media posts. In parallel, fake Facebook pages like "Celebrity News" were set up to post teaser texts and clickbait links leading back to these fake websites. Numerous fake Facebook profiles support this infrastructure, typically with generic English or transliterated Arabic names. These profiles show minimal personal activity and are primarily used to disseminate campaign materials across various groups.

The procedures underpinning the operation are systematic. First, the infrastructure is established: domains are registered using privacy protection services and frequently moved between Yemeni hosting providers to obfuscate ownership. Authentic content is then systematically harvested from reliable sources and published on the fake websites.
The dissemination process begins through dedicated Facebook pages and continues via fake profiles infiltrating open Israeli Facebook groups. Once inside the groups, the operators share identical posts to maximize visibility and simulate organic engagement. Content is tailored linguistically, with Hebrew posts targeting Israeli users and Arabic content directed toward audiences in the Gulf states. Over time, the fake profiles are maintained with minimal but carefully timed activity to give the impression of organic growth.

[Conclusion] The Houthi-Yemeni influence campaign demonstrates how long-term operations can adapt tactics while preserving stable infrastructure and objectives. It offers a concrete example of how influence activities are embedded within target communities over extended periods with minimal visibility. ClearSkySec's current report provides insights for understanding the dynamics and persistence of such operations. We also recommend reviewing ClearSky's earlier reports on related campaigns to gain a broader perspective on the tactics and methods used in hostile influence efforts. (2)

Footnotes
(1) ClearSky Security Ltd, 2025. Houthi-Yemeni Influence Campaign. [online] Available at: https://www.clearskysec.com/wp-content/uploads/2025/04/Houthi-Influence-Campaign-april-2025.pdf
(2) ClearSky Security Ltd, 2019. Yemen-Based Disinformation Campaign Distributing Fake News in Israel and the Arab World. [online] Available at: https://www.clearskysec.com/yemen-disinformation-campaign/. ClearSky also points out similarities to an Iranian operation in 2019: ClearSky Cyber Security, 2018. Global Iranian Disinformation Operation. [online] Available at: https://www.clearskysec.com/global-iranian-disinformation-operation/

  • Commercial Hostile Influence Networks: Anthropic’s Findings on Multi-Client Operations

    [Introduction] In April 2025, the company Anthropic published an analysis of four coordinated influence operations, which it claims to have disrupted. They were carried out by a commercial provider ("Influence-as-a-Service") and could not be attributed to any specific state actor. (1)

[Technical Implementation of the Campaigns] Claude, a large language model (LLM) developed by Anthropic, was used for content generation and for managing digital identities. Based on predefined parameters, the model determined how controlled online personas should respond to posts—whether to comment, share, like, or ignore them.

Operational Architecture, Courtesy of Anthropic (2)

The coordination was managed through a JSON-based framework that defined each persona's political alignment, language style, and behavioral patterns (an illustrative sketch of such a persona definition follows the footnotes below). Claude also created multilingual content, generated prompts for image-generation models, and evaluated image outputs for narrative alignment. This enabled the automated, cross-platform, and consistent management of over 100 social media accounts. (3)

[Campaign Targets and Regional Focus] The four documented operations were directed at different regions. In Europe, the focus was on energy-related narratives; in Iran, on cultural identity. In the case of the United Arab Emirates, the content emphasized economic strength while criticizing EU regulations. In Kenya, political development initiatives were positively portrayed. The operation also targeted opposition figures in one EU member state while promoting Albanian political personalities. Despite differing objectives, the technical execution was uniform across all campaigns. (4)

[Communication Strategy and Impact] According to Anthropic, only a limited viral impact was achieved. The goal was not rapid viral spread, but the long-term embedding of credible-appearing personas within ongoing digital conversations. The accounts were designed to appear authentic. To simulate human behavior, the personas responded to bot accusations from real users with pre-scripted sarcasm or deflection, reinforcing the impression of genuine interaction. (5)

[Conclusion] LLMs will remain structurally vulnerable. Conventional filters are insufficient to detect long-term, coordinated manipulation. Preventing misuse remains one of the central challenges in managing the risks associated with scalable AI systems. The report underscores the growing role of private actors in political manipulation. Unlike state entities, these providers operate without ideological alignment or institutional oversight. They act opportunistically, adapt to client needs, and deploy general-purpose AI systems with standardized infrastructure to carry out large-scale influence operations, even with limited resources. The rising demand has given rise to an "Influence-as-a-Service" sector that offers modular, low-cost tools for outsourced political messaging while minimizing clients' attribution risks. This development has turned influence into a technically optimized, professionalized, scalable service detached from ideology. Anthropic's findings show that individual operators can simultaneously serve conflicting geopolitical interests, accelerating an evolution in such campaigns' reach, precision, and impact. It seems that tactics are shifting: rather than chasing virality, actors increasingly prioritize the stealthy construction of long-lasting, embedded networks that gradually shape discourse through persistent and believable engagement.
This strategic pivot makes influence operations harder to detect and disrupt. Addressing this challenge will require strong national policy frameworks and active cooperation from developers and providers of large language models, who play a key role in identifying misuse and enforcing safeguards at scale.

Footnotes
(1) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p. 1 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf
(2) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p. 3 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf
(3) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p. 2 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf
(4) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p. 2 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf
(5) Anthropic, K. Lebedev, A. Moix & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. p. 2 [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf
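Illustrative sketch: to make the persona framework described above more concrete, the following minimal Python example shows what a JSON-style persona definition and a per-post engagement decision of the kind Anthropic describes could look like. All field names, values, and the keyword-based decision rule are invented for illustration; Anthropic's report does not publish its actual schema, and in the documented operation the decision was made by the Claude model rather than by a simple rule.

    import json

    # Hypothetical persona definition, loosely modeled on the JSON-based
    # framework described in Anthropic's report. Every field name and value
    # here is an assumption made for illustration only.
    persona = {
        "id": "persona-017",
        "language": "en",
        "political_alignment": "pro-energy-independence",
        "tone": "measured, occasionally sarcastic",
        "allowed_actions": ["comment", "share", "like", "ignore"],
        "engagement_rules": {
            "on_topic_keywords": ["energy", "pipeline", "tariffs"],
            "reply_to_bot_accusations": "deflect_with_sarcasm",
        },
    }

    def decide_action(post_text: str, persona: dict) -> str:
        """Toy stand-in for the model call that picks an engagement action.

        In the operation Anthropic describes, an LLM made this decision from
        the persona parameters; a keyword check is used here purely to
        illustrate the control flow, not the real logic.
        """
        keywords = persona["engagement_rules"]["on_topic_keywords"]
        if any(word in post_text.lower() for word in keywords):
            return "comment"
        return "ignore"

    if __name__ == "__main__":
        print(json.dumps(persona, indent=2))
        print(decide_action("New tariffs will raise energy prices", persona))

In the architecture the report outlines, an orchestration layer would iterate over many such persona definitions and pass each decision to platform-specific posting tooling, which is what allowed the operator to manage more than 100 accounts consistently across platforms.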

  • Storm-1516: VIGINUM Report Insights

    [INTRODUCTION] This week, we examine the report "Analysis of the Russian Information Manipulation Campaign Storm-1516," published in May by VIGINUM. The timing of its release is unlikely to be coincidental. At the end of April 2025, the French government publicly confirmed what had long been suspected: repeated cyberattacks on French institutions can be directly attributed to Russian intelligence services. [1] What stands out is not the substance of the accusation (such activities have been documented since 2017) but France's unusually explicit and public first attribution. [2] It seems likely that this attribution is part of a broader political communication strategy. The VIGINUM report should, therefore, be seen not only as a technical analysis but also as a political signal. It forms part of France's effort to publicly expose Russian influence operations and foster coordinated international awareness and response.

[SUMMARY] The report documents 77 information operations attributed to the actor cluster Storm-1516. The comprehensive list of individual cases included in the annex is particularly noteworthy. [3] This level of transparency sets a benchmark for structured and verifiable threat analysis. At this point, we reproduce the report's summary before turning to selected highlights for a more in-depth examination:

Figure 1 – Summary, p. 3 of the report, Courtesy of VIGINUM

[INSIGHTS] The role of "CopyCop" remains open to interpretation. VIGINUM draws a clear analytical line between Storm-1516 and CopyCop—two closely intertwined operations that are nonetheless treated as distinct entities. This distinction is significant, as other organizations, such as Recorded Future, have previously treated both clusters as identical or inseparably linked. [4]

VIGINUM, by contrast, emphasizes that Storm-1516 is an autonomous Russian information manipulation set (IMS) responsible for strategic disinformation operations. CopyCop, in turn, functions as a technical dissemination network used by multiple Russian actors. This differentiation is evident in the report's executive summary, which refers to "close coordination and occasional overlap between Storm-1516 and other Russian IMS, including Project Lakhta and CopyCop." [5] The latter is not considered an internal component of Storm-1516 but a separate actor within the broader Russian influence ecosystem. Another example of this operational overlap is the registration of the domain ensemble-24[.]fr in June 2024 by operators affiliated with CopyCop. The site impersonated the official campaign website of the French political party "Ensemble" and was used as part of a Storm-1516 election interference operation. Technical indicators, such as shared IP infrastructure, demonstrate the close connection. [6]

Figure 2 – Screenshot of websites impersonating the coalition "Ensemble", p. 6 of the report, Courtesy of VIGINUM

VIGINUM notes that the CopyCop network "is now used by several actors in the Russian information influence ecosystem" [7], an indication that it functions not as a proprietary tool of Storm-1516, but as a shared technical infrastructure. The report further states that "Storm-1516 narratives are almost systematically amplified by fake news websites from the CopyCop network" [8], positioning the network less as an originator of content and more as a key amplifier in coordinated influence campaigns.
Although a clear chain of command between the actors is not defined, there are technical overlaps in domain infrastructure and indications of possible links to the Russian Federal Security Service (FSB). [9] These ambiguities suggest an underlying structure that warrants further investigation into how collaboration within the network is organized.

Figure 3 – Evolution of the IMS Storm-1516, p. 8 of the report, Courtesy of VIGINUM

Storm-1516 strategically uses artificial intelligence, deepfakes, and forged content—standard tools in hybrid disinformation operations, but especially well documented in this case. While the campaign initially focused on discrediting Ukraine to erode Western support, it later shifted toward direct interference in democratic elections. One example is a fabricated audio recording falsely implicating Barack Obama in the attempted assassination of Donald Trump. In Germany, political figures such as Friedrich Merz, Robert Habeck, and Annalena Baerbock were targeted with fabricated allegations ranging from corruption to conspiracy theories involving sexual abuse and immigration. [10]

Figure 4 – Distribution Chain, p. 11 of the report, Courtesy of VIGINUM

Storm-1516's disinformation architecture unfolds across five tightly connected phases: Preparation, Distribution, Laundering, Amplification, and Relays. [11] This structured process reveals a methodical, scalable operation designed to obscure attribution while maximizing narrative reach and political effect.

In the Preparation phase, operators create and stage false content. They register domain names, fabricate personas, open burner social media accounts, and develop visual and textual materials. These setups, such as fake whistleblower identities, lay the technical and narrative foundation for later phases.

Distribution follows via three key vectors. First, disposable social media accounts post videos or claims under the guise of spontaneous leaks. Second, paid third-party actors, such as fringe influencers, distribute the narratives to broader audiences. Third, the CopyCop network, comprising over 290 fake news sites operated by John Mark Dougan, serves as a direct publishing platform for core disinformation content.

Laundering is the third stage. Content originally seeded by Storm-1516 is republished in foreign media outlets, primarily in Africa and the Middle East. These reprints, often labeled as "sponsored" or "branded content," are meant to obscure the Russian origin and present the stories as independent, locally sourced reporting.

Amplification escalates the visibility of these narratives. CopyCop sites recycle the content across multiple domains, while paid social media accounts repeat and boost messaging. Comment sections on Western media outlets—particularly tabloids and far-right platforms—are manipulated to insert links and echo narratives. Telegram channels serve as distribution hubs, frequently replicating CopyCop material. X accounts affiliated with Project Lakhta, the BRICS Journalists Association, and Russia's Federal Security Service strategically broadcast the stories across languages and regions, reinforcing their legitimacy and volume.

In the final phase, Relays, other actors take up the narratives, amplifying their reach even further. This includes Russian disinformation networks like Portal Kombat, RRN/Doppelgänger, and Mriya. Russian embassies, state broadcasters (RT, TASS, RIA), and media linked to the FSB, GRU, and SVR continue the distribution.
Belarusian outlets and Western pro-Russian actors—including influencers, fringe websites, and at times even elected officials—also help circulate these narratives.

We'll bypass further details and jump straight to the attribution, where a sharp chart lays out the complex web of actors driving the operation.

Figure 5 – p. 21, Courtesy of VIGINUM

Section 4 of the report focuses on the key actors behind the Storm-1516 campaign. Among them are John Mark Dougan, who runs the CopyCop network, and ideological figures such as Aleksandr Dugin, Valery Korovin, and Leonid Savin—individuals long embedded in Russian think tanks and influence networks. Particularly noteworthy is Section 4.4, which introduces Yury Khoroshenky, a figure allegedly linked to GRU Unit 29155. According to VIGINUM, Khoroshenky was involved early on in the organization and financing of the campaign and maintained close ties to other core actors. While the report stops short of formal attribution, it emphasizes that the structure and behavior of Storm-1516 are consistent with state-directed influence operations. The mention of a potentially GRU-affiliated individual offers a crucial lead for further investigation. [12]

[Conclusion] VIGINUM concludes that Storm-1516 meets the criteria of targeted foreign digital interference. Due to its technical sophistication, high adaptability, and strategic focus, the campaign represents a serious threat to the integrity of the digital public sphere in France and Europe. Especially during election periods and international crises, Storm-1516 has the potential to distort public discourse, undermine trust in democratic institutions, and deliberately fuel social polarization. [13]

Footnotes
(1) Agence nationale de la sécurité des systèmes d'information (ANSSI), 2025. Targeting and compromise of French entities using the APT28 intrusion set. CERTFR-2025-CTI-007. [online] Available at: https://www.cert.ssi.gouv.fr/cti/CERTFR-2025-CTI-007/ [Accessed 18 May 2025].
(2) Reuters, Irish, J., 2025. France accuses Russian intelligence of repeated cyberattacks since 2021. [online] Available at: https://www.reuters.com/world/europe/first-france-accuses-russian-intelligence-repeated-cyber-attacks-2025-04-29/ [Accessed 18 May 2025].
(3) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, pp. 29-33. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(4) Insikt Group®, 2024. Russia-Linked CopyCop Uses LLMs to Weaponize Influence Content at Scale. Recorded Future. [online] Available at: https://www.recordedfuture.com/research/russia-linked-copycop-uses-llms-to-weaponize-influence-content-at-scale [Accessed 18 May 2025].
(5) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 3. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(6) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 6. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(7) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 15. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(8) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 17. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(9) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 23. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(10) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 7. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(11) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, pp. 11-20. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(12) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 27. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].
(13) SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516, p. 28. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf [Accessed 18 May 2025].

  • Weekly Report: Cyber-Based Influence Campaigns, 28 April - 4 May 2025

    [Report Highlights]

As published by the Jamestown Foundation, Russian hockey star Alexander Ovechkin's record-breaking goal in the NHL was swiftly politicized by the Kremlin as part of its broader disinformation strategy.

An investigation by The Guardian, in collaboration with the anti-racism group "Hope Not Hate", has uncovered a campaign targeting Hong Kong pro-democracy activists in the UK.

According to an article by DFRLab, the rapid advancement of generative AI has revolutionized how disinformation is created and distributed. Malign actors now use sophisticated tools like GANs, diffusion models, and multi-modal AI to fabricate highly realistic synthetic media.

BIRN Kosovo's comprehensive monitoring report of over 20 broadcast outlets and 4,000 online publications during the 2025 parliamentary elections revealed a troubling landscape dominated by disinformation, hate speech, and foreign influence.

According to an article published by The Conversation, disinformation has emerged as a significant concern as Australia approaches its federal election.

As published by EU vs. Disinfo, as Russia prepares for its Victory Day parade on May 9, the Kremlin is expected to ramp up disinformation targeting European nations by labeling them "Nazis" and "neo-Nazis."

> TABLE OF CONTENTS <

HOSTILE INFLUENCE CAMPAIGNS - STATE ACTORS
[Russia]
Russia Expanding Disinformation Tactics in Africa
Kremlin Uses Ovechkin's Record for Influence Campaigns
Russian Disinformation Campaign Targets Moldova's Pro-EU President
Historical Revisionism at the Core of Kremlin Disinformation
[China]
Suspected China-Linked Influence Campaign Targets Exile Activists in UK
AI RELATED ARTICLES
AI Media Advances and Disinformation Tactics
GENERAL REPORTS
Polling Misinformation May Mislead Australian Voters
Hostile Influence Threatening the Integrity of Kosovo's 2025 Election
Viral Disinformation Marks First 100 Days of Trump's Second Term
Social Media's Influence on the 2025 Canadian Election
Impact of Misinformation on the 2025 Australian Election
FRAMEWORKS TO COUNTER DISINFORMATION
EU Funds 5 Million Euros to Strengthen Resilience
GLOSSARY

REPORT

STATE ACTORS

[Russia]

Russia Expanding Disinformation Tactics in Africa

A recent EU vs. Disinfo article analyzes Russia's FIMI approach in Africa, which is described as a hybrid disinformation strategy that combines overt state media, covert networks, and local actors. Global outlets like TASS and RIA Novosti provide baseline narratives, while regional branches such as RT Africa and Sputnik Afrique adapt content specifically for African audiences. Russian embassies amplify official messaging, especially in South Africa and Kenya. Covert actors like the African Initiative, linked to Russian intelligence services, and the "Portal Kombat" (Pravda ecosystem) network are key in disseminating pro-Kremlin narratives. These entities operate through seemingly local websites and use automated republication to saturate regional information spaces with synchronized messaging. Russia uses a two-way information laundering strategy in Africa: narratives are first localized through the African Initiative and its amplifiers, then recycled into Russian state media to create an illusion of independent validation. In addition, offline tools, such as events, media training, and partnerships with local groups (e.g., the African Initiative Association in Burkina Faso), reinforce these efforts on the ground.
The narratives focus on portraying the West, particularly France and the U.S., as exploitative and destabilizing, while framing Russia as a trustworthy partner and defender of African values. Familiar anti-Western narratives are tailored to local contexts, casting Russia as a natural ally. This strategy is not short-term propaganda but a persistent effort to reshape Africa's information ecosystem in Russia's favor.

Source: EUvsDisinfo, 2025. Echoes of influence: inside Russia's FIMI activities in Africa. [online] Available at: https://euvsdisinfo.eu/echoes-of-influence-inside-russias-fimi-activities-in-africa/

Kremlin Uses Ovechkin's Record for Influence Campaigns

As published by the Jamestown Foundation, Russian ice hockey star Alexander Ovechkin, forward for the Washington Capitals and the NHL's all-time top scorer, is strategically used by the Kremlin as part of its digital influence efforts. While his athletic success is celebrated globally, the Russian state capitalizes on his popularity through online platforms such as the Kremlin's official website, Telegram, and state-run media to advance nationalist messaging. Ovechkin's public support for President Vladimir Putin, including his creation of the "Team Putin" campaign, and his refusal to denounce Russia's invasion of Ukraine, make him useful in shaping pro-regime narratives. His curated image, shared through Instagram and YouTube, and appearances on Kremlin-affiliated TV, depict him as a symbol of patriotic strength and resilience. These digital channels allow the regime to export influence beyond its borders, especially targeting Russian-speaking communities abroad. The Kremlin's framing of Ovechkin's record as a "triumph of sports soft power"—a term cited from domestic sources—illustrates how individual athletic success is woven into broader digital influence efforts designed to reinforce loyalty, distract from international isolation, and promote a unified national identity.

Source: The Jamestown Foundation, R. Arnold, 2025. Kremlin Propaganda Uses Alexander Ovechkin's Hockey Record to Promote Russian Nationalism. [online] Available at: https://jamestown.org/program/kremlin-propaganda-uses-alexander-ovechkins-hockey-record-to-promote-russian-nationalism/

Russian Disinformation Campaign Targets Moldova's Pro-EU President

NewsGuard reports that Maia Sandu, Moldova's pro-European President, has once again become the target of Russian disinformation efforts—this time in the context of the upcoming parliamentary elections in September 2025. During her re-election in 2024, Moldovan authorities reported a Russian hostile influence campaign (HIC) in support of a pro-Kremlin candidate. The current campaign is driven by the Russian operation "Matryoshka," known for producing seemingly authentic video forgeries and imitating credible media to spread false content. Since the election announcement, pro-Russian channels have circulated fake materials accusing Sandu of corruption and personal misconduct. One prominent example is a fabricated Vogue feature claiming Sandu is the most expensively dressed president in the world. Authentic images were used but falsely attributed to luxury brands with invented prices. According to NewsGuard, the clothing shown does not belong to the luxury segment but comes from a mid-range brand. Other forgeries include a manipulated Economist cover and a supposed BBC video accusing Sandu of embezzling public funds to support an alleged mistress.
So far, seven such fabricated posts have been identified on pro-Kremlin Telegram and X channels. Unlike previous campaigns focused on election manipulation, the current wave targets personal defamation, aiming to undermine Sandu's integrity, delegitimize her pro-Western stance, and deepen political polarization in Moldova.

Source: NewsGuard Reality Check, E. Maitland, 2025. Russian Disinformation Campaign Targets Moldova's Pro-EU President Ahead of Parliamentary Elections. [online] Available at: https://www.newsguardrealitycheck.com/p/russian-disinformation-campaign-targets

Historical Revisionism at the Core of Kremlin Disinformation

EU vs. Disinfo reports that the Kremlin is expected to intensify disinformation campaigns targeting European countries in the lead-up to Russia's Victory Day parade on May 9. These efforts increasingly rely on labeling critics as "Nazis" or "neo-Nazis." The overuse of the term "Nazi" has become a rhetorical tool to discredit opponents and rewrite history. Isolated incidents are exaggerated and framed as alleged state-level glorification of Nazi criminals. At the same time, Ukrainian commemorative initiatives related to World War II are portrayed as disrespectful or revisionist. This tactic serves as a political pretext to justify Russia's war of aggression against Ukraine and to delegitimize Western support for Kyiv. Russian state media have long since begun distorting the history of World War II. They promote the narrative that Russia single-handedly defeated Nazism, while portraying Western democracies and former Soviet republics—including Ukraine—as Nazi collaborators. Historical facts, such as the participation of six million Ukrainian fighters against Hitler, are deliberately omitted. These narratives are repeatedly circulated through influence campaigns. In TV broadcasts, AI-generated images, and social media posts, EU and NATO leaders are depicted in Nazi uniforms or as grotesque caricatures. Kremlin-aligned figures like Vladimir Solovyov, Margarita Simonyan, and former President Dmitry Medvedev use radical rhetoric and extremist language to amplify these messages. Outlets like RT and Sputnik spread this content globally and in multiple languages.

Source: EUvsDisinfo, 2025. Russian revisionism ahead of 9 May: 'Modern Europe is reincarnated Nazism'. [online] Available at: https://euvsdisinfo.eu/russian-revisionism-ahead-of-9-may-modern-europe-is-reincarnated-nazism/

[China]

Suspected China-Linked Influence Campaign Targets Exile Activists in UK

An investigation by The Guardian, in collaboration with the anti-racism organization Hope Not Hate, has uncovered evidence of a suspected disinformation campaign. It targets pro-democracy Hong Kong activists living in the United Kingdom. Following violent unrest in the UK in 2024, far-right channels on social media began inciting violence against asylum seekers. The focus soon shifted to prominent Hong Kong activists in exile. More than 150 posts from 29 accounts on platforms such as X and Telegram called for acts of violence against these individuals. The posts used derogatory language and spread false claims, accusing the activists of unlawful support for refugees and anti-national activity. These events may be part of an online influence operation linked to Chinese state actors. Many posts were written in broken English and appeared during Chinese working hours.
Some contained Chinese characters, typography, or references to figures associated with the Chinese government. In many cases, the activists were also doxxed: their home addresses and schedules were published. While no direct link to the well-known Spamouflage Dragon network could be confirmed, the patterns and methods closely resemble established state-backed disinformation efforts from China. The campaign may form part of the so-called "transnational repression strategy," aimed at silencing dissent beyond China's borders. Many of the targeted exiles had already faced arrest warrants, intimidation of family members, and bounties in Hong Kong. Chinese officials deny the allegations, but international cybersecurity analysts and law enforcement agencies warn that Beijing's strategy of fear and manipulation is advancing.

Source: The Guardian, T. Burgis & M. McClenaghan, 2025. Revealed: online campaign urged far right to attack China's opponents in UK. [online] Available at: https://www.theguardian.com/world/2025/apr/28/revealed-online-campaign-urged-far-right-to-attack-chinas-opponents-in-uk?CMP=Share_iOSApp_Other

AI RELATED ARTICLES

AI Media Advances and Disinformation Tactics

DFRLab takes an in-depth look at how generative AI is reshaping the dynamics of disinformation campaigns. Modern tools such as diffusion models, GANs, and multimodal systems enable the creation of highly realistic synthetic media. While GANs typically output finished images with limited user control, newer systems allow users to define image content with far greater precision. Telltale signs like asymmetrical jewelry or warped backgrounds are increasingly rare. Modern generative systems can accurately render symmetrical objects, such as glasses, or remove them entirely. With text-based prompts, users can adjust even the smallest visual elements. Additionally, many of these systems tend to favor idealized aesthetics and cinematic visuals, which enhance the believability of the output. As a result, traditional detection methods are becoming less effective. These technologies are already being used in global disinformation efforts. DFRLab highlights the use of AI-generated profile images in pro-UAE campaigns during the COP28 summit and deepfake attacks in Brazil's 2024 elections. Multimodal AI generates content with realistic settings, branding, and contextual cues that strengthen credibility. The shift from crude forgeries to refined AI-generated visuals accelerates the reach and effectiveness of hostile influence campaigns, making them significantly more challenging for researchers and the public to detect.

Source: DFRLab, A. Chenrose & M. Rizzuto, 2025. The evolving role of AI-generated media in shaping disinformation campaigns. [online] Available at: https://dfrlab.org/2025/05/01/the-evolving-role-of-ai-generated-media-in-shaping-disinformation-campaigns/

GENERAL REPORTS

Polling Misinformation May Mislead Australian Voters

ABC News examines various types of surveys and how they may contribute to the spread of disinformation. One example involves Labor's claim that Peter Dutton was voted the worst health minister. It is based on an online survey conducted a decade ago with only 1,077 respondents from a medical publication's readership. The survey lacked scientific methodology and excluded subsequent health ministers, yet the results are being presented as representative of the entire medical community.
Some surveys may selectively present or omit results. For instance, Clive Palmer, founder and leader of the United Australia Party (UAP), claimed Australians supported "Trump-like" policies, yet the actual results showed more people opposed them than supported them. Additionally, some polls use leading questions to influence public opinion, as seen in a survey praising an independent MP before asking respondents for their voting preference. The most manipulative form of polling is "push polling," where questions are designed to plant negative impressions rather than gather unbiased data. Some voters reported receiving biased surveys about independent candidates funded by "Climate 200," with the survey being abruptly cut off if they supported other parties.

Source: ABC News, M. Workman, 2025. From unscientific surveys to omitted results, polling can mislead voters. Here's how not to get fooled. [online] Available at: https://www.abc.net.au/news/2025-04-29/how-polling-can-mislead/105214486

Hostile Influence Threatening the Integrity of Kosovo's 2025 Election

The BIRN Kosovo report on the 2025 parliamentary election highlights significant hostile external influence, especially from Russia and Serbia. Russian state-funded media outlets such as Sputnik Serbia and RT Balkan played an essential role in disseminating disinformation that undermined the credibility of the election and Prime Minister Albin Kurti. These narratives falsely suggested that the West, particularly the US, was supporting Kurti to instigate an inter-ethnic conflict in Kosovo. Serbian media outlets echoed these messages, further inflaming tensions. Chinese state-controlled media also contributed, though to a lesser extent, spreading narratives critical of NATO and the West. These efforts aimed to destabilize Kosovo's political landscape and its relations with Western institutions. Social media played a central role in amplifying these disinformation efforts. Political actors, sometimes with external backing, used platforms like Facebook, Instagram, and Twitter to bypass traditional media and directly influence public opinion. Anonymous accounts and bots were widely used to amplify misleading content. Ultimately, the lack of substantive policy debate, the unregulated use of disinformation, and opaque campaign financing severely compromised the election's informational integrity. BIRN's findings stressed the urgent need for more vigorous enforcement of media laws, the regulation of AI in political discourse, and strategic institutional responses to foreign and domestic disinformation.

Source: Balkan Investigative Reporting Network (BIRN), Gashi, K., Prebreza, V., Ahmeti, A., Jakupi, R., Kelmendi, V. & Nikolić, I., 2025. Hate speech and disinformation during the 2025 election in Kosovo. [online] Available at: https://www.eeas.europa.eu/sites/default/files/documents/2025/Hate%20Speech%20and%20Disinformation%20during%20the%202025%20Election%20in%20Kosovo%20EN.pdf

Viral Disinformation Marks First 100 Days of Trump's Second Term

The NewsGuard report on Donald Trump's second presidency identifies 20 viral false claims that garnered over 134 million views and nearly 4 million likes in the first 100 days of his administration. Many of these falsehoods reflect hostile influence, particularly through Russian disinformation campaigns.
The Kremlin's "Matryoshka" campaign spread fake videos linking Trump to unfounded claims, such as imposing tariffs on uninhabited islands and banning pro-Ukrainian slogans. These narratives were designed to undermine trust in Trump and the U.S. while stoking geopolitical tensions. Additionally, pro-Trump users spread false claims, presenting misinformation as proof of Trump's support for the middle class, such as the claim that he abolished taxes on tips and overtime. This disinformation was primarily spread on platforms like TikTok and X to influence political perceptions. Satirical websites like "America's Last Line of Defense" also deliberately spread false information that was not recognized as parody, further exacerbating political polarization. These campaigns illustrate how hostile influence from foreign and domestic sources was used to manipulate public opinion and distort political narratives during the election period.

Source: NewsGuard Reality Check, Rubinson, S., 2025. 100 Days, 134 Million Views of False Claims About Trump. [online] Available at: https://www.newsguardrealitycheck.com/p/100-days-134-million-views-of-false

Social Media's Influence on the 2025 Canadian Election

The DFRLab report on the 2025 Canadian federal election shows that it was marked by disinformation and hostile foreign influence. Meta's news blackout under Bill C-18 created an information vacuum, contributing to the spread of hyperpartisan and misleading content. With Meta's fact-checking programs ended, false narratives flourished on platforms like Facebook and Instagram. AI-generated content and deepfakes, particularly targeting candidates such as Liberal leader Mark Carney, were also widespread. China was identified as a significant foreign actor, using platforms like WeChat to spread manipulated content aimed at the Chinese-Canadian community. Although domestic actors were dominant in the information landscape, these foreign influence campaigns contributed to distorting the political discourse. The Conservative Party further politicized the issue of foreign interference during the campaign. The lack of transparency on platforms hindered efforts to trace the origins of these disinformation campaigns. While the disinformation did not directly alter the election outcome, it contributed to political polarization and eroded public trust in the electoral process. The combination of domestic and foreign disinformation presents a growing challenge to the integrity of democratic processes, particularly in the digital age.

Source: DFRLab, Mashkoor, L., 2025. How social media shaped the 2025 Canadian election. [online] Available at: https://dfrlab.org/2025/04/29/how-social-media-shaped-the-2025-canadian-election/

Study Highlights the Impact of Misinformation on the 2025 Australian Election

According to a report by The Conversation, a national study in Australia found that at least two-thirds of Australians had already encountered false or misleading election content during the campaign's early stages. This content included distorted claims about candidates and policies, misinformation about voting procedures, and baseless allegations of election rigging. Some respondents had difficulty distinguishing fact from fiction, which led to increased uncertainty and impaired informed decision-making.
The most common misinformation concerned core policy issues such as Medicare, housing, and climate change, and was often associated with figures like Donald Trump, Clive Palmer, and major political parties. Both social media and established news sources were flagged as sources of this misleading content. While exposure to disinformation does not always result in a change of opinions, it can erode trust in democratic processes, especially among voters more susceptible to such misinformation. The study also drew parallels with global trends, including false narratives seen after the 2020 US elections, which can undermine trust in democratic institutions. The Australian Electoral Commission (AEC) has taken steps to combat disinformation, including a disinformation register, media partnerships, and public education initiatives. Most Australians recognize the severity of the issue and support proactive measures to counter disinformation. The study emphasizes that maintaining trust in democracy requires clear information and cooperation between institutions, the media, and voters.

Source: The Conversation, Caballero, L., 2025. Fake news and the election campaign – how worried should voters be? [online] Available at: https://theconversation.com/fake-news-and-the-election-campaign-how-worried-should-voters-be-255514

Appendix - Frameworks to Counter Disinformation

EU Funds 5 Million Euros to Strengthen Resilience to Disinformation

The European Commission announces the allocation of nearly €5 million in funding to combat disinformation and strengthen media literacy across the EU. The initiative includes two important calls for proposals. The first call, worth €3.15 million, focuses on detecting harmful information manipulation and understanding how disinformation affects citizens, while developing strategies to strengthen societal resilience. The second call, valued at €1.6 million, supports independent fact-checkers by promoting verified content using creative media formats and partnerships with influencers, podcasters, and media outlets.

Source: European Commission, 2025. EU funding of €5 million to strengthen media literacy and resilience to disinformation. [online] Available at: https://digital-strategy.ec.europa.eu/en/news/eu-funding-eu5-million-strengthen-media-literacy-and-resilience-disinformation

GLOSSARY

Information Operations
The employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare.

Hybrid Warfare
A strategy that blends conventional warfare (kinetic), irregular warfare, and cyber warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.

Cyber Warfare
Commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.
Cyfluence Attack
A cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.

Soft Warfare
All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.

CIB
Meta's terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.

FIMI
The EU's terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity.

Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.

Digital Impact on Discourse (DID)
A non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC.

Misinformation
False, inaccurate, or misleading information communicated regardless of the intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.

Disinformation
Misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".

Inauthentic Behavior
Defined by Facebook as "the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards". We have broadened this term to encompass all social media platforms, mutatis mutandis.

Fake users
Also known as avatars: a generic term describing all types of users who are not legitimate social media users, i.e., they are bots, or are operated by humans but not under their real identity, or are operated by humans under their real identity but for the sole purpose of promoting an agenda that is not theirs.

Unidentified users
A generic term used to describe users on social networks who are allowed to keep their real identity undisclosed (as on Twitter, for example).

Sockpuppet accounts
A sock puppet or sockpuppet is an online identity used for deception.

Bots
Autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.

Repurposed accounts
Social media accounts that were hacked or purchased, then used for purposes different from the original ones.

Fake website
A website designed for fraudulent or scam activity, hiding its real purpose.
Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [Back to Table of Contents] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. 
If any provision is found invalid, the remaining terms remain in full effect.

  • Weekly Report: Cyber-based influence campaigns 21st - 27th of April 2025

    [Listen to the Podcast] [Download Report] > TABLE OF CONTENTS < SOCIAL MEDIA PLATFORMS [ Meta] Ongoing Criticism of Meta’s Fact-Checking Policy STATE ACTORS [ Russia] Dutch Annual Report Warns of Expanding Russian Cyber Threats Russia’s Disinformation Offensive in Moldova’s 2024 Elections The Kremlin Exploits Auschwitz Liberation Anniversary to Spread Disinformation EU Pledges Support Against Russian Disinformation in Moldova [ The War in Ukraine] Disinformation Campaigns Surround Chornobyl Drone Strike [China] Australia: The dangers of using Chinese apps in election campaigns AI RELATED ARTICLES AI-Fueled Misinformation Threatens Canada’s Election Anthropic Discovered Sophisticated AI-Powered Influence Campaign GENERAL REPORTS Bot-Like Networks Target Canadian Election with Disinformation Canada’s 2025 Election Targeted by Foreign Disinformation Surge of Visual Disinformation Escalates Political Tensions Gen Z’s High Vulnerability to Online Disinformation Meta’s Decision to Block Canadian News: A Gateway for Misinformation Disinformation Affecting Climate Debate in Australia’s Election GLOSSARY SOCIAL MEDIA PLATFORMS [Meta]  Ongoing Criticism of Meta’s Fact-Checking Policy   TechXplore highlights recent developments regarding Meta’s decision to end its U.S. fact-checking program on Facebook in January 2025 (see Cyfluence Weekly Review, W02/2025). Since then, Meta has shifted its focus to its new “Community Notes“ system. The change has sparked concern among many independent observers. The Meta Oversight Board, which reviews the company’s content moderation practices, called the move “hasty” and warned it could increase the spread of harmful and misleading content. In response, the Board issued 17 recommendations, including a call for Meta to assess how Community Notes compares to traditional third-party fact-checking. Critics like Michael Wagner from the School of Journalism and Mass Communication at the University of Wisconsin-Madison argue that relying on unpaid users to police misinformation across a platform with billions of users reflects a serious lapse in corporate responsibility and poses risks to public safety and human rights. Meta has pledged to follow the Board’s decisions in individual cases but is not required to implement its broader policy recommendations. Source:    Tech Xplore, 2025. Oversight board voices concern about Meta nixing fact checks . [online] Available at: https://techxplore.com/news/2025-04-oversight-board-voices-meta-nixing.html [Back to Table of Contents] STATE ACTORS [Russia] Dutch Annual Report Warns of Expanding Russian Cyber Threats The Netherlands’ Military Intelligence and Security Service’s (MIVD) 2024 annual report outlines security threats posed by several states, including Russia, China, and Iran. While China and Iran are primarily associated with traditional espionage, cyber espionage, and technology theft, Russia is presented as a significantly broader and more aggressive threat. A section on digital influence appears exclusively under the Russia chapter. According to the report, Russia pursues a whole-of-society strategy. State institutions, intelligence services, private companies, and hacktivists work cooperatively. The goal is to destabilize Western societies through the spread of pro-Russian narratives, the erosion of NATO cohesion, and the disruption of democratic processes, such as the run-up to the European elections in the Netherlands. 
This strategy’s key and increasingly used component is deploying artificial intelligence (AI) to automate and amplify digital campaigns. These activities form part of a hybrid warfare approach that combines digital tools, psychological manipulation, and sabotage. The MIVD cites specific incidents, including DDoS attacks on websites of political parties and public transport operators, which were intended to interfere with the European elections in the Netherlands. Additionally, Infosecurity Magazine reports that the Dutch defense minister has warned of Russia’s accelerating militarization, driven by its war economy, which is advancing faster than NATO’s current efforts. This development calls for an urgent reinforcement of Europe’s defense capabilities. By deliberately blurring the boundaries between peace and conflict, Russia employs disinformation as a tool to fragment Western cohesion and undermine collective responses to its geopolitical ambitions. Sources:   Military Intelligence and Security Service (MIVD), 2025. Public Annual Report 2024 . [online] Available at: https://www.politico.eu/wp-content/uploads/2025/04/22/MIVD_Openbaarjaarverslag2024_digitaal-1-1.pdf   Infosecurity Magazine, P. Muncaster, 2025. Dutch Warn of “Whole of Society” Russian Cyber-Threat . [online] Available at: https://www.infosecurity-magazine.com/news/dutch-warn-whole-society-russia/   [Back to Table of Contents] Russia’s Disinformation Offensive in Moldova’s 2024 Elections As stated in an EU vs. Disinfo article , during Moldova’s 2024 presidential election and EU referendum, Russia launched an aggressive and multifaceted disinformation campaign aimed at destabilizing the country’s democratic processes. Russia used covert and overt channels, including official state media like TASS and Sputnik Moldova. It intensified its Foreign Information Manipulation and Interference (FIMI) operations to undermine President Maia Sandu and discredit Moldova’s EU integration. The Kremlin recycled disinformation narratives from Ukraine, such as organ trafficking allegations, and tailored them to Moldovan audiences through a vast network of Telegram channels and fake local news sites.  To flood the information space, Russian-linked platforms created Moldova-focused domains and media outlets, many tied to RT and other state-backed entities. These outlets propagated anti-EU messages, portraying European integration as a threat to Moldova’s sovereignty, a danger to Russian-speaking communities, and a provocation toward Russia. The campaign used manipulative techniques like AI-generated content, fake documents, and fabricated fact-checks to confuse the public, while even deploying a Telegram bot to pay Moldovans to spread anti-EU propaganda—weaponizing local poverty for strategic gain.  Source:    EUvsDisinfo, 2025. In Russia’s FIMI laboratory: test case, Moldova . [online] Available at: https://euvsdisinfo.eu/in-russias-fimi-laboratory-test-case-moldova/   [Back to Table of Contents] The Kremlin Exploits Auschwitz Liberation Anniversary to Spread Disinformation  EUvsDisinfo reports that Russia was not invited to the international ceremony marking the 80th anniversary of the liberation of Auschwitz on 27 January 2025. The Auschwitz-Birkenau Museum declined to invite Russian representatives, citing the ongoing war of aggression against Ukraine, which it described as a “barbaric act.” The Kremlin responded with a coordinated disinformation campaign. 
Russian officials and state media framed the exclusion as a result of Western "Russophobia," historical falsification, and a supposed revival of Nazi ideology in the EU and Ukraine. Russia portrayed itself as the victim of an intentional effort to erase its role in World War II. Key Kremlin narratives include the portrayal of Russia as the sole legitimate heir to the Soviet Union, the claim that only Russian soldiers liberated Auschwitz, and the regular labeling of Ukraine and its allies as "neo-Nazis." Historical facts—such as the participation of around seven million Ukrainians in the Red Army—are deliberately omitted. This rhetoric is revisionist: it distorts historical reality, relativizes the Holocaust, and equates present-day political criticism of Russia with the crimes of Nazism. Russia attempts to position itself as a contemporary victim of a "fascist West". One striking example of this disinformation strategy is the fabricated claim that Poland's Chief Rabbi, Michael Schudrich, called President Zelenskyy a "dictator who has tamed Nazis on a leash." This false statement first circulated on Russian Telegram channels and was later repeated publicly by Russian Foreign Ministry spokesperson Maria Zakharova—a typical case of Kremlin-generated and state-amplified disinformation. Source: EUvsDisinfo, 2025. 80 years after Auschwitz – The Kremlin's manipulative use of the Holocaust. [online] Available at: https://euvsdisinfo.eu/80-years-after-auschwitz-the-kremlins-manipulative-use-of-the-holocaust/ [Back to Table of Contents] EU Pledges Support Against Russian Disinformation in Moldova In a joint press conference, EU High Representative Kaja Kallas and Moldovan President Maia Sandu highlighted the escalating threat posed by Russian disinformation campaigns seeking to destabilize Moldova's democratic institutions and its path toward EU integration. They emphasized the need for coordinated efforts to counteract these hybrid threats, which include spreading false narratives and manipulating public opinion to undermine trust in democratic processes. They stressed that the disinformation campaigns targeting Moldova are primarily conducted by Russian state-sponsored actors who use various platforms to spread false or misleading narratives. These efforts aim to destabilize the country, weaken its democratic institutions, and erode its sovereignty. In response, the European Union is helping to protect Moldova's democracy and electoral infrastructure. At the request of Moldovan authorities, the EU is deploying a Hybrid Rapid Response Team to strengthen institutional capacity and safeguard the integrity of upcoming elections. Additionally, the EU continues to assist Moldova in building resilience to hybrid threats and enhancing crisis preparedness. Source: European External Action Service, K. Kallas and M. Sandu, 2025. Moldova: Remarks at the joint press conference. [online] Available at: https://www.eeas.europa.eu/eeas/moldova-remarks-high-representativevice-president-kaja-kallas-joint-press-conference%C2%A0-president-maia_en [Back to Table of Contents] [War in Ukraine] Disinformation Campaigns Surround Chornobyl Drone Strike As published by DFRLab, the drone strike on the Chornobyl nuclear power plant on February 14, 2025, reignited global fears of a nuclear disaster and swiftly became a battleground for disinformation. 
While Ukrainian officials confirmed that the attack, allegedly by Russia, damaged the reactor’s protective shelter without causing a radiation leak, Russia denied involvement and blamed Ukraine. This was quickly amplified by Kremlin-affiliated media and over 100 Russian and Belarusian Telegram channels, many of which had already primed audiences with conspiracy theories weeks prior, claiming Ukraine was planning a provocation to frame Russia. These narratives not only accused Ukraine of staging the attack to gain sympathy before the Munich Security Conference but also questioned the structural integrity of the reactor’s protective dome, suggesting corruption and Western orchestration. The messaging weaponized the historical trauma of the 1986 Chornobyl disaster, evoking panic by implying that any attack—even a contained one—could unleash catastrophic fallout. By invoking past tragedies, Russian disinformation outlets effectively reinforced fear while positioning themselves as victims of Western and Ukrainian schemes.  Beyond the Chornobyl incident, the same networks have recycled similar disinformation tactics, including unfounded claims of Ukrainian strikes on the Kursk NPP and alleged NATO complicity. These coordinated campaigns use anonymous sources, fabricated evidence, and high-volume repetition across social media to create confusion, deflect blame, and undermine trust in Ukrainian leadership and Western allies.  Source:    DFRLab, I. Adam and S. Rothbardt, 2025. Narrative and provocation: What Telegram had to say about the February attack on Chornobyl nuclear plant . [online] Available at: https://dfrlab.org/2025/04/25/narrative-and-provocation-what-telegram-had-to-say-about-the-february-attack-against-chornobyl-npp/   [Back to Table of Contents] [China] Australia: The dangers of using Chinese apps in election campaigns   ABC News highlights   increased political activity on Chinese-language platforms during Australia’s 2025 federal election campaign. The Liberal Party, and to a lesser extent, Labor, use WeChat and RedNote to reach voters in electorates with large Chinese-Australian communities. The Liberals have published over 220 authorised ads on WeChat since January. Labor has published about 30. Liberal candidate Scott Yung is mainly active in Bennelong, a key marginal seat. Both major parties have also run attack ads targeting Prime Minister Anthony Albanese and Opposition Leader Peter Dutton. These appeared on WeChat. RedNote poses additional problems. The platform discourages political content and uses shadow banning. Posts and candidate names do not appear in searches. Some content is technically online but remains hard to find, limiting the visibility of political messages. Not all candidates can reach voters equally. Ads may cost more or be suppressed without explanation. Such conditions open the door to foreign interference. If specific messages are blocked or disadvantaged by opaque algorithms, the fairness of the electoral process is at risk . ​ Source:    ABC News, F. Willan, 2025. Election battle on Chinese apps intensifies as Liberals target crucial voters . 
[online] Available at: https://www.abc.net.au/news/2025-04-27/chinese-social-media-apps-political-campaigns-federal-election/105201336 [Back to Table of Contents] AI Related Articles AI-Fueled Misinformation Threatens Canada's Election CyberNews reports that, in the lead-up to Canada's federal election on April 28, 2025, a wave of politically themed books targeting Prime Minister Mark Carney appeared on Amazon. Many of these titles were generated using artificial intelligence. The content often included factual inaccuracies, exaggerated claims, and surreal passages, and most readers were unaware that the texts were AI-generated. The books were published via Amazon's Kindle Direct Publishing platform, which does not require editorial review or disclosure of AI authorship. While many of these books were later removed, they had already reached a broad audience. The case shows how generative AI, combined with open publishing platforms, can distort political narratives at scale and with minimal oversight. This creates a potential tool for digital hostile influence, especially during election periods when the integrity of public discourse is particularly vulnerable. Source: Cybernews, M. Walsh, 2025. AI election interference enters the Amazon era. [online] Available at: https://cybernews.com/news/ai-amazon-canadian-election/ [Back to Table of Contents] Anthropic Discovered Sophisticated AI-Powered Influence Campaign A report published by Anthropic detailed the disruption of a sophisticated AI-powered disinformation operation that marked a significant evolution in online influence tactics (for more information, see our Blog). This financially motivated "influence-as-a-service" provider used Anthropic's Claude AI to orchestrate over 100 fake social media personas on X and Facebook, managing interactions, generating responses, and coordinating image creation. Unlike traditional viral disinformation campaigns, this operation emphasized persistence, relationship building, and covert integration into real user communities to gradually shape political opinions, promoting narratives that supported or undermined European, Iranian, UAE, and Kenyan interests. The operation demonstrated notable technical sophistication by centralizing persona management through a structured, JSON-based system. Claude AI made strategic decisions for each persona, aligning interactions with specific political goals and generating content in each persona's native language and style. The coordinated use of AI tools also helped operators maintain consistent political narratives across accounts, evading traditional detection methods by mimicking authentic human behavior and engagement patterns. A schematic illustration of such a persona record follows below. 
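To make the idea of a "structured, JSON-based system" for persona management more concrete, the sketch below shows what a single persona record in such an influence-as-a-service setup might look like. It is a minimal, hypothetical illustration: the field names, values, and orchestration notes are our assumptions, not details taken from Anthropic's report.

```python
# Purely illustrative sketch of a JSON-style persona record for an
# "influence-as-a-service" orchestrator. All fields are hypothetical.
import json

persona = {
    "persona_id": "acct-0042",                 # hypothetical internal identifier
    "platforms": ["x", "facebook"],            # where the persona is active
    "language": "en",                          # language the content is generated in
    "tone": "casual, mildly sarcastic",        # style guidance for the text generator
    "political_alignment": "pro-narrative-A",  # the narrative the persona promotes
    "engagement_policy": {
        "reply_to_replies": True,              # keep conversations going over time
        "max_posts_per_day": 5,                # low volume to avoid spam detection
    },
}

# An orchestrator would serialize records like this and pass them, together with
# the conversation context, to a language model that drafts the persona's replies.
print(json.dumps(persona, indent=2))
```

The design point the report emphasizes is that such structured records let operators keep each fake account's voice and political goals consistent across hundreds of interactions.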
Anthropic's findings highlighted a significant shift in disinformation tactics, moving from content-centric to relationship-centric strategies. The report stressed that although the campaign had limited viral impact, it effectively embedded politically aligned personas into real communities, subtly influencing conversations over time. Source: Anthropic, K. Lebedev, A. Moix, & J. Klein, 2025. Operating Multi-Client Influence Networks Across Platforms. [PDF] Available at: https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf [Back to Table of Contents] GENERAL REPORTS Bot-Like Networks Target Canadian Election with Disinformation DFRLab's investigation revealed that bot-like accounts on X aggressively targeted Canadian political parties ahead of the 2025 federal election, amplifying disinformation through repetitive spam tactics. Primarily attacking the Liberal Party and its leader, Prime Minister Mark Carney, these accounts spread narratives linking Carney to China, Jeffrey Epstein, and the World Economic Forum. The disinformation strategy relied heavily on recycled graphics from sources like Canada Proud and Conservative leader Pierre Poilievre's official account, rather than creating new content. Actors involved included suspected automated X accounts and politically motivated groups, with some disinformation campaigns previously linked to Chinese influence efforts. The findings underscore broader concerns about the erosion of election integrity and democratic discourse, as highlighted by Canada's ongoing inquiry into foreign interference. DFRLab's research adds to mounting evidence that inauthentic online activity, especially during critical political periods, poses an existential threat to democratic systems worldwide. Source: Digital Forensic Research Lab, 2025. Bot-like activity targets Canadian political parties and their leaders ahead of election. [online] Available at: https://dfrlab.org/2025/04/25/bot-like-activity-targets-canadian-political-parties-and-their-leaders-ahead-of-election/ [Back to Table of Contents] Canada's 2025 Election Targeted by Foreign Disinformation DFRLab investigated bot-like behavior on X (formerly Twitter) ahead of Canada's April 2025 snap election. The analysis focused on eight English-language accounts belonging to the major political parties and their leaders (Liberal, Conservative, NDP, Bloc Québécois). Between January 1 and April 17, 2025, researchers collected 3,507 posts and 2.4 million related interactions, including replies and reposts. Using behavioral indicators such as post volume, reply speed, and linguistic patterns, DFRLab identified 150 bot-like accounts, 65 of which were created in 2025, suggesting intent to influence the election. Many accounts followed generic naming conventions and demonstrated automated or coordinated behavior. Some were later restricted by X due to "unusual activity." Roughly four in five accounts targeted the Liberal Party, especially Prime Minister Mark Carney, often through repeated image spam. Familiar narratives accused Carney of loyalty to China, ties to Jeffrey Epstein and Ghislaine Maxwell, and involvement in the World Economic Forum. Content was rarely original; bots reused graphics from groups like Canada Proud or the official account of Conservative leader Pierre Poilievre. Some replies contained text artifacts indicative of AI-generated content, such as spacing errors before punctuation (a simplified sketch of how such indicators can be combined follows below). 
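DFRLab's exact scoring method is not published in this summary, so the snippet below is only a simplified sketch of how behavioral indicators of the kind described (post volume, reply speed, account age, generic naming, and spacing artifacts in text) could be combined into a rough bot-likeness score. The thresholds, weights, and Account fields are illustrative assumptions, not DFRLab's actual methodology.

```python
# Illustrative bot-likeness heuristic; thresholds and weights are invented for the example.
import re
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    created_year: int
    posts_per_day: float
    median_reply_seconds: float
    sample_text: str

def bot_likeness(acc: Account) -> float:
    score = 0.0
    if re.fullmatch(r"[A-Za-z]+\d{4,}", acc.handle):   # generic name plus long digit suffix
        score += 1.0
    if acc.created_year >= 2025:                        # account created shortly before the vote
        score += 1.0
    if acc.posts_per_day > 50:                          # unusually high posting volume
        score += 1.0
    if acc.median_reply_seconds < 30:                   # replies faster than a typical human
        score += 1.0
    if re.search(r"\s[,.!?]", acc.sample_text):         # stray space before punctuation, an
        score += 0.5                                    # artifact sometimes seen in AI-written text
    return score

suspect = Account("maplepatriot58234", 2025, 120, 12, "Carney must answer for this , now !")
print(bot_likeness(suspect))   # accounts with high scores would be flagged for manual review
```

In practice, researchers combine many more signals (coordination between accounts, reuse of identical images, activity timing) and always follow automated flags with human review, since any single indicator produces false positives.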
Conservative accounts were also targeted, but to a lesser extent. DFRLab concludes that such inauthentic activity distorts political discourse by artificially inflating certain narratives. In a high-stakes election marked by international tensions and record early voter turnout, bots present a clear risk to democratic integrity and public trust. ​ Source:    Digital Forensic Research Lab (DFRLab), 2025. Bot-like activity targets Canadian political parties and their leaders ahead of election . [online] Available at: https://dfrlab.org/2025/04/25/bot-like-activity-targets-canadian-political-parties-and-their-leaders-ahead-of-election/   [Back to Table of Contents] Surge of Visual Disinformation Escalates Political Tensions  A study by The Conversation examined the connection between visual disinformation and political violence. The aim was to investigate how manipulated images, memes, and other visual content contribute to political instability and conflict. The research team analyzed a dataset of approximately 6 million posts and 3 million images from 989 Russian military bloggers on Telegram. Using AI tools, they detected visual manipulations, focusing on the period leading up to Russia’s 2022 invasion of Ukraine. The study recorded a nearly 9,000% spike in posts and a 5,000% increase in altered images shortly before the outbreak of war. The findings suggest that visual disinformation is deliberately used to distort public perception, deepen social divisions, and justify violence. Images and memes often have a substantial emotional impact and spread more quickly than text-based content. One of the study’s key conclusions is that artificial intelligence can assist in identifying technical manipulations, but cannot reliably interpret visual content’s political or ideological meaning. Human expertise remains essential for understanding the context and intent of these messages. Source:    The Conversation, T. Weninger and E. Verdeja, 2025. Memes and conflict: Study shows surge of imagery and fakes can precede international and political violence . [online] Available at: https://theconversation.com/memes-and-conflict-study-shows-surge-of-imagery-and-fakes-can-precede-international-and-political-violence-233055   [Back to Table of Contents] Gen Z's High Vulnerability to Online Disinformation  Politico examines how Generation Z, though raised with digital technology, is highly susceptible to online misinformation. A 2021 Stanford University study found that only three out of 3,446 high school students could correctly identify the origin of a video that falsely claimed to show U.S. voter fraud but depicted footage from Russia. This result challenges the assumption that digital natives are naturally skilled at evaluating online content.   TikTok has become a key news source for many young users. At the same time, it is a space where conspiracy theories and unverified claims spread quickly. Despite expert warnings, these include doubts about Helen Keller’s life, allegations of government-controlled weather, and viral health trends such as applying beef tallow to the skin. A lack of trust in institutions, such as the media, education, and science, shapes how Gen Z processes information. Many rely on comment sections to assess accuracy rather than consulting reliable sources. These sections often reflect algorithm-driven echo chambers that reinforce existing views and filter out opposing perspectives. One example involved an AI-generated audio clip of Donald Trump that went viral. 
The correction appeared only later in the comments, after the video had been widely shared. Researchers like Rakoen Maertens from the University of Oxford point to limited life experience as another factor. Schools often teach close reading but rarely train students to verify information across sources. Experts note that this pattern may spread to older generations. As algorithms personalize information across platforms, all age groups may face growing exposure to misleading content. Source: POLITICO, C. Kim, 2025. How Gen Z Became the Most Gullible Generation. [online] Available at: https://www.politico.com/news/magazine/2025/04/23/gen-z-media-tiktok-misinformation-00287561 [Back to Table of Contents] Meta's Decision to Block Canadian News: A Gateway for Misinformation NewsGuard notes that in August 2023, Meta stated, "people using our platforms don't come to us for news." However, a June 2024 Reuters Institute survey found that 25 percent of Canadians use Facebook for news. The blocking of Canadian news due to the Online News Act created an information vacuum that was quickly filled by misinformation sources. With reliable news sources no longer available on Facebook and Instagram, false narratives began to spread. NewsGuard identified several false claims targeting Mark Carney, including one alleging that he called Trump supporters "a generational damage to the United States" and another claiming he sold Canada's gold reserves. These falsehoods reached millions of users. Although Canadian media quickly debunked the claims, the fact-checks were often not shared on Meta platforms. One example is the false claim that Carney was barred from serving in Parliament under Canadian law, which continued to spread because the corrections were not posted on social media. The blocking of Canadian news sources has undermined trust in democratic processes. Unchecked misinformation spreads, manipulating public opinion and contributing to societal polarization. This development illustrates the problematic nature of blocking legitimate news sources and how it can erode the integrity of public discourse. Source: NewsGuard Reality Check, S. Howard and N. Dirks, 2025. With Canadian News Outlets Barred from Facebook and Instagram, Bad Actors Are Filling the Void with Election Misinformation. [online] Available at: https://www.newsguardrealitycheck.com/p/with-canadian-news-outlets-barred [Back to Table of Contents] Disinformation Affecting Climate Debate in Australia's Election According to The Conversation, ahead of the 2025 Australian federal election, misinformation about climate change has emerged as a significant issue. Misleading claims about renewable energy and gas distort public perception and hinder climate action. Three major misinformation campaigns are active during the election: Clive Palmer's "Trumpet of Patriots" campaign spread an outdated scientific claim suggesting that temperatures in Greenland were not rising, which Google removed after 1.9 million views. The group "Responsible Future Illawarra" spreads false information about offshore wind farms, claiming they harm marine life, despite scientific studies showing the opposite. Finally, the pro-gas campaign "Australians for Natural Gas" promotes the idea that gas is essential for Australia's energy mix and that climate policies harm the economy, even though a shift to renewable energy is feasible. 
Experts recommend protecting oneself from such misinformation by using "pre-bunking" strategies and the SIFT method, which involves verifying sources and tracing information to ensure the climate debate is based on facts rather than fiction. Source: The Conversation, D. Lewins (AAP), 2025. These three climate misinformation campaigns are operating during the election run-up. Here's how to spot them. [online] Available at: https://theconversation.com/these-3-climate-misinformation-campaigns-are-operating-during-the-election-run-up-heres-how-to-spot-them-253441 [Back to Table of Contents] 

  • Weekly Report: Cyber-based influence campaigns 23rd March – 30th March 2025

INTRODUCTION Cyber-based hostile influence campaigns aim to influence target audiences by disseminating information and/or disinformation over the internet, sometimes in conjunction with cyberattacks, which amplify their impact (hence the term Cyfluence, as opposed to cyberattacks that seek to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered an epistemological branch of Information Operations (IO) or Information Warfare (IW). Typically, and as has been customary over the last decade, information is disseminated across various internet platforms, which form the different elements of the hostile influence campaign. As such, the connectivity and repetitiveness of content between these elements are the primary characteristics that define influence campaigns. Much like cyberattacks, hostile influence campaigns have also become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much as in the cybersecurity arena, PR professionals and government agencies respond to negative publicity and disinformation shared through news and social media. We use the term 'cyber-based hostile influence campaigns' because we also include in this definition cyberattacks aimed at influencing (such as hacking and leaking during election time), while excluding other, more traditional kinds of influence, such as diplomatic, economic, and military. Between March 24 and 30, 2025, we observed, collected, and analyzed endpoints related to cyber-based hostile influence campaigns, including Cyfluence attacks. The following report summarizes the key events we consider most significant. Some campaigns involve social media and news outlets, while others utilize cyber-attack capabilities. [Listen to the Podcast] [Download Report] TABLE OF CONTENTS HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russian Intelligence Engages in Overt Disinformation [War in Ukraine] Austria: Russian Influence Campaign Unveiled [China] Global Times Rejects Allegations of Chinese Hostile Interference China's Disinformation Campaigns and Japan's Vulnerabilities AI RELATED ARTICLES The Rise of Disinformation in the Age of AI AI Spreads False Narratives and Climate Disinformation GENERAL REPORTS Disinformation Targets UK MPs and Bangladesh Central Bank Chief Hostile Influence Campaigns Target Canadian Electoral Process France Identified as Top Target of Foreign Hostile Influence in Europe Combating Disinformation in Kenya and Senegal Fake Accounts Drive Disinformation on Pope Francis' Health APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION Disinformation Cited as Justification for Facebook Suspension in PNG GLOSSARY DISCLAIMER HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS Russia Russian Intelligence Engages in Overt Disinformation EU vs. Disinfo warns that Russian disinformation campaigns have long aimed to portray protests in Russia and neighboring countries as Western-orchestrated. The goal is to delegitimize democratic movements, often by alleging that protesters are paid. A typical example is the false claim that the West funded the 2020 Belarus protests with $6 billion in cryptocurrencies. After the elections in Georgia, Russian state media, including Sputnik, claimed the EU was financing protests, a claim reinforced by an official SVR press release. While Russian intelligence usually operates covertly through proxies, leaks, and fake news platforms, its recent public messaging signals a shift. 
The SVR’s open statement marks growing boldness, showing that Russian spies now act both secretly and publicly. According to EU vs. Disinfo, Russia’s hostile influence architecture comprises four interconnected layers, ranging from overt to covert. While some actors operate openly, others conceal their ties to the Kremlin. Source:   EUvsDisinfo, 2025. Russian spies as disinformation actors. [online] Available at: https://euvsdisinfo.eu/russian-spies-as-disinformation-actors/ War in Ukraine Austria: Russian Influence Campaign Unveiled Austria's Directorate for State Protection and Intelligence (DSN) has uncovered  a Russian-led disinformation campaign orchestrated by a covert cell operating within the country. At the center of the investigation is a Bulgarian national suspected of acting as a liaison for Russian intelligence services. She allegedly coordinated the distribution of propaganda materials intended to sway public opinion against Ukraine and President Volodymyr Zelenskyy. The campaign used online content and real-world actions, such as graffiti and stickers, which falsely appeared to originate from pro-Ukrainian activists but contained far-right and nationalist messaging. The disinformation cell became active shortly after the Russian invasion of Ukraine began in 2022. The suspect has confessed to her involvement and provided evidence pointing to additional accomplices located in Russia and the UK. According to the DSN, this campaign is part of a broader strategy of hybrid warfare designed to undermine democracy, destabilize society, and manipulate the political discourse. Source: DSN, 2025. DSN uncovers Russian disinformation campaign against Ukraine. Article No: 27890. [online]. Available at: https://www.dsn.gv.at/news.aspx?id=684974546579734E4A61453D China Global Times Rejects Allegations of Chinese Hostile Interference DisinfoWatch analyzes  an article by the Chinese state-run newspaper Global Times, which denies any hostile influence by the Chinese government in Canada’s political process. Instead, the article claims that the Conservative Party of Canada is using these allegations to discredit the Liberal Party’s electoral successes. It further suggests that Canadian authorities exaggerate the issue to distract from trade and security tensions with the United States. The Global Times also criticizes the language used in official reports, arguing that terms like “highly likely” indicate a lack of credible evidence. DisinfoWatch classifies these claims as misleading: Chinese interference efforts have been well-documented by intelligence agencies, parliamentary committees, and independent experts, particularly in the 2019 and 2021 elections. The criticized terminology is standard in intelligence assessments and reflects analytical caution, not a lack of evidence. These narratives reflect China`s geopolitical strategy to undermine and deflect international criticism of its foreign influence operations. Source:   DisinfoWatch, 2024. PRC State Media Falsely Accuse CSIS, Conservatives of Hyping Foreign Interference Concerns. [online] Available at: https://disinfowatch.org/disinfo/prc-state-media-falsely-accuse-csis-conservatives-of-hyping-foreign-interference-concerns/   China's Disinformation Campaigns and Japan's Vulnerabilities A 2025 report by DFRLab highlights  that China’s expanding digital influence strategy poses a significant challenge to Japan. 
Due to economic rivalry, territorial disputes in the East China Sea, and its close alliance with the U.S., Japan has become a key target of Chinese disinformation campaigns. These efforts serve Beijing’s broader geopolitical aims by spreading false narratives, including misinformation about the Fukushima wastewater release and separatist claims concerning Okinawa ( also see our Weekly Report: 17th March – 23rd March 2025 ). Coordinated campaigns amplify such messages across social media, often using fake accounts. Japan’s outdated digital infrastructure and limited institutional coordination hinder an adequate response. Although the Digital Agency was created to modernize government systems, it struggles to keep pace with evolving threats. While Japan raises concerns about disinformation internationally, its domestic response remains underdeveloped. Public demand for stronger regulation and digital defense is growing. To boost democratic resilience, Japan must accelerate its digital transformation, improve cybersecurity, and implement coordinated countermeasures. Source:   DFRLab, D.J. Plung and K. Thibaut, 2025. Japan’s technology paradox: the challenge of Chinese disinformation. [online] Available at: https://dfrlab.org/2025/03/25/japan-tech-chinese-disinformation/ [ Back to Table of Contents ] AI RELATED ARTICLES The Rise of Disinformation in the Age of AI In an article on TechXplore , computer scientist Andreea Pocol discusses the growing challenges posed by generative AI, also known as GenAI. These inexpensive and widely accessible tools create highly realistic images and videos, known as "deepfakes," which are increasingly used for disinformation campaigns. The dangers include intensifying political conflicts, election interference, increased societal polarization, and erosion of trust in media and democratic institutions. According to a 2022 study by Pocol, participants could distinguish between authentic and AI-generated images with only 61% accuracy. This rate is likely to have declined further today. Previous detection indicators, such as unrealistic eyes, teeth, or hair, have become less reliable due to technological advancements. Algorithms that detect deepfakes often fail, particularly with low-resolution or poorly lit images. Despite calls for stronger regulation, the technology is advancing faster than reliable detection methods. Source: TechXplore, A. Pocol, 2025. As generative AI becomes more sophisticated, it becomes increasingly difficult to distinguish the real from the deepfake. [online] Available at: https://techxplore.com/news/2025-03-generative-ai-sophisticated-harder-distinguish.html AI Spreads False Narratives and Climate Disinformation A report by NewsGuard analyzed an article published by ScienceofClimateChange.org, a site known for climate change skepticism. It claimed that solar activity, rather than human actions, is the primary driver of global warming. Grok 3, an AI tool developed by Elon Musk’s platform "X," was credited as the lead author. The piece garnered widespread attention among climate change skeptics. However, AI is not always a reliable source of factual information. Tools like Grok generate responses based on training data and the structure of user prompts. The output can reflect those distortions if the prompts are biased or misleading. Additionally, Grok 3 co-authored the article alongside known climate skeptics such as David Legates, whose influence likely shaped the direction and tone of the AI-generated content. 
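To illustrate the point about prompt structure, the short sketch below contrasts a leading prompt with a neutral one. The wording is our own example, not taken from the article or tied to any specific vendor's API.

```python
# Illustrative only: two ways of asking a chatbot about the same topic.
# The prompts are invented examples showing how framing can steer the output.

leading_prompt = (
    "Write a study section explaining why solar activity, not human CO2 emissions, "
    "is the real driver of global warming."   # presupposes the desired conclusion
)

neutral_prompt = (
    "Summarize the current scientific evidence on the main drivers of recent "
    "global warming."                          # leaves the conclusion open
)

# Sent to the same model, the first framing pushes it to argue a predetermined claim,
# while the second typically yields an answer reflecting the scientific consensus.
```

This framing effect is exactly what the next sentence describes: the same model gave a consensus-aligned answer when queried neutrally.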
NewsGuard also notes that Grok contradicted the article’s claims when prompted neutrally and confirmed that human activities are a major driver of climate change. Source: NewsGuard, N. Dirks, 2025. No, Grok AI-Written Study Does Not Prove that Global Warming is a Natural Phenomenon. [online] Available at: https://www.newsguardrealitycheck.com/p/no-grok-ai-written-study-does-not [ Back to Table of Contents ] GENERAL REPORTS Disinformation Targets UK MPs and Bangladesh Central Bank Chief The Guardian reports  that British MPs have raised concerns about a disinformation campaign targeting Ahsan Mansur, the Governor of the central bank of Bangladesh. The campaign began shortly after he arrived in the UK, where he is investigating assets allegedly laundered by the regime of Prime Minister Sheikh Hasina. Fabricated articles and emails sent by individuals posing as journalists contain unsubstantiated claims against Mansur and his family. His daughter is a particular focus, facing allegations of unclear sources of wealth. The effort appears aimed at damaging Mansur’s credibility as an anti-corruption advocate. British politicians, including former Minister Tulip Siddiq, are also mentioned, raising concerns that the campaign aims to stoke political tension and hinder cooperation. Mansur, a former IMF official, believes those involved in money laundering are behind the attacks. MPs have forwarded the material to cybersecurity teams and foreign affairs committees, calling for a full investigation. Source: The Guardian, R. Davies, 2025. MPs think they may have been targets of ‘disinformation’ over Bangladesh inquiry. [online] Available at: https://www.theguardian.com/politics/2025/mar/24/mps-think-they-may-have-been-targets-of-disinformation-over-bangladesh-inquiry Hostile Influence Campaigns Target Canadian Electoral Process A report by DisinfoWatch warns of coordinated foreign influence targeting Canada’s electoral process. The main actors identified are China, Russia, and U.S.-based extremist networks. While the Chinese Embassy in Ottawa and the state-run Global Times deny any interference, as noted earlier in our weekly report, Canadian intelligence agencies have documented targeted influence operations. These include disinformation campaigns, covert financing, and intimidation. In December 2024, a $1 million bounty was reportedly placed on a Conservative candidate. Russia, meanwhile, uses state media outlets like Sputnik to discredit Liberal Party leader Mark Carney, particularly for his pro-Ukraine stance. AI-generated fake images—some showing Carney with Jeffrey Epstein—have circulated online and were reportedly created in part using Grok 3, the AI platform developed by Elon Musk’s X. U.S. conspiracy theorist Alex Jones has also intervened, publicly endorsing PPC leader Maxime Bernier while promoting far-right narratives. This transnational entanglement of extremist actors fuels radicalization, polarization, and public distrust in democratic institutions. Source:   DisinfoWatch, 2025. Canadian 2025 Election Report #1. [online] Available at: https://disinfowatch.org/canadian-2025-election-report-1/ [ Back to Table of Contents ] France Identified as Top Target of Foreign Hostile Influence in Europe According to Le Monde , France has become one of the main targets of foreign hostile influence in Europe—second only to Ukraine. 
Prime Minister François Bayrou cited a European External Action Service report indicating that nearly one-third of the 505 documented incidents across Europe between 2023 and 2024 targeted France. While Russia is the primary actor, China has also been linked to coordinated influence efforts. A February report by France’s disinformation watchdog, Viginum, confirmed that Russia has conducted a sustained information campaign over the past three years to justify its invasion of Ukraine and manipulate public opinion in France (a review of the Viginum report is available on our blog). France’s support for Ukraine likely makes it a strategic target. On March 28, President Macron hosted a European security summit. However, ongoing digital influence operations may continue to undermine these diplomatic efforts. Source: Le Monde with AFP, 2025. PM says France is the top EU target for foreign disinformation campaigns. [online] Available at: https://www.lemonde.fr/en/international/article/2025/03/28/disinformation-pm-says-france-is-the-top-eu-target-for-foreign-campaigns_6739602_4.html Combating Disinformation in Kenya and Senegal A study by The Conversation highlights  how media professionals in Kenya and Senegal are countering disinformation. Political and health-related topics are particularly affected. Journalists employ source cross-checking, reverse image searches, geolocation tools such as Google Maps, and specialized tools like InVID for video analysis to verify content. Fact-checking platforms, such as PesaCheck, Piga Firimbi, and AfricaCheck, play a crucial role. At the same time, there is an ongoing debate over who should have the authority to define what is “true”—governments, tech companies, or independent actors. In the long term, media literacy is the most effective defense against disinformation. In Kenya, outlets produce public tutorials; in Senegal, AfricaCheck collaborates with radio stations in Wolof. However, structural barriers remain: a lack of curricula, language gaps, and limited political engagement. Experts argue that media literacy must be integrated into schools, workplaces, and religious institutions to build a resilient public capable of resisting manipulative narratives. Source: The Conversation, L. Diop, 2024. Fighting fake news: how media in Kenya and Senegal check facts. [online] Available at: https://theconversation.com/fighting-fake-news-how-media-in-kenya-and-senegal-check-facts-251123 Fake Accounts Drive Disinformation on Pope Francis’ Health An analysis by Cyabra revealed  that AI-driven fake profiles heavily influenced online discussion about Pope Francis’ health. 31% of the accounts on X were fake and responsible for spreading false claims about the Pope’s death and manipulating official Vatican communications. The disinformation campaign centered on three narratives: fabricated death reports, fake news shared by seemingly credible accounts, and distortion of official statements to sow distrust. Approximately 42% of the overall conversation was negative. Fake profiles used emotional language, repeated messaging, and hijacked high-engagement threads to amplify their false narratives. The Vatican was forced to respond publicly, dedicating time and resources to counter these rumors. The case highlights the growing role of AI-powered disinformation in shaping public perception and distorting legitimate communication.   Source:    Cyabra, R. Baruchin, 2025. Pope Francis’ Health Misinformation Fueled by Fake Profiles. 
[online] Available at: https://cyabra.com/blog/pope-francis-health-misinformation-fueled-by-fake-profiles/ [ Back to Table of Contents ] APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION Disinformation Cited as Justification for Facebook Suspension in PNG ABC News reports that Papua New Guinea has temporarily blocked access to Facebook. The government described the move as a test aimed at countering disinformation, hate speech, and harmful content, carried out under anti-terrorism laws. According to media reports, the ban was imposed without a specified duration; access has since been restored. Officials argued that the measure was necessary to protect public safety and reduce the spread of false information, violent material, and pornography. The decision sparked criticism from opposition politicians, journalists, and civil society groups, who raised concerns about its implications for freedom of expression. They viewed the action as disproportionate and warned it could set a precedent for future restrictions. The government has previously accused Facebook of enabling the spread of misinformation. Observers noted that such actions could affect long-term access to digital platforms and reduce transparency in online public discourse. Source: ABC News, N. Sas & B. Kora, 2025. Papua New Guinea shuts down Facebook in test to stop 'pornography, misinformation, hate speech'. [online] Available at: https://www.abc.net.au/news/2025-03-25/facebook-shut-down-in-papua-new-guinea-in-test/105092564 [ Back to Table of Contents ] GLOSSARY Information Operations Is "the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation that seeks to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles.
Here, it is used to illustrate the opposite of the HIC. Misinformation False, inaccurate, or misleading information communicated regardless of the intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as "the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards". We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech.
Bullying or harassment can be identified by repeated behavior and an intent to harm. [ Back to Table of Contents ] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid.

  • Weekly Report: Cyber-based influence campaigns 31st March – 6th April 2025

    INTRODUCTION Cyber-based hostile influence campaigns aim to influence target audiences by disseminating information and/or disinformation over the internet, sometimes in conjunction with cyberattacks, which amplify their impact (hence, Cyfluence attacks, as opposed to cyberattacks that seek to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered an epistemological branch of Information Operations (IO) or Information Warfare (IW). Typically, and as customary over the last decade, information is disseminated across various internet platforms, which are the different elements of the hostile influence campaign. As such, the connectivity and repetitiveness of content between these elements are the primary characteristics that define influence campaigns. Much like cyber-attacks, hostile influence campaigns have also become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much like in the cybersecurity arena, PR professionals and government agencies respond to negative publicity and disinformation shared through news and social media. We use the term 'cyber-based hostile influence campaigns' as we also include in this definition cyber-attacks aimed at influencing (such as hacking and leaking during election time), while excluding from this term other types of more traditional kinds of influence, such as diplomatic, economic, and military. Between March 31 and April 6, 2025, we observed, collected, and analyzed endpoints related to cyber-based hostile influence campaigns, including Cyfluence attacks. The following report summarizes the key events we consider most significant. Some campaigns involve social media and news outlets, while others utilize cyber-attack capabilities. [Listen to the Podcast] [Download Report]
TABLE OF CONTENTS HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russia's Disinformation Strategy Russian Disinformation and Hostile Campaigns in Georgia [War in Ukraine] Conclusions on AI’s Influence in State-Sponsored Disinformation Campaigns Russia Escalates Disinformation War to Undermine Ukraine’s Global Support GENERAL REPORTS The Complexities of Disinformation Attribution Understanding FIMI: Key Findings and Trends in Digital Warfare APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION Early Detection of Disinformation Campaigns Using AI UK’s Foreign Influence Registration Scheme to Counter Disinformation GLOSSARY DISCLAIMER
HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russia's Disinformation Strategy EUvsDisinfo reports that the 3rd EEAS (European External Action Service) report describes Russian disinformation as a structured, four-layered system. The first layer includes official state-controlled channels like RT and Sputnik, which openly represent the Kremlin’s voice. The second layer consists of state-linked platforms such as NewsFront. They try to hide their ties to the Russian state but follow its messaging closely. The third layer includes anonymous websites and accounts. They are hard to trace but show technical and behavioural signs of coordination with known pro-Kremlin sources. The fourth and deepest layer involves state-aligned actors. They cannot be directly linked to the Russian state but regularly repeat Kremlin narratives. They also use the same infrastructure and tactics as confirmed actors. Together, all layers serve Russia’s goal of shaping public opinion and spreading confusion.
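To make the report’s four-layer model easier to work with, the sketch below shows one way an analyst might encode a channel’s observable indicators and map them to an attribution layer. It is purely illustrative and not taken from the EEAS report; the class, field, and indicator names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class AttributionLayer(Enum):
    OFFICIAL_STATE_CHANNEL = 1   # openly state-branded outlets (e.g., RT, Sputnik)
    STATE_LINKED_PLATFORM = 2    # hides state ties but mirrors official messaging
    COORDINATED_ANONYMOUS = 3    # anonymous sites/accounts with technical overlap
    STATE_ALIGNED_ACTOR = 4      # no provable link, consistently repeats narratives

@dataclass
class Channel:
    name: str
    openly_state_branded: bool = False
    conceals_state_ties: bool = False
    repeats_state_narratives: bool = False
    technical_overlap: bool = False  # shared hosting, registrars, trackers, automation

def classify(channel: Channel) -> Optional[AttributionLayer]:
    """Map observed indicators to the attribution layer they best support (illustrative only)."""
    if channel.openly_state_branded:
        return AttributionLayer.OFFICIAL_STATE_CHANNEL
    if channel.conceals_state_ties and channel.repeats_state_narratives:
        return AttributionLayer.STATE_LINKED_PLATFORM
    if channel.technical_overlap:
        return AttributionLayer.COORDINATED_ANONYMOUS
    if channel.repeats_state_narratives:
        return AttributionLayer.STATE_ALIGNED_ACTOR
    return None

print(classify(Channel("example-news-site", conceals_state_ties=True, repeats_state_narratives=True)))
```

In practice, such indicator values would come from infrastructure analysis (registrars, hosting, trackers) and behavioural analysis rather than hand-set flags.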
Researchers use technical clues, such as domain and hosting data, and behavioral patterns, such as automated or AI-assisted posting, to detect and track these operations. Source: EUvsDisinfo, 3rd EEAS Report on The Architecture of Russia’s FIMI Operations. [online], (2025), Available at: https://euvsdisinfo.eu/the-architecture-of-russias-fimi-operations/ [ Back to Table of Contents ] Russian Disinformation and Hostile Campaigns in Georgia EUvsDisinfo highlights that Russian media outlets such as Sputnik and Russia Today (RT) in Georgia played a central role in spreading disinformation to undermine the protests against the Georgian government’s decision to suspend EU accession talks until 2028. The protesters were labeled as "liberal-fascist" traitors and "puppets of the West," while the police’s use of force was portrayed as necessary, lawful, and proportionate. These outlets promoted the narrative of a Western-backed "color revolution" and accused the US and EU of destabilizing Georgia. Additionally, disinformation claimed that Western organizations like USAID and NED were funding the protests to undermine Georgia’s sovereignty. Russian media also sought to link the unrest to the war in Ukraine, framing Georgia as a “second front” against Russia. These campaigns aimed to depict the West as hostile while casting Russia as the stabilizing force in Georgia. These narratives align with Russia’s broader strategy of hostile influence. Source: EUvsDisinfo, "The war on truth: Russian disinformation and Georgia’s path to EU discord," (2025), [online] Available at: https://euvsdisinfo.eu/the-war-on-truth-russian-disinformation-and-georgias-path-to-eu-discord/ [ Back to Table of Contents ] [War in Ukraine] Conclusions on AI’s Influence in State-Sponsored Disinformation Campaigns PNAS Nexus, published by Oxford University Press, features a recent report that examines the impact of generative AI on disinformation campaigns, focusing on DCWeekly.org, a propaganda site. This site, identified as part of a Russian influence operation, spread pro-Russian narratives targeting a global audience, especially in regions and countries such as West Africa, Turkey, India, and the U.S. The report shows that the use of generative AI, particularly OpenAI's ChatGPT-3, significantly increased the production of disinformation. Before AI adoption, the content was mainly copied and edited from other sources. After integrating AI in September 2023, the articles appeared more original, though they often retained the same source base. Notably, many of these disinformation campaigns focused on anti-Ukrainian narratives, including fabricated stories about Ukrainian President Volodymyr Zelenskyy. A survey showed that these AI-generated articles were perceived to be as persuasive and credible as previous ones, highlighting the growing threat of AI-supported disinformation. Source: Oxford Academic, PNAS Nexus, Morgan Wack, C. Ehrett, D. Linvill, P. Warren, 2025. Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign. [online] Published by Oxford University Press. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC11950819/pdf/pgaf083.pdf [ Back to Table of Contents ] Russia Escalates Disinformation War to Undermine Ukraine’s Global Support A report from RBC-Ukraine reveals that Russia has launched a new disinformation campaign against Ukraine.
The campaign involves key Russian media outlets such as Gazeta.ru , Sputnik , Vedomosti , and Voennoye Obozreniye , as well as foreign proxy platforms like Reseau International and Magyar Hírlap , which present themselves as independent sources. Dissemination occurs through anonymous Telegram channels, manipulated TikTok accounts, and YouTube videos impersonating Ukrainian soldiers or "independent experts." These materials are spread in various formats, including opinion pieces, fake interviews, analyses, and infographics. The primary objective of this operation is to blame Kyiv for the failure of peace talks, depict Russia as a "peacemaker," and portray Ukraine as unwilling to negotiate. Simultaneously, the campaign seeks to undermine trust in the Ukrainian government and weaken Western support for Ukraine. Source:   RBC-Ukraine, B. Babaiev, 2025. Russia blames Ukraine for stalled talks in new disinformation campaign – Ukraine's intelligence. [online] Available at: https://newsukraine.rbc.ua/news/russia-blames-ukraine-for-stalled-talks-in-1743858897.html [ Back to Table of Contents ] GENERAL REPORTS The Complexities of Disinformation Attribution Oxford Academic recently published  a study examining attribution's role in disinformation campaigns and its use as a deterrence strategy. The study highlights the political risks and uncertainties in attribution decisions, particularly in liberal democracies. The research argues that technical capabilities and domestic political contexts influence attribution by analyzing cases such as the 2016 US and 2021 German Bundestag elections. The study introduces the concept of the "uncertainty loop," which describes how varying levels of political, social, and technical uncertainty influence the timing and manner of attribution decisions. The findings suggest that while technical advancements have made attribution more feasible, political considerations, such as the risk of domestic backlash and the impact on international relations, often dictate whether or not attribution is publicly pursued. Thus, disinformation attribution serves as both a deterrence measure and a politically sensitive tool in modern international relations. Source: Oxford Academic. Hedling, E., & Ördén, H. (2025). Disinformation, deterrence and the politics of attribution , Published by Oxford University Press Available at:   https://academic.oup.com/ia/advance-article/doi/10.1093/ia/iiaf012/8100244 [ Back to Table of Contents ] Understanding FIMI: Key Findings and Trends in Digital Warfare The 3rd EEAS Threat Report examines  the growing threat of disinformation and foreign information manipulation (FIMI) in the digital age. Key actors include Russia and China, who use disinformation to deepen political divisions and undermine trust in democratic institutions. The report introduces the FIMI Exposure Matrix, a tool to identify media channels connected to FIMI operations. This matrix categorizes channels based on technical and behavioral indicators to analyze their ties to manipulative actors. The increasing use of AI-generated content and fake accounts to spread disinformation is also highlighted. The report emphasizes the role of digital platforms as the primary source of these threats and calls for enhanced institutional collaboration and improved detection methods to address these challenges. The goal is to strengthen societal resilience to FIMI and increase transparency on digital platforms. Source: European External Action Service (EEAS). 
"3rd EEAS Report on Foreign Information Manipulation and Interference Threats." European Union External Action , 2025, Available at: https://www.eeas.europa.eu/eeas/3rd-eeas-report-foreign-information-manipulation-and-interference-threats-0_en . [ Back to Table of Contents ] Appendix - Frameworks to Counter Disinformation Early Detection of Disinformation Campaigns Using AI In its report, RAND examines  using large language models (LLMs) to detect disinformation and propaganda. Unlike traditional methods, LLMs can analyze broader contexts and identify subtle propaganda patterns. They recognize classical techniques like exaggeration and deception. The report finds that fine-tuned LLMs effectively detect disinformation, especially when trained on propaganda data. It recommends including non-English sources and developing a larger corpus of propaganda instances. In conclusion, LLMs are a promising tool for detecting foreign malign information operations. Source: RAND Corporation. (2025). Defending American Interests Abroad: Early Detection of Foreign Malign Information Operations . Christopher A. Mouton, Caleb Lucas, Shaun Ee. Available at: https://www.rand.org/pubs/research_reports/RRA2853-1.html . [ Back to Table of Contents ] UK’s Foreign Influence Registration Scheme to Counter Disinformation The UK government announced  the launch of the Foreign Influence Registration Scheme (FIRS) during an oral statement to Parliament. This scheme is part of the National Security Act 2023 and addresses the increasing risk of covert foreign interference. Its main goal is to enhance transparency regarding foreign influence, particularly from countries like Russia and Iran, and to safeguard national security, democratic institutions, and the UK’s political system. Under FIRS, individuals and organisations must register if they carry out activities on behalf of foreign powers within the UK. The scheme operates on two levels: the political tier, which applies to all foreign states, and the enhanced tier, which focuses on hostile actors that pose a more significant threat. This includes foreign governments, authorities, and state-controlled political parties. FIRS will take effect on 1 July 2025, following a three-month transition period. Failing to register will be considered a criminal offence. Source : UK Government, Home Office and Dan Jarvis MBE MP, 2025. Foreign Influence Registration Scheme implementation . [online] Available at: https://www.gov.uk/government/speeches/foreign-influence-registration-scheme-implementation [ Back to Table of Contents ] Disinformation as a Geopolitical Weapon A study published in the Journal of Complex Networks  explores how disinformation spread through social media can disrupt the operation of critical infrastructure. Using a case study from New York City, the researchers show how false reports about supposedly closed subway stations can influence passenger behavior and lead to overcrowding and delays. Many people rely on social media to plan their routes. When targeted disinformation is circulated on these platforms, it can cause detours, congestion, and inefficient system use, resulting in real-world disruptions in urban transportation. The researchers developed a mathematical model to identify the most influential users in social networks. They then target these users with accurate information to prevent the spread of false narratives. The study clarifies that protecting critical infrastructure also means tackling digital disinformation campaigns. 
What begins online can quickly have tangible consequences in everyday life. Source: Journal of Complex Networks, S. Jamalzadeh, K. Barker, A.D. González, S. Radhakrishnan, E. Bessarabova, G. Sansavini, 2025. Disinformation interdiction: protecting infrastructure networks from weaponized disinformation campaigns. [online] Available at: https://academic.oup.com/comnet/article/13/2/cnaf003/8103952 [ Back to Table of Contents ] GLOSSARY Information Operations Is "the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. Hostile Influence Campaign (HIC) An information operation that seeks to influence a targeted audience for a hostile cause. Digital Impact on Discourse (DID) Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC. Misinformation False, inaccurate, or misleading information communicated regardless of the intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories. Disinformation Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent". Inauthentic Behavior Is defined by Facebook as "the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards".
We have broadened this term to encompass all social media platforms, mutatis mutandis. Fake users AKA Avatars - a generic term describing all types of users who are not legitimate social media users, i.e., are bots or operated by humans but not under their real identity, or are operated by humans under real identity but for the sole purpose of promoting an agenda that is not theirs. Unidentified users A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example). Sockpuppet accounts A sock puppet or sockpuppet is an online identity used for deception. Bots Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts. Repurposed accounts Means social media accounts that were hacked or purchased, then used for different purposes than the original ones. Fake website Is a website designed for fraudulent or scam activity, hiding its real purpose. Deep Assets These are non-human deep cover assets, divided into two sub-categories: Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives. Real platforms Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally. Astroturfing Takes place when a coordinating actor creates a false impression of grassroots support. Cyberbullying is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm. [ Back to Table of Contents ] DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. 
However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • Weekly Report: Cyber-based influence campaigns 14th – 20th of April 2025

    [Listen to the Podcast] [Download Report]
TABLE OF CONTENTS HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russia's Global Disinformation Network Infiltrates AI and Wikipedia Storm-1516 Disinformation Network Targets France with AI-Fueled Propaganda National Survey Shows Kremlin Disinformation Widely Believed by Americans Pravda Dashboard Exposes Russia's Massive Global Disinformation Network [War in Ukraine] U.S. Cuts Funding to Ukraine’s Independent Media Russian Disinformation Targets Ukraine and EU Politics [China] Chinese Influence Group Advances CCP Agenda Through Cultural Diplomacy [Iran] Iran Increasingly Integrates AI to Amplify Disinformation and Cyber Operations AI RELATED ARTICLES Reinforcement Learning’s Potential to Amplify Disinformation on Social Media GENERAL REPORTS State-Sponsored Disinformation Harassing Thailand’s Democratic Movement Domestic Disinformation Threatens Chinese-Australian Communities APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION EU Invests in Digital Innovation and Combating Disinformation U.S. Government Closes R/FIMI Office Federal Efforts to Combat Disinformation Face Rollback Under Trump Administration GLOSSARY
HOSTILE INFLUENCE CAMPAIGNS STATE ACTORS [Russia] Russia's Global Disinformation Network Infiltrates AI and Wikipedia As published by the Atlantic Council, Russia has significantly expanded its global disinformation campaign during the war in Ukraine, using sophisticated strategies to spread pro-Kremlin narratives across digital platforms. A key part of this effort is the Pravda network, a web of fake news portals active in over 80 countries. These sites recycle and repackage content from sanctioned Russian media and Kremlin-aligned Telegram channels, strategically surfacing during political events such as elections in Romania and Moldova or high-profile meetings between world leaders. The goal is to saturate global information spaces with Russian propaganda disguised as legitimate news. The campaign has now reached artificial intelligence tools and Wikipedia, enabling disinformation to seep into mainstream knowledge sources. Kremlin-aligned narratives influence AI-generated content by posing as credible references on Wikipedia and news sites cited in large language model (LLM) training data. This means that users relying on AI tools may unknowingly consume skewed, anti-Ukrainian, and anti-Western messaging—an alarming development in how information is shaped and distributed. As AI adoption grows, so does the risk of disinformation embedded in its outputs. Source: Atlantic Council, V. Châtelet, 2024. Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia. [online] Available at: https://www.atlanticcouncil.org/blogs/new-atlanticist/exposing-pravda-how-pro-kremlin-forces-are-poisoning-ai-models-and-rewriting-wikipedia/ [ Back to Table of Contents ] Storm-1516 Disinformation Network Targets France with AI-Fueled Propaganda A report by NewsGuard revealed that Russia’s disinformation campaign, orchestrated by the Storm-1516 network, has aggressively shifted its focus to France, leveraging AI-generated content to amplify false narratives across social media and AI platforms. From December 2024 to March 2025, five Kremlin-linked hoaxes targeting France went viral in nearly 39,000 posts, racking up over 55 million views.
These included fabricated accusations involving French President Emmanuel Macron and First Lady Brigitte Macron, as well as baseless claims about Ukrainian President Zelensky and migrants, narratives timed to coincide with France’s increasing military support for Ukraine. What makes this wave of disinformation particularly dangerous is its infiltration into generative AI tools. NewsGuard found that several major AI chatbots—including Mistral’s French-made "Le Chat"—repeated Russian-fabricated stories, like the Macron affair hoax, as factual responses to user prompts. These claims, often laundered through fake whistleblower videos, phony regional news outlets, and YouTube uploads, are cited by AI systems trained on seemingly authoritative sources. Storm-1516’s playbook is now well-established: hijack real identities, use AI for face-swapping and video manipulation, spread content through multilingual platforms, and adapt narratives to current political tensions. While France has so far avoided a full-scale fake news site network like those deployed in the U.S. and Germany, officials are preparing for further escalation. As support for Ukraine increases, so does Russia’s digital assault. Source:   NewsGuard Reality Check, N. Huet, M. Sadeghi, and C. Labbe, 2025. Russian Propaganda Campaign Targets France with AI-Fabricated Scandals, Drawing 55 Million Views on Social Media . [online] Available at: https://www.newsguardrealitycheck.com/p/russian-propaganda-campaign-targets-france-with-ai-fabricated-scandals [ Back to Table of Contents ] National Survey Shows Kremlin Disinformation Widely Believed by Americans According to a NewsGuard report , a recent national survey conducted by YouGov for NewsGuard revealed the deep reach of Russian disinformation among the American public. One in three Americans believes at least one false claim promoted by Russian state media, highlighting how effectively the Kremlin’s narratives have taken hold. The study tested 1,000 adults with 10 widely circulated falsehoods, three of which were directly tied to Russian propaganda. Only 23.8% of respondents correctly identified all three Kremlin-linked claims as false. The misinformation included debunked narratives such as Ukraine stealing U.S. aid, Zelensky's alleged 4% approval rating, and claims that Ukraine sold U.S.-supplied weapons to Hamas. These fabrications, initially spread by outlets like RT and Sputnik, have not only gone viral on social media but have also been repeated by U.S.-based websites and even made their way into generative AI tools, further normalizing disinformation. NewsGuard analysts have cataloged nearly 300 such Russian falsehoods related to the war in Ukraine alone. The survey showed widespread vulnerability to misinformation on health and elections. A staggering 78% of respondents believed at least one false claim, and fewer than 1% accurately debunked all ten. Importantly, belief in disinformation crossed party lines, though Republicans were significantly more likely to accept Kremlin-backed narratives. Source:   NewsGuard Reality Check, M. Skibinski, 2025. One Third of Americans Believe Russian Disinformation, NewsGuard-YouGov Survey Finds . 
[online] Available at: https://www.newsguardrealitycheck.com/p/one-third-of-americans-believe-russian-disinformation-yougov-survey-finds [ Back to Table of Contents ] Pravda Dashboard Exposes Russia's Massive Global Disinformation Network The Pravda Dashboard, published  by the DFRLab, exposed the scale and strategy of Russia’s global disinformation network. With Finnish company CheckFirst, the DFRLab analyzed a dataset of over 3.7 million articles repurposed from Russian state media and questionable Telegram channels. Through an elaborate network of websites and automated republishing tools, the Pravda network bypassed sanctions to amplify pro-Kremlin narratives worldwide, especially in France, Germany, Ukraine, Moldova, and Serbia. Many articles sourced banned Russian outlets like TASS and RT, alongside disinformation-heavy Telegram channels such as InfoDefense and SurfNoise. The investigation revealed that the Pravda operation intensified during key political events, such as the 2024 EU Parliament elections and high-profile arrests, using surges in publication to flood news cycles with manipulated narratives. The most-cited source in the network was @Vbachir, a French-speaking Telegram account claiming ties to Russian forces, which alone appeared in more than 53,000 articles. The operation also strategically targeted linguistic regions by repackaging Russian-origin content through language-based clusters, using HTML tags to signal alternate versions for different audiences. This method enabled the network to deliver coordinated disinformation to specific countries or regions while maintaining the illusion of local news. Source:   DFRLab, V. Châtelet and A. Lesplingart, 2025. Russia’s Pravda network in numbers: Introducing the Pravda Dashboard . [online] Available at: https://dfrlab.org/2025/04/18/introducing-the-pravda-dashboard/ [ Back to Table of Contents ] [War in Ukraine] U.S. Cuts Funding to Ukraine’s Independent Media According to a report by the Atlantic Council, Ukraine’s independent media, a vital barrier against Kremlin disinformation, is facing collapse following a drastic cut in U.S. development aid. Since January 2025, the United States has quietly halted around 90% of its funding for Ukrainian media through USAID and affiliated partners. This comes at a time when Russia is intensifying its disinformation campaigns, leaving Ukrainian newsrooms like NikVesti, Vgoru, and Cykr without the resources needed to counter false narratives. These outlets, operating under war conditions, have served as a crucial line of defense by exposing propaganda and reporting on both Russian strikes and internal corruption. As these independent newsrooms downsize or shut down, Russia is rapidly exploiting the resulting information vacuum. A recent example in Sumy involved fake health warnings circulated by pro-Russian Telegram channels, falsely claiming a disease outbreak among Ukrainian soldiers. Disinformation crafted to mimic official sources is designed to mislead and erode public trust in all media. Without credible local journalism, these manipulative tactics become significantly more effective, especially in regions already isolated by war. Efforts to stay afloat through donor outreach, membership drives, and EU funding applications are underway, but are slow and uncertain. Journalists warn that without immediate support, independent media will vanish, and unchecked Russian propaganda will dominate Ukraine’s information space. Source:   Atlantic Council, M. Tahir, 2025. 
US funding cuts create openings for Russian disinformation in Ukraine. [online] Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/us-funding-cuts-create-openings-for-russian-disinformation-in-ukraine/ [ Back to Table of Contents ] Russian Disinformation Targets Ukraine and EU Politics According to an article by EU vs. Disinfo, as Russia continues its assault on Ukraine, its disinformation machinery works in overdrive to deny responsibility for deliberate attacks on civilians. The missile strike on Sumy on April 13, which killed at least 35 civilians during a religious holiday, was falsely framed by Kremlin sources as a military operation. This follows a disturbing pattern in previous attacks, such as the one in a restaurant in Kryvyi Rih, where fabricated justifications are spread rapidly to obscure the truth. By immediately releasing false narratives—claiming military meetings or weapon caches at bombed civilian sites—Russian propaganda seeks to confuse public perception and delay international condemnation. Despite mounting evidence, including UN reports and security footage, the Kremlin persists in pushing these narratives, even as the International Criminal Court issues arrest warrants for those responsible for attacks on non-military infrastructure. Beyond Ukraine, this disinformation model is now being deployed to influence foreign elections. In early April, a coordinated campaign targeted Poland’s upcoming presidential election with fake news videos impersonating credible outlets, falsely claiming security threats. These efforts followed Russia’s known disinformation playbook—spreading content through fake accounts and amplifying it with strategic reposts. Source: EUvsDisinfo, 2025. Missiles and lies. Again. [online] Available at: https://euvsdisinfo.eu/missiles-and-lies-again/ [ Back to Table of Contents ] [China] Chinese Influence Group Advances CCP Agenda Through Cultural Diplomacy An article by National Review stated that the Chinese People’s Association for Friendship with Foreign Countries (CPAFFC), a group identified by U.S. intelligence as tied to Beijing’s influence operations, has intensified its efforts to promote the Chinese Communist Party’s (CCP) global vision during high-profile visits to the United States. During a U.N. event in New York, CPAFFC officials promoted Xi Jinping’s “Global Civilization Initiative,” framing it as a path toward global cooperation and understanding. However, experts warn that these initiatives are part of China’s strategic effort to reshape international norms and governance in favor of its authoritarian model. CPAFFC's activities, including speeches at the U.N. and meetings in California and Florida, mask strategic disinformation under the guise of cultural diplomacy. Despite branding itself as a non-governmental organization, CPAFFC operates within the CCP's United Front system, which subtly seeks to influence foreign entities. For example, the group has cultivated connections with American communities through programs honoring WWII’s Flying Tigers, reinterpreting history to support modern CCP narratives, and extending Beijing's ideological reach. Source: National Review, J. Quinn, 2025. Chinese ‘Friendship’ Group Visits America amid Trade War.
[online] Available at: https://www.nationalreview.com/corner/chinese-friendship-group-visits-america-amid-trade-war/ [ Back to Table of Contents ] [Iran] Iran Increasingly Integrates AI to Amplify Disinformation and Cyber Operations According to a report by Recorded Future, Iran is increasingly integrating artificial intelligence into its national security and foreign policy strategy to amplify cyberattacks, influence operations, and domestic surveillance. While Tehran faces significant obstacles—such as economic isolation and top-down government control—it is leveraging AI as a force multiplier, especially in the context of regional conflicts and escalating tensions with Israel. Disinformation is a key component of Iran’s AI-powered influence campaigns. The regime will likely deploy generative AI and large language models to enhance social engineering efforts, spread state-sponsored propaganda, and manipulate public opinion domestically and abroad. These operations exploit cultural and linguistic familiarity to bypass detection and increase impact, particularly targeting the West's adversarial governments and tech sectors. Source:   Recorded Future – Insikt Group®, 2025. Iran’s AI Ambitions: Balancing Economic Isolation with National Security Imperatives . [online] Available at: https://go.recordedfuture.com/hubfs/reports/ta-ir-2025-0417.pdf [ Back to Table of Contents ] AI Related Articles Reinforcement Learning’s Potential to Amplify Disinformation on Social Media A recent study  by Concordia researchers presented a reinforcement learning (RL) framework designed to actively increase polarization and disagreement on social media platforms. Using the Double Deep Q-Learning algorithm, researchers developed a method to identify and take control of key user accounts in a network. These accounts are strategically chosen to maximize the spread of divisive content, effectively turning them into tools for orchestrated disinformation and societal disruption. The study models both private and public opinion dynamics, simulating how individuals form and express beliefs online. While private opinions update synchronously, public expressions are updated asynchronously, mirroring real-world social media behavior. RL agents only observe users' public posts and follower counts, yet this limited data proves sufficient for optimizing disinformation strategies. The model demonstrates how adversarial agents can effectively exploit social platforms to deepen divides and manipulate public discourse through simulations across different network structures. This research underscores the growing threat posed by intelligent, AI-driven disinformation tactics and highlights the urgent need for robust defenses against such manipulative strategies. Source:   IEEE, M. N. Zareer and R. R. Selmic, 2025. Maximizing Disagreement and Polarization in Social Media Networks using Double Deep Q-Learning . [online] Available at: https://ieeexplore.ieee.org/document/10831299 [ Back to Table of Contents ] GENERAL REPORTS State-Sponsored Disinformation Harassing Thailand’s Democratic Movement A report  published by the Citizen Lab revealed the existence and sustained activity of JUICYJAM, a state-sponsored online influence operation designed to harass and suppress Thailand’s pro-democracy movement. Since August 2020, this campaign operated across platforms such as X and Facebook using an inauthentic persona, “Ms. 
Juk Khlong Sam,” who systematically doxxed activists by revealing private information like family details and school affiliations. The posts published during the campaign employed manipulation tactics such as distraction and smearing to shift narratives and discredit individuals, particularly during politically sensitive moments like protests or elections. These tactics were not random - leaked military and police documents in March 2025 linked the campaign directly to the Royal Thai Armed Forces or the Royal Thai Police. Despite JUICYJAM’s clear inauthenticity, social media platforms failed to curb its reach. The operation amassed significant engagement, over 100,000 followers, and tens of thousands of interactions. Citizen Lab emphasized that current platform policies inadequately address state-sponsored disinformation and coordinated doxxing, particularly in authoritarian contexts. Source:   Citizen Lab, A. Fittarelli, M. Scott, and K. Wongsapakdee, 2025. JUICYJAM: How Thai Authorities Use Online Doxxing to Suppress Dissent . [online] Available at: https://citizenlab.ca/2025/04/how-thai-authorities-use-online-doxxing-to-suppress-dissent/ [ Back to Table of Contents ] Domestic Disinformation Threatens Chinese-Australian Communities New research published by The Conversation has revealed that during Australia's current election cycle, disinformation targeting Chinese-speaking communities is being driven not by foreign interference but by commercial and domestic political interests. Platforms like WeChat and RedNote (Xiaohongshu), which serve as key news sources for Chinese-Australians, are being used to spread misleading political narratives. These include manipulated statements, exaggerated threats, and racially charged content that distort policy positions and stoke fear among migrant communities. The research uncovered tactics such as misleading visuals, distorted timelines, and sensational headlines, often used to misrepresent policies on immigration, visas, and foreign relations. One WeChat article falsely claimed a surge in Coalition support, while misreporting poll results and promoting racial narratives against the Labor Party. Another RedNote post selectively quoted Prime Minister Anthony Albanese to mislead readers about immigration policy. Disinformation on these platforms is particularly harmful due to weak regulation, limited platform transparency, and the vulnerable position of marginalized communities. With automated detection tools blocked or ineffective, the burden falls on individuals and community efforts to counter these narratives. The report calls for improved civic education and tailored media literacy to help users critically assess digital content, especially in non-English media ecosystems where disinformation can flourish unchecked. Source:   The Conversation, F. Yang, L. Heemsbergen, and R. Fordyce, 2025. This election, disinformation is swirling on Chinese social media. Here’s how it spreads . [online] Available at: https://theconversation.com/this-election-disinformation-is-swirling-on-chinese-social-media-heres-how-it-spreads-253849 [ Back to Table of Contents ] Appendix - Frameworks to Counter Disinformation EU Invests in Digital Innovation and Combating Disinformation The European Commission announced it has launched four new funding calls totaling €140 million under the Digital Europe Programme (DIGITAL). These calls aim to accelerate the deployment of AI, boost digital skills, expand digital innovation hubs, and, notably, tackle disinformation. 
Of this total, €5 million is specifically allocated to creating a European Network of Fact-Checkers. This initiative will include protective measures for fact-checkers, a shared fact-checking repository, and rapid-response capabilities for misinformation, especially during crisis situations. The funding also supports broader goals, including integrating generative AI into public administration and the agri-food sector, while enhancing regulatory compliance and supporting semiconductor and data space initiatives. Through this investment, the Commission aims to foster innovation and counter disinformation more effectively, reinforcing resilience against manipulation and false narratives across the digital landscape. Source: European Commission, 2025. Commission invests €140 million to deploy key digital technologies. [online] Available at: https://digital-strategy.ec.europa.eu/en/news/commission-invests-eu140-million-deploy-key-digital-technologies [ Back to Table of Contents ] U.S. Government Closes R/FIMI Office In a move framed as a defense of free speech, the U.S. government has announced the closure of the State Department’s Counter Foreign Information Manipulation and Interference office (R/FIMI), formerly known as the Global Engagement Center. As stated in Marco Rubio's announcement, the office, initially tasked with countering foreign disinformation, has come under fire for allegedly targeting and censoring American citizens, prompting claims that it violated foundational democratic principles. According to the announcement, the center used its $50 million annual budget to suppress dissenting opinions under the guise of combating foreign influence, blurring the line between safeguarding information integrity and stifling free expression. The closure has been positioned as a corrective measure to realign national policy with constitutional freedoms, particularly the right to free speech. A report by Politico added that critics of the closure, however, argue that dismantling R/FIMI undermines U.S. national security. The office had played a key role in exposing Russian disinformation operations in Africa and in forming international coalitions to push back against information warfare from adversaries like China and Russia. Supporters across party lines, including Senators Chris Murphy and John Cornyn, stressed the strategic importance of the office in countering foreign influence campaigns designed to deceive and destabilize democratic societies. Sources: U.S. Department of State, M. Rubio, 2025. Protecting and Championing Free Speech at the State Department. [online] Available at: https://www.state.gov/protecting-and-championing-free-speech-at-the-state-department/ POLITICO, M. Miller, 2025. State Department eliminates key office tasked with fighting foreign disinformation. [online] Available at: https://www.politico.com/news/2025/04/16/state-department-shutters-gec-foreign-disinformation-00292982 [ Back to Table of Contents ] Federal Efforts to Combat Disinformation Face Rollback Under Trump Administration As published by The Free Press, a wide-ranging federal effort to combat misinformation and disinformation, especially during the Biden administration, has come under intense scrutiny, with the Trump administration now actively shutting down dozens of programs. The debate centers on the use of more than $1.4 billion in taxpayer-funded grants and contracts since 2017, over 600 of which were awarded during Biden’s term, to research and counter disinformation.
Critics, especially from the GOP, argue that many of these initiatives effectively served as censorship tools under the guise of protecting public discourse. One such example is a now-cancelled $683,000 grant to UC Irvine to study vaccine hesitancy among Black and Latinx communities, which was withdrawn after inquiries from The Free Press . The closures follow an executive order from President Trump (for further information, see W07 March 2025 Cyfluence Report), issued on his first day back in office, calling for an investigation into federal activities that allegedly violated Americans’ free speech rights. Agencies, including the NIH, State Department, and Pentagon, are now reevaluating and terminating contracts, with language around “misinformation” being replaced by terms like “adversary propaganda.” Some programs remain active, such as a $979 million Pentagon contract with defense contractor Peraton to track foreign information threats. Others, like those involving the University of Washington and disinformation analysis firm Graphika, have faced criticism for allegedly targeting conservative voices. Although some organizations insist their work focuses solely on foreign influence or election-related falsehoods, Republicans continue to pursue deeper cuts, investigations, and policy reforms to prevent what they view as government-funded ideological enforcement masquerading as disinformation mitigation. Source:   The Free Press, G. Kaminsky and M. Rowley, 2025. Joe Biden Made More Than 600 Grants to Stop ‘Disinformation.’ Donald Trump Now Has a Plan for Them . [online] Available at: https://www.thefp.com/p/joe-biden-made-600-grants-to-stop-disinformation-misinformation-donald-trump-cancels-awards [ Back to Table of Contents ] GLOSSARY Information Operations Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt or usurp adversarial human and automated decision making." Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered as a part of Soft Warfare. Hybrid Warfare It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention. Cyber Warfare Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists. Cyfluence Attack Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc. Soft Warfare All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more. CIB Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior. FIMI The EU’s terminology for describing Foreign Information Manipulation Interference, emphasizing the foreign activity. 
Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.

Digital Impact on Discourse (DID)
A non-hostile effort to influence discourse, a term usually found in marketing articles. Here, it is used to illustrate the opposite of an HIC.

Misinformation
False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, and the deliberate dissemination of known conspiracy theories.

Disinformation
Misleading information that is spread deliberately to deceive; a subset of misinformation. The terms "misinformation" and "disinformation" have often been associated with the concept of "fake news," which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent."

Inauthentic Behavior
Defined by Facebook as "the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards". We have broadened this term to encompass all social media platforms, mutatis mutandis.

Fake users
Also known as avatars: a generic term for all users who are not legitimate social media users, i.e., bots, accounts operated by humans but not under their real identity, or accounts operated by humans under their real identity but solely to promote an agenda that is not their own.

Unidentified users
A generic term for users on social networks who are allowed to keep their real identity undisclosed (as on Twitter, for example).

Sockpuppet accounts
A sock puppet, or sockpuppet, is an online identity used for deception.

Bots
Autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns, and automated spam sometimes coexists with organic activity on the same group of accounts.

Repurposed accounts
Social media accounts that were hacked or purchased and then used for purposes different from their original ones.

Fake website
A website designed for fraudulent or scam activity that hides its real purpose.

Deep Assets
Non-human deep-cover assets, divided into two sub-categories. Deep avatars are avatars that require considerable effort to look like real people (background story, pictures, quality friends, quality content, the technical capability to take phone calls, etc.). Deep platforms are platforms that enable a wide range of activities, such as websites or Facebook pages, and that mask the real identity of whoever is behind them (unattributed). An example is a news website with daily articles and videos, represented on social media by users who identify as the website's representatives.

Real platforms
An actual entity (company, NGO, website, etc.) based on real, attributed people doing real work. An example is a private-sector influence research center that publishes research on influence operations, either globally or locally.
Astroturfing
The creation of a false impression of grassroots support by a coordinating actor.

Cyberbullying
Bullying or harassing others on the internet, particularly on social media sites. Cyberbullying can include posting rumors, threats, sexual remarks, personal information, or hate speech, and is characterized by repeated behavior and an intent to harm.

[ Back to Table of Contents ]

DISCLAIMER

Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User's use of the Product or violation of these terms.

Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • Emotional Insights and Their Potential for Tackling Fake News

Emotional Insights and Their Potential for Tackling Fake News

Author: Osnat Mokryn

The contribution explores how emotions shape experiences and enhance their significance. It presents psychological models explaining emotions and their influence on social interactions and the sharing of experiences, particularly online. It also highlights how emotions are intentionally used to amplify the spread of fake news and how emotional signatures can help mitigate their impact.
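To make the notion of an "emotional signature" more tangible, the minimal sketch below profiles a text by the share of words it draws from a handful of emotion categories. It is purely illustrative and not taken from Mokryn's contribution: the tiny lexicons and the function name emotional_signature are invented for this example, and real analyses would rely on validated emotion lexicons and far more robust language processing.

# Minimal, purely illustrative sketch: profile a text by the share of
# emotion-lexicon hits per emotion category. The lexicons below are
# made up solely for this example.
import re
from collections import Counter

EMOTION_LEXICON = {
    "anger":   {"outrage", "furious", "betrayal", "scandal"},
    "fear":    {"threat", "danger", "panic", "collapse"},
    "joy":     {"wonderful", "celebrate", "victory", "hope"},
    "sadness": {"tragic", "loss", "grief", "despair"},
}

def emotional_signature(text):
    """Return each emotion's share of all emotion-word hits in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = Counter()
    for emotion, words in EMOTION_LEXICON.items():
        hits[emotion] = sum(1 for token in tokens if token in words)
    total = sum(hits.values())
    return {e: (hits[e] / total if total else 0.0) for e in EMOTION_LEXICON}

if __name__ == "__main__":
    headline = "Outrage and panic as tragic collapse sparks new scandal"
    print(emotional_signature(headline))

Run on an outrage-heavy headline, the resulting profile is dominated by anger and fear; comparing such profiles across articles is, in spirit, how emotionally charged content can be flagged for closer scrutiny.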

  • Norway's Threat Assessment Report 2025

This week, we examine Norway's most recent national threat assessment (1), which delineates key developments and emerging dynamics within the country's security environment. The central state threat actors are Russia, China, and Iran. (2)

Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 11 (Courtesy of PST)

At the CRC, our focus is on digital hostile influence. The report illustrates well that, within a broader system of hybrid warfare, digital hostile influence is one of many tools used by state actors, and that these tools are constantly adapted to shifting conditions and countermeasures. Such hybrid means may include cyber operations, sabotage, covert procurement, economic pressure, recruitment of human sources, and targeted influence and digital hostile influence campaigns. (3) These tools do not operate in isolation; they are interconnected and mutually reinforcing. Information obtained through cyberattacks or recruited sources can feed directly into digital hostile influence efforts, for example to manipulate political and societal discourse. Acts of sabotage, including those targeting civilian infrastructure, can create fear, undermine trust in institutions, and ultimately shift public opinion. (4)

Russia

The threat posed by Russia has intensified. In 2025, the Russian intelligence services are expected to continue operating through digital influence campaigns targeting Norway's support for Ukraine. In this context, acts of sabotage are feared. As mentioned above, such actions are not aimed solely at the physical destruction of targets but are also intended to influence public opinion, for instance through media coverage and disinformation. It is also feared that information obtained through cyberattacks may be used in subsequent influence campaigns to undermine trust in political institutions and actors. (5) Noteworthy are historical references, such as a Russian information operation targeting the Nobel Peace Prize Committee. (6)

Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 26 (Courtesy of PST)

China

China's influence campaigns are increasing significantly and are part of its broader global power strategy. The country is showing a growing willingness and capability to target Western populations directly, including Norway. Digital influence operations are at the core of these efforts and are being systematically expanded. (7) A new trend is the involvement of commercial actors who, for example, sell fake user accounts, produce propaganda videos, or recruit influencers. As a result, these campaigns are gaining in professionalism and reach. The use of artificial intelligence (AI) has also grown considerably. (8) The report also refers to the "Viking United News" domain, which is attributed to the Chinese disinformation campaign known as "Paperwall". (9)

Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 15 (Courtesy of PST)

Iran

Iran seeks to exert influence through propaganda networks and by fostering radicalization. A notable example is the AnzuTeam campaign, which primarily targets Sweden. However, there are indications that individuals in Norway were also targeted via SMS. (10)

Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 25 (Courtesy of PST)
Disinformation and Influence Campaigns Ahead of the 2025 Elections

In light of the upcoming Norwegian parliamentary and Sámi parliamentary elections in 2025, an increase in threats and smears against politicians is anticipated, particularly when controversial issues attract significant public attention. Frequent public appearances during the campaign will likely make politicians more visible, and therefore more vulnerable, to confrontation. Additionally, state actors are expected to exploit the election period to exert influence, using disinformation and propaganda to shape public opinion and steer political decisions in line with their interests. (11)

[Footnotes]

(1) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(2) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 8. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(3) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, pp. 8, 25. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(4) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 12. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(5) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, pp. 12, 25. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(6) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 26. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(4) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, pp. 14-15. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(5) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, pp. 18-20. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(6) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 25. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(7) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 14. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(8) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 15. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(9) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 15. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(10) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 25. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
(11) Norwegian Police Security Service (PST), 2025. National Threat Assessment 2025, p. 47. [online] Available at: https://www.pst.no/globalassets/2025/nasjonal-trusselvurdering-2025/_nasjonal-trusselvurdering-2025_uu-engelsk.pdf [Accessed 31 March 2025].
