
  • Not All-Powerful: A Granular Perspective on Influence Networks

    In many security policy debates, hostile influence campaigns by authoritarian states such as China are portrayed as hyper-efficient, strategically orchestrated, and almost omnipotent. The report "Baybridge – Anatomy of a Chinese Information Influence Ecosystem," published by the French military research institute IRSEM in October 2025, challenges this perception. [i]

The notion of a uniformly centralized and effective Chinese disinformation apparatus is inaccurate because no such unified structure exists. Instead, a diverse range of actors operates within this ecosystem, including private, commercially driven entities that act on behalf of the state or maintain links to state resources, yet often without strategic coherence, professional execution, or operational efficiency. To assess influence operations strategically, the report calls for a deeper understanding of the specific actors, structures, interests, and operational logics involved, using a specific analytical approach. [ii]

The Actor-Specific, Granular Approach

The actor-specific, granular analytical approach views digital influence campaigns not as monolithic state operations but as complex networks of concrete actors with varying interests, capabilities, and motivations. At its core, the approach asks: Who is actually acting, within what organizational framework, using what tools, and to what end? It focuses on digital assets such as websites, social media profiles, and technical infrastructures, examining their connections, modes of control, and content strategy. This allows for the identification of the individuals, companies, or organizations involved and their actual roles and motives within the broader campaign.

The approach follows a multi-step process: first, the network structure is mapped and technical linkages are revealed. Next, digital traces are attributed to real-world actors, and their interests are analyzed. Simultaneously, the content is assessed for coherence, professionalism, and resonance with target audiences. Finally, the campaign's actual impact is evaluated: Does it exert meaningful strategic influence, or is it merely an exercise in high-volume, low-impact output?

Case Study: The Network Around Haimai and Haixun

Using this approach, the Baybridge report examines a Chinese digital influence ecosystem centered on two companies: Shenzhen Haimai Yunxiang Media Co., Ltd. (Haimai) and Shanghai Haixun Technology Co., Ltd. (Haixun). Both market PR and media packages, run multilingual websites with seemingly journalistic content, and share identical infrastructure. The report's findings imply that this is not a centrally planned and executed influence operation but a network that functions as a commercial system with propagandistic features.

Figure 1 – Infrastructure Overview [iii], Courtesy of IRSEM

At the core are Wu Yanni, co-founder of Haimai and member of Shenzhen's Municipal Party Committee Propaganda apparatus, and Zhu Haisong, owner of Haixun and member of Guangdong's Propaganda Department.

Figure 2 – Activities of Wu Yanni & Zhu Haisong in the public & private sectors [iv], Courtesy of IRSEM

The IRSEM report concludes that they are not strategic propagandists but local entrepreneurs leveraging political ties for commercial gain. Their motivations appear to be primarily financial, including contract acquisition, rent-seeking, and fulfilling bureaucratic performance metrics such as article volume and reach.
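To make the first step of this approach (mapping network structure via shared technical linkages) concrete, here is a minimal sketch of how analysts commonly cluster websites by shared infrastructure fingerprints such as hosting IPs and analytics IDs. The domains, fingerprint fields, and the two-field threshold below are invented for illustration; they are not data or tooling from the IRSEM report.

```python
# Minimal sketch: cluster websites that share technical infrastructure.
# All domains and fingerprint values are hypothetical examples,
# not data from the Baybridge report.
from collections import defaultdict

# Each site is described by observable technical fingerprints
# (hosting IP, web-analytics ID, TLS certificate hash).
sites = {
    "example-news-one.com": {"ip": "203.0.113.10", "analytics_id": "UA-111", "cert": "abc"},
    "example-news-two.net": {"ip": "203.0.113.10", "analytics_id": "UA-111", "cert": "def"},
    "unrelated-blog.org":   {"ip": "198.51.100.7", "analytics_id": "UA-999", "cert": "zzz"},
}

def shared_fingerprints(a: dict, b: dict) -> int:
    """Count how many fingerprint fields two sites have in common."""
    return sum(1 for key in a if key in b and a[key] == b[key])

def cluster(sites: dict, threshold: int = 2) -> list[set]:
    """Greedy grouping: a site joins a group if it shares >= threshold
    fingerprints with any member of that group."""
    clusters: list[set] = []
    for name, fp in sites.items():
        for group in clusters:
            if any(shared_fingerprints(fp, sites[other]) >= threshold for other in group):
                group.add(name)
                break
        else:
            clusters.append({name})
    return clusters

if __name__ == "__main__":
    for group in cluster(sites):
        print(sorted(group))
    # Sites sharing a hosting IP and analytics ID end up in one cluster,
    # a candidate "network" for the attribution steps that follow.
```

In practice, a cluster produced this way is only a starting hypothesis; the attribution and interest-analysis steps described above are what turn a technical grouping into an assessment of actors and motives.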
Why the "Baybridge" Network Is Inefficient

Despite significant technical resources, the network exhibits major deficiencies in technical, structural, and content terms. Much of the content appears machine-translated, is riddled with character-encoding issues, and lacks editorial oversight. The result is an incoherent visual and linguistic output that undermines credibility and consistency.

A core flaw identified in the report lies in the coexistence of contradictory narratives: Chinese content promotes "Positive Energy," a state-endorsed messaging style that emphasizes harmony, optimism, and trust, while the same platforms often disseminate aggressive, conflict-driven Russian rhetoric critical of Western democracies. [v] This juxtaposition, described in the report as a "narrative cacophony," creates tonal contradictions that cancel each other out. The incoherence is particularly damaging during moments of symbolic significance for China, such as diplomatic visits, when simultaneous aggressive Russian-led messaging undercuts Beijing's intended message. [vi]

Conclusion

The IRSEM report demonstrates that Chinese information operations are neither uniformly structured nor consistently effective. The "Baybridge" case study highlights a particular model in which private-sector actors with close ties to the state carry out influence operations on behalf of government entities. Their activities, however, are primarily shaped by commercial incentives and bureaucratic performance indicators. Within this logic, quantitative metrics such as content volume, geographic reach, and language variation are prioritized, while actual strategic impact on target audiences is secondary.

This setup can lead to inefficient campaigns: technically elaborate but strategically incoherent and lacking in persuasive quality. The core issue lies not in the absence of central coordination but in the disconnect between political objectives, operational execution, and content effectiveness. These shortcomings are not unique to China, but they manifest in distinctive ways within authoritarian systems.

Rather than assuming a centralized and uniformly professional influence apparatus, the report proposes an actor-specific, granular analytical approach that enables differentiation. By mapping concrete actors, structures, and operational logics, it becomes possible to evaluate the actual relevance of an influence operation and to allocate security resources more effectively and proportionally. [vii]

[Footnotes:]
[i] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] pp. 78-79. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[ii] Ibid., p. 79.
[iii] Ibid., p. 18.
[iv] Ibid., p. 42.
[v] Ibid., pp. 56-61.
[vi] Ibid., pp. 69-70.
[vii] Ibid., p. 79.

  • CRC Weekly: Cyber-based hostile influence campaigns 13th - 19th October 2025

    [Introduction] Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Highlights]
- A sophisticated narrative laundering operation was identified by tracing a fabricated news story's journey from a fringe website through Russian state media and AI-powered search results to the U.S. Congress. - NewsGuard's Reality Check
- An investigation unmasks a sprawling pro-Kremlin influence network of 139 fake news websites in France, using AI-generated content and coordinated inauthentic behavior to manipulate public discourse. - NewsGuard
- Taiwan reports a significant escalation in Chinese 'cyfluence' operations, in which millions of daily cyber intrusions are strategically combined with AI-driven disinformation campaigns to undermine state security and public trust. - The Record
- An analysis reveals how Chinese state and private actors are using sophisticated AI tools to generate fake social media profiles for influence operations targeting India's democracy. - NDTV
- A detailed report outlines Iran's campaigns in Sweden, which combine traditional espionage with cyber operations such as malware-laden apps and spear-phishing. - Eurasia Review
- Testing of OpenAI's Sora confirms its potential for creating synthetic propaganda, successfully producing realistic videos that advanced false narratives in 80% of test cases. - NewsGuard
- NATO's top information officer issues a stark warning that 'hybrid warfare has begun,' citing a combination of cyberattacks, disinformation campaigns, and physical disruptions. - Euronews
- French officials express alarm over the growing 'porosity' between the U.S. 'MAGA sphere' and Kremlin-aligned influence channels. - Le Monde

[Weekly Review]
- From Fringe Site to US Congress: Anatomy of a Kremlin Narrative Laundering Operation
- AI-Powered Disinformation: Uncovering a Pro-Kremlin Network of 139 Fake French News Sites
- Estonian Politician Weaponizes Satire in Pro-Kremlin Influence Campaign
- Kremlin Deploys Disinformation to Foment Panic with 'Kyiv Evacuation' Hoax
- NATO Warns of China's Technologically Advanced FIMI Threat
- Taiwan Confronts Chinese 'Cyfluence' as Cyberattacks and Disinformation Surge
- Analysis: China's Use of AI and Private Firms Poses Influence Threat to India
- Iran's Hybrid Threat in Sweden Combines Cyber Espionage with Dissident Targeting
- Sora's Potential for Synthetic Propaganda Highlighted in New Analysis
- NATO Official: Hybrid Warfare Against Europe 'Has Already Begun'
- Investigation Reveals UK Far-Right Facebook Groups as 'Engine of Radicalization'
- French authorities fear mounting 'MAGA sphere' intrusions into domestic politics

From Fringe Site to US Congress: Anatomy of a Kremlin Narrative Laundering Operation

A fabricated story alleging corruption within Ukrainian President Volodymyr Zelensky's inner circle illustrates a textbook case of narrative laundering. A report by NewsGuard's Reality Check traces the claim's path from a fringe, pro-Russian Turkish website to amplification by Russian state media like TASS and Sputnik.
The narrative gained a veneer of credibility after being republished by smaller websites and appearing on Microsoft's MSN news platform, despite a complete lack of evidence. The digital ecosystem played a crucial role in the operation's next phase, as screenshots and AI-generated summaries on Microsoft's Bing search engine facilitated the story's spread across social media. This hostile influence campaign achieved a significant milestone when U.S. Congresswoman Anna Paulina Luna shared the claim, citing MSN as her source. Russian state outlets then completed the propaganda feedback loop by citing the American lawmaker's statements as external validation of the original falsehood, demonstrating how contrived narratives can be pushed into mainstream discourse to achieve strategic objectives.

Source: NewsGuard's Reality Check, How Russia Laundered a Lie About Ukraine Through Congress, Available Online: https://www.newsguardrealitycheck.com/p/how-russia-laundered-a-lie-about-ukraine-through-congress

AI-Powered Disinformation: Uncovering a Pro-Kremlin Network of 139 Fake French News Sites

A network of 139 French-language websites with ties to Russia is disseminating false and misleading claims, often using AI-generated content to populate its pages. According to an article from NewsGuard, the operation is believed to be managed by John Mark Dougan, a former U.S. Marine who fled to Russia, with alleged support from Russian military intelligence (GRU). These fake websites were established between February and August 2025, using fabricated ownership details to masquerade as legitimate French media outlets. This coordinated inauthentic behavior is part of a broader Russian information operation, designated Storm-1516, which has also targeted the United States and Germany. The campaign's tactics include impersonating real journalists and spreading fabricated narratives on high-profile topics to manipulate public discourse. The operation demonstrates an evolving approach to digital propaganda that leverages a distributed network of fake platforms to generate millions of views and influence public perception on key political issues.

Source: NewsGuard, NewsGuard Rates Network of 139 Fake French News Websites with Ties to the Kremlin, Available Online: https://www.newsguardtech.com/press/newsguard-rates-network-of-139-fake-french-news-websites-with-ties-to-the-kremlin/

Estonian Politician Weaponizes Satire in Pro-Kremlin Hostile Influence Campaign

In Estonia, a pro-Kremlin politician has been repurposing satirical Russian content to spread malinformation among the nation's Russian-speaking population. A report from the Atlantic Council's DFRLab identifies Genady Afanasyev, a candidate for the KOOS party, as the central actor in this hostile influence campaign. Afanasyev adapts stories from the Russian satirical outlet Panorama.pub by localizing them to Estonian contexts, altering names and institutions to make the fabricated stories appear as factual local news. This tactic exploits gaps in media literacy by mixing political messaging with humor to cultivate anti-government sentiment and normalize pro-Kremlin narratives. The content is primarily disseminated through KOOS-affiliated Facebook groups but also spreads across VKontakte (VK), TikTok, Telegram, and X, extending its reach within the target audience.
The campaign highlights how foreign satirical content can be adapted into a targeted tool for domestic political influence, raising concerns about election integrity and the manipulation of specific linguistic communities.

Source: DFRLab, Pro-Kremlin politician weaponizes satire to engage Russian population in Estonia ahead of local elections, Available Online: https://dfrlab.org/2025/10/16/pro-kremlin-politician-weaponizes-satire-to-engage-russian-population-in-estonia-ahead-of-local-elections/

Kremlin Deploys Disinformation to Foment Panic with 'Kyiv Evacuation' Hoax

Pro-Kremlin channels have been circulating a disinformation narrative claiming the West is urging an evacuation of Kyiv due to blackouts caused by Russian strikes. This information operation, detailed in an article by EUvsDisinfo, aims to exaggerate Ukraine's energy vulnerabilities and undermine public confidence in the Ukrainian government. By propagating these falsehoods through state-linked media and messaging platforms, the campaign seeks to distort perceptions of the conflict, reduce international support, and create the impression that Ukraine cannot withstand ongoing Russian attacks. In reality, neither Ukraine nor its allies have made any such calls for evacuation. Ukrainian authorities have maintained contingency plans since 2022 and continue to demonstrate resilience against energy disruptions. EU officials have reaffirmed their full support, mobilizing hundreds of millions of euros for energy aid and civil protection. The campaign exemplifies the Kremlin's persistent use of disinformation to generate fear and uncertainty, though international support for Ukraine remains strong.

Source: EUvsDisinfo, DISINFO: The West calls on Ukraine to evacuate Kyiv amid blackouts, Available Online: https://euvsdisinfo.eu/report/the-west-calls-on-ukraine-to-evacuate-kyiv-amid-blackouts/

NATO Warns of China's Technologically Advanced FIMI Threat

China has significantly intensified its disinformation campaigns against NATO members since the COVID-19 pandemic, employing strategies designed to destabilize and weaken Western countries. According to a NATO report published by the Global Influence Operations Report (GIOR), these operations leverage advanced technologies, social media platforms like TikTok, and cooperation with Russia to amplify pro-Chinese narratives. The campaigns aim to suppress criticism of the Chinese Communist Party and infiltrate local media ecosystems, substantially increasing the speed and reach of China's information operations. The analysis emphasizes that these activities constitute a form of Foreign Information Manipulation and Interference (FIMI) that threatens Euro-Atlantic security, public trust in democratic institutions, and overall stability. By mapping key actors and tracing the tactical evolution of these campaigns, the report underscores the urgent need for coordinated countermeasures among allies to protect their populations, defend democratic processes, and mitigate the impact of Beijing's hostile influence activities.
Source: Global Influence Operations Report, NATO Report on Chinese Disinformation Reveals Escalating Threats, Available Online: https://www.global-influence-ops.com/china-disinformation-nato-report-global-influence-operations/

Taiwan Confronts Chinese 'Cyfluence' as Cyberattacks and Disinformation Surge

Taiwan's National Security Bureau (NSB) has reported a significant increase in cyberattacks and coordinated disinformation campaigns from China, aimed at undermining public trust and creating societal divisions. An article in The Record states that government networks faced an average of 2.8 million intrusions per day in 2025, a 17 percent annual increase, targeting critical infrastructure. Beijing's strategy represents a form of Cyfluence, combining these cyber intrusions with information warfare. The campaigns employ state media, an "online troll army" of fake users, and AI-generated content to spread fabricated narratives attacking the Taiwanese government and promoting pro-China messaging. The NSB report identified over 10,000 suspicious social media accounts distributing more than 1.5 million disinformation posts. This state-level strategy involves military, civilian, and private-sector hackers, with cybersecurity researchers linking activity to actors like TA415. These hybrid operations are designed to manipulate online discourse and shape public perception ahead of Taiwan's 2026 local elections.

Source: The Record, Taiwan reports surge in Chinese cyber activity and disinformation efforts, Available Online: https://therecord.media/taiwan-nsb-report-china-surge-cyberattacks-influence-operations

Analysis: China's Use of AI and Private Firms Poses Influence Threat to India

China is deploying sophisticated global influence operations that leverage disinformation, AI-generated content, and social media manipulation to polarize societies and exploit divisions within democratic systems. An opinion article published by NDTV highlights the use of Chinese state institutions and private entities like GoLaxy, which run campaigns using AI tools to generate realistic social media profiles and fabricate narratives targeting individuals in India, the U.S., and elsewhere. These operations also enlist academics, media figures, and influencers to amplify messaging and reach specific audiences. For India, the campaigns risk fueling domestic polarization, undermining democratic processes, and exerting strategic influence over regional geopolitics. The analysis emphasizes the need for India to develop proactive countermeasures, including AI-focused digital forensics, robust legal frameworks, and dedicated counterespionage strategies. As China continues to exploit the information environment, vigilance is required to protect India's domestic stability and strategic interests.

Source: NDTV, What Ashley Tellis 'Spying' Allegation Should Tell India About Chinese 'Influence Ops', Available Online: https://www.ndtv.com/opinion/what-ashley-tellis-arrest-should-tell-india-about-chinese-influence-ops-9473545

Iran's Hybrid Threat in Sweden Combines Cyber Espionage with Dissident Targeting

The Islamic Republic of Iran has conducted extensive intelligence, cyber, and influence operations in Sweden targeting dissidents, Jewish communities, and Israeli interests. A recent analysis in Eurasia Review details how these activities are part of a broader hostile campaign to advance Tehran's geopolitical objectives.
The operations employ a range of tactics, including cyber espionage through malware-laden apps and spear-phishing campaigns, assassination plots, and the infiltration of academic institutions. Iran also exploits local criminal networks and religious institutions to carry out surveillance, intimidation, and influence activities aimed at silencing opposition and evading international sanctions. These operations reveal significant vulnerabilities in Sweden's cyber defenses and immigration vetting processes. By coordinating with Russia and leveraging criminal proxies, Iran's activities threaten not only targeted communities but also the stability of Swedish society and regional security, prompting calls for more decisive countermeasures.

Source: Eurasia Review, A Growing Security Threat: Iranian Intelligence Operations In Scandinavia (Part Two: Sweden) – Analysis, Available Online: https://www.eurasiareview.com/27092025-a-growing-security-threat-iranian-intelligence-operations-in-scandinavia-part-two-sweden-analysis/

Sora's Potential for Synthetic Propaganda Highlighted in New Analysis

OpenAI's new text-to-video generator, Sora, produced realistic videos advancing false claims in 80% of test cases, including several narratives originating from Russian disinformation operations. A report from NewsGuard found that the tool allows users to create synthetic propaganda with minimal effort, enabling hostile actors to rapidly amplify misleading narratives. The analysis raises concerns about the proliferation of high-quality manipulated media and the erosion of trust in authentic content. While OpenAI has implemented guardrails such as watermarking and C2PA metadata, the investigation found these measures can be circumvented, allowing generated videos to appear authentic to unsuspecting viewers. Sora's accessibility and speed significantly lower the barrier for creating convincing fabricated content, which could be weaponized in large-scale information operations. The findings underscore the broader implications for media integrity and the challenge of countering AI-driven falsehoods in contested information environments.

Source: NewsGuard, OpenAI's Sora: When Seeing Should Not Be Believing, Available Online: https://www.newsguardtech.com/special-reports/sora-report/

NATO Official: Hybrid Warfare Against Europe 'Has Already Begun'

Hybrid warfare, combining cyberattacks, disinformation campaigns, and physical disruptions, is already underway in Europe, with Russia suspected as a key actor. In an article from Euronews, NATO's first Chief Information Officer, Manfred Boudreaux-Dehmer, warned that recent incidents such as unidentified drones forcing airport shutdowns are part of a broader strategy to disrupt daily life and weaken public morale. These non-kinetic tactics are designed to exploit digital and psychological vulnerabilities within NATO member states. Boudreaux-Dehmer noted that the Alliance is enhancing its cyber resilience through a new defense center in Belgium and increased coordination among its 32 members. He described the current environment as a constant technological and informational race between adversaries and defenders. The growing use of disinformation and other soft warfare methods highlights a strategic shift toward battles over public perception and trust, making collaboration with the private sector and academia critical for Alliance security.
Source: Euronews, Hybrid warfare has begun, senior NATO official tells Euronews, Available Online: https://www.euronews.com/2025/10/15/hybrid-warfare-has-begun-senior-nato-official-tells-euronews

Investigation Reveals UK Far-Right Facebook Groups as 'Engine of Radicalization'

A network of far-right Facebook groups in the United Kingdom is exposing hundreds of thousands of members to racist language, conspiracy theories, and extremist disinformation. An investigation by The Guardian describes these online spaces as an "engine of radicalization." The analysis of over 51,000 posts across three large public groups revealed the widespread promotion of anti-immigration tropes and dehumanizing rhetoric. A key finding is that these groups are often managed by older, otherwise ordinary Facebook users, who moderate content and disseminate disinformation across the network. This dynamic leverages peer-to-peer trust, making users more likely to perceive the content as credible compared to institutional sources. Experts warn that such online ecosystems, amplified by platform algorithms, can accelerate radicalization, a threat potentially magnified by emerging technologies like deepfakes and automated bots. Despite a review, Meta found the groups did not violate its policies, highlighting ongoing challenges in moderating extremist content at scale.

Source: The Guardian, Far-right Facebook groups are engine of radicalisation in UK, data investigation suggests, Available Online: https://www.theguardian.com/world/2025/sep/28/far-right-facebook-groups-are-engine-of-radicalisation-in-uk-data-investigation-suggests

French Authorities Fear Mounting 'MAGA Sphere' Intrusions into Domestic Politics

French authorities are increasingly concerned by the expanding influence of the American far-right "MAGA sphere" and its convergence with Russian disinformation networks targeting Europe. Le Monde reports that this concern grew after Elon Musk amplified a claim by Telegram's founder that French intelligence attempted to censor certain accounts, an allegation officials viewed as pro-Russian propaganda. In response, France's Foreign Ministry launched an X account to counter such online falsehoods. A French official described the phenomenon as a "porosity" between U.S. far-right and Kremlin-aligned influence channels, noting that narratives on migration, freedom of expression, and the war in Ukraine spread rapidly across these ecosystems. The French government now views the MAGA-aligned media sphere, including outlets like Breitbart News and platforms like X, as a growing source of foreign information manipulation and interference that could be used to sway upcoming French elections.

Source: Le Monde, French authorities fear mounting 'MAGA sphere' intrusions into domestic politics, Available Online: https://www.lemonde.fr/en/international/article/2025/10/14/french-authorities-fear-mounting-maga-sphere-intrusions-into-domestic-politics_6746437_4.html

[CRC Glossary]

The modern information environment is projected only to grow in complexity. Across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability.
Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.

[Download Report]

  • CRC Spotlight: Ride-Hailing Apps as Vehicles of Foreign Solidarity and Potential Influence Operations

    In August and September 2025, a series of civil and political upheavals, primarily in Asian countries, shocked regional observers and elites. Viral images featuring the 'One Piece' pirate flag, adopted as a symbol by protestors, made front-page news, and social media was soon flooded with cross-country messages of support and solidarity.

Interestingly, a key characteristic of this wave of protests was the role played by popular ride-hailing and delivery apps, as well as the 'gig economy' workers who rely on them. Platform users became central to the movement's core narratives while being supplied in real time by supportive netizens.

In this CRC Spotlight article, we examine the potential operational implications of this development: how commercial apps can serve as channels for on-the-ground support and how they might represent a new vector for Influence Operations. The platforms and their users are already vulnerable to exploitation, with active "Fraud-as-a-Service" networks using tactics like account takeover (ATO) and location spoofing for financial gain.

Although this wave of protests appears to be organic, existing Tactics, Techniques, and Procedures (TTPs) could easily be repurposed from financial fraud to political interference, such as astroturfing support for unrest. This emerging threat is amplified by the difficulty in attribution inherent to the spontaneous, grassroots nature of platform-based aid.

With gig economy platforms becoming de facto civic infrastructure worldwide, their potential for malign socio-political exploitation is outpacing the regulatory frameworks needed to mitigate the risks. Read the full report below for in-depth analysis. [Download Full Report here]
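As a hedged illustration of one fraud TTP mentioned above, the sketch below shows how a platform might flag location spoofing: if two consecutive GPS pings from the same courier account imply a physically impossible travel speed, the account is flagged for review. The coordinates, timestamps, threshold, and function names are hypothetical, invented for this example rather than drawn from the Spotlight report.

```python
# Minimal sketch of a location-spoofing heuristic: flag accounts whose
# consecutive GPS pings imply a physically impossible travel speed.
# All coordinates, timestamps, and the speed threshold are hypothetical.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    lat: float
    lon: float
    t: float  # Unix timestamp in seconds

def haversine_km(a: Ping, b: Ping) -> float:
    """Great-circle distance between two pings in kilometers."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def is_spoofing_suspect(pings: list[Ping], max_kmh: float = 150.0) -> bool:
    """True if any hop between consecutive pings exceeds a plausible speed."""
    for prev, cur in zip(pings, pings[1:]):
        hours = (cur.t - prev.t) / 3600
        if hours > 0 and haversine_km(prev, cur) / hours > max_kmh:
            return True
    return False

# Example: a "courier" jumping ~670 km (Jakarta to Surabaya) in ten minutes.
trace = [Ping(-6.2, 106.8, 0.0), Ping(-7.25, 112.75, 600.0)]
print(is_spoofing_suspect(trace))  # True – flag for manual review
```

Heuristics of this kind catch only crude spoofing; the full report discusses why attribution becomes much harder when activity is blended with genuine grassroots platform use.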

  • CRC Weekly: Cyber-based hostile influence campaigns 6th - 12th October 2025

    [Introduction] Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Review highlights]
- Russia is weaponizing grief by using AI to create deepfake "resurrections" of fallen soldiers, turning personal tragedy into state propaganda. - CyberNews
- A Russian influence campaign generated 200,000 social media mentions overnight, creating "informational chaos" to deflect blame for a drone incursion. - Le Monde
- Chinese chatbots are being used for espionage, harvesting user data for microtargeted propaganda aimed at sensitive groups like military personnel. - Politico
- A Chinese influence campaign using fake social media accounts and a pseudo-local media outlet to undermine the US-Philippine alliance was uncovered. - Reuters
- The UK's new national security adviser met with a group that the U.S. State Department has labeled a "malign" part of Beijing's foreign influence network. - The Telegraph
- An AI-enabled influence operation, synchronized with military strikes, used deepfake videos and impersonated media to incite revolt in Iran. - Citizen Lab
- Chinese and Russian state media launched coordinated campaigns to frame Taiwan's president as a provocateur, distorting his calls for deterrence. - DisinfoWatch
- The U.S. has dismantled key defenses like the Foreign Malign Influence Center, creating a vacuum exploited by adversaries. - The Washington Post
- TikTok's algorithm has enabled manipulated videos and propaganda to spread rapidly across Africa, fueling pro-junta sentiment during recent coups. - LSE

[Week in Review]

AI-Generated "Ghosts": Russia's New Front in Digital Propaganda

The use of artificial intelligence in Russia to create propaganda from private grief is examined in an article from CyberNews. For a fee ranging from $35 to $60, families of deceased soldiers can commission AI-generated videos in which their loved ones appear to speak, embrace them, or ascend to heaven. These services, some of which reportedly handle hundreds of orders daily, produce deepfake clips that are then rapidly disseminated across Russian social media platforms, including Telegram and VKontakte. While these videos may provide a "balm effect" for grieving families, especially those unable to recover the bodies of soldiers, Ukrainian outlets like StopFake.org have warned against the manipulation of emotions inherent in such content. The practice represents a novel form of digital propaganda, turning personal mourning into a tool for reinforcing state narratives by creating a sanitized depiction of wartime loss.

Source: CyberNews, 'Russian AI resurrection videos turn grief into propaganda', Available Online

How Russian Bot Networks Assaulted Czech Democracy Online

During the October parliamentary elections in the Czech Republic, Russia engaged in coordinated disinformation campaigns aimed at interfering with the democratic process. A report by EUvsDisinfo details how networks of TikTok bot accounts and pro-Russian websites saturated Czech online spaces with propaganda.
These operations sought to portray Vladimir Putin in a positive light, legitimize the war in Ukraine, and amplify anti-Western and anti-establishment narratives. Investigations by Czech media found that these propaganda sites published more articles daily than the country's most established news outlets. After the election, Russian state-controlled media continued to push misleading narratives, falsely claiming the results indicated a rejection of the EU. This digital interference campaign also included accusations from Kremlin-linked sources that the European Union was itself guilty of election interference, a common tactic of projecting blame onto adversaries.

Source: EUvsDisinfo, 'For the Kremlin, elections are golden opportunities for interference', Available Online

A Digital Blitz: Russia Combines Drone Incursion with an Information Attack on Poland

Following a Russian drone incursion into Polish airspace, the country was targeted by an unprecedented and coordinated disinformation attack, as detailed in an article published by Le Monde. The operation aimed to generate "informational chaos" by saturating social media algorithms with false narratives at a massive scale, resulting in up to 200,000 mentions in one night. Primarily driven by coordinated Russian and Belarusian accounts on platforms like X and Facebook, the campaign sought to divert blame by portraying the incident as a Ukrainian provocation designed to draw NATO into the conflict. Simultaneously, it characterized the Polish military and NATO as "ineffective and powerless." Experts view this incident as a significant escalation in Russia's hybrid war, demonstrating a new phase of information warfare. The influence operation's reach extended to France, Germany, and Romania, highlighting its regional scope and its strategic goal of eroding European support for Ukraine.

Source: Le Monde, 'Poland hit by unprecedented disinformation attack following Russian drone incursion', Available Online

Chinese-Developed Chatbots Leave User Information Vulnerable to Exploitation

China's substantial investment in artificial intelligence is fueling concerns that extend beyond economic competition into the realms of cyberwarfare, espionage, and disinformation. According to an article from Politico, Beijing's integration of AI into state-linked hacking groups could amplify the scale and sophistication of cyberattacks on U.S. infrastructure. In parallel, Chinese-made chatbots present espionage risks by harvesting user data, which could be weaponized for tailored disinformation campaigns targeting sensitive sectors such as first responders or military personnel. Research indicates that leading Chinese chatbots, including DeepSeek, Baidu's Ernie, and Alibaba's Qwen, consistently produce content that aligns with Beijing's political narratives, subtly reinforcing state messaging. Such platforms pose a risk of shaping public opinion, particularly as affordable Chinese AI services become more widespread in developing nations, creating new vectors for digital influence.

Source: Politico, 'Inside the Chinese AI threat to security', Available Online

Beijing's Shadow Campaign to Fracture the US-Philippine Alliance

A Chinese-funded Foreign Information Manipulation & Interference (FIMI) campaign in the Philippines was orchestrated to undermine local support for the country's alliance with the United States.
A Reuters investigation uncovered that the operation was managed by the marketing firm InfinitUs Marketing Solutions, which received direct funding from China's embassy in Manila to "guide public opinion." The campaign utilized fake social media accounts posing as Filipinos to amplify pro-China and anti-American content, as well as a fabricated media outlet named Ni Hao Manila. These accounts spread misinformation regarding U.S. military cooperation, attacked Philippine lawmakers critical of China, and disseminated false narratives on other geopolitical issues. Philippine officials warned that such digital influence operations aim to make Manila "compliant" with Beijing's strategic interests, highlighting the information war playing out in a region of significant geopolitical importance.

Source: Reuters, 'How China waged an infowar against U.S. interests in the Philippines', Available Online

UK Security Adviser's Past Meetings with China Influence Group Raise Concerns

Sir Keir Starmer's new national security adviser, Jonathan Powell, is facing scrutiny over past meetings with a Chinese organization identified by U.S. intelligence as part of Beijing's foreign influence network. A report in The Telegraph revealed that in March 2024, Powell met with the Chinese People's Association for Friendship with Foreign Countries (CPAFFC), an organization the U.S. State Department has described as "malign." This group is linked to Chinese Communist Party efforts to co-opt global institutions and shape international narratives. U.S. officials have warned that CPAFFC and associated think tanks like the Grandview Institution are instrumental to China's "people-to-people" diplomacy, a strategy used to promote pro-Beijing messaging. Powell's repeated visits to China and speaking engagements have fueled concerns that these exchanges may inadvertently legitimize entities associated with disinformation and political manipulation campaigns, coming at a time of heightened sensitivity over Chinese interference in the UK.

Source: The Telegraph, 'Powell met ''malign'' Chinese group before joining Starmer's team', Available Online

AI-Augmented Influence Operation Targets Regime Change in Iran

A covert network known as PRISONBREAK has been executing an AI-enabled influence operation targeting Iranian audiences with calls for revolt and fabricated media. An analysis from Citizen Lab details how the campaign utilized over 50 inauthentic profiles on X to distribute deepfake video content and impersonate media outlets, aiming to stoke domestic unrest. The operation's digital activities appear to have been tightly synchronized with kinetic military actions, such as the June 2025 Evin Prison bombing, employing tactics of narrative seeding and amplification in real time. While definitive attribution is challenging, Citizen Lab assesses that the operator is most likely an Israeli government agency or a contractor, citing the advanced knowledge of military operations and coordinated narrative timing. This case highlights the evolving threat of AI-augmented disinformation in geopolitical conflicts, demonstrating how digital influence campaigns now operate alongside traditional warfare.
Source: Citizen Lab, 'We Say You Want a Revolution: PRISONBREAK – An AI-Enabled Influence Operation Aimed at Overthrowing the Iranian Regime', Available Online

China and Russia Coordinate False Narratives Against Taiwan

Chinese and Russian state media outlets have engaged in coordinated campaigns to distort the statements of Taiwanese President Lai Ching-te and portray Taiwan as a source of regional instability. According to a recent DisinfoWatch analysis, on October 8, 2025, China's Global Times accused President Lai of "seeking independence through military means," a claim echoed by Russian state media. This narrative directly contradicted Lai's actual remarks, which stressed deterrence and called on Beijing to renounce the use of force. The disinformation campaign also framed the People's Liberation Army's coercive military drills as a stabilizing measure. Furthermore, Beijing has manipulated international law by falsely equating its "One China" principle with UN Resolution 2758, which pertains to China's UN seat but does not determine Taiwan's sovereignty. These coordinated digital narratives represent a joint effort to isolate Taiwan and legitimize aggressive actions in the region.

Source: DisinfoWatch, 'Converging False PRC–Russian Narratives Target Taiwan and President Lai', Available Online

United States Cedes Ground in the Global Information War

The United States has effectively "disarmed" in the information war, leaving it vulnerable to foreign disinformation from Russia, China, and Iran. As reported by The Washington Post, the dismantling of key defenses, such as the Foreign Malign Influence Center, has created a vacuum that adversaries have exploited by spreading fabricated content, including AI-generated images and videos. Analysts at NewsGuard identified thousands of social media posts from state-backed media that aimed to deepen polarization by circulating conflicting lies. The impact is measurable, with surveys showing that a third of Americans believe at least one significant Russian falsehood about Ukraine. The article notes that Russian disinformation networks, like the Pravda Network, have seeded millions of false stories, some of which are now being used to "infect" large AI models that subsequently repeat these lies as fact, amplifying their reach and perceived credibility.

Source: The Washington Post, 'How foreign nations are gaslighting Americans', Available Online

TikTok's Ascendance in Africa Reshapes Media with Misinformation Risks

TikTok has rapidly become one of Africa's most influential platforms for news consumption, bringing with it a significant surge in misinformation and political propaganda. A news piece by LSE describes how millions across the continent now rely on TikTok for information, while trust in traditional media outlets declines. The platform's algorithms, designed to maximize engagement, enable manipulated videos and misleading content to achieve viral reach before they can be verified. This digital environment has had tangible real-world consequences, such as bolstering pro-junta sentiment during coups in Niger and Mali and fueling political division during elections in South Africa and Nigeria.
While countermeasures are emerging, such as South Africa's partnership with TikTok's election center and Ghana's fact-checking networks, the report concludes that combating disinformation on the platform will require stronger digital literacy, transparent moderation, and renewed investment in credible journalism.

Source: LSE, 'TikTok is becoming Africa's newsroom', Available Online

[Glossary]

The modern information environment is projected only to grow in complexity. Across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.

[Download Report]

  • Dancing with Cyfluence – Travolta, Telegram & the Moldovan Leak 

    In this week's follow-up, we return to Moldova, where the recent parliamentary elections once again underscored the country's vulnerability in its political information space. As noted in our previous coverage of influence attempts surrounding the Moldovan vote (more information can be found [here]), competing narratives and external actors shaped much of the pre-election atmosphere.

Against this backdrop, a remarkable incident occurred, one that appears, with high probability, to have been directed against a suspected Russian influence campaign: a likely cyfluence counteroperation targeting the pro-Russian network of oligarch Ilan Shor and its affiliated organization, the Victorie Bloc. On 3 September, internal data from these structures appeared online, triggering a chain reaction that severely disrupted Shor's political machinery and exposed the operational mechanics behind what is assessed to have been a foreign-directed influence apparatus. The leak represents one of the clearest intersections of cyber intrusion and influence strategy observed during this election cycle. [i]

Who Is Ilan Shor?

Ilan Shor, a Moldovan businessman and politician, fled to Russia several years ago after facing extensive corruption charges. From exile, he remained politically active and established the Victorie Bloc in Moscow, a distinctly pro-Russian political platform aimed at regaining influence in Moldova through affiliated candidates. Shor is widely regarded as a symbolic figure of Moldova's pro-Russian current: financially well-connected, politically ambitious, and closely tied to Kremlin-linked networks.

The Data Leak

On 3 September, reports surfaced that data from two Shor-affiliated companies, A7 and Anykey LLC, had been published. [ii]

Figure 1 – Screenshot of the folders of the leaked data

The files first appeared on the encrypted cloud service ProtonDrive [iii] and were later disseminated via Telegram channels. They contained internal communications, confidential financial records, and expenditure summaries for campaign activities. Particularly notable were chat logs in which Shor, using the codename "Travolta," commented on operational issues. The materials also included lists of names, phone numbers, and addresses of individuals allegedly paid to organize protests or promote pro-Russian messaging. The documents revealed that the Victorie Bloc functioned not merely as a political organization but as a structured, financed [iv], and centrally coordinated influence network.

Figure 2 – Leaked data: paid individuals, including names, tasks, and monthly payments [v]

Indicators of a Cyfluence Counteroperation

The following phase-based analysis outlines the structure and sequencing of the operation to illustrate how cyber-technical and influence-oriented components were combined. Breaking the event into three phases, intrusion, exposure, and amplification, allows for a clear understanding of how technical compromise evolved into a coordinated perception operation. We use this analytical framework to identify hybrid operations that merge cyber capabilities with psychological and narrative objectives. The incident occurred only days before Moldova's parliamentary elections and displays key indicators of coordinated cyber and information activity. Data from entities linked to Ilan Shor and the Victorie Bloc were exfiltrated, publicly released, and then used to directly engage individuals named in the dataset.
The timing and sequencing suggest the operation's intent was not financial gain or espionage, but the disruption and delegitimization of a Russian-backed influence network.

Cyber Intrusion and Data Exfiltration

The first phase likely involved unauthorized access to internal systems of the Shor-affiliated companies A7 and Anykey LLC. Significant volumes of data, including financial ledgers, payment records, and personally identifiable information, were exfiltrated and uploaded to ProtonDrive, an encrypted cloud-sharing platform. The material was subsequently distributed via Telegram channels and closed online groups, ensuring rapid dissemination while maintaining anonymity and non-attribution for the perpetrators. This stage established the technical foundation for the influence component that followed.

Exposure and Doxxing Component

In the second phase, the attackers deliberately released personal information, names, contact details, and payment histories of individuals associated with the Victorie Bloc. This elevated the incident from a typical hack-and-leak to a hybrid operation with doxxing characteristics. Immediately after publication, numerous individuals listed in the leak received direct messages stating:

"The Victory Bloc is broken. You will no longer be paid. Your data is public. Russia has betrayed you." [vi]

The messages were designed to have a psychological impact. They combined exposure and intimidation to pressure individual supporters of the Victorie Bloc, undermine their trust in the organization's leadership, and weaken the internal cohesion between coordinators, financiers, and field operatives. This targeted approach effectively amplified the disruptive impact of the data release.

Narrative Amplification and Public Signaling

The third phase focused on narrative shaping and institutional signaling. The leaked documents appeared to show direct financial and organizational connections to Russian actors, framing the Victorie Bloc as a foreign-directed influence structure. Media outlets and social channels picked up these narratives, turning a data breach into a strategic reputational and operational collapse. Authorities, including the Central Electoral Commission and CERT-GOV-MD, Moldova's national cybersecurity agency, launched preliminary reviews to verify the authenticity of the materials and assess potential election interference. This official response further amplified the visibility and perceived legitimacy of the operation's outcomes.

Analytical Assessment

The coordination of cyber intrusion, targeted disclosure, and psychological messaging aligns with the structure of a cyfluence counteroperation: an integrated activity designed to weaken or neutralize a hostile influence campaign through synchronized cyber and perception mechanisms. In this case, the campaign can be assessed with high confidence as successful, given the rapid breakdown of internal communications, loss of financial control, and subsequent reputational collapse of the targeted network. Together, these components placed significant pressure on participants, disrupted internal communication processes, and eroded the organization's stability. Moreover, the operation publicly reframed the Victorie Bloc as a foreign-directed entity, sharply reducing its domestic legitimacy and public support, a decisive influence effect extending beyond the technical breach itself.

Attribution and Context

Attribution remains undetermined.
The operation could plausibly have been conducted by regional hacktivist collectives seeking to counter Russian interference, or by a state-affiliated actor executing a preemptive countermeasure. Regardless of origin, the case illustrates a mature application of cyfluence methodology: the deliberate integration of cyber intrusion, information exposure, and psychological leverage to disrupt an active influence campaign in real time.

Outcome

In the aftermath, communication within the Victorie Bloc collapsed, financial flows were interrupted, and several key figures publicly distanced themselves from the organization. Public debate shifted away from the Bloc's messaging and toward its exposure as a mechanism of Russian influence. The operation achieved dual objectives: operational neutralization and narrative delegitimization, significantly reducing the reach of a foreign-backed political campaign on the eve of the vote.

[Footnotes:]
[i] WhereIsRussia.Today, 2025. Collapsing from the inside: Ilan Shor's network crumbles amid data leak. [online] Published 24 September 2025. Available at: https://whereisrussia.today/feed/politics/ilan_shors_network_crumbles_amid_data_leak
[ii] Moldova1, R. Lozinschi-Hadei, 2025. Telegram leaks: Șor's firms used to undermine Moldova's democracy. [online] Published 3 September 2025. Available at: https://moldova1.md/p/56415/telegram-leaks-sor-s-firms-used-to-undermine-moldova-s-democracy
[iii] Publicly accessible ProtonDrive link associated with the leak: https://drive.proton.me/urls/PAEYV2N61R#rxaNKy4NtPNL
[iv] Elliptic, 2025. The A7 leaks: The role of crypto in Russian sanctions evasion and election interference. [online] Published 26 September 2025. Available at: https://www.elliptic.co/blog/the-a7-leaks-the-role-of-crypto-in-russian-sanctions-evasion-and-election-interference
[v] Source of the image: WhereIsRussia.Today, as in [i].
[vi] Moldova1, as in [ii].
[vii] WhereIsRussia.Today, as in [i].

  • CRC Weekly: Cyber-based hostile influence campaigns 29th September - 05th October 2025 

    [Introduction] Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Review highlights]
- Russia's foreign intelligence service (SVR) is now issuing public statements to amplify pro-Kremlin narratives, a significant shift in its operational tactics. - EUvsDisinfo
- A Russian-backed network created a fake news website using AI-generated videos and media impersonation to spread false narratives about the French president. - Le Monde
- Russia's Ottawa embassy conducted an information campaign accusing Canada of covering up a fabricated nuclear incident allegedly perpetrated by Ukrainian forces. - DisinfoWatch
- Russia used social media influencers to promote a deceptive work-study program that lured African women into working in its military drone factories. - EUvsDisinfo
- The Kremlin is executing a multifaceted information campaign to deny and reframe its systematic abduction of over 20,000 Ukrainian children. - EUvsDisinfo
- A new report argues the 2025 Gaza flotilla was a coordinated information operation by groups with ties to Hamas, using humanitarianism to shape opinion. - Global Influence Operations Report
- The failure of Moscow's extensive interference efforts in Moldova highlights the declining impact of its information operations in countries it considers its near abroad. - Atlantic Council
- An EU-led digital literacy camp is equipping youth in Bosnia and Herzegovina with critical thinking skills to identify and counter manipulated information. - EU Delegation to Bosnia and Herzegovina

[Weekly Review]

Russia's Foreign Intelligence Service Adopts Public Role in Spreading False Narratives

Russia's foreign intelligence service (SVR), an agency that typically operates covertly, has recently become a public-facing vehicle for pro-Kremlin disinformation. According to an EUvsDisinfo analysis, the SVR has begun issuing official statements that amplify false narratives targeting NATO, the EU, and Western governments. This tactic marks a shift from the standard practice of circulating such claims through state media or deniable covert outlets. The SVR's new role was prominent during Moldova's September 2025 elections, where it baselessly accused the EU of planning a NATO-backed occupation following the decisive victory of a pro-EU party. The SVR has also spread disinformation in Serbia, alleging an EU-orchestrated "Maidan-style" coup, and in Georgia, where it claimed the U.S. and EU were plotting a "color revolution" while smearing NGOs with fabricated allegations. These actions represent a strategic change, leveraging the perceived authority of an intelligence agency to legitimize disinformation openly.
Source: EUvsDisinfo, "The Shadowy SVR Openly Pushes Disinformation Narratives," Available Online Top of Page Moldova's Pro-EU Party Secures Victory Amidst Coordinated Cyberattacks Moldova's pro-European Action and Solidarity Party (PAS) won a parliamentary majority despite a campaign of Russian interference and cyberattacks designed to destabilize the vote. A report from The Record detailed how authorities faced coordinated hoax bomb threats at polling stations and sustained cyberattacks on government infrastructure, including DDoS incidents targeting the Central Electoral Commission website and government cloud systems. These operations, coupled with disinformation campaigns aimed at Moldovan voters abroad, sought to intimidate the electorate and suppress the diaspora vote. According to France 24, the Kremlin was identified as the central actor in the interference, with the Moldovan government accusing Moscow of spending hundreds of millions in "dirty money" on vote-buying and other destabilization efforts. While the attacks were blocked in real time without disrupting the voting process, analysts warned that the Kremlin could still attempt to bribe new members of parliament to undermine the formation of a stable pro-European government. Source: The Record, "Moldova's Pro-EU Party Wins Election Amid Cyberattacks and Kremlin Interference," Available Online Source: France 24, "Moldova's pro-EU party on course to win pivotal election mired in claims of Russian meddling," Available Online Top of Page Russian-Backed Network Deploys AI and Impersonation in Disinformation Campaign A Russian-backed influence network known as Storm-1516 created a fake news website to impersonate French media outlets and spread pro-Kremlin disinformation. An article in Le Monde revealed that the site, called BrutInfo, mimicked the branding of Brut and Le Monde to publish false stories, including a fabricated claim that President Emmanuel Macron was building a €148 million bunker. This operation utilized AI-generated videos, such as a fake interview with a supposed construction worker, to add a veneer of credibility. The network's tactics also include employing paid actors, plagiarizing legitimate articles, and placing propaganda in low-standard international media outlets that accept paid contributions. France's disinformation watchdog, Viginum, reported that content from Storm-1516 is frequently amplified by a network of pro-Kremlin influencers and paid accounts, extending the reach of its digitally sophisticated disinformation campaigns. Source: Le Monde, "A fake news website impersonates Le Monde and Brut," Available Online Top of Page Russian State Actors Accuse Canada of Concealing Fabricated Nuclear Incident The Russian Embassy in Ottawa and the state news agency TASS initiated a disinformation campaign accusing Ukraine of shelling the Zaporizhzhia Nuclear Power Plant (ZNPP) and claiming Canada was covering up the supposed crime. A DisinfoWatch report details how the embassy's official statements labeled Ukrainian President Volodymyr Zelensky a "maniacal terrorist" and asserted that the International Atomic Energy Agency (IAEA) was documenting Ukrainian provocations. This narrative, however, contradicts independent monitoring and recent IAEA updates, which confirmed military activity around the plant but did not assign blame, instead urging both sides to cease hostilities in the area.
Russia's claims ignored evidence of potential sabotage by its own occupying forces and misrepresented the IAEA's neutral role. No credible evidence was found to support the accusation that Canada was involved in covering up a non-existent nuclear crime, with its official position remaining aligned with its allies. Source: DisinfoWatch, “Russian Embassy and TASS claim Canada is covering up non-existent Kiev nuclear crime,”   Available Online Top of Page Russia Exploits Social Media Influencers for Deceptive Military Recruitment Russia has conducted a disinformation campaign across Africa that uses social media influencers to lure women into its war production industry under false pretenses. According to an article  by EUvsDisinfo , the campaign promoted the “Alabuga Start” program, which was advertised on TikTok, Instagram, and YouTube as a work-study opportunity in fields like hospitality.   In reality , recruits were sent to work in drone factories supporting Russia’s war in Ukraine, where they faced grueling conditions and health risks. When Nigerian media exposed the scheme, Russian embassies and pro-Kremlin channels mounted a coordinated response, dismissing the reporting as “Western disinformation.” This counternarrative was amplified by pan-Africanist influencers, who reframed the story as a Western plot against Russia-Nigeria relations, thereby creating an illusion of widespread support for the program while obscuring the evidence of exploitation. Source: EUvsDisinfo, “From social media to weapon factories: how Russia traps African women in war production,”   Available Online Top of Page Kremlin Pivots to Election Fraud Narratives After Failed Interference Following the victory of Moldova’s pro-EU party, the Kremlin and its media affiliates executed a rapid pivot in their disinformation strategy, shifting from pre-election accusations of corruption to post-election claims of widespread voter fraud. As reported by NewsGuard Reality Check , this strategy involved disseminating fabricated evidence across social media platforms like X and through state-owned outlets such as TASS.   The campaign  circulated deceptive videos, including one repurposed from Azerbaijan that falsely depicted ballot stuffing in Italy, in an attempt to delegitimize the election results. This effort, which showed signs of the Storm-1516 influence operation, ultimately failed to sway the outcome, demonstrating the limits of Russian influence and the resilience of Moldova's democratic institutions. In a separate but related effort, a DFRLab   report  identified a pro-Russian campaign codenamed "Matushka" that exploited Orthodox Christian beliefs to influence voters. The operation created a network of 67 channels on Telegram, TikTok, and other platforms, initially sharing religious content before pivoting to political messaging that framed European integration as a threat to the church. This strategy aimed to mobilize a religious voter base by suggesting that voting for pro-Kremlin candidates was a religious duty to protect traditional values from "moral decay." 
Source: NewsGuard Reality Check, “Russians Cry Fraud After Failing to Sway Moldovan Election With Disinformation,”   Available Online DFRLab, “Targeting the faithful: Pro-Russia campaign engages Moldova’s Christian voters,”   Available Online Top of Page Putin’s Valdai Speech Outlines a Global Disinformation Strategy At the Valdai Club, a Kremlin-controlled think tank, Russian President Vladimir Putin delivered a speech outlining a strategic disinformation campaign aimed at Western nations. A publication  by DisinfoWatch  analyzes how Putin and state media outlets are promoting a narrative that frames Russia as a moral "counterweight" to a decadent and declining Western liberal order.   The core strategy  involves driving a "culture-war wedge" by weaponizing issues like "gender terrorism" to generalize about systemic Western collapse and legitimize Moscow’s vision of a "polycentric," illiberal world. Specific disinformation tactics included inverting causality by labeling European rearmament a "provocation" and using fearmongering to deter military support for Ukraine. This coordinated information warfare campaign serves multiple goals: reassuring Russia’s domestic audience, encouraging sanctions fatigue among EU voters, and advancing Moscow’s revisionist foreign policy. Source: DisinfoWatch, “DisinfoDigest: Decoding Putin’s Valdai Speech,”   Available Online Top of Page Kremlin FIMI Campaign Aims to Obscure Child Abduction War Crimes The Kremlin is leveraging a Foreign Information Manipulation and Interference (FIMI) campaign to obscure its systematic abduction of over 20,000 Ukrainian children, a policy that constitutes a war crime. According to EUvsDisinfo , this   operation  relies on a three-pronged disinformation strategy: outright denial of the abductions, falsely reframing the kidnappings as humanitarian "evacuations," and claiming to facilitate family reunification while actively erasing the children’s identities through forced adoptions and citizenship changes. Key actors leading this effort include Russian President Vladimir Putin and his 'Commissioner for Children's Rights,' Maria Lvova-Belova, both of whom face arrest warrants from the International Criminal Court for their role in the unlawful deportations. In response, 38 countries, alongside the Council of Europe and the EU, have called for the children's immediate return, and an international coalition has been launched to address Russia's actions. Source: EUvsDisinfo, “At the 80th UNGA, Remember Russia’s War on Ukrainian Children,”   Available Online Top of Page Gaza Flotilla Analyzed as Coordinated Information Operation The 2025 Global Sumud Flotilla, a maritime campaign challenging Israel’s blockade of Gaza, functioned as both a humanitarian initiative and a coordinated information operation driven by a network aligned with the Muslim Brotherhood. A report  from the Global Influence Operations Report (GIOR)  argues that while the flotilla was framed publicly as a humanitarian intervention, its key organizers—including Turkey’s İHH and the Freedom Flotilla Coalition—have long-standing ties to Hamas.   According to the analysis , these groups leveraged humanitarian rhetoric to shape global opinion and legitimize their political activism. The report contends that the flotilla demonstrates a 15-year evolution of Gaza solidarity activism, which has transformed from grassroots convoys into a transnational influence ecosystem connecting NGOs with sympathetic states like Turkey, Qatar, and Malaysia. 
This suggests that humanitarian activism can serve as a vehicle for ideological influence, blurring the line between civil solidarity and coordinated campaigns. Source: Global Influence Operations Report, “The Global Sumud Flotilla of 2025: Humanitarian Activism or Islamist Influence Operation?,”   Available Online Top of Page Study Finds AI Misinformation Has Dual Effect on Media Trust Exposure to AI-generated misinformation reduces overall trust in media but can simultaneously increase engagement with credible news sources, according to a field experiment involving 17,000 readers. A study , published in TechXplore  and conducted by researchers from multiple universities in partnership with German newspaper Süddeutsche Zeitung , presented readers with pairs of real and AI-generated images.   The findings  revealed this dual effect: while trust declined, readers who became aware of the difficulty in distinguishing real from fake content subsequently visited the newspaper's digital platforms more frequently and demonstrated better information retention. This effect was most pronounced among individuals with lower prior interest in politics. The implications suggest that while AI-driven misinformation threatens public trust, it also creates an opportunity for reputable media outlets to deepen audience engagement by educating them about the challenges of the modern information environment. Source: TechXplore, “Reader survey shows AI-driven misinformation can reduce trust, but increase engagement with credible news,”   Available Online Top of Page AI-Driven Disinformation Accelerates Democratic Decay Across Africa Artificial intelligence is increasingly being deployed as a tool to destabilize democratic processes and support authoritarianism in Africa. An article  from the LSE Africa at LSE blog  highlights how AI-generated deepfakes and coordinated disinformation campaigns fueled polarization and public skepticism during Nigeria's 2023 elections.   In the Sahel region , AI-driven content, often linked to Russian-influenced networks, has been used to glorify military juntas and undermine calls for civilian governance. This trend is occurring in a context of declining public faith in democracy across the continent, with support for democratic rule having fallen by seven percentage points in the last decade. AI-fueled disinformation acts as a force multiplier for this democratic decay by accelerating the spread of false narratives, eroding trust in institutions, and overwhelming citizens' ability to discern fact from fabrication, underscoring the need for global governance frameworks. Source: LSE Africa at LSE blog, “In the age of artificial intelligence, democracy needs help,”   Available Online Top of Page AI Weaponized to Threaten Democratic Processes and Critical Systems The increasing accessibility of artificial intelligence is enabling malicious actors to undermine elections, manipulate markets, and compromise critical systems. According to an article  in TechXplore , AI-generated content like deepfakes and fake social media profiles has been used to spread disinformation and influence public opinion, leading to events such as the suspension of the 2024 Romanian presidential elections due to foreign interference.   Beyond elections , AI systems trained on biased data have resulted in discriminatory outcomes in healthcare, while AI-generated fake news has been deployed to manipulate financial markets. 
The World Economic Forum has highlighted AI’s potential to disrupt geopolitical stability and national security. The adaptability of AI lowers the barrier for executing large-scale attacks, making it more difficult to safeguard critical infrastructure. Experts advocate for secure AI practices, robust regulation, and international cooperation to mitigate these risks and ensure AI is harnessed responsibly. Source: TechXplore, “How AI poses a threat to national elections, health care and security,”   Available Online Top of Page Comparative Study Examines Frameworks for Measuring Disinformation Impact To better understand and counter disinformation, it is crucial to accurately measure its effects, yet methodologies for doing so vary widely. In a comparative study , the organization EU DisinfoLab  analyzed several frameworks used to assess the impact of disinformation, including the ABCDE Framework, the Disarm Framework, and the Impact-Risk Index.   The analysis  revealed that these frameworks adopt different approaches; some prioritize quantifying the reach of a disinformation campaign, while others focus on measuring the subsequent harm to public opinion and behavior. The study concludes that harmonizing these divergent methodologies is essential for developing a more comprehensive and standardized understanding of disinformation’s impact. Such work is critical for informing effective policy-making and counter-disinformation strategies, particularly as digital platforms and influence campaigns continue to grow in sophistication. The study calls for continued collaboration to refine these vital assessment tools. Source: EU DisinfoLab, “Decoding Disinformation Impact Frameworks and Indicators: a Comparative Study,”   Available Online Top of Page Moldova’s Institutional Resilience Blunts Russian Election Interference Efforts Russia’s comprehensive campaign to interfere in Moldova's recent elections was ultimately unsuccessful due to the resilience of the country's institutions and electorate. An Atlantic Council   article  explains how the Kremlin deployed operatives and AI-generated fake accounts to saturate Moldovan social media with disinformation targeting President Maia Sandu and her pro-European party.   Despite the scale  of this information operation, Moldovan authorities effectively countered the threat by uncovering illicit financing schemes and voter bribery efforts linked to the campaign. The Moldovan public demonstrated a strong commitment to democratic values by delivering decisive support for Sandu’s platform of European integration. The election outcome is seen as a significant indicator of Russia's declining influence in its near abroad, demonstrating that even well-resourced interference campaigns can be thwarted by vigilant institutions and an informed public. Source: Atlantic Council, “Putin’s Moldova election failure highlights Russia’s declining influence,”   Available Online Top of Page EU Initiative Bolsters Youth Digital Literacy to Counter Disinformation An initiative in Bosnia and Herzegovina aims to equip young people with the skills necessary to navigate the digital information landscape and counter disinformation. The EU Delegation to Bosnia and Herzegovina   reported  on its second Media and Digital Literacy Camp, which gathered youth for workshops on critical thinking, fact-checking, and assessing source credibility.   
The program featured guidance from experts in academia and from fact-checking platforms such as Raskrinkavanje, with a focus on identifying manipulated information. This initiative addresses the growing challenge of disinformation by fostering a more informed and engaged citizenry. It aligns with the EU's broader commitment, outlined in its annual human rights and democracy reports, to promote media freedom and combat the spread of false information. Such educational programs are considered a crucial component in strengthening democratic processes and ensuring information integrity in the digital age. Source: EU Delegation to Bosnia and Herzegovina, "Media and Digital Literacy Camp: Enhancing critical thinking and digital skills among youth," Available Online Top of Page [Glossary] The nature and sophistication of the modern Information Environment are projected to grow only more complex. Across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top of Page [Download Report]
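Several items in this review describe coordinated networks pushing near-identical messaging across many accounts, such as the 67-channel "Matushka" operation. As a purely illustrative aside, the sketch below shows one elementary analytical step often applied to such datasets: flagging identical text posted by different accounts within a short time window. The sample posts, field layout, and 60-second threshold are all assumptions for demonstration, not details drawn from the cited reports.

from collections import defaultdict
from datetime import datetime, timedelta
import hashlib

# Hypothetical observations: (account, ISO timestamp, post text).
posts = [
    ("acct_a", "2025-09-20T10:00:05", "Voting for X is a duty to protect traditional values"),
    ("acct_b", "2025-09-20T10:00:41", "Voting for X is a duty to protect traditional values"),
    ("acct_c", "2025-09-20T14:12:00", "Unrelated holiday greetings"),
]

WINDOW = timedelta(seconds=60)  # assumed co-posting window, not an empirical value

# Group posts by a hash of their normalized text.
by_content = defaultdict(list)
for account, ts, text in posts:
    digest = hashlib.sha256(text.lower().encode()).hexdigest()
    by_content[digest].append((datetime.fromisoformat(ts), account))

# Flag pairs of distinct accounts posting the same text within the window.
for items in by_content.values():
    items.sort()
    for (t1, a1), (t2, a2) in zip(items, items[1:]):
        if a1 != a2 and (t2 - t1) <= WINDOW:
            print(f"possible coordination: {a1} / {a2}, {(t2 - t1).seconds}s apart")

Real CIB analyses layer many further signals on top of such content matching, including account creation dates, posting cadence, and network structure.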

  • From Coup to Cult: The Transnational Construction of Power in West Africa’s Information Space – The Case of Burkina Faso

    The Sahel region has emerged as a key setting for significant evolutions in cognitive warfare, where the contest for its information space, although underreported, has global impact and relevance. A new analysis by Tim Stark uses a case study of Burkina Faso under Captain Ibrahim Traoré to provide a deep dive into these dynamics. It details how West African influence campaigns exploit the region’s fertile ground for narrative warfare—an environment where traditional oral storytellers have morphed into digital influencers—through the use of synthetic propaganda and hybrid operations, all in the context of a struggle by foreign powers to fill the strategic vacuum left by departing Western nations. Traoré’s trajectory from coup leader to mythologized icon of Pan-African resistance illustrates a broader transformation in the global information environment, whereby authoritarian leaders in fragile states can now project narratives across borders to build legitimacy while reshaping perceptions abroad. Stark concludes this is more than simple regime consolidation; it is a durable, transnational mythmaking effort that achieves global resonance by linking local grievances to potent anti-imperialist rhetoric, infiltrating Western timelines and directly influencing democratic discourse.   [ Download Full Report here ]

  • CRC Weekly: Cyber-based hostile influence campaigns 22nd-28th September 2025

    [Introduction] Hostile influence campaigns combine various aspects of cognitive warfare and are often accompanied by political, economic, or military strategies to achieve long-term strategic advantage. Our analysis focuses on cyber-based campaigns, with a particular emphasis on a key subset we define as Cyfluence. Cyfluence (cyber-attacks for influence) is the strategic and operational integration of cyber threat vectors with hostile information influence operations (IIOs). Cyfluence operations, conducted by state-sponsored or independent actors, represent an advanced form of cognitive warfare. They combine cyberattacks (e.g., hack-and-leak, digital identity hijacking, DDoS) with digital influence techniques (e.g., coordinated disinformation, misinformation, and malinformation amplified through inauthentic activity). The objectives of Cyfluence operations include the manipulation of public discourse, election interference, reputation abuse, and societal polarization. From the 22nd to the 28th of September 2025, we observed, collected, and analyzed endpoints of information related to these campaigns. The following report is a summary of what we regard as the main events from this period. [Report Highlights] A Kremlin-linked campaign succeeded in "infecting" major generative AI models with fabricated corruption claims targeting the Moldovan election. Over 200 leading figures, including Nobel Prize winners and AI experts from companies such as OpenAI, Google DeepMind, and Microsoft, have called for action to establish strict "red lines" for artificial intelligence. Georgia's ruling party, Georgian Dream, has intensified its use of disinformation and conspiracy theories to undermine public trust in the European Union. A BBC undercover investigation uncovered a secret Russian-funded network working to disrupt Moldova's September 28 parliamentary elections through coordinated disinformation campaigns. According to a report by DFRLab, a new online outlet called REST has emerged as a key pro-Kremlin disinformation vehicle. Iran's intelligence operations are targeting Scandinavian countries with greater intensity. A report by the Middle East Quarterly provides insights into their recent campaigns, with a focus on Denmark and Norway. [Weekly Review] REST Outlet Opens a New Front in Russia's Campaign Against Moldova Manufacturing a Crisis: Inside Russia's Information War on Moldova's Election Paid to Post: Anatomy of a Pro-Russian 'Digital Army' RT Pushes Kremlin Disinformation to Undermine Canadian Support for Ukraine Deconstructing Russia's Moldova 'Occupation' Narrative Kremlin Campaign Corrupts AI Models in Moldovan Election Influence Op Iran's Scandinavian Operations: A Permissive Environment for Espionage and Influence Georgia's Ruling Party Uses 'Traditional Values' Disinformation to Counter EU Pressure Experts Issue Global Call for AI 'Red Lines' to Prevent Mass Disinformation Expert Analysis: EU's Institutional Weakness is its Greatest Vulnerability to Foreign Meddling REST Outlet Opens a New Front in Russia's Campaign Against Moldova According to a recent analysis by DFRLab, a new online outlet named REST has emerged as another tool in the pro-Kremlin disinformation campaign targeting Moldova ahead of its September 2025 parliamentary elections. The publication details REST's connection to Rybar, a major sanctioned Russian propaganda operation, suggesting the new outlet is designed to evade sanctions and regenerate influence capabilities.
The connection is supported by technical evidence, including shared hosting infrastructure, identical server configurations, and leaked image metadata that directly references Rybar (an illustrative sketch of this kind of infrastructure-overlap check appears at the end of this report). The outlet's content, which gained millions of views on TikTok and was amplified across X and Telegram, is designed to embed disinformation into Moldova's digital environment. This activity represents a continuation of Russian influence operations, which employ a sophisticated toolkit including AI-generated deepfakes, mirror websites, and covert financing to undermine Moldova's pro-European course. The analysis also notes the translation of REST content into EU languages, indicating a multi-platform, cross-border effort to manipulate information. Source: DFRLab, J. Kubś & E. Buziashvili, 2025. Sanctioned Russian actor linked to new media outlet targeting Moldova. [online] Published 23 September 2025. Available at: https://dfrlab.org/2025/09/23/sanctioned-russian-actor-linked-to-new-media-outlet-targeting-moldova/ Top of Page Manufacturing a Crisis: Inside Russia's Information War on Moldova's Election The BBC reports on a Russian-funded network in Moldova whose goal was to influence the parliamentary elections on 28 September 2025. Participants were recruited through Telegram and asked to post pro-Russian content on TikTok and Facebook for a reported payment of approximately $170 per month. Organisers gave instructions, including guidance on using AI. The posts targeted President Maia Sandu and the ruling PAS party, with claims including election fraud, child trafficking, and forced LGBT policies. Participants were also asked to conduct unauthorized opinion polls; the results and secret recordings could later be used to cast doubt on the election outcome. According to the BBC, the network was coordinated by Alina Juc from Transnistria, who is reportedly linked to Russia. Funding reportedly came via the Russian state-owned Promsvyazbank, and there are also indications of ties to the oligarch Ilan Shor, who is based in Moscow and sanctioned by the US, EU, and UK. The NGO Evrazia was also named as involved. The BBC reports that the network operates at least 90 TikTok accounts, which have garnered over 23 million views; DFRLab estimates an even wider reach. Shor, Evrazia, and Juc did not respond to questions. Moldova's police view disinformation as the main method of interference. The Russian embassy denies the allegations. Source: BBC, O. Marocico, S. Mirodan & R. Ings, 2025. How a Russian‑funded fake news network aims to disrupt elections in Europe. [online] Published 21 September 2025. Available at: https://www.bbc.com/news/articles/c4g5kl0n5d2o Top of Page Paid to Post: Anatomy of a Pro-Russian 'Digital Army' The DFRLab report describes an operation with alleged links to Moscow that aims to influence Moldova's parliamentary elections on September 28, 2025. Individuals were reportedly paid to create inauthentic accounts and spread coordinated content. The network has been active since the fall of 2024 and has been monitored since January 2025. By August 2025, around 200 so-called "InfoLeaders" had been recruited. DFRLab analyzed 253 accounts across TikTok, Facebook, and Instagram. In total, the operation generated nearly 29,000 posts, reaching over 55 million views, more than 2 million likes, and hundreds of thousands of comments. While TikTok was the main platform, Facebook activity grew in mid-2025. The structure was hierarchical: Russian-speaking curators set daily tasks, hashtags, and quotas.
Recruits could advance from “communication activists” to InfoLeaders. The network utilized hashtags systematically, organized flash mobs, and instructed participants to personalize their content to make it appear more organic. The main narratives targeted President Maia Sandu and the ruling PAS party, focusing on alleged fraud, corruption, and criticism of EU and NATO integration. Politically, the operation shifted from supporting Ilan Shor’s “Victory Bloc” to promoting the “Moldova Mare” party, reusing earlier narratives under a new banner. Source:  DFRLab, V. Châtelet & V. Olari, 2025. Paid to post: Russia-linked ‘digital army’ seeks to undermine Moldovan election. [online] Published 24 September 2025. Available at: https://dfrlab.org/2025/09/24/paid-to-post-russia-linked-digital-army-seeks-to-undermine-moldovan-election/ Top of Page RT Pushes Kremlin Disinformation to Undermine Canadian Support for Ukraine A recent  analysis  by  DisinfoWatch  details another instance of Russian state media attempting to undermine Western support for Ukraine, this time targeting Canadian audiences. The report breaks down an RT article that falsely accuses Canada of funding "atrocities" and "neo-Nazi brigades." This campaign provides a clear case study of a broader Kremlin strategy to erode public support for Ukraine by reviving the well-worn "Ukraine-as-Nazi" trope and reframing legitimate aid as complicity in war crimes. The DisinfoWatch analysis highlights RT's use of classic disinformation techniques, including whataboutism, projection, and the distortion of facts, notably, ignoring ICC warrants against Russian officials. The campaign's objective is to emotionally manipulate audiences and delegitimize Canada's actual efforts, which focus on documenting war crimes in cooperation with the ICC. The report notes that the operation scores extremely high on disinformation risk, given its overt delivery by a recognized state-media asset, its reliance on single-source claims, and its repetition of established Kremlin propaganda narratives, making it a straightforward example of foreign information manipulation. Source: Publisher: DisinfoWatch, Author: DisinfoWatch, Title: RT Falsely claims “Canada keeps bankrolling Ukraine’s war crimes”, Date: 22 September 2025, Available at:  https://disinfowatch.org/disinfo/rt-falsely-claims-canada-keeps-bankrolling-ukraines-war-crimes/ Top of Page Deconstructing Russia's Moldova 'Occupation' Narrative An  article  by  DisinfoWatch  deconstructs a Russian disinformation narrative, circulated in the lead-up to Moldova's recent elections, which claimed the EU and NATO were preparing to "occupy" the country. The report traces the claim's origin to Russia's Foreign Intelligence Service (SVR), providing another clear example of a coordinated, state-level influence operation. The narrative, which cited NATO troop presence in the region as a pretext, was amplified without evidence by state media outlets like RT and TASS. The DisinfoWatch report highlights the campaign's clear strategic objectives: timed to coincide with the election, it sought to intimidate voters, delegitimize the country's pro-EU policies, and erode trust in Western partners. The analysis tracks the dissemination path from the SVR press bureau through major state media before being laundered into regional sites and social media ecosystems. 
By debunking the claim and contrasting it with the EU's actual policy of supporting democratic reforms, the report presents a concise case study on how unsubstantiated security threats are fabricated and deployed to create political instability. Source:  DisinfoWatch, 2025. EU is not “preparing to ‘occupy’ Moldova – Moscow” . [online] Published 23 September 2025. Available at: https://disinfowatch.org/disinfo/eu-is-not-preparing-to-occupy-moldova-moscow/ Top of Page Kremlin Campaign Corrupts AI Models in Moldovan Election Influence Op A recent analysis by NewsGuard has identified a Kremlin-linked disinformation operation. The campaign's name is "Storm-1516," and it targeted Moldova's recent parliamentary elections. The campaign represents a continuation of established malign influence efforts, focusing on disseminating false corruption claims against the incumbent pro-European government to undermine the democratic process. Utilizing a vast propaganda network, the operation achieved considerable reach, drawing over 17.7 million views on platforms like X. This saturation level underscores the scale of the effort directed at a country with a population of only 2.4 million. The investigation’s key finding, however, elaborates on an evolving tactic: the deliberate infection of Generative AI models. NewsGuard found that when prompted about the campaign's false narratives, major AI chatbots reproduced the disinformation more than one-third of the time. This successful compromise of widely used AI tools demonstrates a new and dangerous vector for FIMI campaigns. The operation highlights an escalation in tactics used to influence key elections, in this case, aiming to derail Moldova's European trajectory and reassert Russian influence in the region. Source: NewsGuard, E. Maitland, A. Lee & M. Roache, 2025. New Kremlin‑linked influence campaign targeting Moldovan elections draws 17 million views on X and infects AI models. [online] Published 26 September 2025. Available at: https://www.newsguardrealitycheck.com/p/new-kremlin-linked-influence-campaign Top of Page Iran's Scandinavian Operations: A Permissive Environment for Espionage and Influence An  analysis  published by  Eurasia Review  details the long-standing and varied intelligence operations conducted by the Islamic Republic of Iran (IRI) in Denmark and Norway. The report provides further examples of Iran's operational playbook, highlighting how the region's advanced industries, universities, and politically active diaspora make it an attractive, yet often overlooked, target for hostile state activities. The findings reinforce the understanding of Iran's global intelligence reach and its use of multifaceted tactics. The analysis outlines a range of operations, including assassination plots against dissidents, cyber espionage targeting research institutions, surveillance conducted through diplomatic and religious channels, and the use of local criminal networks for kinetic attacks. Crucially, it places these activities within the context of Iran’s strategic alignment with Russia and China, citing the Swedish Security Service's assessment that these states are collaborating to reshape the global order. The report concludes that a fragmented and weak response from Scandinavian governments has created a low-risk, permissive environment, effectively emboldening Tehran's intelligence services. Source: Eurasia Review, A. Khoshnood, M. Norell & A. M. Khoshnood, 2025. 
A growing security threat: Iranian intelligence operations in Scandinavia (Part One: Denmark and Norway) – Analysis. [online] Published 25 September 2025. Available at: https://www.eurasiareview.com/25092025-a-growing-security-threat-iranian-intelligence-operations-in-scandinavia-part-one-denmark-and-norway-analysis/ Top of Page Georgia's Ruling Party Uses 'Traditional Values' Disinformation to Counter EU Pressure An article from The Jamestown Foundation's Eurasia Daily Monitor details the intensified use of disinformation by Georgia's ruling party, Georgian Dream, as it faces EU pressure to reverse democratic backsliding. The analysis outlines how the party is weaponizing anti-LGBT conspiracy theories, falsely framing EU democratic norms as an imposition of "Western decadence" and a threat to national sovereignty. This narrative serves as a political tool to rally the party's conservative base and deflect blame for potential EU sanctions resulting from its own controversial policies. Despite this top-down campaign, the report highlights polling data showing that public support for EU integration remains overwhelmingly high at 78 percent. This suggests the government's narrative has failed to shift majority opinion on Georgia's geopolitical orientation. However, the continued promotion of these divisive conspiracies through pro-government media risks further polarizing society. The strategy illustrates a case of a state actor using value-based disinformation to undermine a supranational body and erode trust in democratic processes, even when public sentiment is resistant. Source: Jamestown Foundation, B. Chedia, 2025. Georgian Dream weaponizes LGBT‑related conspiracy theories. [online] Published 23 September 2025. Available at: https://jamestown.org/program/georgian-dream-weaponizes-lgbt-related-conspiracy-theories/ Top of Page   Experts Issue Global Call for AI 'Red Lines' to Prevent Mass Disinformation In a significant public call for urgent regulation, a coalition of over 200 leading figures, including Nobel laureates and prominent experts from OpenAI and Google DeepMind, has signed an open letter demanding that governments establish strict "red lines" for artificial intelligence. Released to coincide with the UN General Assembly session, the statement warns that unregulated AI poses severe dangers, explicitly highlighting its potential to enable large-scale disinformation campaigns and manipulate public opinion, thereby undermining democratic societies. The letter further details risks such as the loss of meaningful human control as AI systems, some of which have already exhibited deceptive behavior, are granted increasing autonomy. The signatories stress that voluntary commitments from developers are insufficient. They urge governments to act swiftly to create a binding international agreement on these "red lines" by the end of 2026. This framework would aim to hold AI providers accountable for preventing foreseeable harmful outcomes, directly addressing the growing threat of AI-powered foreign information manipulation and influence. Source: The Signatories of the "AI Red Lines" Letter, 2025. Global Call for AI Red Lines. [online] Published September 2025.
Available at: https://red-lines.ai/ Top of Page Expert Analysis: EU's Institutional Weakness is its Greatest Vulnerability to Foreign Meddling In an interview published by Follow the Money (FTM), democracy expert Luise Quaritsch elaborates on the European Union's systemic vulnerability to foreign malign interference, framing it as a component of a broader hybrid warfare strategy. The analysis highlights persistent Russian tactics, including the creation of "doppelganger" websites and covert influence platforms such as "Voice of Europe", as examples of a low-level, constant stream of interference designed to exploit societal divisions. These operations are amplified by other actors and across platforms where malign content can gain traction. Quaritsch argues that the critical issue is not a lack of tools but the EU's failure to deploy its existing powers effectively. The bloc's complex governance and interconnected member state policies create numerous institutional and physical access points for foreign actors to exploit. This means that a vulnerability in one member state poses a threat to the entire Union. While new legislative efforts, such as transparency registers, are being discussed, the interview emphasizes that the priority should be securing these inherent structural weaknesses, arguing that the EU is currently failing to counter the threat effectively. Source: Follow the Money (FTM), A. Keepe, 2025. EU has the power to fight foreign meddling – but isn't using it, democracy expert says. [online] Published 23 September 2025. Available at: https://www.ftm.eu/articles/interview-luise-quaritsch-eu-foreign-meddling Top of Page [CRC Glossary] The Cyfluence Research Centre has relaunched the CRC Glossary. This initiative aims to serve as a shared lexicon of both foundational and emerging terms that shape the field. To this end, the Glossary is designed to be a continually updated resource, with new entries added weekly. We see this as a collaborative project and strongly encourage input from the expert community. The goal is to reduce the problem of ambiguous or conflicting terminology that can hinder collaborative work and effective communication with the general public. We invite you to submit additions, changes, or corrections via the form on our website. [Download]
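The REST attribution summarized earlier in this report rests on technical overlap: shared hosting, identical server configurations, and identifying strings in image metadata. Purely to illustrate that investigative logic, the sketch below performs two elementary checks of this kind. The domain names and the file name are hypothetical placeholders; real attributions draw on passive-DNS history, TLS certificate reuse, and far richer fingerprinting than this.

import socket
from PIL import Image, ExifTags  # Pillow; pip install Pillow

# Hypothetical suspect domains, not the real infrastructure from the report.
suspect_domains = ["example-outlet-one.com", "example-outlet-two.com"]

# Check 1: do the domains currently resolve to overlapping IP addresses?
resolved = {}
for domain in suspect_domains:
    try:
        _, _, ips = socket.gethostbyname_ex(domain)
        resolved[domain] = set(ips)
    except socket.gaierror:  # domain does not resolve
        resolved[domain] = set()

shared = set.intersection(*resolved.values())
print("shared IPs:", shared or "none")  # overlap hints at common hosting

# Check 2: does image EXIF metadata carry identifying strings (software,
# author) that recur across outlets?
def exif_strings(path):
    with Image.open(path) as img:
        return {ExifTags.TAGS.get(tag, tag): value
                for tag, value in img.getexif().items()}

# e.g. exif_strings("leaked_banner.jpg") - the file name is hypothetical -
# might reveal an editing tool or author name reused across sites.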

  • Influence in Czechia: Digital Battles Ahead of the 2025 Elections

    On 3–4 October 2025, the Czech Republic will hold parliamentary elections. Since Russia's invasion of Ukraine, the Czech government has supplied weapons, training, and financial support to Kyiv. President Petr Pavel has consistently argued for continued backing. Czechia is also an important EU economy, closely tied to European supply chains in industry and energy. A change in government could affect both its Ukraine policy and its role within the EU. This election follows other recent cases where foreign information manipulation and interference (FIMI) played a visible, and at times decisive, role. In Romania, digital campaigns contributed to the annulment of the presidential vote (for a deep-dive analysis, see our report here ). In Moldova, pro-EU parties won the parliamentary elections in September 2025, despite significant interference (for more information, see our blog here ). Now, it is Czechia's turn to face similar challenges to its democratic processes and discourse. The Czechia Country Election Risk Assessment (CERA) i provides a detailed examination of how hostile influence networks operate within the Czech information space, encompassing coordinated Telegram ecosystems, disinformation portals, and financing structures. It also identifies structural vulnerabilities such as low trust in institutions, susceptibility to conspiracy narratives, and gaps in regulation. Taken together, these findings give one of the clearest pictures of the pressures shaping the 2025 elections. The report can be found here . Political Context The contest is dominated by three main actors: The populist ANO movement of Andrej Babiš. The governing conservative coalition SPOLU, led by Prime Minister Petr Fiala. The far-right SPD of Tomio Okamura. Yet the situation is more complex. Smaller political forces, from the Pirates to protest parties like Stačilo! or the Motorists, exist on the fringes. This fragmentation is likely to complicate coalition-building and raises the stakes for every percentage point that digital influence campaigns might shift. ii External Influence Networks Russia remains the central external actor. For years, Moscow has invested in disinformation, cyber operations, and covert funding. Following the EU's ban on channels like Sputnik in 2022, activity shifted to the digital domain. Telegram channels, such as neČT24, distribute translated Kremlin content daily, while the Pravda network aggregates posts from more than 7,000 channels into Czech debates. iii Parallel structures, such as Voice of Europe, operated from Prague with Russian financing. China is less visible but still relevant, particularly through TikTok. Just days before the election, investigators uncovered around 300 fake TikTok accounts spreading pro-Russian synthetic propaganda. These profiles generated millions of views weekly, surpassing the combined reach of the official accounts of leading Czech politicians. iv (A toy illustration of this reach comparison appears after the footnotes at the end of this post.) Figure 1 – Potentially inauthentic TikTok accounts, identified by The Center for Online Risk Research Campaigns and Platforms The Czech information environment is hybrid. Traditional outlets, such as Seznam Zprávy or public service broadcasting, enjoy high levels of trust, but alongside them, an ecosystem of problematic portals and Telegram channels operates. From Parlamentní listy to fringe groups, narratives are orchestrated and mutually amplified. Digital mobilization often spills into physical actions: protests under the Stačilo!
banner directly channel narratives first spread on Telegram into the streets. Figure 2 – Sources of news, courtesy of FDEI project v Narratives and Their National Resonance Hostile influence campaigns (HICs) in Czechia revolve around dominant narratives: electoral fraud, delegitimization of security institutions, anti-Ukraine frames, and anti-EU/anti-Western frames. vi Their resonance derives from deep-rooted domestic fault lines. Mistrust of electoral integrity runs deep: over half of Czech citizens believe the government could manipulate election results. vii Anti-Ukraine narratives play on war fatigue and economic hardship, while many consider support for Kyiv excessive. Anti-EU narratives also resonate strongly: 54% of the population views EU decisions critically, making claims of an alleged "Brussels dictate" highly effective. viii These narratives are not simply imported; they exploit existing anxieties, reinforcing them until they erode trust in the country's democratic trajectory. Impact Assessment The impact of HICs is less about measurable vote shifts than about long-term erosion. The CERA report highlights three risks: The normalization of mistrust: if 54% believe fraud is possible, the legitimacy of any future government is undermined. The discouragement of participation: repeated claims of corruption and stolen elections serve to demobilize pro-European voters. The amplification of social and political fragmentation: smaller protest parties benefit disproportionately from digital influence efforts, pushing fringe positions into mainstream debates. Together, these dynamics create an electoral environment in which populist and pro-Russian forces gain strength without a single ballot being hacked. ix Figure 3 – Key Issues Shaping Voter Sentiment in the Czech Republic, courtesy of FDEI project x Responses and Limitations Authorities have sought to push back: the Ministry of the Interior has launched public information campaigns, the BIS intelligence service monitors disinformation networks, and cooperation with TikTok has been initiated. xi Yet structural deficits remain. The Digital Services Act (DSA), which obliges platforms to monitor manipulative content, ensure algorithmic transparency, and remove harmful material swiftly, has been in force at the EU level since 2024. But the Czech Republic has been slow to implement the framework nationally. xii As a result, a critical tool for curbing FIMI remains blunt. Election authorities face similar limits: their resources are designed for physical ballot management, not real-time counter-disinformation. Coordination across agencies is often fragmented, with warnings issued in parallel rather than centrally. Conclusion The Czech parliamentary elections are more than a domestic event. They are another link in a chain of growing friction between domestic political forces within the EU and rival geopolitical powers. Digital influence campaigns aim to weaken pro-European actors, empower populist currents, and challenge Czechia's Western orientation. Resilience in the information space is therefore crucial. Platforms must be held accountable, opaque Telegram networks cannot remain blind spots, and state institutions need a coordinated strategic communication (StratCom) approach. Clear rules on political financing are also essential to prevent covert external funding.
The recent elections in Romania, Moldova, and the Czech Republic confirm that digital information manipulation is now a persistent, structural challenge. Europe's response in building resilience and implementing countermeasures will determine whether democratic trust can withstand the mounting pressure.  [Footnotes:] [i] FIMI Response Team (FRT‑24), Debunk.org, EU DisinfoLab, GLOBSEC, Institute for Strategic Dialogue (ISD), 2025. Czechia: Country election risk assessment. [online] Available at: https://fimi-isac.org/wp-content/uploads/2025/09/FRT-24_Czechia-Country-Election-Risk-Assessment-CERA_FINAL.pdf [ii] Ibid. pp. 9-10 [iii] Ibid. pp. 28-32 [iv] Radio Prague International, Jakub Ferenčík, 2025. Russian propaganda is spreading on Czech TikTok ahead of elections. [online] Published 30 September 2025. Available at: https://english.radio.cz/russian-propaganda-spreading-czech-tiktok-ahead-elections-8864264 [v] FIMI Response Team (FRT‑24), Debunk.org, EU DisinfoLab, GLOBSEC, Institute for Strategic Dialogue (ISD), 2025. Czechia: Country election risk assessment. [online] p. 15. Available at: https://fimi-isac.org/wp-content/uploads/2025/09/FRT-24_Czechia-Country-Election-Risk-Assessment-CERA_FINAL.pdf [vi] Ibid. pp. 20-24 [vii] Ibid. p. 21 [viii] Ibid. p. 19 [ix] Ibid. pp. 18-22 [x] Ibid. p. 14 [xi] Radio Prague International, Jakub Ferenčík, 2025. Russian propaganda is spreading on Czech TikTok ahead of elections. [online] Published 30 September 2025. Available at: https://english.radio.cz/russian-propaganda-spreading-czech-tiktok-ahead-elections-8864264 [xii] FIMI Response Team (FRT‑24), Debunk.org, EU DisinfoLab, GLOBSEC, Institute for Strategic Dialogue (ISD), 2025. Czechia: Country election risk assessment. [online] pp. 46-47. Available at: https://fimi-isac.org/wp-content/uploads/2025/09/FRT-24_Czechia-Country-Election-Risk-Assessment-CERA_FINAL.pdf
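As referenced in the External Influence Networks section above, the TikTok finding is at bottom a simple reach comparison: aggregate weekly views of the flagged inauthentic accounts set against those of official politician accounts. The toy sketch below makes that arithmetic explicit; every figure is an invented placeholder, not data from the cited investigation.

# Toy reach comparison; all numbers are invented placeholders.
inauthentic_weekly_views = [12_000, 48_500, 3_200] * 100  # ~300 fake accounts
official_weekly_views = {
    "politician_a": 410_000,  # hypothetical official accounts
    "politician_b": 350_000,
    "politician_c": 290_000,
}

fake_total = sum(inauthentic_weekly_views)
official_total = sum(official_weekly_views.values())

print(f"inauthentic network: {fake_total:,} views/week")
print(f"official accounts:   {official_total:,} views/week")
print(f"ratio: {fake_total / official_total:.1f}x")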

  • CRC Spotlight: Exposing Digital Hostile Influence with Honeypots

    A recent X poll regarding a water crisis in Iran displayed voting irregularities indicative of Coordinated Inauthentic Behaviour (CIB) attributed to regime-backed actors. In this CRC Spotlight we use this incident as a case study to explore a new perspective on tactics for countering Foreign Information Manipulation & Interference (FIMI). We examine how interactive online content, such as polls on controversial topics, can provide defenders and researchers alike with an intelligence windfall: by baiting threat actors into action, knowledge of their Tactics, Techniques, and Procedures (TTPs) can be captured and leveraged as part of a defensive strategy. While there is no indication this poll was a deliberate trap, it does suggest further study is warranted on the potential for concepts such as an 'Influence Honeypot' to be incorporated into existing defensive frameworks, such as DISARM Blue. [ Download Full Report here ]
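How might the "voting irregularities" in the poll above actually surface in data? As a purely illustrative sketch under invented assumptions, the snippet below applies a naive burst test to per-minute vote counts; neither the series nor the z-score cutoff comes from the incident or the report.

from statistics import mean, stdev

# Hypothetical per-minute vote counts for a single poll option.
votes_per_minute = [12, 9, 14, 11, 10, 13, 240, 255, 12, 11, 9, 10]

mu, sigma = mean(votes_per_minute), stdev(votes_per_minute)
THRESHOLD = 2.0  # assumed cutoff; bursts inflate a naive stdev, so a real
                 # detector would use a robust baseline (median/MAD) instead

for minute, count in enumerate(votes_per_minute):
    z = (count - mu) / sigma
    if z > THRESHOLD:
        print(f"minute {minute}: {count} votes (z={z:.1f}) - possible inauthentic burst")

Flagged bursts would then be cross-referenced with account-level signals (age, follower patterns, prior activity) before any CIB attribution is made.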

bottom of page