
  • Influence in Moldova: Coordinated Campaigns Ahead of Critical Elections

    On 28 September 2025, Moldova will hold parliamentary elections. For a country of just 2.5 million people, the stakes are unusually high. The election will determine whether Moldova continues on its path toward the European Union or whether Moscow succeeds in reasserting its influence. This contest is no longer fought solely at the ballot box, but increasingly across digital arenas where political majorities are shaped and reshaped. The elections have also drawn attention from the EU itself, reflecting their broader significance for European security. On 10 September 2025, the European Parliament adopted a resolution condemning Russian hybrid interference in Moldova and calling for strengthened EU support to safeguard the electoral process. Commissioner Kos reinforced this stance in a speech announcing the readiness of EU Hybrid Rapid Response Teams to assist Moldova’s counter-FIMI efforts. With only a few days left before the vote, a recent report, Moldova: Country Election Risk Assessment (CERA), has gained relevance. Compiled by analysts from several organizations (DFRLab, Alliance4Europe, Debunk.org, and EU DisinfoLab) as part of the FIMI Defenders for Election Integrity Project (FDEI), this comprehensive assessment outlines the current political landscape, identifies key actors, and explores the various influence operations affecting Moldova’s information space. Given the significant impact of digital manipulation in the final stretch before elections, showcased so clearly by the recent Romanian elections, the Moldova CERA report warrants close attention. This blog does not attempt to summarize all 80+ pages but highlights the central operations, structures, and narratives that may prove decisive in the upcoming vote.  Political Context  Moldova’s domestic politics are sharply divided. President Maia Sandu and her pro-European PAS party are pushing firmly toward EU integration, while pro-Russian forces gather around fugitive oligarch Ilan Șor, who has long been associated with covert financing and orchestrated protest actions. [i] Between these poles stand smaller parties that present themselves as pro-European yet often amplify narratives originating from Kremlin-linked channels. [ii] This polarized environment provides fertile ground for external influence.  Figure 1: Foreign actors operating in Moldova’s information landscape, courtesy of FDEI [iii]  External Influence Networks  The FDEI report, alongside others, shows that Russian-linked structures deliberately target Moldova’s information ecosystem. Operation Matryoshka produces highly polished videos designed to resemble neutral think-tank analysis, but which consistently frame EU membership as a threat. Distribution runs primarily through Telegram, X, and Bluesky, targeting Russian- and Romanian-speaking communities within Moldova. [iv] The Pravda network, also known as “Portal Kombat,” functions as a redistribution hub. Content from Russian state media and Kremlin-affiliated Telegram channels is translated and republished across dozens of websites posing as local outlets. Activity peaks coincide with sensitive moments such as U.S. Secretary of State Antony Blinken’s visit or the second round of the presidential elections. [v] Storm-1516, a Russian digital influence network, follows an imitation strategy. It builds cloned domains of legitimate news outlets and fills them with fabricated stories. One Moldovan case accused Sandu of embezzling foreign aid, allegedly citing Mayor Ion Ceban.
The article was entirely false but looked authentic, complete with stolen bylines, and circulated widely on Telegram. [vi] Independent assessments, including VIGINUM, note the tactical overlap with Russia’s “Doppelgänger” operation but classify Storm-1516 as a separate network. Anonymous Telegram channels serve as a primary gateway for Kremlin narratives to reach Moldova. They aggregate, synchronize, and amplify manipulated content from the above networks, ensuring alignment with political events and pushing narratives into public debate. [vii]  Campaigns and Platforms  These networks converge in concrete campaigns. Following the ban on several pro-Russian TV stations, Moldova24 (MD24) emerged, hosted in Russia and backed by at least 16 mirror domains. It spreads content simultaneously across TikTok, Telegram, Instagram, and YouTube. The U.S. platform Cameo was also exploited: purchased celebrity greetings were re-captioned to suggest calls for Sandu’s resignation, then circulated on Moldovan Telegram and Facebook channels as supposed “Western voices.” [viii] The Șor network illustrates the link between digital and physical mobilization. Under the hashtag #STOPUE, Telegram bots recruited referendum opponents. Participants uploaded ID documents and were paid to share content or join protests, with transactions routed through sanctioned Russian banks and the MIR payment system. This model was expanded via the Taito app, where protesters registered, signed contracts, and received up to $3,000 per month, four times Moldova’s average salary. These funds sustained the so-called “tent protests,” which appeared spontaneous but were in fact coordinated and financed. [ix]  Artificial intelligence is also part of the toolkit. A deepfake video depicted Electoral Commission chair Angelica Caraman allegedly admitting to foreign interference. It spread on Telegram and was later amplified by Russian Foreign Ministry spokesperson Maria Zakharova, illustrating how anonymous digital manipulation can merge with official diplomacy. [x]  Narratives and Their Dynamics  The report identifies consistent narrative lines. At the meta-level, Anti-EU, Anti-West, Anti-Establishment, and pro-Russian frames dominate. [xi] Beneath them operate sub-narratives: that EU membership erodes sovereignty, [xii] that NATO and the EU bring war and chaos, that Sandu is corrupt and incompetent, that democracy is hollow, and that elections are rigged anyway. [xiii] The impact does not stem from single stories but from cumulative reinforcement across platforms and formats. [xiv]  Figure 2: Meta-Narratives and Sub-Narratives, courtesy of FDEI [xv]  Assessing the Impact  Whether such campaigns ultimately shift votes is difficult to prove. The report is cautious, stressing the limited measurability of direct effects. Yet it warns by comparison: in Romania, similar combinations of disinformation, covert financing, and orchestrated protests contributed to elections being annulled. [xvi] The risk in Moldova lies less in one decisive fake than in the steady erosion of trust and the demobilization of pro-European voters.  Figure 3: Meta-Narratives and Sub-Narratives, courtesy of FDEI [xvii]  Responses and Their Limits  Moldovan authorities have started to respond. In 2025, the electoral commission refused to register the pro-Russian “Victory” bloc after tracing its funding to Șor’s structures.
The Supreme Security Council now designates electoral interference and illicit financing as national security threats, while foreign activists linked to destabilization are denied entry. External initiatives have also arrived, including programs such as M-MIIND, introduced in 2024 to reinforce independent media and experiment with approaches to countering foreign information manipulation and interference (FIMI). [xviii] At the same time, Moldova’s institutions lack the resources to counter complex digital campaigns in real time, a weakness highlighted by the report. [xix] This is not unique to Moldova: many Western European countries face the same challenge, struggling to match the speed and scale of hostile influence campaigns (HICs).  Conclusion  Moldova’s election serves as another reminder that foreign interference has become an integral part of modern geopolitics rather than an episodic disruption. The country is not unique in facing these tactics, but its small size, polarized politics, and proximity to the EU–Russia fault line make it a prime case. The precedent of Romania, where similar methods of disinformation, covert financing, and orchestrated protests contributed to the annulment of the presidential election, shows how fragile electoral integrity can be when external manipulation intersects with domestic fragmentation. A CRC report on Romania’s annulled elections provides a detailed analysis of the influence efforts that fueled the situation. In the context of the upcoming Moldovan elections, this case study offers valuable lessons and sheds light on dynamics that are highly relevant for Moldova, where external interference threatens democratic processes. What the Moldova: Country Election Risk Assessment adds to that analysis is a systematic mapping of the evolving threat environment. By going beyond isolated incidents, it identifies the networks, narratives, and financial flows that drive hostile influence, and shows how digital propaganda, mobilization, and covert funding reinforce each other. This makes the report valuable not only for understanding Moldova but as a reference point for analyzing how HICs evolve across Europe.  [Footnotes] [i] FDEI for Election Integrity (FIMI-ISAC), Digital Forensic Research Lab (DFRLab), Alliance4Europe, Debunk.org, EU DisinfoLab, 2025. Country Report Moldova: Risk Assessment (Jan 2025 – Jan 2027), pp. 62-64. [online] Available at: https://fimi-isac.org/wp-content/uploads/2025/09/Country-Report-Moldova-Risk-Assessment.pdf [ii] Ibid., pp. 8-13 [iii] Ibid., p. 21 [iv] Ibid., p. 20 [v] Ibid., p. 20 [vi] Ibid., pp. 23-24 [vii] Ibid., p. 21 [viii] Ibid., p. 23 [ix] Ibid., pp. 62-64 [x] Ibid., p. 24 [xi] Ibid., pp. 31-42 [xii] Ibid., p. 45 [xiii] Ibid., pp. 45-52 [xiv] Ibid., pp. 56-59 [xv] Ibid., p. 30 [xvi] Ibid., p. 27 [xvii] Ibid., p. 33 [xviii] Ibid., pp. 70-72 [xix] Ibid., pp. 72, 82

  • CRC Weekly: Cyber-based hostile influence campaigns 8th-14th September

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. [Report Highlights] Sophisticated AI-generated images and videos left 73 percent of surveyed Americans believing or unsure about false Trump-Epstein narratives, showcasing synthetic media’s power. - NewsGuard Leaked documents challenge the perception of China’s digital control as purely ideological, revealing that market forces drive its global surveillance and influence industry. - Wired Political violence following Charlie Kirk’s killing creates fertile ground for Russia and China to intensify disinformation and destabilize American institutions. - FIMI-ISAC Moldova’s weak media regulations allow banned pro-Russian outlets to persist, creating vulnerabilities exploited by foreign information manipulation and interference. - FIMI-ISAC Social media users exploit video editing glitches to falsely claim AI generation, propagating conspiracy theories around Trump’s recent address. - NewsGuard Moscow is systematically weaponizing international forums, proclaiming peace and human rights while actively undermining them through aggressive actions and pervasive disinformation campaigns. - EUvsDisinfo Weak platform moderation combined with anonymous channels allows foreign actors to weaponize Telegram for election disruption and societal division across EU member states. - EUvsDisinfo [Weekly Review] Foreign and Domestic Actors Weaponize Disinformation in Philippine Midterms Telegram’s Strategic Underreporting Challenges EU Digital Governance and Information Integrity Russia and China Target Czechia’s Elections with Advanced Disinformation Campaigns Political Violence Amplifies U.S. Vulnerability to Foreign Disinformation Operations Moldova’s Vulnerable Information Landscape Faces Escalating Foreign Influence Operations Rapid Disinformation Spread: False AI Claims Distort Trump’s Post-Assassination Address AI Chatbots Amplify Misinformation During Breaking News Crises NewsGuard Index: Persistent American Vulnerability to Digital Misinformation Continues Leaked Files Expose China’s Global Export of Disinformation and Surveillance Capabilities Kremlin’s UNGA Performance: A Masterclass in Deceptive Peacemaking and Reality Distortion Disinformation Blunts Accountability: Russia’s Pattern of Denying War Crimes Medvedev, Dmitriev Exploit Kirk’s Killing to Blame Left, Undermine Kyiv Support Foreign and Domestic Actors Weaponize Disinformation in Philippine Midterms Doublethink Lab has released an analysis exploring how foreign information manipulation and disinformation are deeply embedded in the Philippines’ electoral landscape, significantly impacting its political environment and democratic integrity. Chinese-linked influence operations, deploying PR firms and “keyboard warrior” networks, actively amplified pro-Duterte narratives and undermined trust in democratic processes during the 2025 midterm elections. These sophisticated campaigns utilize AI-generated images, viral conspiracy theories, and coordinated social media activity across platforms like X, Facebook, and TikTok.
Operations blend domestic political messaging, such as the #BringPRRDHome hashtag, with pro-PRC content related to the West Philippine Sea, strategically discrediting certain candidates while supporting others. The pervasive nature of FIMI extends to foreign interference cases, exemplified by the disqualified mayor Alice Guo, a Chinese national allegedly linked to criminal activities and strategic locations. Election monitors emphasize the severe threat these manipulations pose to democracy and foreign policy, advocating for stronger safeguards and collaborative counter-FIMI strategies. Initiatives like the Shadow FIN network and the “Bayanihan” volunteer model demonstrate a convergent, multi-stakeholder approach to building resilience against hostile information operations and securing the digital ecosystem, ensuring informed democratic participation amidst evolving threats. Source: Doublethink Lab, September 2025, available online at: https://medium.com/doublethinklab/bayanihan-for-eleksyon2025-philippine-midterms-monitoring-263ce456cb97 Top of Page Russia and China Target Czechia’s Elections with Advanced Disinformation Campaigns A report by FIMI-ISAC reveals that Czechia’s 2025 presidential elections face significant Foreign Information Manipulation and Interference (FIMI) risks, primarily from Russia and China. These actors exploit polarizing issues such as the war in Ukraine, energy security, migration, and EU relations, aiming to deepen social divides and erode trust in democratic institutions. Problematic outlets, including neČT24, Telegram-based ecosystems, and foreign-sponsored platforms like Voice of Europe, amplify divisive narratives, disseminating anti-refugee rhetoric and claims of electoral fraud, some of which feature synthetic audio and AI-generated content targeting President Petr Pavel. While Czechia possesses resilient electoral infrastructure, its information space remains vulnerable due to delayed Digital Services Act implementation, limited state capacity to analyze malign influence, polarized political discourse, and domestic actors amplifying foreign narratives. Countering these evolving threats requires a comprehensive, whole-of-society response, emphasizing closer cooperation across state institutions, civil society, independent media, and EU-level mechanisms, alongside continuous monitoring, proactive risk communication, and investment in institutional capabilities. This situation underscores the persistent challenge of safeguarding democratic integrity against sophisticated digital hostile influence operations. Source: FIMI-ISAC, 2025. Foreign Disinformation Threats to Czechia’s 2025 Presidential Elections. Available online at: https://fimi-isac.org/wp-content/uploads/2025/09/FRT-24_Czechia-Country-Election-Risk-Assessment-CERA_FINAL.pdf Top of Page Political Violence Amplifies U.S. Vulnerability to Foreign Disinformation Operations The killing of conservative commentator Charlie Kirk in Utah exposes deepening U.S. political violence and social discord, creating opportunities for foreign adversaries like Russia and China to exploit societal rifts. Newsweek reports that these nations are accused of leveraging such divisions through disinformation campaigns to inflame tensions and undermine American governance. The article explains how this incident intensifies focus on political violence and misinformation, providing foreign actors with a fresh flashpoint to manipulate public perception and destabilize the political landscape.
Utah Governor Spencer Cox warned that bots from Russia and China actively encourage violence and misinformation, highlighting the intersection of domestic unrest, social media amplification, and foreign exploitation. China has previously exploited U.S. social crises, including the January 6 Capitol riot, by amplifying divisive narratives and using networks like “Spamouflage” to impersonate voters and spread discord, increasingly with AI-generated content. Similarly, Russia’s 2016 election interference campaigns employed fake personas and troll farms to exacerbate racial, ideological, and cultural divisions, tactics that have persisted. Modern technology, including AI, enables rapid spread of these false narratives across platforms like X, Facebook, and Telegram, targeting polarized audiences. Kirk’s murder underscores how moments of unrest are utilized to weaken U.S. cohesion and credibility globally. Source: Newsweek, Amir Daftari, Sep 2025, available online at: https://www.newsweek.com/charlie-kirk-china-russia-oppourtunity-us-division-2128734 Top of Page Moldova’s Vulnerable Information Landscape Faces Escalating Foreign Influence Operations FIMI-ISAC’s Country Election Risk Assessment identifies significant threats to Moldova’s September 2025 parliamentary elections, primarily from Russian-led hybrid operations involving extensive disinformation and foreign information manipulation. These efforts aim to derail Moldova’s pro-EU path by spreading anti-EU, anti-Western, and anti-government narratives, often through the PENA model, labeling President Maia Sandu as a Western puppet, and normalizing electoral fraud. The cyfluence landscape is increasingly complex, utilizing AI-generated deepfakes, forged documents, and sophisticated cross-platform campaigns for amplification. Moldova’s media environment, marked by weak regulations and the resurgence of banned pro-Russian outlets via mirror sites and social media, exacerbates these vulnerabilities. Covert financing, bots, and encrypted messaging applications further facilitate protest organization and propaganda dissemination, complicating attribution. The report assesses the overall risk to electoral integrity as medium to high, predicting intensified activity as election day approaches, and urges proactive measures including inter-agency coordination, digital platform partnerships, civil society monitoring, and robust public communication to safeguard democratic processes. Source: FIMI-ISAC. Escalating Disinformation Threatens Moldova’s 2025 Elections. Available online at: https://fimi-isac.org/wp-content/uploads/2025/09/Country-Report-Moldova-Risk-Assessment.pdf Top of Page Rapid Disinformation Spread: False AI Claims Distort Trump’s Post-Assassination Address NewsGuard definitively debunks widespread social media claims alleging President Donald Trump’s Oval Office address, delivered after conservative commentator Charlie Kirk’s assassination, was an AI-generated deepfake. Following Kirk’s Sept. 10 killing, Trump posted a speech on Truth Social, prompting anti-Trump users to highlight unnatural hand movements and background leaf shifts as signs of artificial intelligence. These false assertions rapidly propagated, with some suggesting the purported AI indicated Trump’s poor health or even implicated his administration in Kirk’s death, potentially as a diversion from the Jeffrey Epstein case.
However, analysis by AI detection software like Hive and cybersecurity experts from GetReal Labs found no evidence of AI generation in either the video or audio. The observed irregularities are attributed to a common video editing technique known as a “morph cut,” designed to seamlessly join segments and remove verbal errors, which misinformed users misinterpreted as AI glitches. This incident critically illustrates how visually ambiguous digital content can be weaponized as disinformation, quickly disseminated to spread politically charged conspiracy theories and erode public trust, directly impacting the Cyfluence landscape. Source: NewsGuard, Sep 11, 2025, available online at: https://www.newsguardrealitycheck.com/p/trumps-address-on-charlie-kirk-is Top of Page AI Chatbots Amplify Misinformation During Breaking News Crises AI-generated ‘fact-checks’ are actively spreading falsehoods and fueling confusion during breaking news events, exemplified by the Charlie Kirk assassination. NewsGuard reports that as social media users sought clarity on the Sept. 10 incident, AI chatbots like Perplexity and Grok issued contradictory or outright inaccurate information, including denying Kirk’s death, fabricating a suspect, and falsely linking him to the Myrotvorets blacklist. This amplification of confusion occurs amidst major tech companies scaling back human fact-checkers, leading to a vacuum that AI, incapable of real-time human-like verification, fills with confident but erroneous responses. Furthermore, the accessibility of generative AI facilitates the ‘liar’s dividend,’ enabling users to baselessly label authentic footage as fabricated, thus casting doubt on legitimate content. Despite repeated examples of these tools confidently repeating falsehoods, many users continue to treat AI systems as reliable sources during crises, posing a significant challenge to information integrity and exacerbating the hostile influence landscape. Source: NewsGuard Reality Check, McKenzie Sadeghi, Sep 11, 2025, online at: https://www.newsguardrealitycheck.com/p/after-kirk-assassination-ai-fact Top of Page NewsGuard Index: Persistent American Vulnerability to Digital Misinformation Continues The latest Reality Gap Index from NewsGuard reveals that nearly two-thirds of Americans (64 percent) believed at least one of August 2025’s top false online claims, mirroring July’s high rate. This ongoing measurement highlights a significant vulnerability to digital hostile influence, with AI-generated media playing a pivotal role. Specifically, 73 percent of Americans believed or were uncertain about AI-fabricated images and videos falsely depicting Donald Trump and Jeffrey Epstein with underage girls. This underscores the potent and deceptive nature of synthetic media in shaping public perception. Other significant falsehoods included an inaccurate claim about President Trump declaring martial law in Washington D.C., and a widely uncertain narrative concerning $100 million missing from a California wildfire charity. The index, based on a YouGov survey of 1,000 Americans, underscores the persistent challenge of online misinformation and its deep penetration into mainstream public belief, indicating a critical landscape for cyfluence operations where fabricated content can readily sow discord and confusion.
Source: NewsGuard, Samantha Tanner, Sep 09, 2025, available online at: https://www.newsguardrealitycheck.com/p/nearly-two-thirds-of-americans-believe Top of Page Leaked Files Expose China’s Global Export of Disinformation and Surveillance Capabilities Leaked documents from Chinese firms Geedge Networks and GoLaxy expose a significant commercialization of censorship, surveillance, and propaganda technologies, challenging the traditional view of China’s digital control. A recent article from Wired explains how Geedge offers a ‘Great Firewall as a service’ to nations like Kazakhstan, Pakistan, Ethiopia, and Myanmar, enabling governments to monitor, intercept, and manipulate internet traffic. Concurrently, GoLaxy leverages AI for extensive social media data collection, political mapping, and pushing targeted narratives through fabricated accounts. Its clients include the Chinese Communist Party, government, and military, with internal documents boasting capabilities akin to Cambridge Analytica in shaping discourse around sensitive topics such as Taiwan, Hong Kong, and U.S. elections. Researchers highlight that these revelations demonstrate market forces actively shaping digital authoritarianism, with companies competing for contracts and setting sales targets. This commercialization extends beyond mere censorship into active disinformation, as targeted propaganda, synthetic profiles, and narrative manipulation are openly marketed to government clients. The findings underscore a concerning global proliferation of sophisticated hostile influence capabilities, driven by profit motives within China’s tech sector. Source: Wired, Z Yang & L Matsakis, Sep 11, 2025, available online at: https://www.wired.com/story/made-in-china-how-chinas-surveillance-industry-actually-works/ Top of Page Kremlin’s UNGA Performance: A Masterclass in Deceptive Peacemaking and Reality Distortion EUvsDisinfo reveals the profound chasm between Moscow’s pronouncements at the United Nations General Assembly and its hostile actions, underscoring a sophisticated, global disinformation campaign. While Russia champions peace, development, and human rights at UNGA, its actions consistently subvert these ideals, deploying narratives of projection, denial, and distraction. The Kremlin’s “peace” proposals are, in reality, demands for Ukrainian surrender, masked by false accusations against Kyiv and the West for prolonging conflict. Concurrently, Moscow propagates the falsehood that Western sanctions harm Europe more, despite mounting evidence of Russia’s stagnating civilian economy and severe budget strain due to war production. Russia further attempts to position itself as a protector of the “Global South” against Western “bullying,” even as its documented interventions from Syria to the Sahel reveal a pattern of destabilization and state capture. Disinformation tactics extend to fabricating claims of Ukrainian chemical weapons use, while credible reports confirm Russia’s own deployment of such agents. Most disturbingly, Russia denies the forced deportation of over 20,000 Ukrainian children, a confirmed war crime linked to its demographic crisis, portraying these abductions as mere evacuations. This systematic deceit makes a mockery of international principles, forming a core component of Russia’s hostile influence operations.
Source: EUvsDisinfo, September 12, 2025, available online: https://euvsdisinfo.eu/please-mind-the-gap-moscows-words-at-unga-vs-deeds-on-the-ground/ Top of Page Disinformation Blunts Accountability: Russia’s Pattern of Denying War Crimes The Kremlin is shown to have falsely accused Ukraine of fabricating mass casualty figures following a Russian glide-bomb strike in Donetsk, according to research by Disinfowatch. Within hours of the September 9 incident, RT published denials from an unnamed Russian Defense Ministry source, claiming the story originated with President Zelensky and was amplified by ‘Ukrainian propaganda.’ This narrative asserted ‘no strikes in the area’ and that the crater did not match an aerial bomb impact, fitting an agenda to discredit Kyiv’s care for Donbas residents. However, the attack is independently and extensively documented. Reuters, AP News, and the Los Angeles Times published on-scene reporting and imagery confirming the casualties. The UN Humanitarian Coordinator for Ukraine issued an official condemnation, directly contradicting the claim that the story began with a single politician. Ukrainian officials beyond Zelensky also reported the strike, while Kyiv’s provision of frontline pensions is well-documented, countering RT’s insinuations. This incident exemplifies a textbook Kremlin denial strategy, which attacks messengers and injects pseudo-forensics to muddy clear evidence. Such a tactic aims to blunt outrage and accountability for suspected war-crime incidents against civilians, forming a critical component of Russia’s cyfluence operations amid claims of not targeting civilians. Source: Disinfowatch, Sep 9, 2025, available online: https://disinfowatch.org/disinfo/kremlin-falsely-accuses-ukraine-of-fabricating-mass-strike-casualties/ Top of Page Medvedev, Dmitriev Exploit Kirk’s Killing to Blame Left, Undermine Kyiv Support Newsweek reports that Russian officials, including former president Dmitry Medvedev and Kremlin negotiator Kirill Dmitriev, have exploited the assassination of conservative activist Charlie Kirk to exacerbate U.S. political divisions and advance anti-Ukraine narratives. These actions represent a clear cyber hostile influence operation, leveraging a domestic tragedy to sow discord and undermine Western support for Kyiv. Kirk, known for his anti-NATO stance and criticism of Ukrainian President Zelensky, whom he once called a ‘puppet of the CIA,’ provided fertile ground for this disinformation. Medvedev specifically blamed ‘left-wing liberal scum who support Banderite Kyiv’ for the murder, falsely associating Ukraine with Nazi sympathies, while Dmitriev amplified content celebrating Kirk’s death and echoing sentiments like ‘The Left is the party of murder.’ This exploitation aims to falsely link Ukraine supporters with violence and pressure right-wing Americans to withdraw their backing for Kyiv, aligning with broader Kremlin propaganda. Mark Shanahan, a U.S. politics expert, noted this incident highlights how America’s already hyper-polarized political landscape offers ripe opportunities for foreign adversaries to amplify internal conflicts and destabilize discourse.
Source: Newsweek, Sep 11, 2025, available online at: https://www.newsweek.com/kirk-killing-medvedev-maga-2128048 Top of Page Telegram’s Strategic Underreporting Challenges EU Digital Governance and Information Integrity An informative EUvsDisinfo article highlights Telegram’s emergence as a primary conduit for hostile digital influence campaigns across Europe, directly challenging the EU’s information space and digital governance frameworks. Since its 2013 founding, Telegram has rapidly expanded, boasting 1 billion users globally by 2025, driven by its multi-purpose functionality, perceived security, and minimal content moderation. These factors, coupled with co-founder Pavel Durov’s “freedom of speech” branding, make it attractive to malicious actors. Critically, Telegram appears to strategically underreport its EU user base to evade designation as a Very Large Online Platform (VLOP) under the EU’s Digital Services Act, sidestepping stringent content moderation and accountability measures. Case studies from Spain, Germany, France, and Poland illustrate Telegram’s use for pivoting from anti-vaccination narratives to pro-Kremlin disinformation, disrupting elections, and amplifying content banned elsewhere. Ukraine’s experience serves as a stark warning: Telegram’s unregulated expansion has normalized anonymous channels as primary news sources, enabling Russian actors to conduct pervasive influence operations and foster societal division. The EU must heed these lessons, implementing robust regulation and transparency to safeguard democratic values from Telegram’s corrosive influence. Source: EUvsDisinfo, P Burdiak, O Monastyrskyi & O Tretyakov-Grodzevych, September 08, 2025, available online: https://euvsdisinfo.eu/eus-telegram-dilemma-the-rise-of-unchecked-influence/ Top of Page [Takeaways] This week's reporting underscores the persistent and evolving nature of Russian and Chinese information operations, which continue to target democratic vulnerabilities globally. The established strategy remains twofold: sustained campaigns to degrade electoral integrity in nations like the Philippines, Moldova, and Czechia, alongside the opportunistic weaponization of domestic crises in the U.S. to deepen polarization. Generative AI remains a key force multiplier, its utility extending beyond creating synthetic content to actively degrading the information commons through the “liar’s dividend” and the misinforming output of AI chatbots. This hostile activity is enabled by under-regulated platforms and amplified by a now-established strategic trend: the commercialization of digital authoritarianism. The continued export of influence-as-a-service by Chinese firms ensures that the capabilities to erode democratic cohesion are becoming more accessible, solidifying a long-term, systemic challenge to open societies. [CRC Glossary] The Cyfluence Research Centre has relaunched the CRC Glossary. This initiative aims to serve as a shared lexicon of both foundational and emerging terms that shape the field. To this end, the Glossary is designed to be a continually updated resource, with new entries added weekly. We see this as a collaborative project and strongly encourage input from the expert community. The goal is to reduce the problem of ambiguous or conflicting terminology that can hinder collaborative work and effective communication with the general public. We invite you to submit additions, changes, or corrections via the form on our website. [Download]

  • Stark Industries Solutions: A Threat Activity Enabler (TAE) in Focus

    This blog builds on the new Insikt Group report [i] on Stark Industries Solutions to examine how hosting providers can serve as Threat Activity Enablers (TAEs) [ii] in hostile cyber and influence operations. The case of Stark Industries illustrates how infrastructure providers, often presenting themselves as legitimate businesses, become indispensable to the delivery of disinformation, cyberattacks, and other hybrid threats. Stark Industries Solutions Ltd., incorporated in the United Kingdom in February 2022, was founded by Iurie and Ivan Neculiti. Both have a long history in the hosting sector, with Ivan previously linked to Morenehost Ltd., an offshore service exposed in the Pandora Papers database (see ICIJ Offshore Leaks database [iii]). Stark operated as a “white label” [iv] brand for PQ.Hosting [v], offering Virtual Private Server (VPS), proxy, and Virtual Private Network (VPN) services while concealing the true operators. [vi] Over time, its networks were repeatedly observed in connection with Distributed Denial-of-Service (DDoS) attacks, financially motivated actors such as FIN7 [vii], and, importantly, infrastructure supporting pro-Russian information manipulation operations, including the Doppelgänger or “Recent Reliable News” (RRN) network [viii] (find more information about Doppelgänger in the CRC article and blog sections). In these contexts, Stark’s role was not to generate content but to provide the resilient infrastructure that made such campaigns scalable and durable. On 20 May 2025, the Council of the European Union sanctioned Stark Industries Solutions Ltd., together with its CEO and owner, for enabling Russian state-sponsored cyber operations, explicitly citing their role in information manipulation, interference, and cyberattacks. [ix] The move followed media exposure: on 8 May 2025, the Moldovan service of Radio Free Europe/Radio Liberty reported on leaked sanction lists that named the Neculiti brothers, [x] and the central newsroom of RFE/RL confirmed the forthcoming designations on 9 May. [xi] Timeline of events observed by Insikt Group, courtesy of Recorded Future [xii]  The Insikt report concludes that Stark anticipated the sanctions and deliberately restructured its operations. In April 2025, Russian infrastructure was already being migrated to UFO Hosting LLC [xiii], a Moscow-based Internet Service Provider (ISP) registered under ASN AS33993. [xiv] Domains such as [bill-migration-db.stark-industries.solutions] and [russia.stark-industries.solutions] resolved through UFO-announced IP space before the EU’s action. [xv] When the sanctions came on 20 May, Stark was formally listed in the EU’s Official Journal. [xvi] Nine days later, on 29 May, PQ.Hosting announced a full rebrand as THE.Hosting, presenting the Dutch entity WorkTitans B.V. as the new corporate vehicle. By 24 June, a new ASN, AS209847, had been created to consolidate the rebrand. [xvii] Company details of WorkTitans B.V., courtesy of Recorded Future [xviii]  The RIPE database [xix] showed that maintainer [xx] records across PQ Hosting Plus, UFO Hosting, and THE.Hosting all shared the same identifiers tied to Russian operator Dmitrii Miasnikov. [xxi] This demonstrated operational continuity behind the cosmetic changes. For analysts, this case illustrates the importance of domain and network analysis in understanding influence operations. Narratives and content can shift rapidly, but infrastructure leaves durable traces.
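To make this kind of pivot concrete, the minimal Python sketch below queries the public RIPE database REST search endpoint and compares the maintainer (mnt-by) attributes of two ASNs named in the report. The endpoint and JSON layout follow RIPE's published documentation; the flag choice, the helper name, and the interpretation of any overlap are illustrative assumptions, not a reproduction of Insikt Group's tooling.

```python
import json
import urllib.parse
import urllib.request

# Public RIPE database REST search endpoint (JSON output).
RIPE_SEARCH = "https://rest.db.ripe.net/search.json"

def maintainers_of(resource: str) -> set[str]:
    """Return all mnt-by values attached to a RIPE object, e.g. an aut-num.

    Assumes RIPE's documented objects -> object -> attributes -> attribute
    JSON layout; 'no-referenced' suppresses related objects in the reply.
    """
    query = urllib.parse.urlencode(
        {"query-string": resource, "flags": "no-referenced"}
    )
    with urllib.request.urlopen(f"{RIPE_SEARCH}?{query}") as resp:
        data = json.load(resp)
    return {
        attr["value"]
        for obj in data.get("objects", {}).get("object", [])
        for attr in obj.get("attributes", {}).get("attribute", [])
        if attr["name"] == "mnt-by"
    }

if __name__ == "__main__":
    # ASNs named in the report: UFO Hosting (AS33993) and the
    # post-rebrand network (AS209847). Overlapping maintainers across
    # nominally unrelated ASNs suggest common operational control.
    shared = maintainers_of("AS33993") & maintainers_of("AS209847")
    print("Shared maintainers:", shared or "none found")
```

The same pivot generalizes to inetnum and route objects, which is how prefix transfers and recurring operator fingerprints remain visible even after a rebrand.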
Tracking RIPE records [xxii], ASN histories (to observe continuity despite rebrands), prefix transfers [xxiii], and maintainer overlaps [xxiv] enables the continuity of disinformation infrastructure to be followed even when brands and jurisdictions change. The Insikt report provides concrete examples: leaked sanction lists triggered asset transfers observable in RIPE, domains resolved through UFO Hosting while protected by DDoS-mitigation services, and operator fingerprints remained visible across multiple shells. The full Insikt Group report is recommended reading for practitioners. It offers a detailed account of how a sanctioned TAE adapted with minimal disruption. The Stark case is a reminder that sanctioning entities involved in hostile information operations is necessary but not sufficient; without infrastructure-focused monitoring and multilateral coordination, such actors will continue to sustain malign campaigns under new names. [Footnotes] [i] Recorded Future, Insikt Group, 2025. One step ahead: Stark Industries Solutions preempts EU sanctions. [online] Published 27 August 2025. Available at: https://assets.recordedfuture.com/insikt-report-pdfs/2025/cta-2025-0827.pdf [ii] A Threat Activity Enabler (TAE) is a company or service provider whose infrastructure, such as hosting, VPNs, or proxy networks, is repeatedly used to support malicious cyber or influence operations. TAEs may not conduct attacks or disinformation themselves but provide the technical backbone that allows hostile actors to operate at scale. Because they sit in a gray zone between legitimate business and illicit use, TAEs are difficult to disrupt and often adapt quickly to sanctions or law enforcement actions. Source: Recorded Future, Insikt Group, 2025. One step ahead: Stark Industries Solutions preempts EU sanctions. [online] Published 27 August 2025. Available at: https://assets.recordedfuture.com/insikt-report-pdfs/2025/cta-2025-0827.pdf [iii] International Consortium of Investigative Journalists, n.d. Offshore Leaks database: Morenehost Ltd (Node 240120865). [online] Available at: https://offshoreleaks.icij.org/nodes/240120865 ; Recorded Future, Insikt Group, 2025. One step ahead: Stark Industries Solutions preempts EU sanctions. [online] p. 3, Published 27 August 2025. Available at: https://assets.recordedfuture.com/insikt-report-pdfs/2025/cta-2025-0827.pdf [iv] The term “white label” refers to a reseller brand without its own infrastructure. [v] “PQ Hosting is a Moldova-based hosting provider founded in 2019 by Ivan Neculiti. The company offers VPS/VDS, dedicated servers, VPN, and DNS services in over 35 countries, serving more than 100,000 clients.” Source: PQ Hosting, n.d. PQ Hosting: services, global reach, and infrastructure. [online] Available at: https://pq.hosting (checked 12 September 2025). [vi] KrebsOnSecurity, 2024. Stark Industries Solutions: An Iron Hammer in the Cloud. [online] Published 23 May 2024. Available at: https://krebsonsecurity.com/2024/05/stark-industries-solutions-an-iron-hammer-in-the-cloud/ [vii] FIN7 (also known as the “Carbanak Group”) is a Russian-speaking cybercrime organization active since at least 2015, targeting U.S. and international retail and restaurant chains. The group is best known for deploying malware on point-of-sale systems to steal millions of payment card records. According to the FBI, FIN7’s campaigns caused billions of dollars in losses to businesses and consumers. Source: FBI, 2018. How cyber crime group FIN7 attacked and stole data from hundreds of U.S. companies.
[online] Published 1 August 2018. Available at: https://www.fbi.gov/contact-us/field-offices/seattle/news/stories/how-cyber-crime-group-fin7-attacked-and-stole-data-from-hundreds-of-us-companies [viii] KrebsOnSecurity, 2024. The Stark truth behind the resurgence of Russia’s FIN7. [online] Published 10 July 2024. Available at: https://krebsonsecurity.com/2024/07/the-stark-truth-behind-the-resurgence-of-russias-fin7/ [ix] European Union, 2025. Council Decision (CFSP) 2025/966 of 20 May 2025 amending Decision (CFSP) 2024/2643 concerning restrictive measures in view of Russia’s destabilising activities. ST/5953/2025/INIT. [online] Published 20 May 2025. Available at: https://eur-lex.europa.eu/eli/dec/2025/966/oj/en [x] Europa Liberă Moldova, 2025. UE pregătește sancțiuni contra a doi frați de la Bender, acuzați că luptă în războiul hibrid al Rusiei împotriva Europei [The EU is preparing sanctions against two brothers from Bender, accused of fighting in Russia’s hybrid war against Europe]. [online] Published 8 May 2025. Available at: https://moldova.europalibera.org/a/ue-pregateste-sanctiuni-contra-a-doi-frati-de-la-bender-acuzati-ca-lupta-in-razboiul-hibrid-al-rusiei-impotriva-europei/33407343.html [xi] RFE/RL, Rikard Jozwiak, 2025. The EU’s latest sanctions package against Russia might be its weakest yet. [online] Published 9 May 2025. Available at: https://www.rferl.org/a/eu-russia-sanctions-package-ukraine-hungary-/33409397.html [xii] Recorded Future, Insikt Group, 2025. One step ahead: Stark Industries Solutions preempts EU sanctions. [online] Published 27 August 2025. Available at: https://assets.recordedfuture.com/insikt-report-pdfs/2025/cta-2025-0827.pdf [xiii] IPinfo, n.d. UFO Hosting LLC (AS33993) details. [online] Available at: https://ipinfo.io/AS33993 [xiv] ASN stands for Autonomous System Number, a unique identifier for a network that participates independently in global internet routing; following ASN histories allows researchers to see when companies rebrand but continue using the same underlying infrastructure. [xv] Recorded Future, Insikt Group, 2025. One step ahead: Stark Industries Solutions preempts EU sanctions. [online] pp. 10-11, Published 27 August 2025. Available at: https://assets.recordedfuture.com/insikt-report-pdfs/2025/cta-2025-0827.pdf [xvi] European Union, 2025. Council Decision (CFSP) 2025/966 of 20 May 2025 amending Decision (CFSP) 2024/2643 concerning restrictive measures in view of Russia’s destabilising activities. ST/5953/2025/INIT. [online] Published 20 May 2025. Available at: https://eur-lex.europa.eu/eli/dec/2025/966/oj/en [xvii] RIPE NCC, n.d. RIPE database record for AS209847. [online] Available at: https://apps.db.ripe.net/db-web-ui/query?bflag=false&dflag=false&rflag=true&searchtext=AS209847&source=RIPE [xviii] Recorded Future, Insikt Group, 2025. One step ahead: Stark Industries Solutions preempts EU sanctions. [online] Published 27 August 2025. Available at: https://assets.recordedfuture.com/insikt-report-pdfs/2025/cta-2025-0827.pdf [xix] RIPE stands for Réseaux IP Européens, the regional internet registry for Europe that records who controls IP addresses and networks. [xx] A maintainer in the RIPE database is the technical contact responsible for managing IP resources; if multiple companies use the same maintainer entries, it strongly suggests they are controlled by the same actors. [xxi] Recorded Future, Insikt Group, 2025. One step ahead: Stark Industries Solutions preempts EU sanctions. [online] p. 17, Published 27 August 2025.
Available at: https://assets.recordedfuture.com/insikt-report-pdfs/2025/cta-2025-0827.pdf [xxii] RIPE records are public entries showing who controls IP address blocks and networks. [xxiii] A prefix is a block of IP addresses; a transfer is the movement of that block from one provider to another. Such transfers often indicate attempts to mask continuity. [xxiv] Maintainer overlaps are shared technical contacts that reveal common operators.

  • CRC Weekly: Cyber-based hostile influence campaigns 15th-21st September

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. [Report Highlights] Russia's Ministry of Defense is "AI-washing" propaganda with a new program featuring a virtual host. - NewsGuard A large-scale Russian hybrid campaign, using bots and trolls, has been confirmed as a key factor in the annulment of Romania's 2024 presidential elections. - Romania Insider A Russian covert influence network is automating pro-Kremlin propaganda using uncensored LLMs to populate hundreds of new websites. - Recorded Future & Cybernews China is expanding its global influence by controlling digital platforms and shaping societal narratives through grassroots cultural organizations. - Doublethink Lab A U.S. congressional investigation is probing a "dark money network" allegedly funneling CCP funds to fuel civil unrest in America. - Global Influence Operations Report The misinformation output from leading AI chatbots has doubled as malign actors exploit real-time web search features to launder falsehoods. - Cybernews Leaked documents reveal a Beijing-based firm has developed an "army of AI personas" for conducting sophisticated online influence operations. - The Record [Weekly Review] Kremlin Targets Western Cohesion: Disinformation Undermines Ukraine 'Coalition of the Willing' Russia's AI-Powered Disinformation ‘Severe Interference’: Annulment of Romanian Presidential Elections Confirmed Deepfakes, Bribery, and Propaganda Target Moldova's Sovereignty CopyCop's Global Ambitions: Russian Influence Network Expands with 300 New Websites Digital Platforms and Narratives: PRC Expands Influence via Social Media Ownership Dark Money Network: Unmasking CCP's Covert Influence in US Society AI Chatbots' Misinformation Rate Has Doubled China's GoLaxy Deploys AI Persona Armies for Advanced Global Influence Operations Fabricated Israel-Kirk Assassination Claim Gains Millions of Views via Amplification Techniques Building Resilience: A Blueprint for Countering Information Warfare in Eastern Europe Kremlin Targets Western Cohesion: Disinformation Undermines Ukraine 'Coalition of the Willing' A recent analysis from NewsGuard Reality Check reveals a multi-pronged pro-Kremlin disinformation campaign aimed at discrediting and dividing Western nations over proposed peacekeeping deployments in Ukraine. This campaign, targeting the 'Coalition of the Willing,' uses advanced tactics to undermine European support for Kyiv, a critical component of the broader information warfare landscape. Propagandists deployed AI-generated videos, impersonating legitimate news outlets like France 24, to falsely claim that French President Emmanuel Macron's army chief had disavowed an order to send troops. Simultaneously, pro-Kremlin channels circulated a fabricated map, riddled with linguistic errors, purporting to show a Western plan to partition Ukraine's territory and resources among France, the U.K., Poland, and Romania. These efforts seek to frame Western peacekeeping as imperialistic aggression, discouraging further U.S. and European involvement.
The campaign leverages anonymous accounts across major social media platforms and builds on a pattern of targeting France as Macron increases support for Ukraine, as previously documented by NewsGuard. Source: NewsGuard, Reality Check, https://www.newsguardrealitycheck.com/p/kremlin-propaganda-targets-western Top of Page Russia's AI-Powered Disinformation According to NewsGuard, the state-owned Zvezda TV channel now airs a program featuring an AI-generated host named "Natasha." This approach allows the Kremlin to disseminate propaganda, including deepfakes of Western leaders, while maintaining plausible deniability by attributing the content to an algorithm. The strategy represents a significant development in state-sponsored disinformation, using AI as a tool to obscure attribution and create a layer of separation from the propaganda's message, making it more challenging to track. Source: NewsGuard, Reality Check, https://www.newsguardrealitycheck.com/p/kremlin-propaganda-targets-western Top of Page ‘Severe Interference’: Annulment of Romanian Presidential Elections Confirmed Romania's general prosecutor has confirmed that the country's annulled 2024 presidential elections were heavily influenced by a large-scale Russian hybrid campaign. The report from Romania Insider states that Russia deployed bots, trolls, and AI-generated content to inflame tensions and amplify hate speech during the campaign. Four Russia-linked firms were identified as being behind the operations, which targeted voters through a mix of traditional media, troll farms, and coordinated bot networks. This digital interference was deemed so severe that it contributed to the unprecedented cancellation of the election results, highlighting a direct cyber-enabled assault on a nation's democratic process. Source: Romania-Insider.com, Radu Dumitrescu, 2025, https://www.romania-insider.com/romania-cancelled-presidential-elections-russia-general-prosecutor-2025 Top of Page Deepfakes, Bribery, and Propaganda Target Moldova's Sovereignty Ahead of its September parliamentary elections, Moldova faces significant risks from large-scale Russian interference. A report by SBS News highlights an extensive disinformation network comprising tens of thousands of accounts that spread falsehoods via AI-generated deepfakes, bots, and trolls. Moscow has reportedly spent up to €200 million on these digital influence efforts. Pro-Russian parties, particularly in regions dominated by Russian media, benefit from Kremlin narratives that are amplified by these campaigns. Critics warn that such interference could undermine Moldova’s EU integration and jeopardize its sovereignty by exploiting the digital information space for political gain. Source: SBS News, https://www.sbs.com.au/news/article/cash-for-votes-deepfakes-and-propaganda-russias-latest-bid-for-influence/xmp4569qg Top of Page CopyCop's Global Ambitions: Russian Influence Network Expands with 300 New Websites Recorded Future's Insikt Group provides an analysis of a significant expansion of CopyCop (Storm-1516), a Russian covert influence network. Since March 2025, CopyCop has created over 300 new fictional media websites, targeting the United States, France, Canada, Armenia, and Moldova, while diversifying its linguistic reach to include Turkish, Ukrainian, and Swahili content.
This vast infrastructure, very likely operated by John Mark Dougan with support from the Moscow-based Center for Geopolitical Expertise (CGE) and the GRU, disseminates pro-Russian, anti-Ukrainian, and anti-Western narratives. The network's core objectives remain consistent: eroding support for Ukraine and exacerbating political fragmentation in Western nations, now extending to Russia’s broader sphere of influence. A notable tactical evolution includes the use of self-hosted, uncensored large language models, likely based on Meta's Llama 3, to generate AI content at scale. CopyCop employs deepfakes, fake interviews, and fabricated dossiers, with content frequently amplified by social media influencers and other Russian networks like Portal Kombat. The network's impact remains significant, regularly achieving high organic engagement and breaking into mainstream political discourse, posing a persistent threat to democratic institutions and information integrity globally. Source: Recorded Future Research, Insikt Group, Sep 2025, https://www.recordedfuture.com/research/copycop-deepens-its-playbook-with-new-websites-and-targets Top of Page Digital Platforms and Narratives: PRC Expands Influence via Social Media Ownership The 2024 China Index, assessing 101 countries across nine domains, has revealed a general intensification of Beijing’s reach since 2022. Doublethink Lab's Global Research Team examines the implications of this significant expansion of China’s global influence. While East and Southeast Asia remain highly impacted, the sharpest increases are observed in Central America, the Caribbean, and Sub-Saharan Africa, reflecting evolving strategies, such as the Belt and Road Initiative, and broadened technological investments. Technology and foreign policy persist as the most influential domains globally, but the Society domain recorded the largest increase, signaling deeper efforts to shape societal narratives. Notably, the report highlights a critical trend: PRC-linked entities now control or hold significant ownership in top social media or messaging applications in 65% of assessed countries, up from 45% in 2022. This expanded digital platform involvement, alongside intensified grassroots engagement through cultural events, underscores Beijing's growing capacity to influence information environments and align global discourse with its policy objectives. Conversely, coercive tactics show little correlation with actual policy alignment, suggesting a preference for long-term engagement. Source: Medium, Doublethink Lab Global Research Team, Sep 2025, https://medium.com/doublethinklab/china-index-2024-mapping-prc-influence-across-101-countries-full-report-6adc37562677 Top of Page Dark Money Network: Unmasking CCP's Covert Influence in US Society A congressional investigation into a U.S. billionaire is escalating over allegations that he is fueling a Chinese Communist Party influence campaign. According to the Global Influence Operations Report, the strategy involves a "dark money network" that funnels millions through non-profits to pro-CCP media outlets. These funds are intended to foment civil unrest and manipulate political discourse in the United States. The probe is scrutinizing potential violations of the Foreign Agents Registration Act (FARA). The case highlights a sophisticated form of covert foreign intervention that leverages financial and media platforms to secretly create social discord.
Source: The Global Influence Operations Report, The House Committee on Oversight and Government Reform, https://www.global-influence-ops.com/ccp-funding-congress-probes-neville-roy-singham/ Top of Page AI Chatbots' Misinformation Rate Has Doubled Cybernews reports on the increasing rate of misinformation from AI chatbots, citing a NewsGuard study that found a doubling of falsehoods in the past year. According to the article, this is due to a "structural tradeoff" in which AI models' use of real-time web searches and a 0% non-response rate has made them more vulnerable to a "polluted online information ecosystem." Malign actors, such as Russian disinformation operations, exploit this by seeding the internet with falsehoods that the chatbots then "launder" as legitimate information. The article also references a report by the American Security Project, which found that some AI models have been trained on messaging from the Chinese Communist Party. The broader issue, as the article concludes, is that the public places a high degree of trust in AI despite its known propensity to "hallucinate," a problem even acknowledged by OpenAI CEO Sam Altman. Source: Cybernews, Sep 2025, https://cybernews.com/ai-news/ai-chatbots-misinformation-rate-falsehoods/ ; NewsGuard, AI False Information Rate Nearly Doubles in One Year, Sep 2025. [online] Available at: https://www.newsguardtech.com/ai-monitor/august-2025-ai-false-claim-monitor/ Top of Page China's GoLaxy Deploys AI Persona Armies for Advanced Global Influence Operations The leaked "GoLaxy papers," discovered by Brett Goldstein and analyzed with Brett Benson, are examined by The Record and reveal that China's GoLaxy has developed an "AI persona army" for sophisticated information warfare. The system uses generative AI, such as DeepSeek, to create highly realistic digital identities. These personas move beyond traditional "blunt-force" troll farm methods, instead engaging in subtle, human-like interactions to shape narratives and conduct persuasion operations. The system scrapes millions of social media data points to build psychological profiles, including of thousands of Americans, for highly targeted propaganda. This development represents a major shift in global information warfare, posing a more effective threat to national security in regions such as Hong Kong, Taiwan, and potentially the United States. GoLaxy's deletion of website sections after being contacted lends credibility to these claims, highlighting the challenge of detecting and defending against AI-generated manipulation. Researchers warn that AI-driven propaganda is a present danger, requiring new defenses to distinguish human from machine interactions. This new frontier in national security demands urgent public and governmental understanding. Source: The Record, Dina Temple-Raston & Erika Gajda, September 2025, https://therecord.media/golaxy-china-artificial-intelligence-papers Top of Page Fabricated Israel-Kirk Assassination Claim Gains Millions of Views via Amplification Techniques NewsGuard's "Reality Check" reports on a disinformation strategy in which anti-Israel social media influencers falsely claimed a letter from Israel's chief rabbi proved the country had foreknowledge of Charlie Kirk's assassination. Prominent figures like Jackson Hinkle, Candace Owens, and Nick Fuentes amplified the claim, which was based on a typo in the letter's date.
The chief rabbi's office confirmed to NewsGuard that the letter was actually sent two days after Kirk's death, not eight days before. This case exemplifies how bad actors exploit errors and misinformation to fuel conspiracy theories and advance a broader narrative. In this instance, the disinformation sought to implicate Israel in the murder of a pro-Israel activist, a claim that Iran also promoted. The broad reach of these false posts, garnering millions of views, highlights the speed at which unsubstantiated claims can spread on platforms like X and the challenge of correcting such falsehoods once they have gained traction. Source: NewsGuard, Reality Check, September 2025, https://www.newsguardrealitycheck.com/p/letter-with-typo-pushed-as-evidence Top of Page Building Resilience: A Blueprint for Countering Information Warfare in Eastern Europe The Atlantic Council's DFRLab and the Info Ops Poland Foundation have partnered to counter information manipulation in the South Caucasus and Eastern Europe, where such campaigns are used to undermine democratic institutions. The project focuses on creating a "blueprint" to counter these strategies by strengthening the capabilities and expertise of local stakeholders. Building on a previous project that documented hostile actors' tactics and techniques, the new initiative aims to build a more resilient society capable of mitigating both immediate and future threats from information warfare, moving beyond simply exposing individual disinformation campaigns. Source: The Atlantic Council's Digital Forensic Research Lab (DFRLab), Info Ops Poland Foundation, https://dfrlab.org/2025/09/15/module-ii-partnering-to-counter-information-manipulation-in-south-caucasus-and-eastern-europe/ Top of Page [CRC Glossary] The Cyfluence Research Center has relaunched the CRC Glossary. This initiative aims to serve as a shared lexicon of both foundational and emerging terms that shape the field. To this end, the Glossary is designed to be a continually updated resource, with new entries added weekly. We see this as a collaborative project and strongly encourage input from the expert community. The goal is to reduce the problem of ambiguous or conflicting terminology, which can hinder both collaborative work and effective communication with the general public. We invite you to submit additions, changes, or corrections via the form on our website. [Download]

  • CRC Weekly: Cyber-based hostile influence campaigns 1st - 7th September

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the first week of September 2025, we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. [Highlights] The Telegram channel 'War on Fakes' was established one day before the 2022 invasion to preemptively push false narratives and deflect blame for Russian war crimes. During the recent India-Pakistan conflict, the competing influence operations 'Hidden Charkha' and 'Khyber Defender' deployed over 1,200 inauthentic accounts in support of their respective governments. Multiple, distinct Russia-linked influence networks are converging their efforts to destabilize Moldova's elections and halt its pro-European trajectory. The Moscow Patriarchate is using religious conferences in Africa to expand Russian influence in coordination with state intelligence and mercenary operatives. The rate of AI tools repeating false news claims nearly doubled from 18 percent to 35 percent in one year. The #trumpdead hashtag campaign on X generated over 35 million views within a four-day period, showcasing the narrative's rapid, high-volume spread. The Cyfluence Research Center has relaunched the CRC Glossary. This initiative aims to provide a lexicon of both foundational and emerging terms relating to Cyfluence. [Weekly Review] Influential Figures Exploit Minneapolis Shooting to Push Competing Narratives on X According to Wired, the aftermath of the Minneapolis church shooting became a case study in how X's platform architecture accelerates the spread of hostile influence. Immediately following the event, high-profile figures, including politicians and activists, exploited the information vacuum to disseminate competing and unsubstantiated narratives about the shooter's motives. These claims, which ranged from anti-Christian hate to white supremacy and leftist radicalization, quickly went viral. This rapid spread was facilitated by X's weakened moderation and an algorithmic model optimized for engagement over factual accuracy. The platform's incentive structure rewarded sensational content, allowing false claims to outpace verified information. Even X's own content summaries reportedly amplified details that fueled political narratives. While experts ultimately concluded the shooter's motivations were likely nihilistic rather than ideological, the platform had already successfully converted the tragedy into a vehicle for political polarization and viral misinformation, demonstrating a significant vulnerability in the modern information ecosystem. Source: Wired, David Gilbert, How Disinformation About the Minnesota Shooting Spread Like Wildfire on X, Sep 3, 2025, [online] Available at: https://www.wired.com/story/disinformation-minnesota-shooting-x/ Top of Page Institutional Mimicry: Russia Deploys Fake Fact-Checking Outfits to Launder Propaganda EUvsDisinfo has reported that pro-Russian actors are actively corrupting the practice of fact-checking by creating bogus organizations to legitimize and disseminate state-sponsored propaganda.
This tactic represents a continuation of the Kremlin's subversion of journalistic principles, which intensified following the 2022 full-scale invasion of Ukraine with initiatives like the 'War on Fakes' Telegram channel. The most recent and sophisticated effort is the 'Global Fact-Checking Network' (GFCN), a Kremlin-funded entity intentionally named to resemble the legitimate International Fact-Checking Network. Launched in April by Russian Foreign Ministry spokesperson Maria Zakharova, the GFCN is managed by sanctioned individuals previously involved in other influence operations. Its content is notably vacuous, avoiding any scrutiny of Russia while covertly inserting pro-Kremlin messaging and allusions to Western culture wars. Despite its formal structure, the operation currently exhibits negligible impact, with its social media channels attracting almost no organic viewership or followers, suggesting its pretenses have failed to gain traction. Source: EUvsDisinfo, 2025, Fake fact-checking: when facts are fiction and falsehoods are facts. [online] Available at: https://euvsdisinfo.eu/fake-fact-checking-when-facts-are-fiction-and-falsehoods-are-facts/ Top of Page Vilification Campaigns and Inauthentic News Deployed Against Moldovan Leadership A publication from Recorded Future's Insikt Group covers how multiple Russia-linked influence operations are converging to destabilize Moldova's September elections and derail its accession to the European Union. Networks including Operation Overload, Operation Undercut, and the Foundation to Battle Injustice are executing parallel campaigns to vilify President Maia Sandu and the ruling PAS party. These efforts portray EU integration as economically disastrous while promoting alignment with the Kremlin's “Russkiy Mir” doctrine. The operations leverage a range of tactics, from laundering pro-Kremlin content via aggregator sites like Pravda Moldova to deploying covert social media pages linked to oligarch Ilan Shor. For the first time, Operation Undercut has been observed using TikTok to target Moldovan users with anti-government narratives. While these campaigns have not yet achieved substantial success in shaping public opinion, they heighten risks to media integrity and voter trust. The report also notes that a retreat in US counter-disinformation efforts has created a more permissive environment for these increasingly sophisticated Russian campaigns. Source: Recorded Future, September 2025. Russian Influence Assets Converge on Moldovan Elections. [online] Available at: https://www.recordedfuture.com/research/russian-influence-assets-converge-on-moldovan-elections Top of Page Russia Deploys 'Failed State' and Falsified Data Narratives Against Canada According to a report from DisinfoWatch, Russian Foreign Ministry spokesperson Maria Zakharova has signaled an escalation of information warfare against Canada. Through a Telegram post amplified by state media, Zakharova depicted Canada as a nation in a “deep crisis of values” that could cease to exist within a decade. Her commentary leveraged a combination of established Kremlin tactics, including the promotion of fringe separatism, culture-war tropes, and anti-LGBTQ narratives. The messaging relied on specific falsehoods, such as inflating support for separatism in Manitoba and misrepresenting Canadian policies on drug decriminalization and medical assistance in dying.
These efforts align with documented Russian influence operation templates aimed at exploiting societal divisions, undermining support for Ukraine, and portraying liberal democracies as decadent and failing. The direct targeting of Canada suggests a renewed Kremlin focus on subverting the country's national unity and weakening its international alliances, signaling a new phase of hostile influence operations that Kremlin-aligned actors are expected to amplify. Source: DisinfoWatch, 5 September 2025. Kremlin Spokeswoman Zakharova Takes Aim At Canada. [online] Available at: https://disinfowatch.org/kremlin-spokeswoman-zakharova-aims-to-divide-canada/ Top of Page Moscow Patriarchate Pivots to Africa and Domestic Extremism Amid Declining Political Relevance A Jamestown Foundation analysis details how the Moscow Patriarchate (ROC MP) is compensating for its declining influence in the post-Soviet space by increasing its strategic utility to the Kremlin. Despite significant losses, particularly in Ukraine, the church is successfully executing a pivot towards new domestic and foreign influence operations. Domestically, it promotes traditional values and has helped elevate Orthodoxy to a core component of Russian civic identity for 61 percent of the population. Abroad, the ROC MP is expanding its geopolitical reach into Africa, holding conferences for local clerics to cement Russian influence in coordination with state intelligence. It also projects soft power by asserting canonical authority over groups like the Orthodox Church in America. A directive for the church to engage with the extremist 'Russian Community' shows a high-risk strategy to co-opt radical nationalism, which may secure Patriarch Kirill's position but entangles the state more deeply with extremist elements. Source: Jamestown Foundation, Paul Goble, 2025. Eurasia Daily Monitor. [online] Available at: https://jamestown.org/program/despite-losses-at-home-and-abroad-moscow-patriarchate-helps-kremlin-expand-influence/ Top of Page Generative AI Falsehood Rate Doubles Amid Push for Real-Time Responsiveness The rate at which leading generative AI tools repeat false information has nearly doubled in one year, an increase that undermines industry promises of safer systems. An audit by NewsGuard found that the failure rate for news-related prompts increased from 18 percent in 2024 to 35 percent in 2025. This degradation stems from a structural tradeoff whereby chatbots have integrated real-time web search capabilities. While this change eliminated query non-responses, it simultaneously exposed the models to a polluted online information ecosystem. Malign actors, including Russian disinformation operations, are actively exploiting this vulnerability. They are laundering falsehoods through low-engagement websites, social media, and AI-generated content farms, which the models fail to distinguish from credible outlets. The push to make AI tools more responsive and timely has inadvertently made them more susceptible to spreading propaganda, turning them into more effective conduits for hostile influence operations. Source: NewsGuard, September 4, 2025, AI False Information Rate Nearly Doubles in One Year.
[online] Available at: https://www.newsguardtech.com/ai-monitor/august-2025-ai-false-claim-monitor/ Top of Page False Trump Health Rumors Garner Tens of Millions of Social Media Views Liberal and anti-Trump social media accounts are executing a disinformation campaign alleging a severe decline in President Trump's health, including rumors of his death. This operation, analyzed by NewsGuard, relies on multiple pieces of fabricated or decontextualized evidence to construct its narrative. Key tactics include circulating a misleading map screenshot to suggest road closures at Walter Reed Medical Center and using an AI-enhanced photograph to create false visual evidence of a stroke. Actors also repurposed older media, such as a 2023 photo of an ambulance at the White House and an image of the flag at half-staff for a school shooting, to imply a current medical emergency. The campaign achieved significant reach, with one associated hashtag, #trumpdead, accumulating over 35 million views on X in four days. The events demonstrate how disparate, low-effort falsifications can be networked to create a pervasive and viral political narrative. Source: NewsGuard, Sofia Rubinson, 2025. NewsGuard Reality Check. [online] Available at: https://www.newsguardrealitycheck.com/p/bogus-evidence-for-trumps-supposed Top of Page Hidden Charkha and Khyber Defender: State-Aligned IO in South Asian Conflict Two large, state-aligned influence networks, Hidden Charkha (pro-India) and Khyber Defender (pro-Pakistan), operated during the 2025 conflict between the two nations. A report by Recorded Future provides a blueprint for how symmetrical influence operations are deployed by adversarial, nuclear-armed states to control escalation and garner support during kinetic military actions. Both networks attempted to frame their respective nations as holding the moral high ground through technological and military superiority, thereby justifying tactical restraint. Key tactics included amplifying forged military documents, exaggerating the impact of cyberattacks, and impersonating media outlets. Despite the scale of these operations, they were assessed as almost certainly unsuccessful in shaping public opinion. Their failure was attributed to an inability to break out of patriotic echo chambers and a recurrent use of generative AI for visual content, which likely undermined their credibility. Still, their activities demonstrate how patriotic sentiment can align non-state actors with government objectives during wartime. Source: Recorded Future, Insikt Group, 02 SEP 2025, Influence Operations and Conflict Escalation in South Asia. [online] Available at: https://www.recordedfuture.com/research/influence-operations-and-conflict-escalation-in-south-asia Top of Page UK Democracy Remains Vulnerable to Misinformation Amid Weak Election Reforms According to an analysis by Full Fact, the UK government's Elections Bill represents a missed opportunity, as its measures are insufficient to protect democratic processes from misinformation. While the bill contains some positive steps, such as increasing fines for the Electoral Commission and requiring digital imprints on some campaign materials, it fails to match the scale of the threat. The proposed legislation needs significant upgrades to be effective. Key recommendations include amending the Online Safety Act to cover more categories of illegal election-related content and other material harmful to democracy.
The bill should also incorporate robust media and political literacy initiatives, especially for younger voters, and establish stronger rules to deal with political deepfakes, including clear labeling requirements. Further proposals include creating a comprehensive digital library of political advertisements to enable public scrutiny and establishing an independent body to regulate non-broadcast political advertising. Without these more ambitious provisions, the bill will not achieve its stated objectives of safeguarding democracy and restoring political trust. Source: Full Fact, 1 September 2025, Protecting our democracy from the harms of misinformation and disinformation. [online] Available at: https://fullfact.org/politics/protecting-our-democracy-from-the-harms-of-misinformation-and-disinformation/ Top of Page [Takeaways] This week we saw examples of how threat actors are increasingly forgoing direct persuasion in favor of tactics such as exploiting AI vulnerabilities and mimicking trusted institutions to passively degrade the information ecosystem. This approach suggests a strategic calculation: an ungovernable and untrustworthy information space is, in itself, a victory. By fostering an environment of radical doubt, malign actors can paralyze democratic decision-making and erode social cohesion without having to win a direct contest of ideas. Top of Page [CRC Glossary] The Cyfluence Research Center has relaunched the CRC Glossary. This initiative aims to serve as a shared lexicon of both foundational and emerging terms that shape the field. To this end, the Glossary is designed to be a continually updated resource, with new entries added weekly. We see this as a collaborative project and strongly encourage input from the expert community. The goal is to reduce the problem of ambiguous or conflicting terminology, which can hinder both collaborative work and effective communication with the general public. We invite you to submit additions, changes, or corrections via the form on our website. Top of Page

  • Vietnam Framed Villain, China Not Found: Influence Ops on Repeat

In this week's blog, we examine a case study published by the Philippine newspaper PhilStar. [i] The report analyzes a coordinated hostile digital influence (HDI) campaign on X (formerly Twitter) and illustrates how information operations are deployed in the context of the South China Sea dispute. Geopolitical Background The Philippines, Vietnam, and China have long been locked in disputes over islands and reefs in the South China Sea. China has reinforced its claims through large-scale land reclamation and a strong military presence. The Philippines has responded by seeking support through alliances, closer ties with Western partners, and increasingly through cooperation with neighboring states. [ii] Vietnam, in particular, has recently drawn closer to Manila, as the two countries have established a strategic partnership, launched joint coast guard exercises, and set up direct lines of communication. These developments risk isolating China. That is why Beijing has a strong incentive to sow mistrust between Manila and Hanoi and push narratives portraying Vietnam as the problem. The Influence Campaign Against this backdrop, PhilStar investigated a botnet of 71 fake accounts on X. Between November 2024 and June 2025, these accounts generated nearly 1,000 posts, often within seconds of each other and using near-identical wording. The central narrative was consistent: Vietnam is the main aggressor in the South China Sea, responsible for environmental destruction and instability. China's significantly larger activities in the South China Sea are absent from the posts. Yet although China is never directly mentioned, many of the messages rely on narratives drawn from Chinese state media. Example of posts framing Vietnam, courtesy of Philstar and Geraldine Santos. [iii] At the same time, part of the network also circulated pro-Duterte and anti-Marcos narratives, inserting the operation into the domestic Philippine political debate. The technical characteristics include accounts with no followers, AI-generated profile pictures, generic naming conventions, and automated copy-paste posting. Particularly notable is the link to a Russian SMM panel [iv], a service that sells bot and fake accounts in bulk. In the review of the case study, a researcher pointed out [v] that the campaign's methods and resources mirror those commonly linked to Spamouflage networks [vi] (for more background, find our earlier blog post on Spamouflage here). The operation's messaging appears to serve a dual purpose: externally, it reframes Vietnam as the aggressor in the South China Sea; internally, it reinforces pro-Duterte and anti-Marcos narratives, embedding the operation in the Philippine political debate. The campaign reflects strategic interests attributed to China, particularly the aim of undermining cooperation among littoral states. By fostering distrust toward Vietnam and deepening domestic political divisions, it appears to be designed to weaken coordination among claimant countries. The case study notes that the goal is not reach but long-term influence, with narratives gradually shaping regional perception. According to the PhilStar article, a spokesperson for the Armed Forces of the Philippines described the hostile campaign as a "national security concern," warning that it distorts public understanding and shifts attention away from maritime developments. [vii]
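The automated copy-paste posting PhilStar documented, bursts of near-identical messages published seconds apart from different accounts, is one of the more mechanically detectable coordination signals. The sketch below illustrates the basic idea; the accounts, timestamps, and thresholds are invented for illustration, and a real pipeline would use scalable near-duplicate detection (for example, MinHash) over full platform datasets rather than pairwise comparison.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical post records: (account_id, unix_timestamp, text).
posts = [
    ("acct_01", 1718000000, "Vietnam is destroying reefs in the South China Sea!"),
    ("acct_02", 1718000004, "Vietnam is destroying the reefs in the South China Sea!"),
    ("acct_03", 1718000007, "Vietnam destroying reefs in the South China Sea!!"),
    ("acct_04", 1718090000, "Lovely weather in Manila today."),
]

TIME_WINDOW = 30       # seconds; "within seconds of each other"
SIM_THRESHOLD = 0.85   # normalized similarity for "near-identical wording"

def similarity(a: str, b: str) -> float:
    # Cheap sequence-based score; adequate for a small illustrative sample.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of posts from different accounts that are close in time
# and nearly identical in wording.
for (a1, t1, x1), (a2, t2, x2) in combinations(posts, 2):
    score = similarity(x1, x2)
    if a1 != a2 and abs(t1 - t2) <= TIME_WINDOW and score >= SIM_THRESHOLD:
        print(f"possible coordination: {a1} / {a2}, dt={abs(t1 - t2)}s, sim={score:.2f}")
```

Clusters of such flagged pairs, combined with the other markers PhilStar lists (zero followers, AI-generated avatars, generic handles), are what allow investigators to separate a coordinated botnet from organically similar posting.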
Conclusion PhilStar's case study provides a technically detailed account of a coordinated HDI campaign, highlighting its tactics, structure, and execution. It illustrates how hostile influence is used to disrupt regional coordination, advancing strategic interests attributed to China. PhilStar has emerged as a useful source in this context, regularly reporting on HDI operations in Southeast Asia. [Footnotes:] [i] Philstar, Cristina Chi, 2025. Copypasta army: Vietnam smeared by bots backing Duterte on X. [online] Published 25 August 2025. Available at: https://www.philstar.com/headlines/2025/08/25/2466374/copypasta-army-vietnam-smeared-bots-backing-duterte-x [ii] Reuters, Mikhail Flores, 2024. Philippines, Vietnam boost military ties amid South China Sea tensions. [online] Published 30 August 2024. Available at: https://www.reuters.com/world/asia-pacific/philippines-vietnam-sign-defence-agreement-2024-08-30 [iii] Philstar, Cristina Chi, 2025. Copypasta army: Vietnam smeared by bots backing Duterte on X. [online] Published 25 August 2025. Available at: https://www.philstar.com/headlines/2025/08/25/2466374/copypasta-army-vietnam-smeared-bots-backing-duterte-x [iv] An SMM panel is an online platform that provides social media marketing services, such as likes, followers, comments, or views, through an automated dashboard to boost reach, visibility, and engagement. [v] Bangag — Tagalog (Philippines) slang for "high" or intoxicated, typically used to imply drug use. [vi] Spamouflage is a portmanteau of spam and camouflage. The term refers to pro-China influence operations that use swarms of bot and spam accounts to amplify state-aligned narratives and obscure criticism of Beijing. [vii] Philstar, Cristina Chi, 2025. Copypasta army: Vietnam smeared by bots backing Duterte on X. [online] Published 25 August 2025. Available at: https://www.philstar.com/headlines/2025/08/25/2466374/copypasta-army-vietnam-smeared-bots-backing-duterte-x

  • Singapore Drops the Name, Shifting the Game

Introduction Public attribution in the context of influence can be likened to an open letter, backed by technical reasoning and signed with a clear political signature. A specific incident is not only investigated but officially linked to a particular actor. Such attribution is more than a forensic statement: it is a form of strategic messaging. Public attribution always addresses a broader audience: allies, undecided third states, domestic publics, and potential future adversaries. Attribution to UNC3886 [i] Until now, Singapore has avoided taking such a step regarding China. The previous tactic was to publish technical details and share indicators, while leaving the actual originator in the dark. This time, however, that cover was dropped [ii]: intrusions into critical infrastructure were attributed to UNC3886, a group described by Mandiant [iii] as a Chinese APT, meaning a state or state-aligned hacking group. While the statement did not explicitly mention "China," the message was clear enough to be registered. The response followed promptly. The Chinese embassy in Singapore rejected the accusations and "courteously" offered to cooperate in the investigation. Figure 1 – Courtesy of Global Times [iv] A short time later, the Global Times published a report by the Cybersecurity Association of China, claiming that the United States had exploited zero-day vulnerabilities in Microsoft Exchange to attack a major Chinese defence company. Singapore appeared in the report only in passing, as the location of "proxy IPs." [v] Figure 2 – Courtesy of Global Times [vi] Singapore also faced a degree of backlash, albeit not particularly strong: on August 5, 2025, the Chinese embassy responded in the Straits Times "Opinion Forum" [vii] to the article published on July 25, 2025 [viii], rejecting the allegations and criticizing the newspaper itself. Figure 3 - Courtesy of Straits Times [ix] A Facebook post on the topic followed, drawing comments and shares that made it clear the matter hit home. Figure 4 - Courtesy of Facebook [x] China's Attribution Response Pattern Instead of placing Singapore at the center of a counter-campaign, Beijing shifted the focus to its strategic rival, the United States. It seems that Chinese reactions to attribution now follow a well-rehearsed pattern: Country X names Chinese actors; within a short time, a state-directed entity such as the CVERC (the China National Computer Virus Emergency Response Center) [xi] issues a report accusing the United States of having done the same. This maneuver flips the script, redirects focus, and channels the discussion into another narrative. Noteworthy in this context is the rapid response to the SharePoint incident; the counter-report was released in about a week. In previous cases, such as Volt Typhoon (also addressed by the CVERC), reports from Chinese front organizations have taken considerably longer to surface. It will be interesting to see how this story develops and whether we soon see a dedicated CVERC report on Singapore. Conclusion The public attribution of the alleged Chinese group UNC3886 by Singapore marks a notable departure from the country's previously discreet communication style in matters of cyber threats. Instead of issuing technical reports or relying on diplomatic channels, Singapore deliberately opted for a visible public stance. This can be understood as a strategic signal, both to potential attackers and to international partners.
The effectiveness of such attributions, however, remains contested. While publicly naming an actor increases the costs for attackers, forcing them to adapt tactics or infrastructure, authoritarian states such as China often respond not by retreating, but by issuing counteraccusations. This was evident in the immediate reaction of the Chinese Embassy in Singapore. Such behaviour is particularly relevant in the context of hostile digital influence, as counteraccusations are not merely a defensive move but a deliberate attempt to reshape the narrative, cast doubt on the original claims, and undermine trust in the accuser. In these situations, attribution becomes part of a broader information conflict, where the struggle is not only over establishing technical culpability but also over influencing public perception, framing international discourse, and eroding the credibility of opposing voices. For Singapore, this new visibility is not without risks. Attribution creates expectations, narrows the room for maneuver, and carries the potential to exacerbate tensions with China, particularly if the naming is interpreted not in isolation but as part of a broader political shift. [Footnotes] [i] UNC refers to "uncategorised" or "unclassified" [ii] The Straits Times, Jaipragas, B., 2025. No idle punt: Why Singapore called out cyber saboteur UNC3886 by name.   [online]  Published 25 July 2025. Available at: https://www.straitstimes.com/opinion/no-idle-punt-why-singapore-called-out-cyber-saboteur-unc3886-by-name [iii] Google Cloud Blog (Mandiant), 2024. Cloaked and Covert: Uncovering UNC3886 Espionage Operations.   [online]  Published 18 June 2024. Available at: https://cloud.google.com/blog/topics/threat-intelligence/uncovering-unc3886-espionage-operations?hl=en [iv] Global Times, 2025. US intelligence uses Microsoft Exchange zero‑day vulnerabilities to attack Chinese military‑industrial enterprise’s email servers: report.   [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml [v] Global Times, 2025. US intelligence uses Microsoft Exchange zero‑day vulnerabilities to attack Chinese military‑industrial enterprise’s email servers: report.   [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml [vi] Global Times, 2025. US intelligence uses Microsoft Exchange zero‑day vulnerabilities to attack Chinese military‑industrial enterprise’s email servers: report.   [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml [vii] The Straits Times, “Forum: China firmly against and cracks down on all forms of cyber attacks”, 2025. [online]  Published 5 August 2025. Available at: https://www.straitstimes.com/opinion/forum/forum-china-firmly-against-and-cracks-down-on-all-forms-of-cyber-attacks [viii] The Straits Times, Jaipragas, B., 2025. No idle punt: Why Singapore called out cyber saboteur UNC3886 by name.   [online]  Published 25 July 2025. Available at: https://www.straitstimes.com/opinion/no-idle-punt-why-singapore-called-out-cyber-saboteur-unc3886-by-name [ix] The Straits Times, “Forum: China firmly against and cracks down on all forms of cyber attacks”, 2025. [online]  Published 5 August 2025. Available at: https://www.straitstimes.com/opinion/forum/forum-china-firmly-against-and-cracks-down-on-all-forms-of-cyber-attacks [x] Facebook, China Embassy in Singapore, 2025 .   Facebook post.   
[online]  Available at: https://www.facebook.com/Chinaemb.SG/posts/1043167338010724/ [xi] The China National Computer Virus Emergency Response Center (CVERC) is a state-run institution responsible for analyzing cyber threats and coordinating national responses to viruses and software vulnerabilities. It also plays a role in China's geopolitical cyber strategy.  DISCLAIMER Copyright and License of Product   This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained. Disclaimer of Warranties The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks. Accuracy of Information   The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice. Limitation of Liability To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law. Indemnification The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms. Third-Party Rights The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms. Governing Law and Jurisdiction   This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • CRC Weekly: Cyber-based hostile influence campaigns 25th – 31st August

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the 25th to the 31st of August 2025, we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. This week's events highlight the continued diversification of actors and tactics in the information domain. Persistent Russian campaigns targeting European states occurred alongside reported US-linked activities, including a domestic covert influencer network, as well as an alleged hostile operation in Greenland. Concurrently, state responses were relatively assertive, evidenced by Indonesia's heightened regulatory pressure on tech platforms and Denmark's formal diplomatic summons of the US Ambassador. [Contents] [Report Highlights] [Weekly Review] False Flag Ban Narrative Exploits UK Patriotism Debate to Fuel Outrage Lavrov Interview Showcases Russia's Standardized Disinformation and FIMI Playbook Russia's SVR and Georgian Dream Execute Coordinated Anti-Western Disinformation Operations Polish Elections Withstand Foreign Influence, But Systemic Vulnerabilities Persist Denmark Summons US Envoy Amid Greenland Influence Campaign Allegations Pro-Russia Network Targets Romanian Election with Anti-Sandu Disinformation Campaign Democratic-Aligned Dark Money Group Covertly Pays Influencers for Coordinated Messaging Indonesia Threatens Platforms with Fines and Expulsion Over Harmful Content [Takeaways] [Download Report] [Report Highlights] A pro-Russia network published 8,514 posts, which amassed at least 16 million views, attacking Romania's electoral integrity and promoting anti-EU sentiment. - DFRLab US-linked operatives allegedly compiled lists of sympathetic locals and sought negative narratives to undermine Danish authority in Greenland. – ABC News An undisclosed political group is reportedly offering influencers up to $8,000 monthly in exchange for promoting partisan Democrat messaging in the US. - WIRED A forged screenshot of a credible news publication was circulated by conservative users to lend journalistic authority to the claim that the UK is banning the English flag. - NewsGuard Indonesia is shifting content moderation responsibility to platforms by demanding proactive removal of disinformation to prevent social unrest. - Reuters Poland's absence of a permanent Digital Services Coordinator creates a critical regulatory vacuum, undermining effective enforcement of the EU Digital Services Act. - FIMI-ISAC [Weekly Review] False Flag Ban Narrative Exploits UK Patriotism Debate to Fuel Outrage According to NewsGuard, conservative and nationalist social media users are circulating a fabricated screenshot of a Daily Mail article to falsely claim the U.K. government is considering a ban on the English flag. This disinformation leverages a real, recent controversy involving the Birmingham City Council's removal of flags from lampposts for safety reasons, a move which had already ignited a debate over patriotism and national identity. The campaign's core tactic is the use of a convincing but non-existent article, purportedly written by a real political editor and featuring Prime Minister Keir Starmer, to lend the claim false credibility.
This method has proven effective, with one post receiving over 850,000 views and generating significant hostile engagement. The Daily Mail has officially confirmed the article is fake, and no credible news outlets or government sources have suggested any such policy is under consideration. The incident highlights the use of forged media to exploit existing societal tensions for political purposes. Source: NewsGuard, Mascha Wolf, 2025, NewsGuard Reality Check, Available at: https://www.newsguardrealitycheck.com/p/uk-banning-english-flag-fake-news Contents Lavrov Interview Showcases Russia’s Standardized Disinformation and FIMI Playbook An analysis from EUvsDisinfo deconstructs a recent interview by Russian Foreign Minister Sergey Lavrov, identifying it as a masterclass in the Kremlin's FIMI playbook. The piece systematically dismantles several core Russian narratives designed to manipulate Western audiences. These include the false pretense of being open to dialogue while pressing for surrender, the baseless claim of President Zelenskyy’s illegitimacy, and the lie that military strikes avoid civilian infrastructure. The analysis also refutes the foundational justifications for the invasion, such as the pretext of protecting Russian speakers from a fabricated genocide and the distortion of the Budapest Memorandum. These narratives collectively serve a strategy of denial, deflection, and distortion, aimed at rewriting history and justifying a war of aggression. The Kremlin's manipulation of legal language and international agreements is presented as a cynical tool to legitimize its military actions and pursue imperial ambitions while casting Ukraine as unreasonable. Source: EUvsDisinfo, EUvsDisinfo, 2025, Russia’s narrative manipulation, [online] Available at: https://euvsdisinfo.eu/russias-narrative-manipulation/ Contents Russia's SVR and Georgian Dream Execute Coordinated Anti-Western Disinformation Operations A report from EUvsDisinfo details how Russia's foreign intelligence service (SVR) and Georgia's ruling party, Georgian Dream, are conducting a coordinated disinformation campaign to undermine the country's Western partners. The campaign aims to erode public trust in the US, EU, and UK by portraying them as destabilizing forces orchestrating a "color revolution." A distinct operational pattern involves the SVR releasing specific accusations, which are then swiftly amplified by pro-government media outlets like TV Imedi and POSTV and echoed by high-level Georgian officials. This synchronized messaging has systematically targeted different Western actors over time, beginning with the US before shifting focus to the EU and later the UK. These actions, often supported by fabricated or unverified video evidence, represent a deliberate strategy to discredit domestic civil society, derail Georgia's Euro-Atlantic integration, and maintain the nation's position within Moscow's sphere of influence.    
Source: EUvsDisinfo, EUvsDisinfo, 2025, Russian scripts, Georgian voices: How disinformation targets the country's Western allies: the US, EU, and UK in Georgia, [online] Available at: https://euvsdisinfo.eu/russian-scripts-georgian-voices-how-disinformation-targets-the-countrys-western-allies-the-us-eu-and-uk-in-georgia/ Contents Polish Elections Withstand Foreign Influence, But Systemic Vulnerabilities Persist A FIMI-ISAC research paper on the 2025 Polish presidential election concludes that while foreign information manipulation from Russia and Belarus posed a persistent threat, its overall impact was constrained by civil society resilience and the limited sophistication of some campaigns. Known operations like Doppelganger, Operation Overload, and the Pravda Network disseminated anti-EU, anti-Ukrainian, and anti-establishment narratives, often amplifying far-right candidates by portraying them as defenders of national sovereignty. The threat landscape was notable for its consistency with previous elections, although domestic political actors were observed adopting similar manipulative tactics, such as fabricating personas and spreading false claims. Significant systemic weaknesses persist, including vulnerabilities on platforms like X, Meta, and TikTok that are exploited for coordinated inauthentic activity. A critical vulnerability identified is Poland's lack of a permanent Digital Services Coordinator, creating a regulatory vacuum. The report recommends strengthening platform accountability under the DSA and establishing permanent cross-sector coordination to safeguard Poland's democratic processes. Source: FIMI-ISAC, 2025, Foreign Information Manipulation and Interference (FIMI) during the 2025 Polish presidential elections, [online] Available at: https://fimi-isac.org/wp-content/uploads/2025/08/FDEI-POLISH-ELECTION-COUNTRY-REPORT-2025-2.pdf Contents Denmark Summons US Envoy Amid Greenland Influence Campaign Allegations A report by ABC News indicates Denmark has summoned a senior US diplomat following allegations of a covert American influence campaign in Greenland. The operation, reportedly conducted by at least three individuals with connections to US President Donald Trump, is believed to be aimed at weakening the relationship between Greenland and Denmark from within. Alleged tactics include compiling lists of US-friendly Greenlanders, identifying individuals opposed to Trump, and soliciting locals for information that could portray Denmark negatively in American media. These activities align with stated US interests in the strategically significant, resource-rich territory. In response, Denmark's Foreign Minister deemed any interference unacceptable. The Danish Security and Intelligence Service further noted that Greenland is a target for influence campaigns designed to exploit or fabricate divisions, confirming it has increased its operational presence in the region. The incident underscores the growing geopolitical contestation in the Arctic, where influence operations are an emerging vector of statecraft.
Source: ABC News, 2025, Denmark summons US envoy over suspected influence operations in Greenland, [online] Available at: https://www.abc.net.au/news/2025-08-28/denmark-summons-us-envoy-people-carrying-influence-in-greenland/105705686 Contents Pro-Russia Network Targets Romanian Election with Anti-Sandu Disinformation Campaign Analysis from the Digital Forensic Research Lab (DFRLab) illustrates how a coordinated pro-Russia network of at least 215 accounts on Facebook, TikTok, and Instagram has been conducting a hostile influence campaign since December 2024. The operation sought to undermine Romania's presidential election by accusing Moldovan President Maia Sandu of electoral interference. The network initially supported one far-right candidate before pivoting to another, George Simion, after the first was barred from running. Operators deployed a range of tactics, including the use of generative AI for content and profile pictures, hijacked accounts, and coordinated hashtags in Russian and Romanian. Key narratives were anti-Sandu, anti-EU, and pro-Russian, with specific themes accusing Moldova of dragging Romania into conflict. With over 8,500 posts generating at least 16 million views, the campaign demonstrates a systemic effort to exploit platform vulnerabilities. The findings also reveal deficiencies in platform transparency, as many accounts operated below follower thresholds required for inclusion in public research datasets, potentially obscuring the campaign's full scale.   Source: Digital Forensic Research Lab (DFRLab), Valentin Châtelet, 2025, Cross-platform campaign accuses Moldova’s Sandu of meddling in Romanian elections, [online] Available at: https://dfrlab.org/2025/08/26/cross-platform-campaign-accuses-moldovas-sandu-of-meddling-in-romanian-elections/ Contents Democratic-Aligned Dark Money Group Covertly Pays Influencers for Coordinated Messaging According to an investigation by WIRED , a dark money organization is secretly funding prominent Democratic-aligned influencers to promote party narratives online. This initiative involves payments of up to $8,000 per month, contingent upon the influencers concealing the funding source and adhering to specific content restrictions. The operation signifies a notable evolution in domestic influence tactics, leveraging the parasocial trust and perceived authenticity of social media creators to conduct coordinated messaging campaigns. By requiring secrecy and imposing content controls, the effort intentionally blurs the line between genuine grassroots support and undisclosed paid promotion. This model effectively creates a network of astroturfed political messaging that appears organic to unwitting audiences. The use of such covert funding mechanisms within the domestic political landscape presents a significant challenge for platform transparency and the integrity of online discourse, mirroring strategies often associated with state-linked information operations.   Source: WIRED, Taylor Lorenz, 2025, A Dark Money Group Is Secretly Funding High-Profile Democratic Influencers, [online] Available at: https://www.wired.com/story/dark-money-group-secret-funding-democrat-influencers/ Contents Indonesia Threatens Platforms with Fines and Expulsion Over Harmful Content Reuters has published a story covering how the Indonesian government has summoned representatives from Meta, TikTok, and other platforms and demanded they proactively moderate harmful content. 
This move signifies a strategic shift, placing the onus on platforms to remove disinformation without waiting for government requests. The directive is a direct response to online campaigns that have successfully fueled public anger and real-world protests. Specific examples of this hostile influence include a deep fake video of the finance minister and mislabeled footage of past riots used to incite unrest. Notably, TikTok videos were reportedly used to mobilize youth for demonstrations, resulting in clashes and arrests. Jakarta is leveraging significant penalties for non-compliance, which range from fines and temporary suspension to the complete revocation of a platform's registration. The government's objective is to mitigate what it calls the "chaos" caused by inaccurate information and protect national stability. Meetings with X and YouTube are also planned as part of this broader regulatory push.   Source: Reuters, Reuters, 2025, Indonesia urges TikTok, Meta to act against harmful online content, [online] Available at: https://www.reuters.com/business/media-telecom/indonesia-urges-tiktok-meta-act-against-harmful-online-content-2025-08-27/ Contents [Takeaways] Indonesia’s move to impose direct liability on platforms, juxtaposed with Poland's persistent regulatory gaps, foreshadows an increasingly fragmented landscape for digital governance. This divergence creates a dual challenge: it imposes complex, country-specific compliance burdens on platforms while offering influence operators strategic havens, allowing them to exploit the jurisdictions of least resistance.   [Download Report]

  • CRC Spotlight: Iranian Digital Hostile Influence Campaign Targeting the UK

Recent studies and reports have drawn attention to an Iranian digital hostile influence campaign targeting the UK, reflecting ongoing attempts to shape public debate and weaken cohesion. The campaign often leverages separatist movements, acting as an opportunistic "fellow traveler" that embeds itself within these narratives to amplify division. Building on that body of work, our analysis provides additional insights and surfaces nuances that might otherwise be overlooked. In this Spotlight article, we outline how our findings contribute to the existing picture and flag areas that warrant further attention. [Download Full Report here]

  • Cross-border influence targeting Romania’s vote and Moldova’s leadership

Cross-border influence targeting Romania's vote and Moldova's leadership In our March 17 report, "The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts," we comprehensively examined the developments that ultimately led to the annulment of Romania's presidential elections (link to our report here). This week, in our Weekly Blog, we want to highlight a recent report by DFRLab [i], which analyzes a coordinated network of cross-platform accounts associated with these events. DFRLab identified at least 215 accounts in the network: 116 on Facebook, 79 on TikTok, and 17 on Instagram. Between December 2024 and June 6, 2025, these accounts published a total of 8,514 posts, amassing at least 16 million views and 681,000 likes. Content focused on spreading anti-EU, pro-Russian, and anti-PAS narratives. Initially, the network supported presidential candidate Călin Georgescu, but after the annulment of the 2024 election, it shifted toward backing far-right candidate George Simion. It also targeted Moldovan President Maia Sandu, repeatedly accusing her of election interference. In parallel, the network amplified claims made by Telegram CEO Pavel Durov that France had attempted to suppress "conservative voices" in Romania. The operators disseminated content in the form of slideshows, photos, videos, and text posts, coordinated across platforms using hashtags in Romanian and Russian. Particularly striking was the use of generative AI, including fake personas with Romanian-sounding names, AI-generated profile pictures and avatars, and AI-based video editing, which suggested an automated approach to content production and distribution. At least twelve Facebook accounts also presented themselves as "professional accounts" or "digital creators," possibly with the intention of monetization. Posting activity surged in February, April, and May 2025, coinciding with key phases of Romania's presidential campaign. A notable finding by DFRLab was that many accounts remained below Meta's transparency threshold: only 55 accounts appeared in the official Content Library, while the rest had fewer than 1,000 followers. DFRLab concluded that this may have been a deliberate strategy to avoid detection, suggesting that the network's actual reach could be significantly larger than current data indicates. [ii] Passports and Moldovan Interference In May 2025, the network significantly escalated its activity, with over 1,450 videos promoting the so-called "Moldovan Scenario." These claimed that tens of thousands of Moldovans holding Romanian passports, allegedly directed by President Maia Sandu and PAS, were mobilized to interfere in Romania's election process. Figure 1 - Courtesy of DFRLab [iii] Anti-PAS and Anti-Sandu Narratives Another prominent narrative portrayed President Maia Sandu and PAS as corrupt, illegitimate, and warmongering. Posts claimed that Sandu was dragging Moldova into a foreign war, endangering national security, and acting in the interests of external powers rather than those of the country. At the same time, the network stoked tensions with Romania, labelled Sandu an "illegitimate president," and called for her resignation. Figure 2 - Courtesy of DFRLab [iv]
Victory Day vs. Europe Narrative Figure 3 - Courtesy of DFRLab [v] Narrative of Unlawful Arrests Between March and May 2025, the network intensified its activity around Romania's presidential elections by reinforcing pro-Georgescu messaging and supporting detained Gagauz Governor Evghenia Guțul. Following Georgescu's arrest on February 25, 2025, posting activity spiked: in early March, the network amplified solidarity protests in Bucharest and published dozens of supportive messages. A further wave followed the Constitutional Court's decision barring him from running in May. In parallel, the network pushed anti-EU and anti-Romania narratives, casting Georgescu as a victim of political repression. It also promoted content in support of Guțul following her detention on March 27, 2025, including Soviet-era symbolism such as the song "Katyusha" as a sign of solidarity. Pro-Russian politicians claimed her arrest lacked a legal basis, and several videos reached high viewership, with some TikTok clips attracting up to 300,000 views. Stolen Accounts and AI Personas DFRLab reveals that the network relied on both stolen or repurposed accounts and AI-generated fake personas. Several profiles that had originally posted authentic content, such as family photos, were later repurposed to spread inauthentic material. Fake identities appeared simultaneously across Facebook, Instagram, and TikTok, with coordinated posting patterns. Some of these accounts infiltrated local Facebook news groups to expand their reach. Figure 4 - Displayed in DFRLab Report [vi] To increase credibility, the operators heavily utilized generative AI for profile pictures, avatars, and video elements, as well as stock footage that, in some cases, dated back years but was repurposed as current protest footage. Tools such as DreamFace were used to generate synthetic faces that appeared as supposed real people in videos. Figure 5 - Use of AI Tools Displayed in DFRLab Report [vii] DFRLab concludes that the network deliberately combined hijacked accounts, AI-generated material, and recycled stock footage to create the illusion of authenticity and lend credibility to its influence operation. Inauthentic News Channel The network also used ono.news, a news and marketing channel, to disseminate its content. Figure 6 - Displayed in DFRLab Report [viii] While the accounts remained active on Telegram and TikTok, the channel appears to have been removed from Facebook. Initially, ono.news appeared on Facebook under the name Gabriel Matei, a profile that bore signs of being a fake persona before shifting into the identity of a supposed news outlet. DFRLab found indications of extensive AI technology use: the TikTok and Telegram profiles utilized an AI-generated profile picture, and videos featured the watermark of D-ID, a company specializing in visual AI avatars. DFRLab concludes that ono.news was not a genuine media channel but rather a fabricated brand identity, leveraging fake identities and generative AI to appear credible and amplify the network's narratives.
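Before turning to the conclusion, it is worth putting the report's headline figures side by side. The short back-of-the-envelope calculation below uses only numbers quoted in this post (215 accounts, 55 of them visible in Meta's Content Library, 8,514 posts, at least 16 million views) to show both the average reach per post and how much of the network sat outside Meta's transparency tooling.

```python
# Figures quoted from the DFRLab report discussed above (views are a lower bound).
total_accounts = 215
visible_accounts = 55        # accounts above the 1,000-follower transparency threshold
total_posts = 8_514
total_views = 16_000_000

visible_share = visible_accounts / total_accounts
avg_views_per_post = total_views / total_posts

print(f"Accounts visible via Meta's Content Library: {visible_share:.0%}")        # ~26%
print(f"Accounts below the transparency threshold: {1 - visible_share:.0%}")      # ~74%
print(f"Average views per post (lower bound): {avg_views_per_post:,.0f}")         # ~1,879
```

If the roughly three quarters of accounts sitting below the threshold behaved like the visible ones, most of the network's activity was invisible to standard transparency tooling, which is precisely the detection-avoidance point DFRLab raises.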
Conclusion DFRLab's analysis reveals that the network employed coordinated cross-platform activity to disseminate anti-PAS, anti-Sandu, and anti-EU narratives, gradually expanding its focus from Moldovan politics to Romania's presidential election. The accounts generated at least 16 million views and 681,000 likes, though the actual impact remains difficult to quantify. Tactics included the systematic use of hijacked accounts, AI-generated personas, and recycled stock material to create an appearance of authenticity. The investigation further demonstrates how the network operated largely below Meta's transparency threshold, thereby avoiding detection, and exposed significant discrepancies between publicly accessible content and the data available through Meta's Content Library and API. These gaps highlight persistent weaknesses in platform moderation and disclosure processes that continue to enable covert influence campaigns. We recommend reading the full DFRLab case study, which provides additional valuable details and context. [Footnotes] [i] DFRLab, Valentin Châtelet, 2025. Cross-platform campaign accuses Moldova's Sandu of meddling in Romanian elections. [online] Published 26 August 2025. Available at: https://dfrlab.org/2025/08/26/cross-platform-campaign-accuses-moldovas-sandu-of-meddling-in-romanian-elections/ [ii] Ibid. [iii] Ibid. [iv] Ibid. [v] Ibid. [vi] Ibid. [vii] Ibid. [viii] Ibid.
