
  • CRC Weekly: Cyber-based hostile influence campaigns 15th-21st September

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.

[Report Highlights]

Russia's Ministry of Defense is "AI-washing" propaganda with a new program featuring a virtual host. - NewsGuard

A large-scale Russian hybrid campaign, using bots and trolls, has been confirmed as a key factor in the annulment of Romania's 2024 presidential elections. - Romania Insider

A Russian covert influence network is automating pro-Kremlin propaganda using uncensored LLMs to populate hundreds of new websites. - Recorded Future & Cybernews

China is expanding its global influence by controlling digital platforms and shaping societal narratives through grassroots cultural organizations. - Doublethink Lab

A U.S. congressional investigation is probing a "dark money network" allegedly funneling CCP funds to fuel civil unrest in America. - Global Influence Operations Report

The misinformation output from leading AI chatbots has doubled as malign actors exploit real-time web search features to launder falsehoods. - Cybernews

Leaked documents reveal a Beijing-based firm has developed an "army of AI personas" for conducting sophisticated online influence operations. - The Record

[Weekly Review]

Kremlin Targets Western Cohesion: Disinformation Undermines Ukraine 'Coalition of the Willing'
Russia's AI-Powered Disinformation
'Severe Interference': Annulment of Romanian Presidential Elections Confirmed
Deepfakes, Bribery, and Propaganda Target Moldova's Sovereignty
CopyCop's Global Ambitions: Russian Influence Network Expands with 300 New Websites
Digital Platforms and Narratives: PRC Expands Influence via Social Media Ownership
Dark Money Network: Unmasking CCP's Covert Influence in US Society
AI Chatbots' Misinformation Rate Has Doubled
China's GoLaxy Deploys AI Persona Armies for Advanced Global Influence Operations
Fabricated Israel-Kirk Assassination Claim Gains Millions of Views via Amplification Techniques
Building Resilience: A Blueprint for Countering Information Warfare in Eastern Europe

Kremlin Targets Western Cohesion: Disinformation Undermines Ukraine 'Coalition of the Willing'

A recent analysis from NewsGuard Reality Check reveals a multi-pronged pro-Kremlin disinformation campaign aimed at discrediting and dividing Western nations over proposed peacekeeping deployments in Ukraine. This campaign, targeting the 'Coalition of the Willing,' uses advanced tactics to undermine European support for Kyiv, a critical component of the broader information warfare landscape. Propagandists deployed AI-generated videos, impersonating legitimate news outlets like France 24, to falsely claim that French President Emmanuel Macron's army chief had disavowed an order to send troops. Simultaneously, pro-Kremlin channels circulated a fabricated map, riddled with linguistic errors, purporting to show a Western plan to partition Ukraine's territory and resources among France, the U.K., Poland, and Romania. These efforts seek to frame Western peacekeeping as imperialistic aggression, discouraging further U.S. and European involvement.
The campaign leverages anonymous accounts across major social media platforms and builds on a pattern of targeting France as Macron increases support for Ukraine, as previously documented by NewsGuard.

Source: NewsGuard, Reality Check, https://www.newsguardrealitycheck.com/p/kremlin-propaganda-targets-western

Russia's AI-Powered Disinformation

According to NewsGuard, the state-owned Zvezda TV channel now airs a program featuring an AI-generated host named "Natasha." This approach allows the Kremlin to disseminate propaganda, including deepfakes of Western leaders, while maintaining plausible deniability by attributing the content to an algorithm. The strategy represents a significant development in state-sponsored disinformation, using AI as a tool to obscure attribution and create a layer of separation from the propaganda's message, making it more challenging to track.

Source: NewsGuard, Reality Check, https://www.newsguardrealitycheck.com/p/kremlin-propaganda-targets-western

'Severe Interference': Annulment of Romanian Presidential Elections Confirmed

Romania's general prosecutor has confirmed that the country's annulled 2024 presidential elections were heavily influenced by a large-scale Russian hybrid campaign. The report from Romania Insider states that Russia deployed bots, trolls, and AI-generated content to inflame tensions and amplify hate speech during the campaign. Four Russia-linked firms were identified as being behind the operations, which targeted voters through a mix of traditional media, troll farms, and coordinated bot networks. This digital interference was deemed so severe that it contributed to the unprecedented cancellation of the election results, highlighting a direct cyber-enabled assault on a nation's democratic process.

Source: Romania-Insider.com, Radu Dumitrescu, 2025, https://www.romania-insider.com/romania-cancelled-presidential-elections-russia-general-prosecutor-2025

Deepfakes, Bribery, and Propaganda Target Moldova's Sovereignty

Ahead of its September parliamentary elections, Moldova faces significant risks from large-scale Russian interference. A report by SBS News highlights an extensive disinformation network comprising tens of thousands of accounts that spread falsehoods via AI-generated deepfakes, bots, and trolls. Moscow has reportedly spent up to €200 million on these digital influence efforts. Pro-Russian parties, particularly in regions dominated by Russian media, benefit from Kremlin narratives that are amplified by these campaigns. Critics warn that such interference could undermine Moldova's EU integration and jeopardize its sovereignty by exploiting the digital information space for political gain.

Source: SBS News, https://www.sbs.com.au/news/article/cash-for-votes-deepfakes-and-propaganda-russias-latest-bid-for-influence/xmp4569qg

CopyCop's Global Ambitions: Russian Influence Network Expands with 300 New Websites

Recorded Future's Insikt Group provides an analysis of a significant expansion of CopyCop (Storm-1516), a Russian covert influence network. Since March 2025, CopyCop has created over 300 new fictional media websites, targeting the United States, France, Canada, Armenia, and Moldova, while diversifying its linguistic reach to include Turkish, Ukrainian, and Swahili content.
This vast infrastructure, very likely operated by John Mark Dougan with support from the Moscow-based Center for Geopolitical Expertise (CGE) and the GRU, disseminates pro-Russian, anti-Ukrainian, and anti-Western narratives. The network's core objectives remain consistent: eroding support for Ukraine and exacerbating political fragmentation in Western nations, now extending to Russia's broader sphere of influence. A notable tactical evolution includes the use of self-hosted, uncensored large language models, likely based on Meta's Llama 3, to generate AI content at scale. CopyCop employs deepfakes, fake interviews, and fabricated dossiers, with content frequently amplified by social media influencers and other Russian networks like Portal Kombat. The network's impact remains significant, regularly achieving high organic engagement and breaking into mainstream political discourse, posing a persistent threat to democratic institutions and information integrity globally.

Source: Recorded Future Research, Insikt Group, Sep 2025, https://www.recordedfuture.com/research/copycop-deepens-its-playbook-with-new-websites-and-targets

Digital Platforms and Narratives: PRC Expands Influence via Social Media Ownership

The 2024 China Index, assessing 101 countries across nine domains, has revealed a general intensification of Beijing's reach since 2022. Doublethink Lab's Global Research Team examines the implications of this significant expansion of China's global influence. While East and Southeast Asia remain highly impacted, the sharpest increases are observed in Central America, the Caribbean, and Sub-Saharan Africa, reflecting evolving strategies, such as the Belt and Road Initiative, and broadened technological investments. Technology and foreign policy persist as the most influential domains globally, but the Society domain recorded the largest increase, signaling deeper efforts to shape societal narratives. Notably, the report highlights a critical trend: PRC-linked entities now control or hold significant ownership in top social media or messaging applications in 65% of assessed countries, up from 45% in 2022. This expanded digital platform involvement, alongside intensified grassroots engagement through cultural events, underscores Beijing's growing capacity to influence information environments and align global discourse with its policy objectives. Conversely, coercive tactics show little correlation with actual policy alignment, suggesting a preference for long-term engagement.

Source: Medium, Doublethink Lab Global Research Team, Sep 2025, https://medium.com/doublethinklab/china-index-2024-mapping-prc-influence-across-101-countries-full-report-6adc37562677

Dark Money Network: Unmasking CCP's Covert Influence in US Society

A congressional investigation into a U.S. billionaire is escalating over allegations that he fueled a Chinese Communist Party influence campaign. According to the Global Influence Operations Report, the strategy involves a "dark money network" that funnels millions through non-profits to pro-CCP media outlets. These funds are intended to foment civil unrest and manipulate political discourse in the United States. The probe is scrutinizing potential violations of the Foreign Agents Registration Act (FARA). The case highlights a sophisticated form of covert foreign intervention that leverages financial and media platforms to secretly create social discord.
Source: The Global Influence Operations Report, The House Committee on Oversight and Government Reform, https://www.global-influence-ops.com/ccp-funding-congress-probes-neville-roy-singham/

AI Chatbots' Misinformation Rate Has Doubled

Cybernews reports on the increasing rate of misinformation from AI chatbots, citing a NewsGuard study that found a doubling of falsehoods in the past year. According to the article, this is due to a "structural tradeoff" where AI models' use of real-time web searches and a 0% non-response rate has made them more vulnerable to a "polluted online information ecosystem." Malign actors, such as Russian disinformation operations, exploit this by seeding the internet with falsehoods that the chatbots then "launder" as legitimate information. The article also references a report by the American Security Project, which found that some AI models have been trained on messaging from the Chinese Communist Party. The broader issue, as the article concludes, is that the public places a high degree of trust in AI despite its known propensity to "hallucinate," a problem even acknowledged by OpenAI CEO Sam Altman.

Source: Cybernews, Sep 2025, https://cybernews.com/ai-news/ai-chatbots-misinformation-rate-falsehoods/; NewsGuard, AI False Information Rate Nearly Doubles in One Year, Sep 2025, [online] Available at: https://www.newsguardtech.com/ai-monitor/august-2025-ai-false-claim-monitor/

China's GoLaxy Deploys AI Persona Armies for Advanced Global Influence Operations

According to The Record, the leaked "GoLaxy papers," discovered by Brett Goldstein and analyzed with Brett Benson, reveal that China's GoLaxy has developed an "AI persona army" for sophisticated information warfare. The system uses generative AI, such as DeepSeek, to create highly realistic digital identities. These personas move beyond traditional "blunt-force" troll farm methods, instead engaging in subtle, human-like interactions to shape narratives and conduct persuasion operations. The system scrapes millions of social media data points to build psychological profiles, including those of thousands of Americans, for highly targeted propaganda. This development represents a major shift in global information warfare, posing a more effective threat to national security in regions such as Hong Kong, Taiwan, and potentially the United States. GoLaxy's deletion of sections of its website after being contacted lends credibility to these claims, highlighting the challenge of detecting and defending against AI-generated manipulation. Researchers warn that AI-driven propaganda is a present danger, requiring new defenses to distinguish human from machine interactions. This new frontier in national security demands urgent public and governmental understanding.

Source: The Record, Dina Temple-Raston & Erika Gajda, September 2025, https://therecord.media/golaxy-china-artificial-intelligence-papers

Fabricated Israel-Kirk Assassination Claim Gains Millions of Views via Amplification Techniques

NewsGuard's "Reality Check" reports on a disinformation strategy in which anti-Israel social media influencers falsely claimed a letter from Israel's chief rabbi proved the country had foreknowledge of Charlie Kirk's assassination. Prominent figures like Jackson Hinkle, Candace Owens, and Nick Fuentes amplified the claim, which was based on a typo in the letter's date.
The chief rabbi's office confirmed to NewsGuard that the letter was actually sent two days after Kirk's death, not eight days before. This case exemplifies how bad actors exploit errors and misinformation to fuel conspiracy theories and advance a broader narrative. In this instance, the disinformation sought to implicate Israel in the murder of a pro-Israel activist, a claim that Iran also promoted. The broad reach of these false posts, garnering millions of views, highlights the speed at which unsubstantiated claims can spread on platforms like X and the challenge of correcting such falsehoods once they have gained traction.

Source: NewsGuard, Reality Check, September 2025, https://www.newsguardrealitycheck.com/p/letter-with-typo-pushed-as-evidence

Building Resilience: A Blueprint for Countering Information Warfare in Eastern Europe

The Atlantic Council's DFRLab and the Info Ops Poland Foundation have partnered to counter information manipulation in the South Caucasus and Eastern Europe, where such campaigns are used to undermine democratic institutions. The project focuses on creating a "blueprint" to counter these strategies by strengthening the capabilities and expertise of local stakeholders. Building on a previous project that documented hostile actors' tactics and techniques, the new initiative aims to build a more resilient society capable of mitigating both immediate and future threats from information warfare, moving beyond simply exposing individual disinformation campaigns.

Source: The Atlantic Council's Digital Forensic Research Lab (DFRLab), Info Ops Poland Foundation, https://dfrlab.org/2025/09/15/module-ii-partnering-to-counter-information-manipulation-in-south-caucasus-and-eastern-europe/

[CRC Glossary]

The Cyfluence Research Centre has relaunched the CRC Glossary. This initiative aims to serve as a shared lexicon of both foundational and emerging terms that shape the field. To this end, the Glossary is designed to be a continually updated resource, with new entries added weekly. We see this as a collaborative project and strongly encourage input from the expert community. The goal is to reduce the problem of ambiguous or conflicting terminology that can hinder collaborative work and effective communication with the general public. We invite you to submit additions, changes, or corrections via the form on our website.

[Download]

  • CRC Weekly: Cyber-based hostile influence campaigns 1st - 7th September

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the first week of September 2025, we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.

[Highlights]

The Telegram channel 'War on Fakes' was established one day before the 2022 invasion to preemptively push false narratives and deflect Russian war crimes.

During the recent India-Pakistan conflict, the competing influence operations 'Hidden Charkha' and 'Khyber Defender' deployed over 1,200 inauthentic accounts in support of their respective governments.

Multiple, distinct Russia-linked influence networks are converging their efforts to destabilize Moldova's elections and halt its pro-European trajectory.

The Moscow Patriarchate is using religious conferences in Africa to expand Russian influence in coordination with state intelligence and mercenary operatives.

The rate of AI tools repeating false news claims nearly doubled from 18 percent to 35 percent in one year.

The #trumpdead hashtag campaign on X generated over 35 million views within a four-day period, showcasing the narrative's rapid, high-volume spread.

The Cyfluence Research Center has relaunched the CRC Glossary. This initiative aims to provide a lexicon of both foundational and emerging terms relating to Cyfluence.

[Weekly Review]

Influential Figures Exploit Minneapolis Shooting to Push Competing Narratives on X

According to Wired, the aftermath of the Minneapolis church shooting became a case study in how X's platform architecture accelerates the spread of hostile influence. Immediately following the event, high-profile figures, including politicians and activists, exploited the information vacuum to disseminate competing and unsubstantiated narratives about the shooter's motives. These claims, which ranged from anti-Christian hate to white supremacy and leftist radicalization, quickly went viral. This rapid spread was facilitated by X's weakened moderation and an algorithmic model optimized for engagement over factual accuracy. The platform's incentive structure rewarded sensational content, allowing false claims to outpace verified information. Even X's own content summaries reportedly amplified details that fueled political narratives. While experts ultimately concluded the shooter's motivations were likely nihilistic rather than ideological, the platform had already successfully converted the tragedy into a vehicle for political polarization and viral misinformation, demonstrating a significant vulnerability in the modern information ecosystem.

Source: Wired, David Gilbert, How Disinformation About the Minnesota Shooting Spread Like Wildfire on X, Sep 3, 2025, [online] Available at: https://www.wired.com/story/disinformation-minnesota-shooting-x/

Institutional Mimicry: Russia Deploys Fake Fact-Checking Outfits to Launder Propaganda

EUvsDisinfo has reported that pro-Russian actors are actively corrupting the practice of fact-checking by creating bogus organizations to legitimize and disseminate state-sponsored propaganda.
This tactic represents a continuation of the Kremlin's subversion of journalistic principles, which intensified following the 2022 full-scale invasion of Ukraine with initiatives like the 'War on Fakes' Telegram channel. The most recent and sophisticated effort is the 'Global Fact-Checking Network' (GFCN), a Kremlin-funded entity intentionally named to resemble the legitimate International Fact-Checking Network. Launched in April by Russian Foreign Ministry spokesperson Maria Zakharova, the GFCN is managed by sanctioned individuals previously involved in other influence operations. Its content is notably vacuous, avoiding any scrutiny of Russia while covertly inserting pro-Kremlin messaging and allusions to Western culture wars. Despite its formal structure, the operation currently exhibits negligible impact, with its social media channels attracting almost no organic viewership or followers, suggesting its pretenses have failed to gain traction.

Source: EUvsDisinfo, 2025, Fake fact-checking: when facts are fiction and falsehoods are facts. [online] Available at: https://euvsdisinfo.eu/fake-fact-checking-when-facts-are-fiction-and-falsehoods-are-facts/

Vilification Campaigns and Inauthentic News Deployed Against Moldovan Leadership

A publication from Recorded Future's Insikt Group covers how multiple Russia-linked influence operations are converging to destabilize Moldova's September elections and derail its accession to the European Union. Networks including Operation Overload, Operation Undercut, and the Foundation to Battle Injustice are executing parallel campaigns to vilify President Maia Sandu and the ruling PAS party. These efforts portray EU integration as economically disastrous while promoting alignment with the Kremlin's "Russkiy Mir" doctrine. The operations leverage a range of tactics, from laundering pro-Kremlin content via aggregator sites like Pravda Moldova to deploying covert social media pages linked to oligarch Ilan Shor. For the first time, Operation Undercut has been observed using TikTok to target Moldovan users with anti-government narratives. While these campaigns have not yet achieved substantial success in shaping public opinion, they heighten risks to media integrity and voter trust. The report also notes that a retreat in US counter-disinformation efforts has created a more permissive environment for these increasingly sophisticated Russian campaigns.

Source: Recorded Future, September 2025. Russian Influence Assets Converge on Moldovan Elections. [online] Available at: https://www.recordedfuture.com/research/russian-influence-assets-converge-on-moldovan-elections

Russia Deploys 'Failed State' and Falsified Data Narratives Against Canada

According to a report from DisinfoWatch, Russian Foreign Ministry spokesperson Maria Zakharova has signaled an escalation of information warfare against Canada. Through a Telegram post amplified by state media, Zakharova depicted Canada as a nation in a "deep crisis of values" that could cease to exist within a decade. Her commentary leveraged a combination of established Kremlin tactics, including the promotion of fringe separatism, culture-war tropes, and anti-LGBTQ narratives. The messaging relied on specific falsehoods, such as inflating support for separatism in Manitoba and misrepresenting Canadian policies on drug decriminalization and medical assistance in dying.
These efforts align with documented Russian influence operation templates aimed at exploiting societal divisions, undermining support for Ukraine, and portraying liberal democracies as decadent and failing. The direct targeting of Canada suggests a renewed Kremlin focus on subverting the country's national unity and weakening its international alliances, signaling a new phase of hostile influence operations that Kremlin-aligned actors are expected to amplify.

Source: DisinfoWatch, 5 September, 2025. Kremlin Spokeswoman Zakharova Takes Aim At Canada. [online] Available at: https://disinfowatch.org/kremlin-spokeswoman-zakharova-aims-to-divide-canada/

Moscow Patriarchate Pivots to Africa and Domestic Extremism Amid Declining Political Relevance

A Jamestown Foundation analysis details how the Moscow Patriarchate (ROC MP) is compensating for its declining influence in the post-Soviet space by increasing its strategic utility to the Kremlin. Despite significant losses, particularly in Ukraine, the church is successfully executing a pivot towards new domestic and foreign influence operations. Domestically, it promotes traditional values and has helped elevate Orthodoxy to a core component of Russian civic identity for 61 percent of the population. Abroad, the ROC MP is expanding its geopolitical reach into Africa, holding conferences for local clerics to cement Russian influence in coordination with state intelligence. It also projects soft power by asserting canonical authority over groups like the Orthodox Church in America. A directive for the church to engage with the extremist 'Russian Community' shows a high-risk strategy to co-opt radical nationalism, which may secure Patriarch Kirill's position but entangles the state more deeply with extremist elements.

Source: Jamestown Foundation, Paul Goble, 2025. Eurasia Daily Monitor. [online] Available at: https://jamestown.org/program/despite-losses-at-home-and-abroad-moscow-patriarchate-helps-kremlin-expand-influence/

Generative AI Falsehood Rate Doubles Amid Push for Real-Time Responsiveness

The rate at which leading generative AI tools repeat false information has nearly doubled in one year, an increase that undermines industry promises of safer systems. An audit by NewsGuard found that the failure rate for news-related prompts increased from 18 percent in 2024 to 35 percent in 2025. This degradation stems from a structural tradeoff where chatbots have integrated real-time web search capabilities. While this change eliminated query non-responses, it simultaneously exposed the models to a polluted online information ecosystem. Malign actors, including Russian disinformation operations, are actively exploiting this vulnerability. They are laundering falsehoods through low-engagement websites, social media, and AI-generated content farms, which the models fail to distinguish from credible outlets. The push to make AI tools more responsive and timely has inadvertently made them more susceptible to spreading propaganda, turning them into more effective conduits for hostile influence operations.

Source: NewsGuard, September 4th 2025, AI False Information Rate Nearly Doubles in One Year.
[online] Available at: https://www.newsguardtech.com/ai-monitor/august-2025-ai-false-claim-monitor/

False Trump Health Rumors Garner Tens of Millions of Social Media Views

Liberal and anti-Trump social media accounts are executing a disinformation campaign alleging a severe decline in President Trump's health, including rumors of his death. This operation, analyzed by NewsGuard, relies on multiple pieces of fabricated or decontextualized evidence to construct its narrative. Key tactics include circulating a misleading map screenshot to suggest road closures at Walter Reed Medical Center and using an AI-enhanced photograph to create false visual evidence of a stroke. Actors also repurposed older media, such as a 2023 photo of an ambulance at the White House and an image of the flag at half-staff for a school shooting, to imply a current medical emergency. The campaign achieved significant reach, with one associated hashtag, #trumpdead, accumulating over 35 million views on X in four days. The events demonstrate how disparate, low-effort falsifications can be networked to create a pervasive and viral political narrative.

Source: NewsGuard, Sofia Rubinson, 2025. NewsGuard Reality Check. [online] Available at: https://www.newsguardrealitycheck.com/p/bogus-evidence-for-trumps-supposed

Hidden Charkha and Khyber Defender: State-Aligned IO in South Asian Conflict

Two large, state-aligned influence networks, Hidden Charkha (pro-India) and Khyber Defender (pro-Pakistan), operated during the 2025 conflict between the two nations. The report by Recorded Future provides a blueprint for how symmetrical influence operations are deployed by adversarial, nuclear-armed states to control escalation and garner support during kinetic military actions. Both networks attempted to frame their respective nations as holding the moral high ground through technological and military superiority, thereby justifying tactical restraint. Key tactics included amplifying forged military documents, exaggerating the impact of cyberattacks, and impersonating media outlets. Despite the scale of these operations, they were assessed as almost certainly unsuccessful in shaping public opinion. Their failure was attributed to an inability to break out of patriotic echo chambers and a recurrent use of generative AI for visual content, which likely undermined their credibility. Still, their activities demonstrate how patriotic sentiment can align non-state actors with government objectives during wartime.

Source: Recorded Future, Insikt Group, 02 SEP 2025, Influence Operations and Conflict Escalation in South Asia. [online] Available at: https://www.recordedfuture.com/research/influence-operations-and-conflict-escalation-in-south-asia

UK Democracy Remains Vulnerable to Misinformation Amid Weak Election Reforms

According to an analysis by Full Fact, the UK government's Elections Bill represents a missed opportunity, as its measures are insufficient to protect democratic processes from misinformation. While the strategy contains some positive steps, such as increasing the fines the Electoral Commission can impose and requiring digital imprints on some campaign materials, it fails to match the scale of the threat. The proposed legislation needs significant upgrades to be effective. Key recommendations include amending the Online Safety Act to cover more categories of illegal election-related content and other material harmful to democracy.
The bill should also incorporate robust media and political literacy initiatives, especially for younger voters, and establish stronger rules to deal with political deepfakes, including clear labeling requirements. Further proposals include creating a comprehensive digital library of political advertisements to enable public scrutiny and establishing an independent body to regulate non-broadcast political advertising. Without these more ambitious provisions, the bill will not achieve its stated objectives of safeguarding democracy and restoring political trust.

Source: Full Fact, 1st Sep, 2025, Protecting our democracy from the harms of misinformation and disinformation. [online] Available at: https://fullfact.org/politics/protecting-our-democracy-from-the-harms-of-misinformation-and-disinformation/

[Takeaways]

This week we saw examples of how threat actors are increasingly forgoing direct persuasion in favor of tactics, such as exploiting AI vulnerabilities and mimicking trusted institutions, that passively degrade the information ecosystem. This approach suggests a strategic calculation: an ungovernable and untrustworthy information space is, in itself, a victory. By fostering an environment of radical doubt, malign actors can paralyze democratic decision-making and erode social cohesion without having to win a direct contest of ideas.

[CRC Glossary]

The Cyfluence Research Center has relaunched the CRC Glossary. This initiative aims to serve as a shared lexicon of both foundational and emerging terms that shape the field. To this end, the Glossary is designed to be a continually updated resource, with new entries added weekly. We see this as a collaborative project and strongly encourage input from the expert community. The goal is to reduce the problem of ambiguous or conflicting terminology that can hinder collaborative work and effective communication with the general public. We invite you to submit additions, changes, or corrections via the form on our website.

  • Vietnam Framed Villain, China Not Found: Influence Ops on Repeat

    In this week's blog, we examine a case study published by the Philippine newspaper PhilStar. [i] The report analyzes a coordinated hostile digital influence (HDI) campaign on X (formerly Twitter) and illustrates how information operations are deployed in the context of the South China Sea dispute.

Geopolitical Background

The Philippines, Vietnam, and China have long been locked in disputes over islands and reefs in the South China Sea. China has reinforced its claims through large-scale land reclamation and a strong military presence. The Philippines has responded by seeking support through alliances, closer ties with Western partners, and increasingly through cooperation with neighboring states. [ii] Vietnam, in particular, has recently drawn closer to Manila, as the two countries have established a strategic partnership, launched joint coast guard exercises, and set up direct lines of communication. These developments risk isolating China. That is why Beijing has a strong incentive to sow mistrust between Manila and Hanoi and push narratives portraying Vietnam as the problem.

The Influence Campaign

Against this backdrop, PhilStar investigated a botnet of 71 fake accounts on X. Between November 2024 and June 2025, these accounts generated nearly 1,000 posts, often within seconds of each other and using near-identical wording. The central narrative was consistent: Vietnam is the main aggressor in the South China Sea, responsible for environmental destruction and instability. China's significantly larger activities in the South China Sea are absent from the posts. Yet although China is never directly mentioned, many of the messages rely on narratives drawn from Chinese state media.

Example of posts framing Vietnam, courtesy of PhilStar and Geraldine Santos. [iii]

At the same time, part of the network also circulated pro-Duterte and anti-Marcos narratives, inserting the operation into the domestic Philippine political debate. The technical characteristics include accounts with no followers, AI-generated profile pictures, generic naming conventions, and automated copy-paste posting. Particularly notable is the link to a Russian SMM panel [iv], a service that sells bot and fake accounts in bulk. In a review of the case study, a researcher pointed out [v] that the campaign's methods and resources mirror those commonly linked to Spamouflage networks [vi] (for more background, find our earlier blog post on Spamouflage here).

The operation's messaging appears to serve a dual purpose: externally, it reframes Vietnam as the aggressor in the South China Sea; internally, it reinforces pro-Duterte and anti-Marcos narratives, embedding itself in the Philippine political debate. The campaign reflects strategic interests attributed to China, particularly the aim of undermining cooperation among littoral states. By fostering distrust toward Vietnam and deepening domestic political divisions, it appears to be designed to weaken coordination among claimant countries. The case study notes that the goal is not reach but long-term influence, with narratives gradually shaping regional perception. According to the PhilStar article, a spokesperson for the Armed Forces of the Philippines described the hostile campaign as a "national security concern", warning that it distorts public understanding and shifts attention away from maritime developments. [vii]

Conclusion

PhilStar's case study provides a technically detailed account of a coordinated HDI campaign, highlighting its tactics, structure, and execution. It illustrates how hostile influence is used to disrupt regional coordination, advancing strategic interests attributed to China. PhilStar has emerged as a useful source in this context, regularly reporting on HDI operations in Southeast Asia.

[Footnotes:]

[i] PhilStar, Cristina Chi, 2025. Copypasta army: Vietnam smeared by bots backing Duterte on X. [online] Published 25 August 2025. Available at: https://www.philstar.com/headlines/2025/08/25/2466374/copypasta-army-vietnam-smeared-bots-backing-duterte-x

[ii] Reuters, Mikhail Flores, 2024. Philippines, Vietnam boost military ties amid South China Sea tensions. [online] Published 30 August 2024. Available at: https://www.reuters.com/world/asia-pacific/philippines-vietnam-sign-defence-agreement-2024-08-30

[iii] PhilStar, Cristina Chi, 2025. Copypasta army: Vietnam smeared by bots backing Duterte on X. [online] Published 25 August 2025. Available at: https://www.philstar.com/headlines/2025/08/25/2466374/copypasta-army-vietnam-smeared-bots-backing-duterte-x

[iv] An SMM panel is an online platform that provides social media marketing services such as likes, followers, comments, or views through an automated dashboard to boost reach, visibility, and engagement.

[v] Bangag: Tagalog (Philippines) slang for "high" or intoxicated, typically used to imply drug use.

[vi] Spamouflage is a portmanteau of spam and camouflage. The term refers to pro-China influence operations that use swarms of bot and spam accounts to amplify state-aligned narratives and obscure criticism of Beijing.

[vii] PhilStar, Cristina Chi, 2025. Copypasta army: Vietnam smeared by bots backing Duterte on X. [online] Published 25 August 2025. Available at: https://www.philstar.com/headlines/2025/08/25/2466374/copypasta-army-vietnam-smeared-bots-backing-duterte-x
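The posting pattern PhilStar describes (bursts of near-identical posts published seconds apart by accounts with no followers) lends itself to simple programmatic detection. The Python sketch below illustrates that heuristic; the post records, field layout, and thresholds are invented for illustration and do not reproduce PhilStar's actual methodology or data.

from difflib import SequenceMatcher
from itertools import combinations
from datetime import datetime

# Toy stand-ins for collected posts: (account, ISO timestamp, text).
# All values here are hypothetical examples, not real campaign data.
posts = [
    ("acct_01", "2025-03-01T09:00:02", "Vietnam is destroying reefs in the South China Sea!"),
    ("acct_02", "2025-03-01T09:00:05", "Vietnam is destroying the reefs in the South China Sea!"),
    ("acct_03", "2025-03-01T09:00:07", "Vietnam is destroying reefs in South China Sea!!"),
    ("acct_04", "2025-03-02T14:10:00", "Lovely weather in Manila today."),
]

MAX_GAP_SECONDS = 30   # posts this close together in time raise suspicion
MIN_SIMILARITY = 0.85  # threshold for "near-identical wording"

def normalize(text: str) -> str:
    # Lowercase and strip punctuation so trivial edits cannot hide copy-paste reuse.
    return " ".join("".join(c for c in text.lower() if c.isalnum() or c.isspace()).split())

def coordinated_pairs(posts):
    # Compare every pair of posts; flag those that are both temporally
    # clustered and textually near-identical.
    parsed = [(acct, datetime.fromisoformat(ts), normalize(txt)) for acct, ts, txt in posts]
    for (a1, t1, x1), (a2, t2, x2) in combinations(parsed, 2):
        gap = abs((t1 - t2).total_seconds())
        if gap <= MAX_GAP_SECONDS and SequenceMatcher(None, x1, x2).ratio() >= MIN_SIMILARITY:
            yield (a1, a2, gap)

for pair in coordinated_pairs(posts):
    print(pair)  # e.g. ('acct_01', 'acct_02', 3.0)

A fuller analysis would cluster the flagged pairs into networks and fold in the other signals the case study cites, such as zero-follower accounts and AI-generated profile pictures; the pairwise comparison above is only the simplest core of such a pipeline.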

  • Singapore Drops the Name, Shifting the Game

    Introduction

Public attribution in the context of influence can be likened to an open letter, backed by technical reasoning and signed with a clear political signature. A specific incident is not only investigated but officially linked to a particular actor. Such attribution is more than a forensic statement: it is a form of strategic messaging. Public attribution always addresses a broader audience: allies, undecided third states, domestic publics, and potential future adversaries.

Attribution to UNC3886 [i]

Until now, Singapore has avoided taking such a step regarding China. The previous tactic was to publish technical details and share indicators, while leaving the actual originator in the dark. This time, however, that cover was dropped [ii]: intrusions into critical infrastructure were attributed to UNC3886, a group described by Mandiant [iii] as a Chinese APT, meaning a state or state-aligned hacking group. While the statement did not explicitly mention "China," the message was clear enough to be registered. The response followed promptly. The Chinese embassy in Singapore rejected the accusations and "courteously" offered to cooperate in the investigation.

Figure 1 - Courtesy of Global Times [iv]

A short time later, the Global Times published a report by the Cybersecurity Association of China, claiming that the United States had exploited zero-day vulnerabilities in Microsoft Exchange to attack a major Chinese defence company. Singapore appeared in the report only in passing, as the location of "proxy IPs." [v]

Figure 2 - Courtesy of Global Times [vi]

Singapore also faced a degree of backlash, albeit not particularly strong: on August 5, 2025, the Chinese embassy responded in the Straits Times "Opinion Forum" [vii] to the article published on July 25, 2025 [viii], rejecting the allegations and criticizing the newspaper itself.

Figure 3 - Courtesy of Straits Times [ix]

A Facebook post on the topic followed, drawing comments and shares that made it clear the matter hit home.

Figure 4 - Courtesy of Facebook [x]

China's Attribution Response Pattern

Instead of placing Singapore at the center of a counter-campaign, Beijing shifted the focus to its strategic rival, the United States. Chinese reactions to attribution now seem to follow a well-rehearsed pattern: Country X names Chinese actors; within a short time, a state-directed entity such as the CVERC (China National Computer Virus Emergency Response Center) [xi] issues a report accusing the United States of having done the same. This maneuver flips the script, redirects focus, and channels the discussion into another narrative. Noteworthy in this context is the rapid response to the SharePoint incident; the counter-report was released in about a week. In previous cases, such as Volt Typhoon (also addressed by the CVERC), reports from Chinese front organizations have taken considerably longer to surface. It will be interesting to see how this story develops and whether we soon see a dedicated CVERC report on Singapore.

Conclusion

The public attribution of the alleged Chinese group UNC3886 by Singapore marks a notable departure from the country's previously discreet communication style in matters of cyber threats. Instead of issuing technical reports or relying on diplomatic channels, Singapore deliberately opted for a visible public stance. This can be understood as a strategic signal, both to potential attackers and to international partners.
The effectiveness of such attributions, however, remains contested. While publicly naming an actor increases the costs for attackers, forcing them to adapt tactics or infrastructure, authoritarian states such as China often respond not by retreating, but by issuing counteraccusations. This was evident in the immediate reaction of the Chinese Embassy in Singapore. Such behaviour is particularly relevant in the context of hostile digital influence, as counteraccusations are not merely a defensive move but a deliberate attempt to reshape the narrative, cast doubt on the original claims, and undermine trust in the accuser. In these situations, attribution becomes part of a broader information conflict, where the struggle is not only over establishing technical culpability but also over influencing public perception, framing international discourse, and eroding the credibility of opposing voices. For Singapore, this new visibility is not without risks. Attribution creates expectations, narrows the room for maneuver, and carries the potential to exacerbate tensions with China, particularly if the naming is interpreted not in isolation but as part of a broader political shift.

[Footnotes]

[i] UNC refers to "uncategorised" or "unclassified".

[ii] The Straits Times, Jaipragas, B., 2025. No idle punt: Why Singapore called out cyber saboteur UNC3886 by name. [online] Published 25 July 2025. Available at: https://www.straitstimes.com/opinion/no-idle-punt-why-singapore-called-out-cyber-saboteur-unc3886-by-name

[iii] Google Cloud Blog (Mandiant), 2024. Cloaked and Covert: Uncovering UNC3886 Espionage Operations. [online] Published 18 June 2024. Available at: https://cloud.google.com/blog/topics/threat-intelligence/uncovering-unc3886-espionage-operations?hl=en

[iv] Global Times, 2025. US intelligence uses Microsoft Exchange zero-day vulnerabilities to attack Chinese military-industrial enterprise's email servers: report. [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml

[v] Global Times, 2025. US intelligence uses Microsoft Exchange zero-day vulnerabilities to attack Chinese military-industrial enterprise's email servers: report. [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml

[vi] Global Times, 2025. US intelligence uses Microsoft Exchange zero-day vulnerabilities to attack Chinese military-industrial enterprise's email servers: report. [online] Published 1 August 2025, 02:24 PM. Available at: https://www.globaltimes.cn/page/202508/1339860.shtml

[vii] The Straits Times, "Forum: China firmly against and cracks down on all forms of cyber attacks", 2025. [online] Published 5 August 2025. Available at: https://www.straitstimes.com/opinion/forum/forum-china-firmly-against-and-cracks-down-on-all-forms-of-cyber-attacks

[viii] The Straits Times, Jaipragas, B., 2025. No idle punt: Why Singapore called out cyber saboteur UNC3886 by name. [online] Published 25 July 2025. Available at: https://www.straitstimes.com/opinion/no-idle-punt-why-singapore-called-out-cyber-saboteur-unc3886-by-name

[ix] The Straits Times, "Forum: China firmly against and cracks down on all forms of cyber attacks", 2025. [online] Published 5 August 2025. Available at: https://www.straitstimes.com/opinion/forum/forum-china-firmly-against-and-cracks-down-on-all-forms-of-cyber-attacks

[x] Facebook, China Embassy in Singapore, 2025. Facebook post.
[online] Available at: https://www.facebook.com/Chinaemb.SG/posts/1043167338010724/

[xi] The China National Computer Virus Emergency Response Center (CVERC) is a state-run institution responsible for analyzing cyber threats and coordinating national responses to viruses and software vulnerabilities. It also plays a role in China's geopolitical cyber strategy.

  • CRC Weekly: Cyber-based hostile influence campaigns 25th – 31st August

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the 25th to the 31st of August 2025, we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. This week's events highlight the continued diversification of actors and tactics in the information domain. Persistent Russian campaigns targeting European states occurred alongside reported US-linked activities, including a domestic covert influencer network, as well as an alleged hostile operation in Greenland. Concurrently, state responses were relatively assertive, evidenced by Indonesia's heightened regulatory pressure on tech platforms and Denmark's formal diplomatic summons of the US Ambassador.

[Contents]

[Report Highlights]
[Weekly Review]
False Flag Ban Narrative Exploits UK Patriotism Debate to Fuel Outrage
Lavrov Interview Showcases Russia's Standardized Disinformation and FIMI Playbook
Russia's SVR and Georgian Dream Execute Coordinated Anti-Western Disinformation Operations
Polish Elections Withstand Foreign Influence, But Systemic Vulnerabilities Persist
Denmark Summons US Envoy Amid Greenland Influence Campaign Allegations
Pro-Russia Network Targets Romanian Election with Anti-Sandu Disinformation Campaign
Democratic-Aligned Dark Money Group Covertly Pays Influencers for Coordinated Messaging
Indonesia Threatens Platforms with Fines and Expulsion Over Harmful Content
[Takeaways]
[Download Report]

[Report Highlights]

A pro-Russia network published 8,514 posts, which amassed at least 16 million views, attacking Romania's electoral integrity and promoting anti-EU sentiment. - DFRLab

US-linked operatives allegedly compiled lists of sympathetic locals and sought negative narratives to undermine Danish authority in Greenland. - ABC News

An undisclosed political group is reportedly offering influencers up to $8,000 monthly in exchange for promoting partisan Democratic messaging in the US. - WIRED

A forged screenshot of a credible news publication was circulated by conservative users to lend journalistic authority to the claim that the UK is banning the English flag. - NewsGuard

Indonesia is shifting content moderation responsibility to platforms by demanding proactive removal of disinformation to prevent social unrest. - Reuters

Poland's absence of a permanent Digital Services Coordinator creates a critical regulatory vacuum, undermining effective enforcement of the EU Digital Services Act. - FIMI-ISAC

[Weekly Review]

False Flag Ban Narrative Exploits UK Patriotism Debate to Fuel Outrage

According to NewsGuard, conservative and nationalist social media users are circulating a fabricated screenshot of a Daily Mail article to falsely claim the U.K. government is considering a ban on the English flag. This disinformation leverages a real, recent controversy involving the Birmingham City Council's removal of flags from lampposts for safety reasons, a move which had already ignited a debate over patriotism and national identity. The campaign's core tactic is the use of a convincing but non-existent article, purportedly written by a real political editor and featuring Prime Minister Keir Starmer, to lend the claim false credibility.
This method has proven effective, with one post receiving over 850,000 views and generating significant hostile engagement. The Daily Mail has officially confirmed the article is fake, and no credible news outlets or government sources have suggested any such policy is under consideration. The incident highlights the use of forged media to exploit existing societal tensions for political purposes.

Source: NewsGuard, Mascha Wolf, 2025, NewsGuard Reality Check, Available at: https://www.newsguardrealitycheck.com/p/uk-banning-english-flag-fake-news

Lavrov Interview Showcases Russia's Standardized Disinformation and FIMI Playbook

An analysis from EUvsDisinfo deconstructs a recent interview by Russian Foreign Minister Sergey Lavrov, identifying it as a masterclass in the Kremlin's FIMI playbook. The piece systematically dismantles several core Russian narratives designed to manipulate Western audiences. These include the false pretense of being open to dialogue while pressing for surrender, the baseless claim of President Zelenskyy's illegitimacy, and the lie that military strikes avoid civilian infrastructure. The analysis also refutes the foundational justifications for the invasion, such as the pretext of protecting Russian speakers from a fabricated genocide and the distortion of the Budapest Memorandum. These narratives collectively serve a strategy of denial, deflection, and distortion, aimed at rewriting history and justifying a war of aggression. The Kremlin's manipulation of legal language and international agreements is presented as a cynical tool to legitimize its military actions and pursue imperial ambitions while casting Ukraine as unreasonable.

Source: EUvsDisinfo, 2025, Russia's narrative manipulation, [online] Available at: https://euvsdisinfo.eu/russias-narrative-manipulation/

Russia's SVR and Georgian Dream Execute Coordinated Anti-Western Disinformation Operations

A report from EUvsDisinfo details how Russia's foreign intelligence service (SVR) and Georgia's ruling party, Georgian Dream, are conducting a coordinated disinformation campaign to undermine the country's Western partners. The campaign aims to erode public trust in the US, EU, and UK by portraying them as destabilizing forces orchestrating a "color revolution." A distinct operational pattern involves the SVR releasing specific accusations, which are then swiftly amplified by pro-government media outlets like TV Imedi and POSTV and echoed by high-level Georgian officials. This synchronized messaging has systematically targeted different Western actors over time, beginning with the US before shifting focus to the EU and later the UK. These actions, often supported by fabricated or unverified video evidence, represent a deliberate strategy to discredit domestic civil society, derail Georgia's Euro-Atlantic integration, and maintain the nation's position within Moscow's sphere of influence.
Source: EUvsDisinfo, 2025, Russian scripts, Georgian voices: How disinformation targets the country's Western allies: the US, EU, and UK in Georgia, [online] Available at: https://euvsdisinfo.eu/russian-scripts-georgian-voices-how-disinformation-targets-the-countrys-western-allies-the-us-eu-and-uk-in-georgia/

Polish Elections Withstand Foreign Influence, But Systemic Vulnerabilities Persist

A FIMI-ISAC research paper on the 2025 Polish presidential election concludes that while foreign information manipulation from Russia and Belarus posed a persistent threat, its overall impact was constrained by civil society resilience and the limited sophistication of some campaigns. Known operations like Doppelganger, Operation Overload, and the Pravda Network disseminated anti-EU, anti-Ukrainian, and anti-establishment narratives, often amplifying far-right candidates by portraying them as defenders of national sovereignty. The threat landscape was notable for its consistency with previous elections, although domestic political actors were observed adopting similar manipulative tactics, such as fabricating personas and spreading false claims. Significant systemic weaknesses persist, including vulnerabilities on platforms like X, Meta, and TikTok that are exploited for coordinated inauthentic activity. A critical vulnerability identified is Poland's lack of a permanent Digital Services Coordinator, creating a regulatory vacuum. The report recommends strengthening platform accountability under the DSA and establishing permanent cross-sector coordination to safeguard Poland's democratic processes.

Source: FIMI-ISAC, 2025, Foreign Information Manipulation and Interference (FIMI) during the 2025 Polish presidential elections, [online] Available at: https://fimi-isac.org/wp-content/uploads/2025/08/FDEI-POLISH-ELECTION-COUNTRY-REPORT-2025-2.pdf

Denmark Summons US Envoy Amid Greenland Influence Campaign Allegations

A report by ABC News indicates Denmark has summoned a senior US diplomat following allegations of a covert American influence campaign in Greenland. The operation, reportedly conducted by at least three individuals with connections to US President Donald Trump, is believed to aim at weakening the relationship between Greenland and Denmark from within. Alleged tactics include compiling lists of US-friendly Greenlanders, identifying individuals opposed to Trump, and soliciting locals for information that could portray Denmark negatively in American media. These activities align with stated US interests in the strategically significant, resource-rich territory. In response, Denmark's Foreign Minister deemed any interference unacceptable. The Danish Security and Intelligence Service further noted that Greenland is a target for influence campaigns designed to exploit or fabricate divisions, confirming it has increased its operational presence in the region. The incident underscores the growing geopolitical contestation in the Arctic, where influence operations are an emerging vector of statecraft.
Source: ABC News, 2025, Denmark summons US envoy over suspected influence operations in Greenland, [online] Available at: https://www.abc.net.au/news/2025-08-28/denmark-summons-us-envoy-people-carrying-influence-in-greenland/105705686

Pro-Russia Network Targets Romanian Election with Anti-Sandu Disinformation Campaign

Analysis from the Digital Forensic Research Lab (DFRLab) illustrates how a coordinated pro-Russia network of at least 215 accounts on Facebook, TikTok, and Instagram has been conducting a hostile influence campaign since December 2024. The operation sought to undermine Romania's presidential election by accusing Moldovan President Maia Sandu of electoral interference. The network initially supported one far-right candidate before pivoting to another, George Simion, after the first was barred from running. Operators deployed a range of tactics, including the use of generative AI for content and profile pictures, hijacked accounts, and coordinated hashtags in Russian and Romanian. Key narratives were anti-Sandu, anti-EU, and pro-Russian, with specific themes accusing Moldova of dragging Romania into conflict. With over 8,500 posts generating at least 16 million views, the campaign demonstrates a systemic effort to exploit platform vulnerabilities. The findings also reveal deficiencies in platform transparency, as many accounts operated below follower thresholds required for inclusion in public research datasets, potentially obscuring the campaign's full scale.

Source: Digital Forensic Research Lab (DFRLab), Valentin Châtelet, 2025, Cross-platform campaign accuses Moldova's Sandu of meddling in Romanian elections, [online] Available at: https://dfrlab.org/2025/08/26/cross-platform-campaign-accuses-moldovas-sandu-of-meddling-in-romanian-elections/

Democratic-Aligned Dark Money Group Covertly Pays Influencers for Coordinated Messaging

According to an investigation by WIRED, a dark money organization is secretly funding prominent Democratic-aligned influencers to promote party narratives online. This initiative involves payments of up to $8,000 per month, contingent upon the influencers concealing the funding source and adhering to specific content restrictions. The operation signifies a notable evolution in domestic influence tactics, leveraging the parasocial trust and perceived authenticity of social media creators to conduct coordinated messaging campaigns. By requiring secrecy and imposing content controls, the effort intentionally blurs the line between genuine grassroots support and undisclosed paid promotion. This model effectively creates a network of astroturfed political messaging that appears organic to unwitting audiences. The use of such covert funding mechanisms within the domestic political landscape presents a significant challenge for platform transparency and the integrity of online discourse, mirroring strategies often associated with state-linked information operations.

Source: WIRED, Taylor Lorenz, 2025, A Dark Money Group Is Secretly Funding High-Profile Democratic Influencers, [online] Available at: https://www.wired.com/story/dark-money-group-secret-funding-democrat-influencers/

Indonesia Threatens Platforms with Fines and Expulsion Over Harmful Content

Reuters has published a story covering how the Indonesian government has summoned representatives from Meta, TikTok, and other platforms and demanded they proactively moderate harmful content.
This move signifies a strategic shift, placing the onus on platforms to remove disinformation without waiting for government requests. The directive is a direct response to online campaigns that have successfully fueled public anger and real-world protests. Specific examples of this hostile influence include a deepfake video of the finance minister and mislabeled footage of past riots used to incite unrest. Notably, TikTok videos were reportedly used to mobilize youth for demonstrations, resulting in clashes and arrests. Jakarta is backing the demand with significant penalties for non-compliance, which range from fines and temporary suspension to the complete revocation of a platform's registration. The government's objective is to mitigate what it calls the "chaos" caused by inaccurate information and protect national stability. Meetings with X and YouTube are also planned as part of this broader regulatory push.

Source: Reuters, 2025, Indonesia urges TikTok, Meta to act against harmful online content, [online] Available at: https://www.reuters.com/business/media-telecom/indonesia-urges-tiktok-meta-act-against-harmful-online-content-2025-08-27/

[Takeaways]

Indonesia’s move to impose direct liability on platforms, juxtaposed with Poland's persistent regulatory gaps, foreshadows an increasingly fragmented landscape for digital governance. This divergence creates a dual challenge: it imposes complex, country-specific compliance burdens on platforms while offering influence operators strategic havens, allowing them to exploit the jurisdictions of least resistance.

[Download Report]

  • CRC Spotlight: Iranian Digital Hostile Influence Campaign Targeting the UK

Recent studies and reports have drawn attention to an Iranian digital hostile influence campaign targeting the UK, reflecting ongoing attempts to shape public debate and weaken cohesion. The campaign often leverages separatist movements, acting as an opportunistic “fellow traveler” that embeds itself within these narratives to amplify division. Building on this body of work, our analysis provides additional insights and highlights nuances that may otherwise be overlooked. In this Spotlight article, we outline how our findings contribute to the existing picture and highlight areas that warrant further attention. [Download Full Report here]

  • Cross-border influence targeting Romania’s vote and Moldova’s leadership

In our March 17 report, “The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts,” we comprehensively examined the developments that ultimately led to the annulment of Romania’s presidential elections (link to our report here). This week, in our Weekly Blog, we want to highlight a recent report by DFRLab [i], which analyzes a coordinated network of cross-platform accounts associated with these events.

DFRLab identified at least 215 accounts in the network: 116 on Facebook, 79 on TikTok, and 17 on Instagram. Between December 2024 and June 6, 2025, these accounts published a total of 8,514 posts, amassing at least 16 million views and 681,000 likes. Content focused on spreading anti-EU, pro-Russian, and anti-PAS narratives. Initially, the network supported presidential candidate Călin Georgescu, but after the annulment of the 2024 election, it shifted toward backing far-right candidate George Simion. It also targeted Moldovan President Maia Sandu, repeatedly accusing her of election interference. In parallel, the network amplified claims made by Telegram CEO Pavel Durov that France had attempted to suppress “conservative voices” in Romania. The operators disseminated content in the form of slideshows, photos, videos, and text posts, coordinated across platforms using hashtags in Romanian and Russian. Particularly striking was the use of generative AI, including fake personas with Romanian-sounding names, AI-generated profile pictures and avatars, and AI-based video editing, which suggested an automated approach to content production and distribution. At least twelve Facebook accounts also presented themselves as “professional accounts” or “digital creators,” possibly with the intention of monetization. Posting activity surged in February, April, and May 2025, coinciding with key phases of Romania’s presidential campaign. A notable finding by DFRLab was that many accounts remained below Meta’s transparency threshold: only 55 accounts appeared in the official Content Library, while the rest had fewer than 1,000 followers. DFRLab concluded that this may have been a deliberate strategy to avoid detection, suggesting that the network’s actual reach could be significantly larger than current data indicates. [ii]
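The scale of that blind spot is easy to make concrete. The following minimal sketch uses the account figures reported by DFRLab together with one deliberately simplified assumption (views spread evenly across accounts, which is hypothetical) to show how much of a network can sit outside a public transparency dataset:

```python
# Minimal sketch: how a follower-count threshold hides most of a network
# from public transparency datasets. Account figures are taken from the
# DFRLab report; the even per-account view split is a simplifying assumption.

TOTAL_ACCOUNTS = 215      # accounts identified by DFRLab
VISIBLE_ACCOUNTS = 55     # accounts large enough to appear in the Content Library
TOTAL_VIEWS = 16_000_000  # minimum views attributed to the network

hidden_accounts = TOTAL_ACCOUNTS - VISIBLE_ACCOUNTS
print(f"Accounts invisible to researchers: {hidden_accounts} "
      f"({hidden_accounts / TOTAL_ACCOUNTS:.0%} of the network)")

# Under the even-spread assumption, the publicly visible share of reach:
observable_views = TOTAL_VIEWS * VISIBLE_ACCOUNTS / TOTAL_ACCOUNTS
print(f"Views attributable to visible accounts: ~{observable_views:,.0f} "
      f"of {TOTAL_VIEWS:,}")
```

Even on these crude assumptions, roughly three-quarters of the accounts, and most of the attributed reach, would be invisible to anyone relying on the public dataset alone, which is the detection gap DFRLab describes.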
Passports and Moldovan Interference

In May 2025, the network significantly escalated its activity, with over 1,450 videos promoting the so-called “Moldovan Scenario.” These claimed that tens of thousands of Moldovans holding Romanian passports, allegedly directed by President Maia Sandu and PAS, were mobilized to interfere in Romania’s election process.

Figure 1 - Courtesy of DFRLab [iii]

Anti-PAS and Anti-Sandu Narratives

Another prominent narrative portrayed President Maia Sandu and PAS as corrupt, illegitimate, and warmongering. Posts claimed that Sandu was dragging Moldova into a foreign war, endangering national security, and serving external powers rather than the country's interests. At the same time, the network stoked tensions with Romania, labelled Sandu an “illegitimate president,” and called for her resignation.

Figure 2 - Courtesy of DFRLab [iv]

Victory Day vs. Europe Narrative

Figure 3 - Courtesy of DFRLab [v]

Narrative of Unlawful Arrests

Between March and May 2025, the network intensified its activity around Romania’s presidential elections by reinforcing pro-Georgescu messaging and supporting detained Gagauz Governor Evghenia Guțul. Following Georgescu’s arrest on February 25, 2025, posting activity spiked: in early March, the network amplified solidarity protests in Bucharest and published dozens of supportive messages. A further wave followed the Constitutional Court's decision in May to bar him from running. In parallel, the network pushed anti-EU and anti-Romania narratives, casting Georgescu as a victim of political repression. It also promoted content in support of Guțul following her detention on March 27, 2025, including Soviet-era symbolism such as the song “Katyusha” as a sign of solidarity. Pro-Russian politicians claimed her arrest lacked a legal basis, and several videos reached high viewership, with some TikTok clips attracting up to 300,000 views.

Stolen Accounts and AI Personas

DFRLab reveals that the network relied on both stolen or repurposed accounts and AI-generated fake personas. Several profiles that had originally posted authentic content, such as family photos, were later repurposed to spread inauthentic material. Fake identities appeared simultaneously across Facebook, Instagram, and TikTok, with coordinated posting patterns. Some of these accounts infiltrated local Facebook news groups to expand their reach.

Figure 4 - Displayed in DFRLab Report [vi]

To increase credibility, the operators heavily utilized generative AI for profile pictures, avatars, and video elements, as well as stock footage that, in some cases, dated back years but was repurposed as current protest footage. Tools such as DreamFace were used to generate synthetic faces that appeared as supposed real people in videos.

Figure 5 - Use of AI-Tools Displayed in DFRLab Report [vii]

DFRLab concludes that the network deliberately combined hijacked accounts, AI-generated material, and recycled stock footage to create the illusion of authenticity and lend credibility to its influence operation.
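Coordinated posting patterns of this kind are typically surfaced by searching for bursts in which several distinct accounts push the same hashtag within minutes of one another. The sketch below illustrates that heuristic on synthetic data; the handles, hashtags, and timestamps are entirely hypothetical, and real investigations such as DFRLab's work from platform exports with far richer features:

```python
# Minimal sketch of a common coordination heuristic: flag groups of
# distinct accounts posting the same hashtag inside a short time window.
# All handles, hashtags, and timestamps below are synthetic examples.
from collections import defaultdict
from datetime import datetime

WINDOW_SECONDS = 120  # fixed buckets; sliding windows are more robust in practice

posts = [
    ("ro_patriot_01",   "#example_tag", datetime(2025, 5, 2, 9, 0, 12)),
    ("moldova_updates", "#example_tag", datetime(2025, 5, 2, 9, 0, 58)),
    ("daily_news_ro",   "#example_tag", datetime(2025, 5, 2, 9, 1, 40)),
    ("ro_patriot_01",   "#other_tag",   datetime(2025, 5, 3, 14, 0, 0)),
]

bursts = defaultdict(set)
for account, tag, ts in posts:
    bucket = int(ts.timestamp()) // WINDOW_SECONDS
    bursts[(tag, bucket)].add(account)

for (tag, _), accounts in bursts.items():
    if len(accounts) >= 3:  # several distinct accounts in one burst is suspicious
        print(f"possible coordination on {tag}: {sorted(accounts)}")
```

Bursts found this way are candidates, not verdicts; attribution in reports like this one also rests on shared AI-generated imagery, identical personas, and account histories.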
Inauthentic News Channel

The network also used ono.news, a news and marketing channel, to disseminate its content.

Figure 6 - Displayed in DFRLab Report [viii]

While the accounts remained active on Telegram and TikTok, the channel appears to have been removed from Facebook. Initially, ono.news appeared on Facebook under the name Gabriel Matei, a profile that bore signs of being a fake persona before shifting into the identity of a supposed news outlet. DFRLab found indications of extensive AI technology use: the TikTok and Telegram profiles utilized an AI-generated profile picture, and videos featured the watermark of D-ID, a company specializing in visual AI avatars. DFRLab concludes that ono.news was not a genuine media channel but rather a fabricated brand identity, leveraging fake identities and generative AI to appear credible and amplify the network’s narratives.

Conclusion

DFRLab’s analysis reveals that the network employed coordinated cross-platform activity to disseminate anti-PAS, anti-Sandu, and anti-EU narratives, gradually expanding its focus from Moldovan politics to Romania’s presidential election. The accounts generated at least 16 million views and 681,000 likes, though the actual impact remains difficult to quantify. Tactics included the systematic use of hijacked accounts, AI-generated personas, and recycled stock material to create an appearance of authenticity. The investigation further demonstrates how the network operated largely below Meta’s transparency threshold, thereby avoiding detection, and exposed significant discrepancies between publicly accessible content and the data available through Meta’s Content Library and API. These gaps highlight persistent weaknesses in platform moderation and disclosure processes that continue to enable covert influence campaigns. We recommend reading the full DFRLab case study, which provides additional valuable details and context.

[Footnotes]

[i] DFRLab, Valentin Châtelet, 2025. Cross‑platform campaign accuses Moldova’s Sandu of meddling in Romanian elections. [online] Published 26 August 2025. Available at: https://dfrlab.org/2025/08/26/cross-platform-campaign-accuses-moldovas-sandu-of-meddling-in-romanian-elections/
[ii] Ibid.
[iii] Ibid.
[iv] Ibid.
[v] Ibid.
[vi] Ibid.
[vii] Ibid.
[viii] Ibid.

DISCLAIMER

Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.

Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.

Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.

Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.

Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.

Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.

Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

  • Weekly Report: Cyber-based hostile influence campaigns 18th - 24th August

[Listen to the Podcast]

[Introduction]

Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the 18th to the 24th of August 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.

This week's intelligence landscape was dominated by Russia's influence operations. Russia employed a spectrum of tactics, from sophisticated AI-generated fakes targeting European diplomacy to imposter news sites aimed at destabilizing Armenia. This digital assault was complemented by a soft-power push, as state-backed media expanded across the Global South to build narratives favorable to Moscow.

[Report Highlights]

● Russian influence networks circulated a fabricated image portraying European leaders as humiliated supplicants waiting in a White House corridor. - NewsGuard
● Russia is running at least two parallel influence campaigns, Storm-1516 and Matryoshka, to destabilize Armenia as it pivots politically toward the West. - NewsGuard
● A Chilean television channel allegedly handed over its entire broadcast signal to the Russian state-backed news broadcaster RT, prompting sanction proceedings from the country's regulator. - BBC Global Disinformation Unit
● British Columbia Wildfire Service formally warned residents about the public safety threat posed by viral, AI-generated fake emergency images. - The Conversation
● Days after meeting with Vladimir Putin, President Trump publicly used the Russian leader's claims about rigged elections to justify his new push against mail-in voting in the U.S. - Politico
● A top Greek communications official has identified disinformation as the "greatest threat faced by modern Western democracies" and a key tool of global extremist movements. - Greek City Times
● The G7 Rapid Response Mechanism developed and delivered a collective response framework to standardize how member states counter FIMI threats. - European External Action Service

[Weekly Review]

Russia Uses AI Fake to Mock European Leaders at Trump Summit
Storm-1516 deploys Imposter News Site targeting the Armenian Government
Assessing Russia’s Asymmetric Media Expansion in the Global South
Digital Wildfires: Countering GenAI Disinformation in Emergency Management
China-Linked APT Targets Taiwan's Web Hosting Sector
Following Putin Meeting, Trump's Plan to Ban Mail-In Ballots Ignites Security Fears
Greece Confronts "Greatest Threat" of Political Fake News
The EEAS's Counter-Interference Posture - ‘A FIMI Toolbox’

Russia Uses AI Fake to Mock European Leaders at Trump Summit

A report from NewsGuard reveals that pro-Kremlin sources are disseminating a sophisticated, AI-generated image to mock European leaders following a White House summit on the war in Ukraine. The fabricated image purports to show the leaders of France, the UK, Germany, Italy, Finland, and the European Commission looking somber and subservient in a corridor, supposedly snubbed by U.S. President Trump.
Propagated across Telegram, X, and the Pravda network of websites, the narrative aimed to portray the European delegation as a “coalition of those in waiting.” The analysis confirms the deception, citing the AI detection tool Hive, which found the image to be synthetic with 99.8 percent certainty. This incident is presented as part of a recurring Russian information warfare tactic that targets high-profile diplomatic events to sow discord and project an image of Western weakness.

Source: NewsGuard, Hope Talbot, Aug 19, 2025, NewsGuard Reality Check, [online] Available at: https://www.newsguardrealitycheck.com/p/pro-russian-sources-spread-ai-generated
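A note on that verification step: a score like Hive's 99.8 percent is a classifier probability, not ground truth, and editorial workflows typically act on it through thresholds with a human-review band in the middle. The sketch below illustrates such a triage step; detect_synthetic is a hypothetical stand-in rather than Hive's actual API, and the threshold values are purely illustrative:

```python
# Minimal sketch of triaging an AI-image-detection score of the kind
# NewsGuard cites from Hive. `detect_synthetic` is a hypothetical stand-in
# (it does not reflect Hive's real API); thresholds are illustrative only.

def detect_synthetic(image_path: str) -> float:
    """Pretend detector: returns the probability that an image is AI-generated."""
    return 0.998  # hard-coded to mirror the 99.8 percent score in the report

AUTO_FLAG = 0.95     # at or above: treat as likely synthetic
HUMAN_REVIEW = 0.60  # between the bands: route to an analyst

def triage(image_path: str) -> str:
    score = detect_synthetic(image_path)
    if score >= AUTO_FLAG:
        return f"likely synthetic ({score:.1%}): flag and trace provenance"
    if score >= HUMAN_REVIEW:
        return f"uncertain ({score:.1%}): send to human review"
    return f"no detection ({score:.1%}): continue standard verification"

print(triage("summit_corridor_image.jpg"))
```

A low score in such a pipeline certifies nothing; it simply routes the image back to conventional provenance checks such as reverse image search and contacting the purported source.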
Storm-1516 deploys Imposter News Site targeting Armenian Government

An investigation by NewsGuard has uncovered a Russian influence operation, identified as Storm-1516, targeting Armenia with destabilizing disinformation. The campaign’s centerpiece is a baseless claim, originating from an imposter news site called EULeaks.eu, that the Armenian Prime Minister’s wife, Anna Hakobyan, embezzled $3.4 million from a children’s cancer charity. This narrative was amplified across multiple social media platforms. The report asserts this is a deliberate effort to undermine Prime Minister Nikol Pashinyan's government as it pivots away from Moscow’s orbit and toward the West. The analysis also reveals the imposter outlet’s tactic of using AI to rewrite articles from legitimate sources to feign authenticity. A parallel Russian campaign, dubbed Matryoshka, was observed simultaneously targeting Armenia with separate disinformation, indicating a multi-pronged assault.

Source: NewsGuard's Reality Check, Eva Maitland, Aug 19, 2025, Russia Targets Armenia Via an Imposter European News Outlet, [online] Available at: https://www.newsguardrealitycheck.com/p/russia-targets-armenia-via-an-imposter

Assessing Russia’s Asymmetric Media Expansion in the Global South

A recent article from the BBC Global Disinformation Unit details how Russian state-backed media outlets RT and Sputnik are expanding their influence across the Global South as they face increasing restrictions in the West. The analysis explains that since Russia’s 2022 invasion of Ukraine prompted bans in the EU, US, and UK, these channels have opened new bureaus and launched services in Africa, Latin America, and the Balkans. This expansion strategically fills a void left by some downsizing Western media outlets. Experts cited in the article describe Russia's approach as a "careful manipulation" that blends some accurate reporting with curated narratives and disinformation to appeal to audiences with anti-imperialist sentiments. According to the analysis, this media push is designed to undermine Western influence, build support for Moscow’s geopolitical agenda, and foster new economic partnerships in a shifting global order.

Source: BBC, Juliana Gragnani and Maria Korenyuk, August 25, 2025, How Russia is quietly trying to win over the world beyond the West, [online] Available at: https://www.bbc.com/news/articles/cm2vr37yd4no

Digital Wildfires: Countering GenAI Disinformation in Emergency Management

An article in The Conversation warns that the proliferation of advanced, accessible generative AI tools presents a growing threat to public safety during emergencies. Citing a recent case where British Columbia’s Wildfire Service had to caution residents about fake, AI-generated fire images, the analysis explains that such content exploits human psychology. People under stress are more susceptible to sensational, emotionally charged information, which can lead to confusion, panic, and the misallocation of resources. The authors argue that this digital disinformation can cause direct harm and disproportionately affects vulnerable populations. To counter this, the article advocates for a comprehensive strategy that includes fostering public media literacy, establishing clear policies for newsrooms using AI, strengthening platform-level fact-checking, and enforcing legal deterrents against the deliberate spread of false information.

Source: The Conversation, August 2025, AI-generated misinformation can create confusion and hinder responses during emergencies, [online] Available at: https://theconversation.com/ai-generated-misinformation-can-create-confusion-and-hinder-responses-during-emergencies-263081

China-Linked APT Targets Taiwan's Web Hosting Sector

According to Infosecurity, researchers at Cisco Talos have identified a new Chinese advanced persistent threat (APT) group, UAT-7237, that is targeting web infrastructure providers in Taiwan. Active since 2022, the group focuses on establishing long-term access for data theft and shows a specific interest in victims' VPN and cloud infrastructure. By compromising web hosting providers, the attackers gain access to sensitive government and corporate data. This allows for long-term espionage and the potential to disrupt critical infrastructure. The campaigns highlight the growing sophistication of state-sponsored cyberespionage and the strategic importance of protecting digital infrastructure to prevent data theft and maintain national security. Researchers assess with high confidence that UAT-7237 is a distinct Chinese state-backed actor, likely operating as a subgroup of another actor, UAT-5918, but with significant deviations in its tactics. This activity occurs amid escalating Chinese cyber intrusions against Taiwan’s critical infrastructure, as noted by Taiwanese security officials in early 2025.

Source: Infosecurity Magazine, James Coker, Chinese APT Group Targets Web Hosting Services in Taiwan, [online] Available at: https://www.infosecurity-magazine.com/news/chinese-apt-web-hosting-taiwan/

Following Putin Meeting, Trump's Plan to Ban Mail-In Ballots Ignites Security Fears

After meeting with Vladimir Putin in Alaska, President Trump praised the Russian leader and echoed his claims that U.S. mail-in voting had rigged the 2020 election. A subsequent Politico report reveals that just days later, Trump announced his intention to sign an executive order to ban mail-in ballots and some voting machines. This sequence of events has sent shockwaves through the election security community. Officials warn that dismantling modern voting infrastructure creates the very vulnerabilities it claims to solve. Experts argue that mass hand-counting is an unworkable fantasy that would be slow, error-prone, and an "open invitation" for malfeasance. Critics, including lawmakers and state officials, assert that the president's actions, influenced by an adversary known for election interference, dangerously erode faith in the democratic process and hand a strategic gift to those who wish to weaken the United States.

Source: Politico, Maggie Miller, 08/19/2025, Trump and Putin are both criticizing mail-in voting. Election officials are freaking out., [online] Available at: https://www.politico.com/news/2025/08/19/trump-and-putin-are-both-criticizing-mail-in-voting-election-officials-are-freaking-out-00515513
Greece Confronts "Greatest Threat" of Political Fake News

In mid-August, a false story alleging that Greek Prime Minister Kyriakos Mitsotakis’s wife had purchased a luxury apartment was published online, only to be retracted under the threat of legal action. This incident, according to a Greek City Times article, exemplifies the "almost daily" slanderous attacks faced by the couple. The Prime Minister's Director of Digital Communication, Nikos Romanos, describes this tactic as part of a broader disinformation problem that constitutes the greatest threat to Western democracies. These campaigns exploit platforms from X to YouTube, promoting conspiracy theories on topics ranging from personal matters to national crises like the recurring summer wildfires. Romanos warns that while many stories seem absurd, they remain dangerous, highlighting legal recourse as one of the only effective countermeasures against anonymous online actors.

Source: Greek City Times, Bill Giannopoulos, August 25, 2025, Fake News Campaigns Target Greek PM Mitsotakis and Wife, [online] Available at: https://greekcitytimes.com/2025/08/25/fake-news-campaigns-target-greek-pm-mitsotakis-and-wife/

The EEAS's Counter-Interference Posture - ‘A FIMI Toolbox’

A 2024 report from the European External Action Service (EEAS) details the European Union’s comprehensive strategy for combating Foreign Information Manipulation and Interference (FIMI), a threat increasingly amplified by AI-driven tools. The EEAS frames its response around a four-pillar FIMI Toolbox, focusing on situational awareness, resilience building, regulation, and external diplomatic action. Central to this effort in 2024 was protecting the European Parliament elections through enhanced cooperation via the Rapid Alert System. The EEAS highlights significant progress in operationalizing tools like the FIMI Information Sharing and Analysis Centre (FIMI-ISAC) to unite civil society responders and expanding the reach of the EUvsDisinfo platform to over 38 million people. The EU also intensified its global posture, leading the G7 Rapid Response Mechanism, signing new security partnerships with FIMI components, and applying sanctions against Russian propaganda entities to defend the integrity of the EU’s information space and support partners like Ukraine.

Source: European External Action Service (EEAS), August 2025, 2024 Report on EEAS Activities to Counter Foreign Information Manipulation and Interference (FIMI), [online] Available at: https://www.eeas.europa.eu/sites/default/files/2025/documents/2024 Report on EEAS Activities to Counter FIMI.pdf

[Takeaways]

When state actors can successfully inject and legitimize fabricated information at the highest political levels, it erodes the ability of institutions and the public to engage in evidence-based decision making. This creates a strategic vulnerability where policy and public safety become susceptible to manipulation by whichever narratives are most effectively propagated, regardless of their connection to facts. Russia's multi-front hostile information campaigns were met with institutional responses from the EU, yet a Russian narrative was simultaneously adopted as policy by the U.S. president.
The core implication is a growing strategic asymmetry between offensive and defensive actions in the information domain. While Western responses like the 'FIMI toolbox' are designed to build resilience methodically, adversaries appear to be securing strategic victories by exploiting the political dynamics within open democracies. This raises critical questions about whether the current Western counter-influence posture is adequately calibrated to address threats that are not just informational but profoundly political.

[Download Report]

  • Weekly Report: Cyber-based hostile influence campaigns 11th - 17th August

[Listen to the Podcast]

[Introduction]

During the 11th to the 17th of August 2025, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.

Russian-backed influence operations dominated the cyber landscape this week, demonstrating a significant evolution in tactics and targeting. Kremlin-aligned groups are moving beyond simple propaganda, employing sophisticated methods like impersonating news organizations and fact-checkers, seeding large language models with state-approved narratives, and deploying AI-generated content to spoof legitimate media. These technical escalations are paired with a consistent strategy of exploiting and manufacturing societal divisions in target nations, particularly in Canada, where campaigns incited xenophobia against Indian and Ukrainian diasporas. Concurrently, the institutional decay in nations like Bulgaria created a permissive environment for Russian disinformation to flourish, turning domestic political chaos into a vector for broader European instability.

[Report Highlights]

A NewsGuard Reality Check article details how the Russian influence campaign “Matryoshka” is impersonating NewsGuard through fake videos to undermine its credibility.
A POLITICO article warns that as Russian group Storm-1679 escalates AI-driven disinformation by spoofing trusted media outlets, the U.S. government is simultaneously dismantling key agencies meant to counter foreign influence.
An article by the Institute for Strategic Dialogue (ISD) examines foreign information manipulation targeting Poland’s 2025 presidential election and its implications for democratic resilience.
An International Centre for Counter-Terrorism (ICCT) article argues that traditional counter-narratives fail against identity-based disinformation, often reinforcing the very beliefs they seek to challenge.
A BalkanInsight article reveals how Bulgaria’s political instability and weakened institutions have turned the country into a prime gateway for Russian disinformation into the EU.
An investigation uncovered 51 front companies driving large-scale election influence campaigns.

[Weekly Review]

The Kremlin ramps up projection tactics ahead of the Alaska conference
Matryoshka's TTPs Against NewsGuard – a Meta Campaign
Pravda Operation's LLM Seeding Targets Canadian Historical Narrative
Russian IO Deploys Xenophobic Tropes Against Canada's Indian Diaspora
Russian Group Spoofs News Outlets Using AI-Enhanced Disinformation
Razor-Thin Mandate in a Disinformation Crossfire
Disinformation as a Systemic Social Media Phenomenon
Report Advocates Competing With, Not Countering, Extremist Narratives
Bulgaria's Institutional Decay Enables EU-Wide Disinformation Threat

The Kremlin ramps up projection tactics ahead of the Alaska conference

A publication by EUvsDisinfo details how, in the run-up to the conference in Alaska, Russia’s disinformation strategy relied heavily on projection, accusing others of the very aggression and bad faith it commits. Moscow vehemently portrays itself as a key global power and peace seeker, yet this posture reflects its efforts to mask its severely weakened economy and frustrated military ambitions in Ukraine. The report details how Russia engages in bad-faith negotiations while it continues its military assault, targeting civilian infrastructure and undermining Ukraine’s economy.
The analysis concludes that Russia’s informational strategy is a smokescreen designed to distract from its own violations and battlefield atrocities.

Source: EUvsDisinfo, 2025. Alaska and the illusion of strength. [online] Available at: https://euvsdisinfo.eu/alaska-and-the-illusion-of-strength/

Matryoshka's TTPs Against NewsGuard – a Meta Campaign

An article from NewsGuard details a Russian malign influence campaign, dubbed "Matryoshka," that has begun directly impersonating NewsGuard to undermine its credibility. The campaign escalated after NewsGuard reported on its efforts to spread false claims about Moldova’s pro-Western government. In retaliation, Matryoshka produced fabricated videos styled as NewsGuard reports, falsely alleging corruption within the organization. The campaign’s latest tactic involves a layered falsehood: a fake NewsGuard "rebuttal" video that purports to correct a previous fabricated claim, stating it received "only 100 million Euros" from Moldova. This meta-campaign aims to overwhelm fact-checkers and erode public trust in information arbiters.

Source: NewsGuard, McKenzie Sadeghi, 2025. Russian Campaign Targets NewsGuard — Again. [online] Available at: https://www.newsguardrealitycheck.com/p/russian-campaign-targets-newsguard

Pravda Operation's LLM Seeding Targets Canadian Historical Narrative

An article by DisinfoWatch details a Russian smear campaign targeting Canada’s National Memorial to the Victims of Communism as part of a broader historical revisionism strategy. The effort, amplified by a Kremlin-linked influence group known as the "Pravda Operation," uses Telegram to link the memorial to an unrelated 2023 controversy in Canada's Parliament. This tactic aims to create domestic suspicion and discredit the monument. The analysis places the campaign in the context of Russia’s annual efforts to deflect from the anniversary of the 1939 Molotov-Ribbentrop Pact with Nazi Germany. A key goal of the Pravda Operation is to seed and manipulate Large Language Models (LLMs) with Kremlin-approved narratives, embedding propaganda into future AI technologies.

Source: DisinfoWatch, 2025. Canadian Memorial Targeted By Russian Pravda Operation. [online] Available at: https://disinfowatch.org/disinfo/canadian-memorial-targeted-by-russian-pravda-operation/
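LLM seeding works because large volumes of crawled web text flow into training and retrieval corpora with little provenance checking, so one first-line defense is screening documents by source domain before ingestion. The sketch below illustrates the idea; the blocklist entries and document URLs are hypothetical placeholders, and a real deployment would depend on curated, continuously updated lists of known propaganda domains:

```python
# Minimal sketch of a provenance filter applied before web text enters an
# LLM training or retrieval corpus. Blocklist entries and document URLs
# are hypothetical placeholders, not real attributions.
from urllib.parse import urlparse

BLOCKLIST = {"pravda-mirror.example", "newsfront.example"}  # hypothetical domains

documents = [
    {"url": "https://pravda-mirror.example/memorial-story", "text": "..."},
    {"url": "https://civic-archive.example/hansard-1993",   "text": "..."},
]

def allowed(doc: dict) -> bool:
    host = urlparse(doc["url"]).hostname or ""
    # reject the listed domains and any of their subdomains
    return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)

corpus = [doc for doc in documents if allowed(doc)]
print(f"kept {len(corpus)} of {len(documents)} documents")
```

Domain screening is easily defeated by laundering the same text through fresh mirrors, which is one reason operations of this kind keep registering new sites; it is a first filter, not a solution.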
Russian IO Deploys Xenophobic Tropes Against Canada's Indian Diaspora

A publication by DisinfoWatch reports that Kremlin-aligned propagandists are promoting racist disinformation to incite hatred against Canadians of Indian heritage. The campaign, amplified by outlets like Tsargrad.TV and the Pravda Network, falsely claims that Hindus control all three major political parties in Canada. This assertion is demonstrably false, as census data shows Hindus comprise about 2.3% of the population and hold only 1.2% of parliamentary seats. The analysis highlights specific tactics, such as substituting the NDP party's name with the Russian term "Narodniks" to invoke extremism and conflating Hindu and Sikh identities to create confusion. This campaign mirrors previous Russian efforts that targeted Ukrainian Canadians, revealing a consistent strategy of weaponizing cultural tensions to stoke xenophobic fears and undermine Canada's multicultural society.

Source: DisinfoWatch, 2025. Kremlin Propagandists Promoting anti-Indian Hate in Canada. [online] Available at: https://disinfowatch.org/disinfo/kremlin-propagandists-promoting-anti-indian-hate-in-canada/

Russian Group Spoofs News Outlets Using AI-Enhanced Disinformation

A POLITICO article reports that a pro-Russian propaganda group, tracked by Microsoft as Storm-1679, is intensifying its efforts to spread disinformation by spoofing reputable news organizations and leveraging artificial intelligence. The campaign produces fabricated content, particularly videos with AI-generated audio, that mimics the branding of outlets like the BBC, ABC News, and others. Storm-1679 strategically times its operations around high-profile news events, such as elections or diplomatic meetings, to maximize potential impact. While most of the group's content fails to gain traction, occasional successes, like a fake video amplified by high-profile American figures, validate its high-volume approach. This escalation of foreign influence operations comes as the U.S. administration is actively scaling back the federal agencies tasked with countering such threats.

Source: POLITICO, Dana Nickel, 2025. Russia is quietly churning out fake content posing as US news. [online] Available at: https://www.politico.com/news/2025/08/17/russia-us-news-media-disinformation-campaign-00512173?ICID=ref_fark&utm_content=link&utm_medium=website&utm_source=fark

Razor-Thin Mandate in a Disinformation Crossfire

An article from the Institute for Strategic Dialogue (ISD) reports that the 2025 Polish presidential election was heavily targeted by FIMI campaigns from Russia and Belarus pushing anti-EU, anti-Ukrainian, and anti-establishment narratives. Through Doppelganger, Operation Overload, the Pravda Network, and sanctioned Radio Belarus, alongside domestic nationalist actors, operators exploited platform gaps on X, Meta, and TikTok. Overall impact was limited by civil society mobilization and rapid-reaction networks, and no cyber incidents affected core voting. Yet the persistence of tactics since 2017 and 2021 signals an unchanged threat environment stoking divides over migration, Ukraine, and the economy. The piece urges a permanent Digital Services Coordinator, stricter DSA enforcement, and more media literacy and civil society capacity. With Karol Nawrocki winning by under one percentage point, even modest manipulation remains consequential in a polarized contest.

Source: FIMI-ISAC (Alliance4Europe, Debunk.org, GLOBSEC, EU DisinfoLab, DFRLab, Institute for Strategic Dialogue (ISD)), 2025. FDEI for election integrity (Jan 2025 – Jan 2027): Polish election country report 2025. [online] Available at: https://fimi-isac.org/wp-content/uploads/2025/08/FDEI-POLISH-ELECTION-COUNTRY-REPORT-2025-2.pdf

Disinformation as a Systemic Social Media Phenomenon

A new article in Sage Journals reframes disinformation not as isolated falsehoods but as a systemic phenomenon actively amplified by the architecture of social media. The analysis by Raquel Recuero argues that platforms confer "structural advantages" upon disinformation through algorithms and engagement-driven economic models, enabling it to outpace accurate information. This systemic view explains why simple content debunking is insufficient. Disinformation, often a mix of true and false elements, adapts and persists through a feedback loop involving platforms, actors, and audiences.
The article concludes that effective analysis requires a shift from examining individual pieces of content to understanding the broader social and technical dynamics that constitute the entire communicative system.

Source: Sage Journals, Raquel Recuero, 2025. A systemic framework for disinformation on social media platforms. [online] Available at: https://journals.sagepub.com/doi/full/10.1177/29768624251367199

Report Advocates Competing With, Not Countering, Extremist Narratives

In an article from ICCT, the authors argue that identity-based disinformation (IBD) is a primary accelerant of extremist mobilisation and communal violence, and that prevailing countermeasures underperform. They show how IBD exploits pre-existing biases and identity needs to polarise communities, citing Myanmar’s anti-Rohingya campaigns and the Great Replacement narrative, while noting the migration of content into closed channels and the “jailbreaking” of AI systems by far-right actors. Traditional counter-narratives rarely shift violent intent and can backfire when beliefs are identity-bound. As a remedy, the piece advances the Equal-Alternative Narrative (EAN) model, operationalised through Positive Identity Expansion, which provides prosocial, culturally authentic pathways that satisfy needs for belonging, status, and purpose without confronting ideology head-on. Examples include reframing Buddhist protection in Myanmar around compassion and virtue rather than violence. The Reclaiming Our Narratives toolkit from OICD translates this into practice, emphasising identity mapping, credible messengers, narrative literacy alongside media literacy, and needs-based interventions aligned with initiatives such as the UK’s Healthy Identity Intervention.

Source: ICCT, Anna Kruglova and Bruce White, Countering identity-based disinformation through positive narrative expansion. [online] Available at: https://icct.nl/publication/countering-identity-based-disinformation-through-positive-narrative-expansion

Bulgaria's Institutional Decay Enables EU-Wide Disinformation Threat

With seven national elections in just three years, Bulgaria's political chaos has created an institutional vacuum where disinformation now metastasizes. A BalkanInsight article explains how this environment has made Bulgarian citizens the most vulnerable in Europe to false narratives and has allowed the country to become a backdoor for Russian influence into the EU. The analysis points to a captured state broadcaster, the suspension of a national anti-disinformation coalition, and the weaponization of social media by politically connected networks. These internal failures have enabled large-scale, Russian-funded campaigns targeting elections and sowing discord over the adoption of the euro. The authors warn that unless the EU prioritizes the enforcement of existing standards, new laws will fail to address the root cause of the problem: the collapse of institutional credibility.

Source: BalkanInsight, Alexandra Karppi and Vanesa Valcheva, 2025. In Dysfunctional Bulgaria, Disinformation Thrives and Spills Over into EU. [online] Available at: https://balkaninsight.com/2025/08/12/in-dysfunctional-bulgaria-disinformation-thrives-and-spills-over-into-eu/

[Takeaways]

The core implication of the observed activities is that the modern information battlefield is no longer about the message, but the system that delivers it.
Malign actors are successfully targeting the foundational elements of public trust: fact-checking bodies, news media, and the integrity of AI-driven information technologies. The impersonation of NewsGuard and the deliberate seeding of LLMs represent a strategic effort to corrupt the very tools societies use to discern truth. Consequently, defensive strategies must evolve from content debunking to securing the information ecosystem itself, focusing on platform architecture, AI integrity, and the psychological resilience of populations.

[Download Report]

  • Beyond Contractors: China’s Cyber Militia Model

China’s Cyber Militia Model

A recent article by Margin Research highlights how extensively China has integrated civilian companies and quasi-private organizations into its security and defence apparatus. [i]

Figure 1 - Courtesy of Margin Research [ii]

The People’s Republic has developed its own model in which private enterprises, state authorities, and military structures are tightly interwoven, a structure that is relevant across all forms of information operations (IO) and influence activities. While these companies operate under market principles, they are organizationally embedded within China’s military and security architecture. The model is distinct: beyond contractors and fronts, Beijing employs cyber militias. Civilian companies and institutes, such as the security firm Qihoo 360 [iii], are formally linked to defence structures, including the People’s Armed Police Forces Departments (PAFD). This results in a hybrid system, illustrated in the figures below.

Figure 2 - Courtesy of Margin Research [iv]

Figure 3 - Courtesy of Margin Research [v]

For China, cutout and front companies serve as proxies to conduct highly sensitive operations below the radar. Margin Research thus highlights that this structural approach fundamentally differs from the American model. In the U.S., the government relies on private contractors to provide cyber and information operations support, but these entities remain outside the military chain of command and operate under clear contracts and oversight. Other actors, such as Russia and Iran, use hybrid models, combining contractors with front companies and cutouts. These civilian-looking firms act as covert state proxies, deliberately blurring the line between state and non-state actors to preserve plausible deniability.

This model makes clear that dealing with China means engaging with a system where private and state interests are deeply intertwined. Its relevance extends beyond information operations to all sectors reliant on Chinese technology, as underscored by the recent discovery of rogue communication devices in Chinese solar inverters. [vi]

[Footnotes]

[i] Margin Research, Green, K., 2025. Cyber Militias Redux: Or, "Why Your Boss Might Also Be Your Platoon Leader in China". Available at: https://margin.re/2025/08/cyber-militias-redux-or-why-your-boss-might-also-be-your-platoon-leader-in-china-2/
[ii] Ibid.
[iii] Ibid.
[iv] Ibid.
[v] Ibid.
[vi] Source: Reuters, 2025. Rogue communication devices found in Chinese solar power inverters. [online] Published 14 May 2025. Available at: https://www.reuters.com/sustainability/climate-energy/ghost-machine-rogue-communication-devices-found-chinese-inverters-2025-05-14/
