
CRC Weekly: Cyber-based hostile influence campaigns 8th-14th September

  • Writer: CRC
  • Sep 17
  • 11 min read

Updated: Sep 25



[Introduction]


Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.



[Report Highlights]


  • Sophisticated AI-generated images and videos left 73 percent of surveyed Americans believing or uncertain about false Trump-Epstein narratives, showcasing synthetic media’s power. - NewsGuard

  • Leaked documents challenge the perception of China’s digital control as purely ideological, revealing that market forces drive its global surveillance and influence industry. - Wired

  • Political violence following Charlie Kirk’s killing creates fertile ground for Russia and China to intensify disinformation and destabilize American institutions. - FIMI-ISAC

  • Moldova’s weak media regulations allow banned pro-Russian outlets to persist, creating vulnerabilities exploited by foreign information manipulation and interference. - FIMI-ISAC

  • Social media users exploit video editing glitches to falsely claim AI generation, propagating conspiracy theories around Trump’s recent address. - NewsGuard

  • Moscow is systematically weaponizing international forums, proclaiming peace and human rights while actively undermining them through aggressive actions and pervasive disinformation campaigns. - EUvsDisinfo

  • Weak platform moderation combined with anonymous channels allows foreign actors to weaponize Telegram for election disruption and societal division across EU member states. - EUvsDisinfo


[Weekly Review]


  1. Foreign and Domestic Actors Weaponize Disinformation in Philippine Midterms

  2. Russia and China Target Czechia’s Elections with Advanced Disinformation Campaigns

  3. Political Violence Amplifies U.S. Vulnerability to Foreign Disinformation Operations

  4. Moldova’s Vulnerable Information Landscape Faces Escalating Foreign Influence Operations

  5. Rapid Disinformation Spread: False AI Claims Distort Trump’s Post-Assassination Address

  6. AI Chatbots Amplify Misinformation During Breaking News Crises

  7. NewsGuard Index: Persistent American Vulnerability to Digital Misinformation Continues

  8. Leaked Files Expose China’s Global Export of Disinformation and Surveillance Capabilities

  9. Kremlin’s UNGA Performance: A Masterclass in Deceptive Peacemaking and Reality Distortion

  10. Disinformation Blunts Accountability: Russia’s Pattern of Denying War Crimes

  11. Medvedev, Dmitriev Exploit Kirk’s Killing to Blame Left, Undermine Kyiv Support

  12. Telegram’s Strategic Underreporting Challenges EU Digital Governance and Information Integrity

Foreign and Domestic Actors Weaponize Disinformation in Philippine Midterms


Doublethink Lab has released an analysis exploring how foreign information manipulation and interference (FIMI) and disinformation are deeply embedded in the Philippines’ electoral landscape, significantly impacting its political environment and democratic integrity. Chinese-linked influence operations, deploying PR firms and “keyboard warrior” networks, actively amplified pro-Duterte narratives and undermined trust in democratic processes during the 2025 midterm elections. These sophisticated campaigns utilize AI-generated images, viral conspiracy theories, and coordinated social media activity across platforms like X, Facebook, and TikTok. Operations blend domestic political messaging, such as the #BringPRRDHome hashtag, with pro-PRC content related to the West Philippine Sea, strategically discrediting certain candidates while supporting others. The pervasive nature of FIMI extends to foreign interference cases, exemplified by the disqualified mayor Alice Guo, a Chinese national allegedly linked to criminal activities and strategic locations. Election monitors emphasize the severe threat these manipulations pose to democracy and foreign policy, advocating for stronger safeguards and collaborative counter-FIMI strategies. Initiatives like the Shadow FIN network and the “Bayanihan” volunteer model demonstrate a convergent, multi-stakeholder approach to building resilience against hostile information operations and securing the digital ecosystem, ensuring informed democratic participation amid evolving threats.


Source: Doublethink Lab, September 2025, Available online at: https://medium.com/doublethinklab/bayanihan-for-eleksyon2025-philippine-midterms-monitoring-263ce456cb97

Russia and China Target Czechia’s Elections with Advanced Disinformation Campaigns


A report by FIMI-ISAC reveals that Czechia’s 2025 parliamentary elections face significant Foreign Information Manipulation and Interference (FIMI) risks, primarily from Russia and China. These actors exploit polarizing issues such as the war in Ukraine, energy security, migration, and EU relations, aiming to deepen social divides and erode trust in democratic institutions. Problematic outlets, including neČT24, Telegram-based ecosystems, and foreign-sponsored platforms like Voice of Europe, amplify divisive narratives, disseminating anti-refugee rhetoric and claims of electoral fraud, some of which feature synthetic audio and AI-generated content targeting President Petr Pavel. While Czechia possesses resilient electoral infrastructure, its information space remains vulnerable due to delayed Digital Services Act implementation, limited state capacity to analyze malign influence, polarized political discourse, and domestic actors amplifying foreign narratives. Countering these evolving threats requires a comprehensive, whole-of-society response, emphasizing closer cooperation across state institutions, civil society, independent media, and EU-level mechanisms, alongside continuous monitoring, proactive risk communication, and investment in institutional capabilities. This situation underscores the persistent challenge of safeguarding democratic integrity against sophisticated digital hostile influence operations.


Source: FIMI-ISAC, 2025, Czechia Country Election Risk Assessment (CERA), Available online at: https://fimi-isac.org/wp-content/uploads/2025/09/FRT-24_Czechia-Country-Election-Risk-Assessment-CERA_FINAL.pdf

Political Violence Amplifies U.S. Vulnerability to Foreign Disinformation Operations


The killing of conservative commentator Charlie Kirk in Utah exposes deepening U.S. political violence and social discord, creating opportunities for foreign adversaries like Russia and China to exploit societal rifts. Newsweek reports that these nations are accused of leveraging such divisions through disinformation campaigns to inflame tensions and undermine American governance. The article explains how this incident intensifies focus on political violence and misinformation, providing foreign actors with a fresh flashpoint to manipulate public perception and destabilize the political landscape. Utah Governor Spencer Cox warned that bots from Russia and China actively encourage violence and misinformation, highlighting the intersection of domestic unrest, social media amplification, and foreign exploitation. China has previously exploited U.S. social crises, including the January 6 Capitol riot, by amplifying divisive narratives and using networks like “Spamouflage” to impersonate voters and spread discord, increasingly with AI-generated content. Similarly, Russia’s 2016 election interference campaigns employed fake personas and troll farms to exacerbate racial, ideological, and cultural divisions, tactics that have persisted. Modern technology, including AI, enables rapid spread of these false narratives across platforms like X, Facebook, and Telegram, targeting polarized audiences. Kirk’s murder underscores how moments of unrest are utilized to weaken U.S. cohesion and credibility globally.


Source: Newsweek, Amir Daftari, Sep 2025, Available online at: https://www.newsweek.com/charlie-kirk-china-russia-oppourtunity-us-division-2128734

Moldova’s Vulnerable Information Landscape Faces Escalating Foreign Influence Operations


FIMI-ISAC’s Country Election Risk Assessment identifies significant threats to Moldova’s September 2025 parliamentary elections, primarily from Russian-led hybrid operations involving extensive disinformation and foreign information manipulation. These efforts aim to derail Moldova’s pro-EU path by spreading anti-EU, anti-Western, and anti-government narratives, often through the PENA model, labeling President Maia Sandu as a Western puppet, and normalizing electoral fraud. The cyfluence landscape is increasingly complex, utilizing AI-generated deepfakes, forged documents, and sophisticated cross-platform campaigns for amplification. Moldova’s media environment, marked by weak regulations and the resurgence of banned pro-Russian outlets via mirror sites and social media, exacerbates these vulnerabilities. Covert financing, bots, and encrypted messaging applications further facilitate protest organization and propaganda dissemination, complicating attribution. The report assesses the overall risk to electoral integrity as medium to high, predicting intensified activity as election day approaches, and urges proactive measures including inter-agency coordination, digital platform partnerships, civil society monitoring, and robust public communication to safeguard democratic processes.


Source: FIMI-ISAC, 2025, Escalating Disinformation Threatens Moldova’s 2025 Elections, Available online at: https://fimi-isac.org/wp-content/uploads/2025/09/Country-Report-Moldova-Risk-Assessment.pdf

Rapid Disinformation Spread: False AI Claims Distort Trump’s Post-Assassination Address


NewsGuard definitively debunks widespread social media claims alleging President Donald Trump’s Oval Office address, delivered after conservative commentator Charlie Kirk’s assassination, was an AI-generated deepfake. Following Kirk’s Sept. 10 killing, Trump posted a speech on Truth Social, prompting anti-Trump users to highlight unnatural hand movements and background leaf shifts as signs of artificial intelligence. These false assertions rapidly propagated, with some suggesting the purported AI indicated Trump’s poor health or even implicated his administration in Kirk’s death, potentially as a diversion from the Jeffrey Epstein case. However, analysis by AI detection software like Hive and cybersecurity experts from GetReal Labs found no evidence of AI generation in either the video or audio. The observed irregularities are attributed to a common video editing technique known as a “morph cut,” designed to seamlessly join segments and remove verbal errors, which misinformed users misinterpreted as AI glitches. This incident critically illustrates how visually ambiguous digital content can be weaponized as disinformation, quickly disseminated to spread politically charged conspiracy theories and erode public trust, directly impacting the Cyfluence landscape.


Source: NewsGuard, Sep 11, 2025, Available online at: https://www.newsguardrealitycheck.com/p/trumps-address-on-charlie-kirk-is

AI Chatbots Amplify Misinformation During Breaking News Crises


AI-generated ‘fact-checks’ are actively spreading falsehoods and fueling confusion during breaking news events, exemplified by the Charlie Kirk assassination. NewsGuard reports that as social media users sought clarity on the Sept. 10 incident, AI chatbots like Perplexity and Grok issued contradictory or outright inaccurate information, including denying Kirk’s death, fabricating a suspect, and falsely linking him to the Myrotvorets blacklist. This amplification of confusion occurs amidst major tech companies scaling back human fact-checkers, leading to a vacuum that AI, incapable of real-time human-like verification, fills with confident but erroneous responses. Furthermore, the accessibility of generative AI facilitates the ‘liar’s dividend,’ enabling users to baselessly label authentic footage as fabricated, thus casting doubt on legitimate content. Despite repeated examples of these tools confidently repeating falsehoods, many users continue to treat AI systems as reliable sources during crises, posing a significant challenge to information integrity and exacerbating the hostile influence landscape.


Source: NewsGuard Reality Check, McKenzie Sadeghi, Sep 11, 2025, Available online at: https://www.newsguardrealitycheck.com/p/after-kirk-assassination-ai-fact

NewsGuard Index: Persistent American Vulnerability to Digital Misinformation Continues


The latest Reality Gap Index from NewsGuard reveals that nearly two-thirds of Americans (64 percent) believed at least one of August 2025’s top false online claims, mirroring July’s high rate. This ongoing measurement highlights a significant vulnerability to digital hostile influence, with AI-generated media playing a pivotal role. Specifically, 73 percent of Americans believed or were uncertain about AI-fabricated images and videos falsely depicting Donald Trump and Jeffrey Epstein with underage girls. This underscores the potent and deceptive nature of synthetic media in shaping public perception. Other significant falsehoods included an inaccurate claim that President Trump had declared martial law in Washington D.C., and a narrative concerning $100 million missing from a California wildfire charity that left many respondents uncertain. The index, based on a YouGov survey of 1,000 Americans, highlights the persistent challenge of online misinformation and its deep penetration into mainstream public belief, indicating a critical landscape for cyfluence operations where fabricated content can readily sow discord and confusion.


Source: NewsGuard, Samantha Tanner, Sep 09, 2025, Available online at: https://www.newsguardrealitycheck.com/p/nearly-two-thirds-of-americans-believe

Leaked Files Expose China’s Global Export of Disinformation and Surveillance Capabilities


Leaked documents from Chinese firms Geedge Networks and GoLaxy expose a significant commercialization of censorship, surveillance, and propaganda technologies, challenging the traditional view of China’s digital control. A recent article from Wired explains how Geedge offers a ‘Great Firewall as a service’ to nations like Kazakhstan, Pakistan, Ethiopia, and Myanmar, enabling governments to monitor, intercept, and manipulate internet traffic. Concurrently, GoLaxy leverages AI for extensive social media data collection, political mapping, and pushing targeted narratives through fabricated accounts. Its clients include the Chinese Communist Party, government, and military, with internal documents boasting capabilities akin to Cambridge Analytica in shaping discourse around sensitive topics such as Taiwan, Hong Kong, and U.S. elections. Researchers highlight that these revelations demonstrate market forces actively shaping digital authoritarianism, with companies competing for contracts and setting sales targets. This commercialization extends beyond mere censorship into active disinformation, as targeted propaganda, synthetic profiles, and narrative manipulation are openly marketed to government clients. The findings underscore a concerning global proliferation of sophisticated hostile influence capabilities, driven by profit motives within China’s tech sector.


Source: Wired, Z. Yang & L. Matsakis, Sep 11, 2025, Available online at: https://www.wired.com/story/made-in-china-how-chinas-surveillance-industry-actually-works/

Kremlin’s UNGA Performance: A Masterclass in Deceptive Peacemaking and Reality Distortion


EUvsDisinfo reveals the profound chasm between Moscow’s pronouncements at the United Nations General Assembly and its hostile actions, underscoring a sophisticated, global disinformation campaign. While Russia champions peace, development, and human rights at UNGA, its actions consistently subvert these ideals, deploying narratives of projection, denial, and distraction. The Kremlin’s “peace” proposals are, in reality, demands for Ukrainian surrender, masked by false accusations against Kyiv and the West for prolonging conflict. Concurrently, Moscow propagates the falsehood that Western sanctions harm Europe more, despite mounting evidence of Russia’s stagnating civilian economy and severe budget strain due to war production. Russia further attempts to position itself as a protector of the “Global South” against Western “bullying,” even as its documented interventions from Syria to the Sahel reveal a pattern of destabilization and state capture. Disinformation tactics extend to fabricating claims of Ukrainian chemical weapons use, while credible reports confirm Russia’s own deployment of such agents. Most disturbingly, Russia denies the forced deportation of over 20,000 Ukrainian children, a confirmed war crime linked to its demographic crisis, portraying these abductions as mere evacuations. This systematic deceit makes a mockery of international principles, forming a core component of Russia’s hostile influence operations.


Disinformation Blunts Accountability: Russia’s Pattern of Denying War Crimes


Research by DisinfoWatch shows that the Kremlin falsely accused Ukraine of fabricating mass casualty figures following a Russian glide-bomb strike in Donetsk. Within hours of the September 9 incident, RT published denials from an unnamed Russian Defense Ministry source, claiming the story originated with President Zelensky and was amplified by ‘Ukrainian propaganda.’ This narrative asserted ‘no strikes in the area’ and that the crater did not match an aerial bomb impact, fitting an agenda of discrediting Kyiv’s care for Donbas residents. However, the attack is independently and extensively documented: Reuters, AP News, and the Los Angeles Times published on-scene reporting and imagery confirming the casualties, and the UN Humanitarian Coordinator for Ukraine issued an official condemnation, directly contradicting the claim that the story began with a single politician. Ukrainian officials beyond Zelensky also reported the strike, while Kyiv’s provision of frontline pensions is well documented, countering RT’s insinuations. The incident exemplifies a textbook Kremlin denial strategy: attack the messengers and inject pseudo-forensics to muddy clear evidence. Such tactics aim to blunt outrage and accountability for suspected war crimes against civilians, forming a critical component of Russia’s cyfluence operations even as Moscow claims it does not target civilians.


Medvedev, Dmitriev Exploit Kirk’s Killing to Blame Left, Undermine Kyiv Support


Newsweek reports that Russian officials, including former president Dmitry Medvedev and Kremlin negotiator Kirill Dmitriev, have exploited the assassination of conservative activist Charlie Kirk to exacerbate U.S. political divisions and advance anti-Ukraine narratives. These actions represent a clear cyber hostile influence operation, leveraging a domestic tragedy to sow discord and undermine Western support for Kyiv. Kirk, known for his anti-NATO stance and criticism of Ukrainian President Zelensky—whom he once called a ‘puppet of the CIA’—provided fertile ground for this disinformation. Medvedev specifically blamed ‘left-wing liberal scum who support Banderite Kyiv’ for the murder, falsely associating Ukraine with Nazi sympathies, while Dmitriev amplified content celebrating Kirk’s death and echoing sentiments like ‘The Left is the party of murder.’ This exploitation aims to falsely link Ukraine supporters with violence and pressure right-wing Americans to withdraw their backing for Kyiv, aligning with broader Kremlin propaganda. Mark Shanahan, a U.S. politics expert, noted this incident highlights how America’s already hyper-polarized political landscape offers ripe opportunities for foreign adversaries to amplify internal conflicts and destabilize discourse.


Source: Newsweek, Sep 11, 2025, Available online at: https://www.newsweek.com/kirk-killing-medvedev-maga-2128048

Telegram’s Strategic Underreporting Challenges EU Digital Governance and Information Integrity


An informative EUvsDisinfo article highlights Telegram’s emergence as a primary conduit for hostile digital influence campaigns across Europe, directly challenging the EU’s information space and digital governance frameworks. Since its 2013 founding, Telegram has rapidly expanded, boasting 1 billion users globally by 2025, driven by its multi-purpose functionality, perceived security, and minimal content moderation. These factors, coupled with co-founder Pavel Durov’s “freedom of speech” branding, make it attractive to malicious actors. Critically, Telegram appears to strategically underreport its EU user base to evade designation as a Very Large Online Platform (VLOP) under the EU’s Digital Services Act, sidestepping stringent content moderation and accountability measures. Case studies from Spain, Germany, France, and Poland illustrate Telegram’s use for pivoting from anti-vaccination narratives to pro-Kremlin disinformation, disrupting elections, and amplifying content banned elsewhere. Ukraine’s experience serves as a stark warning, where Telegram’s unregulated expansion has normalized anonymous channels as primary news sources, enabling Russian actors to conduct pervasive influence operations and foster societal division. The EU must heed these lessons, implementing robust regulation and transparency to safeguard democratic values from Telegram’s corrosive influence.


Source: EUvsDisinfo, P. Burdiak, O. Monastyrskyi & O. Tretyakov-Grodzevych, September 08, 2025, Available online at: https://euvsdisinfo.eu/eus-telegram-dilemma-the-rise-of-unchecked-influence/

[Takeaways]


This week's reporting underscores the persistent and evolving nature of Russian and Chinese information operations, which continue to target democratic vulnerabilities globally. The established strategy remains twofold: sustained campaigns to degrade electoral integrity in nations like the Philippines, Moldova, and Czechia, alongside the opportunistic weaponization of domestic crises in the U.S. to deepen polarization.


Generative AI remains a key force multiplier, its utility extending beyond creating synthetic content to actively degrading the information commons through the “liar’s dividend” and the misinforming output of AI chatbots. This hostile activity is enabled by under-regulated platforms and amplified by a now-established strategic trend: the commercialization of digital authoritarianism. The continued export of influence-as-a-service by Chinese firms ensures that the capabilities to erode democratic cohesion are becoming more accessible, solidifying a long-term, systemic challenge to open societies.

 


[CRC Glossary]


The Cyfluence Research Centre has relaunched the CRC Glossary. This initiative aims to serve as a shared lexicon of both foundational and emerging terms that shape the field. To this end, the Glossary is designed to be a continually updated resource, with new entries added weekly. We see this as a collaborative project and strongly encourage input from the expert community. The goal is to reduce the ambiguous or conflicting terminology that can hinder collaborative work and effective communication with the general public.


We invite you to submit additions, changes, or corrections via the form on our website.

