
CRC Weekly: Cyber-based hostile influence campaigns 27th October - 2nd November 2025

  • Writer: CRC
  • Nov 7
  • 8 min read

Updated: Nov 12


[Introduction]


Cyber-based hostile influence campaigns aim to shape the views of target audiences by promoting information and disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.


During the past week we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.



[Contents]




[Report Highlights]


An independent analysis found that nearly one-third of hostile online comments during Japan's election were linked to Russia-aligned accounts. - Nippon


A coordinated smear campaign has falsely attributed 14 international properties to President Zelenskyy, including estates once owned by Nazi figures. - NewsGuard's Reality Check


Moscow is exploiting climate issues as a front in its hybrid war, aiming to fracture EU consensus and protect its fossil fuel economy. - EUvsDisinfo


Beijing is justifying its censorship of domestic grievances by framing online dissent as a form of "Western ideological infiltration." - Jamestown


When asked about the war in Ukraine, major AI chatbots cited sanctioned Russian state media in nearly 18 percent of responses. - WIRED


Despite moderation policies, top AI video tools are generating synthetic propaganda, including Holocaust denial and violent imagery. - Cybernews


Influence campaigns are "pivoting" entire networks of old accounts, turning COVID-era anti-vaccine profiles into tools for anti-Ukrainian messaging. - EU DisinfoLab


As the Voice of America retreats from the global stage, Russian and Chinese state media are expanding operations to fill the information void. - GIOR



[Weekly Review]


1. Russian Influence Operations Target Japan's Election and Poison AI Training Data


An article from Nippon has revealed that Russian-linked information operations targeted Japan’s House of Councillors election by using networks of bots and trolls to sow doubt and amplify divisive narratives. The hostile influence campaign relied on a large number of small, inauthentic accounts to infiltrate online conversations and amplify pro-Kremlin messaging. An independent analysis by international affairs expert Ichihara Maiko identified that approximately 32 percent of hostile comments on a single post were connected to Russia-aligned accounts. Beyond direct engagement, the operation also sought to contaminate the information ecosystem by deliberately flooding the web with pro-Kremlin content to "groom" large language models. Japanese-language hubs, such as Pravda Nihon, republished hundreds of pro-Russia items daily. This strategy is designed to bias search results through query manipulation and pollute the training data used by AI chatbots, leading to the proliferation of AI slop. Independent audits confirmed that leading generative AI systems now return false or misleading information on controversial topics more frequently.


Source: Nippon, 'Japan's Upper House Election Reveals how Russian Influence Operations Infecting AI with Flood of Propaganda, Stoking Divisions' Available Online: https://www.nippon.com/en/in-depth/d01170/japan%E2%80%99s-upper-house-election-reveals-how-russian-influence-operations-infecting-ai-with-.html


2. Fake Websites and Forged Documents Fuel Russian Smear Against Zelenskyy


Russian state media and affiliated online networks have fabricated a series of false claims as part of a smear campaign accusing Ukrainian President Volodymyr Zelenskyy of owning a real estate empire worth over $682 million. A NewsGuard Reality Check investigation revealed that the latest hoax alleged Zelenskyy purchased a $79 million ranch in Wyoming, a claim originating from a fake website mimicking a legitimate U.S. real estate firm. This marks the 14th property Russia has falsely attributed to the Ukrainian leader. The disinformation is disseminated through a network of fake websites, often featuring forged documents, before being amplified across major social media platforms such as X, Facebook, and TikTok. Propagandists have even alleged that Zelenskyy's supposed purchases included properties once owned by Nazi figures. Despite repeated debunking, the false narratives continue to circulate widely. Both NewsGuard and Forbes have confirmed that Zelenskyy's actual assets are valued under $20 million, finding no evidence of misused Western aid.


Source: NewsGuard Reality Check, Forbes, ‘Russian Fake: Mapping Zelensky’s Made-Up $682 Million Real Estate Portfolio’ Available Online: https://www.newsguardrealitycheck.com/p/russian-fake-mapping-zelenskys-made



3. Kremlin's Information Operations Target EU Climate Policy to Protect War Economy


The Kremlin is conducting information operations that weaponize climate change narratives to advance its geopolitical goals and undermine European unity. An article by EUvsDisinfo explains that these campaigns are a component of Russia’s broader hybrid warfare strategy against the West. Pro-Kremlin media outlets deliberately frame the European Union's Green Deal as an elitist policy designed to harm key sectors like agriculture, using claims that it is "killing farmers." This tactic of cognitive warfare aims to erode public support for environmental initiatives, which are seen by Moscow as a direct threat to its fossil fuel-dependent economy and its ability to exert energy-related pressure on Europe. By portraying decarbonization as self-destructive, the disinformation seeks to fracture social consensus within the EU, protect Russian energy exports, and weaken the bloc's resolve on sanctions and energy independence.


Source: EUvsDisinfo, 'Weaponising climate change to undermine the West' Available Online: https://euvsdisinfo.eu/weaponising-climate-change-to-undermine-the-west/


4. China's 'Clean and Bright' Campaign Aims to Control the Digital Narrative


The Cyberspace Administration of China (CAC) has launched a new "clean and bright" campaign that redefines online criticism and social frustration as "negative energy" that endangers national security. An article published by The Jamestown Foundation's China Brief explains that the campaign targets posts discussing unemployment, gender inequality, and social anxiety, portraying them as products of "Western ideological infiltration." This strategy reframes censorship as a necessary defense against cognitive warfare, empowering regulators to erase narratives that challenge the Chinese Communist Party's (CCP) image of a harmonious society. This effort is a deepening of the CCP's comprehensive system of propaganda and ideological management, or Xuanchuan. Influencers discussing youth job struggles have already been banned, reflecting the government's push to enforce an "authorized reality." By linking social stability to "total national security," Beijing normalizes censorship as a security measure and may be creating an exportable model of digital authoritarianism for other governments seeking to justify repression.


Source: The Jamestown Foundation, ‘Beijing’s War on “Negative Energy”’ Available Online: https://jamestown.org/program/beijings-war-on-negative-energy/


5. Russian Propaganda Networks Are Successfully Poisoning Major AI Chatbot Training Data


Generative AI systems are proving vulnerable to manipulation by Russian information warfare tactics, with leading chatbots frequently reproducing content from sanctioned state media. A study by the Institute for Strategic Dialogue (ISD), covered by WIRED, found that AI models exhibit a form of confirmation bias, delivering more pro-Kremlin content when users enter biased or manipulative prompts. This vulnerability is being actively exploited by Russian networks like the "Pravda" operation, which are deliberately working to "poison" the data that large language models (LLMs) are trained on. By flooding the information ecosystem with false narratives, these actors ensure their propaganda is ingested and later presented as factual by Western AI tools. The findings highlight a significant challenge for platform regulation, as the very architecture of current AI systems can be turned into a vector for disseminating state-sponsored disinformation.


Source: WIRED, 'Chatbots Are Pushing Sanctioned Russian Propaganda' Available Online: https://www.wired.com/story/chatbots-are-pushing-sanctioned-russian-propaganda/


6. Leading AI Video Generators Are Producing Antisemitic and Extremist Synthetic Propaganda


The proliferation of synthetic propaganda is being accelerated by the failure of leading AI video generators to block the creation of extremist and hateful content. A new study from the Anti-Defamation League (ADL) found that top platforms produced antisemitic content, including Holocaust denial and violent tropes, in at least 40% of test cases when prompted with hateful text. Cybernews reports that despite stated moderation policies, these systems consistently failed to filter out harmful narratives, demonstrating a significant vulnerability in their design. The ADL warns that this capability not only allows malicious actors to create high volumes of disinformation but also poses a direct threat to historical memory and online safety. The findings illustrate how AI tools, trained on vast and often unfiltered datasets from the internet, can become powerful engines for amplifying and normalizing extremist ideologies.


Source: Cybernews, Anti-Defamation League (ADL), ‘Popular AI video generators amplify antisemitic tropes’ Available Online: https://cybernews.com/ai-news/ai-videos-antisemitism/


7. Hostile Actors Repurpose Old Social Media Accounts to Target Poland


A factsheet published by EU DisinfoLab details how Poland's information space has been shaped by recurring disinformation waves pushed by far-right activists, politicized media, and Russia-aligned networks. The hostile influence campaigns have focused on anti-immigrant, anti-vaccine, anti-Ukrainian, and culture-war themes. Common tactics include "narrative pivoting," where repurposed accounts from the COVID era were switched to anti-Ukrainian messaging, and the amplification of rumors through bot and troll activity. Fabricated materials, such as a forged ministry letter and an AI-generated video, have also been used to inflame grievances. These information operations aim to polarize society, degrade trust in institutions, and distort policy debates on migration, public health, and EU agreements. Russia's invasion of Ukraine served as an accelerant, with Kremlin-linked narratives exploiting economic strains and election cycles. The response remains fragmented, and the politicization of public broadcasting risks laundering these narratives into the mainstream, contributing to widespread information disorder.


Source: EU DisinfoLab, 'The disinformation landscape in Poland' Available Online: https://www.disinfo.eu/publications/disinformation-landscape-in-poland/


8. VOA Cutbacks Create Strategic Void for State-Sponsored Propaganda Operations


The scaling back of U.S. international broadcasting is weakening American soft power and ceding narrative control to adversarial states in the global information war. An article in the Global Influence Operations Report (GIOR) details how the operational reductions at Voice of America (VOA) are creating a strategic vacuum that is being actively filled by Russia’s RT/Sputnik apparatus and the China Media Group. These state-sponsored actors are expanding their own information operations into regions where VOA was once a primary source of independent news. This shift represents a significant setback for U.S. strategic communications, as it removes a credible voice from contested information ecosystems and emboldens authoritarian regimes. By relinquishing its role in these environments, the U.S. allows hostile actors to more easily shape perceptions and advance their geopolitical objectives without counterbalance.


Source: GIOR, 'Voice of America Shutdown Benefits Russia, China: GOP Warns - Global Influence Operations Report' Available Online: https://www.global-influence-ops.com/voice-of-america-shutdown-benefits-russia-china-gop-warns/


9. Foreign Actors Use Coordinated Betting to Manipulate NYC Election Perceptions


Blockchain analysis has revealed a coordinated effort to manipulate political prediction markets, representing a novel vector for platform-enabled foreign interference. According to a report in The New York Post, investigators found that a small number of digital wallets, funded overwhelmingly from offshore exchanges in China and the Middle East, were responsible for a disproportionate volume of bets on a New York City mayoral candidate. This activity, which appears automated and is not financially rational, constitutes a form of digital astroturfing designed to artificially inflate the candidate's perceived support. Experts warn that because media outlets and campaigns often cite these markets as indicators of public sentiment, such manipulation can distort the political narrative and potentially discourage voter turnout by creating a false sense of inevitability. The incident raises serious questions about the integrity of data from unregulated financial platforms in an electoral context.


Source: The New York Post, ‘Foreign betting markets could influence NYC election — as data shows pro-Mamdani bets from China, Middle East skewing odds’ Available Online: https://nypost.com/2025/10/28/business/pro-mamdani-bets-from-china-middle-east-skewing-market-odds-experts/


[CRC Glossary]


The modern information environment is projected to keep growing in sophistication and complexity. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.





[Download Report]