- CRC Weekly: Cyber-based hostile influence campaigns 17th-23rd November 2025
[Introduction]
Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.
[Contents]
[Introduction]
[Report Highlights]
[Weekly Review]
1. Russia Shifts Strategic Budget to Prioritize Digital Information Operations
2. Pro-Russian Cyberattacks Target Danish Political Websites During Election Period
3. Russia Incorporating Cartoons and Animation in African Narrative Warfare Strategy
4. Generative AI Utilised to Fabricate Ukrainian Surrender Narratives
5. Multiplying Inauthentic Networks Signal Rising Chinese Interference Risk to South Korea
6. Generative AI Facilitates High Volume Low Quality Influence Operations
7. Artificial Intelligence Behind Trend for Personalized Influence Campaigns
8. Political Instability in Bulgaria Amplifies Vulnerability to Hostile Narratives
9. Strategic Use of AI Suggested to Counter Venezuelan Regime Narratives
10. Crowdsourced Context on Platforms Significantly Reduces Virality of Misleading Content
11. LLMs Accelerate Convergence of Cybercrime and Information Threats
[CRC Glossary]
[Download Report]
[Report Highlights]
Moscow shifts strategy, boosting state media funding by 54 percent to prioritize information warfare over conventional military spending. - Jamestown Foundation
Pro-Russian hackers target Danish political party websites with DDoS attacks to manufacture confusion and disrupt the ongoing election period. - Danish Agency for Societal Security
Moscow weaponizes animated cartoons and historical grievances to bypass literacy barriers and reshape political alliances across the African continent. - EU vs. Disinfo
Kremlin-linked actors exploit OpenAI's Sora 2 to generate realistic but fabricated footage of Ukrainian soldiers surrendering on the battlefield. - NewsGuard
The multiplication of Chinese-linked sockpuppet accounts marks early signs of Chinese interference operations targeting Korean elections. - Doublethink Lab
High-volume "AI slop" floods social platforms, prioritizing quantity over sophistication in modern influence operations. - Graphika
Artificial intelligence transforms information warfare into a precision weapon, enabling actors to deploy millions of personalized, microtargeted falsehoods simultaneously. - Foreign Affairs
Prolonged political paralysis and high news avoidance in Bulgaria create a fertile environment for foreign and domestic hostile influence campaigns. - Disinfo Lab
Experts suggest leveraging agentic AI to penetrate Venezuela's closed information environment and deliver precision-targeted narratives to counter the regime. - CSIS
Study finds crowd-sourced corrections on X significantly reduce misinformation by leveraging reputational pressure to prompt deletions. - University of Rochester
The AI arms race enables malicious actors to deploy automated "swarms" and sophisticated cybercrime with minimal expertise. - Oxford Internet Institute
[Weekly Review]
1. Russia Shifts Strategic Budget to Prioritize Digital Information Operations
According to an article by the Jamestown Foundation, the Russian state is significantly reallocating resources from conventional military spending toward a comprehensive information warfare strategy, evidenced by a 54 percent increase in state media funding within its 2026 draft budget. The Kremlin is operationalizing an expansive propaganda ecosystem that integrates traditional state media and diplomatic channels with advanced digital tactics, including AI-generated content and coordinated online influence operations. These efforts are designed to shape the perceptions of domestic, Ukrainian, and Western audiences simultaneously. Domestically, the narrative warfare focuses on projecting military success and economic stability while denying war crimes. Externally, Kremlin-backed platforms are engaged in a hostile influence campaign (HIC) aimed at intimidating Western governments and undermining support for Ukraine. The analysis notes that these hybrid warfare tactics are becoming increasingly sophisticated, utilizing AI-generated fake videos and cloned social media accounts to erode Ukrainian morale. This strategic pivot highlights Russia's reliance on non-kinetic methods to offset conventional military stagnation, aiming to destabilize Western unity and manipulate public opinion through a synchronized combination of kinetic strikes and digital information operations.
Source: Jamestown Foundation, 'Kremlin Shifts Focus to Information Warfare' Available Online: https://jamestown.org/kremlin-shifts-focus-to-information-warfare/
Top Of Page
2. Pro-Russian Cyberattacks Target Danish Political Websites During Election Period
Several political party websites in Denmark were temporarily disabled following a series of Distributed Denial-of-Service (DDoS) attacks on November 17. An official statement by the Danish Agency for Societal Security links this disruptive activity to a pro-Russian hacker group, noting that while the attacks did not compromise sensitive data or election systems, they serve as a tactic to generate confusion. Authorities warn that these Cyfluence operations aim to amplify misleading narratives about stability during the municipal and regional election period. The timing of the disruption aligns with elevated threat assessments predicting that foreign actors would attempt to shape public perception by hindering access to political information. Although DDoS attacks are a recurring feature of the national cyber landscape, this specific campaign underscores how cyber threats are integrated into broader hostile influence campaigns. Officials emphasize that attack methods are continuously evolving, necessitating ongoing adaptations in cyber defenses to protect democratic processes and counter attempts to manipulate public confidence through digital interference.
Source: Danish Agency for Societal Security, 'Several political party websites hit by DDoS attacks' Available Online: https://samsik.dk/artikler/2025/11/flere-partiers-hjemmesider-ramt-af-ddos-angreb/
Top Of Page
3. Russia Incorporating Cartoons and Animation in African Narrative Warfare Strategy
Russian influence actors are increasingly exploiting historical grievances and anti-Western sentiment to reshape public perception in Africa, as detailed in an article by EU vs. Disinfo. Through projects such as TruthAfrica, analysts have documented a coordinated effort to disseminate narratives that position Moscow as a defender of traditional values while framing Western nations as neo-colonial aggressors. A central component of this hostile influence campaign is the deployment of memetic warfare tactics, specifically the use of animation and easily shareable media, to circumvent literacy barriers and linguistic differences across the continent. Cartoons that glorify Russian-linked paramilitary groups and vilify Western actors have emerged as potent vehicles for this propaganda. This content is designed to shift blame for regional instability away from jihadist groups and toward Western partners, thereby increasing local acceptance of Russian private military companies. The article highlights that these techniques are not limited to the African information environment; similar animated propaganda targeting Ukraine demonstrates that these digital methods are part of a global strategy. By leveraging these accessible media formats, Russian operators aim to effectively manipulate the information ecosystem and secure long-term geopolitical advantages.
Source: EU vs. Disinfo, 'A Battle for Hearts and Minds: How Russian Propaganda Takes Over Africa' Available Online: https://euvsdisinfo.eu/a-battle-for-hearts-and-minds-how-russian-propaganda-takes-over-africa/
Top Of Page
4. Generative AI Utilised to Fabricate Ukrainian Surrender Narratives
Kremlin-linked propagandists are increasingly leveraging advanced generative AI tools to produce fabricated video content depicting Ukrainian soldiers surrendering. As revealed in a NewsGuard report, actors have utilized OpenAI's Sora 2 text-to-video technology to create viral clips intended to erode trust in the Ukrainian Armed Forces during critical combat phases. Despite the realistic visual quality of this synthetic propaganda, experts have identified clear inconsistencies, including incorrect uniforms, distorted facial features, and mispronounced geographic names, confirming the footage is inauthentic. The investigation identified a specific cluster of videos circulating in November 2025, many of which retained traces of the AI tool's watermarks. These clips were disseminated by anonymous accounts previously associated with pro-Kremlin influence operations, indicating a structured effort to amplify false narratives. Ukraine's Center for Countering Disinformation has similarly flagged this activity as part of a coordinated network targeting foreign audiences. The campaign highlights the growing reliance on AI-generated media to manipulate perceptions of battlefield reality and demoralize target populations.
Source: NewsGuard, 'Kremlin Propagandists Weaponize OpenAI's' Available Online: https://www.newsguardrealitycheck.com/p/kremlin-propagandists-weaponize-openais
Top Of Page
5. Multiplying Inauthentic Networks Signal Rising Chinese Interference Risk to South Korea
As published by Doublethink Lab, rising concerns regarding Chinese interference have triggered right-wing protests in South Korea, despite court rulings debunking claims of direct election manipulation. However, the publication confirms that Foreign Information Manipulation and Interference (FIMI) by actors linked to the People's Republic of China is occurring, albeit on a limited scale. Investigations have uncovered evidence of fake Korean-language news websites and of coordinated comment manipulation on major digital platforms, both designed to influence online discourse. A specific tactic involves the use of fabricated social media personas, often sockpuppet accounts posing as young Korean women, which engage with pro-China content and amplify anti-U.S. narratives. While these networks have not yet executed large-scale coordinated campaigns, their presence represents a "canary in the coal mine" for future operations. The report suggests that while current reach is low, these subtle digital activities fit established patterns of PRC-aligned influence. Strengthening information literacy and monitoring for inauthentic behavior are cited as critical measures to prevent these nascent operations from escalating into broader societal destabilization.
Source: Doublethink Lab, 'Canaries in the Coal Mine: Cases of Foreign Information Manipulation in South Korea' Available Online: https://medium.com/doublethinklab/canaries-in-the-coal-mine-cases-of-foreign-information-manipulation-in-south-korea-163217242330
Top Of Page
6. Generative AI Facilitates High Volume Low Quality Influence Operations
Recent research by Graphika indicates that while generative AI has increased the speed and volume of influence operations, it has not yet resulted in the sophisticated, high-quality campaigns initially feared. Instead, the technology is enabling state and non-state actors to flood social platforms with "AI slop"—high-volume, low-quality content. Operations such as CopyCop, Doppelgänger, and Spamouflage are utilizing these tools to automate the creation of personas and the generation of text, effectively scaling their ability to mislead targeted audiences and amplify polarizing narratives without requiring significant human resources. Applying the Actor-Behavior-Content (ABCDE) framework, analysts observed that propagandists are delegating core tasks to AI, blurring the lines between professional and amateur operators. This shift allows for the saturation of the information environment with fabricated content, even if individual pieces remain relatively crude. The report notes that despite the surge in volume, engagement metrics suggest these AI-driven campaigns struggle to achieve organic reach. The primary threat identified is not the persuasiveness of the synthetic propaganda, but the sheer quantity of misleading material, which can distort public debate and overwhelm fact-checking capabilities within the digital information ecosystem.
Source: Graphika, 'Cheap Tricks' Available Online: https://public-assets.graphika.com/reports/Graphika_Report_Cheap_Tricks.pdf
Top Of Page
7. Artificial Intelligence Behind Trend for Personalized Influence Campaigns
According to a report by Foreign Affairs, the integration of artificial intelligence is transforming information operations from crude mass persuasion into precise, adaptive campaigns capable of undermining democratic processes. Recent incidents, such as deepfake audio impersonating the U.S. Secretary of State and the large-scale psychological profiling conducted by China's GoLaxy, demonstrate how AI enables hostile actors to deploy microtargeted propaganda. This technological shift allows for the creation of convincing forgeries and the simultaneous dissemination of millions of targeted falsehoods, effectively supercharging the toolkit available for cognitive warfare. Case studies highlight the global reach of these tactics, from AI-generated hate speech in India to Russian-linked operations that disrupted Romania's 2024 election.
The report cites China's "Uncle Spam" campaign as an example of how adversaries harvest vast amounts of personal data to refine their targeting strategies. Despite the escalation of these AI-enabled threats, U.S. defenses have been compromised by budget cuts to agencies responsible for countering malign information. Experts warn that without renewed investment and collaboration with tech platforms, the information environment will remain vulnerable to these sophisticated, automated forms of strategic deception and foreign interference.
Source: Foreign Affairs, 'Artificial Intelligence Supercharging Disinformation Warfare' Available Online: https://www.foreignaffairs.com/united-states/artificial-intelligence-supercharging-disinformation-warfare
Top Of Page
8. Political Instability in Bulgaria Amplifies Vulnerability to Hostile Narratives
Bulgaria's prolonged political instability, characterized by fragmented parties and repeated no-confidence votes, has created an information environment highly susceptible to malign influence. A report by Disinfo Lab outlines how the split within the Movement for Rights and Freedoms (MRF) and the rise of nationalist factions have fueled public mistrust in democratic institutions. This institutional paralysis, combined with multiple snap elections, has weakened public confidence, making citizens more vulnerable to narrative warfare that exploits political fatigue. The situation is further aggravated by a deteriorating media landscape, where fragile press freedom and high rates of news avoidance drive users toward unverified content on social platforms. These conditions allow hostile influence campaigns (HICs), whether domestic or foreign, to effectively manipulate public sentiment regarding governance and corruption. The report notes that Bulgaria currently has the highest rate of news avoidance in Europe, creating a vacuum that is easily filled by information disorder. While the country has continued its integration into the EU, including Schengen entry, the prevalence of misleading narratives remains a destabilizing force. This environment of polarization serves to undermine the legitimacy of institutions and complicates Bulgaria's strategic communications regarding its European orientation.
Source: Disinfo Lab, 'Disinfo landscape in Bulgaria' Available Online: https://www.disinfo.eu/wp-content/uploads/2025/11/20251117_Disinfo-landscape-in-Bulgaria-V2-1.pdf
Top Of Page
9. Strategic Use of AI Suggested to Counter Venezuelan Regime Narratives
The deployment of a U.S. carrier strike group near Venezuela signals increased attention on the region, yet a CSIS commentary argues that the strategic outcome will likely be determined by information operations rather than kinetic force. The author suggests that Venezuela's tightly controlled information environment, characterized by digital surveillance and censorship, presents unique opportunities for soft warfare. With the Maduro regime facing deep unpopularity, U.S. actors could leverage agentic AI to monitor sentiment and deploy tailored strategic communications that align with local grievances, effectively bypassing the state's narrative control while minimizing physical risks. The commentary emphasizes that AI-enabled tools, such as automated source monitoring and anomaly detection, can facilitate precision targeting in areas like Puerto Cabello. By analyzing AI-curated datasets from social platforms, analysts can detect subtle shifts in public mood that might otherwise go unnoticed. This approach posits that information warfare is becoming central to modern statecraft. The ethical application of AI-assisted messaging offers a method to pressure repressive regimes and shape public perception, advancing strategic objectives through the digital domain without the need to escalate into traditional armed conflict.
Source: CSIS, 'How to Win the Information Game in Venezuela' Available Online: https://www.csis.org/analysis/how-win-information-game-venezuela
Top Of Page
10. Crowdsourced Context on Platforms Significantly Reduces Virality of Misleading Content
New research published by the University of Rochester indicates that X's Community Notes system is an effective tool for Digital Impact on Discourse (DID), significantly curbing the spread of misleading content. The study found that posts receiving public notes, which add corrective context visible to all users, were 32 percent more likely to be deleted by their authors compared to posts receiving only private notes. This suggests that peer-generated corrections can outperform traditional algorithmic enforcement or expert fact-checking in influencing user behavior within the social media information ecosystem. The researchers identify reputational pressure as the primary mechanism driving these retractions, particularly among high-visibility or verified users who wish to maintain credibility. These findings have broader implications for countering information disorder, demonstrating that transparency-driven, community-based tools can improve information integrity without resorting to direct censorship. By leveraging users' collective action, platforms can create a scalable model to mitigate the impact of viral false narratives and foster a more resilient digital environment amid polarized online discourse. (A brief worked example unpacking the 32 percent figure appears after this report.)
Source: University of Rochester, 'Effective Online Fact Checkers Are Peers' Available Online: https://techxplore.com/news/2025-11-effective-online-fact-checkers-peers.html
Top Of Page
11. LLMs Accelerate Convergence of Cybercrime and Information Threats
The rapid acceleration of the AI "arms race," driven by major developers, is facilitating a new wave of sophisticated operations that blur the line between cybercrime and information warfare. According to an article by the Oxford Internet Institute, Large Language Models (LLMs) now enable malicious actors to execute personalized phishing campaigns and "perfect scams" with minimal technical expertise. Furthermore, the availability of custom malicious models, such as WormGPT, lowers the barrier to entry for conducting offensive cyber activities. Simultaneously, multi-agent AI systems can infiltrate online communities to coordinate inauthentic behavior and manufacture synthetic consensus, challenging traditional verification systems. The article notes that this environment is spurring rapid innovation in defensive measures, including AI-detection tools and provenance systems like C2PA, which aim to enhance transparency. The evolving landscape represents a paradigm shift where the focus must move from restricting access to AI tools—which are now widely available—to building a resilient information ecosystem capable of withstanding AI-driven manipulation. The analysis underscores the urgent need for coordinated action between policymakers and industry to counter these scalable, automated threats that combine the disruptive potential of cyber warfare with the deceptive power of synthetic propaganda.
Source: Oxford Internet Institute, 'The LLM Arms Race' Available Online: https://www.oii.ox.ac.uk/news-events/the-llm-arms-race/
Top Of Page
[CRC Glossary]
The nature and sophistication of the modern Information Environment is projected to only continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.
Top Of Page
[Download Report]
Top Of Page
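The 32 percent figure in item 10 above is a relative difference in deletion rates between publicly and privately noted posts. The short Python sketch below shows, using entirely hypothetical counts rather than data from the Rochester study, how such a relative increase is computed.

```python
# Illustrative only: hypothetical counts, not data from the Rochester study.
# "32 percent more likely to be deleted" is a relative difference between
# the deletion rate of publicly noted posts and that of privately noted posts.

def deletion_rate(deleted: int, total: int) -> float:
    """Fraction of posts whose authors deleted them."""
    return deleted / total

# Hypothetical example: 1,000 posts in each group.
rate_private = deletion_rate(deleted=100, total=1000)  # 10.0%
rate_public = deletion_rate(deleted=132, total=1000)   # 13.2%

relative_increase = rate_public / rate_private - 1
print(f"Publicly noted posts deleted {relative_increase:.0%} more often")
# -> Publicly noted posts deleted 32% more often
```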
- CRC Addendum: New Threat Vectors, Legacy Constraints: Structural Gaps in Indonesia’s Smart-City Vision
Author: Nur Mawaddah, CRC Indonesian Program Director
This article examines why Indonesia's ambitious smart-city vision, despite notable progress, continues to face significant structural challenges. It highlights how uneven local capacities, fragmented cyber governance, and limited coordination shape the pace of digital transformation, while the new capital Nusantara emerges as a technologically integrated countermodel. At its core, the article raises a key question: How can Indonesia move from isolated flagship projects toward a nationwide, resilient, and digitally connected urban system? It outlines where foundational reforms and capacity building are needed, particularly in legislation, organisational structures, digital literacy, and cognitive resilience. [Full Report Below]
- CRC Weekly: Cyber-based hostile influence campaigns 10th-16th November 2025
[Introduction]
Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.
[Contents]
[Introduction]
[Report Highlights]
[Weekly Review]
1. Russian Network Storm-1516 Targets Kirk with Fabricated Video
2. Armenian Election Targeted Early and Viciously by Russian Campaign
3. China's Coordinated Disinformation Amplifies Russia's Hybrid War in Europe
4. NewsGuard Finds Grok Still Spreading 2020 U.S. Election Falsehoods
5. Grokipedia's Reliance on Questionable Sources Risks Influencing Other AI Models
6. Forged News Site Fuels Cross-Platform Influence Campaign Targeting Armenia
7. Defensive AI Tool 'Silverer' Uses Data Poisoning to Combat Deepfakes
8. Proposed U.S. Legislation Seeks to Counter Foreign Election Influence
9. Minor Digital Frictions Could Slow the Online Spread of Misinformation
10. Democracy Shield Launched by EU to Counter Foreign Interference
11. EU Initiative Trains Botswana Journalists in Counter-Disinformation
[CRC Glossary]
[Download Report]
[Report Highlights]
Researchers have developed a proactive public defense tool that allows individuals to corrupt their own image data, thereby disrupting the supply chain for malicious deepfake generation. - TechXplore
A Carnegie Europe article argues that China has become the "weak link" in Europe's strategy against Russia, actively undermining Western unity and sanctions enforcement. - Carnegie Europe
Research using computer models suggests that introducing small obstacles, or "digital friction," into the social media sharing process can significantly slow the spread of misinformation. - TechXplore
A critical long-term threat of Grokipedia is that its poorly sourced articles could be ingested as training data, creating a self-reinforcing chain of systemic misinformation for future AI models. - TechXplore
Analysis of a viral video threat against a U.S. commentator revealed linguistic cues and production styles consistent with the known Russian influence actor Storm-1516/"CopyCop". - DisinfoWatch
Moscow's early and aggressive influence campaign targeting the 2026 election is seen as a response to Armenia's geopolitical pivot toward Europe and away from its traditional alliance with Russia. - NewsGuard
Users were observed citing Grok's AI-generated falsehoods about the 2020 election as "evidence" of fraud, demonstrating a pathway for laundering synthetic content into online discourse. - NewsGuard
The dissemination of a fabricated story about nuclear waste in Armenia followed a recurring pattern of Russian-linked TTPs, where disinformation is seeded on dubious foreign websites and amplified cross-platform. - DFRLab
Election-integrity experts warn that without stronger federal regulations, adversarial nations could exploit U.S. non-profit networks to seed disinformation and manipulate public opinion. - The Daily Wire
The EU's new Democracy Shield initiative represents a strategic shift from reactive analysis to a proactive and aggressive defense against Foreign Information Manipulation and Interference (FIMI). - EU vs. Disinfo
The EU's training of journalists in Botswana reflects a broader strategy of using targeted capacity-building to protect freedom of expression and counter disinformation in partner nations. - EU Delegation to Botswana
[Weekly Review]
1. Russian Network Storm-1516 Targets Kirk with Fabricated Video
A report from DisinfoWatch has detailed a viral video that falsely depicted Ukrainian soldiers threatening a conservative American commentator. The video, which circulated on X, was found by BBC Verify and independent researchers to have no connection to Ukraine's armed forces. Instead, the investigation revealed that linguistic cues, production style, and digital dissemination patterns were consistent with Russia's Storm-1516/"CopyCop" network, which is known for producing staged propaganda using actors and fake uniforms. This hostile influence campaign aligns with the network's established tactics, techniques, and procedures, which have been previously documented by Microsoft and French authorities in relation to fabricated content targeting Western political figures and elections. The video was initially shared by a partisan U.S. account before being amplified across Telegram and far-right websites, demonstrating a clear pathway for injecting foreign-produced disinformation into domestic political discourse. The incident serves as another example of a state-linked actor creating synthetic content for a targeted influence operation.
Source: DisinfoWatch, 'Russia's Storm-1516 Fabricates "Ukrainian" Threat to Erika Kirk' Available Online: https://disinfowatch.org/disinfo/russias-storm-1516-fabricates-ukrainian-threat-to-erika-kirk/
Top Of Page
2. Armenian Election Targeted Early and Viciously by Russian Campaign
Russia has initiated a hostile influence campaign targeting Armenia's parliamentary elections more than a year in advance, an unusually early start for such operations. A NewsGuard investigation found that since April 2025, Moscow's propaganda network has propagated at least 18 distinct fabricated narratives attacking Armenia's pro-Western government. The campaign utilized digital tactics, including AI-generated news sites, coordinated social media networks, and manipulated videos distributed across 11 platforms in 8 languages, achieving an estimated 45 million views. This operation, attributed to Russian influence groups like Storm-1516, deploys narratives centered on corruption and the betrayal of national sovereignty, mirroring tactics previously used against other European nations. The campaign's goal is to erode public confidence in the current leadership ahead of the 2026 vote, reflecting Moscow's response to Armenia's geopolitical shift toward Europe. Armenian officials have acknowledged the challenge of countering these accelerating propaganda attacks.
Source: NewsGuard, 'Russia Targets Armenia's Elections – Early and Viciously' Available Online: https://www.newsguardtech.com/wp-content/uploads/2025/11/Armenia-Report-1.pdf
Top Of Page
3. China's Coordinated Disinformation Amplifies Russia's Hybrid War in Europe
An article from Carnegie Europe asserts that China is becoming a central actor in Russia's hybrid war against Europe through expanded economic, technological, and informational support. The publication highlights the increasing coordination of disinformation strategies between Beijing and Moscow. This is most evident in Chinese campaigns, such as Spamouflage, which are subsequently amplified by Russian state media and diplomatic channels. Both state actors deploy synchronized narratives that blame the West for the war in Ukraine, leveraging digital tools like generative AI and fake news portals to manipulate public perception and reinforce mutual messaging. This coordinated information manipulation is identified as one component of a broader hybrid strategy that also includes cyberattacks and espionage. The article warns that this growing China-Russia disinformation ecosystem undermines Western unity and sanctions enforcement, ultimately enabling Moscow to prolong its aggression. It concludes that stronger, jointly enforced sanctions and deeper international cooperation are needed to counter this threat.
Source: Carnegie Europe, 'China is the Weak Link in Europe's Ukraine Strategy' Available Online: https://carnegieendowment.org/europe/strategic-europe/2025/11/china-is-the-weak-link-in-europes-ukraine-strategy?lang=en
Top Of Page
4. NewsGuard Finds Grok Still Spreading 2020 U.S. Election Falsehoods
The AI chatbot Grok has confidently and incorrectly stated that Donald Trump won the 2020 U.S. presidential election in multiple instances. A NewsGuard report identified that although the chatbot initially provided the certified result, users who repeatedly prodded it were able to elicit false claims asserting a "legitimate win" for Trump. These outputs occurred despite the absence of evidence for widespread fraud and the election's certification by all relevant authorities. The findings raise concerns about the platform's potential to spread election denialism, especially given the high volume of user engagement with Grok as a fact-checking tool. NewsGuard also found examples of users citing the chatbot's erroneous responses as evidence of election fraud, demonstrating how AI-generated falsehoods can be laundered into online discourse. This behavior is part of a pattern, as previous investigations found Grok spreading other unfounded claims. When asked for comment, Grok reaffirmed the correct election outcome but insisted no correction was necessary.
Source: NewsGuard, 'X's Grok Claims Trump Won the 2020 Election' Available Online: https://www.newsguardrealitycheck.com/p/xs-grok-claims-trump-won-the-2020
Top Of Page
5. Grokipedia's Reliance on Questionable Sources Risks Influencing Other AI Models
According to a TechXplore article, the AI-generated encyclopedia Grokipedia, launched by Elon Musk's xAI, poses a significant risk to the digital information environment due to its reliance on problematic sources. The platform uses its underlying AI model, Grok, to generate content with limited transparency, resulting in thousands of citations to "questionable" sources, particularly for politically contested topics. Critics cited in the article note that the content often aligns with right-wing perspectives and appears to downplay controversies involving Musk while legitimizing conspiracy theories. By visually mimicking a trusted resource like Wikipedia but using inferior sourcing, Grokipedia could erode public trust and mainstream fringe narratives. The most critical long-term threat identified is the potential for Grokipedia's poorly sourced articles to be ingested as training data for future generative AI models. This could create a self-reinforcing chain of errors, systematically embedding misinformation and algorithmic bias into the global information infrastructure.
Source: Techxplore, ‘Musk’s Grokipedia leans on ‘questionable’ sources, study says’ Available Online: https://techxplore.com/news/2025-11-musk-grokipedia-sources.html Top Of Page 6. Forged News Site Fuels Cross-Platform Influence Campaign Targeting Armenia A coordinated disinformation campaign used a forged media outlet and impersonated NGOs to falsely claim France had dumped nuclear waste in Armenia. An investigation from DFRLab details how the operation began with a fabricated article on a fake website designed to mimic the legitimate news organization France 24. The narrative was then amplified by a website impersonating a real Armenian environmental NGO; this impersonator site appears to be a deep asset, having been revived after years of dormancy with backdated content to simulate legitimacy. The false claim was disseminated across social media by high-volume accounts, including bots and pro-Kremlin amplifiers, which targeted French politicians and Armenian communities. Azerbaijani state-aligned media further amplified the narrative. This operation follows a recurring pattern of hostile influence campaigns against Armenia that seed disinformation on dubious foreign websites impersonating legitimate media, a tactic consistent with Russia-linked operations like Storm-1516. Source : DFR Lab, ‘How a forged France 24 report fueled a radioactive lie concerning France-Armenia relations’ Available Online: https://dfrlab.org/2025/11/11/how-a-forged-france-24-report-fueled-a-radioactive-lie-concerning-france-armenia-relations/ Top Of Page 7. Defensive AI Tool ‘Silverer’ Uses Data Poisoning to Combat Deepfakes TechXplore reports on the development of ‘Silverer,’ a defensive AI tool designed to combat the creation of malicious deepfake images and videos. Developed by researchers at Monash University and the Australian Federal Police, the tool employs a technique known as data poisoning. This method involves subtly altering an image’s pixels before it is uploaded online. While the alterations are invisible to the human eye, they are engineered to deceive the large machine learning models used by malicious actors to train their generative AI systems. The intended result is that any AI-generated content produced from this poisoned data will be of low quality, skewed, or completely unrecognizable. The ‘Silverer’ tool represents a significant shift toward a proactive defense, empowering the public to protect their images from being used in the datasets that fuel the creation of hyper-realistic synthetic media. By corrupting the data supply chain, the initiative aims to raise the costs and slow the production of malicious deepfakes. Source : TechXplore, ‘Poisoned pixels: New AI tool to fight malicious deepfake images’ Available Online: https://techxplore.com/news/2025-11-poisoned-pixels-ai-tool-malicious.html Top Of Page 8. Proposed U.S. Legislation Seeks to Counter Foreign Election Influence New legislation proposed by Senator Bill Hagerty aims to address federal loopholes that permit foreign nationals to indirectly influence U.S. elections. According to a an article by The Daily Wire , election-integrity experts warn that these gaps allow for covert political pressure and create vulnerabilities to foreign-funded issue campaigns that can distort policy debates. While direct contributions to candidates are illegal, foreign funding can still flow to ballot initiatives and voter registration drives. 
Advocacy groups argue that this involvement is extensive, with billions invested by overseas donors in U.S. policy organizations, funding litigation and lobbying that can shape public narratives. Such undisclosed funding can enable sophisticated influence strategies that appear domestic but are driven by foreign interests. Experts cited in the report warn that without stronger federal regulations, adversarial nations could exploit these same channels to seed misleading information, manipulate public opinion, and compromise the integrity of the American information environment. Source : The Daily Wire, ‘New Legislation Would Slam The Door On Foreign Influence Of Elections’ Available Online: https://www.dailywire.com/news/new-legisla-tion-would-slam-the-door-on-foreign-influence-of-elections?topStoryPosition=undefined&author=Leif+Le+Mahieu&category=Original&elementPosition=4&row=1&row-Headline=Top+Stories&rowType=Top+Stories&title=New+Legislation+Would+Slam+The+Door+On+Foreign+Influence+Of+Elections Top Of Page 9. Minor Digital Frictions Could Slow the Online Spread of Misinformation New research from the University of Copenhagen, reported by TechXplore , suggests that integrating small obstacles into the social media sharing process can significantly reduce the speed at which misinformation spreads. Using computer models simulating platforms like X and Mastodon, researchers found that minimal “digital friction,” such as brief pauses or pop-up prompts, encourages users to think twice before sharing sensational or false content, thereby slowing its overall diffusion across a network. The study also found that while friction alone can slow the spread, it does not necessarily improve the quality of shared content. However, when a brief learning component—such as a pop-up quiz about how misinformation works—was added to the friction, the model showed a notable improvement in the quality of information being shared. The researchers hope to test these interventions in real-world settings and collaborate with technology companies to evaluate their effectiveness in reducing engagement with low-quality posts. Source : TechXplore, ‘Small digital frictions can slow the spread of misinformation’ Available Online: https://techxplore.com/news/2025-11-small-digital-frictions-misinformation.html Top Of Page 10. Democracy Shield launched by EU to Counter Foreign Interference The European Union has established the European Democracy Shield to counter the threat of Foreign Information Manipulation and Interference (FIMI). An article from EU vs. Disinfo explains that the joint initiative by the European Commission and the European External Action Service addresses how Russia and other autocratic actors exploit open democratic information spaces as a “geopolitical battleground.” The Shield represents a strategic shift from reactive detection and analysis to a more proactive and aggressive defense against persistent FIMI campaigns and state propaganda. The initiative is designed to reinforce three pillars of governance: protecting the integrity of the information space, ensuring free and fair elections, and building societal resilience. Its external dimension will scale up counter-FIMI training through EU Delegations and deepen collaboration with international partners like the G7 and NATO. A core component of the new strategy is to “raise the costs” for FIMI actors by publicly exposing manipulators and imposing sanctions. Source : EU vs. 
Disinfo, ‘European Democracy Shield and EU Strategy for Civil Society pave the way for stronger and more resilient democracies’ Available Online: https://euvsdis-info.eu/the-european-democracy-shield/ Top Of Page 11. EU Initiative Trains Botswana Journalists in Counter-Disinformation A press release from the EU Delegation to Botswana details a training session held for local journalists to address the proliferation of misinformation and disinformation. The initiative, a collaboration with the fact-checking organization Africa Check, aimed to equip media representatives with the skills needed to counter manipulated information that erodes public trust. The core counter-strategy taught was rigorous fact-checking and verification, which was described by the Botswana Editors Forum Chairman as a “moral discipline” for safeguarding newsroom integrity. The workshop provided journalists with practical tools, such as reverse image search, and techniques to verify the authenticity of content in a complex information ecosystem. This initiative reflects the EU’s broader commitment to supporting democratic resilience and independent media through partnerships. By providing these essential skills, the EU aims to assist journalists in protecting freedom of expression and strengthening the local information environment against malign influence. Source: EU Delegation to Botswana, ‘Building media capacity and promoting democracy through fact-checking and verification’ Available Online: https://www.eeas.europa.eu/delegations/botswana/building-media-capacity-and-promoting-democracy-through-fact-checking-and-verification_en Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to only continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website Top Of Page [Download Report] Top Of Page
- CRC Spotlight: Smart Cities: Future Urban Environments in the Crosshairs of Cyber Threats and Information Disorder
Modern smart cities rely on extensively interconnected digital infrastructures that link not only administrative processes, but also mobility, energy systems, communication networks, urban services, and private-sector platforms. This dense connectivity creates significant exposure to hybrid threats in which technical cyberattacks overlap with strategic influence efforts, affecting both critical infrastructure and the wider informational sphere of a city. Against this backdrop, the article analyzes how smart cities evolve into environments where cyber vulnerabilities and informational fragilities reinforce one another, creating conditions for the emergence of cyfluence risks—hybrid threats that combine system intrusion with targeted narrative manipulation. As municipal infrastructure increasingly depends on IoT devices, real-time data streams, cloud-based applications, and automated urban management systems, disruptions can cascade across networks, while manipulated information circulating through public apps, digital signage, transport interfaces, and social media can amplify societal impact. Smart cities thus face a dual risk landscape in which breaches of technical systems and distortions of the information ecosystem can interact, accelerate one another, and undermine public trust at scale. [Full Report Below]
- CRC Weekly: Cyber-based hostile influence campaigns 3rd-9th November 2025
[Introduction]
Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.
[Contents]
[Introduction]
[Report Highlights]
[Weekly Review]
1. Kremlin-Linked Digital Campaigns Target Japanese Election and Corrupt AI Models
2. Russian Influence Operation "Storm-1516" Impersonates Journalists and Media
3. NATO Report Details Russia's Adaptive Strategy for Global Information Warfare
4. Russia's RT Uses Deepfakes and AI in Global Propaganda Pivot
5. Pro-Kremlin Channels Exploit Climate Change Discourse to Divide European Societies
6. The Valdai Discussion Club: Putin's Propaganda Soapbox
7. Canadian Province's Request Sparks National Call for China Interference Inquiry
8. China Uses "Sharp Power" to Deepen Influence in Western Balkans
9. Authoritarian Regimes Manipulate Context to Create Alternative Digital Realities
10. EU and UK Rules Target Manipulated Content in Global Conflicts
[CRC Glossary]
[Download Report]
[Report Highlights]
Russian influence operations are now deliberately flooding the web with propaganda to corrupt the training data of AI language models. - Nippon
Evidence of foreign influence is now surfacing even in Canada's smallest province, intensifying calls for a full national inquiry into Chinese operations. - The Hill Times
A Russian operation codenamed "Storm-1516" is stealing journalists' identities to publish fake articles on fabricated news websites. - Euro News
Authoritarian states are sculpting a "warped alternative reality" by manipulating context to exploit social media algorithms. - The Conversation
New EU and UK laws aim to hold platforms accountable for amplifying propaganda and deleting evidence of human rights abuses. - European External Action Service (EEAS)
A NATO report exposes the structure of Russia's influence machine, which pairs overt state media with deniable, covert "gray networks." - NATO StratCom COE
To mark its 20th anniversary, RT released a deepfake video of U.S. news anchors admitting to serving government interests. - United24Media
China is embedding its influence in the Balkans by creating deep technological dependency through surveillance and infrastructure. - Small Wars Journal
Pro-Kremlin channels are weaponizing climate change, framing Europe's green energy policies as a form of "economic self-destruction" to divide societies. - EU vs. Disinfo
In a major speech, Vladimir Putin claimed that Europe is fleeing "gender terrorism" as part of a campaign to undermine Western unity. - DisinfoWatch
[Weekly Review]
1. Kremlin-Linked Digital Campaigns Target Japanese Election and Corrupt AI Models
Russian-linked influence operations targeted Japan's July 20 House of Councillors election using networks of bots and coordinated replies to amplify divisive content and sow doubt. An article by Nippon revealed that a significant portion of hostile comments on domestic reporting originated from a Kremlin-aligned ecosystem, with one analysis finding that 32 percent of such comments on a single post were linked to Russia-aligned accounts. The hostile influence campaign relied on a high volume of small, inauthentic accounts to infiltrate online conversations. The operation also involved deliberately flooding the web with pro-Kremlin content to "groom" large language models and other AI tools. Japanese-language hubs republished hundreds of pro-Russia items daily, a strategy designed to bias search results and the training data used by AI chatbots. Audits reported that leading generative AI systems subsequently returned false or misleading information on controversial topics far more often, at an average rate of 35 percent. (A toy probability illustration of this flooding effect appears at the end of this report.)
Source: Nippon, 'Japan's Upper House Election Reveals how Russian Influence Operations Infecting AI with Flood of Propaganda, Stoking Divisions' Available Online: https://www.nippon.com/en/in-depth/d01170/japan%E2%80%99s-upper-house-election-reveals-how-russian-influence-operations-infecting-ai-with-.html
Top Of Page
2. Russian Influence Operation "Storm-1516" Impersonates Journalists and Media
As reported by Euro News, the Russian influence operation Storm-1516 has been stealing journalists' identities and bylines to publish fabricated articles on counterfeit news websites. By impersonating real reporters and established media outlets, the network lends false credibility to pro-Kremlin claims and makes its fabricated stories harder for readers and newsrooms to detect.
Source: Euro News, 'False claims and stolen bylines: The Russian propaganda strategy haunting the newsroom' Available Online: https://www.euronews.com/my-europe/2025/11/04/false-claims-and-stolen-bylines-the-russian-propaganda-strategy-haunting-the-newsroom
Top Of Page
3. NATO Report Details Russia's Adaptive Strategy for Global Information Warfare
According to a report from the NATO Strategic Communications Centre of Excellence, Russia has developed a comprehensive and adaptive communications strategy that integrates state-controlled media, covert influence networks, and digital campaigns to manipulate global narratives. The publication, titled "The Collage of Kremlin ComStrat," reveals how Moscow combines traditional propaganda with modern hybrid tactics, including AI-driven content and coordinated online amplification through proxy media outlets. This approach aims to sow distrust in Western institutions and shape perceptions of geopolitical events like the war in Ukraine. The Kremlin's strategy emphasizes psychological influence and information saturation, using overt channels like RT and Sputnik alongside covert networks on social media. The report underscores that these information operations are not isolated events but part of a long-term, state-sponsored effort to weaken support for Ukraine and amplify polarization in Western societies. By blending intelligence tradecraft with digital information warfare, Russia's communication strategy demonstrates how manipulated information has become a core instrument of state power.
Source: NATO StratCom COE, 'The Collage of the Kremlin's Communication Strategy' Available Online: https://stratcomcoe.org/publications/the-collage-of-the-kremlins-communication-strategy/324
Top Of Page
4. Russia's RT Uses Deepfakes and AI in Global Propaganda Pivot
Two decades after its launch, Russia's state-controlled media outlet RT has fully transformed into a tool for global hostile influence campaigns. A report from United24Media highlights how, to mark its 20th anniversary, RT released a deepfake video using AI to impersonate prominent U.S. news anchors, falsely showing them admitting to serving U.S. government interests. This synthetic propaganda is emblematic of RT's pivot toward audiences in the Middle East, Africa, and Latin America—regions less affected by Western sanctions. Despite being banned across Europe and North America, RT has adapted its operations, functioning as what its editor-in-chief calls an "information guerrilla." The channel now utilizes mirror sites, front companies, and alternative platforms like Rumble and VK to continue its reach. It also employs AI to create fake journalists, clone voices, and automate the dissemination of propaganda. Former branches, including RT France and RT Germany, continue to shape local discourse through sympathetic media figures, ensuring the persistence of their information operations.
Source: United24Media, '20 Years of RT: How Russia's Propaganda Hydra Survived the Ban' Available Online: https://united24media.com/anti-fake/20-years-of-rt-how-russias-propaganda-hydra-survived-the-ban-13121
Top Of Page
5. Pro-Kremlin Channels Exploit Climate Change Discourse to Divide European Societies
As Europe faces worsening climate disasters, pro-Kremlin channels are actively manipulating climate discourse to undermine trust in Western institutions. A report from EU vs. Disinfo explains that while Russia's official media acknowledges climate science, its broader information networks push climate denial and distort facts. For Kremlin propagandists, climate change is a strategic weapon used to divide societies and weaken democratic consensus. Their narratives often link EU green energy transitions and sanctions against Russia to "industrial decline," framing Europe's environmental efforts as economic self-destruction. These misleading claims are designed to erode public support for sanctions and renewable energy by exploiting legitimate economic fears. Through coordinated messaging, Moscow's information operations also smear climate science as a "religion" and attack political leaders who address environmental realities. This strategy is part of a broader effort to portray Russia as a more responsible global actor than the EU while advancing its geopolitical goals.
Source: EU vs. Disinfo, 'Sneaky heat: the Kremlin uses climate change to push its favourite FIMI narratives' Available Online: https://euvsdisinfo.eu/sneaky-heat-the-kremlin-uses-climate-change-to-push-its-favourite-fimi-narratives/
Top Of Page
6. The Valdai Discussion Club: Putin's Propaganda Soapbox
An article in DisinfoWatch examines how Vladimir Putin used the 2025 Valdai Discussion Club forum to advance propagandistic narratives, portraying the West as culturally collapsing and Russia as a moral alternative. The speech highlighted specific claims, such as Europe fleeing "gender terrorism" and NATO hysterically militarizing, to reframe defense and diplomacy narratives.
By weaponizing culture-war rhetoric, Moscow seeks to undermine Western unity and credibility, especially among vulnerable audiences. The use of the Kremlin-backed Valdai platform and state media like RT ensures these messages are amplified globally, contributing to Russia’s ongoing information warfare campaign. The Valdai Discussion Club, a Moscow-based think tank, has long served as a key venue where Putin and Kremlin officials outline Russia’s ideological and geopolitical positions to both domestic and international audiences, making it a central component in their strategic communications. Source : DisinfoWatch, 'DisinfoDigest: Decoding Putin's Valdai Speech' Available Online: https://disinfowatch.org/disinfodigest-decoding-putins-valdai-speech/ Top Of Page 7. Canadian Province's Request Sparks National Call for China Interference Inquiry Growing calls for a national inquiry into China's interference in Canada have followed Prince Edward Island (PEI) Premier Rob Lantz's request for a federal investigation into local groups allegedly linked to Beijing's United Front network. A report by The Hill Times notes that this appeal follows revelations from a recent book and a media investigation exposing how Chinese state-affiliated organizations may be influencing Canadian institutions. The report argues that only a full-scale, independent national inquiry, paired with a criminal investigation, can uncover the extent of these hostile influence campaigns. Despite repeated intelligence warnings about election interference, diaspora intimidation, and espionage, federal responses have been described as fragmented and politically cautious. P.E.I.'s call for an inquiry is being viewed as a national call to action, demonstrating that even provinces removed from the country's geopolitical epicenters are experiencing the effects of foreign influence. Source : DisinfoWatch, 'DisinfoDigest: Decoding Putin's Valdai Speech' Available Online: https://disinfowatch.org/disinfodigest-decoding-putins-valdai-speech/ Top Of Page 8. China Uses "Sharp Power" to Deepen Influence in Western Balkans As published by the Small Wars Journal , China is deepening its presence in the Western Balkans through a blend of defense cooperation, technological dependence, and information manipulation, an approach described as "sharp power." Unlike overt tactics, Beijing's influence relies on subtle yet pervasive methods, with Serbia becoming the focal point of its regional strategy. The country has welcomed Chinese weapon systems, joint military exercises, and advanced surveillance technology that embed long-term dependencies. Chinese state media and local affiliates amplify pro-Beijing narratives through content-sharing agreements and educational programs that promote authoritarian governance models. Through control of digital infrastructure and surveillance systems via companies like Huawei, China not only gains access to critical data but also reinforces its influence over local governments and media ecosystems. This networked approach combines information manipulation with economic leverage, making democratic institutions more vulnerable to external control. Source : Small Wars Journal (the publisher), 'China's Rising Influence in the Western Balkans and How the West Should Respond' (the original source title in quotes) Available Online: https://smallwarsjournal.com/2025/11/05/chinas-rising-influence-in-the-western-balkans/ Top Of Page 9. 
Authoritarian Regimes Manipulate Context to Create Alternative Digital Realities An article by The Conversation explains how authoritarian regimes, particularly Russia and China, are perfecting a form of information operation that relies on manipulating context and selective truth rather than outright falsehoods. By amplifying strategically chosen facts while omitting others, these governments create a misleading "alternative reality" that portrays Western democracies as unstable and hypocritical. This strategy is executed through state-run media, influencer networks, and coordinated bot activity across social media platforms, ensuring that distorted narratives infiltrate the news feeds of both domestic and foreign audiences. The analysis argues that this is an adaptive, data-driven campaign designed to exploit the mechanics of modern social media algorithms. This form of narrative warfare reinforces cynicism and polarization, weakening trust in journalism, democratic governance, and the concept of a shared truth. The broader implication is the gradual normalization of authoritarian narratives within global discourse and the erosion of the common factual foundation necessary for democratic societies to function. Source: The Conversation, 'How authoritarian states sculpt a warped alternative reality in our news feeds' Available Online: https://theconversation.com/how-authoritarian-states-sculpt-a-warped-alternative-reality-in-our-news-feeds-266092 Top Of Page 10. EU and UK Rules Target Manipulated Content in Global Conflicts According to a policy brief by the European External Action Service (EEAS), manipulated information has become a key strategic weapon in modern conflicts, employed by state and non-state actors to disseminate propaganda and erode trust. Online platforms amplify these risks through algorithmic promotion of harmful content, while weak moderation in conflict zones allows hate speech and foreign information manipulation to proliferate. The brief highlights the European Union's Digital Services Act (DSA) and the United Kingdom's Online Safety Act (OSA) as emerging regulatory tools to counter these threats. These laws require platforms to assess and mitigate systemic risks, including those from hostile influence campaigns and foreign interference, through crisis response mechanisms and transparency requirements. By applying these frameworks with a conflict-sensitive approach, the EU and UK can strengthen information integrity, protect diaspora communities, and set global standards for platform accountability in times of conflict. Source: European External Action Service (EEAS), 'Assessing and Mitigating Conflict-Related Online Risks: Challenges for Governments, Regulators and Online Platforms' Available Online: https://www.isdglobal.org/isd-publications/assessing-and-mitigating-conflict-related-online-risks-challenges-for-governments-regulators-and-online-platforms/ Top Of Page [CRC Glossary] The modern Information Environment is projected only to grow in sophistication and complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability.
Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page [Download Report] Top Of Page
- CRC Weekly: Cyber-based hostile influence campaigns 27th October - 2nd November 2025
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Weekly Review] 1. Russian Influence Operations Target Japan's Election and Poison AI Training Data 2. Fake Websites and Forged Documents Fuel Russian Smear Against Zelenskyy 3. Kremlin's Information Operations Target EU Climate Policy to Protect War Economy 4. China's 'Clean and Bright' Campaign Aims to Control the Digital Narrative 5. Russian Propaganda Networks Are Successfully Poisoning Major AI Chatbot Training Data 6. Leading AI Video Generators Are Producing Antisemitic and Extremist Synthetic Propaganda 7. Hostile Actors Repurpose Old Social Media Accounts to Target Poland 8. VOA Cutbacks Create Strategic Void for State-Sponsored Propaganda Operations 9. Foreign Actors Use Coordinated Betting to Manipulate NYC Election Perceptions [CRC Glossary] [Download Report] [Report Highlights] An independent analysis found that nearly one-third of hostile online comments during Japan's election were linked to Russia-aligned accounts. - Nippon A coordinated smear campaign has falsely attributed 14 international properties to President Zelenskyy, including estates once owned by Nazi figures. - NewsGuard's Reality Check Moscow is exploiting climate issues as a front in its hybrid war, aiming to fracture EU consensus and protect its fossil fuel economy. - EUvsDisinfo Beijing is justifying its censorship of domestic grievances by framing online dissent as a form of "Western ideological infiltration." - Jamestown When asked about the war in Ukraine, major AI chatbots cited sanctioned Russian state media in nearly 18 percent of responses. - WIRED Despite moderation policies, top AI video tools are generating synthetic propaganda, including Holocaust denial and violent imagery. - Cybernews Influence campaigns are "pivoting" entire networks of old accounts, turning COVID-era anti-vaccine profiles into tools for anti-Ukrainian messaging. - EU DisinfoLab As the Voice of America retreats from the global stage, Russian and Chinese state media are expanding operations to fill the information void. - GIOR [Weekly Review] 1. Russian Influence Operations Target Japan's Election and Poison AI Training Data An article from Nippon has revealed that Russian-linked information operations targeted Japan's House of Councillors election by using networks of bots and trolls to sow doubt and amplify divisive narratives. The hostile influence campaign relied on a large number of small, inauthentic accounts to infiltrate online conversations and amplify pro-Kremlin messaging. An independent analysis by international affairs expert Ichihara Maiko identified that approximately 32 percent of hostile comments on a single post were connected to Russia-aligned accounts. Beyond direct engagement, the operation also sought to contaminate the information ecosystem by deliberately flooding the web with pro-Kremlin content to "groom" large language models. Japanese-language hubs, such as Pravda Nihon, republished hundreds of pro-Russia items daily.
This strategy is designed to bias search results through query manipulation and pollute the training data used by AI chatbots, leading to the proliferation of AI slop. Independent audits confirmed that leading generative AI systems now return false or misleading information on controversial topics more frequently. Source: Nippon, 'Japan's Upper House Election Reveals how Russian Influence Operations Infecting AI with Flood of Propaganda, Stoking Divisions' Available Online: https://www.nippon.com/en/in-depth/d01170/japan%E2%80%99s-upper-house-election-reveals-how-russian-influence-operations-infecting-ai-with-.html Top of Page 2. Fake Websites and Forged Documents Fuel Russian Smear Against Zelenskyy Russian state media and affiliated online networks have fabricated a series of false claims as part of a smear campaign accusing Ukrainian President Volodymyr Zelenskyy of owning a real estate empire worth over $682 million. A NewsGuard's Reality Check investigation revealed that the latest hoax alleged Zelenskyy purchased a $79 million ranch in Wyoming, a claim originating from a fake website mimicking a legitimate U.S. real estate firm. This marks the 14th property Russia has falsely attributed to the Ukrainian leader. The disinformation is disseminated using a network of fake websites, often featuring forged documents, before being amplified across major social media platforms like X, Facebook, and TikTok. Propagandists have even alleged that Zelenskyy's supposed purchases included properties once owned by Nazi figures. Despite repeated debunking, the false narratives continue to circulate widely. Both NewsGuard and Forbes have confirmed that Zelenskyy's actual assets are valued under $20 million, finding no evidence of misused Western aid. Source: NewsGuard's Reality Check, 'Russian Fake: Mapping Zelensky's Made-Up $682 Million Real Estate Portfolio' Available Online: https://www.newsguardrealitycheck.com/p/russian-fake-mapping-zelenskys-made Top of Page 3. Kremlin's Information Operations Target EU Climate Policy to Protect War Economy The Kremlin is conducting information operations that weaponize climate change narratives to advance its geopolitical goals and undermine European unity. An article by EUvsDisinfo explains that these campaigns are a component of Russia's broader hybrid warfare strategy against the West. Pro-Kremlin media outlets deliberately frame the European Union's Green Deal as an elitist policy designed to harm key sectors like agriculture, using claims that it is "killing farmers." This tactic of cognitive warfare aims to erode public support for environmental initiatives, which are seen by Moscow as a direct threat to its fossil fuel-dependent economy and its ability to exert energy-related pressure on Europe. By portraying decarbonization as self-destructive, the disinformation seeks to fracture social consensus within the EU, protect Russian energy exports, and weaken the bloc's resolve on sanctions and energy independence. Source: EUvsDisinfo, 'Weaponising climate change to undermine the West' Available Online: https://euvsdisinfo.eu/weaponising-climate-change-to-undermine-the-west/ Top of Page 4. China's 'Clean and Bright' Campaign Aims to Control the Digital Narrative The Cyberspace Administration of China (CAC) has launched a new "clean and bright" campaign that redefines online criticism and social frustration as "negative energy" that endangers national security.
An article published by The Jamestown Foundation's China Brief explains that the campaign targets posts discussing unemployment, gender inequality, and social anxiety, portraying them as products of "Western ideological infiltration." This strategy reframes censorship as a necessary defense against cognitive warfare, empowering regulators to erase narratives that challenge the Chinese Communist Party's (CCP) image of a harmonious society. This effort is a deepening of the CCP's comprehensive system of propaganda and ideological management, or Xuanchuan. Influencers discussing youth job struggles have already been banned, reflecting the government's push to enforce an "authorized reality." By linking social stability to "total national security," Beijing normalizes censorship as a security measure and may be creating an exportable model of digital authoritarianism for other governments seeking to justify repression. Source: The Jamestown Foundation, 'Beijing's War on "Negative Energy"' Available Online: https://jamestown.org/program/beijings-war-on-negative-energy/ Top of Page 5. Russian Propaganda Networks Are Successfully Poisoning Major AI Chatbot Training Data Generative AI systems are proving vulnerable to manipulation by Russian information warfare tactics, with leading chatbots frequently reproducing content from sanctioned state media. A study by the Institute for Strategic Dialogue (ISD), covered by WIRED, found that AI models exhibit a form of confirmation bias, delivering more pro-Kremlin content when users enter biased or manipulative prompts. This vulnerability is being actively exploited by Russian networks like the "Pravda" operation, which are deliberately working to "poison" the data that large language models (LLMs) are trained on. By flooding the information ecosystem with false narratives, these actors ensure their propaganda is ingested and later presented as factual by Western AI tools. The findings highlight a significant challenge for platform regulation, as the very architecture of current AI systems can be turned into a vector for disseminating state-sponsored disinformation. A minimal sketch of the kind of citation audit behind such findings appears after item 6 below. Source: WIRED, 'Chatbots Are Pushing Sanctioned Russian Propaganda' Available Online: https://www.wired.com/story/chatbots-are-pushing-sanctioned-russian-propaganda/ Top of Page 6. Leading AI Video Generators Are Producing Antisemitic and Extremist Synthetic Propaganda The proliferation of synthetic propaganda is being accelerated by the failure of leading AI video generators to block the creation of extremist and hateful content. A new study from the Anti-Defamation League (ADL) found that top platforms produced antisemitic content, including Holocaust denial and violent tropes, in at least 40% of test cases when prompted with hateful text. Cybernews reports that despite stated moderation policies, these systems consistently failed to filter out harmful narratives, demonstrating a significant vulnerability in their design. The ADL warns that this capability not only allows malicious actors to create high volumes of disinformation but also poses a direct threat to historical memory and online safety. The findings illustrate how AI tools, trained on vast and often unfiltered datasets from the internet, can become powerful engines for amplifying and normalizing extremist ideologies. Source: Cybernews, Anti-Defamation League (ADL), 'Popular AI video generators amplify antisemitic tropes' Available Online: https://cybernews.com/ai-news/ai-videos-antisemitism/ Top of Page
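The citation behavior described in item 5 lends itself to straightforward auditing. The sketch below is a minimal, hypothetical illustration of how an analyst might measure what share of logged chatbot answers cite blocklisted propaganda domains; the domain names, sample answers, and threshold are invented assumptions, not ISD's methodology or data.

import re

# Hypothetical watchlist of propaganda domains -- placeholders, not real study data.
BLOCKLISTED_DOMAINS = {"pravda-example.ru", "sputnik-example.org"}

URL_RE = re.compile(r"https?://(?:www\.)?([^/\s]+)")

def flagged_citations(answer: str) -> set[str]:
    """Return any blocklisted domains cited in a single chatbot answer."""
    return {d.lower() for d in URL_RE.findall(answer) if d.lower() in BLOCKLISTED_DOMAINS}

# Invented sample answers standing in for logged chatbot output.
answers = [
    "According to https://pravda-example.ru/article1, the claim is confirmed.",
    "Reuters covered the event: https://www.reuters.com/world/example-story",
]

hits = [a for a in answers if flagged_citations(a)]
share = 100 * len(hits) / len(answers)
print(f"{len(hits)}/{len(answers)} answers ({share:.0f}%) cite blocklisted domains")

Scaled across thousands of logged answers, this same share metric is how a headline figure like WIRED's "nearly 18 percent of responses" can be expressed.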
7. Hostile Actors Repurpose Old Social Media Accounts to Target Poland A factsheet published by EU DisinfoLab details how Poland's information space has been shaped by recurring disinformation waves pushed by far-right activists, politicized media, and Russia-aligned networks. The hostile influence campaigns have focused on anti-immigrant, anti-vaccine, anti-Ukrainian, and culture-war themes. Common tactics include "narrative pivoting," where repurposed accounts from the COVID era were switched to anti-Ukrainian messaging, and the amplification of rumors through bot and troll activity. Fabricated materials, such as a forged ministry letter and an AI-generated video, have also been used to inflame grievances. These information operations aim to polarize society, degrade trust in institutions, and distort policy debates on migration, public health, and EU agreements. Russia's invasion of Ukraine served as an accelerant, with Kremlin-linked narratives exploiting economic strains and election cycles. The response remains fragmented, and the politicization of public broadcasting risks laundering these narratives into the mainstream, contributing to widespread information disorder. Source: EU DisinfoLab, 'The disinformation landscape in Poland' Available Online: https://www.disinfo.eu/publications/disinformation-landscape-in-poland/ Top of Page 8. VOA Cutbacks Create Strategic Void for State-Sponsored Propaganda Operations The scaling back of U.S. international broadcasting is weakening American soft power and ceding narrative control to adversarial states in the global information war. An article in the Global Influence Operations Report (GIOR) details how the operational reductions at Voice of America (VOA) are creating a strategic vacuum that is being actively filled by Russia's RT/Sputnik apparatus and the China Media Group. These state-sponsored actors are expanding their own information operations into regions where VOA was once a primary source of independent news. This shift represents a significant setback for U.S. strategic communications, as it removes a credible voice from contested information ecosystems and emboldens authoritarian regimes. By relinquishing its role in these environments, the U.S. allows hostile actors to more easily shape perceptions and advance their geopolitical objectives without counterbalance. Source: GIOR, 'Voice of America Shutdown Benefits Russia, China: GOP Warns - Global Influence Operations Report' Available Online: https://www.global-influence-ops.com/voice-of-america-shutdown-benefits-russia-china-gop-warns/ Top of Page 9. Foreign Actors Use Coordinated Betting to Manipulate NYC Election Perceptions Blockchain analysis has revealed a coordinated effort to manipulate political prediction markets, representing a novel vector for platform-enabled foreign interference. According to a report in The New York Post, investigators found that a small number of digital wallets, funded overwhelmingly from offshore exchanges in China and the Middle East, were responsible for a disproportionate volume of bets on a New York City mayoral candidate. This activity, which appears automated and is not financially rational, constitutes a form of digital astroturfing designed to artificially inflate the candidate's perceived support. Experts warn that because media outlets and campaigns often cite these markets as indicators of public sentiment, such manipulation can distort the political narrative and potentially discourage voter turnout by creating a false sense of inevitability. The incident raises serious questions about the integrity of data from unregulated financial platforms in an electoral context. Source: The New York Post, ‘Foreign betting markets could influence NYC election — as data shows pro-Mamdani bets from China, Middle East skewing odds’ Available Online: https://nypost.com/2025/10/28/business/pro-mamdani-bets-from-china-middle-east-skewing-market-odds-experts/?utm_campaign=nypost&utm_medium=social&utm_source=twitter Top of Page
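Item 9 above describes wallet-level aggregation of bets. As a purely illustrative complement, the sketch below flags wallets that are funded from a watchlisted set of exchange addresses and account for an outsized share of market volume; all addresses, amounts, and the 20 percent threshold are invented assumptions, not findings from the Post's report.

from collections import defaultdict

# Hypothetical watchlist of offshore exchange funding addresses.
OFFSHORE_EXCHANGES = {"0xEXCHANGE_A", "0xEXCHANGE_B"}

# (wallet, funding_source, bet_amount_usd) -- toy records, not real chain data.
bets = [
    ("0xW1", "0xEXCHANGE_A", 120_000),
    ("0xW1", "0xEXCHANGE_A", 95_000),
    ("0xW2", "0xRETAIL_ONRAMP", 500),
    ("0xW3", "0xEXCHANGE_B", 80_000),
    ("0xW4", "0xRETAIL_ONRAMP", 1_200),
]

volume = defaultdict(float)
offshore_funded = set()
for wallet, source, amount in bets:
    volume[wallet] += amount
    if source in OFFSHORE_EXCHANGES:
        offshore_funded.add(wallet)

total = sum(volume.values())
for wallet, vol in sorted(volume.items(), key=lambda kv: -kv[1]):
    share = vol / total
    # Flag offshore-funded wallets holding >20% of all volume (arbitrary threshold).
    if wallet in offshore_funded and share > 0.20:
        print(f"FLAG {wallet}: {share:.0%} of market volume, offshore-funded")

Real investigations layer timing analysis and funding-graph clustering on top of this kind of aggregation, but the disproportionate-share test is the intuition the report describes.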
[CRC Glossary] The modern Information Environment is projected only to grow in sophistication and complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top of Page [Download Report] Top of Page
- CRC Weekly: Cyber-based hostile influence campaigns 20th-26th October 2025
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect. During the last week we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Weekly Review] Matryoshka Campaign Deploys Synthetic Media to Attack Journalism Credibility Russia Trains Local Journalists to Spread Pro-Kremlin Narratives in Africa Russia Pushes False Arctic Narrative to Mask Arctic Military Expansion Pro-Kremlin Actors Use AI and Data Collection to Target Ukraine-EU Relations Beijing Combines Cultural Diplomacy with AI-Driven Influence in Europe American Fugitive in Moscow Runs AI-Powered Pro-Kremlin Fake News Network Russia Engages in 'LLM Grooming' to Manipulate AI Chatbots Climate Action Hindered by Coordinated Disinformation and Greenwashing Campaigns EU-Funded 'Digital Detectives' Initiative Trains Uzbek Journalists to Counter Falsehoods Europe's Counter-Disinformation Efforts Face External Threats and Internal Resistance [CRC Glossary] [Download Report] [Report Highlights] An American fugitive in Moscow is behind a network of 141 fake news sites powered by an AI programmed to insert bizarre and irrelevant praise for Vladimir Putin into unrelated articles. - NewsGuard A strategy dubbed "LLM grooming" aims to manipulate AI chatbots by flooding the internet with pro-Kremlin content, effectively weaponizing the models to reproduce false narratives. - EUvsDisinfo Posing as legitimate news agencies, covert Russian entities are expanding hybrid warfare in Africa by training local journalists and influencers to spread pro-Kremlin narratives. - European Council on Foreign Relations To mask its own aggressive military expansion, a Russian information operation inverts reality by accusing Canada and NATO of militarizing the Arctic. - DisinfoWatch China's hybrid influence campaigns in Europe combine soft-power tactics through cultural and academic channels with advanced AI-driven digital operations. - Taipei Times Recognizing that information manipulation by fossil fuel interests is a primary obstacle to progress, the COP30 climate summit will make public trust a central issue for the first time. - Global Witness An EU-funded "Digital Detectives" project is building a nationwide network in Uzbekistan by training local experts to equip journalists and fact-checkers with advanced verification skills. - EEAS [Weekly Review] Matryoshka Campaign Deploys Synthetic Media to Attack Journalism Credibility The Russian Matryoshka network is impersonating reputable media organizations to spread fabricated stories and undermine trust in Western journalism. A report from NewsGuard details how the hostile influence campaign uses AI-generated videos and fake social media accounts to circulate false claims about political scandals in Germany and France. The videos have falsely attributed quotes to NewsGuard executives and presented entirely invented events, such as Germany suing the organization for exposing war preparations. Matryoshka's strategy mirrors the very information manipulation tactics it accuses others of employing.
Its content relies on AI voice-overs, manipulated footage, and fictitious experts, all designed to exploit real-world controversies, like France's 2023 bedbug panic, to insert Russian narratives into public discourse. The operation highlights a sophisticated use of synthetic media to attack the credibility of established news and research entities. Source: NewsGuard, Why Russia Puts Words in NewsGuard's Mouth, Available Online: https://www.newsguardrealitycheck.com/p/why-russia-puts-words-in-newsguards Top Of Page Russia Trains Local Journalists to Spread Pro-Kremlin Narratives in Africa Russia has intensified its hybrid warfare tactics in Africa, employing information operations to influence public opinion and destabilize regional politics. The Kremlin established entities like the Africa Corps and the Africa Initiative to bolster its presence and spread pro-Russian narratives across the continent. These operations involve training local journalists, influencers, and activists to disseminate content in multiple languages, including English, French, Arabic, and regional languages like Hausa and Swahili. A report by the European Council on Foreign Relations (ECFR) notes that the Africa Initiative operates covertly, posing as a news agency while engaging in information manipulation. The ECFR highlights the need for a coordinated European response, suggesting current anti-disinformation policies are ineffective. Recommendations include investing in local media and using platforms like WhatsApp to counteract hostile narratives, as Europe risks ceding influence to Russia in Africa's information ecosystem. Source: European Council on Foreign Relations, The bear and the bot farm: Countering Russian hybrid warfare in Africa, Available Online: https://ecfr.eu/publication/the-bear-and-the-bot-farm-countering-russian-hybrid-warfare-in-africa/#recommendations Top Of Page Russia Pushes False Arctic Narrative to Mask Arctic Military Expansion Russian state media is amplifying a narrative that Canada and NATO are promoting "war rhetoric" in the Arctic, while portraying Russia as a peaceful actor. This information operation inverts reality, as Russia has aggressively expanded its military infrastructure in the region since 2021, whereas recent Canadian measures are defensive. The Kremlin uses tactics including selective omission, projection, and euphemism laundering to present its maximalist Arctic claims as benign while framing allied defensive actions as provocative. The campaign is amplified through Russian diplomatic channels, Telegram, and pro-Kremlin outlets, reflecting a broader strategic goal of weakening allied cohesion and chilling Canadian Arctic policy. A DisinfoWatch report notes that by framing Russia as restrained, the campaign seeks to normalize its jurisdictional ambitions and discourage deterrence investments, following a recurring Kremlin pattern of "peaceful Russia/militarizing NATO." Source: DisinfoWatch, Russian MFA Accuses West and Canada of Militarizing The Arctic, Available Online: https://disinfowatch.org/disinfo/russian-mfa-accuses-west-and-canada-of-militarizing-the-arctic/ Top Of Page Pro-Kremlin Actors Use AI and Data Collection to Target Ukraine-EU Relations Pro-Kremlin propagandists have intensified information operations aimed at undermining Ukraine-EU relations and demoralizing Ukrainians.
According to a report by the Delegation of the European Union to Ukraine and the DARE Project, these campaigns use Telegram channels, Facebook groups, and fake news websites to spread false narratives. The fabricated stories include claims that the EU is "prolonging the war," accusations of aggressive policies toward Russia, and false stories about refugee conditions and child trade schemes. The report highlights that pro-Kremlin actors are using sophisticated strategies, including emotional manipulation, AI-generated visuals, and fake media outlets. Regional patterns revealed tailored falsehoods in Kherson, Donetsk, and Odesa, with claims about "combat moths" imported from the EU and the sale of cities to foreign interests. Some campaigns also collected personal data, illustrating a dual strategy of psychological influence and opportunistic exploitation. Source: EEAS, Results of pro-Russian information manipulation and disinformation monitoring targeting Ukraine-EU relations during June – August, 2025, Available Online: https://www.eeas.europa.eu/delegations/ukraine/results-pro-russian-information-manipulation-and-disinformation-monitoring-targeting-ukraine-eu_en Top Of Page Beijing Combines Cultural Diplomacy with AI-Driven Influence in Europe Concerns are growing over Beijing's disinformation and hybrid influence campaigns across Europe, even as some nations distance themselves diplomatically. A recent Italian Senate conference highlighted how China continues to exert pressure through psychological manipulation, propaganda, and economic coercion, despite Italy's 2023 withdrawal from the Belt and Road Initiative. As published by the Taipei Times, Chinese influence persists through academic and cultural channels, including Confucius Institutes and the suppression of performances by groups critical of the Chinese Communist Party. The digital dimension of these operations leverages platforms like DeepSeek and AI-driven tools to manipulate public perception and amplify state-controlled messaging. This technological aspect has raised alarms among European governments, which now view China's use of AI and data tracking as a severe national security threat, prompting new measures to strengthen democratic resilience and curb foreign manipulation. Source: Taipei Times, EU facing increased interference from China, Available Online: https://www.taipeitimes.com/News/editorials/archives/2025/10/26/2003787875 Top Of Page American Fugitive in Moscow Runs AI-Powered Pro-Kremlin Fake News Network John Mark Dougan, a former Florida deputy now based in Moscow, has become a key figure in Russia's digital influence operations, using a self-trained generative AI system to create large volumes of fake news. An investigation from NewsGuard identifies Dougan as part of the pro-Kremlin influence group Storm-1516. His recent campaign involves 141 French-language websites spreading Russian propaganda and false claims aimed at undermining Western democracies. A notable feature of the AI-generated articles is the consistent insertion of exaggerated and irrelevant praise for Russian President Vladimir Putin, regardless of the topic. Evidence from cybersecurity researchers suggests Dougan's AI is programmed with a pro-Russia, anti-West bias, even leaving behind visible AI prompts that instruct it on how to frame content. While Dougan denies responsibility, he has publicly boasted about receiving a Russian state honor for his "work in the information sphere." Source: NewsGuard, Russian AI Sites Can't Stop Gushing About Putin, Available Online: https://www.newsguardtech.com/special-reports/ai-driven-john-mark-dougan-pro-kremlin-disinformation-campaign/ Top Of Page
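The detail that Dougan's sites sometimes publish articles with the generation prompt still visible suggests a crude but useful screening heuristic. The following sketch scans article text for instruction-like residue; the marker phrases are illustrative assumptions, not NewsGuard's detection criteria.

import re

# Hypothetical markers of AI-prompt residue accidentally published with an article.
# These phrases are illustrative only; they are not NewsGuard's methodology.
PROMPT_MARKERS = [
    r"as an ai language model",
    r"rewrite (this|the following) article",
    r"here is (the|a) rewritten article",
    r"you are a (journalist|news editor)",
]
MARKER_RE = re.compile("|".join(PROMPT_MARKERS), re.IGNORECASE)

def prompt_residue(text: str) -> list[str]:
    """Return any instruction-like fragments found in the article text."""
    return [m.group(0) for m in MARKER_RE.finditer(text)]

# Invented sample standing in for a scraped article body.
article = (
    "Here is the rewritten article: The summit concluded yesterday with "
    "officials praising the outcome..."
)
hits = prompt_residue(article)
if hits:
    print("Possible AI-prompt residue:", hits)

A heuristic like this only catches careless operators, but at the scale of 141 near-identical sites, even crude string matching can surface candidate pages for human review.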
Russia Engages in 'LLM Grooming' to Manipulate AI Chatbots Russia has shifted its information warfare tactics to target artificial intelligence, deliberately manipulating large language models (LLMs) through a strategy known as "LLM grooming." This involves flooding the internet with millions of low-quality articles and content from pro-Kremlin websites, including the Pravda network, to ensure AI chatbots reproduce false narratives. The goal is to weaponize AI to spread misleading information, such as fabricated claims about Ukraine's President Zelenskyy. According to analysis by EUvsDisinfo, the campaigns involve multiple actors, including Russian state media, pro-Kremlin influencers, and offshoots of the Internet Research Agency. The broader significance lies in the Kremlin's ability to shape digital information ecosystems, erode trust in AI-generated knowledge, and amplify global security risks as automated disinformation becomes harder to detect and counter, threatening the integrity of online fact-finding. Source: EUvsDisinfo, Large language models: the new battlefield of Russian information warfare, Available Online: https://euvsdisinfo.eu/large-language-models-the-new-battlefield-of-russian-information-warfare/ Top Of Page Climate Action Hindered by Coordinated Disinformation and Greenwashing Campaigns Information manipulation has become one of the most significant obstacles to meaningful climate action, as fossil fuel companies and their allies use influence campaigns to cast doubt on climate science and delay policy responses. These tactics range from outright denial to more insidious strategies like greenwashing, where polluters portray themselves as environmentally responsible while expanding fossil fuel production. Social media algorithms amplify such content, rewarding polarization over accuracy. The growing recognition of this threat has pushed information integrity into the spotlight, with COP30 set to make public trust a central issue for the first time. A Global Witness article states that while informing people of the fossil fuel industry's deception can increase support for accountability, Big Tech's failure to curb falsehoods continues to erode public understanding. Experts now call for stronger oversight and education, arguing that defending information integrity is inseparable from defending the planet. Source: Global Witness, What does information integrity have to do with climate?, Available Online: https://globalwitness.org/en/campaigns/digital-threats/what-does-information-integrity-have-to-do-with-climate/ Top Of Page EU-Funded 'Digital Detectives' Initiative Trains Uzbek Journalists to Counter Falsehoods A new initiative in Uzbekistan, the "Digital Detectives" project, aims to strengthen the country's defenses against disinformation and promote media literacy. Funded by the European Union and implemented by the Modern Journalism Development Centre, the project has launched its first Training of Trainers session in Tashkent to establish a nationwide network of experts. These trainers will assist journalists and fact-checkers across Uzbekistan in identifying and countering false information more effectively.
As published by the EEAS, participants explored key fact-checking strategies, including promise tracking, detecting fake news, and utilizing digital verification tools such as the Wayback Machine. They also discussed the importance of storytelling as a method for strengthening credibility and public trust. By empowering local media professionals, the project represents a proactive effort to create a more resilient information environment and safeguard the public sphere against manipulation. Source: EEAS, “Digital Detectives” Project Launches First Training of Trainers on Fact-Checking in Uzbekistan, Available Online: https://www.eeas.europa.eu/delegations/uzbekistan/%E2%80%9Cdigital-detectives%E2%80%9D-project-launches-first-training-trainers-fact-checking-uzbekistan_en Top Of Page Europe's Counter-Disinformation Efforts Face External Threats and Internal Resistance Europe's battle against information manipulation has reached a critical turning point, as new and complex challenges undermine progress. Foreign Information Manipulation and Interference (FIMI), fueled by geopolitical conflicts and hybrid warfare, continues to expand, while generative AI has lowered the barriers for malicious actors to produce large-scale propaganda. At the same time, the fight against disinformation is facing growing internal resistance, with some nationalist movements portraying counter-disinformation efforts as censorship, thereby weakening institutional trust. A recent article from the EU Disinfo Lab notes that major digital platforms have also reversed some commitments to content moderation, allowing false narratives to spread more easily. This has created a dual threat from external state-backed propaganda and domestic disengagement. The report concludes that Europe's resilience depends on enforcing regulations, empowering civil society, and achieving strategic digital autonomy. Source: EU Disinfo Lab, Documenting the setbacks: The new environment for counter-disinformation in Europe and Germany, Available Online: https://www.disinfo.eu/publications/documenting-the-setbacks-the-new-environment-for-counter-disinformation-in-europe-and-germany/ Top Of Page [CRC Glossary] The modern Information Environment is projected only to grow in sophistication and complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page [Download Report] Top Of Page
- Tonga Before the Election: Influence and the Information Space
Background On 20 November 2025, Tongans will head to the polls to directly elect 17 representatives to the Legislative Assembly, while the country's nobles choose another nine members. The final composition of parliament comprises these 26 members plus up to four additional seats allocated under the established procedure. [i] As the only constitutional monarchy in the Pacific, Tonga blends democratic governance with deeply rooted traditional structures, where the monarchy retains significant influence over national affairs. Despite its small population of roughly 105,000, Tonga holds strategic importance in the South Pacific. [ii] It sits at the crossroads of a tense China-U.S. rivalry, with Australia and New Zealand playing a key role. Tonga's strategic location and information environment make it an interesting case study for understanding information flows and cognitive resilience in small island democracies. Influence Vectors Tonga's internal dynamics and international relations are shaped by a combination of financial dependence, migration trends, regional security cooperation, and diaspora engagement. At the same time, the country's media landscape has largely shifted to the digital realm [iii], where outlets face mounting challenges as social media increasingly dominates public discourse. This environment has made Tonga more vulnerable to information disorder [iv], illustrated by incidents such as deepfake audio clips [v], fabricated political letters [vi], and COVID-19 conspiracy theories. Although these cases have largely been domestic and organic rather than coordinated foreign operations, they underscore the country's vulnerability to information manipulation. Efforts to strengthen resilience are emerging, exemplified by local fact-checkers such as "Supa Mario" [vii], who has gained attention for his debunking work, and by education programs supported by international partners like ASPI–ROI [viii]. Nevertheless, systematic monitoring and institutional frameworks to counter information disorder remain scarce. Economic and Development Assistance Recently, the United States has reduced its direct presence in the Pacific, while Australia, Japan, and New Zealand remain Tonga's primary security, development, and disaster-response partners. They maintain military and police cooperation programs that provide training, capacity-building, and regional security coordination. Figure 1 – Development financing by partner, Courtesy of Lowy Institute [ix] Meanwhile, China's role is increasingly apparent: roughly two-thirds of Tonga's foreign debt (≈USD 195 million) is owed to Beijing. Loan servicing consumes about 4% of GDP annually [x], raising concerns about long-term strategic dependency. [xi] Chinese aid projects and infrastructure investments have increased visibly in the run-up to the 2025 elections, including a new agricultural agreement signed in October 2025. [xii] Aid, Physical Support, and On-the-Ground Presence Celebrations marking the 150th anniversary of Tonga's Constitution, held from 31 October to 4 November 2025, illustrated how external actors employ visible, on-the-ground engagement to assert presence. The Chinese Embassy sponsored the official fireworks display and supported the participation of over 300 members of the Chinese community in the float parade. Figure 2 – Posts of the Chinese Embassy in Tonga, Courtesy of Facebook Australia demonstrated its presence through the largest float parade, combining official and community representation to underline partnership and historical connection.
Both governments extended these actions to digital platforms, where their embassies documented and circulated images, official statements, and hashtags. This online communication amplified the reach of their physical presence, turning local acts of participation into enduring digital signals of influence and engagement. Figure 3 – Posts of the Australian Embassy in Tonga, Courtesy of Facebook Migration and the Local Economy In recent years, Chinese immigrants have transformed Tonga's small business landscape. Although consumers benefit from lower prices and greater availability of goods, many local businesses struggle to compete with Chinese-owned shops. Public opinion is therefore divided, with some Tongans expressing concerns over the country's financial sovereignty. [xiii] Diaspora Influence Tonga's diaspora, which is larger [xiv] than its domestic population, plays an outsized role in shaping opinions back home. Communities in Australia, New Zealand, and the U.S. frequently engage in online debates about domestic politics, often injecting or amplifying narratives from afar. In contrast, external actors' ability to leverage coordinated inauthentic behavior (CIB) is limited. Tonga's tight-knit social networks and small population size make it harder to utilize sockpuppet accounts and operational assets effectively. In essence, diaspora-based involvement acts as a force multiplier in Tonga's digital information ecosystem, primarily through Facebook, which reaches over 64% of the population. [xv] Conclusion Tonga's 2025 elections will unfold in an information environment inherently different from that of European nations, where foreign information manipulation and interference (FIMI) activities have had a significant impact. Notable examples include the recent elections in Czechia and Moldova, where such activity has been attributed to Russia. Ahead of the upcoming election, there are a few key takeaways for stakeholders, particularly Cyfluence Defence practitioners: Although there is currently no evidence indicating ongoing coordinated FIMI efforts targeting the Pacific nation and its democratic processes, past misinformation incidents expose nascent vulnerabilities. The limited analytical and monitoring capacity within Tonga's media and civil society means potential influence activities could go undetected. Empowering local institutions, including independent investigative journalism, is crucial. Media literacy and cognitive resilience must be seen as strategic assets that are essential to safeguard trust in public institutions and electoral integrity, and to ensure societal cohesion. [Footnotes:] [i] Inter-Parliamentary Union (IPU), 2025. Tonga – Legislative Assembly (Fale Alea). [online] Available at: https://data.ipu.org/parliament/TO/TO-LC01/ [ii] Congressional Research Service, J. G. Tupuola, 2025. Tonga: Background and Issues for Congress. [online] pp. 1-2. Published 11 September 2025. Available at: https://www.congress.gov/crs_external_products/IF/PDF/IF12866/IF12866.3.pdf [iii] ABC International Development, 2025. State of the Media: Tonga, 2025. [online] Published 4 March 2025. Available at: https://www.abc.net.au/abc-international-development/state-of-the-media-tonga-2025/105005712 [iv] ABC International Development, T. Kami Enoka & P.'Ulikae'eva Havili, 2023. Tonga's Star Fact-Checker Helps Fight COVID-19 Vaccine Misinformation and Government Corruption. [online] Published 14 March 2023; updated 16 March 2023.
Available at: https://www.abc.net.au/abc-international-development/pacmas-tonga-fact-checking/102073118 [v] Australian Strategic Policy Institute (ASPI), B. Johnson, F. Fakafanua & S. Vikilani, 2024. As technology distorts information, Pacific governments and media must cooperate. [online] Published 17 July 2024. Available at: https://www.aspistrategist.org.au/as-technology-distorts-information-pacific-governments-and-media-must-cooperate/#:~:text=In%20Tonga%2C%20we%20have%20also,the%20reputation%20of%20those%20involved [vi] Radio New Zealand (RNZ), 2017. Tonga police investigate letter claiming to be from PM. [online] Published 24 February 2017. Available at: https://www.rnz.co.nz/international/pacific-news/325222/tonga-police-investigate-letter-claiming-to-be-from-pm [vii] Ibid. [viii] Royal Oceania Institute, 2024. Training Program for Tonga: “Disinformation: Government and Media Challenges”. [online] Published 8 May 2024. Available at: https://royaloceaniainstitute.org/2024/05/08/training-program-for-tonga-disinformation-government-and-media-challenges/ [ix] Lowy Institute, 2025. Tonga – Pacific Aid Map. [online] Available at: https://pacificaidmap.lowyinstitute.org/country/tonga/ [x] Congressional Research Service, J. G. Tupuola, 2025. Tonga: Background and Issues for Congress. [online] pp. 1-2. Published 11 September 2025. Available at: https://www.congress.gov/crs_external_products/IF/PDF/IF12866/IF12866.3.pdf [xi] Pacific Media Network, A. Vailala, 2025. No debt forgiveness from China, analyst warns as Tonga faces repayment pressure. [online] Published 30 April 2025. Available at: https://pmn.co.nz/read/political/no-debt-forgiveness-from-china-analyst-warns-as-tonga-faces-repayment-pressure [xii] Radio New Zealand (RNZ), C. Rovoi, 2025. Tonga bets on China deal to modernise farming ahead of general election. [online] Published 30 October 2025. Available at: https://www.rnz.co.nz/international/pacific-news/577307/tonga-bets-on-china-deal-to-modernise-farming-ahead-of-general-election [xiii] Tonga Independent News, 2025. ‘Trust Is More Important Than Money’: Inside One Chinese Businessman’s Vision for Tonga. [online] Published 14 August 2025. Available at: https://tongaindependent.com/trust-is-more-important-than-money-inside-one-chinese-businessmans-vision-for-tonga/ [xiv] United Nations, 2022. The Kingdom of Tonga: National Voluntary GCM Review – Implementing the Global Compact for Safe, Orderly and Regular Migration. [online] Published 2022. Available at: https://www.un.org/sites/un2.un.org/files/imrf-tonga.pdf [xv] DataReportal, 2024. Digital 2024: Tonga. [online] Published 2024. Available at: https://datareportal.com/reports/digital-2024-tonga
- Information Warfare in the Early Stages of the Russia-Ukraine War
The prelude and opening stages of Russia's 2022 invasion of Ukraine marked one of history's most intense periods of hostile cyber and influence activity. Alongside conventional warfare, both states engaged in a sophisticated battle for influence, deploying digital propaganda, psychological operations, and cyberattacks. This study examines the conflict's information dimension from late 2021 to April 2022 via a novel analytical paradigm adapted from strategic marketing and audience segmentation. By focusing on who the target is, when they are susceptible, and how operations are executed, analysts can systematically map cyber, influence, and hybrid (Cyfluence) operations across time and audience, identifying strategic and operational intent, as well as potential cardinal indicators of conflict escalation. Applying this analytical model to the early stages of the Russia-Ukraine Information War provides valuable insights and strategic context from a pivotal moment in the evolution of hybrid warfare. The analysis breaks down the key events and expands on their strategic and operational implications. The lessons drawn from this analysis are relevant for countries in Southeast Asia and the Indo-Pacific as they grapple with the realization that they, too, may face a threat similar to the one Ukraine confronted. China, for example, is closely following Russia's playbook, coordinating with Russian cyber-influence agencies, and has shown willingness to deploy its own advanced capabilities in the region. European countries, while more familiar with Russian doctrines of hybrid warfare, face the more immediate prospect of a future hybrid conflict in their own backyard. They too might benefit from a new analytical model for predicting, detecting, and defending against future hybrid conflicts. [Full Report Below]
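To make the who/when/how paradigm described above concrete, here is a minimal, hypothetical sketch of how observed operations might be encoded and grouped along those three axes; the sample records and labels are invented for illustration and are not findings from the report.

from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Operation:
    who: str    # target audience segment
    when: str   # conflict phase in which the audience is most susceptible
    how: str    # execution vector: cyber, influence, or hybrid (Cyfluence)

# Invented sample records standing in for catalogued operations.
observed = [
    Operation("Ukrainian civilians", "pre-invasion", "influence"),
    Operation("Ukrainian civilians", "opening assault", "hybrid"),
    Operation("Western publics", "pre-invasion", "influence"),
    Operation("Ukrainian government", "opening assault", "cyber"),
]

# Count operations per (who, when) cell to expose where effort concentrates;
# a sudden spike in one cell is the kind of cardinal indicator the model looks for.
cells = Counter((op.who, op.when) for op in observed)
for (who, when), n in cells.most_common():
    print(f"{who:24} | {when:16} | {n} operation(s)")

The value of the encoding is less in the code than in the discipline it imposes: every observed operation must be assigned a target, a phase, and a vector before it can be compared across the timeline.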
- Cyfluence: The Latest Frontier of Cognitive Warfare
The term 'Cyfluence' refers to the full spectrum of integrated cyber–influence operations that combine technical and informational tactics within a unified framework. It encompasses both cyberattacks conducted to shape perceptions or behavior and influence campaigns designed to facilitate or enhance cyber operations. In practice, Cyfluence represents the convergence of technical infiltration, sabotage, data exfiltration, information manipulation, and narrative campaigns - all embedded within mutually reinforcing, influence-centered kill chains. It is the comprehensive expression of how power is applied and projected across today's interconnected information environments. In this primer, we present an updated definition of Cyfluence, reflecting the latest evolutions of the concept and the increasing convergence of cyber and cognitive domains. [Download PDF Here]
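As a minimal illustration of an influence-centered kill chain, the sketch below models a Cyfluence operation as an ordered sequence of interleaved technical and informational stages; the stage names are assumptions chosen for illustration, not a canonical CRC taxonomy.

from enum import Enum

class Domain(Enum):
    CYBER = "cyber"
    INFLUENCE = "influence"

# (stage, domain) pairs -- the ordering is what makes the stages mutually
# reinforcing: each technical step sets up an informational one, and vice versa.
# Stage names are illustrative, not a canonical CRC taxonomy.
KILL_CHAIN = [
    ("reconnaissance of target audience", Domain.INFLUENCE),
    ("technical infiltration", Domain.CYBER),
    ("data exfiltration", Domain.CYBER),
    ("narrative crafting from stolen data", Domain.INFLUENCE),
    ("amplification via inauthentic accounts", Domain.INFLUENCE),
    ("disruptive attack timed to the narrative", Domain.CYBER),
]

for i, (stage, domain) in enumerate(KILL_CHAIN, start=1):
    print(f"{i}. [{domain.value:9}] {stage}")

Walking the chain in order shows the convergence the primer describes: no single stage is decisive, but each one raises the impact of the next.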