
CRC Weekly: Cyber-based hostile influence campaigns 10th-16th November 2025

  • Writer: CRC
[Cover image: Weekly Media Update: Information Operations]

[Introduction]


Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.


Over the past week, we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week’s report summarizes what we regard as the main events.







[Report Highlights]


Researchers have developed a proactive public defense tool that allows individuals to corrupt their own image data, thereby disrupting the supply chain for malicious deepfake generation - TechXplore


A Carnegie Europe article argues that China has become the “weak link” in Europe’s strategy against Russia, actively undermining Western unity and sanctions enforcement - Carnegie Europe


Research using computer models suggests that introducing small obstacles, or “digital friction,” into the social media sharing process can significantly slow the spread of misinformation - TechXplore


A critical long-term threat of Grokipedia is that its poorly sourced articles could be ingested as training data, creating a self-reinforcing chain of systemic misinformation for future AI models - TechXplore


Analysis of a viral video threat against a U.S. commentator revealed linguistic cues and production styles consistent with the known Russian influence actor Storm-1516/“CopyCop” - DisinfoWatch


Moscow’s early and aggressive influence campaign targeting Armenia’s 2026 parliamentary elections is seen as a response to the country’s geopolitical pivot toward Europe and away from its traditional alliance with Russia - NewsGuard


Users were observed citing Grok’s AI-generated falsehoods about the 2020 election as “evidence” of fraud, demonstrating a pathway for laundering synthetic content into online discourse - NewsGuard


The dissemination of a fabricated story about nuclear waste in Armenia followed a recurring pattern of Russian-linked TTPs, where disinformation is seeded on dubious foreign websites and amplified cross-platform - DFRLab


Election-integrity experts warn that without stronger federal regulations, adversarial nations could exploit U.S. non-profit networks to seed disinformation and manipulate public opinion - The Daily Wire


The EU’s new Democracy Shield initiative represents a strategic shift from reactive analysis to a proactive and aggressive defense against Foreign Information Manipulation and Interference (FIMI) - EU vs. Disinfo


The EU’s training of journalists in Botswana reflects a broader strategy of using targeted capacity-building to protect freedom of expression and counter disinformation in partner nations - EU Delegation to Botswana




[Weekly Review]



1. Russian Network Storm-1516 Targets Erika Kirk with Fabricated Video


A report from DisinfoWatch has detailed a viral video that falsely depicted Ukrainian soldiers threatening a conservative American commentator. The video, which circulated on X, was found by BBC Verify and independent researchers to have no connection to Ukraine’s armed forces. Instead, the investigation revealed that linguistic cues, production style, and digital dissemination patterns were consistent with Russia’s Storm-1516/“CopyCop” network, which is known for producing staged propaganda using actors and fake uniforms. This hostile influence campaign aligns with the network’s established tactics, techniques, and procedures, which have been previously documented by Microsoft and French authorities in relation to fabricated content targeting Western political figures and elections. The video was initially shared by a partisan U.S. account before being amplified across Telegram and far-right websites, demonstrating a clear pathway for injecting foreign-produced disinformation into domestic political discourse. The incident serves as another example of a state-linked actor creating synthetic content for a targeted influence operation.


Source: DisinfoWatch, ‘Russia’s Storm-1516 Fabricates “Ukrainian” Threat to Erika Kirk’


2. Armenian Election Targeted Early and Viciously by Russian Campaign


Russia has initiated a hostile influence campaign targeting Armenia’s parliamentary elections more than a year in advance, an unusually early start for such operations. A NewsGuard investigation found that since April 2025, Moscow’s propaganda network has propagated at least 18 distinct fabricated narratives attacking Armenia’s pro-Western government. The campaign utilized digital tactics, including AI-generated news sites, coordinated social media networks, and manipulated videos distributed across 11 platforms in 8 languages, achieving an estimated 45 million views. This operation, attributed to Russian influence groups like Storm-1516, deploys narratives centered on corruption and the betrayal of national sovereignty, mirroring tactics previously used against other European nations. The campaign’s goal is to erode public confidence in the current leadership ahead of the 2026 vote, reflecting Moscow’s response to Armenia’s geopolitical shift toward Europe. Armenian officials have acknowledged the challenge of countering these accelerating propaganda attacks.


Source: NewsGuard, ‘Russia Targets Armenia’s Elections – Early and Viciously’


3. China’s Coordinated Disinformation Amplifies Russia’s Hybrid War in Europe


An article from Carnegie Europe asserts that China is becoming a central actor in Russia’s hybrid war against Europe through expanded economic, technological, and informational support. The publication highlights the increasing coordination of disinformation strategies between Beijing and Moscow. This is most evident in Chinese campaigns, such as Spamouflage, which are subsequently amplified by Russian state media and diplomatic channels. Both state actors deploy synchronized narratives that blame the West for the war in Ukraine, leveraging digital tools like generative AI and fake news portals to manipulate public perception and reinforce mutual messaging. This coordinated information manipulation is identified as one component of a broader hybrid strategy that also includes cyberattacks and espionage. The article warns that this growing China-Russia disinformation ecosystem undermines Western unity and sanctions enforcement, ultimately enabling Moscow to prolong its aggression. It concludes that stronger, jointly enforced sanctions and deeper international cooperation are needed to counter this threat.


Source: Carnegie Europe, ‘China is the Weak Link in Europe’s Ukraine Strategy’ Available Online: https://carnegieendowment.org/europe/strategic-europe/2025/11/china-is-the-weak-link-in-europes-ukraine-strategy?lang=en


4. NewsGuard Finds Grok Still Spreading 2020 U.S. Election Falsehoods


The AI chatbot Grok has confidently and incorrectly stated that Donald Trump won the 2020 U.S. presidential election in multiple instances. A NewsGuard report identified that although the chatbot initially provided the certified result, users who repeatedly prodded it were able to elicit false claims asserting a “legitimate win” for Trump. These outputs occurred despite the absence of evidence for widespread fraud and the election’s certification by all relevant authorities. The findings raise concerns about the platform’s potential to spread election denialism, especially given the high volume of user engagement with Grok as a fact-checking tool. NewsGuard also found examples of users citing the chatbot’s erroneous responses as evidence of election fraud, demonstrating how AI-generated falsehoods can be laundered into online discourse. This behavior is part of a pattern, as previous investigations found Grok spreading other unfounded claims. When asked for comment, Grok reaffirmed the correct election outcome but insisted no correction was necessary.


Source: NewsGuard, ‘X’s Grok Claims Trump Won the 2020 Election’


5. Grokipedia’s Reliance on Questionable Sources Risks Influencing Other AI Models


According to a TechXplore article, the AI-generated encyclopedia Grokipedia, launched by Elon Musk’s xAI, poses a significant risk to the digital information environment due to its reliance on problematic sources. The platform uses its underlying AI model, Grok, to generate content with limited transparency, resulting in thousands of citations to “questionable” sources, particularly for politically contested topics. Critics cited in the article note that the content often aligns with right-wing perspectives and appears to downplay controversies involving Musk while legitimizing conspiracy theories. By visually mimicking a trusted resource like Wikipedia but using inferior sourcing, Grokipedia could erode public trust and mainstream fringe narratives. The most critical long-term threat identified is the potential for Grokipedia’s poorly sourced articles to be ingested as training data for future generative AI models. This could create a self-reinforcing chain of errors, systematically embedding misinformation and algorithmic bias into the global information infrastructure.


Source: TechXplore, ‘Musk’s Grokipedia leans on ‘questionable’ sources, study says’


6. Forged News Site Fuels Cross-Platform Influence Campaign Targeting Armenia


A coordinated disinformation campaign used a forged media outlet and impersonated NGOs to falsely claim France had dumped nuclear waste in Armenia. An investigation from DFRLab details how the operation began with a fabricated article on a fake website designed to mimic the legitimate news organization France 24. The narrative was then amplified by a website impersonating a real Armenian environmental NGO; this impersonator site appears to be a deep asset, having been revived after years of dormancy with backdated content to simulate legitimacy. The false claim was disseminated across social media by high-volume accounts, including bots and pro-Kremlin amplifiers, which targeted French politicians and Armenian communities. Azerbaijani state-aligned media further amplified the narrative. This operation follows a recurring pattern of hostile influence campaigns against Armenia that seed disinformation on dubious foreign websites impersonating legitimate media, a tactic consistent with Russia-linked operations like Storm-1516.


Source: DFRLab, ‘How a forged France 24 report fueled a radioactive lie concerning France-Armenia relations’ Available Online: https://dfrlab.org/2025/11/11/how-a-forged-france-24-report-fueled-a-radioactive-lie-concerning-france-armenia-relations/


7. Defensive AI Tool ‘Silverer’ Uses Data Poisoning to Combat Deepfakes


TechXplore reports on the development of ‘Silverer,’ a defensive AI tool designed to combat the creation of malicious deepfake images and videos. Developed by researchers at Monash University and the Australian Federal Police, the tool employs a technique known as data poisoning. This method involves subtly altering an image’s pixels before it is uploaded online. While the alterations are invisible to the human eye, they are engineered to deceive the large machine learning models used by malicious actors to train their generative AI systems. The intended result is that any AI-generated content produced from this poisoned data will be of low quality, skewed, or completely unrecognizable. The ‘Silverer’ tool represents a significant shift toward a proactive defense, empowering the public to protect their images from being used in the datasets that fuel the creation of hyper-realistic synthetic media. By corrupting the data supply chain, the initiative aims to raise the costs and slow the production of malicious deepfakes.
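
The researchers have not published Silverer’s exact perturbation algorithm, so the minimal sketch below only illustrates the general idea the article describes: altering pixels within a bound small enough to be invisible to humans while still changing the data a scraped training set would contain. The function name, the epsilon value, and the noise-based approach are illustrative assumptions; real poisoning tools optimize perturbations adversarially against a target model’s feature space rather than using random noise.

```python
# Illustrative sketch of pixel-level data poisoning. This is NOT Silverer's
# actual algorithm (which is unpublished); it only demonstrates the
# imperceptible-modification step. A production tool would compute the
# perturbation adversarially against a generative model's feature extractor.
import numpy as np
from PIL import Image

def poison_image(path_in: str, path_out: str, epsilon: int = 3, seed: int = 42) -> None:
    """Apply a seeded, imperceptible +/-epsilon perturbation to every pixel."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed)
    # Noise in [-epsilon, +epsilon]; epsilon=3 out of 255 is invisible to the eye.
    noise = rng.integers(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out, quality=95)

# Hypothetical filenames, for illustration only.
poison_image("portrait.jpg", "portrait_protected.jpg")
```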


Source: TechXplore, ‘Poisoned pixels: New AI tool to fight malicious deepfake images’



8. Proposed U.S. Legislation Seeks to Counter Foreign Election Influence


New legislation proposed by Senator Bill Hagerty aims to address federal loopholes that permit foreign nationals to indirectly influence U.S. elections. According to an article by The Daily Wire, election-integrity experts warn that these gaps allow for covert political pressure and create vulnerabilities to foreign-funded issue campaigns that can distort policy debates. While direct contributions to candidates are illegal, foreign funding can still flow to ballot initiatives and voter registration drives. Advocacy groups argue that this involvement is extensive, with billions invested by overseas donors in U.S. policy organizations, funding litigation and lobbying that can shape public narratives. Such undisclosed funding can enable sophisticated influence strategies that appear domestic but are driven by foreign interests. Experts cited in the report warn that without stronger federal regulations, adversarial nations could exploit these same channels to seed misleading information, manipulate public opinion, and compromise the integrity of the American information environment.


Source: The Daily Wire



9. Minor Digital Frictions Could Slow the Online Spread of Misinformation


New research from the University of Copenhagen, reported by TechXplore, suggests that integrating small obstacles into the social media sharing process can significantly reduce the speed at which misinformation spreads. Using computer models simulating platforms like X and Mastodon, researchers found that minimal “digital friction,” such as brief pauses or pop-up prompts, encourages users to think twice before sharing sensational or false content, thereby slowing its overall diffusion across a network. The study also found that while friction alone can slow the spread, it does not necessarily improve the quality of shared content. However, when a brief learning component—such as a pop-up quiz about how misinformation works—was added to the friction, the model showed a notable improvement in the quality of information being shared. The researchers hope to test these interventions in real-world settings and collaborate with technology companies to evaluate their effectiveness in reducing engagement with low-quality posts.
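
The article does not reproduce the Copenhagen team’s model specification, so the toy cascade simulation below is only a sketch of the mechanism being described, with all parameter names and values as illustrative assumptions: scaling down each user’s resharing probability (the “friction”) shrinks a post’s effective branching factor, and with it the post’s total reach.

```python
# Toy diffusion model on a random follower network (illustrative assumptions
# only; not the Copenhagen team's actual model). Friction scales down the
# probability that an exposed follower reshares a post.
import random

def simulate_spread(n_users=10_000, avg_followers=20,
                    base_share_prob=0.08, friction=0.0, seed=1):
    rng = random.Random(seed)
    # Random directed graph: each user broadcasts to avg_followers others.
    followers = [[rng.randrange(n_users) for _ in range(avg_followers)]
                 for _ in range(n_users)]
    share_prob = base_share_prob * (1.0 - friction)  # friction lowers resharing
    seen, frontier = {0}, [0]  # the post starts with user 0
    while frontier:
        nxt = []
        for user in frontier:
            for f in followers[user]:
                # Each newly exposed follower reshares with probability share_prob.
                if f not in seen and rng.random() < share_prob:
                    seen.add(f)
                    nxt.append(f)
        frontier = nxt
    return len(seen)  # total number of users the post reached

# With these toy numbers, friction=0 gives a branching factor of 1.6 (a large
# cascade), while friction=0.5 drops it to 0.8 and the cascade dies out.
for friction in (0.0, 0.25, 0.5):
    print(f"friction={friction:.2f} -> reach={simulate_spread(friction=friction)}")
```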


Source: TechXplore, ‘Small digital frictions can slow the spread of misinformation’


10. Democracy Shield Launched by EU to Counter Foreign Interference


The European Union has established the European Democracy Shield to counter the threat of Foreign Information Manipulation and Interference (FIMI). An article from EU vs. Disinfo explains that the joint initiative by the European Commission and the European External Action Service addresses how Russia and other autocratic actors exploit open democratic information spaces as a “geopolitical battleground.” The Shield represents a strategic shift from reactive detection and analysis to a more proactive and aggressive defense against persistent FIMI campaigns and state propaganda. The initiative is designed to reinforce three pillars of governance: protecting the integrity of the information space, ensuring free and fair elections, and building societal resilience. Its external dimension will scale up counter-FIMI training through EU Delegations and deepen collaboration with international partners like the G7 and NATO. A core component of the new strategy is to “raise the costs” for FIMI actors by publicly exposing manipulators and imposing sanctions.


Source: EU vs. Disinfo, ‘European Democracy Shield and EU Strategy for Civil Society pave the way for stronger and more resilient democracies’ Available Online: https://euvsdisinfo.eu/the-european-democracy-shield/


11. EU Initiative Trains Botswana Journalists in Counter-Disinformation


A press release from the EU Delegation to Botswana details a training session held for local journalists to address the proliferation of misinformation and disinformation. The initiative, a collaboration with the fact-checking organization Africa Check, aimed to equip media representatives with the skills needed to counter manipulated information that erodes public trust. The core counter-strategy taught was rigorous fact-checking and verification, which was described by the Botswana Editors Forum Chairman as a “moral discipline” for safeguarding newsroom integrity. The workshop provided journalists with practical tools, such as reverse image search, and techniques to verify the authenticity of content in a complex information ecosystem. This initiative reflects the EU’s broader commitment to supporting democratic resilience and independent media through partnerships. By providing these essential skills, the EU aims to assist journalists in protecting freedom of expression and strengthening the local information environment against malign influence.


Source: EU Delegation to Botswana, ‘Building media capacity and promoting democracy through fact-checking and verification’ Available Online: https://www.eeas.europa.eu/delegations/botswana/building-media-capacity-and-promoting-democracy-through-fact-checking-and-verification_en


[CRC Glossary]


The modern information environment is projected to grow only more complex and sophisticated. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.



[Download Report]
