Weekly: Cyber-based hostile influence campaigns 24th-30th November 2025


[Introduction]


Cyber-based hostile influence campaigns seek to shape the perceptions of target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.


During the past week we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.



[Contents]



[Social Media Platforms]


[State Actors]

Russia

[The War in Ukraine]


[AI Related Articles]


[General Reports]


[Appendix - Frameworks to Counter Disinformation]





[Report Highlights]


· According to an article by ABC, Elon Musk’s social platform X introduced a global location-display feature intended to verify account authenticity and enhance platform integrity. Instead, the tool quickly generated confusion, inaccuracies, and opportunities for misinformation.


· An article published by DW states that Russia is intensifying a coordinated disinformation campaign ahead of Armenia’s 2026 elections, using AI-generated content, bot networks, and impersonations of media outlets to undermine Prime Minister Nikol Pashinyan and discredit Armenia’s Western orientation.


· As reported by DFR Lab, in August and September, pro-Russia media promoted two coordinated disinformation campaigns aimed at weakening Western support for Ukraine.


· As stated in a Stop Fake report, since Russia invaded Ukraine in 2022, Kyiv has actively sought foreign volunteers, with a notable surge from Latin America, especially Colombia.


· According to a report by the Institute for Strategic Dialogue (ISD), the 2025 Czech parliamentary elections unfolded amid an increasingly complex information environment shaped by long-standing foreign influence efforts, particularly from Russian-linked networks.





[Social Media Platforms]



New location feature on X 'weaponized' to spread misinformation

According to an ABC article, Elon Musk’s social platform X introduced a global location-display feature intended to verify account authenticity and enhance platform integrity. Instead, the tool quickly generated confusion, inaccuracies, and opportunities for misinformation. Experts warned that the feature amounted to “weaponized decontextualization,” as accounts were assigned incorrect countries due to VPN use, outdated data, or technical errors.


These inaccuracies were quickly exploited to question the legitimacy of users, including journalists reporting on sensitive conflicts. Several prominent accounts, including those from Australian media outlets, government institutions, and academics, were mislabeled, further eroding trust in the platform. The feature not only failed to improve transparency but also breached user trust by publicly revealing information that many did not expect to be shared. While aggregated location signals may have some limited value in detecting coordinated inauthentic behavior, the feature is unreliable for assessing individual accounts.


Moreover, NewsGuard's Reality Check report showed that the feature has already triggered a wave of deliberate misuse, with partisan X accounts fabricating screenshots to falsely portray political figures and commentators as operating from foreign countries. NewsGuard identified multiple falsified images circulating widely, including a fake screenshot suggesting U.S. commentator Nick Fuentes was based in Nigeria and another claiming U.K. Prime Minister Keir Starmer’s account originated in Ukraine. In both cases, users misrepresented impostor accounts or manufactured location data to fuel political narratives.


Source: ABC, NewsGuard, ‘New location feature on Elon Musk's X 'weaponised' to spread misinformation’


[State Actors]


Russia

Russian Disinformation Targeting Georgia

According to an article by DFR Lab, sanctioned Russian propaganda networks are amplifying disinformation to undermine Georgia’s pro-democracy movement and legitimize anti-Western narratives. As protests against the increasingly authoritarian government continue, websites in English, French, and Polish circulate claims that the West is plotting coups, engineering unrest, or manipulating Georgia’s political future. These narratives mirror Kremlin talking points and echo the ruling Georgian Dream party’s rhetoric.


A key disinformation hub is The Intel Drop, which republishes content from Russian intelligence-linked outlets such as New Eastern Outlook and South Front. Its articles often recycle false claims that Georgian elections are Western provocations, that Ukrainian operatives smuggle explosives to destabilize the country, or that past uprisings like the Rose Revolution and Maidan were CIA-led coups. These stories distort isolated events, dismiss independent election observations, and portray the EU and US as hostile actors attempting to drag Georgia into conflicts with Russia. The same narratives are spread through mirror sites like newsnet.fr in France and Canal666 in Poland, both of which systematically launder material from sanctioned Russian sources.


Source: DFR Lab, ‘Suspicious websites amplify narratives from sanctioned Russian entities targeting Georgia’


Russia Escalates Coordinated Disinformation Campaign Ahead of Armenia’s 2026 Elections

An article published by DW states that Russia is intensifying a coordinated disinformation campaign to undermine Armenia’s Western-leaning government ahead of the June 2026 parliamentary election. Experts report a sharp rise in anti-government narratives targeting Prime Minister Nikol Pashinyan, characterized by greater organization, faster circulation, and more precise targeting than in previous years. These efforts include AI-generated photos, audio, and deepfakes, which are appearing in Armenia’s information space for the first time; their overarching goal is to erode public trust in Armenian institutions, discredit the country’s Western partnerships, and open political space for more Kremlin-aligned actors.


The campaign integrates multiple tactics and actors. Bot networks such as Matryoshka, previously active in other countries, have begun distributing AI-generated videos mimicking Western media and accusing Pashinyan of destroying Armenia’s cultural foundations and promoting “non-traditional values.” Russian and pro-Russian actors are also employing “doppelganger” techniques, creating websites that imitate legitimate outlets to spread fabricated scandals and corruption stories. One such operation pushed a false claim about US-funded laboratories conducting secret experiments on Armenian civilians, content traced to the Russian Foundation to Battle Injustice, widely viewed as a Kremlin-aligned disinformation front. These narratives are reinforced by local pro-Russian voices as well as Azerbaijani and Turkish actors, forming a broader hostile information ecosystem.


Armenian officials describe this pressure as a form of hybrid warfare, noting that Russian commentators regularly call for political change in Armenia and that such interference is expected to intensify as the election approaches. While Moscow denies involvement, researchers emphasize that the tactics now deployed in Armenia mirror those used by Kremlin-linked actors in Georgia and Moldova, including the rapid creation of impersonating websites and the coordinated amplification of hostile messaging. As Armenia’s relations with Russia have deteriorated since Pashinyan’s rise in 2018 and Russia’s perceived failure to meet security obligations in 2021–2022, the Kremlin’s disinformation efforts appear focused on reshaping Armenia’s political orientation ahead of 2026.


Source: DW, ‘Russia's disinformation campaign in Armenia gains momentum’



[The War in Ukraine]



Kremlin Attacks Ukraine Through Hack-and-Leak Narratives

As reported by DFR Lab, in August and September, pro-Russia media promoted two coordinated disinformation campaigns aimed at weakening Western support for Ukraine. Both used alleged “hack-and-leak” operations to create the illusion of insider access to sensitive data. The first claimed hacktivists had obtained Ukrainian military records proving 1.7 million soldiers were dead. Launched on Telegram and reinforced by Kremlin-aligned outlets like MASH and RIA Novosti, the narrative used unverifiable documents and sensational images to frame Ukraine as militarily devastated. The second campaign, originating from a fringe Italian outlet, alleged that Western states planned to divide Ukraine and seize control of its territory under the guise of military aid, a claim later amplified by Russian media and repeated by Hungary’s prime minister to lend it legitimacy.


These narratives exploited information gaps and wartime uncertainty to question the value of Western aid and Ukraine’s resilience. Casualty figures, rarely disclosed by Kyiv and inconsistently estimated by analysts, became a key vulnerability, allowing propaganda accounts to present fabricated numbers as “leaked truth.” The messaging portrayed Ukraine as doomed, corrupt, and dependent, arguing that Western efforts were pointless. The distribution channels underscored how disinformation spreads rapidly without formal coordination. Telegram was the primary vector, with thousands of small accounts pushing the narratives and millions of views accumulated in under a week. The stories then jumped to TikTok, Facebook, and fringe media, often through supposedly personal accounts that exclusively repost pro-Kremlin content. Amplification by Russian state media, pro-war bloggers, and sympathetic political figures helped move the claims from anonymous sources into mainstream discourse.


Source: DFR Lab, ‘Pro-Kremlin actors seed narratives to undermine Western military aid to Ukraine’ Available Online: https://dfrlab.org/2025/11/28/pro-kremlin-actors-seed-narratives-to-undermine-western-military-aid-to-ukraine/



Disinformation Surrounding Colombian Volunteers in Ukraine

As stated in a Stop Fake report, since Russia invaded Ukraine in 2022, Kyiv has actively sought foreign volunteers, with a notable surge from Latin America, especially Colombia. The Ukrainian Ministry of Defense promotes recruitment by sharing testimonies and success stories, but these efforts are heavily targeted by Kremlin-led disinformation. Russia spreads false or decontextualized content to discourage volunteers and undermine trust in Ukraine, using AI-generated imagery, fabricated media logos, and exaggerated claims about casualties or organ trafficking. These tactics often blend real incidents with false narratives, making them appear more credible.


Colombian volunteers are at the center of these campaigns. Disinformers circulate stories of mistreatment, unpaid soldiers, and claims that Ukraine refuses to return bodies, manipulating tragedies like the missing soldier Michael Ramírez. AI-generated videos portraying forced conscription, fake recruitment posters offering unrealistic benefits, or offensive claims about “inferior races” aim to inflame public opinion and reduce volunteer enlistment. Even Colombian government warnings are picked up by Russian channels to amplify fear and distrust. The broader objective is to shape a perception of Ukraine as abusive, corrupt, and militarily failing.


Source: Stop Fake, ‘Russian disinformation campaigns about foreign volunteers in the Ukrainian Armed Forces: fakes about losses and the "black market for organs"’ Available Online: https://www.stopfake.org/ru/massovaya-gibel-inostrannyh-dobrovoltsev-i-ih-organy-na-chyornom-rynke-rossijskie-dezinformatsionnye-kampanii-protiv-mezhdunarodnoj-mobilizatsii-v-vsu/



Kremlin’s Fake Peace Rhetoric About the War in Ukraine

According to a report by EU vs. Disinfo, Russia has repeatedly used diplomacy as a façade while pursuing domination over its neighbors. Despite public statements about peace, the Kremlin runs foreign information manipulation and interference (FIMI) campaigns to paint Moscow as reasonable and Ukraine as the aggressor. This strategy has been visible for decades, from covert backing of separatists in Moldova and Georgia to denying direct involvement while entrenching troops and recognizing breakaway regions as “independent.”


In Ukraine, disinformation is central to Russia’s war. After annexing Crimea and secretly intervening in Donbas, Moscow signed the Minsk agreements with no intention of complying. Instead, it violated ceasefires, supported separatists, and accused Ukraine of blocking peace. Putin now claims Russia did not start the war and is merely protecting Donbas, narratives that ignore overwhelming evidence of Russian orchestration and military aggression. This pattern continues today: missile and drone attacks often follow peace talks, as seen after negotiations in Geneva. Meanwhile, Russian propaganda blames Ukraine, the EU, and the UK for “sabotaging peace.”


Source: EU vs. Disinfo, ‘Russia’s peace rhetoric: A smokescreen for aggression’ Available Online: https://euvsdisinfo.eu/russias-peace-rhetoric-a-smokescreen-for-aggression/



[AI Related Articles]


AI Creates Fake “Digital Reincarnations”

According to a report by CyberNews, AI-generated “digital reincarnations” are becoming a disturbing tool for disinformation and emotional manipulation. Platforms like TikTok struggle to detect or remove deepfakes that impersonate deceased individuals, as seen in cases where dozens of fake victim videos remained online until media pressure intervened. Public figures such as Zelda Williams have demanded that people stop sending AI recreations of their late relatives, while systems like OpenAI’s Sora have been exploited to generate offensive deepfakes of icons like Martin Luther King Jr. Griefbots (AI simulations designed to mimic lost loved ones) blur the line between comfort and manipulation, and AI ethicists warn that users can develop emotional dependence akin to relationships with AI companions.


Legal protections lag behind technology. In the US, most deceased individuals have little defense against deepfakes, as federal law does not grant posthumous publicity rights, and state protections primarily focus on commercial use. Newer laws like Tennessee’s ELVIS Act and New York’s Digital Replica law attempt to curb non-consensual AI clones, but enforcement is costly and complex. Until comprehensive legislation, such as the proposed No Fakes Act, takes effect, reporting deepfakes to platforms and setting explicit digital rights in estate documents remain among the few practical safeguards against these evolving disinformation threats.


Source: CyberNews, 'AI deepfakes target the dead'



Disinformation in Political Campaigns Beyond AI

According to an article by The Hill, recent political ads, such as the AI-generated video of Sen. Jon Ossoff, highlight a growing problem: campaigns are using digital tools to fabricate statements and simulate media coverage. However, the core issue is not the technology itself, but the deliberate spreading of falsehoods. Historical examples, from Plato’s warnings to wartime propaganda, show that disinformation long predates artificial intelligence. AI makes fabrication faster, cheaper, and harder to detect, allowing deceptive political messages to circulate widely and gain legitimacy.


The danger of AI-assisted disinformation lies in its ability to blur reality. In one case, a political group created a video of Sen. Chuck Schumer delivering real quotes, even though no such footage existed, raising ethical questions about how information is presented. The more AI is used to generate convincing visuals or voices, the easier it becomes to manipulate public opinion while claiming authenticity. This creates “gray areas” where technically accurate content can be delivered in deceptive formats, undermining trust. The broader risk is erosion of public confidence. As AI headlines dominate, voters may begin dismissing both real and false information as “just AI,” feeding cynicism and weakening democracy.


Source: The Hill, ‘The use of AI-generated deepfakes in campaign ads: Lying is the problem, not AI’



[General Reports]


Influential ‘U.S.’ X Accounts Spread False Claims from Abroad

A NewsGuard Reality Check investigation found that several influential pro-Trump accounts on X, which appeared to be operated by Americans, were in fact run from countries including Nigeria, Thailand, Bangladesh, Turkey, and Eastern Europe. These covertly foreign accounts collectively spread at least 31 significant false claims over the past 15 months, many targeting U.S. political figures, electoral processes, and international affairs. The activity came to light after X introduced a geolocation feature intended to improve transparency, revealing that some of the platform’s most prominent “American” political accounts were actually based overseas.


Three high-impact accounts, posing as fan pages for Ivanka Trump, Barron Trump, and conservative influencer Charlie Kirk, reached nearly two million followers while amplifying narratives debunked in NewsGuard’s False Claims Fingerprint database. Falsehoods ranged from fabricated allegations against Democratic leaders to disinformation about supposed Ukrainian corruption and staged military movements inside the United States. Some claims echoed well-known pro-Kremlin narratives, further highlighting the manipulation's geopolitical dimension. While X suspended one Nigeria-based account, the platform has not commented on the broader pattern of foreign operations.


Source: NewsGuard, ‘X Accounts Masquerading as American Spread 31 Significant False Claims About U.S. Politics from Abroad’ Available Online: https://www.newsguardrealitycheck.com/i/179868003/x-accounts-masquerading-as-american-spread-significant-false-claims-about-us-politics-from-abroad




Foreign Disinformation During the 2025 Czech Parliamentary Elections

According to a report by ISD, the 2025 Czech parliamentary elections unfolded amid an increasingly complex information environment, shaped by long-standing foreign influence efforts, particularly from Russian-linked networks. Although disinformation actors attempted to undermine trust in institutions, promote narratives of electoral fraud, and amplify societal divisions, investigators found no evidence of large-scale, coordinated foreign interference capable of affecting the election outcome. The vote was ultimately deemed legitimate, transparent, and technically resilient. Minor disruptions, such as a temporary malfunction in the eDoklady digital ID system, were quickly resolved, and extensive media coverage, civil society monitoring, and calm public engagement helped maintain confidence throughout the process.


Despite this resilience, the election highlighted persistent vulnerabilities in the Czech information space. Russian-affiliated outlets, proxy websites, and cross-platform amplification networks continued to launder and spread narratives aimed at eroding support for Ukraine, weakening EU and NATO alignment, and fueling polarization over issues such as migration and defense. Structural weaknesses, including delays in implementing the Digital Services Act, limited oversight of campaign financing, low trust in political institutions, and the frequent domestic amplification of misleading narratives, created fertile ground for FIMI and long-term erosion of trust. High political polarization and uncertainty around new electoral procedures, especially postal voting, further intensified susceptibility to disinformation.


The election results reflected strong democratic engagement, with a turnout of nearly 69%. Andrej Babiš’s ANO party secured a commanding lead and entered government negotiations, while state institutions, civil society, and international partners demonstrated effective, though still incomplete, cooperation in countering FIMI. Looking forward, the report stresses the need for sustained investment in digital resilience, transparent media and advertising rules, cross-border intelligence sharing, and platform accountability.


Source: FIMI-ISAC, ‘Assessment of Foreign Information Manipulation and Interference in the 2025 Czech Parliamentary Election’ Available Online: https://fimi-isac.org/wp-content/uploads/2025/11/FRT-24_Globsec_Czech-Election-Report_FINAL.pdf



Disinformation as a Strategic Threat During COVID-19

As revealed in a report by the NATO Association of Canada, the COVID-19 pandemic unleashed an “infodemic,” in which misinformation and disinformation spread faster than the virus itself. False narratives undermined trust in public health measures, eroded compliance with safety protocols, and fragmented social cohesion. In Canada, these effects were intensified by existing social inequities. Marginalized communities, already facing barriers such as discrimination, housing insecurity, and low health literacy, were more vulnerable to online conspiracy theories about vaccines and masks. Anti-mandate movements exploited public fatigue and emotional reactions, fueling protests like the Freedom Convoy, while widespread vaccine falsehoods reached millions of Canadians.


Disinformation thrived amid inconsistent government messaging and declining trust in institutions. Confusing or contradictory policy announcements created a vacuum that false narratives quickly filled. Research showed that facts alone cannot counter misinformation; they must be delivered with clarity, empathy, and reliability. When official communication fails, disinformation becomes the default explanation, weaponizing public fear and frustration. The pandemic also highlighted disinformation as a geopolitical threat. State and non-state actors used coordinated campaigns to undermine confidence in democratic systems, portraying organizations like NATO as incompetent or malicious.


Source: NATO Association of Canada, ‘Disinformation and Public Health in the Post-Pandemic Era: What COVID-19 Taught Canada and NATO About Resilience’ Available Online: https://natoassociation.ca/disinformation-and-public-health-in-the-post-pandemic-era-what-covid-19-taught-canada-and-nato-about-resilience/



Global Poll Shows Disinformation Among Top Security Fears as U.S. Influence Declines

According to an IPSOS article, global public opinion data from the latest Ipsos World Affairs polling shows that disinformation remains one of the most widely recognized threats, with 77% across 30 countries viewing deliberate attempts to manipulate public opinion as a real and immediate danger, matching concerns about hacking for fraud or espionage. This heightened sensitivity to information threats exists alongside growing geopolitical pessimism: only one-third believe the world is improving, and most say it has become more dangerous, even as confidence in maintaining a strong national defense remains high.


The survey also captures shifting perceptions of global influence, with Canada rated the most positive international actor for the tenth consecutive year. At the same time, views of the United States have sharply deteriorated. Only 48% now believe the U.S. will have a positive impact in the coming decade—down dramatically from previous years—and many countries see America as losing global influence compared to China, BRIC states, and CRINK countries. Meanwhile, Iran, Israel, Pakistan, Russia, and Saudi Arabia are viewed least favorably as global influencers. Public attitudes toward national defense and security are similarly mixed; strong majorities support maintaining robust military capabilities, yet only modest support exists for mandatory service, and opinions differ widely across countries. While most respondents believe economic power outweighs military strength in global affairs, the persistent concern about disinformation campaigns and cyber-enabled manipulation underscores the continued salience of non-kinetic threats within the global security landscape.


Source: IPSOS, ‘Disinformation, hacking seen as top threats; Reputation of America drops’



[Appendix - Frameworks to Counter Disinformation]


Addressing Disinformation Through Media Literacy

According to a K-12Dive report, experts emphasized that as students spend more time on digital media, educators must proactively teach media literacy across all subjects. Rather than directly arguing against conspiracy beliefs, teachers should use open-ended, non-judgmental questions that encourage students to reflect on their own thinking. This approach helps students understand how misinformation and disinformation spread and prevents them from becoming defensive.


Media literacy should start with understanding the broader media ecosystem, including the attention economy and the incentives behind digital content. Educators are encouraged to help students analyze who benefits from a post, why it was created, and whether it deserves engagement. Museums and institutions like the 9/11 Memorial & Museum provide valuable resources and promote critical thinking strategies, such as Occam’s razor and the investigation of whether evidence could change one’s view. The rise of artificial intelligence intensifies the challenge, as multiple sources may reinforce false ideas. Students should examine diverse sources rather than focusing intensely on a single post, and they must monitor their emotional reactions, since disinformation often exploits strong feelings to bypass rational thought.


Source: K-12Dive, ‘How to combat misinformation and disinformation in the classroom’



[CRC Glossary]


The modern Information Environment is projected only to grow in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.










