
CRC Weekly: Cyber-based hostile influence campaigns 22nd - 28th December 2025

  • Writer: CRC
  • Jan 1, 2026
  • 16 min read

Updated: Jan 29

Cover Image - Text: Weekly Media Update: Information Operations


[Introduction]


Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.


During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.



[Contents]



[Social Media Platforms]

Meta


[State Actors]

Russia


The War in Ukraine

  • The Expansion of Russia’s AI-Driven Disinformation Campaigns
  • Disinformation Campaigns Targeting Ukrainian Refugees
  • False Claims About Microchipped Vaccines in Ukraine
  • Coordinated Disinformation Fuels Anti-Ukrainian Sentiment in Poland

China

  • China’s Cognitive Warfare Strategy Seeks Narrative Dominance Through Disinformation
  • China Index Events Highlight PRC Influence

[AI Related Articles]


[Appendix - Frameworks to Counter Disinformation]



[Report Highlights]






[Social Media Platforms]


Meta

Meta Report Warns of Persistent State-Linked Disinformation and AI-Accelerated Influence Operations

Meta’s Q2–Q3 2025 Adversarial Threat Report outlines how hostile actors continue to exploit online platforms through coordinated inauthentic behavior (CIB), fraud, and AI-enabled manipulation, with a particular focus on state-linked influence operations and transnational criminal networks. The report documents how adversaries use false identities, deceptive media outlets, and coordinated amplification to mislead audiences, obscure attribution, and advance geopolitical or financial objectives, while deliberately adapting tactics to evade detection.


The report highlights several state-linked influence campaigns, including a long-running Iranian operation known as Endless Mayfly, attributed to the International Union of Virtual Media, which used fake news sites, impersonated journalists, and cross-platform amplification to launder misleading narratives into public discourse. It also details Russian-linked operations that increasingly rely on authentic local intermediaries, particularly in sub-Saharan Africa, where freelancers and local media managers were covertly used to promote pro-Russian narratives and undermine Western partnerships—blurring the line between authentic and inauthentic behavior.


In parallel, Meta reports that AI has become an accelerant rather than a fundamentally new threat, enabling adversaries to scale deception through AI-generated personas, content, translations, and impersonation while maintaining familiar behavioral patterns. These techniques are used across influence operations and scams, including celebrity impersonation and the production of fake news, reinforcing the need for behavior-based detection, cross-sector intelligence sharing, and coordinated defenses to counter evolving disinformation and hybrid threats.


Source: Meta, Adversarial Threat Report: Second–Third Quarter. [online] Published December 2025.


[State Actors]


Russia

Disinformation in Putin’s “Direct Line” Broadcast

According to an article by EUvsDisinfo, Russia’s annual “Direct Line with Vladimir Putin,” first held in 2001, functions as a central instrument of state-led disinformation. In the broadcast that aired on December 19, Putin created an illusion of openness while legitimizing false narratives at the highest political level. Emotional appeals, personalized anecdotes, and claims of hands-on leadership were used to humanize the president and distract from systemic problems. At the same time, inconvenient realities were downplayed or presented as already resolved.


A key tactic was the management of criticism through false equivalence and projection. By allowing Western journalists to ask questions, the Kremlin sought to showcase what it claimed was “free debate,” while dismissing criticism as biased and hostile. Putin repeated long-standing disinformation narratives about Ukraine, portraying Russia as seeking peace, shifting blame for the war onto Ukraine, the EU, and NATO, and insisting that Russia did not start the conflict. He also relied on cherry-picked facts and outright denial to claim military success and economic resilience, despite sanctions and battlefield realities. Assertions that Ukraine lacks the capacity to resist, that Russian advances are inevitable, and that Western actions amount to “robbery” or provocation were used to reinforce a victimhood narrative.


Source: EUvsDisinfo, Controlled questions, crafted lies: inside Putin’s year-end messaging machine. [online] Published 23 December 2025. Available at: https://euvsdisinfo.eu/controlled-questions-crafted-lies-inside-putins-year-end-messaging-machine/


Russia’s Historical Disinformation Campaign Against Finland

A report by the Psychological Defence Agency described a sustained Kremlin-led disinformation campaign that uses falsified history as a strategic tool against Finland. Russian authorities and state-controlled media portray Finland as a Nazi collaborator responsible for war crimes, reviving and distorting Second World War narratives to undermine Finland’s legitimacy, damage its international reputation, and justify future pressure or aggression. This historical revisionism is embedded in Russia’s broader hybrid warfare, in which disinformation, fear-mongering, and victim narratives are coordinated from the highest political level and amplified through state media such as “TASS”.


A central element of the campaign is “lawfare”: Russian courts and prosecutors issue rulings and indictments that falsely accuse Finland of genocide and complicity in the siege of Leningrad. These legal actions create a façade of legitimacy for potential reparations or territorial claims. They are reinforced by physical acts, including the vandalism and removal of Finnish memorials on Russian territory, as well as staged provocations involving nationalist activists. The campaign follows an escalatory logic in which rhetoric, legal measures, and physical acts combine to create conditions for renewed aggression.


The report concluded that history is no longer treated as an academic debate but as an active weapon of security policy, used to mobilize the Russian population, portray Russia as a victim, and frame Finland as an enemy. Countering this threat requires coordinated, long-term, and fact-based international responses, as fragmented or reactive approaches allow disinformation to set the terms of historical and political debate.


Source: Psychological Defence Agency Report Series, P. Oksanen, History as a Battlefield: Russia’s Information War Against Finland 2025. [online] Published 2025.


The War in Ukraine


The Expansion of Russia’s AI-Driven Disinformation Campaigns

NewsGuard has debunked 400 false claims related to the Russia–Ukraine war promoted by Russian actors. The most prominent of these operations is Storm-1516, which Microsoft and NewsGuard have identified as the fastest-growing and most prolific campaign. Rather than relying on sanctioned outlets like RT and Sputnik, the Kremlin is increasingly using unofficial, harder-to-trace networks to spread false narratives. Storm-1516 specialises in accusing Ukraine and its allies of corruption and criminal activity, using AI-generated news websites, deepfake videos, and fake social media accounts.


These false claims often achieve substantial reach, drawing millions of views across platforms such as X and Telegram. The campaign has rapidly scaled its output since 2023, surpassing not only other influence operations but also RT and Sputnik combined in the number of false claims spread in 2025. Similar operations, such as Matryoshka and the Foundation to Battle Injustice, further amplify disinformation through fake media outlets. A defining feature of this new phase is the use of AI to increase speed, scale, and credibility. Storm-1516 has impersonated trusted news organisations, fabricated official-looking videos, and exploited real corruption investigations to inject false claims about President Volodymyr Zelensky.


Source: NewsGuard Reality Check, E. Maitland & A. Lee, 400 and Counting: A Russian Influence Operation Overtakes Official State Media in Spreading Russia-Ukraine False Claims. [online] Published 22 December 2025.



Disinformation Campaigns Targeting Ukrainian Refugees

An article published by StopFake outlines a long-standing Russian disinformation strategy that depicts Ukrainians abroad as traitors or threats—an approach with roots in Soviet-era propaganda that intensified after Russia’s full-scale invasion of Ukraine. As millions of Ukrainians were forced to flee, Russian information attacks increasingly shifted blame onto the victims of the war, using fabricated stories to distort public perception. StopFake reports a sharp rise in such narratives since 2022, describing a sustained effort to damage Ukraine’s image and weaken international support.


Russian disinformation about Ukrainian refugees tends to follow three core narratives: portraying them as criminals, as an economic and social burden, or as people who will be forcibly deported and sent back to fight. These claims commonly rely on invented stories, manipulated photos and videos, and forged websites that impersonate trusted Western media—a tactic associated with the “Doppelganger” operation. Refugees are falsely accused of violence, spreading disease, abusing welfare systems, or receiving preferential treatment, and many of these stories are amplified by far-right actors and fringe outlets to gain political traction. Over time, the emphasis of these narratives has shifted from petty crime stories in 2022 to economic strain and “war fatigue” messaging in 2023, and then to fears of forced mobilisation in 2024–2025. Countries hosting large numbers of Ukrainians, especially Poland and Germany, have been frequent targets, with disinformation shaping public debate and, in some cases, election campaigning.


Source: StopFake, Как Россия ведет войну против украинских беженцев. [online] Published 23 December 2025. Available at: https://www.stopfake.org/ru/kak-rossiya-vedet-vojnu-protiv-ukrainskih-bezhentsev/



False Claims About Microchipped Vaccines in Ukraine

NewsGuard’s analysis described a fabricated “investigation” by the so-called Foundation to Battle Injustice, which claimed that Ukrainian authorities are injecting soldiers with COVID-19 vaccines containing microchips that can remotely paralyse or kill them. Widely circulated by pro-Kremlin social media accounts in December 2025, the allegation asserted that President Volodymyr Zelensky had issued secret orders and cited an unnamed “leak” as evidence.


In reality, the Foundation to Battle Injustice is a known front with documented ties to pro-Kremlin influence efforts and a record of spreading disinformation about Ukraine. There is no evidence that Ukraine mandates COVID-19 vaccination for soldiers, nor that vaccines can contain microchips. Medical experts and health authorities have repeatedly stated that such technology does not exist, and that vaccine components are far smaller than any microchip. Ukrainian officials have explicitly denied the claim, calling it false. In addition, a member of the press department of the Ukrainian President’s Office told NewsGuard in an email: “This information is fake.”


Source: NewsGuard Reality Check, I. Blachez, Russia Fabricates Report that Ukraine Injects Soldiers with Vaccines Containing Microchips that Can Kill Them for Disobeying Orders. [online] Published 23 December 2025.


Coordinated Disinformation Fuels Anti-Ukrainian Sentiment in Poland

A DFRLab report detailed a surge in anti-Ukrainian disinformation in Poland following two triggering events: President Karol Nawrocki’s veto of expanded benefits for Ukrainian refugees and Russian drone incursions into Polish airspace. Both events were exploited by coordinated online networks that pushed misleading narratives portraying Ukrainians as economic burdens and security threats, while downplaying or denying Russia’s role. These information attacks sought to fracture Polish public support for Ukraine and weaken the Polish–Ukrainian alliance at a critical moment.


Despite robust economic data showing that most Ukrainians in Poland are employed and contribute far more in taxes than they receive in benefits, false claims depicting refugees as “freeloaders,” “parasites,” or criminals spread widely on social media. Investigations identified coordinated amplification by networks of Facebook pages, fringe groups, clickbait websites, and monetized platforms, often using synchronized posting, misleading page names, and AI-generated videos. These narratives were also injected into non-political spaces—such as hobby and lifestyle groups—making them harder to detect and counter.


Disinformation intensified further after Russian drones entered Polish airspace, with conspiratorial narratives falsely blaming Ukraine for staging provocations to drag Poland and NATO into the war. These claims circulated across Facebook, TikTok, and the comment sections of major Polish media outlets, often originating from suspicious or inauthentic accounts. One example involved the site udostepnij[.]pl and a network of Facebook pages with innocuous, non-political names that simultaneously shared identical posts about a deportation case, exaggerating and misrepresenting the facts. With hundreds of thousands of followers, these pages systematically amplified similar content, monetized audience attention through ads and YouTube, and redirected users to clusters of right-wing websites.

Source: DFRLab, G. Gigitashvili & S. Gelava, How social media manipulation fuels anti-Ukraine sentiment in Poland. [online] Published 21 December 2025. Available at: https://dfrlab.org/2025/12/21/how-social-media-manipulation


China

China’s Cognitive Warfare Strategy Seeks Narrative Dominance Through Disinformation

The Cyfluence Research Center (CRC) has published a piece by Athena Tong that casts China’s doctrine of cognitive warfare as a tightly integrated form of hostile influence—melding physical actions with sustained narrative manipulation to reshape perceptions of sovereignty and legitimacy across East Asia. Maritime manoeuvres in the South China Sea, the East China Sea, and around Taiwan are embedded in coordinated foreign information manipulation and interference (FIMI) campaigns, using state media, legal instruments, environmental claims, and diplomatic messaging to portray Beijing as restrained, lawful, and environmentally responsible while framing rivals as provocateurs or foreign proxies.


The piece explains how maritime incidents are systematically distorted to advance strategic narratives. Collisions, water-cannon attacks, and blockades are rapidly reframed through manipulated images, recycled footage, and fabricated environmental scandals—such as claims of ecological harm caused by Philippine forces—to blur accountability and confuse audiences. This narrative engineering is reinforced through lawfare, including the designation of disputed features such as Scarborough Shoal as Chinese “nature reserves” and selective appeals to international law at the United Nations to recast sovereignty disputes as matters of development and stewardship rather than territorial rights.


Beyond the South China Sea, Beijing employs historical revisionism, digital disinformation, and psychological operations to erode Japanese and Taiwanese sovereignty narratives. Coordinated social media campaigns amplify divisive local grievances in Okinawa, seed fake grassroots movements, and link disputes over the Senkaku Islands to Taiwan’s status, while misrepresenting UN Resolution 2758 to legitimise claims over Taiwan. Collectively, these tactics constitute a comprehensive political-warfare playbook that combines media warfare, lawfare, and psychological pressure to normalise grey-zone aggression, weaken the adversary's resolve, and establish narrative dominance without direct military confrontation.


Source: Cyfluence Research, A. Tong, Cognitive Warfare Masterclass: China’s Doctrine for Strategic Narrative Superiority. [online] Published 29 December 2025. Available at: https://www.cyfluence-research.org/post/cognitive-warfare-masterclass-china-s-doctrine-for-strategic-narrative-superiority


China Index Events Highlight PRC Influence

As published by DoubleThink Lab, the China Index Regional Event Fund, launched in 2025, has supported local events worldwide to foster discussion about Chinese influence and disinformation, helping communities better understand the PRC’s global strategies. Events ranged from policy roundtables and workshops for journalists to university sessions for students. In the UK, discussions focused on how the Chinese Communist Party's influence affects higher education, including concerns about surveillance and self-censorship among Chinese students. Meanwhile, Italian forums highlighted the PRC’s impact on politics, media, and democracy in Europe.


In South and Southeast Asia, workshops emphasised the role of online information operations and PRC-backed initiatives in shaping public opinion and reinforcing control. At universities in India, Thailand, and Pakistan, participants learned to detect patterns of influence, assess risks linked to initiatives such as the China–Pakistan Economic Corridor, and understand how surveillance technology and economic dependencies can be used to shape behaviour. Discussions highlighted how disinformation and selective narratives are deployed to manipulate populations.


Across all events, the China Index facilitated evidence-based discussions on disinformation and broader PRC influence, illustrating how academic research can inform public debate, policymaking, and media coverage. The initiative aimed to strengthen regional understanding of authoritarian digital strategies, raise awareness of state-driven propaganda, and provide practical tools to detect and mitigate the effects of information operations globally.


Source: Medium (Doublethink Lab), From Global Insight to Local Action: The 2025 China Index Regional Event Fund Report. [online] Published 23 December 2025. Available at: https://medium.com/doublethinklab/from-global-insight-to-local-action-the-2025-china-index-regional-event-fund-report-f7f1bacd6800

[AI Related Articles]


AI and LLM Disinformation Risks

Concerns about digital misinformation have flared up again following Meta’s decision to wind down professional fact-checking, a development covered by TechXplore (for further information, see the W02 January 2025 Cyfluence Report here). Still, the larger challenge lies in the subtler influence of AI large language models (LLMs) on public perception. These models increasingly generate news summaries, headlines, and content across platforms before traditional moderation can respond, shaping not only what information people see but also how it is framed. Even when the underlying facts are accurate, LLMs can foreground certain viewpoints while downplaying others, creating communication bias that quietly steers opinions and amplifies particular narratives at the expense of others. This bias is distinct from outright disinformation, but it can produce similar effects on public understanding.


Research suggests that LLMs exhibit persona-based steerability, aligning outputs with perceived user expectations. For example, the same query about legislation may yield responses tailored to an environmental activist versus a business owner—emphasising different angles while remaining factually accurate. When a small number of dominant companies control most widely used AI models, even minor biases can scale rapidly, shaping public discourse without users fully realizing the extent of that influence.


Current regulatory frameworks, such as the EU AI Act and the Digital Services Act, prioritise transparency, pre-launch audits, and post-deployment oversight. However, they remain limited in their ability to address subtle communication bias. Effective mitigation, therefore, requires more than regulation: fostering competition, transparency, and meaningful user participation is essential to ensuring that AI systems reflect diverse perspectives.


Source: TechXplore, A. Kuenzler, People are getting their news from AI—and it's altering their views. [online] Published 22 December 2025. Available at: https://techxplore.com/news/2025-12-people-news-ai-views.html


[Appendix - Frameworks to Counter Disinformation]


EU Sanctions Target Diana Panchenko

EUvsDisinfo notes that the European Union has introduced a new round of sanctions aimed at individuals and entities linked to coordinated Russian disinformation and cyber operations (for further information, see the W51 December 2025 Cyfluence Report here). One of the listed individuals is Diana Panchenko, a Ukrainian-born journalist whom EUvsDisinfo describes as promoting pro-Kremlin, anti-Ukrainian, and anti-NATO narratives while presenting this material as “investigative journalism” in ways intended to mislead audiences and lend credibility to false claims.


Her output blends the aesthetics of professional news production with content filmed in Russian-occupied territories, amplifying narratives aligned with Russia’s war messaging. She uses platforms such as Telegram and X to reach large audiences, including English-speaking users. She relies on techniques often associated with influence operations: clickbait framing, coordinated amplification, and the systematic promotion of pro-Russian accounts, rather than conventional independent reporting. Ukrainian authorities and disinformation researchers have warned that her content contributes to justifications for Russian abuses and undermines Ukraine’s security and stability.


Source: EUvsDisinfo, From “journalism” to FIMI: EU sanctions Diana Panchenko. [online] Published 22 December 2025. Available at: https://euvsdisinfo.eu/from-journalism-to-fimi-eu-sanctions-diana-panchenko/


U.S. Action Targets Foreign Disinformation and Censorship Agents

The U.S. State Department has imposed restrictions on five individuals identified as leading organised efforts to pressure American platforms to censor, demonetise, or suppress viewpoints they oppose. These actors—including radical activists and weaponised NGOs—have advanced foreign-backed censorship campaigns targeting American speakers and companies.


By imposing these restrictions, the State Department aims to prevent foreign disinformation networks from influencing domestic discourse and manipulating online platforms. In some cases, the Department of Homeland Security can initiate removal proceedings under U.S. law against individuals engaged in such activities, underscoring the seriousness with which the U.S. treats extraterritorial disinformation and censorship campaigns. By curbing the reach of these actors, the U.S. seeks to safeguard free expression, protect American companies from coercion, and mitigate the impact of disinformation campaigns.


Source: U.S. Department of State, Announcement of Actions to Combat the Global Censorship-Industrial Complex (Press Statement by Marco Rubio, Secretary of State). [online] Published 23 December 2025. Available at: https://www.state.gov/releases/office-of-the-spokesperson/2025/12/announcement-of-actions-to-combat-the-global-censorship-industrial-complex/


US Judge Blocks Deportation of Anti-Disinformation Advocate

ABC News published that a U.S. judge has temporarily prevented the Trump administration from detaining Imran Ahmed, a British anti-disinformation campaigner and chief executive of the Center for Countering Digital Hate, over visa restrictions aimed at five Europeans accused of targeting U.S. tech companies or censoring speech. Ahmed, a permanent U.S. resident, fears deportation that would separate him from his U.S.-citizen family while he continues his work combating online hate and misinformation.


The visa bans sparked criticism from European governments, which argued that Ahmed’s work enhances online safety by exposing disinformation, hate speech, and child-exploitation content—highlighting the tension between foreign-policy concerns and anti-disinformation efforts. Ahmed filed a lawsuit alleging violations of his free-speech and due-process rights, prompting U.S. District Judge Vernon Broderick to issue a temporary restraining order halting any detention or removal until the case can be heard.


Source: ABC News, US judge blocks deportation of British anti-disinformation activist. [online] Published 26 December 2025. Available at: https://www.abc.net.au/news/2025-12-26/us-barred-from-deporting-campaigner-imran-ahmed/106179794


The WikiSignals Project Strengthens Wikipedia Against Disinformation

MisinfoCon’s coverage of WikiCredCon 2025 highlights a core reality for Wikipedia: credibility depends not only on accurate articles, but on the trustworthiness of the sources behind them. WikiSignals brings together organisations, including Veri.FYI, Factiverse, SimPPL, AfroCrowd, Iffy.news, and the Internet Archive Reference Explorer.


The WikiSignals project is developing tools to help editors assess the reliability of references by compiling data from press associations, fact-checking ratings, domain registration records, media indexes, and lists of predatory or hijacked journals. This resource—now covering 70,000 websites—provides editors with a searchable credibility score for each source, helping prevent disinformation from entering Wikipedia.


WikiSignals also offers a practical tool that lets editors evaluate a URL’s credibility and format it as a Wikipedia citation. The project integrates machine-harvestable data to estimate reliability programmatically, supporting both manual and automated efforts to counter misleading or biased sources. Input from active Wikipedia editors has guided the development of additional resources, including a compendium of favourite editing tools and an expanded index of predatory journals.
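The idea of folding many independent signals (press-association membership, fact-checking ratings, predatory-journal lists) into one searchable score can be illustrated with a short sketch. Note that the signal names, weights, and scoring logic below are purely illustrative assumptions for explanatory purposes; they are not WikiSignals' actual data model or API.

```python
def credibility_score(signals: dict) -> float:
    """Fold independent per-domain signals into a single 0-1 score.

    Illustrative sketch only: signal names and weights are hypothetical,
    not the actual WikiSignals methodology.
    """
    score = 0.5  # neutral prior for an unknown domain

    # Membership in a press association is treated as a positive signal.
    if signals.get("press_association_member"):
        score += 0.2

    # External fact-checking rating, assumed to be on a 0-1 scale;
    # ratings above 0.5 raise the score, below 0.5 lower it.
    rating = signals.get("fact_check_rating")
    if rating is not None:
        score += 0.3 * (rating - 0.5)

    # Appearing on a predatory-journal list is a strong negative signal.
    if signals.get("predatory_journal"):
        score -= 0.4

    # Clamp to the 0-1 range so the score stays interpretable.
    return max(0.0, min(1.0, score))


# Example: a press-association member with a strong fact-check rating
print(credibility_score({"press_association_member": True,
                         "fact_check_rating": 0.9}))
```

A weighted-sum model like this keeps each signal independently auditable, which matters when editors need to explain why a source was flagged.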


Source: MisinfoCon, B. Golding, WikiSignals: Strengthening the Credibility of Wikipedia Citations. [online] Published 24 October 2025. Available at: https://misinfocon.com/wikisignals-strengthening-the-credibility-of-wikipedia-citations-8531d2673099


EU Sanctions Act against Russian Aggression

The European External Action Service (EEAS) announced it has renewed its restrictive measures against Russia for six more months, until 31 July 2026, in response to Moscow’s ongoing destabilization of Ukraine. These sanctions, first imposed in 2014 and significantly expanded after Russia’s 2022 invasion, cover a wide range of sectors, including trade, finance, energy, technology, and luxury goods. The measures include suspending the broadcasting activities and licenses of Kremlin-backed disinformation outlets within the EU, directly targeting coordinated propaganda campaigns aimed at influencing European public opinion. The EU’s sanctions also include asset freezes, travel bans, and restrictions on economic activity in Crimea, Sevastopol, and non-government-controlled areas of Ukraine.


Source: EEAS, Press and information team of the Delegation to Ukraine, Russia’s war of aggression against Ukraine: Council extends economic sanctions for a further 6 months. [online] Published 22 December 2025.


Uzbekistan Strengthens Youth Fact-Checking to Combat Disinformation

A report by the EEAS revealed that on December 25th, 2025, Uzbekistan inaugurated a Fact-Checking Club at the Youth Affairs Agency in Tashkent under the EU-funded “Digital Detectives” project, in collaboration with the “Qizlar Ovozi” (Voices of Girls) initiative. The event emphasized the importance of promoting media literacy, critical thinking, and responsible information consumption among young people, while linking fact-checking to gender equality and inclusive public discourse. Discussions highlighted the societal role of fact-checkers, the dangers of misinformation, and the negative impact of gender stereotypes.


The project has trained 10 new fact-checking trainers and expanded a network of freelance contributors, delivering workshops and regional sessions that reached over 220 participants from schools, universities, and youth centers. Between October and December 2025, participants produced more than 50 fact-checking articles, approximately 20 videos, and multiple awareness campaigns addressing disinformation, cyberbullying, gender equality, and public health. Fact-checking clubs are now being launched in multiple institutions, including the University of Journalism and Mass Communications, and in Nukus, Karakalpakstan.


Source: EEAS, Press and information team of the Delegation to Uzbekistan, EU-Funded Project “Digital Detectives” Launches Fact-Checking Club at the Youth Affairs Agency. [online] Published 25 December 2025. Available at: https://www.eeas.europa.eu/delegations/uzbekistan/eu-funded-project-%E2%80%9Cdigital-detectives%E2%80%9D-launches-fact-checking-club-youth-affairs-agency_en


EU - Western Balkans Conference Strengthens Fight Against Disinformation

According to the EEAS publication, the EU–Western Balkans Media Literacy Conference, held in Skopje on 17 November, convened representatives from across the region, EU Member States, and international partners to address the growing threat of disinformation. The conference emphasized media literacy as a cornerstone of informed citizenship and democracy.


Panels and workshops focused on key challenges in the regional information landscape, including social media’s role in the spread and counteracting of manipulated content, gendered disinformation, digital hate, and the specific vulnerabilities of younger audiences. Sessions also explored how EU policy frameworks, such as the Digital Services Act, can support human-rights-based, whole-of-society approaches to combating misinformation and influence operations. Participants exchanged practical strategies for detecting, mitigating, and preventing disinformation while enhancing institutional capacity.


Source: European External Action Service (EEAS), Press and information team of the EU Delegation to the Republic of North Macedonia, EU–Western Balkans Media Literacy Conference 2025 strengthens regional cooperation to counter disinformation. [online] Published 25 December 2025.

[CRC Glossary]


The modern information environment is projected to continue growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.










