
Cyber-based hostile influence campaigns 19th-25th January 2026

  • Writer: CRC
Cover image: “Weekly Media Update: Information Operations”


[Introduction]


Cyber-based hostile influence campaigns seek to shape the views of target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.


During the last week, we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.



[Contents]



[Social Media Platforms]


X


[State Actors]


Russia 

The War in Ukraine

Iran


[AI-Related Articles]


[Cyfluence Attacks]


[General Reports]



[Appendix - Frameworks to Counter Disinformation]




[Report Highlights]


A report by EU vs Disinfo examined how Russia used cultural heritage as a tool of FIMI, a strategy described as Cultural Heritage Exploitation.








[Social Media Platforms]


X

Grok’s Deepfake Scandal and Its Regulatory Conclusions

Following the latest events regarding Grok, in which the platform allowed users to create realistic fake sexual images (for further information, see the W02 January 2026 Cyfluence Report here), TechXplore reported that Musk has promised UK authorities that the platform will block such capabilities to comply with local law. However, this response appears limited to the UK and came only after investigations were launched.


Current laws in jurisdictions such as New Zealand also struggle to address these problems, particularly when images are fully synthetic. Voluntary codes and light-touch regulation allow platforms to claim compliance while still enabling predictable misuse of AI tools. The article argued that focusing only on criminalizing users comes too late and fails to address the systemic roots of AI-enabled disinformation. Without robust regulation that compels companies to assess risks and implement safeguards, platforms will continue to prioritize speed and engagement over safety.


Source: TechXplore, C. Mudgway & A. Lensen. Sexualized deepfakes on X are a sign of things to come. NZ law is already way behind. [online] Published 20 January 2026. Available at: https://techxplore.com/news/2026-01-sexualized-deepfakes-nz-law.html


[State Actors]


Russia

Pro-Russian Monuments Were Used to Manipulate History and Justify War

A report by EU vs Disinfo examined how Russia used cultural heritage as a tool of FIMI, in a strategy described as Cultural Heritage Exploitation (CHX). Through CHX, pro-Kremlin historical narratives are deliberately embedded in physical cultural objects, especially monuments, to legitimize Russia’s war against Ukraine and its territorial claims. These monuments promote recurring propaganda themes, including claims that Ukraine is controlled by Nazis, that Russian-speaking communities are under threat, and that Ukrainian statehood is artificial.


Thousands of new pro-Russian monuments have been erected in Russia, occupied Ukrainian territories, and other regions, often placed in highly visible public spaces such as parks, schools, and religious sites. Research showed these monuments are not isolated acts but part of a coordinated system involving Kremlin-linked monument manufacturers, the Russian military, the Russian Orthodox Church, and local authorities. Beyond their physical presence, these monuments have been digitally amplified through state media, diplomatic channels, exhibitions abroad, and social media.


Source: EU vs Disinfo, F. Greenland. Built to lie: how new pro-Russian monuments exploit cultural heritage. [online] Published 20 January 2026. Available at: https://euvsdisinfo.eu/built-to-lie-how-new-pro-russian-monuments-exploit-cultural-heritage/


Russian Hacktivists Increased Disruptive Cyber Pressure on UK Organizations

Infosecurity Magazine reported that UK authorities warned that Russian-aligned hacktivist groups had intensified disruptive cyber activity against UK organizations, with local government bodies and critical national infrastructure as key targets. A new alert from the UK National Cyber Security Centre (NCSC) stated that these actors focused primarily on DoS attacks designed to overwhelm websites and online systems, temporarily taking services offline and limiting public access to essential functions. The NCSC noted that the attacks are ideologically motivated rather than financially driven, linked to perceived Western support for Ukraine.


One prominent actor, NoName057(16), has been active since 2022 and has repeatedly targeted UK local authorities, NATO members, and other European states perceived as opposing Russian geopolitical interests. The group coordinated activities via Telegram channels and distributed tools and tactics on platforms such as GitHub, enabling supporters to participate in coordinated disruption efforts. Gary Barlet, Public Sector CTO at Illumio, stated that modern supply chains and critical infrastructure are deeply interconnected, making disruption easier than ever. The NCSC recommends that organizations review their DoS protections, improve resilience, and engage with its heightened cyber threat collection efforts.


The Record also noted that the warning echoed a separate advisory issued in December by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), which said pro-Russian hacktivists were conducting opportunistic, low-impact attacks against global critical infrastructure. While these operations are typically less sophisticated than those conducted by state-sponsored hackers, U.S. officials cautioned that they can still cause real-world disruption and, in some cases, physical damage. Sectors targeted have included water and food systems, agriculture, and energy.


Sources:


Disinformation in Lavrov’s January Press Conference 

In its analysis of Russia’s information tactics, EU vs Disinfo framed Foreign Minister Sergey Lavrov’s January 2026 press conference as less a diplomatic briefing and more a coordinated exercise in foreign information manipulation and interference. Over several hours, he recycled well-known Kremlin narratives aimed at undermining trust in the EU, portraying Europe as aggressive and unstable, and presenting Russia as a besieged but lawful actor. Personal attacks on European leaders, rather than engagement with policy, were used to undermine public confidence in democratic decision-making and to frame EU leadership as reckless and dangerous.


Lavrov relied heavily on distortion and false equivalence. He invoked Nazi-era imagery to discredit Germany’s defensive military posture, falsely depicted the EU as a coercive and illegitimate institution, and repeated long-debunked claims about Ukraine being run by Nazis, repressing Russian culture, and provoking the war. Beyond the EU, Lavrov targeted the Baltic states and Moldova with narratives designed to delegitimise their security concerns and European aspirations, while also adopting anti-colonial language to justify Russia’s own territorial expansion in Ukraine.


Source: EU vs Disinfo. Lavrov’s 2026 presser: a three-hour FIMI offensive against Europe and its leaders. [online] Published 23 January 2026. Available at: https://euvsdisinfo.eu/lavrovs-2026-presser-a-three-hour-fimi-offensive-against-europe-and-its-leaders/



Kremlin Disinformation Mechanisms and the Limits of Debunking

Research by the NATO Strategic Communications Centre of Excellence shows that Russian propaganda thrives on contradiction, presenting Russia at once as a defender of conservative Christian Europe and as an anti-colonial champion of the Global South. Studies focusing on Bulgaria show that exposing these logical inconsistencies is not enough: disinformation succeeds not because it is coherent, but because it resonates emotionally with specific social groups. Rather than persuading through facts, Kremlin narratives exploit grievances, identity politics, and distrust of liberal elites, making simple fact-checking an inadequate response.


A key disinformation technique is what researchers call “confirmation through denial”, where denials are framed in ways that indirectly reinforce threats or culpability. Kremlin messaging is also rarely openly pro-Russian; instead, it embeds itself within national-populist and anti-democratic movements across Europe. Its core goal is not to promote Russia, but to sow polarisation, weaken trust in democratic institutions, and link toxic identity narratives to political actors hostile to liberal democracy and European unity. The Bulgarian case also highlights how social media algorithms, opaque networks of websites, and resistance to regulation amplify disinformation, while efforts to govern the digital space are reframed as “censorship”.


Source: NATO Strategic Communications Centre of Excellence, M. Iakimova & B. Dimitrova & V. Dimitrova & O. Nanova & L. Donchev & V. Valkanov & D. Vatsov, Defence Strategic Communications, Volume 16. Bulgaria as a Russian Propaganda Laboratory. [online] Published 2025. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/DSC_NATO_journal_V16_05_Milena-Iakimova.pdf?zoom=page-fit



The War in Ukraine

Disinformation Targeting Greenland and Ukraine Aid

As stated in an article by NewsGuard, pro-Kremlin sources are spreading fabricated videos that imitate trusted European and international media to claim that support for Ukraine has left Europe too weak to defend Greenland from supposed U.S. annexation plans. These fake reports exploit recent remarks by U.S. President Donald Trump about acquiring Greenland, using them as a hook to advance disinformation that portrays European assistance to Ukraine as reckless and self-damaging.


The false content includes manipulated videos allegedly from Danish broadcaster TV2 Nord, France’s Le Figaro, and the U.S. think tank Institute for the Study of War. In each case, the videos were entirely fabricated. The outlets never published the reports, and the individuals quoted never made the statements attributed to them. Linguistic inconsistencies, direct denials from media editors, and the absence of any credible records confirm that the material is fake. Despite this, the claims were widely amplified by the Russian state and pro-Kremlin media networks.


Source: NewsGuard, E. Maitland. Pro-Kremlin Sources Impersonate Credible Media to Suggest Western Support for Ukraine Harms Europe and Greenland. [online] Published 21 January 2026. Available at: https://www.newsguardrealitycheck.com/p/russian-fakes-make-the-most-of-europes


Disinformation Around Russia’s Winter Attacks on Ukraine

EU vs Disinfo reported that as Russia intensified winter strikes on Ukraine’s energy infrastructure, causing blackouts and heating shortages during extreme cold, pro-Kremlin information channels launched a parallel disinformation campaign. While Russian missiles and drones were the direct cause of the outages, these narratives sought to obscure responsibility by downplaying the attacks and shifting blame onto Ukraine’s leadership for “refusing peace.”


A core tactic has been the inversion of responsibility. Pro-Russian commentators express performative sympathy for civilians while portraying the destruction of energy infrastructure as a legitimate tool to force Ukraine into surrender. By describing the war as a “technical” or “energy” problem and avoiding mention of Russian strikes, these narratives whitewash the causes of the crisis and present capitulation on Moscow’s terms as the only path to relief. At the same time, propaganda outlets openly weaponize hardship, celebrating the impact of cold and blackouts as pressure on Ukrainian society. Western reporting on civilian suffering is selectively quoted or distorted to suggest that Ukrainians are losing the will to resist and are ready to accept territorial concessions.


Source: EU vs Disinfo. Weaponising winter: how pro-Russian outlets justify strikes against Ukrainian energy infrastructure. [online] Published 23 January 2026. Available at: https://euvsdisinfo.eu/weaponising-winter-how-pro-russian-outlets-justify-strikes-against-ukrainian-energy-infrastructure/


Iran

Iranian State TV Feed Reportedly Hijacked to Air Anti-Regime Messages

The Record stated that several Iranian state television channels were briefly hijacked on 18 January 2026, interrupting regular programming to broadcast protest footage and anti-regime messages. The affected channels, transmitted via the Badr satellite and used by Iran’s state broadcaster for provincial stations, reportedly aired Farsi-language calls for continued demonstrations, footage of solidarity rallies abroad, and statements attributed to international supporters.


The broadcast also featured Reza Pahlavi, the U.S.-based son of Iran’s last shah, urging further protests and calling on Iran’s military and security forces to back demonstrators. Clips of the incident were shared by Iran International, local media, and Pahlavi’s media team, though the footage could not be independently verified. Reports indicate that the unauthorized transmission lasted approximately 10 minutes, and neither the group responsible nor Iranian authorities have publicly commented on the incident. The apparent breach came amid unrest that began in late December 2025, following a severe economic downturn marked by high inflation and rising food prices. Protests have increasingly targeted alleged government corruption and mismanagement, with some demonstrators calling for political change. The government also imposed a near-total shutdown of internet and mobile communications for nearly two weeks, though restrictions may soon be eased.


Source: The Record, D. Antoniuk. Iranian state TV feed reportedly hijacked to air anti-regime messages. [online] Published 19 January 2026. Available at: https://therecord.media/iran-state-television-reported-hack-opposition


[AI-Related Articles]


The Silicon Gaze and Geographic Bias in AI Systems

A new analysis by Oxford and Kentucky researchers introduced the concept of the “silicon gaze” to explain how large language models (LLMs), such as ChatGPT, systematically reproduce and amplify long-standing spatial inequalities. Based on a large-scale audit of more than 20 million geographically framed queries, the study showed that bias in generative AI is not a technical flaw that can be easily corrected, but a structural feature rooted in historically uneven data production, model design, and platform power. Because LLMs are trained primarily on English-language and Global North–centric sources, they tend to privilege already visible, wealthy, and well-documented places while rendering others vague, stereotyped, or invisible.


The authors developed a five-part typology to describe how these biases operate: availability bias (uneven data coverage), pattern bias (reliance on frequent language associations), averaging bias (flattening complexity into dominant narratives), trope bias (recycling cultural stereotypes), and proxy bias (substituting measurable indicators for subjective qualities). Together, these mechanisms shape how AI evaluates places across scales, from countries to neighborhoods, often reinforcing racial, economic, and geographic hierarchies. Rankings that appear neutral or authoritative, such as those that label places as “smarter”, “happier”, or “more cultured”, frequently reflect dominant media narratives and global metrics rather than lived realities.


The report concluded that addressing these issues requires moving beyond technical debiasing toward a power-aware, relational understanding of AI systems. Bias in LLMs reflects centuries of unequal knowledge production and cannot be solved by more data alone. Instead, the authors call for greater transparency, independent auditing, stronger governance, and critical literacy among users to question what, and who, AI systems leave out. Without such shifts, generative AI risks hard-coding historical inequalities into everyday decision-making, shaping how places are imagined, valued, and acted upon in the future.


Source: SAGE Journals, F. W. Kerche & M. Zook & M. Graham. The silicon gaze: A typology of biases and inequality in LLMs through the lens of place. [online] Published 20 January 2026. Available at: https://journals.sagepub.com/doi/10.1177/29768624251408919


Realistic AI Deepfake Videos Evade Chatbot Detection

NewsGuard testing demonstrated that OpenAI’s video generator Sora can produce fake videos so convincing that both humans and leading AI chatbots struggle to identify them as false. When Sora videos had their watermarks removed, ChatGPT, Grok, and Gemini failed to recognize them as AI-generated in most cases, often confidently describing fabricated scenes as real and supported by “news sources.” This creates a serious disinformation risk, as realistic AI videos can easily circulate without clear signals that they are fake.


Even when Sora’s watermark was visible, detection was inconsistent. While watermarks improved accuracy, ChatGPT and Grok still misidentified some videos, and all three systems sometimes treated fabricated footage as legitimate reporting. Compounding the problem, watermark removal tools are widely available, making it easy to strip away the main visual indicator that a video is AI-generated. As a result, false videos promoting provably untrue claims can spread while appearing credible to both users and automated systems. The tests also revealed a broader transparency issue. The chatbots rarely warned users that they lack reliable AI-detection capabilities and instead often expressed unwarranted confidence in their incorrect assessments.


Source: NewsGuard, I. Chomnalez & L. Marchl. AI Fools Itself: Top Chatbots Don’t Recognize AI-Generated Videos. [online] Published 22 January 2026. Available at: https://www.newsguardtech.com/special-reports/top-ai-chatbots-dont-recognize-ai-generated-videos/


New Influence Risks Follow OpenAI’s Move to Ads

As TechXplore highlights, OpenAI has raised new influence risks by announcing plans to introduce advertising in ChatGPT in the United States, showing ads to users of the free version and the low-cost Go tier while excluding Pro, Business, and Enterprise subscribers. The company said ads will be clearly separated from chatbot responses, will not affect answers, and will respect user privacy, including opt-outs for personalized ads and bans on ads for minors or around sensitive topics. Still, the move has raised concerns among some users.


The key question is whether OpenAI’s voluntary safeguards will hold once advertising becomes central to its business. Running large-scale AI systems is extremely costly, and OpenAI expects significant losses over the coming years. History has shown that targeted advertising is the most scalable way to monetize large user bases, as seen with Google and Facebook. Experience has also shown how commercial pressure can erode self-imposed guardrails, leading to harmful outcomes such as political manipulation, data misuse, and the promotion of damaging products.


Concerns are heightened because chatbots are used in more personal and intimate ways than social media. Ads placed alongside private, trust-based conversations may exert a stronger influence on user decisions, even if formally separated from AI responses. The report argued that this creates a structural conflict between public interest and advertising incentives, raising the risk of manipulation by advertisers, scammers, or political actors.


Source: TechXplore, R. F. Ciriello & K. Backholer. OpenAI will put ads in ChatGPT, opening a new door for dangerous influence. [online] Published 25 January 2026. Available at: https://techxplore.com/news/2026-01-openai-ads-chatgpt-door-dangerous.html


[Cyfluence Attacks]


Sandworm-Linked Cyberattack Targets Poland’s Energy Infrastructure

In an Infosecurity Magazine article, ESET attributed a coordinated cyberattack targeting Poland’s energy infrastructure on 29–30 December 2025 to the Russia-aligned APT group Sandworm (also tracked as UAC-0113, APT44, and Seashell Blizzard), which is believed to be linked to Russia’s military intelligence service, the GRU. ESET reported that the attackers deployed a data-wiping malware strain known as DynoWiper, though the company stated it was not aware of any successful disruption to energy services. ESET assessed the attribution with medium confidence, citing a strong overlap between the malware’s behavior and Sandworm’s previous wiper-related tactics, techniques, and procedures.


The incident remains under investigation, but ESET suggested the timing may have been deliberate, coinciding with the 10th anniversary of Sandworm’s 2015 cyberattack on Ukraine’s power grid. Polish Prime Minister Donald Tusk stated that Poland successfully repelled the late-2025 attack and that critical infrastructure was not threatened, while also emphasizing the government’s push to finalize the National Cybersecurity System Act, which implements NIS2 to strengthen risk management, IT/OT security, and incident response. The report notes that the attack apparently targeted two combined heat and power plants and a renewable energy system, reflecting continued Russian-linked focus on energy assets as a high-impact target set.


Source: Infosecurity Magazine, P. Muncaster. Wiper Attack on Polish Power Grid Linked to Russia’s Sandworm. [online] Published 26 January 2026. Available at: https://www.infosecurity-magazine.com/news/wiper-attack-polish-power-grid/


[General Reports]


Anti-Israel Hoaxes Targeted Israeli Citizens Abroad

According to a report by NewsGuard's Reality Check, since March 2025, anti-Israel disinformation has increasingly shifted from focusing on the war in Gaza to targeting Israeli citizens abroad on a personal level. At least eight false narratives have circulated widely on social media, particularly on X, collectively generating more than 16 million views. Several viral posts falsely claimed that the Russian government has begun deporting all Israeli citizens from Russia, and that Japan has banned Israeli tourists from entering the country. In reality, no such policies exist, and none of the claims were supported by official statements or credible media reporting. Similar false narratives have also referenced supposed travel bans or punitive measures targeting Israelis in China, France, Spain, and South Korea.


Other hoaxes alleged criminal behavior by Israeli tourists abroad, including a widely shared claim that Israelis were involved in an attempted rape in Thailand. Thai authorities confirmed that the suspects were not Israeli. Another narrative falsely claimed that Israeli tourists set fire to a Christmas tree in China. NewsGuard sent an email to X, the platform on which these false claims have spread most, regarding these findings but did not receive a response.


Source: NewsGuard’s Reality Check, M. Calamaio & G. Pozzi. False Claims Target Israeli Citizens Abroad with Fabricated Travel Bans, Deportations, and Misconduct Allegations. [online] Available at: https://www.newsguardrealitycheck.com/p/anti-israel-hoaxes-get-personal


Old Military Exercise Footage Misrepresented as US Seizure of Russia-Linked Tanker

Yahoo News described how two videos widely shared on social media and presented as footage of a recent US boarding of a Russia-linked oil tanker were actually recordings of earlier US military exercises. Reverse-image and keyword searches traced one clip to a casualty-evacuation drill during Exercise Balikatan 25 in Subic Bay, and the other to a SEAL demonstration during the Titans of the Sea Presidential Review. The circulating videos, however, differ from the grainy footage of the tanker's seizure published by DVIDS and by US Department of Homeland Security Secretary Kristi Noem on her official X account, and archival checks showed the viral material predated the operation by months.


The mis-captioned videos spread across X, TikTok, Instagram, and other platforms shortly after U.S. authorities said they had seized the tanker in the North Atlantic for alleged sanctions violations. Washington said the tanker was part of a shadow fleet that carries oil for countries such as Venezuela, Russia, and Iran in violation of US sanctions, and seized it despite the ship being escorted by the Russian navy.


Source: 

 


False Claims Misrepresented European Response to U.S. Remarks on Greenland

NewsGuard’s Reality Check: “False Claim of the Week” focused on a widely shared false narrative alleging that Italian Prime Minister Giorgia Meloni said Europe would shut U.S. military bases, scrap trade agreements, and boycott McDonald’s in response to U.S. threats to acquire Greenland. The claim, which circulated widely on social media, gained millions of views after anti-Trump users misrepresented a short video clip from Meloni’s press conference on the 9th of January 2026.


In reality, Meloni’s comments were rhetorical questions intended to criticize commentators who advocated distancing Italy from its allies. At the press conference, she reaffirmed Italy’s commitment to the European Union and the Atlantic alliance, emphasizing continued cooperation with the United States. NewsGuard found no credible news reports that Italy is considering leaving NATO, closing U.S. bases, or encouraging the boycott of McDonald’s.


Source: NewsGuard’s Reality Check. Europe Shutting U.S. Bases, Scrapping Trade Deals and Boycotting McDonald’s? NewsGuard’s False Claim of the Week. [online] Published 23 January 2026. Available at: https://www.newsguardrealitycheck.com/p/europe-shutting-us-bases-scrapping


Psychological Defence Against Hostile Disinformation and Influence

Sweden’s Psychological Defence Agency (PDA) presents Psychological Defence and Information Influence as a comprehensive framework for understanding and countering hostile information activities aimed at undermining democratic societies. It explains how foreign and domestic actors exploit openness, free expression, cognitive biases, media systems, and emerging technologies, particularly social media and artificial intelligence, to conduct disinformation, propaganda, and broader malign influence operations. These campaigns aim to distort public debate, polarize societies, erode trust in institutions, and weaken democratic decision-making by manipulating attention, emotions, and perceptions rather than relying solely on overt coercion.


Drawing on theory, case studies, and policy practice, especially in the Swedish and Nordic context, the textbook outlines how democratic states can respond without undermining their own values. It emphasizes a whole-of-society psychological defence, combining citizen resilience, media and information literacy, institutional preparedness, intelligence coordination, and legally grounded countermeasures. Rather than censorship or state control of truth, the approach prioritizes transparency, factual communication, attribution of hostile actors, and ethical deterrence, framing the defence against disinformation as a long-term effort to strengthen democratic norms, societal trust, and resistance to hostile influence.


Source: Psychological Defence Agency, J. Falkheimer & J. Pamment. Psychological Defence and Information Influence – A Textbook on Theory and Practice. [online] Published 2026. Available at: https://mpf.se/psychological-defence-agency/publications/archive/2026-01-19-psychological-defence-and-information-influence----a-textbook-on-theory-and-practice


[Appendix - Frameworks to Counter Disinformation]


Countering Information Influence Operations in the Nordic-Baltic Region

A report by the NATO Strategic Communications Centre of Excellence examined how the eight Nordic-Baltic countries (NB8) have developed strategies to counter information influence operations amid a deteriorating security environment. Drawing on shared democratic values, historical experience with foreign interference, and close regional cooperation, the countries have built national frameworks that emphasize resilience, public awareness, and coordinated responses. Since 2014, and especially following Russia’s full-scale invasion of Ukraine, information influence operations (IIO) have increasingly been paired with cyberattacks, sabotage, and pressure on critical infrastructure, making them a central national and regional security concern.


Across the region, resilience is the cornerstone of counter-IIO efforts. This includes educating citizens through media literacy and critical thinking, strengthening institutional preparedness, and involving civil society, independent media, and fact-checking organizations. Defence and intelligence bodies conduct monitoring and situational analysis, while governments rely on a mix of strategic communications, case-by-case responses, and, increasingly, legal and regulatory measures such as sanctions, media restrictions, and amendments to the criminal code. Attribution and response thresholds vary by country, but international cooperation through NATO, the EU, and bilateral partnerships is a consistent feature.


The report concluded that the NB8 countries are among the most advanced globally in countering IIO, largely due to their whole-of-society approach and strong coordination culture. However, challenges remain, including resource constraints, fragmented legal tools, duplication across international forums, and the need to balance free expression with disruptive measures. The authors identified significant potential for deeper regional cooperation, including joint training, shared capability development, and coordinated response projects, to further strengthen resilience against increasingly sophisticated influence operations.


Source: NATO Strategic Communications Centre of Excellence, J. Lindgren & J. Pamment & A. Palmer & S. Svetoka & E. Lange-Ionatamišvili. Countering Information Influence Operations in the Nordic-Baltic Region. [online] Published 22 January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Countering-IIOs-in-the-Nordic-Baltic-FINAL-FILE.pdf?zoom=page-fit


[CRC Glossary]


The modern Information Environment is projected to continue growing in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.










