Cyber-based hostile influence campaigns 26th January - 1st February 2026
- CRC

[Introduction]
Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.
During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Contents]
[State Actors]
Russia
Pro-Kremlin Network Spread Fake Media Reports to Undermine EU Unity After Davos
NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network
Russian Pravda Network Amplified Alberta Secessionist Content
Kremlin Claimed Russia and the US Won’t Talk to EU Politicians
EU Sanctioned Six Individuals Over Russian Information Manipulation Activities
Latvia Claimed Russia Remained its Top Cyber Threat as Attacks Hit Record High
The War in Ukraine
China
[General Reports]
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
The European External Action Service (EEAS) reported that the EU Council imposed sanctions on six additional individuals for their involvement in Russian hybrid activities, particularly foreign information manipulation and interference targeting the EU, its member states, and partners.
In its annual report, Latvia’s Constitution Protection Bureau (SAB) said 2025 marked an all-time high in registered cyber threats targeting the country, while Russia remained the primary source of cyber risk.
A report by the NATO Strategic Communications Centre of Excellence examined how Beijing has sought to shape the Nordic-Baltic information environment, documenting a shift from earlier partnership approaches to a more cautious, contested relationship.
According to an EUvsDisinfo article, global risk assessments increasingly identify foreign information manipulation, disinformation, and misinformation as structural threats that undermine democracy, human rights, economic stability, and crisis response, making investment in trustworthy public-interest media and stronger policy safeguards essential to protecting information integrity.
A Politico article describes a growing digital struggle in which expanded federal surveillance and data use for deportations is met by activist and hacker efforts to track, leak, and disrupt ICE operations, fueling an increasingly contested and misinformation-prone information environment.
An article from the NATO Strategic Communications Centre of Excellence explains how coordinated actors exploit social media platforms through fake accounts, automated amplification, and targeted narrative tactics to manipulate public opinion, spread disinformation, and undermine trust in democratic information environments.
[State Actors]
Russia
Pro-Kremlin Network Spread Fake Media Reports to Undermine EU Unity After Davos
NewsGuard’s Reality Check reveals how a pro-Kremlin influence operation sought to undermine the European Union during and immediately after the World Economic Forum in Davos by circulating fabricated videos that impersonated trusted outlets. The fake content included a fabricated Reuters video alleging that France and Germany were preparing to leave the EU, a bogus Gallup report claiming that most young Europeans supported exiting the union, and a falsified Economist video attributing anti-EU remarks to former German Chancellor Angela Merkel.
All of these claims were demonstrably false, and none appeared on the outlets’ official platforms. The individuals cited consistently expressed strong pro-EU positions. Reuters explicitly confirmed that the video attributed to it was fake, while independent polling showed strong EU support among European youth. The campaign also promoted additional fabricated quotes, anti-EU statements attributed to global leaders, claims about fabricated anti-EU protests, and reports of escalating tensions among EU member states. Although individual videos achieved only modest reach, collectively they garnered thousands of views across platforms such as Telegram, where audiences have limited tools for verifying authenticity.
Source: NewsGuard, A. Lee. Post Davos, Russian Influence Operation Uses Phony Videos Impersonating Reuters, Gallup and the Economist to Foment EU Discord. [online] Published 28 January 2026. Available at: https://www.newsguardrealitycheck.com/p/russians-seed-fake-reports-to-sow
NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network
Yevgeny Shevchenko was designated by NewsGuard’s Reality Check as its 2025 Disinformer of the Year for his role in building the Pravda network, one of the most prolific pro-Kremlin disinformation operations globally. The network comprises hundreds of automated, news-style websites published in 49 languages, collectively producing approximately 6.3 million articles in 2025 alone. These sites repeatedly amplified false claims aligned with Kremlin narratives, targeting topics such as the war in Ukraine, European and U.S. elections, public health, and geopolitics, while using domain names designed to appear legitimate and local.
A key impact of the Pravda network was its success in polluting search results and influencing generative AI systems. NewsGuard audits found that while some AI systems successfully debunked false claims sourced directly from Pravda articles, others reproduced them at significant rates. Shevchenko, a Crimea-based web entrepreneur and founder of the company TigerWeb, has kept a low public profile despite the network’s reach. The Pravda network expanded rapidly after Russia’s full-scale invasion of Ukraine in 2022 and was sanctioned by the European Union in July 2025 for coordinated information manipulation.
Source: NewsGuard, A. Lee & E. Maitland. NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network. [online] Published 27 January 2026. Available at: https://www.newsguardrealitycheck.com/p/newsguards-2025-disinformer-of-the
Russian Pravda Network Amplified Alberta Secessionist Content
As reported by DisinfoWatch, Russia’s Pravda News Network published what appeared to be an AI-generated video promoting an Alberta secessionist rally scheduled for Monday, January 26, on the Russian social media platform VK. The Pravda network, also known as Portal Kombat, was first identified by France’s VIGINUM agency as a coordinated pro-Kremlin disinformation ecosystem that aggregates and republishes content from Russian state media, official channels, and aligned online sources rather than producing original reporting. The network operates a Canada-focused site that republishes material daily, drawing heavily from outlets such as RT, TASS, and the Russian Embassy in Canada. Canadian civil society group Cyber Alberta has warned that Pravda is targeting Canadian interests.
Source: DisinfoWatch, Russian Pravda News Platform Amplifying Alberta Secessionist Events. [online] Published 25 January 2026. Available at: https://disinfowatch.org/disinfo/russian-pravda-news-platform-amplifying-alberta-secessionist-events/
Kremlin Claimed Russia and the US Won’t Talk to EU Politicians
DisinfoWatch documents how Russian state outlet RT reported claims by Kremlin spokesperson Dmitry Peskov asserting that Russia would not engage with EU foreign policy chief Kaja Kallas and that it was “obvious” the United States would also refuse to engage with her. The claim sought to widen EU–US fractures amid UAE-hosted talks. Kallas is the EU’s High Representative for Foreign Affairs and Security Policy and a Commission Vice-President, a role appointed through formal EU processes and documented in EU institutional records. The report relied on delegitimizing language and provided no evidence to support the assertion that Washington shared Moscow’s position.
Source: DisinfoWatch, Kremlin Claims Russia and US won’t talk to EU. [online] Published 26 January 2026. Available at: https://disinfowatch.org/disinfo/kremlin-claims-russia-and-us-wont-talk-to-eu/
EU Sanctioned Six Individuals Over Russian Information Manipulation Activities
The European External Action Service (EEAS) reported that the EU Council imposed sanctions on six additional individuals for their involvement in Russian hybrid activities, particularly foreign information manipulation and interference targeting the EU, its member states, and partners. Those sanctioned include prominent Russian television presenters Dmitry Guberniev, Ekaterina Andreeva, and Maria Sittel; propagandist Pavel Zarubin; actor Roman Chumakov; and ballet dancer Sergey Polunin. The Council said these figures have actively promoted pro-Kremlin disinformation, anti-Ukraine and anti-Western narratives, and, in some cases, helped raise funds for the Russian armed forces, directly contributing to Russia’s war effort against Ukraine. With this decision, EU restrictive measures now apply to 65 individuals and 17 entities. Sanctions include asset freezes, bans on EU citizens and companies providing funds or economic resources to those listed, and travel restrictions preventing entry into or transit through EU territory. The legal acts formalizing the measures have been published in the Official Journal of the European Union.
Source: EEAS, Russian hybrid threats: Council sanctions six individuals over information manipulation activities. [online] Published 30 January 2026. Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-council-sanctions-six-individuals-over-information-manipulation-activities_en
Latvia Claimed Russia Remained its Top Cyber Threat as Attacks Hit Record High
In its annual report, Latvia’s Constitution Protection Bureau (SAB) said that 2025 marked an all-time high in the number of registered cyber threats targeting the country, while Russia remained the primary source of cyber risk. While most incidents involved cybercrime and digital fraud, state-linked threats remained elevated, particularly from Russia. From a national security perspective, the most significant risks included intrusion attempts, malware, system compromises, and distributed denial-of-service (DDoS) attacks. Latvian authorities noted that effective defensive measures, particularly by CERT.LV, helped limit the impact of many attacks, including during politically sensitive events such as municipal elections.
A key concern highlighted in the report was the growing role of Russian hacktivists, who have demonstrated both intent and capability to target critical and industrial systems across Latvia and other Western countries. These actors aim to disrupt essential services, intimidate populations, punish support for Ukraine, and deter further assistance. Examples cited included hacktivist attacks on operational technologies, such as dams and power plants, in Norway and Poland, where weak security controls enabled attackers to manipulate industrial control systems and, in one case, shut down a hydroelectric facility. Although Latvia has so far avoided major incidents affecting critical infrastructure, vulnerabilities in operational technologies remain a significant risk. Russian DDoS campaigns continued to target Latvian government institutions, municipalities, and critical infrastructure, often timed to coincide with political decisions or symbolic events. In most cases, DDoS attacks had little or no effect on services’ availability. To counter this threat, Latvia has invested in centralized, state-funded DDoS protection for public institutions and strengthened oversight of ICT critical infrastructure through new cybersecurity laws and regulations.
Source: SAB, Annual Report 2025. [online] Published January 2026. Available at: https://www.sab.gov.lv/files/uploads/2026/01/SABs-annual-report_2025_ENG.pdf
The War in Ukraine
Fake Videos Targeted Ukrainian Refugees in France
StopFake reports that a series of fake videos on Telegram falsely alleged that Ukrainian refugees in France committed mass crimes, including murder, theft, drug distribution, and even terrorism. These videos, which mimicked the logos and formats of prominent French and international media outlets, including Le Parisien, Le Figaro, L’Équipe, Reuters, Le Point, and Euronews, were part of a coordinated disinformation campaign. Analysis showed that the videos were released over a short period, from January 12 to 16, 2026, via at least three anonymous Telegram channels, and were then widely amplified across pro-Russian networks.
Fact-checks confirmed that none of the alleged crimes or news stories were real. Official websites, social media accounts, and publications from the cited media outlets contain no reports that match the videos’ claims. France also does not maintain official statistics on crime specifically among Ukrainian refugees, and available data suggest that Ukrainians do not pose a higher criminal threat than other migrant groups.
Source: StopFake, Фейк: Мировые СМИ сообщили о массовых преступлениях украинских беженцев во Франции [Fake: World Media Reported Mass Crimes by Ukrainian Refugees in France]. [online] Published 28 January 2026. Available at: https://www.stopfake.org/ru/fejk-mirovye-smi-soobshhili-o-massovyh-prestupleniyah-ukrainskih-bezhentsev-vo-frantsii/
China
China’s Influence in the Nordic–Baltic Information Environment in Denmark and Lithuania
The NATO Strategic Communications Centre of Excellence examined, in a report on China’s influence in the Nordic–Baltic information environment, how Beijing has sought to shape the region’s information space, documenting a shift from earlier partnership approaches to a more cautious, contested relationship. It mapped China’s objectives (protecting core interests, acquiring technology, and improving perceptions), described eight avenues of influence, and analysed official PRC frames and their resonance in local media using country case studies of Lithuania and Denmark.
In Lithuania, relations with China deteriorated sharply after 2019, culminating in Vilnius's withdrawal from the China-CEEC format and the authorization of Taiwan to open a representative office under its own name. China responded with economic pressure and a coordinated diplomatic and information campaign. However, the study found that China’s influence in Lithuania’s media space remained limited. Chinese narratives had little resonance, partly due to the absence of strong Chinese media channels and partly because Beijing’s coercive tactics proved counterproductive, reinforcing public skepticism rather than shaping opinion. Media debates largely reflected domestic political dissatisfaction and broader geopolitical shocks, particularly Russia’s invasion of Ukraine, rather than successful Chinese messaging.
In Denmark, the report identified a different pattern, described as “Shadow Wolf Warrior” diplomacy. Rather than relying on aggressive public messaging, China relied more on backstage influence through elite networks, business ties, and United Front activities, while maintaining a low public profile. Although Danish public discourse was generally skeptical of Chinese frames, and official messaging failed to gain broad traction, the report warned that China’s covert channels and long-standing institutional ties create a durable influence.
Source: NATO Strategic Communications Centre of Excellence, M. Lanteigne & L. Stünkel & K. Andrijauskas & A. K. Jakobsson. China’s Influence in the Nordic – Baltic Information Environment: Denmark and Lithuania. [online] Published 28 January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Chinas-Influence-in-the-Nordic-Baltic---Denmark-Lithuania-FINAL-FILE.pdf?zoom=page-fit
Pro-China AI Videos Falsely Claim Taiwanese Support for Unification
NewsGuard’s Reality Check has tracked how, since December 2025, pro-China sources have circulated AI-generated videos showing people purportedly from Taiwan speaking Mandarin with authentic Taiwanese accents and calling for unification with China. One account on the Chinese platform RedNote, “Taiwanese come home,” posted 35 such videos featuring teachers, doctors, police officers, firefighters, and students, garnering over 21,000 likes. NewsGuard confirmed that the videos were AI-generated using OpenAI’s Sora 2 tool, and that the individuals depicted do not exist or were misrepresented. For example, Taipei Municipal Chien Kuo High School, shown in one video, has never admitted female students, and the National Taiwan University professors depicted in another video were entirely fabricated. These AI-generated videos are part of a broader cognitive warfare effort by China, which has commissioned companies like Magic Data and iFlytek to create voice databases of native Taiwanese speakers in Mandarin, Hokkien, and Hakka. These databases are intended to lend authenticity to fabricated pro-China messaging.
Source: NewsGuard, C. Lin. Pro-China AI-Generated Videos Use Databanks of Taiwanese Accents to Fake Calls for Taiwan-China Unification. [online] Published 29 January 2026. Available at: https://www.newsguardrealitycheck.com/p/pro-china-ai-fakes-a-taiwanese-accent
[General Reports]
Disinformation Surrounding the Shooting of Alex Pretti
NewsGuard’s Reality Check designated the claim that Alex Pretti pulled a gun on federal agents before being fatally shot in Minneapolis in January 2026 as its “False Claim of the Week,” citing its rapid spread, high engagement, and promotion by high-profile figures. Following the shooting, Trump administration officials and conservative commentators alleged that Pretti brandished a firearm and posed an imminent threat, framing the killing as justified. Statements from the Department of Homeland Security, Homeland Security Secretary Kristi Noem, and White House aide Stephen Miller were widely echoed across social media and partisan websites, drawing millions of views.
However, a detailed review of eyewitness video footage from five angles by NewsGuard, alongside reporting from major outlets including Reuters, CNN, The New York Times, and ABC News, found no evidence that Pretti pulled or reached for a gun before he was shot. The footage showed Pretti holding a phone, with no weapon visible, as officers confronted him. He was pepper-sprayed, tackled, and pinned to the ground before an officer removed a concealed handgun from his waistband. Authorities later confirmed that Pretti was legally carrying a concealed firearm with a permit, but video analysis indicated it remained holstered and hidden throughout the initial encounter.
Additionally, NewsGuard’s Reality Check reported that shortly after the shooting, an AI-manipulated image circulated widely on social media, falsely claiming to show Pretti holding a gun at the moment he was shot, with posts reaching millions of views within hours. Investigators and journalists confirmed the image was fabricated.
Sources:
NewsGuard, C. Vercellone, Reality Check. Debunk: Pretti Didn’t Pull Out a Firearm, Contrary to the Trump Administration’s Claims. [online] Published 30 January 2026. Available at: https://www.newsguardrealitycheck.com/p/did-alex-pretti-brandish-a-gun-newsguards
NewsGuard, M. Calamaio, Reality Check. AI-Manipulated Image Cited as False Evidence that Victim in Latest ICE Shooting Was Brandishing a Gun. [online] Published 26 January 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-manipulated-image-shows-gun-not
Disinformation As a Systemic Threat to Democratic Resilience
EUvsDisinfo argues that recent global risk assessments characterize foreign information manipulation and interference (FIMI), disinformation, and misinformation as systemic threats that undermine democratic resilience worldwide. Reports from the World Economic Forum, the United Nations, and the European External Action Service highlight how these campaigns deepen societal divides, erode trust in institutions, and weaken crisis response by undermining the shared evidence base required for collective decision-making. The Human Rights Council further warns that FIMI increasingly targets marginalized communities, independent media, and human rights defenders, demonstrating that information disorder is not a marginal issue but a global risk multiplier affecting governance, security, and social cohesion.
The article also emphasizes the economic and structural damage caused by disinformation, noting that misleading narratives can destabilize markets, distort financial expectations, and erode long-term investment and policy stability. Conflict-related and climate-focused manipulation campaigns can incite hatred, obstruct humanitarian efforts, and delay sustainable development by casting doubt on scientific consensus or promoting false solutions. As a safeguard, the article emphasizes robust public-interest media ecosystems, independent journalism, and media literacy, alongside initiatives such as the European Democracy Shield, which seeks to strengthen election integrity and counter FIMI through detection, cooperation, and proactive investment in trustworthy information spaces.
Source: EUvsDisinfo, FIMI and disinformation as global threats. [online] Published 30 January 2026. Available at: https://euvsdisinfo.eu/fimi-and-disinformation-as-global-threats/
Disinformation Vortex Around Minnesota ICE Protests
A podcast episode of Uncanny Valley by Wired describes a fragmented, high-tempo information environment surrounding intensified ICE activity in Minnesota, in which far-right and pro-administration messaging rapidly shaped and distorted public understanding of events. The hosts discuss how a right-wing influencer, Nick Shirley, promoted an unproven claim that Somali-run daycare centers in Minneapolis misappropriated millions in a Medicaid-related fraud narrative; they link this amplification to subsequent harassment and violence dynamics, including an attack on Rep. Ilhan Omar and attempts to frame the incident as staged. They also describe a rolling cycle of narrative shifts and reputational smears after the killing of protester Alex Pretti: claims moved from an alleged assassination attempt and “terrorist” labeling to alternative insinuations (e.g., about immigration status) and finally to blame-shifting arguments about protest behavior. This “spin-to-fit” approach, repeated by figures at the highest levels of the administration, prioritizes ideological utility over verifiable facts.
The episode then broadens to platform governance and credibility crises as accelerants for misinformation and perceived censorship. It highlights how users interpreted TikTok outages and content-performance changes as politically motivated suppression following a U.S. ownership restructuring, noting that distrust is compounded by opaque, personalized recommendation systems that are difficult to audit externally. The hosts suggest that even subtle algorithmic tweaks can influence which narratives gain traction without leaving clear evidence. They further note that TikTok’s updated terms request more granular location permissions and enable the broader collection of user input for AI features, raising concerns about surveillance, targeting, and the erosion of user trust at a moment when many already suspect political capture of major information channels.
Source: WIRED, B. Barrett & Z. Schiffer, & T. Marchman. Uncanny Valley: Minneapolis Misinformation, TikTok’s New Owners, and Moltbot Hype. [online] Published 29 January 2026.
ICE Surveillance Sparks Online Counterattacks
Politico reports an escalating digital information conflict surrounding the Trump administration’s mass deportation agenda, in which federal agencies have expanded domestic surveillance capabilities while online activists and hacker groups deploy countermeasures to track and expose immigration enforcement operations. ICE has reportedly increased its use of advanced surveillance tools and data access, including contracts with firms such as Paragon and Palantir, forensic phone-cracking technologies, facial recognition systems, and data brokers collecting sensitive personal information. The administration has also granted ICE access to large federal datasets from agencies like the IRS, Medicaid, and Social Security. In response, activists have used encrypted messaging platforms, social media, and community-built tools to report raid locations, map surveillance infrastructure, and identify agents, while cybercriminal collectives have escalated tactics by leaking names and personal details of ICE and DHS officials online.
The article highlights how these developments create fertile ground for disinformation, coercive influence, and contested narratives over legitimacy and safety. Digital tools intended to document or resist enforcement actions have prompted aggressive efforts by the government and major technology companies to suppress information sharing, including app removals, Meta’s content restrictions, and federal investigations into encrypted communications. Officials have framed ICE-tracking platforms as threats to agent security, while critics argue these actions represent intimidation and censorship aimed at silencing opposition. The environment is characterized by breaches, doxxing, surveillance expansion, and attempts to control online discourse, illustrating how both state and non-state actors use digital tactics to influence public perception, disrupt organizing, and shape the information space around immigration enforcement.
Source: Politico, D. Nickel & A. Ng. ICE has expanded its mass surveillance efforts. Online activists are fighting back. [online] Published 29 January 2026. Available at: https://www.politico.com/news/2026/01/29/ice-tracking-tools-protesters-00755703
[Appendix - Frameworks to Counter Disinformation]
UK Warned It Risks Absorbing Cyber and Hybrid Attacks Without Deterrence
Warnings from UK security leaders, reported by The Record, highlighted that Britain risks exposing itself to cyberattacks, sabotage, and disinformation campaigns unless it develops credible offensive deterrence alongside defensive resilience. Former national security adviser Lord Sedwill told a parliamentary hearing that resilience measures alone would not discourage hostile states.
The warnings came as ministers defended plans agreed at last year’s NATO summit to raise total security spending to 5 percent of GDP within a decade, including 1.5 percent for indirect defense and resilience such as cybersecurity. Committee members questioned whether this resilience funding would deliver new capabilities or merely repackage existing spending, given the lack of clear NATO definitions. Sedwill expressed concern that creative accounting could undermine the effort, urging ministers to clarify the additional capacity that would be delivered in the coming years. Ministers acknowledged that cyber incidents and hybrid attacks below the threshold of armed conflict are already having serious strategic effects. The government plans to publish a revised National Cyber Action Plan, shifting from a strategic framework to an operational plan focused on countering threats, strengthening resilience, and supporting economic growth.
Source: The Record, A. Martin. UK leaders warned country risks 'absorbing' cyber and hybrid attacks without offensive deterrence. [online] Published 28 January 2026. Available at: https://therecord.media/uk-government-warned-cyber-hybrid-threats-offensive-operations
Commercialized Social Media Manipulation and Disinformation Amplification
A 2025 experiment by the NATO Strategic Communications Centre of Excellence examined how major social media platforms detect and counter commercially purchased inauthentic engagement. Despite regulatory advances such as the EU Digital Services Act, manipulation services remain widely accessible and inexpensive, allowing actors to buy fake likes, comments, shares, and followers at scale. The study found that more than 30,000 inauthentic accounts generated more than 100,000 units of engagement, with enforcement varying significantly across platforms: X and YouTube removed a larger share of fake activity, whereas Instagram, TikTok, and others left most purchased engagement intact. The experiment also showed that manipulation is not confined to organic posts, as paid advertising systems can be exploited to distribute inauthentic narratives to targeted audiences.
The report highlights a shift toward more sophisticated influence tactics, including AI-enabled bot networks designed to blend into authentic conversations rather than relying on overt spam. These bots increasingly amplify politically sensitive and military-related narratives, including pro-Kremlin and pro-China themes, while commercial providers use cryptocurrency payments to obscure traceability and sustain a resilient ecosystem of manipulation. Overall, the findings underscore how hostile actors can exploit low-cost, automated tools to shape discourse, erode trust, and embed disinformation within legitimate online communities, emphasizing the need for behavioural detection, financial disruption, and stronger cross-platform accountability.
Source: NATO Strategic Communications Centre of Excellence, Social Media Manipulation for Sale: 2025 Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement. [online] Published 30 January 2026.
EU-Supported Media Literacy Effort Against Disinformation in Kosovo
As published by the EEAS, digital and AI-driven technologies have increased vulnerability to misinformation and information manipulation across the Western Balkans, prompting the European Union and its local European Houses to support public resilience initiatives. At the launch of the exhibition The Glass Room: Misinformation Edition in Pristina, EU Ambassador Aivo Orav emphasized the shared challenge of safeguarding societies from disinformation and the EU’s commitment to equipping citizens with tools to recognize and counter misleading narratives.
With EU support, the exhibition toured multiple towns in Kosovo between October and December 2025, using posters, interactive applications, and animations to explain how misinformation spreads, why it is persuasive, and how everyday online behaviors such as clicks, likes, and shares amplify false content. The tour also included capacity-building workshops led by experts Kreshnik Gashi and Darko Dimitrijević, which addressed risks such as deepfakes, algorithmic bias, and the influence of digital design on public opinion, while promoting source verification and privacy awareness as key defenses against hostile information dynamics.
Source: EEAS, Press and information team of the EU Office/EU Special Representative in Kosovo. The Kosovo Journey of The Glass Room – Misinformation Exhibition Edition. [online] Published 29 January 2026. Available at: https://www.eeas.europa.eu/delegations/kosovo/kosovo-journey-glass-room-%E2%80%93-misinformation-exhibition-edition_en
[CRC Glossary]
The modern information environment continues to grow in complexity and sophistication. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.