Cyber-based hostile influence campaigns 13th - 19th April 2026
- CRC


[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect (hence the term Cyfluence, as opposed to cyber-attacks that aim to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered an epistemological branch of Information Operations (IO) or Information Warfare (IW).
Typically, as has been customary over the last decade, the information is spread across various internet platforms, which constitute the different elements of the hostile influence campaign; connectivity and repetition of content between these elements are therefore the core characteristics of influence campaigns.
Hostile influence campaigns, much like cyber-attacks, have become a tool for rival nations and corporations to damage reputations or to achieve various business, political, or ideological goals. As in the cyber security arena, PR professionals and government agencies are responding to negative publicity and disinformation shared over the news and social media.
We use the term cyber-based hostile influence campaigns because this definition also includes cyber-attacks aimed at influencing (such as hack-and-leak operations during election periods), while it excludes more traditional forms of influence such as diplomatic, economic, and military pressure.
During the 13th to the 19th of April 2026, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report summarizes what we regard as the main events. Some of the campaigns mentioned rely solely on social media and news outlets, while others leverage cyber-attack capabilities.
[Contents]
[State Actors]
Russia
The War in Ukraine
China
Iran
[General Reports]
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
As reported by EU VS Disinfo, pro-Kremlin information channels intensified efforts to undermine trust in democratic processes in both Hungary and Bulgaria.
An article by The Jamestown Foundation described Péter Magyar’s electoral victory in Hungary as a major political shift, particularly in reducing Russian influence in the European Union.
According to a study by NATO, the People’s Republic of China has expanded its media presence in Arabic-speaking regions as part of broader FIMI efforts.
A report by Carnegie Endowment for International Peace argued that digital technologies, especially social media and AI, are increasingly used by governments to shape public opinion and reinforce political narratives, with this dynamic particularly visible in relations between China and Taiwan.
Ofcom announced that it will take new regulatory steps to combat the spread of illegal intimate images online, including non-consensual content and explicit deepfakes.
An article published by the International Journal of Communication examines how deep state conspiracies function as adaptable disinformation narratives used by diverse political actors to shape perceptions, link domestic and foreign threats, and blur the boundary between truth and falsehood.
An article published in Healthcare MDPI analyses how disinformation spreads through algorithm-driven platforms that exploit emotional and cognitive vulnerabilities, reinforcing false beliefs and eroding trust in media and institutions.
[Report Summary]
According to a report by EU vs. Disinfo, Russian figures Vladimir Kuznetsov and Alexei Stolyarov present themselves as pranksters, but their activities function as a form of coordinated disinformation.
As revealed by the FREEDOM TV channel’s website, Russian propaganda circulated a false claim that Ukraine declared itself a "winner" in the Iran war, using a manipulated image from the FREEDOM channel as supposed evidence.
As reported by EU VS Disinfo, pro-Kremlin information channels intensified efforts to undermine trust in democratic processes in both Hungary and Bulgaria.
According to an article by EU vs. Disinfo, Russia has used religion, particularly Orthodox Church structures, as a tool in its broader disinformation and hybrid warfare strategy against Ukraine.
An article by The Jamestown Foundation described Péter Magyar’s electoral victory in Hungary as a major political shift, particularly in reducing Russian influence in the European Union.
As published by The Atlantic Council, widespread narratives about the Russia–Ukraine War are shaped by misleading or incomplete information, particularly the claim that the conflict began in 2022.
According to a study by NATO, the People’s Republic of China has expanded its media presence in Arabic-speaking regions as part of broader FIMI efforts.
A report by Carnegie Endowment for International Peace argued that digital technologies, especially social media and AI, are increasingly used by governments to shape public opinion and reinforce political narratives, with this dynamic particularly visible in relations between China and Taiwan.
According to an article by the Institute for Strategic Dialogue (ISD), during the first month of the Iran war, two coordinated pro-Iran networks on X, BRICS4CLICKS and Verified4War, generated over one billion views by spreading false, misleading, and AI-generated content.
According to a report by The Hill, disinformation rooted in real social tensions has long been a national security concern.
As reported by NewsGuard's Reality Check, Elon Musk significantly amplified a false claim that COVID-19 vaccines caused up to 60,000 deaths in Germany by reposting content to his massive audience on X.
A report by ISD analyzed Canadian domestic extremist activity across social media between June and November 2025, identifying hundreds of accounts generating over one million posts.
In a statement at the UN Committee on Information, the European Union reaffirmed its support for efforts to promote accurate, reliable, and accessible information worldwide.
Ofcom announced that it will take new regulatory steps to combat the spread of illegal intimate images online, including non-consensual content and explicit deepfakes.
An article published by the International Journal of Communication examines how deep state conspiracies function as adaptable disinformation narratives used by diverse political actors to shape perceptions, link domestic and foreign threats, and blur the boundary between truth and falsehood.
An article published in Healthcare MDPI analyses how disinformation spreads through algorithm-driven platforms that exploit emotional and cognitive vulnerabilities, reinforcing false beliefs and eroding trust in media and institutions.
[State Actors]
Russia
Weaponizing Comedy as a Disinformation Tactic
According to a report by EU vs. Disinfo, Russian figures Vladimir Kuznetsov (Vovan) and Alexei Stolyarov (Lexus) present themselves as pranksters, but their activities function as a form of coordinated disinformation. By impersonating political figures and publishing selectively edited conversations, they create misleading narratives that consistently favor the Kremlin. Although framed as entertainment, their content is amplified by pro-Russian media networks, turning staged interactions into tools for influencing public opinion.
Their operations focus heavily on discrediting Ukraine, Western governments, and post-Soviet opposition movements. Through deceptive calls, they extract comments that are taken out of context and used to support false claims, such as portraying protests in Belarus and Georgia as Western-controlled "color revolutions" or suggesting declining Western support for Ukraine. These manipulated narratives are then spread online, highlighting how modern propaganda no longer relies only on traditional media but also on viral, seemingly informal content.
Source: EUvsDisinfo. Pranked by the Kremlin: fake phone calls as a FIMI instrument. [online] Published 15 April 2026. Available at: https://euvsdisinfo.eu/pranked-by-the-kremlin-fake-phone-calls-as-a-fimi-instrument/
Fabricated Media Content Used to Spread Disinformation
As revealed by the FREEDOM TV channel’s website, Russian propaganda circulated a false claim that Ukraine declared itself a "winner" in the Iran war, using a manipulated image from the FREEDOM channel as supposed evidence. The image was digitally altered to include a fake caption stating that Ukraine had defeated Iran and that its air defense instructors were returning home. This fabricated visual was designed to mislead audiences and create a false narrative about Ukraine’s role in the conflict.
The disinformation spread quickly across Russian Telegram channels. It was even picked up by some Ukrainian media outlets and bloggers without proper verification. In reality, the original broadcast had nothing to do with Iran. It featured commentary on the European Commission's positions on Russian threats to the Baltic states, with an Estonian official speaking via video link.
Source: Freedom (UATV). Повідомлення ДП “МПІУ” щодо використання логотипа і зображення студії телеканалу FREEДОМ для створення російського фейку. [online] Published 9 April 2026. Available at: https://uatv.ua/uk/povidomlennya-dp-mpiu-shhodo-vykorystannya-logotypa-i-zobrazhennya-studiyi-telekanalu-freedom-dlya-stvorennya-rosijskogo-fejku/
Russia Targeted Elections in Hungary and Bulgaria
As reported by EU VS Disinfo, pro-Kremlin information channels intensified efforts to undermine trust in democratic processes in both Hungary and Bulgaria. In Hungary, the campaign focused on discrediting the opposition party TISZA and its leader, Péter Magyar, while spreading claims that the EU and Ukraine were interfering in the elections of 12 April 2026. Following TISZA’s victory, similar narratives are expected to persist, targeting both the new political leadership and the EU.
Comparable messaging has also been directed at Bulgaria ahead of its upcoming parliamentary elections, particularly through allegations of EU interference and censorship. Beyond election-related disinformation, pro-Kremlin outlets promoted broader narratives aimed at weakening trust in the EU. These included false claims that the EU is secretly developing nuclear weapons. At the same time, messaging around EU financial support to Ukraine framed the assistance as prolonging the war.
Another report by EU VS Disinfo presented a specific case where a Russian disinformation campaign spread a fabricated claim that Magyar had killed a family puppy. The story, originating from a newly created and anonymous website, falsely alleged that Magyar’s ex-wife had accused him of abusive behavior in a memoir that does not exist. Despite lacking any evidence, the claim quickly gained traction online, reaching millions of users across multiple languages. The disinformation moved beyond fringe platforms when Polish opposition leader Jarosław Kaczyński repeated the allegation during a press conference. The claim was later acknowledged as untrue by his party, while Magyar’s ex-wife explicitly denied ever making such accusations.
Source: EUvsDisinfo. Russia targets elections in Hungary and Bulgaria. [online] Published 17 April 2026. Available at: https://euvsdisinfo.eu/russia-targets-elections-in-hungary-and-bulgaria/
The War in Ukraine
Religion as a Tool of Hybrid Warfare
According to an article by EU vs. Disinfo, Russia has used religion, particularly Orthodox Church structures, as a tool in its broader disinformation and hybrid warfare strategy against Ukraine. Alongside historical ties between the Ukrainian Orthodox Church (UOC) and Moscow, affiliated networks have spread propaganda that frames the war in religious terms. These narratives falsely portray Ukraine as persecuting believers and describe Russia’s actions as a "holy war" to defend true Christianity, reinforcing pro-Kremlin ideology.
A key element of this disinformation is the systematic use of media channels linked to the UOC to spread false claims, conspiracy theories, and manipulative interpretations of events. These platforms promote narratives about a supposed "church schism" and label Ukrainian institutions and believers as "Satanists" or "heretics". Such language is designed to dehumanize Ukrainians and legitimize Russian aggression by embedding propaganda within religious discourse. Additionally, these campaigns aim to reshape public perception by introducing Kremlin-aligned terminology into everyday use.
Source: EUvsDisinfo. How Russia weaponizes the church in Ukraine. [online] Published 14 April 2026. Available at: https://euvsdisinfo.eu/how-russia-weaponizes-the-church-in-ukraine/
Political Changes in Hungary Affect Its International Relations
An article by The Jamestown Foundation described Péter Magyar’s electoral victory in Hungary as a major political shift, particularly in reducing Russian influence in the European Union. While the new government plans to investigate past ties between Hungarian officials and Moscow, the legacy of Russian influence, especially through energy dependence and political networks, remains significant. Disinformation is highlighted as a key tool previously used to shape public opinion and policy, particularly under the former government.
A central theme is how pro-Kremlin disinformation has affected Hungary’s domestic attitudes, especially toward Ukraine. Years of narratives portraying Ukraine negatively have contributed to public skepticism about closer ties and EU membership for Ukraine. The text emphasized that while political leadership can shift quickly, the effects of disinformation are more persistent. Public distrust, shaped by repeated misleading narratives, continues to complicate Hungary’s foreign policy decisions.
Source: The Jamestown Foundation. Péter Magyar’s Historic Victory Holds Implications for Russia and Ukraine. [online] Published April 2026. Available at: https://jamestown.org/peter-magyars-historic-victory-holds-implications-for-russia-and-ukraine/
Misleading Framing of the Russia–Ukraine War and Timeline
As published by The Atlantic Council, widespread narratives about the Russia–Ukraine War are shaped by misleading or incomplete information, particularly the claim that the conflict began in 2022. In reality, the war started in 2014 with Russia’s annexation of Crimea and its covert military intervention in eastern Ukraine. Framing the war as a shorter, recent conflict obscures its true nature as a long-term campaign of aggression and contributes to misunderstandings in international discourse.
A central theme is the role of Kremlin disinformation in distorting perceptions of the conflict. Russia portrayed its actions in 2014 as local uprisings by oppressed Russian-speaking populations, while denying direct military involvement. These false narratives were sometimes repeated by international media, creating confusion and lending credibility to fabricated claims. In fact, evidence and later admissions confirm that the so-called "separatist" movements were orchestrated and supported by Moscow from the outset.
This disinformation continues to influence policy debates and peace proposals, such as the idea that territorial concessions could end the war, and therefore dismantling these persistent falsehoods and acknowledging the full history of the conflict since 2014 is essential for understanding the war and achieving any meaningful resolution.
Source: Atlantic Council. Russia invaded Ukraine in 2014 long before the full-scale war of 2022. [online] Published 18 April 2026. Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/russia-invaded-ukraine-in-2014-long-before-the-full-scale-war-of-2022/
China
PRC Media Influence in Arabic-Language Environments
According to a study by NATO, the People’s Republic of China (PRC) has expanded its media presence in Arabic-speaking regions as part of broader foreign information manipulation and interference (FIMI) efforts. Through state-controlled media and partnerships with local outlets, Beijing seeks to shape public opinion, promote its global image, and challenge Western narratives. While social media plays a role, traditional media remains a key channel for spreading these narratives, often presenting China as a reliable development partner while downplaying sensitive issues such as human rights.
The study found that PRC messaging is selectively effective. Narratives tied to practical benefits like infrastructure projects, trade, and logistics are more likely to be picked up and amplified by local media. Similarly, content that aligns with existing anti-Western or geopolitical competition narratives gains traction. In contrast, more ideological messaging, such as concepts like a "shared global destiny", has limited impact. Importantly, PRC narratives are rarely critically challenged in Arabic-language media, even when they carry a strong promotional or propagandistic tone.
The findings highlighted a pattern of indirect disinformation influence. Rather than spreading overt falsehoods, PRC media promote biased, one-sided narratives that are selectively adopted and reframed by regional outlets. This contributes to shaping perceptions of global politics in ways that favor Beijing, with potential spillover effects on Arabic-speaking audiences beyond the region, including in Europe.
Source: NATO Strategic Communications Centre of Excellence. Assessing PRC Media: Framing and Narratives in Arabic-Language Media Environments. [online] Published April 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Assessing-PRC-Media-FINAL-465f0.pdf?zoom=page-fit
China’s Digital Disinformation Affects Taiwan’s Policy
A report by Carnegie Endowment for International Peace argued that digital technologies, especially social media and AI, are increasingly used by governments to shape public opinion and reinforce political narratives, with this dynamic particularly visible in relations between China and Taiwan. It highlighted how Beijing promotes a simplified "unification versus independence" framework to describe Taiwan’s political future, reducing a complex and diverse set of public attitudes into a binary choice. This framing, amplified through global media, influencer networks, and Chinese-developed AI systems, supports China’s broader effort to enforce the "One China" principle and weaken international support for Taiwan’s self-determination.
However, survey data from Taiwan showed that public opinion is far more nuanced, spanning multiple positions such as conditional unification, maintaining the status quo, or conditional independence. This complexity is often overlooked not only by external actors but also by domestic media and political discourse, which tend to favor simplified narratives for mobilization purposes. The report concluded that this "digital hegemony" poses a significant challenge to democratic debate and policymaking.
Source: Carnegie Endowment for International Peace. Digital Hegemony and the Reification of Taiwan’s “Unification-Independence” Dichotomy. [online] Published 14 April 2026. Available at: https://carnegieendowment.org/research/2026/04/digital-hegemony-and-the-reification-of-taiwans-unification-independence-dichotomy
Iran
Pro-Iran Networks Gained a Billion Views on War Propaganda
According to an article by the Institute for Strategic Dialogue (ISD), during the first month of the Iran war, two coordinated pro-Iran networks on X, BRICS4CLICKS and Verified4War, generated over one billion views by spreading false, misleading, and AI-generated content. Despite comprising only a few dozen accounts, the networks achieved massive reach through coordinated reposting, paid verification (blue checkmarks), and amplification via X’s "For You" algorithm. Their content was further boosted by high-profile accounts, including diplomats and influencers.
Both networks posed as news or commentary accounts while promoting pro-Iran narratives that exaggerated military successes and targeted the United States and Israel. They widely circulated fabricated claims, such as the death of Israeli Prime Minister Benjamin Netanyahu, and used AI-generated war footage, clickbait posts, and conspiracy theories to drive engagement. Their activity revealed clear coordination patterns, with most reposts occurring within the networks themselves and consistent similarities in account metadata and behavior.
ISD has no evidence that either network is state-backed, and it remains unclear whether the underlying motivations are financial or ideological. Although X removed some accounts, gaps in moderation allowed these networks to gain visibility and credibility.
Source: Institute for Strategic Dialogue. How pro-Iran networks gained a billion views on war propaganda. [online] Published April 2026. Available at: https://www.isdglobal.org/digital-dispatch/how-pro-iran-networks-gained-a-billion-views-on-war-propaganda/
Disinformation as a Divisive Strategic Weapon
According to a report by The Hill, disinformation rooted in real social tensions has long been a national security concern. It recalls how President Eisenhower warned during the Little Rock crisis that visible racism damaged U.S. credibility and gave adversaries propaganda material. Today, the same dynamic persists in a different form, as foreign actors, particularly Iran, exploit America’s racial history and internal divisions to weaken unity and undermine support for national policies.
According to the report, modern disinformation blends real social tensions with exaggerations and falsehoods. Iranian-linked campaigns reportedly use social media, memes, and music to target specific groups, such as Black Americans and women, while promoting anti-American, anti-Israel, and antisemitic narratives. This strategy does not aim to invent new conflicts but to amplify existing ones, creating deeper polarization and mistrust within society. The report emphasized that domestic actors unintentionally contribute to this problem. Political extremes, influencers, and media figures may amplify divisive narratives for profit or ideology, effectively reinforcing foreign propaganda.
Sources: The Hill. Iran is exploiting our racial divisions online — and Americans are helping. [online] Published 2026. Available at: https://thehill.com/opinion/international/5830889-iran-propaganda-american-divisions/
[General Reports]
Musk Gave a 50 Million-View Boost to the COVID Vaccine Hoax
As reported by NewsGuard's Reality Check, Elon Musk significantly amplified a false claim that COVID-19 vaccines caused up to 60,000 deaths in Germany by reposting content to his massive audience on X. The claim originated from testimony by a former Pfizer toxicologist, who speculated, without evidence, that official reports of vaccine-related deaths were vastly undercounted. While the claim initially gained limited traction, it spread rapidly after being shared by a commentator and then boosted by Musk, reaching tens of millions of views within hours.
In reality, Germany’s vaccine regulator, the Paul Ehrlich Institute, reported 2,133 deaths following vaccination, but emphasized that such reports do not imply causation. Its analysis identified only 74 cases in which a link to vaccination was possible or probable, out of nearly 200 million doses administered.
Source: NewsGuard Reality Check. Musk Gives a 50 Million View Boost. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/musk-gives-a-50-million-view-boost
Online Extremism in Canada in 2025
A report by ISD analyzed Canadian domestic extremist activity across social media between June and November 2025, identifying hundreds of accounts generating over one million posts. These groups are highly active on platforms like X and Telegram, where different ideologies, from ethnonationalism to white supremacy, thrive. Their content often focuses on narratives of societal decline, threats to national identity, and hostility toward minorities, helping to amplify and normalize extremist worldviews online.
Disinformation plays a central role in amplifying conspiracy theories and distorting real-world events. Narratives linked to figures like Romana Didulo, anti-vaccination movements, and false claims about minority groups are widely circulated. Offline incidents, such as violent attacks or political events, are often misrepresented or exaggerated to justify hatred, for example, by falsely linking entire communities to violence or spreading claims of media bias.
The spread of disinformation is closely tied to rising hate speech and calls for violence. False or misleading claims about migrants, LGBTQ individuals, and other minorities fuel hostility and dehumanization, while viral posts from influential accounts can rapidly intensify these trends. Although only a small portion of posts contain explicit violent language, spikes often follow disinformation-driven reactions to major events.
Source: Institute for Strategic Dialogue (ISD). Online Domestic Extremism in Canada: June–November 2025. [online] Published April 2026. Available at: https://www.isdglobal.org/wp-content/uploads/2026/04/Online-Domestic-Extremism-in-Canada-June-November-2025.pdf
Deep State Narratives Blur Lines Between Truth and Disinformation
An article published by the International Journal of Communication, written by Stephen Hutchings, examines how “deep state” (DS) narratives function as a fluid and contested form of disinformation across political and cultural contexts. It shows that DS conspiracism is used by a wide range of actors, including populist political movements, mainstream politicians, Kremlin-affiliated media, and counter-disinformation organizations, to advance competing narratives. These actors deploy DS claims both as tools of influence and as accusations against opponents, often linking domestic grievances with foreign adversaries in a circular dynamic. Tactics include narrative amplification through media ecosystems, strategic framing of elites as covert manipulators, cross-border adaptation of conspiracy tropes, and the use of hashtags and community-building mechanisms to reinforce shared belief systems. The study highlights how DS narratives are frequently integrated into broader geopolitical messaging, particularly along the U.S. – Russia axis, where they are alternately framed as hidden truths or as disinformation, depending on the political perspective.
The article emphasizes that DS conspiracism operates as a “master narrative” within disinformation campaigns, capable of absorbing and connecting multiple claims into a coherent but unfalsifiable framework. Its effectiveness lies in its adaptability: it shifts between being presented as factual revelation, symbolic critique, or deliberate falsehood, allowing actors to exploit ambiguity and evade straightforward refutation. Disinformation tactics associated with DS narratives include selective use of evidence, rhetorical distancing, repurposing foreign-origin concepts to serve local political agendas, and reciprocal accusations between opposing actors. The analysis concludes that this fluidity undermines binary distinctions between truth and falsehood, enabling DS narratives to persist and evolve across platforms, cultures, and political systems, thereby complicating efforts to counter disinformation and contributing to broader instability in democratic information environments.
Source: International Journal of Communication. [online] Available at: https://ijoc.org/index.php/ijoc/article/view/26038/5302
Disinformation Exploits Algorithms and Human Vulnerabilities to Erode Trust
An article published in Healthcare MDPI analyses disinformation as a systemic and intentional form of information manipulation embedded within digital ecosystems and amplified by algorithm-driven platforms. It identifies digital platforms and social media networks as the primary enablers of disinformation dissemination, where algorithms prioritize emotionally charged, sensationalist, and polarizing content to maximize engagement, regardless of accuracy. These dynamics facilitate the rapid and large-scale spread of false or misleading narratives, often outpacing accurate information. Tactics include algorithmic amplification, exploitation of echo chambers and filter bubbles, and repeated exposure to false claims, which increases perceived credibility through familiarity effects. Disinformation actors, though not always explicitly named, operate through digitally mediated environments, leveraging platform structures and user behavior to propagate misleading content and shape public perception.
The article further highlights how disinformation campaigns exploit psychosocial vulnerabilities, including political beliefs, social identity, and emotional responses, to increase acceptance and diffusion of false narratives. Techniques such as confirmation bias, identity-based messaging, and emotionally manipulative content reinforce group alignment and polarization. At the same time, repeated exposure contributes to the “illusory truth effect”, making false information more likely to be believed. The cumulative impact of these tactics includes erosion of trust in media and institutions, distortion of risk perception, and behavioral influence in areas such as public health decision-making. Disinformation is thus framed as a coordinated influence process that operates across cognitive, emotional, and social dimensions, creating feedback loops of distrust and increased susceptibility within the information environment.
Source: MDPI. Effect of Muscle Energy Technique on Hamstring Flexibility: Systematic Review and Meta-Analysis. [online] Published 11 April 2023. Available at: https://www.mdpi.com/2227-9032/11/8/1089
[Appendix - Frameworks to Counter Disinformation]
EU Called for Stronger Action on Information Integrity
In a statement at the UN Committee on Information, the European Union reaffirmed its support for efforts to promote accurate, reliable, and accessible information worldwide. It praised the UN Department of Global Communications for advancing initiatives to counter disinformation and hate speech, including the implementation of the Global Principles for Information Integrity. The EU emphasized the importance of protecting independent media, strengthening media literacy, and ensuring transparency and accountability from digital platforms.
At the same time, the EU warned that disinformation, particularly foreign information manipulation, poses a growing threat to democratic societies, international cooperation, and human rights, with risks amplified by the rise of AI. It also highlighted operational challenges facing the UN, including funding constraints that limit its ability to reach global audiences effectively. It stressed that reforms, including the use of AI, should complement rather than replace core communication functions. The EU further underscored the importance of multilingual communication and the protection of journalists, especially in conflict zones.
Source: European External Action Service (EEAS). EU Statement – 48th session of the UN Committee on Information: Informal briefing with USG of the Department of Global Communications. [online] Published 2026. Available at: https://www.eeas.europa.eu/delegations/un-new-york/eu-statement-%E2%80%93-48th-session-un-committee-information-informal-briefing-usg-department-global_en
Ofcom Announces Additional Online Safety Measures
Ofcom announced that it will take new regulatory steps to combat the spread of illegal intimate images online, including non-consensual content and explicit deepfakes. One of their proposals is the use of "hash matching" technology, which allows platforms to detect and block harmful images before they are widely shared. The decision has been fast-tracked due to the urgent need to better protect users from online abuse.
The measures also address the growing role of disinformation in this space. Deepfake technology can be used to create and spread manipulated intimate images and cause serious harm to victims. By stopping such content at the source, regulators aim to limit both the abuse itself and the spread of deceptive digital material that can damage reputations and distort reality. According to Ofcom, the proposed rules are expected to be finalized soon and could take effect within months, with additional protections planned later.
Source: Ofcom. Ofcom fast-tracks decision on measures to block illegal intimate images. [online] Published 18 February 2026. Available at: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/ofcom-fast-tracks-decision-on-measures-to-block-illegal-intimate-images
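The core idea behind hash matching can be sketched in a few lines: a platform computes a fingerprint (hash) of an uploaded file and checks it against a shared blocklist of fingerprints of known-harmful images. The sketch below is illustrative only and uses an exact cryptographic hash (SHA-256); production systems such as PhotoDNA or PDQ use perceptual hashes that tolerate re-encoding and minor edits, which an exact hash cannot. The blocklist contents and function names here are hypothetical.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw file bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of digests of known-harmful images, as might be
# compiled and shared between platforms and regulators. In practice these
# entries would come from a vetted external database, not inline bytes.
BLOCKLIST = {
    sha256_hex(b"known-harmful-image-bytes"),
}

def should_block(upload: bytes) -> bool:
    """Block an upload if its digest matches a known-harmful hash,
    i.e. stop the image before it is shared."""
    return sha256_hex(upload) in BLOCKLIST

print(should_block(b"known-harmful-image-bytes"))  # True
print(should_block(b"harmless-photo-bytes"))       # False
```

Note the limitation this sketch makes visible: changing even one byte of the file changes the SHA-256 digest entirely, so exact hashing only catches verbatim recirculation; robust matching against edited or re-compressed copies is why real deployments rely on perceptual hashing instead.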
[CRC Glossary]
The modern Information Environment is projected to continue growing in sophistication and complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.