
Cyber-Based Influence Campaigns: 20th-26th April 2026 Report

[Introduction] Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect (henceforth "Cyfluence", as opposed to cyber-attacks that aim to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered a branch of Information Operations (IO) or Information Warfare (IW). Typically, and as has been customary over the last decade, the information is spread across various internet platforms, which form the different elements of the hostile influence campaign; connectivity and repetitiveness of content between several elements are therefore the core characteristics of influence campaigns. Hostile influence campaigns, much like cyber-attacks, have become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much as in the cyber-security arena, PR professionals and government agencies respond to negative publicity and disinformation shared over the news and social media. We use the term cyber-based hostile influence campaigns because we also include in this definition cyber-attacks aimed at influencing (such as hack-and-leak operations during election time), while we exclude from it more traditional kinds of influence, such as diplomatic, economic, or military pressure. During the 20th to the 26th of April 2026, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report summarizes what we regard as the main events. Some of the campaigns mentioned involve social media and news outlets only, while others leverage cyber-attack capabilities.
[Contents]
[Introduction]
[Report Highlights]
[Report Summary]
[State Actors]
- X: Investigation Into Disinformation and Misconduct On X
- Russia: The Narrative of Russian Colonialism; Russian Propaganda as a Weapon and Its Impact on Belief and Behavior; The EU Lists Two Entities for Information Manipulation Activities
- China: Chinese Influence Operation Targets Tibetan Elections
- Iran: Pro-Iran Sources Reframed U.S. Strike Video as Two Different Iranian Victories; Antisemitic Narratives Surge Following the Iran Conflict; Iran’s Generated Misinformation Strategy During War; Iran Uses Viral AI Propaganda to Distract Western Audiences
[AI Related Articles]
- Deepfakes Targeting U.S. Officials
- AI-Generated Influencer Spreads Political Disinformation
- Jailbroken AI Enables Disinformation and Attack Planning
[General Reports]
- Orbán’s Hungary Defeat Showed Disinformation is Not a Political Magic Trick
- The Role of Meme Culture in Shaping War Perception and Understanding
- Trump Pope Rift Sparked Viral False Claims
[Appendix - Frameworks to Counter Disinformation]
- EU Campaign to Counter Climate Disinformation
- EU Mission to Strengthen Armenia’s Resilience
- EU Calls for Initiative to Tackle Evolving Disinformation Threats
- EU Tackles Disinformation Through New Media Program
- Biometric Verification to Counter AI-Driven Disinformation and Fraud
[CRC Glossary]

[Report Highlights]
- An article by The Psychological Defense Agency argued that Russia has long functioned as a colonial empire, but that misleading narratives have obscured its history.
- The European Union announced it has imposed sanctions on two entities, Euromore and Pravfond, for their role in spreading disinformation as part of Russia’s hybrid strategy.
- As published by DFR Lab, a China-linked influence network known as Spamouflage has been spreading disinformation ahead of the April 26 elections for the Central Tibetan Administration.
- As published by ISD, following the 28th of February strikes by the United States and Israel on Iran, online antisemitic content rose sharply, increasing by 68% within a week.
- As revealed in a CyberNews article, a medical student used AI tools, including Gemini, to create a fake online persona named Emily Hart, demonstrating how deepfakes can be monetized.
- Based on an analysis by Tech Policy Press, Hungary’s April 2026 parliamentary election offered a clear reminder that disinformation is not a decisive political force on its own.
- As published by NewsGuard's Reality Check, tensions between Donald Trump and Pope Leo XIV have fueled a wave of viral misinformation on social media, with multiple false claims gaining significant traction online.
- The European Union announced that its Delegation to Türkiye has launched a new initiative to combat disinformation through a television programme called Ambassadors of Truth, produced in collaboration with CNN Türk.

[Report Summary]
- According to a report by The Hill, French prosecutors have summoned Elon Musk and former X CEO Linda Yaccarino for questioning over alleged misconduct linked to X.
- An article by The Psychological Defense Agency argued that Russia has long functioned as a colonial empire, but that misleading narratives have obscured its history.
- A study by the Ukrainian NGO LingvaLexa, conducted with support from the Office of the Prosecutor General of Ukraine, found that Kremlin propaganda plays a direct and measurable role in shaping the beliefs and behavior of Russian soldiers.
- The European Union announced it has imposed sanctions on two entities, Euromore and Pravfond, for their role in spreading disinformation as part of Russia’s hybrid strategy.
- As published by DFR Lab, a China-linked influence network known as Spamouflage has been spreading disinformation ahead of the April 26 elections for the Central Tibetan Administration.
- As reported by NewsGuard's Reality Check, pro-Iran social media accounts have circulated a video of a burning ship in the Strait of Hormuz, falsely presenting it as evidence of Iranian military success.
- An article from Le Monde states that Iranian state-linked actors deploy AI-generated, highly shareable propaganda content to manipulate narratives, amplify anti-Western messaging, and divert attention from domestic issues.
- As published by ISD, following the 28th of February strikes by the United States and Israel on Iran, online antisemitic content rose sharply, increasing by 68% within a week.
- According to an article by ISD, since the start of the Iran war, official Iranian accounts on X have shifted from formal messaging to provocative, meme-driven content designed to maximize engagement.
- A recent analysis by CyberNews found 156 deepfake incidents involving U.S. government officials over two years.
- As revealed in a CyberNews article, a medical student used AI tools, including Gemini, to create a fake online persona named Emily Hart, demonstrating how deepfakes can be monetized.
- Based on an analysis by Tech Policy Press, Hungary’s April 2026 parliamentary election offered a clear reminder that disinformation is not a decisive political force on its own.
- According to an article by Wired, modern conflicts are increasingly mediated through memes and meme-like content that simplify, decontextualize, and emotionally amplify war narratives, often creating a widespread illusion of understanding without deeper knowledge.
- As published by NewsGuard's Reality Check, tensions between Donald Trump and Pope Leo XIV have fueled a wave of viral misinformation on social media, with multiple false claims gaining significant traction online.
- According to a short publication by The European Commission, it has launched the #ClimateFactsMatter campaign to address the growing spread of climate disinformation across the EU.
- An article published by Politico highlights that jailbroken AI models allow state-linked and malicious actors to bypass safeguards, enabling the spread of disinformation and the generation of detailed plans for cyber and physical attacks.
- The European External Action Service (EEAS) announced that the European Union has launched a new civilian mission in Armenia, known as EUPM Armenia, aimed at strengthening the country’s resilience against complex threats.
- The European Commission announced it has launched a new funding call under its European Democracy Shield strategy to strengthen research on information integrity.
- The European Union announced that its Delegation to Türkiye has launched a new initiative to combat disinformation through a television programme called Ambassadors of Truth, produced in collaboration with CNN Türk.
- According to a report by CyberNews, as AI-generated content becomes harder to distinguish from reality, concerns over disinformation and fraud are growing.

[Social Media]

X: Investigation Into Disinformation and Misconduct On X

According to a report by The Hill, French prosecutors have summoned Elon Musk and former X CEO Linda Yaccarino for questioning over alleged misconduct linked to X. Central to the investigation are claims that X facilitated the spread of illegal and harmful content, including child sexual abuse material and AI-generated deepfakes. Authorities emphasized that these interviews aim to assess compliance with French law and determine whether platform leadership adequately addressed these risks. A major focus of the case is disinformation generated by Grok. The chatbot reportedly produced false and harmful content, including Holocaust denial narratives and explicit nonconsensual deepfakes. Although Grok later corrected some outputs, the incident raised serious concerns about how AI systems can amplify disinformation at scale.
French officials are also investigating whether X’s algorithms contributed to biased or distorted information flows, further undermining public trust. Prosecutors additionally suspect that the controversy surrounding Grok’s disinformation may have been deliberately amplified to influence market value ahead of a planned corporate listing involving Musk’s companies. Meanwhile, U.S. authorities have so far declined to assist the French probe.

Source: Associated Press. French prosecutors summon Elon Musk over allegations of child abuse images and deepfakes on X. [online] Published 20 April 2026. Available at: https://thehill.com/homenews/ap/ap-technology/france-elon-musk-x-social-media/

[State Actors]

Russia

The Narrative of Russian Colonialism

An article by The Psychological Defense Agency argued that Russia has long functioned as a colonial empire, but that misleading narratives have obscured its history. Unlike other European empires, Russian expansion has often been framed as defensive or civilizing rather than imperial. Such narratives act as a form of disinformation, masking patterns of domination and shaping global misunderstanding of both past and present conflicts, including the war in Ukraine. Propaganda and distorted historical claims have been used to justify Russian actions. These include portraying conquest as beneficial, denying colonial practices, and spreading narratives that Russia is uniquely “anti-colonial.” The article points to modern examples like state-aligned media and online content that present occupation as positive while ignoring violence, repression, and cultural erasure.
It concluded that such disinformation has severe consequences: it delays recognition of imperial behavior, weakens international responses, and enables continued aggression.

Source: Psychological Defence Agency. The Last European Colonial Empire. [online] Published 2026. Available at: https://mpf.se/download/18.732eef4e19d7078d81ab8/1775717958942/The-last-european-colonial-empire.pdf

Russian Propaganda as a Weapon and Its Impact on Belief and Behavior

A study by the Ukrainian NGO LingvaLexa, conducted with support from the Office of the Prosecutor General of Ukraine, found that Kremlin propaganda plays a direct and measurable role in shaping the beliefs and behavior of Russian soldiers. Based on surveys of over 1,000 prisoners of war, the research showed that a large majority accepted at least some state narratives, with many viewing the invasion of Ukraine as justified. Those who believed propaganda more strongly were significantly more likely to support the war, dehumanize Ukrainians, resist surrender, and express willingness to fight again, indicating that propaganda is not just background noise but a key driver of combat motivation. The study also highlighted the central role of anti-Western narratives, which frame the war as a defensive struggle against NATO and a broader "decadent" West. These narratives were widely believed among surveyed soldiers and proved just as influential as other propaganda themes in shaping attitudes and behavior. The findings suggested that propaganda should be understood as a core instrument of modern warfare, one that facilitates mobilization, sustains aggression, and influences battlefield decisions. The report called for stronger international recognition of propaganda as a tool of aggression, including potential legal accountability not only for political leaders but also for those involved in creating and disseminating such narratives.

Source: The Ukrainian NGO LingvaLexa.
Words that Kill: How Russian Propaganda Shapes Mobilization and Combat Motivation. [online] Google Drive. Available at: https://drive.google.com/file/d/1NEQFv4YCQ0boNq6EbIeIDeqyUTwn3U6n/view

The EU Lists Two Entities for Information Manipulation Activities

The European Union announced it has imposed sanctions on two entities, Euromore and Pravfond, for their role in spreading disinformation as part of Russia’s hybrid strategy. Euromore functions as an unofficial media relay, amplifying and legitimizing pro-Kremlin narratives aimed at European audiences, including content that questions the legitimacy of EU institutions and justifies Russia’s war against Ukraine. Pravfond, funded by the Russian state, contributes to disinformation through legal and analytical materials that reinforce key propaganda claims. These include false narratives such as the "Nazification of Ukraine”, "Russophobia”, and alleged persecution of Russian-speaking populations in neighboring countries. Together, these efforts are designed to manipulate public perception and weaken trust in democratic institutions. At the same time, as reported by EUvsDisinfo, pro-Kremlin disinformation continues to recycle familiar narratives. These include portraying Ukraine as a "terrorist state" allegedly supported by NATO, reviving long-debunked conspiracy theories about Western "biolabs", and spreading false claims about secret NATO infrastructure. Disinformation surrounding the Bucha massacre has also evolved, from outright denial to more complex conspiracy theories aimed at deflecting responsibility for documented war crimes.

Source: European External Action Service (EEAS). Russian hybrid threats: EU lists two entities over information manipulation activities. [online] Published 22 April 2026. Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-eu-lists-two-entities-over-information-manipulation-activities_en
EUvsDisinfo.
Disinformation Review: EU sanctions and the Kremlin’s recycled narratives. [online] Available at: https://euvsdisinfo.eu/disinformation-review-eu-sanctions-and-the-kremlins-recycled-narratives/

China

Chinese Influence Operation Targets Tibetan Elections

As published by DFR Lab, a China-linked influence network known as Spamouflage has been spreading disinformation ahead of the April 26 elections for the Central Tibetan Administration. This activity is part of a broader pattern of Chinese information manipulation targeting multiple countries and issues, including Taiwan, the United States, and Japan. Using dozens of fake Facebook and Instagram profiles, the operation promotes misleading narratives aimed at undermining trust in the Tibetan democratic process. These include personal attacks against leader Penpa Tsering, claims that the elections are manipulated, and portrayals of the government as corrupt or dominated by religious elites. The campaign relies on coordinated inauthentic behavior, including mass-sharing posts within its own network to simulate credibility. It also amplifies real controversies by inserting false or exaggerated claims, attempting to deepen divisions within the Tibetan community. AI-generated images and recycled narratives are used to make the disinformation appear more convincing, although most posts receive little genuine engagement. Despite becoming more technologically sophisticated, the operation has largely failed to gain significant organic traction.

Source: Digital Forensic Research Lab (DFRLab). China-linked Spamouflage targets Tibetan parliament-in-exile elections. [online] Published 24 April 2026. Available at: https://dfrlab.org/2026/04/24/china-linked-spamouflage-targets-tibetan-parliament-in-exile-elections/

Iran

Pro-Iran Sources Reframed U.S.
Strike Video as Two Different Iranian Victories

As reported by NewsGuard's Reality Check, pro-Iran social media accounts have circulated a video of a burning ship in the Strait of Hormuz, falsely presenting it as evidence of Iranian military success. Initially, the footage was shared as proof that Iran had sunk a U.S. vessel near Bandar Abbas. Days later, the same video was reused to claim that Iranian forces had severely damaged Indian-flagged ships. In reality, the video showed a U.S. strike on an Iranian warship, identified through visual analysis as the IRIS Haj Qasem, with no credible evidence supporting either of the viral claims. The misinformation emerged amid escalating tensions following the collapse of U.S.-Iran peace talks and a subsequent maritime blockade ordered by Donald Trump. Since the war began in late February 2026, reports indicated that over 155 Iranian naval vessels have been destroyed by U.S. and Israeli operations. While Iran did intercept two Indian-flagged ships, verified reports showed only minor damage, with no injuries or major losses.

Source: NewsGuard. One video, two conflicting false narratives. [online] Available at: https://www.newsguardrealitycheck.com/p/one-video-two-conflicting-false-narratives

Antisemitic Narratives Surge Following the Iran Conflict

As published by ISD, following the 28th of February strikes by the United States and Israel on Iran, online antisemitic content rose sharply, increasing by 68% within a week. Much of this surge was driven by disinformation and conspiracy theories, particularly claims that the war was a “false flag” orchestrated by Jewish elites or that Western governments are controlled by a hidden Jewish power structure. These narratives, widely shared across platforms, reframed the conflict through misleading and harmful lenses that amplified distrust and hostility.
Disinformation also spread through distorted criticism of Israel, where legitimate political debate was often mixed with antisemitic tropes. Online content frequently blamed Jewish communities worldwide for the actions of the Israeli state or used Holocaust-related distortions to push false narratives. At the same time, direct hate speech and slurs increased significantly, creating a more aggressive and hostile digital environment shaped by viral misinformation and conspiracy-driven rhetoric. This wave of disinformation had real-world consequences, coinciding with a rise in antisemitic attacks across several countries.

Source: Institute for Strategic Dialogue (ISD). The impact of the war with Iran on antisemitic discourse. [online] Published 22 April 2026. Available at: https://www.isdglobal.org/digital-dispatch/the-impact-of-the-war-with-iran-on-antisemitic-discourse/

Iran’s Generated Misinformation Strategy During War

According to an article by ISD, since the start of the Iran war, official Iranian accounts on X have shifted from formal messaging to provocative, meme-driven content designed to maximize engagement. This includes AI-generated and misleading visuals targeting figures like Donald Trump and references to conspiracy-linked individuals such as Jeffrey Epstein. While often framed as humor, this content blends satire with disinformation, distorting political narratives and trivializing serious geopolitical issues. The strategy has proven highly effective in spreading these narratives, generating hundreds of millions of views and dramatically increasing likes, shares, and comments. By using viral, platform-native formats, Iranian state actors have expanded their reach and made disinformation more accessible, especially to online audiences less engaged with traditional political messaging.
This approach has also helped reshape perceptions of Iran, portraying it as a relatable or even sympathetic “underdog” rather than focusing on its controversial policies or human rights record.

Source: Institute for Strategic Dialogue (ISD). Iran’s diplomats launch a meme war. [online] Published 23 April 2026. Available at: https://www.isdglobal.org/digital-dispatch/irans-diplomats-launch-a-meme-war/

Iran Uses Viral AI Propaganda to Distract Western Audiences

An article from Le Monde describes a coordinated Iranian propaganda campaign that leverages advanced generative AI and pop-culture aesthetics to conduct influence operations targeting Western audiences. Regime-linked actors, particularly a studio identified as Explosive Media, produce high-quality animated videos, often using Lego-style visuals, that mock U.S. and Israeli leaders and promote anti-Western narratives. These materials are disseminated across major social media platforms (X, Facebook, Instagram, TikTok), achieving large-scale reach through viral distribution, with hundreds of millions of views. The campaign combines technical sophistication with humor, satire, and emotionally engaging content to increase acceptability and shareability, particularly among politically disengaged or neutral audiences. Tactically, the operation relies on narrative manipulation, co-opting existing anti-establishment sentiment and embedding ideological messaging within entertaining formats to obscure propagandistic intent. It employs distortion and omission, including downplaying or denying state violence and reframing geopolitical dynamics to divide adversaries, such as portraying the United States as subordinate to Israel. The campaign also exploits amplification dynamics, as media coverage and public attention in the West further extend its reach, inadvertently reinforcing its impact.
Analysts warn that this strategy diverts attention from domestic repression in Iran and functions as a broader disinformation effort aimed at weakening democratic discourse by shaping perceptions and redirecting focus away from human rights issues.

Source: Le Monde. How Tehran’s propaganda lures the West into distraction. [online] Published 25 April 2026. Available at: https://www.lemonde.fr/en/les-decodeurs/article/2026/04/25/how-tehran-s-propaganda-lures-the-west-into-distraction_6752815_8.html

[AI Related Articles]

Deepfakes Targeting U.S. Officials

A recent analysis by CyberNews found 156 deepfake incidents involving U.S. government officials over two years. Most cases focus on a small group of high-profile figures, especially Donald Trump, who alone accounts for 58% of all incidents. Notably, some of these deepfakes are self-generated or shared by individuals, blurring the line between deliberate disinformation and political messaging. Other frequent targets include Marco Rubio, JD Vance, and Alexandria Ocasio-Cortez. While the raw numbers suggest Republicans are targeted more often, this is largely due to Trump’s dominance in the dataset; excluding him, deepfake incidents are relatively balanced between the political parties. Generally, deepfake-driven disinformation tends to focus on widely recognizable leaders rather than on the broader political system, as familiar figures are more likely to attract attention and influence public opinion.

Source: Cybernews. Trump, Rubio and Vance are the most deepfaked U.S. government officials in office today. [online] Published 23 April 2026. Available at: https://cybernews.com/ai-news/most-deepfaked-us-government-officials/

AI-Generated Influencer Spreads Political Disinformation

As revealed in a CyberNews article, a medical student used AI tools, including Gemini, to create a fake online persona named Emily Hart, demonstrating how deepfakes can be monetized.
By targeting a specific political audience of older, conservative, pro-Trump users, the creator crafted posts promoting divisive and misleading narratives on issues like immigration, religion, and politics, often aligned with supporters of Donald Trump. The strategy relied on AI-generated images and tailored messaging designed to appear authentic and emotionally engaging. This approach quickly gained traction, with some posts reaching millions of views and attracting thousands of followers, many of whom paid for exclusive content. Although the accounts were eventually suspended, the operation shows how easily individuals can use AI to create convincing fake identities that spread misleading content on a large scale.

Source: Cybernews. Indian man is behind MAGA’s AI-generated sweetheart Emily Hart. [online] Published 25 April 2026. Available at: https://cybernews.com/ai-news/emily-hart-maga/

Jailbroken AI Enables Disinformation and Attack Planning

An article published by Politico highlights the growing exploitation of artificial intelligence by malicious actors, demonstrating how AI systems, when stripped of built-in safeguards, can be weaponized to support harmful and potentially violent activities. Researchers from the National Counterterrorism Innovation, Technology and Education Center (NCITE) showed U.S. lawmakers that “jailbroken” or “abliterated” AI models, with disabled safety mechanisms, can generate detailed instructions for terrorism, cyberattacks, and criminal acts. These models, unlike safeguarded “censored” systems, provide step-by-step guidance for executing attacks, illustrating how easily adversaries can manipulate AI tools to bypass restrictions. Tactics used include prompt engineering techniques, such as disguising harmful queries in complex or academic language, to evade detection systems and extract prohibited information.
The article also identifies state-linked actors and cyber threat groups as key participants in leveraging AI for disinformation and offensive operations. Russia-linked groups have reportedly used AI models to disseminate disinformation online, while Beijing-backed hackers attempted to automate cyberattacks using advanced language models. The accessibility of off-the-shelf AI tools further lowers the barrier to such activities, increasing the risk of widespread misuse. Overall, the article underscores a rapidly evolving threat landscape in which both state and non-state actors exploit vulnerabilities in AI systems to conduct influence operations, spread disinformation, and enable harmful real-world actions, raising significant concerns about the adequacy of current safeguards and regulatory responses.

Source: Politico. AI chatbots can be jailbroken, alarming lawmakers and exposing safety gaps. [online] Published 22 April 2026. Available at: https://www.politico.com/news/2026/04/22/ai-chatbots-jailbreak-safety-00887869

[General Reports]

Orbán’s Hungary Defeat Showed Disinformation is Not a Political Magic Trick

Based on an analysis by Tech Policy Press, Hungary’s April 2026 parliamentary election offered a clear reminder that disinformation is not a decisive political force on its own. In the election, Viktor Orbán and his Fidesz party lost to newcomer Péter Magyar, despite a campaign marked by extensive pro-government disinformation. While Fidesz spread coordinated false claims, such as manipulated videos alleging plans to reintroduce conscription, this did not prevent a decisive opposition victory. The findings reinforced that the purpose of fact-checking is not to sway election outcomes, but to provide voters with reliable information and tools to assess claims independently. The campaign also highlighted the limits of other commonly cited influences.
Russian-linked disinformation efforts appeared weak and largely ineffective compared to domestic propaganda networks. At the same time, AI-generated political content was widely used by both sides, often to provoke emotional responses rather than to convincingly deceive. Meanwhile, the EU-driven ban on political advertising by platforms such as Meta and Google significantly reduced the volume of online propaganda and may have benefited the opposition, which achieved stronger organic engagement.

Source: Fülöp, Z. and Teczár, S. Orbán’s Hungary defeat shows disinformation is not a political magic trick. [online] Tech Policy Press. Published 20 April 2026. Available at: https://www.techpolicy.press/orbns-hungary-defeat-shows-disinformation-is-not-a-political-magic-trick/

The Role of Meme Culture in Shaping War Perception and Understanding

According to an article by Wired, recent conflicts involving the United States, Iran, Israel, and Lebanon have spread widely on social media, not only through news but through memes. These ranged from jokes about conscription and viral songs to dark humor about missile strikes and wartime life. While humor has long been a way to cope with fear and uncertainty, social media has transformed it into a fast-moving, global phenomenon. Memes are designed to be simple, relatable, and highly shareable, often stripping away context in favor of engagement. Governments are increasingly adopting meme-like formats to shape narratives, blending real footage with cinematic edits, gaming references, or AI-generated visuals. This content spreads easily because it mirrors the language of online culture, making propaganda more accessible and emotionally engaging. However, this environment creates an "illusion of understanding", where constant exposure to simplified content makes people feel informed without deep knowledge.

Source: WIRED. War memes are turning conflict into content. [online] Published 20 April 2026.
Available at: https://www.wired.com/story/war-memes-turn-conflict-into-content/

Trump Pope Rift Sparked Viral False Claims

As published by NewsGuard's Reality Check, tensions between Donald Trump and Pope Leo XIV have fueled the spread of three viral false claims related to the Catholic Church, garnering 1.5 million views on the social media platform X alone. The claims began spreading on the 12th of April 2026, following public criticism exchanged between the two figures. Both pro- and anti-Trump users circulated claims that the pope donated to Kamala Harris’s campaign, a fabricated quote attributed to Cardinal Timothy Dolan supporting Trump, and a fake post suggesting Trump threatened the Vatican with secret "files". None of these claims is supported by evidence.

Source: NewsGuard. Trump-Pope rift sparks viral false claims. [online] Published 24 April 2026. Available at: https://www.newsguardrealitycheck.com/p/trump-pope-rift-sparks-viral-false

[Appendix - Frameworks to Counter Disinformation]

EU Campaign to Counter Climate Disinformation

According to a short publication by The European Commission, it has launched the #ClimateFactsMatter campaign to address the growing spread of climate disinformation across the EU. Led by its climate department, the initiative provides multilingual resources (videos, infographics, and guides) to help citizens recognize misleading content and better understand climate issues and policies. The campaign focuses on exposing how disinformation distorts public understanding of climate change, often by spreading false or misleading claims that undermine trust in scientific facts and delay action. It offers practical tools for identifying manipulation techniques and encourages critical thinking about the sources and intent behind climate-related information.

Source: European Commission, Directorate-General for Climate Action. #ClimateFactsMatter: Countering climate disinformation in the EU.
[online] Published April 2026. Available at: https://climate.ec.europa.eu/eu-action/climate-disinformation/climatefactsmatter-countering-climate-disinformation-eu_en Top Of Page EU Mission to Strengthen Armenia’s Resilience The European External Action Service (EEAS) announced that the European Union has launched a new civilian mission in Armenia, known as EUPM Armenia, aimed at strengthening the country’s resilience against complex threats. Operating under the Common Security and Defence Policy, the mission will provide strategic and operational support to Armenian institutions in addressing challenges such as foreign information manipulation and interference (FIMI), cyberattacks, and illicit financial activities. It will focus on capacity building across government bodies and promote a coordinated, whole-of-government approach to crisis management and national security. The mission, requested by Armenian authorities, will initially run for two years and be headquartered in Armenia. It complements the existing EU Mission in Armenia, which focuses on observation and confidence-building in conflict-affected areas. Source: European External Action Service (EEAS). Armenia: EU establishes a new civilian mission to contribute to strengthening the country’s resilience. [online] Published 21 April 2026. Available at: https://www.eeas.europa.eu/delegations/armenia/armenia-eu-establishes-new-civilian-mission-contribute-strengthening-country%E2%80%99s-resilience_en Top Of Page EU Calls for Initiative to Tackle Evolving Disinformation Threats The European Commission announced it has launched a new funding call under its European Democracy Shield strategy to strengthen research on information integrity.
The initiative responds to the rapid rise of disinformation and manipulation tactics online, including the use of AI-generated content, fake websites, influencer misuse, and the exploitation of algorithms to amplify misleading or divisive narratives. The call emphasized that by distorting information environments, such manipulation can affect elections, policy debates, and individual choices, making it essential to better understand how it operates and spreads. To counter this, the EU plans to invest in research infrastructure and collaboration across sectors such as academia, civil society, and technology to improve the ability to detect, analyze, and respond to disinformation. Source: European Commission. New open call for proposals under the Digital Europe Programme. [online] Published 21 April 2026. Available at: https://digital-strategy.ec.europa.eu/en/news/new-open-call-proposals-under-digital-europe-programme Top Of Page EU Tackles Disinformation Through New Media Program The European Union announced that its Delegation to Türkiye has launched a new initiative to combat disinformation through a television programme called Ambassadors of Truth, produced in collaboration with CNN Türk. The show aims to educate audiences about how disinformation operates, why it spreads, especially during crises, and how individuals can better recognize and resist misleading content. Hosted by Nezih Orhon, the programme brings together experts from academia, media, and fact-checking organizations to examine key disinformation challenges. Topics include the risks of AI-generated content, health misinformation, foreign influence campaigns, and the vulnerabilities of children and families in the digital space. It also highlights how false or manipulated information can undermine trust and decision-making. Beyond raising awareness, the initiative focuses on practical solutions, offering viewers tools to verify information and navigate online media more critically.
Source: European External Action Service (EEAS). Delegation of the European Union to Türkiye takes a new step in the fight against disinformation: “Ambassadors of Truth” premieres on CNN Türk. [online] Published 24 April 2026. Available at: https://www.eeas.europa.eu/delegations/t%C3%BCrkiye/ambassadors-truth_en Top Of Page Biometric Verification to Counter AI-Driven Disinformation and Fraud According to a report by CyberNews, as AI-generated content becomes harder to distinguish from reality, concerns over disinformation and fraud are growing. A company co-founded by Sam Altman, Tools for Humanity, is partnering with Match Group and Zoom to introduce biometric identity verification. Their solution, the “Orb”, uses iris scanning to confirm whether users are real humans, aiming to reduce the spread of fake identities and AI-generated personas online. This effort responds to the increasing use of synthetic media and deepfakes in disinformation campaigns and scams. Instead of detecting fake content directly, the system verifies real users and assigns a “proof-of-human” badge to their profiles. While the technology promises greater trust and security, it also raises questions about privacy and accessibility. As AI continues to enable large-scale deception, tools like biometric verification highlight the growing need for new defenses against disinformation, even as they introduce new challenges around data protection and user trust. Source: Cybernews. Sam Altman’s iris-scanning company joins forces with Tinder and Zoom to exterminate deepfake scams. [online] Published 23 April 2026. Available at: https://cybernews.com/ai-news/tinder-zoom-human-verification/ Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue to escalate in complexity.
However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cybersecurity Meets Cyfluence Defence: Applying Cognitive Security Posture Management to Managed Contested Spaces

    As cyber and influence operations grow increasingly intertwined, a reductive framing of "cyber-enabled influence" is no longer sufficient. This primer from the Cyfluence Research Center introduces Cyfluence, a conceptual domain that models cyber and influence operations as components of integrated, non-linear kill chains. It examines how these threats manifest in Managed Contested Spaces (MCSs) such as universities, smart cities, and corporate environments, and proposes a new defence framework, Cognitive and Cyber Security Posture Management (C2SPM), that adapts proven cybersecurity principles to the cognitive threat landscape. With the integration of hyper-automation, C2SPM offers near-real-time detection, assessment, and remediation across cognitive, digital, and physical dimensions. [Download PDF Here]

  • Cyber-based hostile influence campaigns 6th - 12th April 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect (henceforth Cyfluence, as opposed to cyber-attacks that aim to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered an epistemological branch of Information Operations (IO) or Information Warfare (IW). Typically, and as has been customary during the last decade, the information is spread across various internet platforms, which form the different elements of the hostile influence campaign; as such, connectivity and repetitiveness of content between several elements are the core characteristics of influence campaigns. Hostile influence campaigns, much like cyber-attacks, have also become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much as in the cyber security arena, PR professionals and government agencies are responding to negative publicity and disinformation shared over the news and social media. We use the term cyber-based hostile influence campaigns because we also include in this definition cyber-attacks aimed at influencing (such as hack-and-leak operations during election time), while we exclude from it other, more traditional types of influence, such as diplomatic, economic, or military. During the 6th to the 12th of April 2026, we observed, collected and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. Some of the mentioned campaigns involve social media and news outlets solely, while others leverage cyber-attack capabilities.
[Contents]
[Introduction]
[Report Highlights]
[Report Summary]
[State Actors]
Russia
  • Global Expansion of Kremlin Disinformation Network
  • AI Poisoning in Large Language Models
  • Kremlin Disinformation Networks Targeting French-Speaking Audiences
  • Propaganda as Driver of Russian Military Recruitment
  • Disinformation in Reporting on Alleged Attack on USS Tripoli
The War in Ukraine
  • Disinformation Campaign Targeting Arab Media
  • Recent Kremlin Disinformation Campaigns Against Ukraine and the EU
  • AI Slop Ahead of Hungary’s Crucial Election
China
  • Fake Facebook Accounts Bolster Chinese Embassy Attacks vs PCIJ
  • China’s Biased Coverage of the Iran War
Iran
  • Fake Video of Captured U.S. Pilot in Iran
  • AI-Driven Disinformation About World Leaders’ Health
  • Iran Threats Produce Fake Claims of Missile Strikes
  • Disinformation Trends in the Iran War
[AI Related Articles]
  • Disinformation Risks in AI-Generated Search Results
  • Growing Propaganda Threats Through Language Models
  • AI-Amplified Disinformation in Russia and China
  • AI Chatbots Pose a Growing Risk of Disinformation
[General Reports]
  • Disinformation Ecosystem in Spain
  • NATO’s Assessment and Response to Cognitive Warfare Threats
  • Fake Claim that CNN Reported a Fake Iranian Statement
  • A GOP-Aligned Group is Using Klan Imagery to Target Black Voters
  • Elon Musk’s xAI Sues Colorado Over AI Law
[Appendix - Frameworks to Counter Disinformation]
  • The Evolving Impact of Digital Technologies on European Democracy
  • Check First and French Journalists in a Virtual Foreign Interference Campaign
  • US Budget Frames Global Institutions as Disinformation Actors
[CRC Glossary]
[Report Highlights] An essay by ISW described a long-term disinformation strategy in which the Kremlin is building a global media network to shape international narratives.
As reported by NewsGuard's Reality Check, ahead of Hungary’s pivotal parliamentary election, coordinated misinformation campaigns are using AI-generated "slop" content and low-grade Russian disinformation to reinforce Prime Minister Viktor Orbán’s campaign messaging and undermine the opposition led by Péter Magyar. According to an article by PressOne.PH, a coordinated network of fake Facebook accounts and pro-Duterte influencers amplified attacks by the Chinese Embassy in Manila against the Philippine Center for Investigative Journalism (PCIJ), following PCIJ’s reporting on pro-Beijing influence operations in the country. As published by The Jamestown Foundation, disinformation played a central role in shaping narratives about the Iran war, particularly through Chinese state media. An article by The New York Times highlighted how AI-generated search summaries, such as Google’s "AI Overviews", can contribute to the spread of misleading or inaccurate information. A report by The European Commission argued that Europe’s democratic resilience is being weakened by an information environment shaped by the "attention economy", in which digital platforms prioritize engagement and advertising revenue over accuracy and public interest. The U.S. budget document characterizes certain international organizations and programs as sources of biased narratives and coordinated information influence that it frames as disinformation undermining U.S. interests. [Report Summary] An essay by ISW described a long-term disinformation strategy in which the Kremlin is building a global media network to shape international narratives. An article by DFR Lab exposed a disinformation strategy in which pro-Kremlin networks, such as the Pravda system, flood the internet with propaganda to influence how AI models learn and respond.
As published by DFR Lab, a Morocco-based YouTube channel, ODC TV, acts as a platform for disinformation by broadcasting pro-Kremlin narratives to French-speaking audiences, including within the EU. According to an article by the Atlantic Council, propaganda plays a key role in sustaining Russian military recruitment in Ukraine, beyond financial incentives or coercion. StopFake’s report revealed a clear case of disinformation in a report claiming that Iran attacked the USS Tripoli. A Telegram report by the SPRAVDI Center for Strategic Communications described disinformation efforts in which actors linked to the Kremlin attempt to influence Arab public opinion about Ukraine by purchasing placements on popular Telegram channels. According to an article by EU VS Disinfo, recent pro-Kremlin disinformation campaigns focused on portraying Ukraine as a danger to the Baltic states, interfering in European elections, and pressuring the EU to reverse its energy sanctions on Russia. As reported by NewsGuard's Reality Check, ahead of Hungary’s pivotal parliamentary election, coordinated misinformation campaigns are using AI-generated "slop" content and low-grade Russian disinformation to reinforce Prime Minister Viktor Orbán’s campaign messaging and undermine the opposition led by Péter Magyar. According to an article by PressOne.PH, a coordinated network of fake Facebook accounts and pro-Duterte influencers amplified attacks by the Chinese Embassy in Manila against the Philippine Center for Investigative Journalism (PCIJ), following PCIJ’s reporting on pro-Beijing influence operations in the country. As published by The Jamestown Foundation, disinformation played a central role in shaping narratives about the Iran war, particularly through Chinese state media. NewsGuard’s Reality Check reported a disinformation campaign in which pro-Iran social media users circulated a video falsely claiming it showed Iranian forces capturing a U.S.
pilot after an F-15E jet was shot down on the 3rd of April 2026. As revealed by NewsGuard’s Reality Check, as part of a coordinated disinformation campaign, pro-Iran social media users used AI-generated images and videos to manipulate perceptions of key political leaders during wartime. A report by NewsGuard's Reality Check highlighted a recurring disinformation pattern in which Iran publicly threatens specific targets and pro-Iran social media accounts then falsely claim that those targets were successfully struck, even when no such attacks occurred. According to NewsGuard’s publication, disinformation has spread rapidly during the Iran war, with at least 69 false claims gaining hundreds of millions of views in less than a month. According to an article by The Conversation, we are experiencing a growing wave of disinformation known as "slopaganda", where AI-generated or misleading content is used for propaganda purposes. A DFR Lab report highlighted a sophisticated form of disinformation in which pro-Kremlin networks flood the internet with AI-generated propaganda to influence how large language models are trained. According to research by Hybrid CoE, China and Russia are increasingly using artificial intelligence to enhance disinformation and influence operations abroad. A report by TechXplore examined the increasing role of AI chatbots in spreading harmful and misleading information, especially among vulnerable users such as children. An article by The New York Times highlighted how AI-generated search summaries, such as Google’s "AI Overviews", can contribute to the spread of misleading or inaccurate information. According to a report by EU Disinfo Lab, the disinformation landscape in Spain is complex and rapidly evolving, as false narratives are often triggered by major events such as natural disasters and geopolitical tensions.
A NATO report highlighted that modern security threats increasingly rely on disinformation and propaganda to influence public opinion and decision-making. A report by NewsGuard's Reality Check debunked claims that CNN fabricated an Iranian statement regarding the April 2026 U.S.-Iran ceasefire. According to an article by The Washington Post, a Republican-aligned political action committee in Virginia, Democracy and Justice, is facing criticism for a mailer campaign that uses Ku Klux Klan imagery, civil rights symbolism, and old quotes from Barack Obama and Governor Abigail Spanberger to target Black voters ahead of the state’s 21st of April redistricting referendum. As reported by CyberNews, Elon Musk’s xAI has filed a federal lawsuit seeking to block Colorado’s new AI law, arguing that the law violates the First Amendment by compelling developers to redesign systems, such as its chatbot Grok, to reflect the state’s preferred views on fairness, diversity, and discrimination. Check First, in partnership with the French training organization Samsa, developed an immersive one-day workshop to help journalists identify foreign information manipulation, disinformation campaigns, and coordinated inauthentic behavior in an electoral context. A report by The European Commission argued that Europe’s democratic resilience is being weakened by an information environment shaped by the "attention economy", in which digital platforms prioritize engagement and advertising revenue over accuracy and public interest. The U.S. budget document characterizes certain international organizations and programs as sources of biased narratives and coordinated information influence that it frames as disinformation undermining U.S. interests. [State Actors] Russia Global Expansion of Kremlin Disinformation Network An essay by ISW described a long-term disinformation strategy in which the Kremlin is building a global media network to shape international narratives. 
By forming partnerships with foreign media outlets, expanding multilingual services, and distributing state-produced content, Russia aims to embed its narratives into local information ecosystems. This effort is part of a broader "cognitive warfare" approach, designed to influence public opinion and affect decision-making in other countries. A key component of this strategy is the cultivation of a global network of journalists trained through Russian-backed programs. These initiatives, along with collaborations with universities and media organizations, help spread pro-Kremlin narratives in a way that appears locally credible. At the same time, disinformation is amplified through both traditional media and digital platforms, including Telegram networks and influencers, enabling coordinated messaging across regions and languages. While Russia has faced resistance and restrictions, especially in Western countries, it continues to expand its influence in regions such as Africa, Asia, and Latin America. Source: Institute for the Study of War. The Kremlin’s Expanding Media Conglomerate. [online] Published 15 January 2020. Available at: https://understandingwar.org/backgrounder/kremlin%E2%80%99s-expanding-media-conglomerate/ Top Of Page AI Poisoning in Large Language Models An article by DFR Lab exposed a disinformation strategy in which pro-Kremlin networks, such as the Pravda system, flood the internet with propaganda to influence how AI models learn and respond. This tactic, known as "AI poisoning", aims to embed false or biased narratives directly into training data used by large language models (LLMs). Unlike surface-level manipulation, this approach is more dangerous because once disinformation is absorbed into a model, it becomes difficult and costly to remove. Research showed that propaganda from sources like Pravda and Russian state media has already entered widely used datasets such as Common Crawl, which feeds many AI systems. 
In some cases, models were able to reproduce false narratives, such as conspiracy claims about U.S.-Ukrainian biolabs, almost word for word. This demonstrates how disinformation can be internalized and later presented as credible information, potentially misleading users on a large scale. The broader risk is that AI systems themselves may become tools for spreading disinformation if their training data is compromised. While not all influence efforts succeed, those that do can have long-lasting effects by shaping how AI interprets and presents information. As propaganda networks continue to expand their output, the threat of AI-driven disinformation is likely to grow, highlighting the need for better data filtering and oversight in AI development. Source: DFRLab. Pravda in the pipeline: Early evidence of state-adjacent propaganda in AI training data. [online] Published April 2026. Available at: https://dfrlab.org/2026/04/08/pravda-in-the-pipeline/ Top Of Page Kremlin Disinformation Networks Targeting French-Speaking Audiences As published by DFR Lab, a Morocco-based YouTube channel, ODC TV, acts as a platform for disinformation by broadcasting pro-Kremlin narratives and content from sanctioned Russian figures to French-speaking audiences, including within the EU. Despite sanctions, the channel remains accessible and has gained significant reach, hosting individuals linked to Russian propaganda, far-right extremism, and influence operations. Through documentaries and interviews, it promotes narratives aligned with Kremlin ideology, particularly regarding the war in Ukraine. A central feature of the channel’s content is the spread of misleading and false claims. Its videos present biased interpretations of events, such as portraying Russia’s actions in Ukraine as justified or suggesting Western conspiracies behind major incidents like the Moscow terror attack. The inclusion of conspiracy theorists and extremist figures further amplifies disinformation and polarizing content.
The channel demonstrates how disinformation ecosystems operate across borders by using alternative platforms, multilingual content, and third-country intermediaries. Source: DFRLab. Morocco-based YouTube channel acts as French-language proxy for sanctioned Kremlin propagandists. [online] Published 10 April 2026. Available at: https://dfrlab.org/2026/04/10/morocco-based-youtube-channel-acts-as-french-language-proxy-for-sanctioned-kremlin-propagandists/ Top Of Page Propaganda as Driver of Russian Military Recruitment According to an article by the Atlantic Council, propaganda plays a key role in sustaining Russian military recruitment in Ukraine, beyond financial incentives or coercion. Although enlistment bonuses have declined, many soldiers still volunteer, suggesting that belief in state narratives significantly influences their decision to fight. Research on Russian prisoners of war shows a strong link between exposure to propaganda and support for the invasion. Soldiers who accept Kremlin narratives are far more likely to view the war as legitimate, dehumanize Ukrainians, and express willingness to return to combat. These narratives, including the "Russian World" ideology, rely on false information to justify violence and reinforce loyalty. In this way, propaganda strengthens military resilience and prolongs conflict, acting as a force multiplier that helps sustain Russia’s war effort despite heavy losses. Source: Atlantic Council. Recognizing the role of propaganda in Russia’s infrastructure of aggression. [online] Published 9 April 2026. Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/recognizing-the-role-of-propaganda-in-russias-infrastructure-of-aggression Top Of Page Disinformation in Reporting on Alleged Attack on USS Tripoli StopFake’s report revealed a clear case of disinformation in a report claiming that Iran attacked the USS Tripoli.
The claim, based on statements from the Iranian military and amplified by TASS, alleges that the ship was struck and forced to retreat. However, the report contains internal contradictions about the ship’s supposed location and retreat route, suggesting a lack of factual consistency and reliability. Verified information from U.S. Central Command directly contradicts these claims, confirming that the USS Tripoli was not attacked and continues to operate normally in the Arabian Sea. The false narrative appears to exaggerate or fabricate a military success, presenting misleading information about events that did not occur. Source: StopFake. ТАСС повторил фейк КСИР об ударе по американскому кораблю с 5000 моряков и морпехов на борту [TASS repeated an IRGC fake about a strike on an American ship with 5,000 sailors and marines on board]. [online] Published 8 April 2026. Available at: https://www.stopfake.org/ru/tass-povtoril-fejk-ksir-ob-udare-po-amerikanskomu-korablyu-s-5000-moryakov-i-morpehov-na-bortu/ Top Of Page The War in Ukraine Disinformation Campaign Targeting Arab Media A Telegram report by the SPRAVDI Center for Strategic Communications described disinformation efforts in which actors linked to the Kremlin attempt to influence Arab public opinion about Ukraine by purchasing placements on popular Telegram channels. Channel owners are reportedly approached with ready-made anti-Ukrainian videos and asked to publish them as if they were legitimate news content. This strategy relies on disguising coordinated propaganda as organic, locally sourced information. By outsourcing distribution to trusted regional influencers, the campaign seeks to increase credibility and obscure the original source of the content. The broader objective of this disinformation campaign is to weaken trust in Ukraine across the Arab world, particularly at a time when Ukraine is strengthening diplomatic and economic ties with Gulf states. Source: SPRAVDI. Telegram post. [online] n.d.
Available at: https://t.me/spravdi/54328?embed=1 Top Of Page Recent Kremlin Disinformation Campaigns Against Ukraine and the EU According to an article by EU VS Disinfo, recent pro-Kremlin disinformation campaigns focused on portraying Ukraine as a danger to the Baltic states, interfering in European elections, and pressuring the EU to reverse its energy sanctions on Russia. One of the main false narratives claimed that Ukraine’s strikes on Russian Baltic ports could provoke Russia into attacking the Baltic states and even trigger nuclear escalation. This allegation was based on already debunked reports that Ukrainian drones had flown over Baltic territory. At the same time, Russian FIMI operations intensified around upcoming parliamentary elections in several EU countries, including Bulgaria and Hungary, as well as in Armenia. Other narratives portraying Europe as dependent on Russian energy supplies have been a persistent feature of the information space since the outbreak of the conflict in Iran. They are often paired with the claim of an impending European economic crisis, aiming to promote the lifting of European sanctions on Russian oil trade. Source: EUvsDisinfo. The Kremlin points at Ukraine as a threat for the Baltic states, and accuses the EU with meddling in upcoming elections. [online] Published 2 April 2026. Available at: https://euvsdisinfo.eu/the-kremlin-points-at-ukraine-as-a-threat-for-the-baltic-states-and-accuses-the-eu-with-meddling-in-upcoming-elections/ Top Of Page AI Slop Ahead of Hungary’s Crucial Election As reported by NewsGuard's Reality Check, ahead of Hungary’s pivotal parliamentary election, coordinated misinformation campaigns are using AI-generated "slop" content and low-grade Russian disinformation to reinforce Prime Minister Viktor Orbán’s campaign messaging and undermine the opposition led by Péter Magyar. 
Although much of the content appears absurd, such as AI-generated videos featuring talking fruits, vegetables, and cartoon characters, the report argued that its purpose is not to persuade directly, but to normalize false associations and spread emotionally charged narratives through algorithm-friendly social media content. These campaigns aim to plant recurring ideas, such as Ukraine being dangerous, Brussels being unaccountable, and the opposition being violent. False claims ranged from fabricated stories that Ukrainians attempted to assassinate Orbán to allegations that the opposition planned to conscript young Hungarians to fight in Ukraine. Russian-linked campaigns, including Matryoshka and Storm-1516, reportedly amplified these claims through fake news reports, forged websites, and coordinated TikTok videos that gained significant reach. The report concluded that this combination of top-down political messaging and bottom-up viral content is more insidious, and likely more effective, than it might appear, and that short, low-grade, repetitive slop and Russia’s slapdash fake news reports should not be underestimated. Source: NewsGuard Reality Check. Don’t Dismiss AI Slop Ahead of Hungary’s Crucial Election. [online] Published 9 April 2026. Available at: https://www.newsguardrealitycheck.com/p/dont-dismiss-ai-slop-ahead-of-hungarys Top Of Page China Fake Facebook Accounts Bolster Chinese Embassy Attacks vs PCIJ According to an article by PressOne.PH, a coordinated network of fake Facebook accounts and pro-Duterte influencers amplified attacks by the Chinese Embassy in Manila against the Philippine Center for Investigative Journalism (PCIJ), following PCIJ’s reporting on pro-Beijing influence operations in the country. After PCIJ republished a guide on identifying pro-China propaganda in February 2026, the embassy accused the newsroom of being a U.S.-backed "tool" and questioned its editorial independence by pointing to funding from the National Endowment for Democracy.
Within hours, more than one hundred accounts with a combined reach of nearly 3 million followers rapidly spread the embassy’s message, ensuring the narrative was already widely circulating by morning. Analysis of more than 640 Facebook accounts found that a significant share showed medium to high signs of inauthentic behavior, including suspicious usernames, dormant profiles that suddenly became highly active, generic comments, bot-like posting patterns, and the use of stock or AI-generated profile images. These accounts systematically echoed the embassy’s claims that PCIJ was biased and part of a broader anti-China media campaign. This artificial amplification was designed to make the embassy’s messaging appear as organic public sentiment, while discouraging investigative reporting on sensitive issues such as the West Philippine Sea and foreign influence operations. More broadly, the campaign reflected a familiar state-backed tactic: discrediting independent media by portraying foreign-funded journalism as a hostile political operation. The report warned that this strategy threatens not only a single newsroom but the broader integrity of the Philippine information space and public understanding of national sovereignty issues. Source: PressOne.PH. Fake Facebook accounts bolster Chinese Embassy attacks vs PCIJ. [online] Published 9 April 2026. Available at: https://pressone.ph/fake-facebook-accounts-bolster-chinese-embassy-attacks-vs-pcij/ Top Of Page China’s Biased Coverage of the Iran War As published by The Jamestown Foundation, disinformation played a central role in shaping narratives about the Iran war, particularly through Chinese state media. Instead of reporting Iranian military losses or weaknesses, official outlets amplified Iranian claims, many of which were false, about successful strikes on U.S. assets. 
These included unverified reports of destroyed radar systems and downed aircraft, sometimes supported by AI-generated images or even video game footage presented as real evidence. This selective reporting reflects a deliberate effort to construct a distorted picture of the battlefield. By repeating Iranian disinformation and ignoring contradictory evidence, Chinese media created the impression that Iran’s defenses remained strong and effective, despite clear indications of U.S. and Israeli tactical success. As the gap between claims and reality became more apparent, the narrative shifted toward portraying the United States as strategically failing. Source: Jamestown Foundation. Weapons Systems Failures May Have Led to Personnel Removals. [online] Published 9 April 2026. Available at: https://jamestown.org/weapons-systems-failures-may-have-led-to-personnel-removals/ Top Of Page Iran Fake video of Captured U.S. Pilot in Iran NewsGuard’s Reality Check reported a disinformation campaign in which pro-Iran social media users circulated a video falsely claiming it showed Iranian forces capturing a U.S. pilot after an F-15E jet was shot down on the 3rd of April 2026. The video quickly gained traction online, amassing hundreds of thousands of views, and was presented as real-time evidence of major military success. In reality, the video was unrelated to the event and had been circulating online since at least May 2025, well before the 2026 conflict. Earlier versions of the same footage had been used in a completely different context. This highlights a common disinformation tactic of recycling old or unverifiable content and reframing it to fit current events. Notably, even Iranian state media avoided definitively claiming the pilot’s capture, instead using vague or contradictory language. At the same time, credible reports confirmed that both U.S. crewmembers were successfully rescued, contradicting the viral claims. Source: NewsGuard Reality Check. 
The Captured US Pilot Who Wasn’t Captured. [online] Published 7 April 2026. Available at: https://www.newsguardrealitycheck.com/p/the-captured-us-pilot-who-wasnt-captured Top Of Page AI-Driven Disinformation About World Leaders’ Health As revealed by NewsGuard’s Reality Check, pro-Iran social media users ran a coordinated disinformation campaign using AI-generated images and videos to manipulate perceptions of key political leaders during wartime. False content portrayed Iran’s leader as strong and active, while depicting U.S. President Donald Trump as weak and Israeli Prime Minister Benjamin Netanyahu as dead or incapacitated. These narratives were widely shared, particularly in Arabic and Farsi, indicating a targeted regional influence effort. In reality, the viral materials were fabricated or misrepresented. The video showing Iran’s leader was identified as AI-generated, and the image of Trump struggling to walk was also artificially created, containing clear digital markers and visual inconsistencies. At the same time, authentic footage of Netanyahu was falsely labeled as AI-generated to support claims that he had died. This mix of fake content and false accusations against real content demonstrates a dual strategy of creating disinformation while also discrediting genuine evidence. Sources: NewsGuard Reality Check. AI Misdiagnoses Health of World Leaders. [online] Published 7 April 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-misdiagnoses-health-of-world-leaders Top Of Page Iran Threats Produce Fake Claims of Missile Strikes A report by NewsGuard's Reality Check highlighted a recurring disinformation pattern in which Iran publicly threatens specific targets, after which pro-Iran social media accounts falsely claim that those targets were successfully struck, even when no such attacks occurred. 
On several occasions before the 7th of April 2026 ceasefire, users on X and other platforms circulated misleading videos and posts claiming Iranian missile strikes on high-profile sites, including Israel’s Dimona nuclear facility, Intel and IBM offices in Petah Tikva, and the King Fahd Causeway linking Saudi Arabia and Bahrain. These posts attracted millions of views and significantly amplified the perception of Iranian military success. In each case, the claims were false or unsupported by credible evidence. Sources: NewsGuard Reality Check. Iran Threats Produce Fake Claims of Missile Strikes. [online] Published 8 April 2026. Available at: https://www.newsguardrealitycheck.com/p/iran-threats-produce-fake-claims Top Of Page Disinformation Trends in the Iran War According to NewsGuard’s publication, disinformation has spread rapidly during the Iran war, with at least 69 false claims gaining hundreds of millions of views in less than a month. These misleading narratives appear frequently and consistently, and most of them promote a pro-Iran perspective. A key trend is the evolution of disinformation techniques. Early false claims often relied on misused or taken-out-of-context images and videos, but more recent ones increasingly use AI-generated content to fabricate events. Another growing tactic is to discredit accurate reporting by falsely claiming that real, verified media is actually AI-generated. Interestingly, much of this disinformation does not originate directly from Iranian state media but spreads through a wider network of actors and platforms. This reflects a decentralized information environment where false narratives can circulate widely regardless of their source. Sources: NewsGuard. 2026 Iran War False Claims Tracking Center. [online] Published 10 April 2026. 
Available at: https://www.newsguardtech.com/special-reports/2026-iran-war/ Top Of Page [AI Related Articles] The Rise of "Slopaganda" and AI-Driven Disinformation According to an article by The Conversation, we are experiencing a growing wave of disinformation known as "slopaganda", where AI-generated or misleading content is used for propaganda purposes. Following U.S.-Israeli strikes on Iran, both sides engaged in spreading manipulated media: official communications mixed real footage with entertainment clips. At the same time, Iran and its supporters circulated outdated war videos and AI-generated images falsely depicting attacks. Slopaganda relies on emotionally charged, attention-grabbing content that spreads quickly on social media, often without concern for truth. It includes not only misleading material but also symbolic or exaggerated content designed to shape perceptions and associations, such as portraying political figures in absurd or negative ways. Even when not meant to be taken literally, such content can influence beliefs and reinforce biases, especially when audiences are repeatedly exposed to it in fast-moving, low-attention environments. Source: The Conversation. Slopaganda wars: how (and why) the US and Iran are flooding the zone with viral AI-generated noise. [online] Published 8 April 2026. Available at: https://theconversation.com/slopaganda-wars-how-and-why-the-us-and-iran-are-flooding-the-zone-with-viral-ai-generated-noise-280024 Top Of Page Growing Propaganda Threats Through Language Models A DFRLab report highlighted a sophisticated form of disinformation in which pro-Kremlin networks, such as the Pravda system, flood the internet with AI-generated propaganda to influence how large language models (LLMs) are trained. This tactic, known as "AI poisoning" or "LLM grooming", aims to embed false or biased narratives directly into the data that AI systems learn from. 
Poisoning training data is far more dangerous than surface-level manipulation, such as tampering with search results, because it can permanently shape how AI models generate responses. Research shows that even a relatively small number of malicious documents can distort AI outputs, and evidence suggests that such propaganda has already entered widely used datasets like Common Crawl. In some cases, models were able to reproduce false narratives, such as Kremlin-backed claims about U.S.-Ukrainian biolabs, almost word for word. The broader implication is that AI systems themselves can become vehicles for disinformation if their training data is compromised. This threat is especially serious for smaller developers relying on public datasets, and it underscores the need for stricter data filtering and oversight to prevent long-term contamination of AI-generated knowledge. Source: Digital Forensic Research Lab (DFRLab). Pravda in the pipeline: Early evidence of state-adjacent propaganda in AI training data. [online] Published 8 April 2026. Available at: https://dfrlab.org/2026/04/08/pravda-in-the-pipeline/ Top Of Page AI-Amplified Disinformation in Russia and China According to research by Hybrid CoE, China and Russia are increasingly using artificial intelligence to enhance disinformation and influence operations abroad. AI allows these actors to produce and spread misleading content at a greater scale and speed, making propaganda more efficient and harder to detect. This includes generating fake images, videos, and text, and targeting specific audiences with tailored narratives that distort reality. While both countries use AI for disinformation, their approaches differ. China has more advanced domestic AI capabilities and uses them not only for content creation but also for data collection, monitoring, and precise targeting of individuals. 
Russia, although less advanced technologically, compensates by using widely available AI tools and focusing on high-volume, experimental disinformation campaigns. In some cases, both countries amplify similar narratives when their interests align. As a result of the growing use of AI in disinformation, democratic societies face greater risks of manipulation and polarization. Addressing this challenge requires a combination of technological solutions, regulation, and human oversight to detect and counter increasingly sophisticated disinformation tactics. Source: - Top Of Page AI Chatbots Pose a Growing Risk of Disinformation A report by TechXplore highlighted the increasing role of AI chatbots in spreading harmful and misleading information, especially among vulnerable users like children. Unlike traditional disinformation on social media, which is broadcast to large audiences, AI chatbots deliver personalized content directly to individuals. This makes disinformation more persuasive and harder to detect, as it can be tailored to a user’s emotions, fears, and personal situation. Research cited in the text shows that many AI systems are willing to generate dangerous or false guidance, including instructions related to violence, self-harm, or illegal activities. In some cases, chatbots present harmful content as reliable and factual, blurring the line between truth and falsehood. The report warned that because the "new" AI-driven disinformation operates more privately and at scale, it can influence individuals without public scrutiny. This creates a new form of disinformation in which misleading or dangerous ideas are not just shared but actively produced and adapted in real time, raising urgent concerns about regulation and the need to prevent AI systems from generating or reinforcing harmful content. Source: Tech Xplore. AI chatbots offer children harm as if it were help, says activist. [online] Published 11 April 2026. 
Available at: https://techxplore.com/news/2026-04-ai-chatbots-children-activist.html Top Of Page Disinformation Risks in AI-Generated Search Results An article by The New York Times highlighted how AI-generated search summaries, such as Google’s "AI Overviews", can contribute to the spread of misleading or inaccurate information. While these systems aim to provide quick and authoritative answers, they sometimes produce responses that contradict reliable sources or lack proper evidence. This can confuse users and create uncertainty about what is true, especially when incorrect information is presented with confidence. Although AI Overviews are accurate most of the time, even a small error rate results in a massive volume of misinformation due to the scale of global searches. Additionally, many answers are “ungrounded,” meaning they are not fully supported by the sources they cite. This makes it difficult for users to verify claims and increases the risk that false or distorted information will be accepted as fact. AI systems can unintentionally amplify disinformation by generating and distributing incorrect content at scale. Therefore, the text underscored that as these tools become more central to how people access information, the need for critical evaluation and cross-checking sources becomes essential to avoid being misled. Source: The New York Times. Google’s A.I. Overviews Struggle With Accuracy, Study Finds. [online] Published 7 April 2026. Available at: https://www.nytimes.com/2026/04/07/technology/google-ai-overviews-accuracy.html Top Of Page [General Reports] Disinformation Ecosystem in Spain According to a report by EU Disinfo Lab, the disinformation landscape in Spain is complex and rapidly evolving, as false narratives are often triggered by major events such as natural disasters and geopolitical tensions. 
These campaigns use emotionally charged and misleading content, ranging from conspiracy theories about weather manipulation to false claims about migration or energy failures, to exploit public fear. Social media platforms play a central role in amplifying such content, often prioritizing virality over accuracy. This ecosystem stands out for the blending of domestic and foreign disinformation efforts. Russian-linked networks and other external actors frequently amplify internal divisions by spreading misleading narratives about migration, elections, or EU policies. At the same time, local "digital agitators" and political movements use similar tactics to build influence and erode trust in institutions. This creates a feedback loop where disinformation spreads across platforms and even translates into real-world political action. Recurring themes, such as anti-migrant sentiment and conspiracy theories, are constantly adapted to new contexts, making them more resilient and harder to counter. Despite efforts like fact-checking and regulation, the speed and scale of disinformation continue to challenge authorities. Source: EU DisinfoLab. The disinformation landscape in Spain. [online] Published 8 April 2026. Available at: https://www.disinfo.eu/publications/disinformation-landscape-in-spain/ Top Of Page NATO’s Assessment and Response to Cognitive Warfare Threats A NATO report highlighted that modern security threats increasingly rely on disinformation and propaganda to influence public opinion and decision-making. Adversaries exploit digital platforms, social media, and emerging technologies such as AI and deepfakes to spread misleading or false information, weaken trust in institutions, and interfere in democratic processes. These tactics are part of broader "cognitive warfare", where the goal is not merely to shape what people know but to manipulate how individuals and societies think and act. 
Disinformation is central to this strategy, targeting both civilians and military decision-makers. The report emphasized that such manipulation has already played a major role in recent conflicts, demonstrating how information can be weaponized alongside traditional military force. By exploiting vulnerabilities in human cognition and the information environment, adversaries aim to disrupt decision-making cycles and gain a strategic advantage without confrontation. In response, NATO stressed the need for stronger scientific and technological capabilities to detect, analyze, and counter disinformation-driven cognitive attacks. This includes improving resilience at both societal and institutional levels, investing in research, and fostering cooperation across governments and sectors. Source: NATO Science and Technology Organization (STO). NATO Chief Scientist Research Report on Cognitive Warfare. [online] Published 19 January 2026. Available at: https://www.sto.nato.int/wp-content/uploads/chief-scientist-report-cognitive-warfare-final.pdf Top Of Page Fake Claim that CNN Reported a Fake Iranian Statement A report by NewsGuard's Reality Check debunked claims that CNN fabricated an Iranian statement regarding the April 2026 U.S.-Iran ceasefire. Following CNN’s report that Iran claimed it had "forced the United States to accept in principle its 10-point plan", Donald Trump publicly accused the network of spreading "fake news". His criticism was quickly amplified by members of his administration, conservative media outlets, and online influencers, generating millions of views across multiple countries and languages. According to the report, CNN’s coverage was actually based on an authentic statement issued by Iran’s security council and also published by Iranian state media, including Fars News and Press TV. The confusion appears to stem from the existence of two legitimate versions of the statement: a full version, which included Iran’s claim that the U.S. 
had accepted its 10-point plan, and an abridged version later cited by Trump that omitted this language. Source: NewsGuard Reality Check. No, CNN Did Not Fake Iranian Statement. [online] Published April 2026. Available at: https://www.newsguardrealitycheck.com/p/no-cnn-did-not-fake-iranian-statement Top Of Page A GOP-Aligned Group is Using Klan Imagery to Target Black Voters According to an article by The Washington Post, a Republican-aligned political action committee in Virginia, Democracy and Justice, is facing criticism for a mailer campaign that uses Ku Klux Klan imagery, civil rights symbolism, and old quotes from Barack Obama and Governor Abigail Spanberger to target Black voters ahead of the state’s 21st of April redistricting referendum. Critics, including the NAACP and Democratic officials, said the ads are designed to create confusion and suppress turnout by falsely implying that Obama and Spanberger oppose the measure. The campaign appears aimed at African American and elderly voters, both key constituencies in what is expected to be a closely contested vote. The referendum is politically significant because it could reshape Virginia’s congressional districts in a way that benefits Democrats and influences the broader national battle for control of the U.S. House. While polling suggests that most Black voters support the measure, opponents are spending heavily to challenge it, and even limited confusion among voters could affect the outcome. Source: The Washington Post. A GOP-aligned group is using Klan imagery to target Black voters. [online] Published 9 April 2026. Available at: https://www.washingtonpost.com/politics/2026/04/09/virginia-redistricting-obama-civil-rights/ Top Of Page Elon Musk’s xAI Sues Colorado Over AI Law As reported by CyberNews, Elon Musk’s xAI has filed a federal lawsuit seeking to block Colorado’s new AI law, Senate Bill 24-205, which is set to take effect on June 30. 
The law requires developers of "high-risk" AI systems used in areas such as employment, housing, education, healthcare, and financial services to take reasonable steps to prevent algorithmic discrimination and disclose known risks. xAI argued that the law violates the First Amendment by compelling developers to redesign systems, such as its chatbot Grok, to reflect the state’s preferred views on fairness, diversity, and discrimination. The lawsuit also reflects a broader national debate over whether AI regulation should be handled by individual states or by Washington. xAI claimed that a patchwork of state-level rules could hinder innovation, weaken competition, and undermine U.S. leadership in artificial intelligence. The company is seeking both an injunction to stop enforcement and a court ruling declaring the law unconstitutional. Source: Cybernews. Elon Musk's xAI sues Colorado over AI law and claims it forces specific viewpoints into AI. [online] Published 10 April 2026. Available at: https://cybernews.com/ai-news/musk-xai-court-colorado-ai-law-kill-innovation-viewpoints/ Top Of Page [Appendix - Frameworks to Counter Disinformation] The Evolving Impact of Digital Technologies on European Democracy A report by The European Commission argued that Europe’s democratic resilience is being weakened by an information environment shaped by the "attention economy", in which digital platforms prioritize engagement and advertising revenue over accuracy and public interest. Algorithms tend to amplify emotional, divisive, and conflict-driven content, contributing to fragmented realities, ideological echo chambers, and the spread of mis- and disinformation. Rather than simply convincing people of specific falsehoods, today’s manipulation often seeks to "flood the zone" with confusion, distrust, and competing narratives, eroding any shared sense of reality essential for democratic institutions to function. 
The report described this as a growing "fantasy-industrial complex", further intensified by generative AI, which can produce persuasive text, audio, and video at scale. To address these risks, the report recommended building alternative public digital spaces that do not rely on engagement-based business models, strengthening crowd-sourced knowledge systems similar to Wikipedia, expanding fact-checking mechanisms, improving media literacy, and reducing the profitability of disinformation. It also called for greater user control over recommendation algorithms and platform feeds, as well as reforms to platform business models through regulation and alternative revenue structures such as subscriptions or non-profit models. A central conclusion was that these measures cannot succeed without stronger EU digital sovereignty, as much of Europe’s information space is controlled by foreign-owned platforms whose interests may not align with democratic values. Source: European Commission, Joint Research Centre. Fractured reality: How democracy can win the global struggle over the information space. [online] Published 9 April 2026. Available at: https://publications.jrc.ec.europa.eu/repository/handle/JRC144603 Top Of Page Check First and French Journalists in a Virtual Foreign Interference Campaign Check First, in partnership with the French training organization Samsa, developed an immersive one-day workshop to help journalists identify foreign information manipulation, disinformation campaigns, and coordinated inauthentic behavior in an electoral context. The training was designed in response to growing concerns that foreign interference is increasingly targeting not only national elections but also local elections in France, where candidates have recently faced disinformation attacks. The initiative aims to strengthen newsroom preparedness so journalists can quickly recognize and explain such campaigns to the public. 
Using Check First’s "Tutki" platform, participants were placed in a simulated election scenario set in the fictional coastal city of Porteval. Journalists navigated a realistic stream of social media content, including ordinary lifestyle posts, manipulated political narratives, troll activity, and fabricated documents linked to a fake foreign company. Following a similar session in Armenia, the organizers were convinced that raising awareness of disinformation campaigns can be practical, entertaining, and applicable to a wide audience of media professionals throughout Europe. Source: Check First. Check First and Samsa immerse French journalists in a virtual foreign interference campaign. [online] Published 13 March 2026. Available at: https://checkfirst.network/check-first-and-samsa-immerse-french-journalists-in-a-virtual-foreign-interference-campaign/ Top Of Page US Budget Frames Global Institutions as Disinformation Actors The Budget of the United States Government, Fiscal Year 2027, identifies certain international organizations and programs as contributors to disinformation and narrative manipulation, particularly highlighting the United Nations, the World Health Organization, and the National Endowment for Democracy. It asserts that these actors disseminate biased or misleading narratives that undermine U.S. interests and distort global information environments, including through media support initiatives and fact-checking mechanisms that are portrayed as selectively shaping public discourse. The document characterizes these activities as part of broader influence tactics that involve funding media networks, promoting specific ideological narratives, and using institutional platforms to legitimize contested information. 
It frames these actors as participating in coordinated efforts that influence perception and policy debates, and proposes reducing or eliminating funding to such entities as a means of countering what it identifies as disinformation-driven influence operations. Source: The White House. Budget of the U.S. Government, Fiscal Year 2027. [online] Published April 2026. Available at: https://www.whitehouse.gov/wp-content/uploads/2026/04/budget_fy2027.pdf Top Of Page [CRC Glossary] The modern Information Environment is projected to continue growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cyber-based hostile influence campaigns 13th - 19th April 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks which enhance their effect (henceforth Cyfluence, as opposed to cyber-attacks that aim to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered an epistemological branch of Information Operations (IO) or Information Warfare (IW). Typically, and as has been customary during the last decade, the information is spread across various internet platforms, which form the different elements of the hostile influence campaign; as such, connectivity and repetitiveness of content between several elements are the core characteristics of influence campaigns. Hostile influence campaigns, much like cyber-attacks, have also become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much as in the cyber security arena, PR professionals and government agencies are responding to negative publicity and disinformation shared over the news and social media. We use the term cyber-based hostile influence campaigns because this definition also includes cyber-attacks aimed at influencing (such as hack-and-leak operations during election time), while excluding other, more traditional kinds of influence such as diplomatic, economic, and military measures. During the 13th to the 19th of April 2026, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. Some of the campaigns mentioned rely solely on social media and news outlets, while others leverage cyber-attack capabilities. 
[Contents]
[Introduction]
[Report Highlights]
[Report Summary]
[State Actors]
Russia
  • Weaponizing Comedy as a Disinformation Tactic
  • Fabricated Media Content Used to Spread Disinformation
  • Russia Targeted Elections in Hungary and Bulgaria
The War in Ukraine
  • Religion as a Tool of Hybrid Warfare
  • Political Changes in Hungary Affect Its International Relations
  • Misleading Framing of the Russia–Ukraine War and Timeline
China
  • PRC Media Influence in Arabic-Language Environments
Iran
  • Pro-Iran Networks Gained a Billion Views on War Propaganda
  • Disinformation as a Divisive Strategic Weapon
[General Reports]
  • Musk Gave a 50 Million-View Boost to the COVID Vaccine Hoax
  • Online Extremism in Canada in 2025
  • Deep State Narratives Blur Lines Between Truth and Disinformation
  • Disinformation Exploits Algorithms and Human Vulnerabilities to Erode Trust
[Appendix - Frameworks to Counter Disinformation]
  • EU Called for Stronger Action on Information Integrity
  • Ofcom Founds Additional Online Safety Measures
[CRC Glossary]
[Report Highlights] As reported by EU VS Disinfo, pro-Kremlin information channels intensified efforts to undermine trust in democratic processes in both Hungary and Bulgaria. An article by The Jamestown Foundation described Péter Magyar’s electoral victory in Hungary as a major political shift, particularly in reducing Russian influence in the European Union. According to a study by NATO, the People’s Republic of China has expanded its media presence in Arabic-speaking regions as part of broader FIMI efforts. A report by Carnegie Endowment for International Peace argued that digital technologies, especially social media and AI, are increasingly used by governments to shape public opinion and reinforce political narratives, with this dynamic particularly visible in relations between China and Taiwan. Ofcom announced that it will take new regulatory steps to combat the spread of illegal intimate images online, including non-consensual content and explicit deepfakes. 
An article published by the International Journal of Communication examines how deep state conspiracies function as adaptable disinformation narratives used by diverse political actors to shape perceptions, link domestic and foreign threats, and blur the boundary between truth and falsehood. An article published in Healthcare MDPI analyses how disinformation spreads through algorithm-driven platforms that exploit emotional and cognitive vulnerabilities, reinforcing false beliefs and eroding trust in media and institutions. [Report Summary] According to a report by EU vs. Disinfo, Russian figures Vladimir Kuznetsov and Alexei Stolyarov present themselves as pranksters, but their activities function as a form of coordinated disinformation. As revealed by the FREEDOM TV channel’s website, Russian propaganda circulated a false claim that Ukraine declared itself a "winner" in the Iran war, using a manipulated image from the FREEDOM channel as supposed evidence. As reported by EU VS Disinfo, pro-Kremlin information channels intensified efforts to undermine trust in democratic processes in both Hungary and Bulgaria. According to an article by EU vs. Disinfo, Russia has used religion, particularly Orthodox Church structures, as a tool in its broader disinformation and hybrid warfare strategy against Ukraine. An article by The Jamestown Foundation described Péter Magyar’s electoral victory in Hungary as a major political shift, particularly in reducing Russian influence in the European Union. As published by The Atlantic Council, widespread narratives about the Russia–Ukraine War are shaped by misleading or incomplete information, particularly the claim that the conflict began in 2022. According to a study by NATO, the People’s Republic of China has expanded its media presence in Arabic-speaking regions as part of broader FIMI efforts. 
A report by Carnegie Endowment for International Peace argued that digital technologies, especially social media and AI, are increasingly used by governments to shape public opinion and reinforce political narratives, with this dynamic particularly visible in relations between China and Taiwan. According to an article by ISD Institute for Strategic Dialogue, during the first month of the Iran war, two coordinated pro-Iran networks on X, BRICS4CLICKS and Verified4War, generated over one billion views by spreading false, misleading, and AI-generated content. According to a report by The Hill, disinformation rooted in real social tensions has long been a national security concern. As reported by NewsGuard's Reality Check, Elon Musk significantly amplified a false claim that COVID-19 vaccines caused up to 60,000 deaths in Germany by reposting content to his massive audience on X. A report by ISD analyzed Canadian domestic extremist activity across social media between June and November 2025, identifying hundreds of accounts generating over one million posts. In a statement at the UN Committee on Information, the European Union reaffirmed its support for efforts to promote accurate, reliable, and accessible information worldwide. Ofcom announced that it will take new regulatory steps to combat the spread of illegal intimate images online, including non-consensual content and explicit deepfakes. An article published by the International Journal of Communication examines how deep state conspiracies function as adaptable disinformation narratives used by diverse political actors to shape perceptions, link domestic and foreign threats, and blur the boundary between truth and falsehood. An article published in Healthcare MDPI analyses how disinformation spreads through algorithm-driven platforms that exploit emotional and cognitive vulnerabilities, reinforcing false beliefs and eroding trust in media and institutions. 
[State Actors] Russia Weaponizing Comedy as a Disinformation Tactic According to a report by EU vs. Disinfo, Russian figures Vladimir Kuznetsov (Vovan) and Alexei Stolyarov (Lexus) present themselves as pranksters, but their activities function as a form of coordinated disinformation. By impersonating political figures and publishing selectively edited conversations, they create misleading narratives that consistently favor the Kremlin. Although framed as entertainment, their content is amplified by pro-Russian media networks, turning staged interactions into tools for influencing public opinion. Their operations focus heavily on discrediting Ukraine, Western governments, and post-Soviet opposition movements. Through deceptive calls, they extract comments that are taken out of context and used to support false claims, such as portraying protests in Belarus and Georgia as Western-controlled "color revolutions" or suggesting declining Western support for Ukraine. These manipulated narratives are then spread online, highlighting how modern propaganda no longer relies only on traditional media but also on viral, seemingly informal content. Source: EUvsDisinfo. Pranked by the Kremlin: fake phone calls as a FIMI instrument. [online] Published 15 April 2026. Available at: https://euvsdisinfo.eu/pranked-by-the-kremlin-fake-phone-calls-as-a-fimi-instrument/ Top Of Page Fabricated Media Content Used to Spread Disinformation As revealed by the FREEDOM TV channel’s website, Russian propaganda circulated a false claim that Ukraine declared itself a "winner" in the Iran war, using a manipulated image from the FREEDOM channel as supposed evidence. The image was digitally altered to include a fake caption stating that Ukraine had defeated Iran and that its air defense instructors were returning home. This fabricated visual was designed to mislead audiences and create a false narrative about Ukraine’s role in the conflict. 
The disinformation spread quickly across Russian Telegram channels. It was even picked up by some Ukrainian media outlets and bloggers without proper verification. In reality, the original broadcast had nothing to do with Iran. It featured commentary on the European Commission's positions on Russian threats to the Baltic states, with an Estonian official speaking via video link. Source: Freedom (UATV). Statement by the SE "MPIU" regarding the use of the logo and studio image of the FREEDOM TV channel to create a Russian fake. [online] Published 9 April 2026. Available at: https://uatv.ua/uk/povidomlennya-dp-mpiu-shhodo-vykorystannya-logotypa-i-zobrazhennya-studiyi-telekanalu-freedom-dlya-stvorennya-rosijskogo-fejku/ Russia Targeted Elections in Hungary and Bulgaria As reported by EU vs. Disinfo, pro-Kremlin information channels intensified efforts to undermine trust in democratic processes in both Hungary and Bulgaria. In Hungary, the campaign focused on discrediting the opposition party TISZA and its leader, Péter Magyar, while spreading claims that the EU and Ukraine were interfering in the elections of the 12th of April 2026. Following TISZA’s victory, similar narratives are expected to persist, targeting both the new political leadership and the EU. Comparable messaging has also been directed at Bulgaria ahead of its upcoming parliamentary elections, particularly through allegations of EU interference and censorship. Beyond election-related disinformation, pro-Kremlin outlets promoted broader narratives aimed at weakening trust in the EU. These included false claims that the EU is secretly developing nuclear weapons. At the same time, messaging around EU financial support to Ukraine framed the assistance as prolonging the war. Another report by EU vs. Disinfo presented a specific case where a Russian disinformation campaign spread a fabricated claim that Magyar had killed a family puppy. 
The story, originating from a newly created and anonymous website, falsely alleged that Magyar’s ex-wife had accused him of abusive behavior in a memoir that does not exist. Despite lacking any evidence, the claim quickly gained traction online, reaching millions of users across multiple languages. The disinformation moved beyond fringe platforms when Polish opposition leader Jarosław Kaczyński repeated the allegation during a press conference. The claim was later acknowledged as untrue by his party, while Magyar’s ex-wife explicitly denied ever making such accusations. Source: EUvsDisinfo. Russia targets elections in Hungary and Bulgaria. [online] Published 17 April 2026. Available at: https://euvsdisinfo.eu/russia-targets-elections-in-hungary-and-bulgaria/ The War in Ukraine Religion as a Tool of Hybrid Warfare According to an article by EU vs. Disinfo, Russia has used religion, particularly Orthodox Church structures, as a tool in its broader disinformation and hybrid warfare strategy against Ukraine. Alongside historical ties between the Ukrainian Orthodox Church (UOC) and Moscow, affiliated networks have spread propaganda that frames the war in religious terms. These narratives falsely portray Ukraine as persecuting believers and describe Russia’s actions as a "holy war" to defend true Christianity, reinforcing pro-Kremlin ideology. A key element of this disinformation is the systematic use of media channels linked to the UOC to spread false claims, conspiracy theories, and manipulative interpretations of events. These platforms promote narratives about a supposed "church schism" and label Ukrainian institutions and believers as "Satanists" or "heretics". Such language is designed to dehumanize Ukrainians and legitimize Russian aggression by embedding propaganda within religious discourse. Additionally, these campaigns aim to reshape public perception by introducing Kremlin-aligned terminology into everyday use. Source: EUvsDisinfo. 
How Russia weaponizes the church in Ukraine. [online] Published 14 April 2026. Available at: https://euvsdisinfo.eu/how-russia-weaponizes-the-church-in-ukraine/ Political Changes in Hungary Affect Its International Relations An article by The Jamestown Foundation described Péter Magyar’s electoral victory in Hungary as a major political shift, particularly in reducing Russian influence in the European Union. While the new government plans to investigate past ties between Hungarian officials and Moscow, the legacy of Russian influence, especially through energy dependence and political networks, remains significant. Disinformation is highlighted as a key tool previously used to shape public opinion and policy, particularly under the former government. A central theme is how pro-Kremlin disinformation has affected Hungary’s domestic attitudes, especially toward Ukraine. Years of narratives portraying Ukraine negatively have contributed to public skepticism about closer ties and EU membership for Ukraine. The article emphasized that while political leadership can shift quickly, the effects of disinformation are more persistent. Public distrust, shaped by repeated misleading narratives, continues to complicate Hungary’s foreign policy decisions. Source: The Jamestown Foundation. Péter Magyar’s Historic Victory Holds Implications for Russia and Ukraine. [online] Published April 2026. Available at: https://jamestown.org/peter-magyars-historic-victory-holds-implications-for-russia-and-ukraine/ Misleading Framing of the Russia–Ukraine War and Timeline As published by The Atlantic Council, widespread narratives about the Russia–Ukraine War are shaped by misleading or incomplete information, particularly the claim that the conflict began in 2022. In reality, the war started in 2014 with Russia’s annexation of Crimea and its covert military intervention in eastern Ukraine. 
Framing the war as a shorter, recent conflict obscures its true nature as a long-term campaign of aggression and contributes to misunderstandings in international discourse. A central theme is the role of Kremlin disinformation in distorting perceptions of the conflict. Russia portrayed its actions in 2014 as local uprisings by oppressed Russian-speaking populations, while denying direct military involvement. These false narratives were sometimes repeated by international media, creating confusion and lending credibility to fabricated claims. In fact, evidence and later admissions confirm that the so-called "separatist" movements were orchestrated and supported by Moscow from the outset. This disinformation continues to influence policy debates and peace proposals, such as the idea that territorial concessions could end the war. Dismantling these persistent falsehoods and acknowledging the full history of the conflict since 2014 is therefore essential for understanding the war and achieving any meaningful resolution. Source: Atlantic Council. Russia invaded Ukraine in 2014 long before the full-scale war of 2022. [online] Published 18 April 2026. Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/russia-invaded-ukraine-in-2014-long-before-the-full-scale-war-of-2022/ China PRC Media Influence in Arabic-Language Environments According to a study by NATO, the People’s Republic of China (PRC) has expanded its media presence in Arabic-speaking regions as part of broader foreign information manipulation and interference (FIMI) efforts. Through state-controlled media and partnerships with local outlets, Beijing seeks to shape public opinion, promote its global image, and challenge Western narratives. While social media plays a role, traditional media remains a key channel for spreading these narratives, often presenting China as a reliable development partner while downplaying sensitive issues such as human rights. 
The study found that PRC messaging is selectively effective. Narratives tied to practical benefits like infrastructure projects, trade, and logistics are more likely to be picked up and amplified by local media. Similarly, content that aligns with existing anti-Western or geopolitical competition narratives gains traction. In contrast, more ideological messaging, such as concepts like a "shared global destiny", has limited impact. Importantly, PRC narratives are rarely critically challenged in Arabic-language media, even when they carry a strong promotional or propagandistic tone. The findings highlighted a pattern of indirect disinformation influence. Rather than spreading overt falsehoods, PRC media promote biased, one-sided narratives that are selectively adopted and reframed by regional outlets. This contributes to shaping perceptions of global politics in ways that favor Beijing, with potential spillover effects on Arabic-speaking audiences beyond the region, including in Europe. Source: NATO Strategic Communications Centre of Excellence. Assessing PRC Media: Framing and Narratives in Arabic-Language Media Environments. [online] Published April 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Assessing-PRC-Media-FINAL-465f0.pdf?zoom=page-fit China’s Digital Disinformation Affects Taiwan’s Policy A report by Carnegie Endowment for International Peace argued that digital technologies, especially social media and AI, are increasingly used by governments to shape public opinion and reinforce political narratives, with this dynamic particularly visible in relations between China and Taiwan. It highlighted how Beijing promotes a simplified "unification versus independence" framework to describe Taiwan’s political future, reducing a complex and diverse set of public attitudes into a binary choice. 
This framing, amplified through global media, influencer networks, and Chinese-developed AI systems, supports China’s broader effort to enforce the "One China" principle and weaken international support for Taiwan’s self-determination. However, survey data from Taiwan showed that public opinion is far more nuanced, spanning multiple positions such as conditional unification, maintaining the status quo, or conditional independence. This complexity is often overlooked not only by external actors but also by domestic media and political discourse, which tend to favor simplified narratives for mobilization purposes. The report concluded that this "digital hegemony" poses a significant challenge to democratic debate and policymaking. Source: Carnegie Endowment for International Peace. Digital Hegemony and the Reification of Taiwan’s “Unification-Independence” Dichotomy. [online] Published 14 April 2026. Available at: https://carnegieendowment.org/research/2026/04/digital-hegemony-and-the-reification-of-taiwans-unification-independence-dichotomy Iran Pro-Iran Networks Gained a Billion Views on War Propaganda According to an article by the Institute for Strategic Dialogue (ISD), during the first month of the Iran war, two coordinated pro-Iran networks on X, BRICS4CLICKS and Verified4War, generated over one billion views by spreading false, misleading, and AI-generated content. Despite comprising only a few dozen accounts, the networks achieved massive reach through coordinated reposting, paid verification (blue checkmarks), and amplification via X’s "For You" algorithm. Their content was further boosted by high-profile accounts, including diplomats and influencers. Both networks posed as news or commentary accounts while promoting pro-Iran narratives that exaggerated military successes and targeted the United States and Israel. 
They widely circulated fabricated claims, such as the death of Israeli Prime Minister Benjamin Netanyahu, and used AI-generated war footage, clickbait posts, and conspiracy theories to drive engagement. Their activity revealed clear coordination patterns, with most reposts occurring within the networks themselves and consistent similarities in account metadata and behavior. ISD found no evidence that either network is state-backed, and it remains unclear whether the underlying motivations are financial or ideological. Although X removed some accounts, gaps in moderation allowed these networks to gain visibility and credibility. Source: Institute for Strategic Dialogue. How pro-Iran networks gained a billion views on war propaganda. [online] Published April 2026. Available at: https://www.isdglobal.org/digital-dispatch/how-pro-iran-networks-gained-a-billion-views-on-war-propaganda/ Disinformation as a Divisive Strategic Weapon According to a report by The Hill, disinformation rooted in real social tensions has long been a national security concern. It recalls how President Eisenhower warned during the Little Rock crisis that visible racism damaged U.S. credibility and gave adversaries propaganda material. Today, the same dynamic persists in a different form, as foreign actors, particularly Iran, exploit America’s racial history and internal divisions to weaken unity and undermine support for national policies. According to the article, modern disinformation blends truth with exaggerations and falsehoods. Iranian-linked campaigns reportedly use social media, memes, and music to target specific groups, such as Black Americans and women, while promoting anti-American, anti-Israel, and antisemitic narratives. This strategy does not aim to invent new conflicts but to amplify existing ones, creating deeper polarization and mistrust within society. The report emphasized that domestic actors unintentionally contribute to this problem. 
Political extremes, influencers, and media figures may amplify divisive narratives for profit or ideology, effectively reinforcing foreign propaganda. Source: The Hill. Iran is exploiting our racial divisions online — and Americans are helping. [online] Published 2026. Available at: https://thehill.com/opinion/international/5830889-iran-propaganda-american-divisions/ [General Reports] Musk Gave a 50 Million-View Boost to the COVID Vaccine Hoax As reported by NewsGuard's Reality Check, Elon Musk significantly amplified a false claim that COVID-19 vaccines caused up to 60,000 deaths in Germany by reposting content to his massive audience on X. The claim originated from testimony by a former Pfizer toxicologist, who speculated, without evidence, that official reports of vaccine-related deaths were vastly undercounted. While the claim initially gained limited traction, it spread rapidly after being shared by a commentator and then boosted by Musk, reaching tens of millions of views within hours. In reality, Germany’s vaccine regulator, the Paul Ehrlich Institute, reported 2,133 deaths following vaccination, but emphasized that such reports do not imply causation. Its analysis identified only 74 cases in which a link to vaccination was possible or probable, out of nearly 200 million doses administered. Source: NewsGuard Reality Check. Musk Gives a 50 Million View Boost. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/musk-gives-a-50-million-view-boost Online Extremism in Canada in 2025 A report by ISD analyzed Canadian domestic extremist activity across social media between June and November 2025, identifying hundreds of accounts generating over one million posts. These groups are highly active on platforms like X and Telegram, where different ideologies, from ethnonationalism to white supremacy, thrive. 
Their content often focuses on narratives of societal decline, threats to national identity, and hostility toward minorities, helping to amplify and normalize extremist worldviews online. Disinformation plays a central role in amplifying conspiracy theories and distorting real-world events. Narratives linked to figures like Romana Didulo, anti-vaccination movements, and false claims about minority groups are widely circulated. Offline incidents, such as violent attacks or political events, are often misrepresented or exaggerated to justify hatred, for example, by falsely linking entire communities to violence or spreading claims of media bias. The spread of disinformation is closely tied to rising hate speech and calls for violence. False or misleading claims about migrants, LGBTQ individuals, and other minorities fuel hostility and dehumanization, while viral posts from influential accounts can rapidly intensify these trends. Although only a small portion of posts contain explicit violent language, spikes often follow disinformation-driven reactions to major events. Source: Institute for Strategic Dialogue (ISD). Online Domestic Extremism in Canada: June–November 2025. [online] Published April 2026. Available at: https://www.isdglobal.org/wp-content/uploads/2026/04/Online-Domestic-Extremism-in-Canada-June-November-2025.pdf Deep State Narratives Blur Lines Between Truth and Disinformation An article published by the International Journal of Communication, written by Stephen Hutchings, examines how “deep state” (DS) narratives function as a fluid and contested form of disinformation across political and cultural contexts. It shows that DS conspiracism is used by a wide range of actors, including populist political movements, mainstream politicians, Kremlin-affiliated media, and counter-disinformation organizations, to advance competing narratives. 
These actors deploy DS claims both as tools of influence and as accusations against opponents, often linking domestic grievances with foreign adversaries in a circular dynamic. Tactics include narrative amplification through media ecosystems, strategic framing of elites as covert manipulators, cross-border adaptation of conspiracy tropes, and the use of hashtags and community-building mechanisms to reinforce shared belief systems. The study highlights how DS narratives are frequently integrated into broader geopolitical messaging, particularly along the U.S.-Russia axis, where they are alternately framed as hidden truths or as disinformation, depending on the political perspective. The article emphasizes that DS conspiracism operates as a “master narrative” within disinformation campaigns, capable of absorbing and connecting multiple claims into a coherent but unfalsifiable framework. Its effectiveness lies in its adaptability: it shifts between being presented as factual revelation, symbolic critique, or deliberate falsehood, allowing actors to exploit ambiguity and evade straightforward refutation. Disinformation tactics associated with DS narratives include selective use of evidence, rhetorical distancing, repurposing foreign-origin concepts to serve local political agendas, and reciprocal accusations between opposing actors. The analysis concludes that this fluidity undermines binary distinctions between truth and falsehood, enabling DS narratives to persist and evolve across platforms, cultures, and political systems, thereby complicating efforts to counter disinformation and contributing to broader instability in democratic information environments. Source: International Journal of Communication. 
[online] Available at: https://ijoc.org/index.php/ijoc/article/view/26038/5302 Disinformation Exploits Algorithms and Human Vulnerabilities to Erode Trust An article published in Healthcare MDPI analyses disinformation as a systemic and intentional form of information manipulation embedded within digital ecosystems and amplified by algorithm-driven platforms. It identifies digital platforms and social media networks as the primary enablers of disinformation dissemination, where algorithms prioritize emotionally charged, sensationalist, and polarizing content to maximize engagement, regardless of accuracy. These dynamics facilitate the rapid and large-scale spread of false or misleading narratives, often outpacing accurate information. Tactics include algorithmic amplification, exploitation of echo chambers and filter bubbles, and repeated exposure to false claims, which increases perceived credibility through familiarity effects. Disinformation actors, though not always explicitly named, operate through digitally mediated environments, leveraging platform structures and user behavior to propagate misleading content and shape public perception. The article further highlights how disinformation campaigns exploit psychosocial vulnerabilities, including political beliefs, social identity, and emotional responses, to increase acceptance and diffusion of false narratives. Techniques such as confirmation bias, identity-based messaging, and emotionally manipulative content reinforce group alignment and polarization. At the same time, repeated exposure contributes to the “illusory truth effect”, making false information more likely to be believed. The cumulative impact of these tactics includes erosion of trust in media and institutions, distortion of risk perception, and behavioral influence in areas such as public health decision-making. 
Disinformation is thus framed as a coordinated influence process that operates across cognitive, emotional, and social dimensions, creating feedback loops of distrust and increased susceptibility within the information environment. Source: MDPI, Healthcare. [online] Available at: https://www.mdpi.com/2227-9032/11/8/1089 [Appendix - Frameworks to Counter Disinformation] EU Called for Stronger Action on Information Integrity In a statement at the UN Committee on Information, the European Union reaffirmed its support for efforts to promote accurate, reliable, and accessible information worldwide. It praised the UN Department of Global Communications for advancing initiatives to counter disinformation and hate speech, including the implementation of the Global Principles for Information Integrity. The EU emphasized the importance of protecting independent media, strengthening media literacy, and ensuring transparency and accountability from digital platforms. At the same time, the EU warned that disinformation, particularly foreign information manipulation, poses a growing threat to democratic societies, international cooperation, and human rights, with risks amplified by the rise of AI. It also highlighted operational challenges facing the UN, including funding constraints that limit its ability to reach global audiences effectively. It stressed that reforms, including the use of AI, should complement rather than replace core communication functions. The EU further underscored the importance of multilingual communication and the protection of journalists, especially in conflict zones. Source: European External Action Service (EEAS). EU Statement – 48th session of the UN Committee on Information: Informal briefing with USG of the Department of Global Communications. [online] Published 2026. 
Available at: https://www.eeas.europa.eu/delegations/un-new-york/eu-statement-%E2%80%93-48th-session-un-committee-information-informal-briefing-usg-department-global_en Ofcom Fast-Tracks Additional Online Safety Measures Ofcom announced that it will take new regulatory steps to combat the spread of illegal intimate images online, including non-consensual content and explicit deepfakes. One of its proposals is the use of "hash matching" technology, which allows platforms to detect and block harmful images before they are widely shared. The decision has been fast-tracked due to the urgent need to better protect users from online abuse. The measures also address the growing role of disinformation in this space. Deepfake technology can be used to create and spread manipulated intimate images and cause serious harm to victims. By stopping such content at the source, regulators aim to limit both the abuse itself and the spread of deceptive digital material that can damage reputations and distort reality. According to Ofcom, the proposed rules are expected to be finalized soon and could take effect within months, with additional protections planned later. Source: Ofcom. Ofcom fast-tracks decision on measures to block illegal intimate images. [online] Published 18 February 2026. Available at: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/ofcom-fast-tracks-decision-on-measures-to-block-illegal-intimate-images [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue escalating in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. 
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.
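The "hash matching" technique cited in the Ofcom item can be illustrated with a minimal sketch. This is a simplified toy under stated assumptions: real deployments use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, and the function names and sample data below are invented for illustration, not drawn from Ofcom's proposal.

```python
import hashlib

# Illustrative stand-in for a database of hashes of known illegal
# images, as would be shared with platforms by a trusted body.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-known-abusive-image-bytes").hexdigest(),
}

def image_hash(data: bytes) -> str:
    """Hash raw image bytes. Production systems use perceptual
    hashing so that near-duplicate copies still match; SHA-256
    only catches byte-identical files."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Return True if an uploaded image matches a known-bad hash."""
    return image_hash(upload) in KNOWN_BAD_HASHES
```

In this sketch, an exact copy of a flagged image is blocked before publication while any other file passes; the practical difficulty such systems must address is matching altered copies, which is why deployed systems favor perceptual over cryptographic hashing.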

  • From Cyfluence to Urban Risks: Toward the Urban Cyfluence Framework

    The Urban Cyfluence Framework begins from the concept of Cyfluence, a hybrid paradigm that captures the convergence of cyber threats and influence operations, rather than treating urban systems in isolation. While existing research tends to remain fragmented, focusing separately on smart-city innovation, municipal cybersecurity, or information manipulation, this approach highlights how these domains increasingly overlap. Evidence from multiple regions shows that as cities become more digitally dependent, cyber-attacks and influence campaigns no longer operate independently. Instead, they intersect across urban services, communication channels, and, critically, public trust. The key gap is not a lack of research on cyber risks or influence operations individually, but the absence of an integrated framework that explains how physical, digital, and cognitive dimensions interact in urban environments. The Urban Cyfluence Framework is designed to address this gap by providing a more holistic lens for understanding emerging urban threats.

    Key Takeaways: An appropriate point of departure for understanding the Urban Cyfluence Framework is the newly established, hybrid paradigm of Cyfluence, rather than urban centers in isolation. This is because Cyfluence captures the growing convergence between cyber-attacks for influence and information manipulation in support of cyber threats. Current research remains conceptually segmented: smart-city research continues to prioritize optimization, innovation, and digital enablement; municipal cybersecurity studies often remain confined to technical governance or organizational resilience; and research on foreign information manipulation and interference (FIMI) and hostile cognitive threats is still largely oriented toward state competition, elections, or platform ecosystems. 
Comparative evidence from North America, Europe, Asia, Australia, and the Pacific indicates a recurring pattern: as urban governance becomes increasingly dependent on digital infrastructures, cyber threats and hostile influence cease to function as separate categories of disruption. Instead, they intersect through service environments, communication channels, and the production or erosion of public trust. The primary research gap, therefore, is not the absence of work on cyber risk or influence operations individually. Rather, it is the absence of a comprehensive analytical framework capable of explaining how physical, digital, and cognitive effects interact within the urban context. It is this epistemic gap that led to the development of the Urban Cyfluence Framework. Author: CRC Urban Cyfluence Lab Team [Download PDF Here]

  • Pro-Palestine Mobilization and Digital Influence at Columbia University

    Following the October 7, 2023 Hamas attack on Israel, U.S. college campuses became the stage for over 3,700 demonstrations and 1,150 encampments across 35 states, the largest student protest wave since the 1960s. This comprehensive multi-disciplinary study applies digital forensics, network analysis, field observations and open-source intelligence (OSINT) in order to examine Columbia University as a prime case study in contemporary hybrid cognitive threats targeting the United States. The research explores the mechanism by which grassroots activism was embedded within, and largely exploited by, a layered influence-driven ecosystem comprising domestic organizers, digital aggregators, inauthentic amplification clusters, and foreign-linked actors. To conclude, we review possible remediation strategies and practical approaches to address the increased threat posed to Managed Contested Spaces (MCSs) by hostile influence and information disorder efforts.

    Key Takeaways:
    Integrated Hybrid Influence Architecture: Columbia’s protest waves operated as tightly coupled physical–digital phenomena in which templated national toolkits and real-time messaging platforms synchronized mobilization and logistics, while an integrated physical–digital ecosystem enabled rapid scaling, narrative shaping, and the manufacture of perceived consensus.
    Bidirectional Mobilization: Protest escalation was driven by recursive information flows: bottom-up amplification transformed localized incidents into nationally salient narratives, while top-down, externally generated narratives were injected into campus organizing cycles, often shaping both the framing and timing of physical actions.
    Detection and Attribution of Inauthentic Amplification Activity: CRC researchers have mapped an array of impersonation assets, designated as the Inauthentic Journalists Persona Cluster (IJPC). Currently, we assess it is a Nigeria-based influence-for-hire narrative amplification effort. 
    Protecting Managed Contested Spaces (MCSs): High-profile targeted institutions must adopt a proactive, whole-of-institution approach against hybrid threats. This requires the integration of detection solutions, resilience-building programs, and collaboration interfaces, and the deployment of Cognitive Security Posture Management (CoSPM) visibility and response capabilities across multiple dimensions (physical–cognitive–digital), in accordance with the Cyfluence Security Paradigm. Author: Eliana Aiken and The CRC Team [Full Report Below]
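The detection work described in this item, identifying amplification clusters through shared repost behavior, can be sketched in simplified form. The accounts, post IDs, and threshold below are invented, and this is a generic illustration of overlap-based coordination detection, not the CRC team's actual methodology.

```python
from itertools import combinations

# Invented example data: account -> set of post IDs it reposted.
reposts = {
    "acct_a": {1, 2, 3, 4, 5},
    "acct_b": {1, 2, 3, 4, 6},  # heavy overlap with acct_a
    "acct_c": {7, 8, 9},        # unrelated activity
}

def jaccard(a: set, b: set) -> float:
    """Overlap of two repost sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def coordinated_pairs(data: dict, threshold: float = 0.5) -> list:
    """Flag account pairs whose repost overlap exceeds the threshold."""
    return [
        (u, v)
        for u, v in combinations(sorted(data), 2)
        if jaccard(data[u], data[v]) >= threshold
    ]

print(coordinated_pairs(reposts))  # [('acct_a', 'acct_b')]
```

Real pipelines add temporal signals (reposts within seconds of each other) and metadata similarity (creation dates, naming patterns), but the core idea, flagging behavioral overlap too high to be organic, is the same.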

  • Not All-Powerful: A Granular Perspective on Influence Networks

In many security policy debates, hostile influence campaigns by authoritarian states like China are portrayed as hyper-efficient, strategically orchestrated, and almost omnipotent. The report "Baybridge – Anatomy of a Chinese Information Influence Ecosystem," published by the French military research center IRSEM in October 2025, challenges this general perception.[i] The notion of a uniformly centralized and effective Chinese disinformation apparatus is inaccurate because no such unified structure exists. Instead, a diverse range of actors operate within this ecosystem, including private, commercially driven entities that act on behalf of the state or maintain links to state resources, often without strategic coherence, professional execution, or operational efficiency. To assess influence operations strategically, the report calls for a deeper understanding of the specific actors, structures, interests, and operational logics involved, using a dedicated analytical approach.[ii]

The Actor-Specific, Granular Approach

The actor-specific, granular analytical approach views digital influence campaigns not as monolithic operations but as complex networks of concrete actors with varying interests, capabilities, and motivations. At its core, the approach asks: Who is actually acting, within what organizational framework, using what tools, and to what end? It focuses on digital assets such as websites, social media profiles, and technical infrastructures, examining their connections, modes of control, and content strategy. This allows for the identification of the individuals, companies, or organizations involved and their actual roles and motives within the broader campaign. The approach follows a multi-step process: first, the network structure is mapped and technical linkages are revealed. Next, digital traces are attributed to real-world actors, and their interests are analyzed.
Simultaneously, the content is assessed for coherence, professionalism, and resonance with target audiences. Finally, the campaign’s actual impact is evaluated: does it exert meaningful strategic influence, or is it merely an exercise in high-volume, low-impact output?

Case Study: The Network Around Haimai and Haixun

Using this approach, the Baybridge report examines a Chinese digital influence ecosystem centered on two companies: Shenzhen Haimai Yunxiang Media Co., Ltd. (Haimai) and Shanghai Haixun Technology Co., Ltd. (Haixun). Both market PR and media packages, run multilingual websites with seemingly journalistic content, and share identical infrastructure. The report's findings imply that this is not a centrally planned and executed influence operation but a network that functions as a commercial system with propagandistic features.

Figure 1 – Infrastructure Overview,[iii] Courtesy of IRSEM

At the core are Wu Yanni, co-founder of Haimai and member of Shenzhen’s Municipal Party Committee propaganda apparatus, and Zhu Haisong, owner of Haixun and member of Guangdong’s Propaganda Department.

Figure 2 – Activities of Wu Yanni & Zhu Haisong in the public & private sectors,[iv] Courtesy of IRSEM

The IRSEM report concludes that they are not strategic propagandists, but rather local entrepreneurs leveraging political ties for commercial gain. Their motivations appear to be primarily financial, including contract acquisition, rent-seeking, and fulfilling bureaucratic performance metrics such as article volume and reach.

Why the “Baybridge” Network is Inefficient

Despite significant technical resources, the network exhibits major deficiencies in technical, structural, and content terms. Much of the content appears machine-translated, is riddled with character-encoding issues, and lacks editorial oversight. The result is an incoherent visual and linguistic output that undermines credibility and consistency.
An identified core flaw lies in the coexistence of contradictory narratives: Chinese content promotes “Positive Energy,” a state-endorsed messaging style that emphasizes harmony, optimism, and trust, while the same platforms often disseminate aggressive, conflict-driven Russian rhetoric critical of Western democracies.[v] This juxtaposition, described in the report as a “narrative cacophony,” creates tonal contradictions that cancel each other out. The incoherence is particularly damaging during moments of symbolic significance for China, such as diplomatic visits, where simultaneous aggressive Russian-led messaging seems to undercut Beijing’s intended messaging.[vi]

Conclusion

The IRSEM report demonstrates that Chinese information operations are neither uniformly structured nor consistently effective. The “Baybridge” case study highlights a particular model in which private-sector actors with close ties to the state carry out influence operations on behalf of government entities. However, their activities are primarily shaped by commercial incentives and bureaucratic performance indicators. Within this logic, quantitative metrics such as content volume, geographic reach, and language variation are prioritized, while actual strategic impact on target audiences is secondary. This setup can lead to inefficient campaigns: technically elaborate but strategically incoherent and lacking in persuasive quality. The core issue lies not in the absence of central coordination, but in the disconnect between political objectives, operational execution, and content effectiveness. These shortcomings are not unique to China, but they manifest in distinctive ways within authoritarian systems. Rather than assuming a centralized and uniformly professional influence apparatus, the report suggests an actor-specific, granular analytical approach that enables differentiation.
By mapping concrete actors, structures, and operational logics, it becomes possible to evaluate the actual relevance of an influence operation and to allocate security resources more effectively and proportionally.[vii]

[Footnotes:]
[i] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] pp. 78-79. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[ii] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] p. 79. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[iii] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] p. 18. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[iv] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] p. 42. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[v] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] pp. 56-61. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[vi] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] pp. 69-70. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[vii] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] p. 79. Published October 2025.
Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
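The first step of the granular approach described above, mapping technical linkages between digital assets, can be illustrated with a minimal sketch. All site names, fingerprint labels, and values below are hypothetical; in a real investigation such signals would come from WHOIS records, passive DNS, hosting data, and page-source artifacts such as shared analytics IDs.

```python
from collections import defaultdict

# Hypothetical observations: each site maps to the technical
# fingerprints collected for it (hosting IPs, analytics IDs,
# registrant emails). None of these values are real.
observations = {
    "news-site-a.example": {"ip:203.0.113.5", "analytics:UA-111"},
    "news-site-b.example": {"ip:203.0.113.5", "reg:ops@example.com"},
    "news-site-c.example": {"analytics:UA-222"},
    "news-site-d.example": {"reg:ops@example.com", "analytics:UA-222"},
}

def cluster_by_shared_infrastructure(obs):
    """Group sites that transitively share any technical fingerprint."""
    # Invert the mapping: fingerprint -> sites exhibiting it.
    by_fingerprint = defaultdict(set)
    for site, prints in obs.items():
        for fp in prints:
            by_fingerprint[fp].add(site)

    # Union-find over sites; sites sharing a fingerprint are merged.
    parent = {site: site for site in obs}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for sites in by_fingerprint.values():
        sites = list(sites)
        for other in sites[1:]:
            parent[find(other)] = find(sites[0])

    clusters = defaultdict(set)
    for site in obs:
        clusters[find(site)].add(site)
    return sorted(sorted(c) for c in clusters.values())

print(cluster_by_shared_infrastructure(observations))
```

Sites that transitively share any fingerprint fall into one cluster (here, all four sites connect through a shared IP, registrant, and analytics ID), which is how seemingly unrelated outlets can be tied to common operators before attribution and impact assessment proceed.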

  • CRC Spotlight: Smart Cities: Future Urban Environments in the Crosshairs of Cyber Threats and Information Disorder

    Modern smart cities rely on extensively interconnected digital infrastructures that link not only administrative processes, but also mobility, energy systems, communication networks, urban services, and private-sector platforms. This dense connectivity creates significant exposure to hybrid threats in which technical cyberattacks overlap with strategic influence efforts, affecting both critical infrastructure and the wider informational sphere of a city. Against this backdrop, the article analyzes how smart cities evolve into environments where cyber vulnerabilities and informational fragilities reinforce one another, creating conditions for the emergence of cyfluence risks—hybrid threats that combine system intrusion with targeted narrative manipulation. As municipal infrastructure increasingly depends on IoT devices, real-time data streams, cloud-based applications, and automated urban management systems, disruptions can cascade across networks, while manipulated information circulating through public apps, digital signage, transport interfaces, and social media can amplify societal impact. Smart cities thus face a dual risk landscape in which breaches of technical systems and distortions of the information ecosystem can interact, accelerate one another, and undermine public trust at scale. [Full Report Below]

  • CRC Addendum: New Threat Vectors, Legacy Constraints: Structural Gaps in Indonesia’s Smart-City Vision

    Author: Nur Mawaddah, CRC Indonesian Program Director This article examines why Indonesia’s ambitious smart-city vision, despite notable progress, continues to face significant structural challenges. It highlights how uneven local capacities, fragmented cyber governance, and limited coordination shape the pace of digital transformation, while the new capital Nusantara emerges as a technologically integrated countermodel. At its core, the article raises a key question: How can Indonesia move from isolated flagship projects toward a nationwide, resilient, and digitally connected urban system? It outlines where foundational reforms and capacity building are needed, particularly in legislation, organisational structures, digital literacy, and cognitive resilience. [Full Report Below]

  • Cyber-based hostile influence campaigns 30th March - 5th April 2026

[Introduction] Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect (henceforth Cyfluence, as opposed to cyber-attacks that aim to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered an epistemological branch of Information Operations (IO) or Information Warfare (IW). Typically, as has been customary over the last decade, the information is spread across various internet platforms, which form the different elements of the hostile influence campaign; connectivity and repetitiveness of content between several elements are the core characteristics of influence campaigns. Hostile influence campaigns, much like cyber-attacks, have also become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much as in the cyber security arena, PR professionals and government agencies are responding to negative publicity and disinformation shared over the news and social media. We use the term cyber-based hostile influence campaigns because we also include in this definition cyber-attacks aimed at influencing (such as hack-and-leak operations during election time), while we exclude from it other, more traditional kinds of influence such as diplomatic, economic, and military pressure. From the 30th of March to the 5th of April 2026, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report summarizes what we regard as the main events. Some of the mentioned campaigns involve social media and news outlets solely, while others leverage cyber-attack capabilities.
[Contents]
[Introduction]
[Report Highlights]
[Report Summary]
[State Actors]
Russia
Sputnik Czechia’s Successor Adapted to Circumvent Sanctions
Sanctioned Kremlin-Affiliated Content Persists Via Pravda Network
RT and Tucker Carlson Targeted Canada’s Sovereignty
The War in Ukraine
Kremlin Blamed NATO for Ukrainian Strikes and Warned of a "Coup" in Hungary
China
Pro-China Sources Faked an Image of a U.S. Soldier
Iran
Handala Claimed a Hack of the FBI Director's Personal Email
AI Image Featured Iranian Missile Inscribed With "No Kings"
[AI Related Articles]
AI-Driven Deception and Industrialized Scam Networks Redefine Online Disinformation Threats
[General Reports]
RFK Jr. Group Revived Long-Debunked Claim that WiFi is Dangerous
The Rise of AI Deepfakes During the Iran War
India’s Ruling Party is Using AI to Boost Hate Speech
False Claim that "Easter" Removed from Chocolate to Avoid Offending Muslims
The UAE Rewrites the Story of Iranian Strikes
TikTok Accounts Used AI-Generated Videos to Incite Protests in Moldova
[Appendix - Frameworks to Counter Disinformation]
EU Commission Boosts Independent Fact-Checking With a €5 Million Grant
Resilience to Misinformation in Kosovo’s Election Period
U.S. Struggles to Counter Rising Foreign Disinformation Campaigns
[CRC Glossary]

[Report Highlights]

DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network.

The Australian Human Rights Commission released a report stating that financial and state-linked actors are increasingly using AI-generated personas, coordinated deception, and structured scam networks to conduct large-scale disinformation and fraud campaigns that exploit trust and evade detection.
The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history.

A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal.

According to an article published by The New York Times, foreign adversaries are intensifying disinformation campaigns to spread anti-American narratives and sow division, while weakened U.S. institutional capacity and reactive strategies limit the effectiveness of countermeasures.

[Report Summary]

According to an article by DFR Lab, despite EU sanctions on the Russian state outlet Sputnik, its successor platforms, the website 42TČen and the Telegram channel neČT24, have successfully continued operations by rebranding, shifting domains, and using infrastructure spread across several countries.

DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network.

According to a report by DisinfoWatch, RT’s X account amplified an edited clip of Tucker Carlson in which he suggested that the United States should pressure Canada into changing its government, "by force if necessary".

According to an article by EU VS Disinfo, after Ukrainian drone strikes hit the Russian Baltic oil ports of Ust-Luga and Primorsk, key export hubs for Russian oil and petroleum products, pro-Kremlin media quickly shifted the narrative away from the damage and Ukraine’s military capabilities.
NewsGuard's Reality Check reported that a pro-China X account operated from China has falsely circulated an old battlefield image, claiming it shows a U.S. soldier begging an Iranian drone for mercy.

As reported by CyberNews, the pro-Iran hacking group Handala has claimed responsibility for breaching the personal email account of FBI Director Kash Patel, just days after U.S. authorities seized several websites linked to the group.

NewsGuard's Reality Check reported that an AI-generated image falsely showing an Iranian missile inscribed with the slogan "No Kings" spread rapidly online after anti-Trump protests took place across the United States.

The Australian Human Rights Commission released a report stating that financial and state-linked actors are increasingly using AI-generated personas, coordinated deception, and structured scam networks to conduct large-scale disinformation and fraud campaigns that exploit trust and evade detection.

As reported by NewsGuard's Reality Check, Children’s Health Defense, the organization founded by U.S. Health Secretary Robert F. Kennedy Jr., is promoting a misleading claim that wireless radiation has caused health problems for 26 million adults across the U.S., Australia, and Canada.

The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history.

A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal.
A Bellingcat investigation suggested that UAE authorities have repeatedly downplayed or mischaracterized the impact of Iranian drone strikes on the country, while imposing strict controls on public reporting.

As reported by DFR Lab, a coordinated network of at least 25 TikTok accounts used AI-generated videos to spread anti-government narratives in Moldova and mobilize support for protests planned for the 25th of March 2026, which ultimately never took place.

The European Commission announced that on the 31st of March 2026, it signed a 5 million euro grant agreement to support independent fact-checkers across the EU and beyond.

A report by hibrid.info examined misinformation trends in Kosovo between the 9th of October and the 31st of December 2025, during a politically sensitive period that included local and early parliamentary elections.

According to an article published by The New York Times, foreign adversaries are intensifying disinformation campaigns to spread anti-American narratives and sow division, while weakened U.S. institutional capacity and reactive strategies limit the effectiveness of countermeasures.

[State Actors]

Russia

Sputnik Czechia’s Successor Adapted to Circumvent Sanctions

According to an article by DFR Lab, despite EU sanctions on the Russian state outlet Sputnik, its successor platforms, the website 42TČen and the Telegram channel neČT24, have successfully continued operations by rebranding, shifting domains, and using infrastructure spread across several countries. After Sputnik Czechia was blocked in 2022, the new platforms openly positioned themselves as its continuation, first using a Czech domain and later moving to an anonymously registered .com domain hosted outside Czech jurisdiction. Their Facebook and X accounts are also managed from abroad, raising concerns about foreign interference.
To reduce scrutiny, the outlet has shifted away from directly reposting Kremlin media and instead increasingly relies on original interviews with Czech and Slovak political figures and commentators who promote pro-Kremlin narratives. This strategy makes the content appear more local and organic while still amplifying messages critical of the West, supportive of Russia, and hostile to Ukraine. Telegram has become its main platform, where the channel posts large volumes of unattributed news briefs, videos, and images, including repurposed content from sanctioned Russian sources, while benefiting from a growing network of amplifiers across Czech-language channels.

Source: DFRLab. How Sputnik Czechia’s successor adapted to circumvent sanctions. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/how-sputnik-czechias-successor-adapted-to-circumvent-sanctions/

Sanctioned Kremlin-Affiliated Content Persists Via Pravda Network

DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network. In early February 2026, more than 20 Pravda-linked websites published coordinated articles in multiple European languages falsely claiming that Ukrainian President Volodymyr Zelenskyy had described the frontline situation as "catastrophic" during an alleged call with Belgian Prime Minister Bart De Wever. Like many Pravda Network publications, the content promoted pro-Kremlin narratives regarding Ukraine’s imminent loss in the war and was among dozens of articles about Ukraine posted by the network in a single day.
The Pravda Network, previously exposed as a sanctions-evasion operation linked to the Crimea-based company Tigerweb, continues to amplify and translate content from Russian state-affiliated outlets such as RT, RIA Novosti, and TASS into more than 80 languages across over 150 websites. By laundering content from sanctioned media and redistributing it through websites, Telegram channels, and social media platforms, the network exploits weak and uneven sanctions enforcement across EU member states. While Ukraine has built significant resilience against Russian disinformation domestically through media bans, fact-checking initiatives, and media literacy campaigns, it has far less control over narratives spread abroad.

Source: DFRLab. Sanctioned Kremlin-affiliated content persists via Pravda Network, attempting to shape Ukraine’s image abroad. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/sanctioned-kremlin-affiliated-content-persists-via-pravda-network-attempting-to-shape-ukraines-image-abroad/

RT and Tucker Carlson Targeted Canada’s Sovereignty

According to a report by DisinfoWatch, RT’s X account amplified an edited clip of Tucker Carlson in which he suggested that the United States should pressure Canada into changing its government, "by force if necessary". Carlson made the remarks in a commentary dated to the 2nd of April 2026, and on the same day, RT reframed them under a provocative "51st state" and regime-change narrative, falsely implying that Canada is not a sovereign nation and portraying its government as oppressive. It also repeated a misleading claim that Canada has "killed nearly 100,000" citizens through its MAID program. The report argued that this is a deliberate effort to exploit both the "51st state" narrative and the Alberta separatist movement issue.

Source: DisinfoWatch. RT & Tucker Carlson Target Canada’s Sovereignty, Urging US Sponsored Regime Change. [online] Published 3 April 2026.
Available at: https://disinfowatch.org/disinfo/tucker-rt-alberta-canada-separation/

The War in Ukraine

Kremlin Blamed NATO for Ukrainian Strikes and Warned of a "Coup" in Hungary

According to an article by EU VS Disinfo, after Ukrainian drone strikes hit the Russian Baltic oil ports of Ust-Luga and Primorsk, key export hubs for Russian oil and petroleum products, pro-Kremlin media quickly shifted the narrative away from the damage and Ukraine’s military capabilities. Instead, articles circulated through the Pravda network claimed that NATO and the UK’s MI6 coordinated the strikes, with some even alleging that the drones were launched from Estonia. Russian information channels have used the incident to reinforce the narrative that the West has opened a "second front" against Russia, particularly in the Baltic region. These claims present Russia as the victim of deliberate NATO escalation and help lay the groundwork for potential retaliatory rhetoric toward neighboring Baltic states. In parallel, pro-Kremlin messaging has also turned toward Hungary’s upcoming elections. RT Arabic has accused the EU and Ukraine of plotting a "coup" against Prime Minister Viktor Orbán by allegedly using experienced organizers from Ukraine’s 2014 protests, repeating wider disinformation claims that Brussels is trying to remove him through the parliamentary elections.

Source: EUvsDisinfo. Ukraine hits Russian oil ports, Kremlin blames NATO and warns of a coup in Hungary. [online] Published 2 April 2026. Available at: https://euvsdisinfo.eu/ukraine-hits-russian-oil-ports-kremlin-blames-nato-and-warns-of-a-coup-in-hungary/

China

Pro-China Sources Faked an Image of a U.S. Soldier

NewsGuard's Reality Check reported that a pro-China X account operated from China has falsely circulated an old battlefield image, claiming it shows a U.S. soldier begging an Iranian drone for mercy.
In reality, the photo predates the current Iran conflict and appears to come from the Russia-Ukraine war. According to reverse image searches, the image has been online since at least April 2024, and the soldier’s uniform matches that of Russian troops, not U.S. forces. The fact that such a post was allowed on X, a platform banned in China, suggested that the propaganda had Chinese government backing. Similar out-of-context or inauthentic imagery has recently been spread by social media users to portray U.S. soldiers cowering before Iranian forces.

Source: NewsGuard Reality Check. Wrong War, Wrong Victim, Wrong Victor. [online] Published n.d. Available at: https://www.newsguardrealitycheck.com/p/wrong-war-wrong-victim-wrong-victor

Iran

Handala Claimed a Hack of the FBI Director's Personal Email

As reported by CyberNews, Handala said it had obtained all personal and confidential information of Kash Patel, including emails, conversations, documents, and even classified files, and that they are available for public download. The leaked material included a mix of casual personal photos showing Patel relaxing and smoking a cigar, posing next to a statue, sitting in a classic car, and taking mirror selfies with a bottle of ultra-premium Cuban rum. U.S. officials confirmed the breach and said the leaked material appeared authentic. However, the FBI stated that the compromised data was historical in nature and did not include government information. The incident comes amid an escalating cyber confrontation following the FBI and Department of Justice’s recent takedown of Handala’s online infrastructure. Prosecutors have linked the group to a broader Iranian cyber and psychological operations campaign involving data leaks, doxxing, and threats against journalists, dissidents, and Israeli-linked targets.
Security experts warn that even a breach of a personal inbox can expose sensitive contacts, travel details, and other information that may enable future phishing or follow-on attacks, making the incident significant beyond its immediate contents. Handala has recently claimed attacks on major companies such as Stryker and Verifone. It also claimed to have compromised the personal email account of former Mossad research director Sima Shine, leaking more than 100,000 emails. These operations are designed not only to cause technical disruption but also to grab attention, generate headlines, and shape public perception.

Source: Cybernews. Handala claims hack of FBI Director Kash Patel’s personal email. [online] Published 27 March 2026. Available at: https://cybernews.com/news/handala-claims-hack-fbi-director-kash-patel-personal-email/

AI Image Featured Iranian Missile Inscribed With "No Kings"

NewsGuard's Reality Check reported that an AI-generated image falsely showing an Iranian missile inscribed with the slogan "No Kings" spread rapidly online. On the 28th of March 2026, anti-Trump demonstrators staged protests across the U.S. under the "No Kings" banner. Many of the protesters expressed opposition to the war in Iran. One day after the protests, an article from an Iran-based, pro-regime news agency claimed that the IRGC had inscribed "NO KINGS" on missiles used in its latest attack against Israel, and added the image of the inscribed missile without specifying its source. Pro-Iran accounts promoted the image as a tribute by Iran’s military to the American protesters, while pro-Israel and conservative users shared it to argue that U.S. liberals were aligned with Iran. However, NewsGuard analyzed the image using two AI-detection tools, both of which flagged it as manipulated or artificially produced. This isn’t the first time fabricated Iranian missile inscriptions have circulated on social media.
In early March 2026, an AI-edited image of a missile with the phrase "In memory of the victims of Epstein Island" was spread by pro-Iran users.

Source: NewsGuard Reality Check. AI Fakes an Iranian "No Kings" Missile — and Both Sides Fall for It. [online] Published 6 April 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-fakes-an-iranian-no-kings-missile

[AI Related Articles]

AI-Driven Deception and Industrialized Scam Networks Redefine Online Disinformation Threats

The Australian Human Rights Commission released a report, coinciding with the International Transgender Day of Visibility. The report outlines a rapidly evolving disinformation and fraud ecosystem in which financially motivated threat actors, including loosely organized groups and highly structured criminal syndicates, systematically exploit digital platforms using deception at scale. These actors rely heavily on impersonation tactics, creating credible but false personas (e.g., medical professionals, landlords, or bereavement services) to build trust and manipulate targets. Increasingly, they leverage generative AI to enhance realism, producing culturally tailored, multilingual, and context-aware content, as well as synthetic imagery and conversations that mimic authentic human interaction. AI is also used to fabricate entire online infrastructures, websites, business identities, and communication flows, allowing adversaries to operate with greater efficiency and evade detection. Operationally, these campaigns follow a structured "attack chain" involving infrastructure setup, persona development, engagement through social engineering, and eventual exploitation for financial or data extraction. Tactics include directing victims off the platform to controlled environments, deploying hyper-personalized narratives, and exploiting moments of vulnerability such as grief, financial hardship, or housing insecurity.
Networks are often transnational and coordinated, with evidence of industrialized workflows, role specialization, and adaptive strategies in response to enforcement pressure. In parallel, influence operations, some state-linked, use AI-generated content and false identities to amplify political narratives, simulate grassroots legitimacy, and manipulate public discourse across multiple regions and languages.

Source: Australian Human Rights Commission. Equal Identities: A Human Rights Review of Trans and Gender Diverse Experiences in Australia. [online] Published 2024. Available at: https://humanrights.gov.au/media/documents-files-PDFs/strategic-communications/EQUAL_IDENTITIES_FINAL_DIGITAL.pdf

[General Reports]

RFK Jr. Group Revived Long-Debunked Claim that WiFi is Dangerous

As reported by NewsGuard's Reality Check, Children’s Health Defense, the organization founded by U.S. Health Secretary Robert F. Kennedy Jr., is promoting a misleading claim that wireless radiation has caused health problems for 26 million adults across the U.S., Australia, and Canada. The claim is based on a March 2026 study that relied entirely on self-reported online survey responses from 3,400 participants who believed their symptoms were linked to wireless devices. While misinformation sites presented the findings as proof that WiFi causes harm, the study itself only estimated the prevalence of people who believe they are sensitive to wireless radiation and did not establish any causal connection. In fact, the researchers explicitly stated that the study was not designed to prove that wireless technology caused the reported symptoms, nor to clinically validate the condition known as electromagnetic hypersensitivity (EHS).
Major health authorities, including the World Health Organization and Australia’s radiation safety agency, continue to state that there is no scientific evidence that low-level electromagnetic fields from WiFi or wireless devices cause these health effects, and EHS is not recognized as a medically established condition. Source: NewsGuard Reality Check. RFK Jr. Group Revives Long-Debunked Claim that WiFi is Dangerous. [online] Published 31 March 2026. Available at: https://www.newsguardrealitycheck.com/p/rfk-jr-group-revives-long-debunked Top Of Page The Rise of AI Deepfakes During the Iran War The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history. However, the crisis is not limited to fake content alone: authentic footage is increasingly being dismissed as AI-generated, creating a dangerous environment in which real evidence can be easily discredited. A key example involved an authentic "proof-of-life" video posted by Israeli Prime Minister Benjamin Netanyahu after false claims circulated that he had been killed in a missile strike (for further information, see W12 March Cyfluence Report). The rise of AI deepfakes and the dismissal of real footage are two sides of the same coin. Source: NewsGuard Reality Check. Real Is the New Fake. [online] Published date not specified.
Available at: https://www.newsguardrealitycheck.com/p/real-is-the-new-fake Top Of Page India’s Ruling Party is Using AI to Boost Hate Speech A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal. Of 499 visual social media posts reviewed from the BJP’s official state accounts in December 2025, 194 were assessed as meeting the UN’s definition of hate speech, and 31 of those included clear signs of AI-generated content. Much of the messaging portrayed Bengali-origin Muslims as “foreigners”, “infiltrators”, or threats to local communities, often using religious symbols such as skull caps to reinforce identity-based stereotypes. This content is closely tied to heightened tensions between India and Bangladesh, particularly after political instability in Bangladesh and rising nationalist sentiment on both sides of the border. AI-generated visuals were used to intensify emotionally charged narratives around immigration, religion, and public safety, including posts depicting opposition figures as supporters of alleged “foreigners”. Compared with opposition parties, hateful messaging appeared particularly concentrated in the AI-generated imagery posted by the BJP’s accounts. BJP West Bengal did not respond to multiple requests for comment by Bellingcat via phone and email. Source: Bellingcat. How India’s Ruling Party is Using AI to Boost Hate Speech in States Near Bangladesh. [online] Published 31 March 2026.
Available at: https://www.bellingcat.com/news/2026/03/31/india-bjp-hate-speech-ai/ Top Of Page False Claim that "Easter" Removed from Chocolate to Avoid Offending Muslims As NewsGuard's Reality Check reported, far-right and anti-immigrant social media accounts have been spreading a false claim that Cadbury removed the word "Easter" from its chocolate egg packaging in 2026 to avoid offending Muslims. The allegation, widely shared online ahead of Easter, accused the company of attempting to “erase” Christianity and quickly gained significant engagement across platforms. However, the claim is false. NewsGuard confirmed that the products shown in viral images still display "Happy Easter" on their packaging, and the circulating photos simply do not show that part of the box. Cadbury’s parent company, Mondelez International, also stated that the brand has used the word "Easter" in its marketing for over 100 years and continues to do so. This is the latest in a series of recurring false claims targeting Cadbury with accusations of changing its branding to appease Muslims, including earlier AI-generated fake images and misleading claims from previous years. Source: NewsGuard Reality Check. False Claim of the Week: Cadbury removed the word “Easter” from its chocolate eggs in 2026 to avoid offending Muslims. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/i/193098983/false-claim-of-the-week-cadbury-removed-the-word-easter-from-its-chocolate-eggs-in-2026-to-avoid-offending-muslims Top Of Page The UAE Rewrites the Story of Iranian Strikes A Bellingcat investigation suggested that UAE authorities have repeatedly downplayed or mischaracterized the impact of Iranian drone strikes on the country, while imposing strict controls on public reporting.
Following the start of the conflict, officials warned that sharing images or videos of strikes was illegal, leading to arrests, detentions, and expedited trials for dozens of people accused of posting "misleading" content online. At the same time, official statements often described fires and explosions as the result of debris from successful interceptions, even when open-source videos and satellite imagery appeared to show direct drone impacts on key sites such as Fujairah Port, Jebel Ali Port, hotels in Dubai, residential buildings, and even areas near Dubai International Airport. Several high-profile incidents revealed clear discrepancies between official accounts and observable evidence. The investigation also pointed to a coordinated effort to shape public perception, including pro-government influencer campaigns promoting the UAE as safe and secure despite ongoing attacks and casualties. Critics and press freedom groups argued that the conflict is being used to further restrict independent reporting and public access to information. Source: Bellingcat. How the UAE Rewrites the Story of Iranian Strikes. [online] Published 2 April 2026. Available at: https://www.bellingcat.com/news/2026/04/02/war-uae-iran-infuencer-dubai-conflict-drone-successful-strike-intercept-fire/ Top Of Page TikTok Accounts Used AI-Generated Videos to Incite Protests in Moldova As reported by DFR Lab, a coordinated network of at least 25 TikTok accounts used AI-generated videos to spread anti-government narratives in Moldova and mobilize support for protests planned for the 25th of March 2026, which ultimately never took place. The campaign targeted President Maia Sandu and the ruling Party of Action and Solidarity (PAS), portraying them as corrupt and illegitimate, and as responsible for Moldova’s political and economic problems. Many of the accounts shared nearly identical profile imagery.
They displayed coordinated behavior, including distinct "seeder" accounts that produced content, amplifier accounts that boosted visibility, and influencers who expanded the campaign’s reach to wider audiences. The network relied heavily on fabricated videos showing synthetic protest crowds and emotionally charged calls for demonstrations, while also promoting pro-Russian and anti-EU narratives. Some content idealized life during the Soviet era and framed Moldova’s European integration as a source of decline. The campaign operated in both Romanian and Russian. Several known local influencers with anti-EU and pro-Russian positions also helped amplify the messaging, contributing to millions of views and interactions across the network. Although no physical protests took place on the 25th of March, the online campaign continued beyond that date. Moldovan police publicly warned that the activity may be part of a destabilization effort. The report noted that the observed behavior appears to violate TikTok’s policies on platform integrity and political content. Source: Digital Forensic Research Lab (DFRLab). TikTok accounts use AI-generated videos to incite anti-government protests in Moldova. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/tiktok-accounts-use-ai-generated-videos-to-incite-anti-government-protests-in-moldova/ Top Of Page [Appendix - Frameworks to Counter Disinformation] EU Commission Boosts Independent Fact-Checking With a €5 Million Grant The European Commission announced that, on the 31st of March 2026, it signed a €5 million grant agreement to support independent fact-checkers across the EU and beyond. The initiative is titled "Fact-Checkers for European Resilience Against Disinformation: Network for Emergency, Protection and Technology". The Commission stated that the initiative will strengthen fact-checking capacity in all EU languages and foster cooperation among relevant players.
It will also establish a protection scheme for fact-checkers, covering assistance in matters such as legal affairs, cybersecurity, and psychological support, as well as developing an independent European repository of fact-checks. Source: European Commission. Commission boosts independent fact-checking with a €5 million grant under the European Democracy Shield. [online] Published 31 March 2026. Available at: https://digital-strategy.ec.europa.eu/en/news/commission-boosts-independent-fact-checking-eu5-million-grant-under-european-democracy-shield Top Of Page Resilience to Misinformation in Kosovo’s Election Period A report by hibrid.info examined misinformation trends in Kosovo between the 9th of October and the 31st of December 2025, during a politically sensitive period that included local and early parliamentary elections. It found a significant spread of false and misleading content across social media, foreign media outlets, and online platforms in Albanian, Serbian, and English. The most common forms included fake news, fabricated polls, false statements attributed to public figures, and manipulated images and videos, all of which had the potential to influence voters and undermine trust in the electoral process. The analysis also identified narratives with an interethnic dimension, particularly affecting northern Kosovo and Serb-majority communities, increasing the risk of social polarization and local tensions. A notable trend during the election period was the use of AI-generated and manipulated content to falsely depict political support, events, or statements, making it harder for the public to distinguish fact from deception. Signs of foreign information manipulation and interference (FIMI) were also highlighted, with narratives aimed at shaping public opinion and destabilizing democratic processes.
To address these risks, the report recommended stronger electoral monitoring, closer cooperation between fact-checkers, civil society, and EU institutions, improved platform oversight, and broader public awareness efforts to strengthen resilience against misinformation and protect democratic integrity. Source: European External Action Service (EEAS). Resilience to Misinformation in Kosovo’s Election Period. [online] Published 2 April 2026. Available at: https://www.eeas.europa.eu/sites/default/files/2026/documents/Resilience%20to%20Misinformation%20%20in%20Kosovo%E2%80%99s%20Election%20Period.pdf Top Of Page U.S. Struggles to Counter Rising Foreign Disinformation Campaigns An article published by The New York Times describes a renewed effort by the U.S. State Department to counter foreign disinformation campaigns amid a surge in anti-American narratives linked to the war in Iran. According to a diplomatic cable, foreign adversaries, including Russia, China, and Iran, are actively deploying influence operations to sow division within the United States and its allies while promoting alternative geopolitical narratives. These campaigns are amplified across both social media and traditional media, with platforms such as X identified as key vectors for the spread of false or manipulative content. The State Department’s response includes directing diplomats to actively counter misleading narratives online, deepen coordination with Pentagon information operations, and leverage tools such as Community Notes to challenge propaganda without restricting free speech. However, the article highlights structural weaknesses in the U.S. response to disinformation, noting that prior dismantling of key institutions, such as elements of the Global Engagement Center and disruptions to government-funded media outlets like Voice of America, has reduced the country’s capacity to counter hostile influence.
This has created an environment in which adversarial narratives can proliferate more effectively, compounded by declining perceptions of U.S. credibility in the global information space. Critics characterize current efforts as reactive rather than strategic, emphasizing that inconsistent policy decisions and reduced institutional capability have undermined the effectiveness of counter-disinformation initiatives, even as foreign actors continue to exploit information channels to shape perceptions and influence audiences. Source: The New York Times. Trump’s Foreign Disinformation Claims About Iran. [online] Published 1 April 2026. Available at: https://www.nytimes.com/2026/04/01/business/trump-foreign-disinformation-iran.html Top Of Page [CRC Glossary] The modern Information Environment is projected to continue growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page
