
  • Not All-Powerful: A Granular Perspective on Influence Networks

In many security policy debates, hostile influence campaigns by authoritarian states such as China are portrayed as hyper-efficient, strategically orchestrated, and almost omnipotent. The report "Baybridge – Anatomy of a Chinese Information Influence Ecosystem," published by the French military research center IRSEM in October 2025, challenges this perception. [i] The notion of a uniformly centralized and effective Chinese disinformation apparatus is inaccurate because no such unified structure exists. Instead, a diverse range of actors operates within this ecosystem, including private, commercially driven entities that act on behalf of the state or maintain links to state resources, often without strategic coherence, professional execution, or operational efficiency. To assess influence operations strategically, the report calls for a deeper understanding of the specific actors, structures, interests, and operational logics involved, using a specific analytical approach. [ii]

The Actor-Specific, Granular Approach

The actor-specific, granular analytical approach views digital influence campaigns as complex networks of concrete actors with varying interests, capabilities, and motivations. At its core, the approach asks: Who is actually acting, within what organizational framework, using what tools, and to what end? It focuses on digital assets such as websites, social media profiles, and technical infrastructures, examining their connections, modes of control, and content strategies. This allows for the identification of the individuals, companies, or organizations involved, and of their actual roles and motives within the broader campaign.

The approach follows a multi-step process: first, the network structure is mapped and technical linkages are revealed. Next, digital traces are attributed to real-world actors, and their interests are analyzed.
Simultaneously, the content is assessed for coherence, professionalism, and resonance with target audiences. Finally, the campaign's actual impact is evaluated: Does it exert meaningful strategic influence, or is it merely an exercise in high-volume, low-impact output?

Case Study: The Network Around Haimai and Haixun

Using this approach, the Baybridge report examines a Chinese digital influence ecosystem centered on two companies: Shenzhen Haimai Yunxiang Media Co., Ltd. (Haimai) and Shanghai Haixun Technology Co., Ltd. (Haixun). Both market PR and media packages, run multilingual websites with seemingly journalistic content, and share identical infrastructure. The report's findings imply that this is not a centrally planned influence operation but a network that functions as a commercial system with propagandistic features.

Figure 1 – Infrastructure Overview, [iii] Courtesy of IRSEM

At the core are Wu Yanni, co-founder of Haimai and member of Shenzhen's Municipal Party Committee propaganda apparatus, and Zhu Haisong, owner of Haixun and member of Guangdong's Propaganda Department.

Figure 2 – Activities of Wu Yanni & Zhu Haisong in the public & private sectors, [iv] Courtesy of IRSEM

The IRSEM report concludes that they are not strategic propagandists but local entrepreneurs leveraging political ties for commercial gain. Their motivations appear to be primarily financial, including contract acquisition, rent-seeking, and fulfilling bureaucratic performance metrics such as article volume and reach.

Why the "Baybridge" Network is Inefficient

Despite significant technical resources, the network exhibits major deficiencies in technical, structural, and content terms. Much of the content appears machine-translated, is riddled with character-encoding issues, and lacks editorial oversight. The result is an incoherent visual and linguistic output that undermines credibility and consistency.
A core flaw identified in the report lies in the coexistence of contradictory narratives: Chinese content promotes "Positive Energy," a state-endorsed messaging style that emphasizes harmony, optimism, and trust, while the same platforms often disseminate aggressive, conflict-driven Russian rhetoric critical of Western democracies. [v] This juxtaposition, described in the report as a "narrative cacophony," creates tonal contradictions that cancel each other out. The incoherence is particularly damaging during moments of symbolic significance for China, such as diplomatic visits, when simultaneous aggressive Russian-led messaging undercuts Beijing's intended message. [vi]

Conclusion

The IRSEM report demonstrates that Chinese information operations are neither uniformly structured nor consistently effective. The "Baybridge" case study highlights a model in which private-sector actors with close ties to the state carry out influence operations on behalf of government entities. However, their activities are primarily shaped by commercial incentives and bureaucratic performance indicators. Within this logic, quantitative metrics such as content volume, geographic reach, and language variation are prioritized, while actual strategic impact on target audiences is secondary.

This setup can lead to inefficient campaigns: technically elaborate but strategically incoherent and lacking persuasive quality. The core issue lies not in the absence of central coordination but in the disconnect between political objectives, operational execution, and content effectiveness. These shortcomings are not unique to China, but they manifest in distinctive ways within authoritarian systems.

Rather than assuming a centralized and uniformly professional influence apparatus, the report proposes an actor-specific, granular analytical approach that enables differentiation.
By mapping concrete actors, structures, and operational logics, it becomes possible to evaluate the actual relevance of an influence operation and to allocate security resources more effectively and proportionally. [vii]

[Footnotes:]
[i] IRSEM / Tadaweb & P. Charon, 2025. Baybridge: Anatomy of a Chinese information influence ecosystem – Focus no. 3. [online] pp. 78-79. Published October 2025. Available at: https://www.irsem.fr/storage/file_manager_files/2025/10/focus-3-charon-a4-ok.pdf
[ii] Ibid., p. 79.
[iii] Ibid., p. 18.
[iv] Ibid., p. 42.
[v] Ibid., pp. 56-61.
[vi] Ibid., pp. 69-70.
[vii] Ibid., p. 79.

  • CRC Spotlight: Smart Cities: Future Urban Environments in the Crosshairs of Cyber Threats and Information Disorder

    Modern smart cities rely on extensively interconnected digital infrastructures that link not only administrative processes, but also mobility, energy systems, communication networks, urban services, and private-sector platforms. This dense connectivity creates significant exposure to hybrid threats in which technical cyberattacks overlap with strategic influence efforts, affecting both critical infrastructure and the wider informational sphere of a city. Against this backdrop, the article analyzes how smart cities evolve into environments where cyber vulnerabilities and informational fragilities reinforce one another, creating conditions for the emergence of cyfluence risks—hybrid threats that combine system intrusion with targeted narrative manipulation. As municipal infrastructure increasingly depends on IoT devices, real-time data streams, cloud-based applications, and automated urban management systems, disruptions can cascade across networks, while manipulated information circulating through public apps, digital signage, transport interfaces, and social media can amplify societal impact. Smart cities thus face a dual risk landscape in which breaches of technical systems and distortions of the information ecosystem can interact, accelerate one another, and undermine public trust at scale. [Full Report Below]

  • CRC Addendum: New Threat Vectors, Legacy Constraints: Structural Gaps in Indonesia’s Smart-City Vision

    Author: Nur Mawaddah, CRC Indonesian Program Director This article examines why Indonesia’s ambitious smart-city vision, despite notable progress, continues to face significant structural challenges. It highlights how uneven local capacities, fragmented cyber governance, and limited coordination shape the pace of digital transformation, while the new capital Nusantara emerges as a technologically integrated countermodel. At its core, the article raises a key question: How can Indonesia move from isolated flagship projects toward a nationwide, resilient, and digitally connected urban system? It outlines where foundational reforms and capacity building are needed, particularly in legislation, organisational structures, digital literacy, and cognitive resilience. [Full Report Below]

  • Cyber-based hostile influence campaigns 30th March - 5th April 2026

[Introduction] Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect (hence the term Cyfluence, as opposed to cyber-attacks that aim to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered a branch of Information Operations (IO) or Information Warfare (IW). Typically, and as has been customary during the last decade, the information is spread across various internet platforms, which form the different elements of the hostile influence campaign; connectivity and repetitiveness of content between several elements are therefore core characteristics of influence campaigns. Hostile influence campaigns, much like cyber-attacks, have become a tool for rival nations and corporations to damage reputations or achieve various business, political, or ideological goals. Much as in the cyber-security arena, PR professionals and government agencies are responding to negative publicity and disinformation spread over the news and social media. We use the term cyber-based hostile influence campaigns because we also include in this definition cyber-attacks aimed at influencing (such as hack-and-leak operations during election time), while we exclude other, more traditional kinds of influence such as diplomatic, economic, and military means. Between the 30th of March and the 5th of April 2026, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events. Some of the mentioned campaigns involve social media and news outlets solely, while others leverage cyber-attack capabilities.
[Contents]
[Introduction]
[Report Highlights]
[Report Summary]
[State Actors]
Russia
Sputnik Czechia’s Successor Adapted to Circumvent Sanctions
Sanctioned Kremlin-Affiliated Content Persists Via Pravda Network
RT and Tucker Carlson Targeted Canada’s Sovereignty
The War in Ukraine
Kremlin Blamed NATO for Ukrainian Strikes and Warned of a "Coup" in Hungary
China
Pro-China Sources Faked an Image of a U.S. Soldier
Iran
Handala Claimed a Hack of the FBI Director’s Personal Email
AI Image Featured Iranian Missile Inscribed With "No Kings"
[AI Related Articles]
AI-Driven Deception and Industrialized Scam Networks Redefine Online Disinformation Threats
[General Reports]
RFK Jr. Group Revived Long-Debunked Claim that WiFi is Dangerous
The Rise of AI Deepfakes During the Iran War
India’s Ruling Party is Using AI to Boost Hate Speech
False Claim that "Easter" Removed from Chocolate to Avoid Offending Muslims
The UAE Rewrites the Story of Iranian Strikes
TikTok Accounts Used AI-Generated Videos to Incite Protests in Moldova
[Appendix - Frameworks to Counter Disinformation]
EU Commission Boosts Independent Fact-Checking With a €5 Million Grant
Resilience to Misinformation in Kosovo’s Election Period
U.S. Struggles to Counter Rising Foreign Disinformation Campaigns
[CRC Glossary]

[Report Highlights]
DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network.
The Australian Human Rights Commission released a report stating that financial and state-linked actors are increasingly using AI-generated personas, coordinated deception, and structured scam networks to conduct large-scale disinformation and fraud campaigns that exploit trust and evade detection.
The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history.
A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal.
According to an article published by The New York Times, foreign adversaries are intensifying disinformation campaigns to spread anti-American narratives and sow division, while weakened U.S. institutional capacity and reactive strategies limit the effectiveness of countermeasures.

[Report Summary]
According to an article by DFR Lab, despite EU sanctions on the Russian state outlet Sputnik, its successor platforms, the website 42TČen and the Telegram channel neČT24, have successfully continued operations by rebranding, shifting domains, and using infrastructure spread across several countries.
DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network.
According to a report by DisinfoWatch, RT’s X account amplified an edited clip of Tucker Carlson in which he suggested that the United States should pressure Canada into changing its government, "by force if necessary".
According to an article by EU VS Disinfo, after Ukrainian drone strikes hit the Russian Baltic oil ports of Ust-Luga and Primorsk, key export hubs for Russian oil and petroleum products, pro-Kremlin media quickly shifted the narrative away from the damage and Ukraine’s military capabilities.
NewsGuard's Reality Check reported that a pro-China X account operated from China has falsely circulated an old battlefield image, claiming it shows a U.S. soldier begging an Iranian drone for mercy.
As reported by CyberNews, the pro-Iran hacking group Handala has claimed responsibility for breaching the personal email account of FBI Director Kash Patel, just days after U.S. authorities seized several websites linked to the group.
NewsGuard's Reality Check reported that an AI-generated image falsely showing an Iranian missile inscribed with the slogan "No Kings" spread rapidly online after anti-Trump protests took place across the United States.
The Australian Human Rights Commission released a report stating that financial and state-linked actors are increasingly using AI-generated personas, coordinated deception, and structured scam networks to conduct large-scale disinformation and fraud campaigns that exploit trust and evade detection.
As reported by NewsGuard's Reality Check, Children’s Health Defense, the organization founded by U.S. Health Secretary Robert F. Kennedy Jr., is promoting a misleading claim that wireless radiation has caused health problems for 26 million adults across the U.S., Australia, and Canada.
The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history.
A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal.
A Bellingcat investigation suggested that UAE authorities have repeatedly downplayed or mischaracterized the impact of Iranian drone strikes on the country, while imposing strict controls on public reporting.
As reported by DFR Lab, a coordinated network of at least 25 TikTok accounts used AI-generated videos to spread anti-government narratives in Moldova and mobilize support for protests planned for the 25th of March 2026, which ultimately never took place.
The European Commission announced that, on the 31st of March 2026, it signed a €5 million grant agreement to support independent fact-checkers across the EU and beyond.
A report by hibrid.info examined misinformation trends in Kosovo between the 9th of October and the 31st of December 2025, during a politically sensitive period that included local and early parliamentary elections.
According to an article published by The New York Times, foreign adversaries are intensifying disinformation campaigns to spread anti-American narratives and sow division, while weakened U.S. institutional capacity and reactive strategies limit the effectiveness of countermeasures.

[State Actors]

Russia

Sputnik Czechia’s Successor Adapted to Circumvent Sanctions

According to an article by DFR Lab, despite EU sanctions on the Russian state outlet Sputnik, its successor platforms, the website 42TČen and the Telegram channel neČT24, have successfully continued operations by rebranding, shifting domains, and using infrastructure spread across several countries. After Sputnik Czechia was blocked in 2022, the new platforms openly positioned themselves as its continuation, first using a Czech domain and later moving to an anonymously registered .com domain hosted outside Czech jurisdiction. Their Facebook and X accounts are also managed from abroad, raising concerns about foreign interference.
To reduce scrutiny, the outlet has shifted away from directly reposting Kremlin media and instead increasingly relies on original interviews with Czech and Slovak political figures and commentators who promote pro-Kremlin narratives. This strategy makes the content appear more local and organic while still amplifying messages critical of the West, supportive of Russia, and hostile to Ukraine. Telegram has become its main platform, where the channel posts large volumes of unattributed news briefs, videos, and images, including repurposed content from sanctioned Russian sources, while benefiting from a growing network of amplifiers across Czech-language channels.

Source: DFRLab. How Sputnik Czechia’s successor adapted to circumvent sanctions. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/how-sputnik-czechias-successor-adapted-to-circumvent-sanctions/

Sanctioned Kremlin-Affiliated Content Persists Via Pravda Network

DFR Lab reported that despite extensive EU sanctions on Kremlin-linked media, pro-Kremlin content remains widely accessible across Europe through mirror websites, third-party reposts, and networks such as the Pravda Network. In early February 2026, more than 20 Pravda-linked websites published coordinated articles in multiple European languages falsely claiming that Ukrainian President Volodymyr Zelenskyy had described the frontline situation as "catastrophic" during an alleged call with Belgian Prime Minister Bart De Wever. Like many Pravda Network publications, the content promoted pro-Kremlin narratives regarding Ukraine’s imminent loss in the war and was among dozens of articles about Ukraine posted by the network in a single day.
The Pravda Network, previously exposed as a sanctions-evasion operation linked to the Crimea-based company Tigerweb, continues to amplify and translate content from Russian state-affiliated outlets such as RT, RIA Novosti, and TASS into more than 80 languages across over 150 websites. By laundering content from sanctioned media and redistributing it through websites, Telegram channels, and social media platforms, the network exploits weak and uneven sanctions enforcement across EU member states. While Ukraine has built significant resilience against Russian disinformation domestically through media bans, fact-checking initiatives, and media literacy campaigns, it has far less control over narratives spread abroad.

Source: DFRLab. Sanctioned Kremlin-affiliated content persists via Pravda Network, attempting to shape Ukraine’s image abroad. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/sanctioned-kremlin-affiliated-content-persists-via-pravda-network-attempting-to-shape-ukraines-image-abroad/

RT and Tucker Carlson Targeted Canada’s Sovereignty

According to a report by DisinfoWatch, RT’s X account amplified an edited clip of Tucker Carlson in which he suggested that the United States should pressure Canada into changing its government, "by force if necessary". Carlson made the remarks in a commentary dated the 2nd of April 2026, and on the same day, RT reframed them under a provocative "51st state" and regime-change narrative, falsely implying that Canada is not a sovereign nation and portraying its government as oppressive. It also repeated a misleading claim that Canada has "killed nearly 100,000" citizens through its MAID program. The report argued that this is a deliberate effort to exploit both the "51st state" narrative and the Alberta separatist movement.

Source: DisinfoWatch. RT & Tucker Carlson Target Canada’s Sovereignty, Urging US Sponsored Regime Change. [online] Published 3 April 2026.
Available at: https://disinfowatch.org/disinfo/tucker-rt-alberta-canada-separation/

The War in Ukraine

Kremlin Blamed NATO for Ukrainian Strikes and Warned of a "Coup" in Hungary

According to an article by EU VS Disinfo, after Ukrainian drone strikes hit the Russian Baltic oil ports of Ust-Luga and Primorsk, key export hubs for Russian oil and petroleum products, pro-Kremlin media quickly shifted the narrative away from the damage and Ukraine’s military capabilities. Instead, articles circulated through the Pravda network claimed that NATO and the UK’s MI6 coordinated the strikes, with some even alleging that the drones were launched from Estonia. Russian information channels have used the incident to reinforce the narrative that the West has opened a "second front" against Russia, particularly in the Baltic region. These claims present Russia as the victim of deliberate NATO escalation and help lay the groundwork for potential retaliatory rhetoric toward neighboring Baltic states. In parallel, pro-Kremlin messaging has also turned toward Hungary’s upcoming elections. RT Arabic has accused the EU and Ukraine of plotting a "coup" against Prime Minister Viktor Orbán by allegedly using experienced organizers from Ukraine’s 2014 protests, repeating wider disinformation claims that Brussels is trying to remove him through the parliamentary elections.

Source: EUvsDisinfo. Ukraine hits Russian oil ports, Kremlin blames NATO and warns of a coup in Hungary. [online] Published 2 April 2026. Available at: https://euvsdisinfo.eu/ukraine-hits-russian-oil-ports-kremlin-blames-nato-and-warns-of-a-coup-in-hungary/

China

Pro-China Sources Faked an Image of a U.S. Soldier

NewsGuard's Reality Check reported that a pro-China X account operated from China has falsely circulated an old battlefield image, claiming it shows a U.S. soldier begging an Iranian drone for mercy.
In reality, the photo predates the current Iran conflict and appears to come from the Russia-Ukraine war. According to reverse image searches, the image has been online since at least April 2024, and the soldier’s uniform matches that of Russian troops, not U.S. forces. The fact that such a post was allowed on X, a platform banned in China, suggests that the propaganda had Chinese government backing. Similar out-of-context or inauthentic imagery has recently been spread by social media users to portray U.S. soldiers cowering before Iranian forces.

Source: NewsGuard Reality Check. Wrong War, Wrong Victim, Wrong Victor. [online] Published n.d. Available at: https://www.newsguardrealitycheck.com/p/wrong-war-wrong-victim-wrong-victor

Iran

Handala Claimed a Hack of the FBI Director’s Personal Email

As reported by CyberNews, Handala said it had obtained all personal and confidential information of Kash Patel, including emails, conversations, documents, and even classified files, and that they are available for public download. The leaked material included a mix of casual personal photos showing Patel relaxing and smoking a cigar, posing next to a statue, sitting in a classic car, and taking mirror selfies with a bottle of ultra-premium Cuban rum. U.S. officials confirmed the breach and said the leaked material appeared authentic. However, the FBI stated that the compromised data was historical in nature and did not include government information. The incident comes amid an escalating cyber confrontation following the FBI and Department of Justice’s recent takedown of Handala’s online infrastructure. Prosecutors have linked the group to a broader Iranian cyber and psychological operations campaign involving data leaks, doxxing, and threats against journalists, dissidents, and Israeli-linked targets.
Security experts warn that even a breach of a personal inbox can expose sensitive contacts, travel details, and other information that may enable future phishing or follow-on attacks, making the incident significant beyond its immediate contents. Handala has recently claimed attacks on major companies such as Stryker and Verifone. It also claimed to have compromised the personal email account of former Mossad research director Sima Shine, leaking more than 100,000 emails. These operations are designed not only to cause technical disruption but also to grab attention, generate headlines, and shape public perception.

Source: Cybernews. Handala claims hack of FBI Director Kash Patel’s personal email. [online] Published 27 March 2026. Available at: https://cybernews.com/news/handala-claims-hack-fbi-director-kash-patel-personal-email/

AI Image Featured Iranian Missile Inscribed With "No Kings"

NewsGuard's Reality Check reported that an AI-generated image falsely showing an Iranian missile inscribed with the slogan "No Kings" spread rapidly online. On the 28th of March 2026, anti-Trump demonstrators staged protests across the U.S. under the "No Kings" banner. Many of the protesters expressed opposition to the war in Iran. One day after the protests, an article from an Iran-based, pro-regime news agency claimed that the IRGC had inscribed "NO KINGS" on missiles used in its latest attack against Israel, and added the image of the inscribed missile without specifying its source. Pro-Iran accounts promoted the image as a tribute by Iran’s military to the American protesters, while pro-Israel and conservative users shared it to argue that U.S. liberals were aligned with Iran. However, NewsGuard analyzed the image using two AI-detection tools, both of which flagged it as manipulated or artificially produced. This isn’t the first time fabricated Iranian missile inscriptions have circulated on social media.
In early March 2026, an AI-edited image of a missile with the phrase "In memory of the victims of Epstein Island" was spread by pro-Iran users.

Source: NewsGuard Reality Check. AI Fakes an Iranian “No Kings” Missile — and Both Sides Fall for It. [online] Published 6 April 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-fakes-an-iranian-no-kings-missile

[AI Related Articles]

AI-Driven Deception and Industrialized Scam Networks Redefine Online Disinformation Threats

The Australian Human Rights Commission released a report, coinciding with the International Trans Day of Visibility. The report outlines a rapidly evolving disinformation and fraud ecosystem in which financially motivated threat actors, including loosely organized groups and highly structured criminal syndicates, systematically exploit digital platforms using deception at scale. These actors rely heavily on impersonation tactics, creating credible but false personas (e.g., medical professionals, landlords, or bereavement services) to build trust and manipulate targets. Increasingly, they leverage generative AI to enhance realism, producing culturally tailored, multilingual, and context-aware content, as well as synthetic imagery and conversations that mimic authentic human interaction. AI is also used to fabricate entire online infrastructures, websites, business identities, and communication flows, allowing adversaries to operate with greater efficiency and evade detection.

Operationally, these campaigns follow a structured "attack chain" involving infrastructure setup, persona development, engagement through social engineering, and eventual exploitation for financial or data extraction. Tactics include directing victims off the platform to controlled environments, deploying hyper-personalized narratives, and exploiting moments of vulnerability such as grief, financial hardship, or housing insecurity.
Networks are often transnational and coordinated, with evidence of industrialized workflows, role specialization, and adaptive strategies in response to enforcement pressure. In parallel, influence operations, some state-linked, use AI-generated content and false identities to amplify political narratives, simulate grassroots legitimacy, and manipulate public discourse across multiple regions and languages.

Source: Australian Human Rights Commission. Equal Identities: A Human Rights Review of Trans and Gender Diverse Experiences in Australia. [online] Published 2024. Available at: https://humanrights.gov.au/media/documents-files-PDFs/strategic-communications/EQUAL_IDENTITIES_FINAL_DIGITAL.pdf

[General Reports]

RFK Jr. Group Revived Long-Debunked Claim that WiFi is Dangerous

As reported by NewsGuard's Reality Check, Children’s Health Defense, the organization founded by the U.S. Health Secretary Robert F. Kennedy Jr., is promoting a misleading claim that wireless radiation has caused health problems for 26 million adults across the U.S., Australia, and Canada. The claim is based on a March 2026 study that relied entirely on self-reported online survey responses from 3,400 participants who believed their symptoms were linked to wireless devices. While misinformation sites presented the findings as proof that WiFi causes harm, the study itself only estimated the prevalence of people who believe they are sensitive to wireless radiation and did not establish any causal connection. In fact, the researchers explicitly stated that the study was not designed to prove that wireless technology caused the reported symptoms, nor to clinically validate the condition known as electromagnetic hypersensitivity (EHS).
Major health authorities, including the World Health Organization and Australia's radiation safety agency, continue to state that there is no scientific evidence that low-level electromagnetic fields from WiFi or wireless devices cause these health effects, and EHS is not recognized as a medically established condition. Source: NewsGuard Reality Check. RFK Jr. Group Revives Long-Debunked Claim that WiFi is Dangerous. [online] Published 31 March 2026. Available at: https://www.newsguardrealitycheck.com/p/rfk-jr-group-revives-long-debunked
The Rise of AI Deepfakes During the Iran War
The war involving Iran has triggered an unprecedented surge in AI-generated misinformation, with fabricated videos, synthetic images of destruction, and fake reports of public figures being killed spreading widely across social media. According to NewsGuard's Reality Check, the volume and realism of this content over the past month have been unlike anything it has tracked in its 8-year history. However, the crisis is not limited to fake content alone: authentic footage is increasingly being dismissed as AI-generated, creating a dangerous environment in which real evidence can be easily discredited. A key example involved an authentic "proof-of-life" video posted by Israeli Prime Minister Benjamin Netanyahu after false claims circulated that he had been killed in a missile strike (for further information, see the W12 March Cyfluence Report). The rise of AI deepfakes and the dismissal of real footage are two sides of the same coin. Source: NewsGuard Reality Check. Real Is the New Fake. [online] Published date not specified.
Available at: https://www.newsguardrealitycheck.com/p/real-is-the-new-fake
India's Ruling Party is Using AI to Boost Hate Speech
A Bellingcat investigation found that the Bharatiya Janata Party (BJP) has been using AI-generated imagery and videos to amplify anti-Muslim and anti-Bangladeshi messaging ahead of upcoming state elections in Assam and West Bengal. Of 499 visual social media posts reviewed from the BJP's official state accounts in December 2025, 194 were assessed as meeting the UN's definition of hate speech, and 31 of those included clear signs of AI-generated content. Much of the messaging portrayed Bengali-origin Muslims as "foreigners", "infiltrators", or threats to local communities, often using religious symbols such as skull caps to reinforce identity-based stereotypes. This content is closely tied to heightened tensions between India and Bangladesh, particularly after political instability in Bangladesh and rising nationalist sentiment on both sides of the border. AI-generated visuals were used to intensify emotionally charged narratives around immigration, religion, and public safety, including posts depicting opposition figures as supporters of alleged "foreigners". Compared with opposition parties, the BJP's accounts showed a particularly high concentration of hateful messaging in their AI-generated imagery. BJP West Bengal did not respond to multiple requests for comment from Bellingcat by phone and email. Source: Bellingcat. How India's Ruling Party is Using AI to Boost Hate Speech in States Near Bangladesh. [online] Published 31 March 2026.
Available at: https://www.bellingcat.com/news/2026/03/31/india-bjp-hate-speech-ai/
False Claim that "Easter" Was Removed from Chocolate to Avoid Offending Muslims
As NewsGuard's Reality Check reported, far-right and anti-immigrant social media accounts have been spreading a false claim that Cadbury removed the word "Easter" from its chocolate egg packaging in 2026 to avoid offending Muslims. The allegation, widely shared online ahead of Easter, accused the company of attempting to "erase" Christianity and quickly gained significant engagement across platforms. However, the claim is false. NewsGuard confirmed that the products shown in viral images still display "Happy Easter" on their packaging, and the circulating photos simply do not show that part of the box. Cadbury's parent company, Mondelez International, also stated that the brand has used the word "Easter" in its marketing for over 100 years and continues to do so. This is the latest in a series of recurring false claims targeting Cadbury with accusations of changing its branding to appease Muslims, including earlier AI-generated fake images and misleading claims from previous years. Source: NewsGuard Reality Check. False Claim of the Week: Cadbury removed the word "Easter" from its chocolate eggs in 2026 to avoid offending Muslims. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/i/193098983/false-claim-of-the-week-cadbury-removed-the-word-easter-from-its-chocolate-eggs-in-2026-to-avoid-offending-muslims
The UAE Rewrites the Story of Iranian Strikes
A Bellingcat investigation suggested that UAE authorities have repeatedly downplayed or mischaracterized the impact of Iranian drone strikes on the country, while imposing strict controls on public reporting.
Following the start of the conflict, officials warned that sharing images or videos of strikes was illegal, leading to arrests, detentions, and expedited trials for dozens of people accused of posting "misleading" content online. At the same time, official statements often described fires and explosions as the result of debris from successful interceptions, even when open-source videos and satellite imagery appeared to show direct drone impacts on key sites such as Fujairah Port, Jebel Ali Port, hotels in Dubai, residential buildings, and even areas near Dubai International Airport. Several high-profile incidents revealed clear discrepancies between official accounts and observable evidence. The investigation also pointed to a coordinated effort to shape public perception, including pro-government influencer campaigns promoting the UAE as safe and secure despite ongoing attacks and casualties. Critics and press freedom groups argued that the conflict is being used to further restrict independent reporting and public access to information. Source: Bellingcat. How the UAE Rewrites the Story of Iranian Strikes. [online] Published 2 April 2026. Available at: https://www.bellingcat.com/news/2026/04/02/war-uae-iran-infuencer-dubai-conflict-drone-successful-strike-intercept-fire/
TikTok Accounts Used AI-Generated Videos to Incite Protests in Moldova
As reported by DFRLab, a coordinated network of at least 25 TikTok accounts used AI-generated videos to spread anti-government narratives in Moldova and mobilize support for protests planned for the 25th of March 2026, which ultimately never took place. The campaign targeted President Maia Sandu and the ruling Party of Action and Solidarity (PAS), portraying them as corrupt and illegitimate, and as responsible for Moldova's political and economic problems. Many of the accounts shared nearly identical profile imagery.
They displayed coordinated behavior, including distinct "seeder" accounts that produced content, amplifier accounts that boosted visibility, and influencers who expanded the campaign's reach to wider audiences. The network relied heavily on fabricated videos showing synthetic protest crowds and emotionally charged calls for demonstrations, while also promoting pro-Russian and anti-EU narratives. Some content idealized life during the Soviet era and framed Moldova's European integration as a source of decline. The campaign operated in both Romanian and Russian. Several known local influencers with anti-EU and pro-Russian positions also helped amplify the messaging, contributing to millions of views and interactions across the network. Although no physical protests took place on the 25th of March, the online campaign continued beyond that date. Moldovan police publicly warned that the activity may be part of a destabilization effort. The report noted that the observed behavior appears to violate TikTok's policies on platform integrity and political content. Source: Digital Forensic Research Lab (DFRLab). TikTok accounts use AI-generated videos to incite anti-government protests in Moldova. [online] Published 3 April 2026. Available at: https://dfrlab.org/2026/04/03/tiktok-accounts-use-ai-generated-videos-to-incite-anti-government-protests-in-moldova/
[Appendix - Frameworks to Counter Disinformation]
EU Commission Boosts Independent Fact-Checking With a €5 Million Grant
The European Commission announced that on the 31st of March 2026 it signed a €5 million grant agreement to support independent fact-checkers across the EU and beyond. The initiative is titled "Fact-Checkers for European Resilience Against Disinformation: Network for Emergency, Protection and Technology". The Commission stated that the initiative will strengthen fact-checking capacity in all EU languages and foster cooperation among relevant players.
It will also establish a protection scheme for fact-checkers, covering assistance in matters such as legal affairs, cybersecurity, and psychological support, as well as developing an independent European repository of fact-checks. Source: European Commission. Commission boosts independent fact-checking with a €5 million grant under the European Democracy Shield. [online] Published 31 March 2026. Available at: https://digital-strategy.ec.europa.eu/en/news/commission-boosts-independent-fact-checking-eu5-million-grant-under-european-democracy-shield
Resilience to Misinformation in Kosovo's Election Period
A report by hibrid.info examined misinformation trends in Kosovo between the 9th of October and the 31st of December 2025, during a politically sensitive period that included local and early parliamentary elections. It found a significant spread of false and misleading content across social media, foreign media outlets, and online platforms in Albanian, Serbian, and English. The most common forms included fake news, fabricated polls, false statements attributed to public figures, and manipulated images and videos, all of which had the potential to influence voters and undermine trust in the electoral process. The analysis also identified narratives with an interethnic dimension, particularly affecting northern Kosovo and Serb-majority communities, increasing the risk of social polarization and local tensions. A notable trend during the election period was the use of AI-generated and manipulated content to falsely depict political support, events, or statements, making it harder for the public to distinguish fact from deception. Signs of foreign information manipulation and interference (FIMI) were also highlighted, with narratives aimed at shaping public opinion and destabilizing democratic processes.
To address these risks, the report recommended stronger electoral monitoring, closer cooperation between fact-checkers, civil society, and EU institutions, improved platform oversight, and broader public awareness efforts to strengthen resilience against misinformation and protect democratic integrity. Source: European External Action Service (EEAS). Resilience to Misinformation in Kosovo's Election Period. [online] Published 2 April 2026. Available at: https://www.eeas.europa.eu/sites/default/files/2026/documents/Resilience%20to%20Misinformation%20%20in%20Kosovo%E2%80%99s%20Election%20Period.pdf
U.S. Struggles to Counter Rising Foreign Disinformation Campaigns
An article published by The New York Times describes a renewed effort by the U.S. State Department to counter foreign disinformation campaigns amid a surge in anti-American narratives linked to the war in Iran. According to a diplomatic cable, foreign adversaries, including Russia, China, and Iran, are actively deploying influence operations to sow division within the United States and among its allies while promoting alternative geopolitical narratives. These campaigns are amplified across both social media and traditional media, with platforms such as X identified as key vectors for the spread of false or manipulative content. The State Department's response includes directing diplomats to actively counter misleading narratives online, deepen coordination with Pentagon information operations, and leverage tools such as Community Notes to challenge propaganda without restricting free speech. However, the article highlights structural weaknesses in the U.S. response to disinformation, noting that the prior dismantling of key institutions, such as elements of the Global Engagement Center, and disruptions to government-funded media outlets like Voice of America have reduced the country's capacity to counter hostile influence.
This has created an environment in which adversarial narratives can proliferate more effectively, compounded by declining perceptions of U.S. credibility in the global information space. Critics characterize current efforts as reactive rather than strategic, emphasizing that inconsistent policy decisions and reduced institutional capability have undermined the effectiveness of counter-disinformation initiatives, even as foreign actors continue to exploit information channels to shape perceptions and influence audiences. Source: The New York Times. Trump's Foreign Disinformation Claims About Iran. [online] Published 1 April 2026. Available at: https://www.nytimes.com/2026/04/01/business/trump-foreign-disinformation-iran.html
[CRC Glossary]
The modern information environment is projected to keep growing in complexity and sophistication. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.

  • Pro-Palestine Mobilization and Digital Influence at Columbia University

    Following the October 7, 2023 Hamas attack on Israel, U.S. college campuses became the stage for over 3,700 demonstrations and 1,150 encampments across 35 states, the largest student protest wave since the 1960s. This multi-disciplinary study applies digital forensics, network analysis, field observations, and open-source intelligence (OSINT) to examine Columbia University as a prime case study in contemporary hybrid cognitive threats targeting the United States. The research explores the mechanism by which grassroots activism was embedded within, and largely exploited by, a layered influence-driven ecosystem comprising domestic organizers, digital aggregators, inauthentic amplification clusters, and foreign-linked actors. To conclude, we review possible remediation strategies and practical approaches to address the increased threat posed to Managed Contested Spaces (MCSs) by hostile influence and information disorder efforts.
    Key Takeaways
    Integrated Hybrid Influence Architecture: Columbia's protest waves operated as tightly coupled physical–digital phenomena in which templated national toolkits and real-time messaging platforms synchronized mobilization and logistics, while an integrated physical–digital ecosystem enabled rapid scaling, narrative shaping, and the manufacture of perceived consensus.
    Bidirectional Mobilization: Protest escalation was driven by recursive information flows: bottom-up amplification transformed localized incidents into nationally salient narratives, while top-down, externally generated narratives were injected into campus organizing cycles, often shaping both the framing and timing of physical actions.
    Detection and Attribution of Inauthentic Amplification Activity: CRC researchers have mapped an array of impersonation assets, designated the Inauthentic Journalists Persona Cluster (IJPC). We currently assess it to be a Nigeria-based influence-for-hire narrative amplification effort.
    Protecting Managed Contested Spaces (MCSs): High-profile targeted institutions must adopt a proactive, whole-of-institution approach against hybrid threats. This requires integrating detection solutions, resilience-building programs, and collaboration interfaces, and deploying Cognitive Security Posture Management (CoSPM) visibility and response capabilities across multiple dimensions (physical–cognitive–digital), in accordance with the Cyfluence Security Paradigm. Authors: Eliana Aiken and The CRC Team [Full Report Below]

  • Cyber-based hostile influence campaigns 23rd - 29th March 2026

    [Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Contents]
[Introduction]
[Report Highlights]
[State Actors]
Russia: Russia's Promotion of Separatism Abroad; Russian Hybrid Tactics in Europe 2022 to 2025; Disinformation Campaign Targeting Kaja Kallas
China: Disinformation Denying Uyghur Forced Labor in China
Iran: Pro-Iranian Nasir Security Targets the Energy Sector in the Middle East; Disinformation Trends in the 2026 Iran War; Disinformation and Hybrid Coercion in Iran's War Strategy
[General Reports]
AI-Generated YouTube Channels Spread Fake News Reports; Pierre Poilievre's Misinformation on Joe Rogan's Podcast; Disinformation and Climate Information Integrity in Australia; The Return of Claims that Trump's Assassination Attempt Was Staged; AI-Generated Audio of Clinton Criticized the Iran War; Orbán Doubled Down on Anti-Ukrainian Campaign to Secure Reelection; Disinformation After the Bondi Attack; Orbán Spokesperson Misrepresented 2021 Lawsuit to Smear Journalist Catherine Belton
[Appendix - Frameworks to Counter Disinformation]
Disrupting the foundations of FIMI; Trump Administration Accused of Turning Voice of America into a Partisan Propaganda Outlet
[CRC Glossary]
[Report Highlights]
A Center for Strategic and International Studies (CSIS) article outlined how Iran is conducting a multidomain "punishment campaign" that combines military, economic, cyber, and informational tactics to pressure the United States and Israel indirectly.
As revealed by DisinfoWatch, a post from the Chinese Embassy in Canada exemplifies a coordinated disinformation effort aimed at denying well-documented human rights abuses in Xinjiang. Australia's Senate inquiry highlighted growing concern in Australia about the widespread impact of misinformation and disinformation on climate change and energy debates. According to an article by The Jamestown Foundation, Hungary's ruling Fidesz party has intensified its anti-Ukrainian rhetoric ahead of the country's most competitive election in 16 years, using recent tensions with Kyiv to strengthen its campaign. A federal lawsuit filed by VOA journalists and press freedom groups alleges that the Trump administration, through USAGM leadership, systematically converted Voice of America into a state propaganda outlet by mandating pro-White House messaging, suppressing independent coverage, and manipulating Persian-language broadcasts into Iran, undermining VOA's core mandate to serve as a credible alternative to authoritarian media.
[State Actors]
Russia
Russia's Promotion of Separatism Abroad
A report by EUvsDisinfo argued that the Kremlin promotes separatist movements abroad while harshly suppressing similar ideas at home. Russia has repeatedly supported or amplified secessionist narratives in Western countries such as the United States, Canada, and Spain, often through coordinated disinformation campaigns and online networks. Examples include backing "Texit" rhetoric, reviving Alberta independence claims, and spreading misinformation around Catalonia and Brexit. In places like Moldova and Georgia, Moscow maintains influence through breakaway territories such as Transnistria and South Ossetia, using them as tools of pressure. In Estonia, this has taken the form of a disinformation campaign targeting the border town of Narva, where a majority of the population is ethnically Russian.
However, inside Russia, any public support for separatism is criminalized, with activists facing imprisonment and organizations labeled as extremist or terrorist. Russia's talk of sovereignty and territorial integrity is pragmatic rather than principled: it is used to justify repression at home and to destabilize countries abroad. Source: EUvsDisinfo. Secession for you, prison in Russia: Moscow's selective love for self-determination. [online] Published 24 March 2026. Available at: https://euvsdisinfo.eu/secession-for-you-prison-in-russia-moscows-selective-love-for-self-determination/
Russian Hybrid Tactics in Europe 2022 to 2025
A report by The Soufan Center analyzed 255 Russian hybrid operations across six European countries between 2022 and 2025, highlighting a strategy built on cost asymmetry. Russia conducts low-cost, deniable, and scalable actions, ranging from disinformation and espionage to sabotage and infrastructure probing, while forcing targeted countries to bear the financial, political, and security burden of responding. Russia adapts its tactics to local contexts. In Western Europe, it has focused on intelligence gathering, infrastructure surveillance, and symbolic acts designed to inflame tensions, while in countries like Moldova and Georgia, it has combined long-term influence operations with political interference to shape strategic outcomes. Estonia, despite facing continuous pressure, has shown resilience due to strong public awareness and institutional preparedness. A key shift over time has been the increased use of intermediaries, often low-level recruits, to maintain deniability, alongside a move toward more direct and physical actions in 2025. The report concluded that the impact of these operations depends less on Russia's capabilities and more on the strength of targeted societies. Countries with resilient institutions, transparent communication, and strong civil society are better able to withstand interference.
To counter this threat, Europe must both raise the cost of Russian actions through coordinated responses and reduce vulnerabilities by strengthening governance and social cohesion, while carefully balancing public communication to avoid amplifying the intended disruptive effects. Source: The Soufan Center, C. Broekaert, N. Lyubarsky, C. Clarke & J. Shelzi. Priming, Destabilizing, Coercing: Russian Hybrid Tactics in Europe 2022–2025. [online] Published 2026. Available at: https://thesoufancenter.org/wp-content/uploads/2026/03/TSC-Report-Priming-Destabilizing-Coercing-Russian-Hybrid-Tactics-in-Europe-2022-2025.pdf
Disinformation Campaign Targeting Kaja Kallas
According to a report by DisinfoWatch, Kremlin-aligned actors are spreading a coordinated disinformation campaign that distorts history and attacks Estonian Prime Minister Kaja Kallas. Her warning that Russia's territorial demands follow a familiar pattern was deliberately twisted into claims that regions like Donbas are historically Russian and that Estonia's sovereignty is questionable. These narratives, amplified by state-linked media and proxy accounts, rely on insults, selective history, and imperial mythology rather than credible evidence or legal standing. The claims are demonstrably false. Estonia's independence was legally restored in 1991 based on continuity from its pre-Soviet statehood, and the Soviet annexation was widely recognized as illegal. Similarly, Russia's claims over Donetsk and Luhansk have been rejected by international bodies, including the UN and the European Council, which reaffirm Ukraine's territorial integrity. Outlets such as RT have also been identified and sanctioned for their role in spreading disinformation and conducting influence operations. Source: DisinfoWatch. Kremlin-linked X cluster targets Estonia's sovereignty and Kaja Kallas. [online] Published 27 March 2026.
Available at: https://disinfowatch.org/disinfo/kremlin-linked-x-cluster-targets-estonias-sovereignty-and-kaja-kallas/
China
Disinformation Denying Uyghur Forced Labor in China
As revealed by DisinfoWatch, a post from the Chinese Embassy in Canada exemplifies a coordinated disinformation effort aimed at denying well-documented human rights abuses in Xinjiang. Triggered by Canadian MP Michael Ma's concerns about forced labor in Chinese EV production, the message dismisses such allegations as "blatant lies" spread by "anti-China" actors. This framing follows a familiar authoritarian pattern: discredit critics, label evidence as fabricated, and shift attention toward protecting trade relations. However, substantial evidence contradicts these claims. The Canadian government has acknowledged credible reports of forced labor and imposed import restrictions tied to Xinjiang. International bodies, including the UN, have also identified persistent patterns of abuse that may amount to crimes against humanity. Independent investigations have further linked Xinjiang-produced materials to global automotive supply chains, reinforcing concerns about forced labor in EV production. This narrative serves the strategic purpose of deflecting scrutiny, protecting China's economic interests, and reframing human rights concerns as politically motivated interference. It is part of a long-standing denial campaign that has consistently portrayed allegations of abuses in Xinjiang as fabricated. Source: DisinfoWatch. Chinese Embassy in Canada Exploits Michael Ma comments to deny forced-labour. [online] Published 28 March 2026.
Available at: https://disinfowatch.org/disinfo/chinese-embassy-in-canada-exploits-michael-ma-comments-to-deny-forced-labour/
Iran
Pro-Iranian Nasir Security Targets the Energy Sector in the Middle East
A report by Resecurity highlighted the activities of Nasir Security, a relatively new and low-profile cyber group believed to be linked to Iran or its proxies. The group primarily targets the energy sector in the Middle East, focusing on supply chain vendors, including contractors in engineering, construction, and safety. Rather than targeting major energy companies directly, the actors exploit weaker third-party systems using techniques such as spear phishing, business email compromise, and cloud data exfiltration. The stolen data is often authentic but originates from vendors, obscuring the true source of the breach and creating confusion about the attack's scale. Nasir Security combines cyber operations with disinformation tactics, exaggerating the volume and impact of its alleged breaches. The group has claimed large-scale data theft from companies in the UAE, Oman, Iraq, and Saudi Arabia, but investigations suggested these claims are overstated and based on limited third-party compromises. Their activity appeared more ideological than financially motivated, aiming to project strength, fuel geopolitical narratives, and create uncertainty amid the ongoing conflict involving Iran. According to the report, numerous independent assessments confirmed that none of the Iran-linked, pro-Iranian, or state-sponsored groups are having any meaningful impact on the Iran conflict. At the same time, Resecurity highlighted the supply chain cybersecurity risks that Iran could exploit, and recommended that enterprises stay vigilant and accelerate third-party cybersecurity monitoring and vendor risk assessments. Source: Resecurity. Pro-Iranian Nasir Security is Targeting The Energy Sector in the Middle East. [online] Published 23 March 2026.
Available at: https://www.resecurity.com/blog/article/pro-iranian-nasir-security-is-targeting-the-energy-sector-in-the-middle-east
Disinformation Trends in the 2026 Iran War
According to NewsGuard's Reality Check, within the first 25 days of the Iran war, at least 53 false claims circulated online, attracting hundreds of millions of views and averaging roughly two disinformation posts per day. The disinformation shows three key patterns: a strong bias toward pro-Iran messaging, a shift from reused or misrepresented images to fully AI-generated visuals, and a growing tactic of dismissing legitimate journalism as fake or AI-generated. This last trend is particularly concerning, as it attempts to erode trust in credible media by falsely labeling accurate reporting as disinformation. A separate NewsGuard report added that the vast majority (about 92%) of the claims promoted pro-Iran narratives, often exaggerating military successes or inventing major events, such as the destruction of Israeli strategic sites or the deaths of senior leaders like Benjamin Netanyahu. These claims were entirely baseless but aimed to shape public perception and morale. Although some false claims were amplified by Iranian-linked outlets, most originated from decentralized pro-Iran social media networks worldwide. The goal of this disinformation is not to inform but to influence emotions and shape perceptions before facts can be verified. For example, as reported in another NewsGuard Reality Check, a widely shared video claiming to show Iranian missiles striking a U.S. Navy ship in the Strait of Hormuz on the 25th of March 2026 has been debunked as false. The footage, circulated by pro-Iran social media accounts and viewed millions of times, actually originates from a video game, not a real military event.
Analysis of the video revealed several clear indicators of its artificial origin, including a visible mouse cursor, unrealistic visual effects, and the depiction of a ship class no longer in service. Sources: NewsGuard, L. Marchl, S. Rubinson, I. Chomnalez, J. Schofield, M. Wolf, I. Blachez, M. Calamaio, C. Lin & W. Rizvi. 50 False Claims and Counting: First 25 Days, Pro-Iran Disinformation Dominates the Online Battlefield. [online] Published 27 March 2026. Available at: https://www.newsguardtech.com/special-reports/2026-iran-war/ NewsGuard, L. Marchl. 50 False Claims and Counting: First 25 Days, Pro-Iran Disinformation Dominates the Online Battlefield. [online] Published 24 March 2026. Available at: https://www.newsguardtech.com/special-reports/2026-iran-war/ NewsGuard, J. Schofield. Debunk: Supposed War Footage Is Actually from a Video Game. [online] Published 27 March 2026. Available at: https://www.newsguardrealitycheck.com/p/pro-iran-war-footage-no-its-a-video
Disinformation and Hybrid Coercion in Iran's War Strategy
A Center for Strategic and International Studies (CSIS) article outlined how Iran is conducting a multidomain "punishment campaign" that combines military, economic, cyber, and informational tactics to pressure the United States and Israel indirectly. Disinformation plays a central role in this approach. Alongside missile and cyber operations, Iran deploys computational propaganda and targeted influence campaigns to magnify the psychological impact of disruptions. By targeting interconnected systems, such as energy, finance, and infrastructure, Iran amplifies both the material and informational effects of its actions. Strategically, this campaign aims to weaken coalition unity and pressure governments through economic and psychological strain rather than battlefield victory.
Countering this strategy requires not only military and defensive measures, but also active efforts to detect, expose, and disrupt false narratives that support Iran’s broader coercive campaign. Source:  CSIS, B. Jensen. Iran’s Next Move: How to Counter Tehran’s Multidomain Punishment Campaign . [online] Published 23 March 2026. Available at: https://www.csis.org/analysis/irans-next-move-how-counter-tehrans-multidomain-punishment-campaign Top Of Page [General Reports] AI-generated YouTube channels Spread Fake News Reports The Digital Forensic Research Lab (DFRLab)   reports that a network of more than two dozen YouTube channels uses AI-generated content to mimic legitimate news reporting while inserting fabricated geopolitical events. These channels, publishing in English and Russian, combine synthetic anchors, automated narration, AI-generated visuals, and coordinated posting patterns to produce large volumes of content at low cost. Collectively, they have amassed nearly 2 billion views and nearly 2 million subscribers. The operation relies on sensationalist titles, uniform branding, and repeated content across channels, with clear signs of coordination such as synchronized uploads and thematic shifts. A key tactic is blending factual reporting with false claims in the same style, making it difficult for viewers to distinguish real from fabricated events. For example, some Ukraine-related videos falsely reported attacks on logistical infrastructure in Mykolaiv and alleged strikes on military infrastructure in the Polish city of Rzeszów. Other videos suggested imminent diplomatic ruptures between Russia and Azerbaijan and dramatized the US capture of Venezuelan President Nicolás Maduro using AI-generated footage. The network also shows signs of centralized production, including duplicated videos, shared assets, and minimal human oversight, with some content still containing visible AI-generation artifacts. 
While it is unclear whether the channels are directly monetized, their content is eligible for advertising and benefits from algorithmic amplification. The report raises concerns about violations of YouTube’s misinformation policies and broader regulatory implications, particularly under the EU Digital Services Act, as undisclosed synthetic media at scale poses risks to information integrity and public discourse. Source: Digital Forensic Research Lab (DFRLab), I. Adam & E. Buziashvili. AI-generated YouTube channels co-opt war coverage to farm nearly two billion views. [online] Published 23 March 2026. Available at: https://dfrlab.org/2026/03/23/ai-generated-youtube-channels-co-opt-war-coverage-to-farm-nearly-two-billion-views/ Top Of Page Pierre Poilievre’s misinformation on Joe Rogan’s podcast A report by The Conversation examined Pierre Poilievre’s appearance on the controversial Joe Rogan Experience podcast and argued that the Canadian opposition leader spread or failed to challenge several misleading claims. Rogan’s podcast is one of the world’s longest-running, averaging 11 million listeners per episode. The interview aimed to reach a large international audience and exposed millions of listeners to disputed or inaccurate statements. Poilievre, citing no evidence, told Rogan that Canada admits one million immigrants per year, a number significantly higher than the figure stated on the Canadian government website. Inflating immigration numbers is a known rhetorical tactic in far-right online spaces, where it functions to fuel anxieties about demographic change. He also downplayed the environmental and health effects of Alberta’s oil sands. Moreover, although Canada is the world's largest exporter of canola, Poilievre failed to push back against Rogan’s health misinformation about seed oils. 
Poilievre also repeated unsupported claims about Canada’s safer supply drug program and about the impact of the Liberal government's policies on inflation during and following the COVID-19 pandemic. The report concluded that Poilievre's spread of false claims is dangerous because it fosters divisiveness and distrust among Canadians, particularly on immigration and public health. Source: The Conversation, J. Hodson & B. I. Wiens & N. Ruest & S. MacDonald. Fact check: Pierre Poilievre’s misinformation on Joe Rogan’s podcast disrespects Canadians. [online] Published 24 March 2026. Available at: https://theconversation.com/fact-check-pierre-poilievres-misinformation-on-joe-rogans-podcast-disrespects-canadians-278864 Top Of Page Disinformation and Climate Information Integrity in Australia An Australian Senate inquiry highlighted growing concern about the widespread impact of misinformation and disinformation on climate change and energy debates. Surveys show that a large majority of Australians encounter false or misleading information online, particularly on climate-related issues. Examples included claims that wind turbines harm whales or that community batteries pose major safety risks, which have influenced local decisions and fueled public anxiety. Disinformation is often strategically produced and amplified by powerful actors, including corporations, governments, and political groups. These campaigns frequently aim to delay climate action by spreading doubt about scientific evidence and promoting misleading narratives. Tactics include "astroturfing" (fake grassroots campaigns), the use of bots and trolls, and increasingly, AI-generated content. The inquiry also highlighted how misinformation affects social cohesion and democratic processes. Climate-related falsehoods have contributed to division within communities, harassment of advocates, and confusion about scientific realities. 
The report concluded that disinformation is not just about false facts but about manipulating public discourse. It exploits existing beliefs, polarizes opinions, and weakens trust in institutions and science. Addressing this challenge requires stronger regulation of digital platforms, greater transparency, and coordinated efforts to expose and counter deliberate falsehoods while preserving open democratic debate. Source: The Senate Select Committee on Information Integrity on Climate Change and Energy. The Integrity Gap: Restoring Trust in the Climate and Energy Debate. [online] Published March 2026. Available at: https://apo.org.au/sites/default/files/resource-files/2026-03/apo-nid333872.pdf Top Of Page The Return of Claims that Trump's Assassination Attempt Was Staged According to an article by NewsGuard's Reality Check, a Washington Post report that Russian intelligence once considered staging an assassination attempt on Hungary’s prime minister, Viktor Orbán, to boost his chances in Hungary’s 12th of April 2026 parliamentary election, has reignited claims that Trump staged his shooting in July 2024. Following the publication of the report, anti-Trump social media users began claiming that the assassination attempt on Donald Trump was also staged to generate political sympathy. These claims quickly gained traction online, drawing significant engagement. In fact, there is no credible evidence to support assertions that the assassination attempt on Trump, in which a bullet grazed his ear, was staged. The Washington Post report was based on intelligence documents. The plan, described as a potential "gamechanger", aimed to shift the campaign away from economic concerns toward emotional themes such as security and stability. Although the proposal was never carried out and has been dismissed by the Kremlin as disinformation, it highlighted the strategic importance Moscow places on maintaining Orbán, one of its closest allies within the EU and NATO, in power. 
Beyond this proposal, the report pointed to broader Russian efforts to influence Hungary’s political landscape, including disinformation campaigns, support for pro-government narratives, and attempts to discredit opposition figures. Sources: NewsGuard, S. Howard. Possibility of Russia Faking an Assassination in Hungary to Help Orbán Leads Trump Critics to Resurrect Claim His Shooting Was Staged. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/the-return-of-crazy-claims-that-trump The Washington Post, C. Belton. To tilt Hungarian election, Russians proposed staging assassination attempt. [online] Published 21 March 2026. Available at: https://www.washingtonpost.com/world/2026/03/21/hungary-election-interference-russia-orban/ Top Of Page AI-Generated Audio of Clinton Criticizing the Iran War As reported by NewsGuard's Reality Check, a network of YouTube channels has been using AI-generated audio to impersonate former U.S. President Bill Clinton, falsely portraying him as criticizing Donald Trump’s handling of the war in Iran. Some 144 such videos have accumulated more than 10 million views, often featuring realistic voice imitations paired with static images. While some videos include small disclosures, many viewers appear to believe the content is genuine. NewsGuard also found Clinton deepfakes discussing topics such as state elections in Florida and Texas and U.S.-Canada relations. Similar AI-generated audio commentary on the Iran war and other political topics has also targeted other former presidents, including Barack Obama and George W. Bush. The report suggested that financial incentives, rather than purely political motives, are driving this activity. The videos generate advertising revenue through YouTube’s monetization system, benefiting from high engagement and low production costs. YouTube has since removed several of these channels for violating its policies. Source: NewsGuard, S. Rubinson. 
AI YouTube Channels Put Words in Bill Clinton’s Mouth About the Iran War, Drawing Millions of Views. [online] Published 25 March 2026. Available at: https://www.newsguardrealitycheck.com/p/bill-clinton-on-youtube-bashes-trump Top Of Page Orbán Doubled Down on Anti-Ukrainian Campaign to Secure Reelection Hungary’s ruling Fidesz party has intensified its anti-Ukrainian rhetoric ahead of the country’s most competitive election in 16 years, as reported by The Jamestown Foundation, using recent tensions with Kyiv to strengthen its campaign. Disputes over the Druzhba oil pipeline, controversial statements by Ukrainian officials, and unverified allegations of threats and financial interference have been used by the government and pro-government media to portray Ukraine as a hostile actor. The strategy appears aimed at mobilizing voters by exploiting existing skepticism toward Ukraine and fears of involvement in the war. The main opposition Tisza party's rise in popularity has consolidated despite multiple failed efforts by Fidesz to counter the new challenger through various tactics. These include a since-debunked artificial intelligence-generated document promoted as Tisza’s “secret austerity program”, personal attacks against Tisza leader Péter Magyar over his private life, and labeling him as a Ukrainian agent. Source: The Jamestown Foundation, P. Fazekas. Orbán Doubles Down On Anti-Ukrainian Campaign To Secure Reelection. [online] Published 25 March 2026. Available at: https://jamestown.org/orban-doubles-down-on-anti-ukrainian-campaign-to-secure-reelection/ Top Of Page Disinformation After the Bondi Attack As reported by ABC News, following the Bondi attack in Sydney, a real image of survivor Arsen Ostrovsky was rapidly weaponized in a wave of disinformation. His selfie, sent to his wife, went viral but was quickly reframed by online conspiracy communities as "evidence" that the attack was staged. 
False claims emerged suggesting his injuries were fake and that the attack was orchestrated by Israeli actors. These narratives relied on familiar tactics such as questioning victim behavior, introducing baseless links to intelligence agencies, and labeling victims as "crisis actors". The disinformation spread quickly across platforms like Telegram, X, and Reddit, evolving from speculation to more sophisticated manipulation. Within hours, AI-generated images were created to "prove" the conspiracy, showing Ostrovsky with fake blood being applied. These fabricated visuals were widely shared internationally, even among users who recognized them as false. At the same time, authentic reporting and real evidence were dismissed as fake. Importantly, this wave of disinformation appears to have been driven less by coordinated state actors and more by decentralized networks and "conspiracy entrepreneurs" seeking attention and profit. Source: ABC News, J. Robertson & M. Connaughton. This man went viral after surviving Bondi. Then the internet took a dark turn. [online] Published 27 March 2026. Available at: https://www.abc.net.au/news/2026-03-28/how-bondi-beach-survivor-became-face-of-conspiracy-theory/106499580 Top Of Page Orban Spokesperson Misrepresented 2021 Lawsuit to Smear Journalist Catherine Belton Hungary’s international spokesman Zoltán Kovács is misrepresenting a 2021 lawsuit involving sanctioned Russian oligarch Roman Abramovich to discredit journalist Catherine Belton, as highlighted in a report by DisinfoWatch. The case, brought against Belton and her publisher over her landmark book “Putin’s People,” was widely seen as a politically motivated SLAPP suit aimed at intimidating her and undermining her reporting on Vladimir Putin’s network. Kovács is now reviving it to challenge her recent Washington Post reporting on Hungary’s upcoming election and alleged Russian links. In reality, the 2021 case did not disprove Belton’s work. 
The lawsuit was settled without damages, only minor amendments were made to the book, and its central findings remained intact. Abramovich’s close ties to Putin were later reaffirmed in EU sanctions records. Meanwhile, Belton’s more recent reporting on Hungary and Russia has prompted broader international scrutiny, including follow-up coverage by AP, which reported that the European Commission sought clarification from Hungary after the Washington Post allegations, and by Reuters, which reported longstanding regional suspicions. Source: DisinfoWatch. Orban spokesperson Recycles Oligarch SLAPP to Smear Journalist Catherine Belton. [online] Published 29 March 2026. Available at: https://disinfowatch.org/disinfo/orban-spokesperson-recycles-oligarch-slapp-to-smear-journalist-catherine-belton/ Top Of Page [Appendix - Frameworks to Counter Disinformation] Disrupting the foundations of FIMI An analysis by EUvsDisinfo argues that foreign information manipulation and interference (FIMI) should be understood as a structured “supply chain” of deception, where influence operations rely on coordinated resources, infrastructure, and intermediaries. These campaigns require funding, personnel, and technology, with actors often outsourcing activities to contractors and commercial providers to ensure plausible deniability and complicate attribution. The report highlights that FIMI ecosystems are highly interconnected, involving not only state and non-state actors but also overlaps with organized crime networks that provide technical infrastructure, global reach, and operational cover, such as hosting fake news websites or managing bot networks. To counter these threats, the analysis emphasizes the need to disrupt the underlying structures that enable FIMI rather than focusing solely on individual pieces of content. 
This includes targeting financial flows, dismantling enabling infrastructure, and increasing the operational costs for perpetrators, reflecting a broader shift toward systemic and preventive approaches in countering information manipulation. Source: EUvsDisinfo. Disrupting the foundations of FIMI. [online] Published 27 March 2026. Available at: https://euvsdisinfo.eu/disrupting-the-foundations-of-fimi/ Top Of Page Trump Administration Accused of Turning Voice of America into a Partisan Propaganda Outlet As reported by The Hill, a coalition of current and former Voice of America (VOA) journalists, alongside press freedom organizations PEN America and Reporters Without Borders, filed a federal lawsuit in the U.S. District Court for the District of Columbia against the Trump administration, the U.S. Agency for Global Media (USAGM), its acting CEO Michael Rigas, and former USAGM director Kari Lake. The plaintiffs allege that USAGM leadership sought to transform VOA's newsroom into a partisan instrument of the executive branch, compelling journalists to reproduce White House talking points nearly verbatim and to disseminate imagery of President Trump in a manner characteristic of authoritarian personality cults. The complaint frames censorship and propaganda as complementary tools of the same strategic objective, arguing that these directives violate both the federal statutes governing VOA's editorial independence and constitutional protections, and that they fundamentally undermine U.S. credibility among the foreign audiences VOA is mandated to serve. Among the concrete tactics alleged are the suppression of politically inconvenient coverage, the replacement of independent editorial judgment with state-directed messaging, and the cancellation of wire service agreements with the Associated Press and Reuters in favor of a proposed arrangement with the right-wing One American News Network. 
The most operationally significant allegations concern VOA's Persian-language service, which broadcasts into Iran during the ongoing U.S.-Israeli military campaign. According to the lawsuit, transmissions to Iranian audiences have systematically omitted casualty figures from U.S. airstrikes, excluded perspectives from international leaders outside the administration, and minimized coverage of a strike on an elementary school, with a Lake-appointed official requiring pre-approval for all guest appearances across the Persian, Kurdish, and Afghan broadcast services. The plaintiffs further allege that Lake and Rigas suppressed interviews, video footage, and reporting on anti-government protests within Iran, and banned coverage critical of certain factions opposed to the Iranian regime from the Persian Service entirely. The lawsuit frames these interventions as particularly damaging given VOA's foundational mandate: to serve as an independent information source for audiences living under authoritarian media environments. By subordinating editorial independence to political messaging, the plaintiffs argue, VOA risks becoming indistinguishable from the state-controlled outlets its target audiences already contend with domestically, effectively neutralizing one of the United States' principal strategic communications assets. Sources:   The Hill, A. Fields. VOA journalists sue, accusing US government of forcing censorship, propaganda . [online] Published 23 March 2026. Available at: https://thehill.com/homenews/media/5796635-voice-of-america-propaganda-lawsuit/ United States District Court for the District of Columbia. Complaint (Case No. 25 Civ.) . [online] Published 2026. Available at: https://www.courthousenews.com/wp-content/uploads/2026/03/voa-journalists-sue-lake-usagm-over-censorship-propaganda.pdf Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity. 
However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC   website . Top Of Page

  • Cyber-based hostile influence campaigns 16th - 22nd March 2026

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]
[Introduction]
[Report Highlights]
[Social Media Platforms]
Facebook: Meta H1 2026 Adversarial Threat Report
TikTok: AI-Driven Disinformation TikTok Campaign Targets Hungarian Elections
[State Actors]
Russia: EU Sanctions Targeted Disinformation and Propaganda; Coordinated Russian Disinformation Targeted Armenian Elections; Disinformation Network Amplified Through Romanian Platforms
The War in Ukraine: Russia Targeted Ukraine’s Infrastructure While Spreading Disinformation; Kremlin Amplified Disinformation Through Fabricated Political Authority
Iran: Iran Overstated Its Global Oil Disruption; Video Showing Netanyahu Alive and Well Is Not an AI Fake; Disinformation and Hacktivism in the Iranian Conflict; Justice Department Disrupted Iranian Cyber-Enabled Psychological Operations; Iran Cyber Operations Prioritize Influence Over Impact
[General Reports]
4th EEAS Report on Foreign Information Manipulation and Interference Threats; Developments in the FIMI Galaxy; PRC-Aligned Networks Exploit Philippine Divides to Shape Narratives
[Appendix - Frameworks to Counter Disinformation]
AI-Driven Disinformation Escalates Global Influence Warfare; AI Model Boosts Urdu Misinformation Detection Capabilities
[CRC Glossary]

[Report Highlights] Meta H1 2026 Adversarial Threat Report describes a highly adaptive disinformation ecosystem in which state-linked and criminal actors use coordinated fake identities, AI-generated content, and cross-platform amplification to manipulate audiences, build false credibility, and evade 
detection while integrating influence operations with broader fraud and exploitation tactics. NewsGuard identified a coordinated network of AI-generated TikTok content and cross-platform disinformation campaigns that sought to manipulate Hungary’s 2026 elections by discrediting the opposition, amplifying fear-based narratives, and covertly promoting pro-Orbán messaging. According to an article by EUvsDisinfo, Russia’s sustained attacks on Ukraine’s energy infrastructure during the winter of 2026 were accompanied by a parallel FIMI campaign. Analysis from the Center for Strategic and International Studies (CSIS) characterizes Iran’s cyber operations as centered on disinformation, influence, and psychological impact, with the country relying on proxies, hacktivist networks, and narrative manipulation to project power while delivering only limited direct strategic effects in conflict. A report by Resecurity highlighted how the Iran war has evolved into a multi-domain conflict where psychological warfare, disinformation, and hacktivism play a central role alongside military operations. The 4th EEAS Report on Foreign Information Manipulation and Interference Threats provided a comprehensive assessment of FIMI activities worldwide, based on cases documented and investigated by the EEAS throughout 2025. [Social Media Platforms] Facebook Meta H1 2026 Adversarial Threat Report The Meta report highlights a complex and rapidly evolving disinformation landscape driven by coordinated networks that exploit false identities, deceptive narratives, and cross-platform amplification. Central to this ecosystem is Coordinated Inauthentic Behavior (CIB), where actors, often linked to state or state-aligned entities in countries such as Iran, Russia, China, and Pakistan, deploy fabricated personas and covert networks to mislead audiences and manipulate public discourse. 
These operations rely on structured tactics, including the creation of credible “creator” personas supported by networks of “amplifier” accounts, which artificially inflate engagement and create the appearance of grassroots legitimacy. Actors frequently disguise their origins through proxy infrastructure and identity obfuscation while targeting specific linguistic and regional audiences with tailored messaging. A defining feature of these campaigns is the increasing integration of artificial intelligence to enhance both scale and plausibility. Adversaries use AI-generated profile images, synthetic media, and large language models to produce coherent, multilingual, and culturally nuanced content, significantly lowering operational costs while improving effectiveness. These tools enable the rapid development of persuasive personas, automated engagement strategies, and consistent narrative framing across platforms. Tactics such as co-opting authentic content, impersonating journalists or activists, and constructing fictitious media brands further blur the line between legitimate discourse and manipulation. The use of AI also extends to evasion techniques, allowing actors to bypass detection systems and sustain operations despite enforcement efforts. Disinformation activities are closely intertwined with broader fraud and scam operations, reflecting a convergence of influence and financial exploitation tactics. Criminal syndicates and influence actors alike employ social engineering methods, such as the impersonation of trusted professionals, the exploitation of emotionally vulnerable populations, and the migration to off-platform communication channels, to build trust and evade oversight. These campaigns demonstrate high levels of organization and adaptability, with actors continuously refining their tactics in response to enforcement pressure. 
Across both domains, common patterns include identity misrepresentation, coordinated amplification, and the iterative reconstitution of networks, underscoring a persistent and professionalized threat environment in which disinformation serves as both a strategic objective and an operational enabler. Source:  Meta, Adversarial Threat Report First Half 2026 . [online] Published 2026. Available at: https://transparency.meta.com/sr/first-half-2026-Adversarial-threat-report/ Top Of Page TikTok AI-Driven Disinformation TikTok Campaign Targets Hungarian Elections A coordinated influence operation, identified by NewsGuard , leveraged 34 anonymous TikTok accounts to disseminate AI-generated content aimed at discrediting opposition candidate Péter Magyar and bolstering Prime Minister Viktor Orbán ahead of Hungary’s April 2026 elections. The campaign employed a range of tactics, including fabricated news reports featuring AI-generated anchors and non-existent experts, manipulated real footage to portray Magyar as aggressive, and emotionally charged narratives linking him to Ukraine, economic instability, and cultural threats. Additional amplification strategies included AI-generated celebrity endorsements, fictional scenarios, and viral-style content, collectively generating millions of views. The accounts exhibited clear signs of coordination, including synchronized creation dates, AI-generated profile images, and consistent messaging themes. The TikTok campaign operated alongside broader cross-platform disinformation efforts, including activity attributed to the Russian-linked “Matryoshka” network on X and Telegram, which spread false claims targeting Ukraine to indirectly support Orbán. These narratives included fabricated cyberattacks, biological threats, and symbolic provocations, all intended to heighten fear and polarization. 
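Clustered account creation dates of the kind described above are a common first-pass signal in open-source investigations of coordinated networks. As a minimal sketch of how such a check might look (the account records, field names, and thresholds below are hypothetical illustrations, not data from the NewsGuard report), one could group accounts whose registration dates fall within a narrow window of one another:

```python
from datetime import date

# Hypothetical account records for illustration; real investigations would
# pull creation dates from platform metadata or archived profile pages.
accounts = [
    {"handle": "newsdaily_hu1", "created": date(2026, 1, 12)},
    {"handle": "newsdaily_hu2", "created": date(2026, 1, 12)},
    {"handle": "newsdaily_hu3", "created": date(2026, 1, 13)},
    {"handle": "old_account",   "created": date(2019, 6, 2)},
]

def flag_creation_clusters(accounts, window_days=2, min_cluster=3):
    """Sort accounts by creation date, chain together accounts created
    within `window_days` of the previous one, and return only clusters
    of at least `min_cluster` accounts as possible coordination signals."""
    ordered = sorted(accounts, key=lambda a: a["created"])
    clusters, current = [], [ordered[0]]
    for acct in ordered[1:]:
        if (acct["created"] - current[-1]["created"]).days <= window_days:
            current.append(acct)  # still inside the rolling window
        else:
            clusters.append(current)
            current = [acct]      # start a new cluster
    clusters.append(current)
    return [c for c in clusters if len(c) >= min_cluster]

flagged = flag_creation_clusters(accounts)
```

On the sample data, the three accounts created in mid-January 2026 are flagged as one cluster while the 2019 account is not. A creation-date cluster alone proves nothing; analysts treat it as one signal to be corroborated with others named in the report, such as shared profile imagery and synchronized posting.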
While attribution remains inconclusive, the timing, scale, and alignment with geopolitical interests suggest a structured and strategic intervention in Hungary’s electoral process. TikTok confirmed that the network was a covert influence operation and removed the accounts, highlighting ongoing platform-level responses amid broader concerns about foreign interference in democratic processes. Source: NewsGuard, A. Lee & M. Roache, Influence Campaign on TikTok Uses AI Videos to Boost Hungary’s Orbán Ahead of Crucial Elections. [online] Published 20 March 2026. Available at: https://www.newsguardtech.com/special-reports/russian-influence-campaign-uses-ai-tiktok-videos-to-boost-hungarys-viktor-orban/ Top Of Page [State Actors] Russia EU Sanctions Targeted Disinformation and Propaganda On the 16th of March 2026, the EU Council announced it had imposed sanctions on four individuals involved in Russia’s hybrid activities, with a strong focus on Foreign Information Manipulation and Interference. Among them is Sergey Klyuchenkov, a propagandist who has actively spread false narratives to justify Russia’s war against Ukraine, including calls for violence and misleading claims about territorial expansion. Similarly, news anchor Ernest Mackevičius has used state television to promote distorted accounts of the war, Ukraine, and its military. The sanctions also targeted Graham Phillips and Adrien Bocquet, who were both accused of amplifying Kremlin propaganda. Phillips has produced misleading content from occupied Ukrainian territories, while Bocquet has acted as a key voice spreading disinformation across Europe through media appearances and social platforms. Source: European External Action Service, Russian hybrid threats: four individuals added to EU sanctions list for information manipulation activities. [online] Published 2026. 
Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-four-individuals-added-eu-sanctions-list-information-manipulation-activities_en Top Of Page Coordinated Russian Disinformation Targeted Armenian Elections CivilNet revealed that a fake video circulated on X in March 2026 falsely claimed that Armenian political strategists had secretly traveled to France to learn vote manipulation techniques under a "secret agreement" between Emmanuel Macron and Nikol Pashinyan. The video attempted to appear credible by misusing the NewsGuard logo, despite no such report existing. The video’s spread was coordinated via newly created or previously inactive accounts that were suddenly activated, suggesting manipulation by bots or hacked profiles. Similar campaigns have used fake logos from major outlets such as CNN and Reuters, along with fabricated newspaper front pages, to promote false claims about political figures and events. These efforts are linked to known disinformation strategies, such as the Russia-associated "Matryoshka" and "Doppelgänger" operations, which rely on impersonation and mass dissemination to amplify misleading content. Source: CivilNet, L. Dubois-Harounyan, Fake Videos and Fabricated Headlines: A Coordinated Disinformation Campaign Targeting Armenia and France. [online] Published 18 March 2026. Available at: https://www.civilnet.am/en/news/1008717/fake-videos-and-fabricated-headlines-a-coordinated-disinformation-campaign-targeting-armenia-and-france/ Top Of Page Disinformation Network Amplified Through Romanian Platforms A DFRLab report found that News Time Romania (NTR) operated as a disinformation amplifier by translating and redistributing content from Russian state-aligned and sanctioned sources on TikTok and Telegram. Rather than producing original journalism, it repackaged foreign propaganda into Romanian and presented it as independent media. 
Analysis showed that over half of its TikTok content originated from such sources, indicating a systematic effort to inject disinformation into the local information space. Key contributors to this content included sanctioned outlets like SouthFront and pro-Kremlin platforms such as Rybar, both known for spreading misleading narratives about military and geopolitical events. NTR also indirectly promoted the ideology of Alexander Dugin through intermediary platforms, allowing sanctioned or extremist viewpoints to circulate without direct attribution. This layered approach helped bypass restrictions while maintaining a steady flow of pro-Russian and anti-Western messaging. Source:  DFRLab, V. Olari, Romanian channel launders sanctioned Russian content for domestic audiences . [online] Published 20 March 2026. Available at: https://dfrlab.org/2026/03/20/romanian-channel-launders-sanctioned-russian-content-for-domestic-audiences/ Top Of Page The War in Ukraine Russia Targeted Ukraine’s Infrastructure While Spreading Disinformation According to an article by EUvsDisinfo , Russia’s sustained attacks on Ukraine’s energy infrastructure during the winter of 2026 were accompanied by a parallel FIMI campaign. As missile strikes and drone attacks left millions without electricity, heating, and water during extreme cold, pro-Kremlin outlets promoted narratives portraying Ukraine as divided and on the verge of collapse. These claims included false assertions that Ukrainians were ready to surrender territory or were protesting against their own government, as well as framing the strikes as justified retaliation rather than attacks primarily affecting civilians. Another narrative targeted European support for Ukraine, with messaging repeatedly claiming that Europe has grown tired of supporting Ukraine. Recent polls showed that, while countries closest to the conflict are beginning to experience war fatigue, overall support for Ukraine remained high. 
Moreover, the attacks on Ukrainian infrastructure have spurred many grassroots fundraising campaigns across Europe. Russia has also leveraged energy insecurity as part of a broader influence strategy aimed at Europe, spreading claims that the continent cannot function without Russian energy supplies. At the same time, hybrid tactics, including cyberattacks on European energy systems, reinforced these narratives. On the day of the invasion of Ukraine, a hacker group linked to Moscow disrupted monitoring systems for thousands of wind turbines across Germany, and more recently, in December 2026, Poland reported attacks on its energy sector. Source:  EUvsDisinfo, Targeting the grid, shaping the story: Russia’s dual assault on Ukraine . [online] Published 16 March 2026. Available at: https://euvsdisinfo.eu/targeting-the-grid-shaping-the-story-russias-dual-assault-on-ukraine/ Top Of Page Kremlin Amplified Disinformation Through Fabricated Political Authority As reported by Stop Fake , Kremlin-linked media outlets have been systematically spreading disinformation by promoting statements from Armando Mema, falsely claimed to be a "Finnish politician". Russian state media, including RIA Novosti and RT, repeatedly cite his social media posts to claim that Western countries are losing faith in Ukraine, calling for peace with Russia, or warning of inevitable Russian victory. This creates the illusion of growing international support for pro-Kremlin narratives. In reality, Mema is not a recognized political figure in Finland but a little-known individual with no electoral success and no significant public influence. By misrepresenting him as a credible politician, these outlets confer legitimacy and amplify fringe opinions, as if they reflected mainstream Western views. Source:  StopFake, The “Finnish politician”, an opponent of helping Ukraine, who is quoted daily by the Kremlin media, turned out to be a plumber without Finnish citizenship . [online] Published 18 March 2026. 
Available at: https://www.stopfake.org/ru/finskij-politik-protivnik-pomoshhi-ukraine-kotorogo-ezhednevno-tsitiruyut-kremlevskie-smi-okazalsya-santehnikom-bez-grazhdanstva-finlyandii/ (referring to: The Insider, Antifake investigation. [online] Published 2026. Available at: https://theins.ru/antifake/290378) Top Of Page Iran Iran Overstated Its Global Oil Disruption Since the US-Israel attack, Iran has effectively blocked the Strait of Hormuz, a narrow waterway between Iran and Oman through which roughly 20 percent of the world’s oil supply passes. The International Energy Agency said the Middle East conflict was creating the largest supply disruption in the history of the global oil market. As reported by NewsGuard’s Reality Check, Iranian state-aligned media amplified disinformation that exaggerated the global impact of the Strait of Hormuz disruption, including false claims that Australia is on the verge of a fuel crisis. Reports circulated by pro-Iran outlets alleged that Australia had only 18 days of fuel reserves remaining, roughly half of its actual supply, citing fabricated figures attributed to government sources. The claim was accompanied by a manipulated graphic, likely edited using a Google AI tool, which altered authentic data from an Australian broadcaster to support the misleading narrative. In reality, official statements from Australia confirmed significantly higher reserves, with around 36 days of petrol and substantial supplies of diesel and jet fuel. Alongside these narratives, Iranian outlets have warned of a looming "energy catastrophe" in the West, framing the crisis as evidence of Western economies' vulnerability despite more measured assessments from international energy authorities. Source: NewsGuard, W. Rizvi, Iranian-Linked Media Falsely Claim Australia Has “Only 18 Days” of Fuel Reserves, Overstating the Impact of the War. [online] Published 17 March 2026.
Available at: https://www.newsguardrealitycheck.com/p/iran-overstates-its-global-oil-disruption Top Of Page Video Showing Netanyahu Alive and Well Is Not an AI Fake NewsGuard’s Reality Check highlights that a video showing Israeli Prime Minister Benjamin Netanyahu alive at a Jerusalem café was falsely claimed to be AI-generated, forming part of a broader disinformation narrative that he was killed or seriously injured in an Iranian missile strike. The video, posted on the 15th of March 2026, was intended to refute the rumors about Netanyahu's death, but pro-Iran and anti-Israel accounts quickly dismissed it as a deepfake, citing supposed anomalies in Netanyahu’s face, the level of the coffee in the cup, the lighting and shadows, and a perceived distortion in Netanyahu’s coat pocket. Some posts shared screenshots from the AI-detection tool Hive as purported evidence, and Hive did indeed classify the video as AI-generated. However, multiple verifications confirmed that the video is authentic. Independent analysis, including location matching and supporting images from the café itself, as well as assessments by AI-detection experts, found no signs of manipulation. NewsGuard has identified other viral instances in which social media users claim that authentic war-related visuals are AI-generated. Pro-Iran accounts falsely claimed that a video of Netanyahu speaking at a press conference on the 12th of March 2026 was AI-generated, and anti-regime social media users falsely claimed that The New York Times published an AI-generated image of a large crowd celebrating the naming of the new Iranian supreme leader, Mojtaba Khamenei. Another report by CyberNews presented the claims about Netanyahu’s alleged death that were amplified by social media accounts such as "Iran News Today", which shared images of an injured man resembling him and presented them as confirmation.
Shortly after, Netanyahu posted a video released by Israel’s Government Press Office showing him alongside senior officials. While this footage should have dispelled the rumors, many users insisted it was part of a cover-up, claiming the video was recycled from a 2024 meeting or had been manipulated with AI. Some relied on AI tools like Google’s Gemini to "verify" these claims, despite contradictory evidence, such as newly published images confirming the meeting took place. Sources: NewsGuard, W. Rizvi, In this Case, Believe Your Eyes: Video Showing Netanyahu Alive and Well Is Not an AI Fake. [online] Published 16 March 2026. Available at: https://www.newsguardrealitycheck.com/p/netanyahus-proof-of-life-video-is Cybernews, N. Ancell, Israeli Prime Minister Benjamin Netanyahu is trying to prove he's not an AI deepfake. [online] Published 18 March 2026. Available at: https://cybernews.com/news/benjamin-netanyahu-artificial-intelligence-video/ X, Iran News Today (@MCK_Kaif), X post. [online] Published 18 March 2026. Available at: https://x.com/MCK_Kaif/status/2034172316155056236 X, Benjamin Netanyahu (@netanyahu), X post (official account). [online] Published 17 March 2026. Available at: https://x.com/netanyahu/status/2034021266861003258 Top Of Page Disinformation and Hacktivism in the Iranian Conflict A report by Resecurity highlighted how the Iran war has evolved into a multi-domain conflict where psychological warfare, disinformation, and hacktivism play a central role alongside military operations. Pro-Iran hacktivists were deliberately spreading fear and confusion among targeted populations. Iran has also conducted internal psychological campaigns to keep the population motivated for the war and aligned with the existing regime. To counter that, the U.S. and its allies conducted attacks against broadcasting infrastructure and state media.
At the same time, AI-generated images, recycled footage, and even video game clips have been widely circulated as real battlefield content, distorting public perception of events. In one of the misinformation episodes, a pro-Iranian group was circulating messaging that Khamenei had not been killed. Resecurity highlighted misinformation about both cyber activity and kinetic actions. Pro-Iran hacktivist groups like the Cyber Islamic Resistance, Fatimion Cyber Team, and Cyber Fattah had been identified as key players. Their activities included reconnaissance, DDoS campaigns, website defacements, and data theft. Pro-Western hacktivist groups have also been active and targeted Iranian religious apps and news sites. Notably, such actors included Iranian opposition groups residing outside Iran who targeted the existing regime. While many of the attacks were low-impact or exaggerated, they generated significant "noise", forcing defenders to divert resources and amplifying uncertainty. Source: Resecurity, Iran War: Kinetic, Cyber, Electronic, and Psychological Warfare Convergence. [online] Published 17 March 2026. Available at: https://www.resecurity.com/blog/article/iran-war-kinetic-cyber-electronic-and-psychological-warfare-convergence Top Of Page Justice Department Disrupted Iranian Cyber-Enabled Psychological Operations As published by the Office of Public Affairs, a U.S. Justice Department operation seized four domains linked to Iran’s Ministry of Intelligence and Security (MOIS), which were used to conduct cyber-enabled psychological warfare campaigns. These websites functioned as part of a coordinated "faketivist" strategy, where Iranian actors claimed responsibility for cyberattacks, leaked or fabricated sensitive data, and amplified propaganda to intimidate adversaries. The platforms were used to target journalists, dissidents, and Israeli-linked individuals through doxxing, harassment, and threats, including calls for violence and financial bounties.
The seized domains were Justicehomeland[.]org, Handala-Hack[.]to, Karmabelow80[.]org, and Handala-Redwanted[.]to. For example, the MOIS used the Handala-hack[.]to domain to claim credit for a destructive malware attack in March 2026 against a U.S.-based multinational medical technology firm. These activities were often aimed less at technical disruption and more at psychological impact, spreading fear, damaging reputations, and deterring opposition. The use of fake hacktivist personas allowed Iranian actors to obscure attribution while amplifying their messaging across online platforms. Source:  U.S. Department of Justice, Justice Department Disrupts Iranian Cyber Enabled Psychological Operations . [online] Published 19 March 2026. Available at: https://www.justice.gov/opa/pr/justice-department-disrupts-iranian-cyber-enabled-psychological-operations Top Of Page Iran Cyber Operations Prioritize Influence Over Impact An article by the Center for Strategic and International Studies (CSIS)  portrays Iran as a capable and adaptive cyber actor that integrates cyber operations into a broader hybrid warfare strategy, combining espionage, disruption, and influence campaigns to advance political objectives. Central to its approach is the use of cyber capabilities for symbolic impact and information dominance rather than decisive military effect. Iranian state-linked actors, including proxies such as hacktivists and cybercriminal networks, conduct operations that blend destructive attacks, reconnaissance, and influence efforts, often targeting critical infrastructure and foreign audiences. These actors employ tactics such as hack-and-leak campaigns, social media manipulation, and the mobilization of loosely affiliated cyber groups to amplify perceived reach, while maintaining plausible deniability and projecting power in the information domain. 
Disinformation and influence operations emerge as a key battlefield, with Iran prioritizing narrative control, psychological pressure, and audience manipulation over high-impact cyber disruption. Tactics include exploiting social media to reach foreign populations, generating “noise” through coordinated low-level attacks, and leveraging AI and digital tools to shape perceptions and sow division. Domestically, information control is reinforced through internet shutdowns that isolate populations and suppress dissent. Internationally, Iran combines opportunistic cyber targeting with strategic messaging designed to exaggerate its capabilities, obscure attribution, and erode adversary cohesion. Overall, cyber operations primarily serve as enablers of information warfare, aimed at destabilizing opponents, influencing public opinion, and increasing the perceived costs of conflict, rather than delivering decisive operational outcomes. Source: Center for Strategic and International Studies, N. Shah, Demystifying Iranian Cyber Operations in the U.S.-Iran Conflict. [online] Published 20 March 2026. Available at: https://www.csis.org/analysis/demystifying-iranian-cyber-operations-us-iran-conflict Top Of Page [General Reports] 4th EEAS Report on Foreign Information Manipulation and Interference Threats The 4th European External Action Service (EEAS) Report on Foreign Information Manipulation and Interference Threats provided a comprehensive assessment of FIMI activities worldwide, based on cases documented and investigated by the EEAS throughout 2025. A total of 540 incidents were identified, involving around 10,500 digital channels, with Ukraine remaining the primary target, followed by several European countries. Russia and China were the main attributed actors, though much of the activity relied on covert, unattributed networks that complicated detection and accountability.
A key trend was the widespread use of artificial intelligence, with AI-generated text, audio, and video now routinely deployed to produce scalable, low-cost disinformation. FIMI was increasingly integrated into broader hybrid strategies, combining digital manipulation with geopolitical and physical actions, particularly around elections and regional conflicts. A central innovation of the report was the FIMI Deterrence Playbook, which marked a shift from reactive responses to proactive disruption. Instead of focusing only on identifying threats, the EU aimed to target the underlying systems that enabled them, such as intermediaries, service providers, and financial and technical infrastructures. By leveraging sanctions, regulations, and law enforcement in a coordinated manner, the EU sought to raise the costs and reduce the effectiveness of FIMI operations. The report also mapped a complex global network, or "Galaxy", of interconnected channels, revealing a highly coordinated core infrastructure with regional clusters targeting Eastern Europe, Africa, and the Middle East. To translate the assessment into practice, the report suggested a set of concrete operational follow-ups, such as strengthening data collection and information sharing, integrating a deterrence dimension into counter-FIMI work, and strengthening collective responses and Member State engagement. Source: EEAS, 4th EEAS Report on Foreign Information Manipulation and Interference Threats. [online] Published March 2026. Available at: https://transparency.meta.com/sr/first-half-2026-Adversarial-threat-report/ Top Of Page Developments in the FIMI Galaxy A report by EUvsDisinfo highlighted how Foreign Information Manipulation and Interference (FIMI) have evolved into a dense, global ecosystem of interconnected actors, infrastructures, and narratives. The "FIMI Galaxy" has become more complex and active, with a growing number of incidents and operational channels.
Threat actors increasingly reuse the same networks and tools across regions, reflecting a shift toward an organized, industry-like model of disinformation. The integration of AI has further accelerated this trend, enabling faster, cheaper, and more scalable production and distribution of manipulative content across multiple languages and platforms. Another defining development is the expansion of the front lines in the information space. Almost every major geopolitical event now has a parallel dimension in the FIMI domain.  The Galaxy mapped this ecosystem through three layers: threat actors (including Russia, China, and largely unattributed networks), manipulation infrastructures (known as Information Manipulation Sets), and targeted audiences. Notably, around 90 percent of the ecosystem operates through covert or proxy networks, making attribution difficult and allowing actors to coordinate influence campaigns while maintaining plausible deniability. These networks are highly adaptable, often shifting focus between regions, while using recurring tactics such as fake accounts, impersonation of media, and coordinated amplification. Source:  EUvsDisinfo, What’s new in the FIMI Galaxy? . [online] Published 20 March 2026. Available at: https://euvsdisinfo.eu/whats-new-in-the-fimi-galaxy/ Top Of Page PRC-Aligned Networks Exploit Philippine Divides to Shape Narratives An article by Doublethink Lab argues that the Philippines faces a sustained Foreign Information Manipulation and Interference (FIMI) campaign centered on the West Philippine Sea and domestic political divisions, with the People’s Republic of China (PRC) state, state-controlled, and state-aligned actors, as well as proxy actors, playing a central role. 
Across the cases examined, these actors used coordinated narrative seeding and amplification to portray the United States as a destabilizing force, recast the Philippines as the aggressor in maritime confrontations, promote Sara Duterte as a preferable political alternative, agitate unrest within the armed forces, and frame Philippine defense cooperation with Washington as a path to proxy war. The campaign relied on a mix of official diplomatic and media channels, local PRC-aligned think tanks and influencers, pro-Duterte networks, and inauthentic accounts that amplified forged documents, misleading claims, and manipulated framings. These actors frequently presented themselves as experts, journalists, or credible public figures to lend legitimacy to their messaging and exploit existing domestic polarization to advance geopolitical objectives. The tactics documented in the article reflect a deliberate effort to distort facts, degrade adversaries, cultivate support, and flood the information space across multiple platforms. These included forged memos alleging U.S. interference, early and coordinated framing of breaking maritime incidents, cross-posting across Chinese and Philippine digital ecosystems, AI-generated or suspicious account imagery, single-purpose amplification accounts, and the repeated use of local voices to normalize PRC-aligned narratives within Filipino discourse. The report concludes that these campaigns are not isolated disinformation incidents but rather components of a broader, persistent influence effort targeting the country’s cognitive security, democratic stability, and public understanding of national sovereignty issues. It emphasizes that the overlap between pro-PRC and pro-Duterte ecosystems, the growing role of locally based aligned actors, and the systematic use of coordinated inauthentic behavior make FIMI a national security and human rights concern that requires a coordinated, rights-based response. 
Source: Doublethink Lab, A Bayanihan Initiative: Countering FIMI in the Philippines. [online] Published 20 March 2026. Available at: https://medium.com/doublethinklab/a-bayanihan-initiative-countering-fimi-in-the-philippines-15d14c4e2356 Top Of Page [Appendix - Frameworks to Counter Disinformation] AI-Driven Disinformation Escalates Global Influence Warfare A keynote speech by the High Representative of the Union for Foreign Affairs and Security Policy (HRVP), Kaja Kallas, at the 2026 Conference on Countering Foreign Information Manipulation and Interference frames Foreign Information Manipulation and Interference (FIMI) as a central threat to democratic systems, driven primarily by state actors such as Russia and China, and increasingly enabled by artificial intelligence. These actors employ evolving tactics that move beyond traditional bot farms to large-scale, AI-driven influence operations, including deepfakes, coordinated bot amplification, and the use of paid influencers to manipulate public opinion. The example of Romania’s 2024 elections illustrates how Russia-backed networks used AI-generated content and influencer promotion to artificially boost a fringe candidate, demonstrating the ability of such campaigns to distort electoral processes. The speech also highlights the emergence of a "marketplace for manipulation", where influence operations are outsourced to private contractors, digital marketing actors, and influence-for-hire networks, often financed through opaque mechanisms such as cryptocurrency. The address emphasizes that FIMI operates as a form of cognitive warfare aimed at eroding trust, polarizing societies, and shaping political outcomes across multiple domains: within the EU, in neighboring regions, and globally. Tactics include coordinated narrative amplification, financial support to proxy actors, early and sustained interference campaigns, and the exploitation of democratic openness to spread manipulative content.
Case studies such as Moldova demonstrate how foreign actors deploy large-scale funding, vote-buying, and information campaigns to influence elections, while targeting countries closer to the EU with intensified efforts. The speech concludes that disinformation campaigns are persistent, adaptive, and globally networked, requiring coordinated responses that combine regulation, financial disruption of influence networks, cross-border cooperation, and strategic communication to counter manipulation and protect information integrity. Source: European External Action Service, Keynote speech by HRVP Kaja Kallas at the 2026 Conference on Countering Foreign Information Manipulation and Interference: ‘From Insight to Impact’. [online] Published 17 March 2026. Available at: https://www.eeas.europa.eu/eeas/keynote-speech-hrvp-kaja-kallas-2026-conference-countering-foreign-information-manipulation-and_en Top Of Page AI Model Boosts Urdu Misinformation Detection Capabilities An article by TechXplore describes the development of an advanced artificial intelligence system designed to detect misinformation in Urdu-language media, addressing a significant gap in the analysis of non-English disinformation. Developed using a dataset of over 14,000 Pakistani news articles spanning politically and socially sensitive topics, the model can identify fake, misleading, and partially true content with a reported accuracy of 96%. The system analyzes linguistic patterns, sentiment, and structural features to distinguish fabricated narratives from legitimate reporting, targeting a “polluted information ecosystem” where misinformation can influence public health, elections, and trust in institutions. This effort responds to the historical lack of comprehensive Urdu datasets, particularly on politically and religiously sensitive issues where disinformation is prevalent. The research highlights both the potential and limitations of AI in countering information manipulation.
While the model represents a significant advancement in detecting disinformation in a low-resource language, it also raises concerns about misclassification, particularly with satire or political dissent, and about its implications for content moderation and enforcement. The dataset has been made open access to support broader efforts to scale detection capabilities and adapt to evolving narratives. Overall, the system is positioned as a technical tool to strengthen resilience against misinformation campaigns targeting Urdu-speaking audiences, while acknowledging the ongoing challenges posed by dynamic and context-dependent disinformation tactics. Source: Tech Xplore, Heriot-Watt University, AI model trained on 14,000 Urdu news stories spots misinformation with 96% accuracy. [online] Published 18 March 2026. Available at: https://techxplore.com/news/2026-03-ai-urdu-news-stories-misinformation.html Top Of Page [CRC Glossary] The modern Information Environment is projected to continue growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page
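The Urdu misinformation detector covered earlier in this report is described only at the level of its dataset and reported accuracy; the source does not disclose the model's architecture. As a purely illustrative sketch of the simplest member of that family of techniques (supervised text classification), the following toy Naive Bayes classifier shows how word statistics alone can begin to separate fabricated from legitimate-sounding headlines. Everything here is hypothetical: the class name, the English placeholder strings (standing in for real Urdu data), and the two-label scheme are assumptions, not details from the cited system.

```python
import math
from collections import Counter, defaultdict

class TinyTextClassifier:
    """A minimal multinomial Naive Bayes text classifier.

    Illustrative stand-in only: the system described in the article
    uses far richer Urdu-specific linguistic, sentiment, and
    structural features, and its actual architecture is unspecified.
    """

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)          # label -> doc count
        self.word_counts = defaultdict(Counter)      # label -> word counts
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for label, count in self.class_counts.items():
            # log prior + Laplace-smoothed log likelihood per word
            score = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in text.lower().split():
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

# Hypothetical toy data (English placeholders, not real Urdu articles).
train_texts = [
    "shocking secret cure doctors hide",
    "miracle remedy banned by elites",
    "ministry publishes official budget figures",
    "court issues ruling on election schedule",
]
train_labels = ["fake", "fake", "legit", "legit"]

clf = TinyTextClassifier().fit(train_texts, train_labels)
assert clf.predict("secret miracle cure") == "fake"
assert clf.predict("official court ruling figures") == "legit"
```

The concerns the article raises about misclassifying satire or dissent are visible even at this scale: the model knows only word co-occurrence, not intent, which is why production systems layer on the sentiment and structural features the researchers describe.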

  • Cyber-based hostile influence campaigns 9th - 15th March 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Facebook Bulgarian Disinformation Network Drives Traffic to Fabricated News Site X AI and Disinformation Surge on X During Iran Conflict [State Actors] Russia   Kremlin’s Myth of "Protecting Russian Compatriots" Russian Disinformation Targeting French Local Elections Disinformation Networks Linked to Russia Expand Across Europe and Africa False Narratives About the War in the Middle East The War in Ukraine Historical Myths in Russia’s War Against Ukraine China China Pushed Pro-Iran War Claims Iran Disinformation Claims About Israeli Leaders During the Iran War AI-Manipulated Image Used for Disinformation [AI Related Articles] The Risk of Automation of Disinformation Campaigns AI’s Influence on Journalism [General Reports] Disinformation Campaigns Target Human Rights Defenders in Guatemala [Appendix - Frameworks to Counter Disinformation] Study Proposed Aid System Resilient to Disinformation During Disasters CFTC’s Selig Said Prediction Markets Could Help Counter Disinformation Detecting AI Content Farms and Online Disinformation [CRC Glossary] [ Report Highlights] In the wake of the conflict between the United States, Israel, and Iran on 28 February, a Wired report highlights how X became saturated with disinformation, much of it fueled by AI-generated content. A NewsGuard report uncovers a Russian disinformation campaign seeking to influence France’s 2026 municipal elections by spreading false claims about political candidates. 
Investigations by the Institute for Strategic Dialogue (ISD) across six EU countries and parts of the Sahel revealed that Russian-aligned disinformation campaigns have become more sophisticated since the 2022 invasion of Ukraine. Chinese state media and pro-China social media accounts, as documented by NewsGuard’s Reality Check, spread false or misleading narratives about the war involving Iran to promote Iran’s military strength and undermine trust in the US and its allies. A new study published on arXiv examined how AI agents could coordinate online information operations. According to findings by Global Witness, coordinated disinformation campaigns on social media have been used to attack and criminalize human rights defenders in Guatemala. [Social Media Platforms] Facebook Bulgarian Disinformation Network Drives Traffic to Fabricated News Site An article by the DFR Lab reports that the Bulgarian website dailystandart[.]com publishes fabricated and misleading political articles to generate revenue through the Adskeeper native advertising platform. Traffic to the site is largely driven by a coordinated Facebook network consisting of two pages, nine groups, and six accounts. This network demonstrates signs of coordinated inauthentic behavior, including accounts with stolen profile images, mismatched names and handles, and prolonged inactivity. Several of these accounts administer Facebook groups and repeatedly amplify links to the website, suggesting organized efforts to distribute disinformation. The Facebook groups in the network are politically themed and often promote pro-Russian narratives. Together, they have over 211,600 members, with the largest groups explicitly supporting Russian President Vladimir Putin. Content from dailystandart is spread through direct links, screenshots of headlines, and posts with links in the comments.
Evidence suggests the website itself relies on automated publishing: duplicated articles with identical URL structures, repeated timestamps, and clusters of posts published at regular intervals indicate programmatic content generation rather than manual journalism. Stories falsely attribute statements to Russian officials and Bulgaria’s former president, Rumen Radev, exaggerate Russian military capabilities, and portray Ukraine as weak or defeated. The website also circulates fabricated stories during sensitive political periods in Bulgaria and employs AI-generated images and incomplete headlines to increase click-through rates. Source: DFR Lab, S. Gelava. Coordinated Bulgarian Facebook network amplifies fabricated for-profit political content. [online] Published 9 March 2026. Available at: https://dfrlab.org/2026/03/09/coordinated-bulgarian-facebook-network-amplifies-fabricated-for-profit-political-content/ Top Of Page X AI and Disinformation Surge on X During Iran Conflict A Wired report states that, since the start of the conflict between the United States, Israel, and Iran on the 28th of February, X has been flooded with disinformation, much of it driven by AI-generated content. When disinformation researcher Tal Hagin asked X’s AI chatbot Grok to verify a video claiming Iranian missiles struck Tel Aviv, the chatbot repeatedly provided incorrect information about the video’s location and date and even attempted to support its claims using an AI-generated image. AI-generated images and videos have increasingly been used to spread false or exaggerated claims about the conflict. Iranian officials, state media, and other accounts on X have circulated fabricated visuals depicting events such as destroyed military aircraft, captured U.S. soldiers, and large-scale military damage. Some of these posts gained millions of views before being removed.
Researchers also found AI-generated content being used to promote propaganda and antisemitic narratives, showing how easily synthetic media can be deployed to manipulate public perception during wartime. At the same time, traditional disinformation continues to spread widely on the platform. Misleading videos and recycled footage have been used to push false claims about attacks during the conflict, including narratives about a deadly strike on a school in Minab. Source:  Wired. Fake AI Content About the Iran War Is All Over X . [online] Published 10 March 2026. Available at: https://www.wired.com/story/fake-ai-content-about-the-iran-war-is-all-over-x/ Top Of Page [State Actors] Russia Kremlin’s Myth of "Protecting Russian Compatriots" The Atlantic Council highlights in its analysis that, since Russia’s full-scale invasion of Ukraine in 2022, public debate about the war has been shaped by a long-standing Kremlin narrative claiming that Moscow is protecting Russian populations and reclaiming historically Russian lands. According to the analysis, this narrative is a form of disinformation designed to justify imperial expansion rather than defend minority rights. The idea was heavily promoted during Russia’s 2014 annexation of Crimea and invasion of eastern Ukraine, when many observers accepted the misleading claim that Russian-speaking communities wanted to join Russia. The Kremlin has repeatedly used the “protection of compatriots” narrative across the post-Soviet region, combining propaganda, cultural influence, passport distribution, and military intervention. Russia’s “compatriots” strategy has been observed in Transnistria, Moldova, regions of Georgia, and Ukraine. By broadly defining “Russian compatriots,” Moscow creates a justification for intervention even where local populations do not support Russian involvement. 
In Ukraine, the claim is contradicted by the fact that many Russian-speaking Ukrainians have resisted Russian forces, joined the Ukrainian military, or fled Russian occupation. Despite clear evidence that the narrative functions as propaganda, elements of it continue to shape international discussions about ending the war. Some proposals for peace implicitly treat parts of Ukraine as areas where Russian claims may be legitimate. Source:  Atlantic Council, A. Grigas. Russian talk of protecting compatriots masks Putin’s imperial ambitions . [online] Published 10 March 2026. Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/russian-talk-of-protecting-compatriots-masks-putins-imperial-ambitions/ Top Of Page Russian Disinformation Targeting French Local Elections Amid growing concerns about foreign interference, a NewsGuard   report reveals that a Russian disinformation campaign is attempting to influence France’s 2026 municipal elections by spreading false claims about political candidates. One example involved a fake website that impersonated the campaign site of Paris mayoral candidate Pierre-Yves Bournazel. The site falsely claimed that Bournazel planned to convert the Centre Pompidou museum into a facility for homeless migrants. The fabricated story was amplified on social media by pro-Kremlin accounts through videos and posts, despite Bournazel’s campaign confirming that the claim and website were entirely false. The operation has been linked to the Russian influence network known as Storm-1516, believed to be connected to the Internet Research Agency. This network typically creates fake websites, forged content, and AI-generated materials around controversial topics such as immigration, then spreads them through coordinated social media activity. Another campaign, called Matryoshka, has also circulated fabricated videos that mimic credible media outlets such as Le Monde, TF1, and RTL. 
These fake reports claim that security threats linked to international conflicts could disrupt the elections. Similar tactics were also used in Germany, where fake news videos falsely accused Ukrainian refugees of planning attacks. Source:  NewsGuard, I. Blachez. Russian Disinformation Goes Local Ahead of French Elections, Targeting Paris Mayoral Candidate Backed by Macron’s Party . [online] Published 10 March 2026. Available at: https://www.newsguardrealitycheck.com/p/russia-has-its-eyes-on-paris Top Of Page Disinformation Networks Linked to Russia Expand Across Europe and Africa Investigations by the Institute for Strategic Dialogue (ISD) across six EU countries and parts of the Sahel revealed that Russian-aligned disinformation campaigns have become more sophisticated since the 2022 invasion of Ukraine. Instead of relying mainly on state media, these operations now use proxy websites, AI-generated content, bot networks, and local influencers to spread Kremlin narratives while bypassing sanctions. Researchers found that the strategy focuses on “laundering” propaganda through intermediaries, allowing misleading claims to spread widely within domestic information environments. One major example involved a coordinated network promoting the false claim that Russia was rescuing Ukrainian children from an international sex-trafficking ring linked to the Epstein case. Posts spreading this narrative gained millions of views across Facebook, X, and TikTok, despite evidence that Russia has forcibly deported Ukrainian children, which led to international legal action. Despite EU sanctions on Russian outlets such as RT, enforcement gaps allow sanctioned content to remain accessible through mirror sites and proxy platforms. Disinformation networks have exploited elections in countries like Germany and Poland by spreading fabricated videos, impersonating media outlets, and promoting false claims about politicians and Ukrainian refugees. 
Similar tactics are also appearing outside Europe, particularly in the Sahel, where Russian-aligned actors use local influencers and regional languages to present Kremlin narratives as authentic domestic opinions. Source:  ISD Global. ISD investigation exposes the sophistication of Russian interference in the EU and beyond . [online] Published 9 March 2026. Available at: https://www.isdglobal.org/media-mentions/the-consequences-are-extremely-serious-isd-investigation-exposes-sophistication-of-russian-interference-in-democracies-in-europe-and-beyond/ Top Of Page False Narratives About the War in the Middle East EUvsDisinfo, in its article, describes how pro-Kremlin information outlets have incorporated the Middle East conflict into their broader disinformation strategy, seeking to link the war involving Iran to Ukraine and Western policies. Russian messaging has promoted misleading narratives suggesting that the 2014 Ukrainian Maidan revolution triggered wider global instability and contributed to the current crisis. Another widely circulated narrative claimed that Ukraine might stage a “provocation” to regain international attention that could be diverted by the war in Iran. This allegation follows a familiar Kremlin pattern of accusing Ukraine of planning sabotage or false flag operations, while also promoting the claim that Western countries are “tired” of Ukraine. However, available polling data indicate that public support for Ukraine in Europe remains high. Moreover, a fabricated story falsely alleged that an Iranian missile destroyed a luxury property in Dubai belonging to an aide to Ukraine’s military leadership, attributing the story to a nonexistent report by Euronews. The claim originated on anonymous Telegram channels and was amplified by the pro-Kremlin Pravda disinformation network. Source:  EUvsDisinfo. Russian FIMI and the war in Iran . [online] Published 13 March 2026. 
Available at: https://euvsdisinfo.eu/how-russia-lies-about-the-war-in-the-middle-east-2/ Top Of Page The War in Ukraine Historical Myths in Russia’s War Against Ukraine EUvsDisinfo   draws attention in its article to a persistent form of disinformation surrounding Russia’s war against Ukraine, which portrays Ukraine as historically marginal and naturally within Russia’s sphere of influence. Analysts argue that this narrative reflects a Kremlin-centered view of history that ignores Ukraine’s long-standing role in European development. Historically, the territory of modern Ukraine has been a crossroads of civilizations, trade routes, and cultural exchanges, making it central to European history. Presenting Ukraine as an artificial or secondary state helps justify Russian claims over its territory. Another key element of Kremlin disinformation is the myth that modern Russia directly inherits the legacy of medieval Rus’, implying that Ukraine and Russia are historically inseparable. This narrative is used to legitimize territorial ambitions and frame Russian aggression as a historical restoration. In reality, Ukraine’s history is complex and plural, shaped by multiple cultures and political traditions. The Kremlin has also promoted broader myths that reinforce its geopolitical goals, including the idea that Russia is militarily invincible and that nuclear weapons guarantee victory. Source:  EUvsDisinfo, T. Snyder. A Historian’s Big Picture. Russia’s war against Ukraine and how to end it in a right way . [online] Published 10 March 2026. Available at: https://euvsdisinfo.eu/a-historians-big-picture-russias-war-against-ukraine-and-how-to-end-it-in-a-right-way/ Top Of Page China China Pushed Pro-Iran War Claims As reported by NewsGuard 's Reality Check, Chinese state media and pro-China social media accounts spread false or misleading narratives about the war involving Iran in an effort to promote Iran’s military strength and undermine trust in the US and its allies. 
The messaging targeted audiences in mainland China, Taiwan, and the global Chinese diaspora, reflecting the close strategic relationship between Beijing and Tehran. NewsGuard found that Chinese outlets have increasingly echoed Iranian claims about the conflict, including unverified reports of major US military losses and Iranian battlefield successes. One widely circulated claim alleged that Iran shot down a US fighter jet near its border with Kuwait. The story was repeated dozens of times by Chinese state media and gained significant engagement on Chinese platforms such as Weibo and Douyin. However, US and Kuwaiti officials stated that the aircraft was not downed by Iran but was mistakenly shot down by Kuwaiti forces. Another false narrative promoted by pro-China accounts claimed that Israeli Prime Minister Benjamin Netanyahu had fled to Germany during the war, despite public records and media reports showing that he remained in Israel and continued making public appearances. Researchers also found that Chinese-linked content farms used AI-generated posts to spread similar narratives to Taiwanese audiences on Facebook. These accounts, which previously focused on lifestyle content, began publishing coordinated articles portraying the US and Israel as suffering heavy losses while depicting Iran as dominant in the conflict. Source:  NewsGuard, C. Lin. China Pumps Out War Disinformation to Boost Its Ally Iran and Undermine the West . [online] Published 12 March 2026. Available at: https://www.newsguardrealitycheck.com/p/china-pushes-pro-iran-war-claims Top Of Page Iran Disinformation Claims About Israeli Leaders During the Iran War In its Reality Check, NewsGuard documents that, since the start of the Iran war on 28 February 2026, Iranian state media and pro-regime social media accounts have circulated disinformation that exaggerates or fabricates Iranian military successes. 
A major theme of these campaigns has been false claims that senior Israeli officials were killed or injured in Iranian missile strikes. These narratives spread widely online, gaining hundreds of thousands or even millions of views despite lacking credible evidence. Among the most prominent false claims were reports that Israeli Prime Minister Benjamin Netanyahu and Mossad director David Barnea had been killed in Iranian attacks. Iranian state media outlets and pro-Iran social media accounts posted messages suggesting Netanyahu was injured or dead, while others shared a video supposedly showing the aftermath of a missile strike that killed Barnea. In reality, no credible media reports confirmed these claims. Netanyahu was photographed publicly visiting Ashdod Port after the alleged attack, and the video linked to Barnea’s supposed death was actually from a 2024 suicide bombing in Tel Aviv, unrelated to the current conflict. Similar false claims also targeted Israeli National Security Minister Itamar Ben-Gvir and other officials and family members. None of these deaths was confirmed, and some officials publicly showed that they were alive. Source:  NewsGuard, L. Marchl. Iranian State Media Fabricate a Long List of Israeli Leaders Supposedly Killed by the Regime’s Missiles . [online] Published 11 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iranian-death-hoaxes-target-israeli Top Of Page AI-Manipulated Image Used for Disinformation As revealed by NewsGuard, pro-Iran social media accounts recently circulated disinformation using an AI-altered image of an Iranian missile allegedly inscribed with the phrase "In memory of the victims of Epstein Island". The posts claimed the message was written on a missile launched during the conflict with Israel, portraying Iran as acknowledging victims connected to the Jeffrey Epstein scandal and indirectly mocking U.S. politics. 
The image spread widely online, gaining millions of views across social media platforms and being shared by pro-Iran users and state-controlled media outlets. However, investigations found that the image was digitally manipulated and not authentic. The original photograph shows an Iranian Khorramshahr-4 ballistic missile without any inscription, and had circulated online weeks before the war began. AI analysis tools confirmed that the text referencing Epstein was added using an AI image editing system. Source:  NewsGuard, I. Chomnalez. Debunk: Iranian Missile With Epstein Inscription Is an AI Fabrication . [online] Published 13 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iranian-missile-dedicated-to-epstein Top Of Page [AI Related Articles] The Risk of Automation of Disinformation Campaigns An arXiv   study  explores how AI agents could coordinate online information operations, showing that advances in large language models and generative AI may enable influence campaigns to become largely automated, with networks of AI agents producing and amplifying content with minimal human supervision. Traditionally, such operations relied on human organizers and simple tactics, such as synchronized posts or hashtag flooding, to create the illusion of widespread support. Using a simulation, researchers created a social media environment in which AI agents promoted a political candidate and disseminated a campaign hashtag. The results showed that even minimal awareness among AI agents, such as knowing who their teammates are, can trigger strong coordination. The agents began amplifying one another’s posts, repeating similar narratives, and thereby increasing the spread and engagement of their messages. These behaviors mirror real-world disinformation tactics, where coordinated accounts attempt to manipulate algorithms and make certain views appear more popular or widely accepted than they actually are. 
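The amplification dynamic described above can be illustrated with a deliberately simplified sketch (our own construction, not the paper’s simulation code; all names are hypothetical): agents that merely know who their teammates are repost one another, so a coordinated team accumulates far more apparent engagement than the same number of independent accounts.

```python
# Toy model of emergent amplification among team-aware agents.
# Hypothetical names and scoring, for illustration only.

class Agent:
    def __init__(self, name, teammates=()):
        self.name = name
        self.teammates = set(teammates)

def run_round(agents):
    """Each agent posts once; team-aware agents also repost their teammates."""
    engagement = {a.name: 1 for a in agents}   # one original post each
    for agent in agents:
        for mate in agent.teammates:
            engagement[mate] += 1              # amplification of a teammate's post
    return engagement

# A coordinated trio vs. three independent accounts
team = ["t1", "t2", "t3"]
agents = [Agent(n, teammates=[m for m in team if m != n]) for n in team]
agents += [Agent(n) for n in ("solo1", "solo2", "solo3")]

engagement = run_round(agents)
team_reach = sum(engagement[n] for n in team)                      # 9
solo_reach = sum(engagement[n] for n in ("solo1", "solo2", "solo3"))  # 3
print(team_reach, solo_reach)
```

Even in this minimal form, purely structural coordination triples the team’s apparent engagement without any extra content being produced, which is why coordinated accounts can make a narrative look more popular than it is.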
The findings highlight the growing risk that automated networks of AI agents could scale disinformation campaigns more quickly and efficiently than traditional bot operations. Source:  arXiv, G. M. Orlando, J. Ye, V. La Gatta, M. Saeedi, V. Moscato, E. Ferrara, L. Luceri. Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations . [online] Published 2025. Available at: https://arxiv.org/pdf/2510.25003 Top Of Page AI’s Influence on Journalism The European External Action Service (EEAS)   reports on an event organised by Europe House titled “Journalism and Technology: How is AI changing newsrooms?”, which explored how artificial intelligence is transforming the way news is produced, distributed, and consumed. While AI can assist journalists with tasks such as data analysis, translation, research, and content creation, experts warned that the rapidly evolving digital environment also increases the risk of inaccurate information and disinformation. Participants emphasized that although AI can improve newsroom efficiency, human journalists remain essential for accountability, fact-checking, and ethical decision-making. Experts highlighted that AI should be used as a supporting tool rather than a replacement for core journalistic skills such as reporting, investigation, and verification. The discussion also focused on how news organizations must adapt to new technologies while protecting democratic values. Source:  EEAS. Europe House hosts discussion on the use of artificial intelligence in newsrooms and media ethics . [online] Published 11 March 2026. 
Available at: https://www.eeas.europa.eu/delegations/bosnia-and-herzegovina/europe-house-hosts-discussion-use-artificial-intelligence-newsrooms-and-media-ethics_en Top Of Page [General Reports] Disinformation Campaigns Target Human Rights Defenders in Guatemala A Global Witness   investigation documents how coordinated disinformation campaigns on social media have been used to attack and criminalize human rights defenders in Guatemala. A prominent example is the case of Luis Pacheco, a Maya K’iche’ leader and former deputy energy minister who was arrested in 2025 on charges of terrorism and sedition. Before and after his arrest, thousands of posts on platforms such as Facebook, X, and TikTok accused him of corruption, terrorism, and electoral fraud. Many of these allegations were spread by accounts known for disseminating false information, anonymous profiles, and figures associated with political and military elites. According to the report, these online campaigns aim to damage activists' reputations, create distrust within their communities, and make it easier to justify legal persecution. Disinformation has been particularly prominent during key political moments, including efforts to overturn the results of the 2023 national election and ongoing conflicts over Indigenous land rights. Critics argue that companies such as Meta, X, and TikTok have failed to prevent the weaponization of their platforms for harassment and misinformation, leaving activists more vulnerable to intimidation and criminalization. Source:  Global Witness. Weaponising social media . [online] Published 2026. 
Available at: https://gw.hacdn.io/media/documents/Weponising_Social_Media_Guatemala_-_March_2026.pdf Top Of Page [Appendix - Frameworks to Counter Disinformation] Study Proposed Aid System Resilient to Disinformation During Disasters The journal Computers & Industrial Engineering presents a study showing how an optimization model for humanitarian logistics combines trucks and drones to deliver aid more fairly and efficiently in disaster zones. The system, called the Multi-Truck and UAV Routing Problem (MTURP), aims to minimize both the total travel distance and the time gap between the first and last deliveries, ensuring that assistance reaches all affected communities as evenly as possible. Trucks transport supplies and deploy drones to complete last-mile deliveries, enabling aid to reach areas where damaged infrastructure or flooded roads make ground transport difficult or impossible. A key aspect of the model is its ability to operate under conditions of disruption and in the presence of uncertain or misleading information. The researchers highlighted that disasters can be accompanied by disinformation or false service requests, which may distort priorities and lead to misallocation of resources. To address this challenge, the framework incorporates community vulnerability data, using the Social Vulnerability Index, to guide equitable distribution and maintain fairness even when information reliability is uncertain. The model was tested in two case studies simulating flood scenarios in Hoboken, New Jersey, and Hopkins County, Kentucky. Results showed that the proposed optimization approach yields near-optimal routing solutions while remaining scalable for large-scale operations. Source:  ScienceDirect, R. T. Khameneh, N. Ghorbani-Renani & J. E. Ramirez-Marquez, Computers & Industrial Engineering, Volume 213, 111786. Multi-objective optimization of a truck–drone delivery system for fair and efficient humanitarian logistics under disruption and disinformation . 
[online] Published March 2026. Available at: https://www.sciencedirect.com/science/article/abs/pii/S0360835225009325 Top Of Page CFTC’s Selig Said Prediction Markets Could Help Counter Disinformation Politico reports that, at an event organized by the Futures Industry Association in Florida, U.S. Commodity Futures Trading Commission Chair Michael Selig said prediction markets could serve as tools to counter disinformation by providing publicly generated forecasts about major events. He argued that more Americans are turning to prediction markets to understand issues ranging from elections to economic developments and suggested that combining these markets with blockchain technology could create decentralized systems that help verify information and challenge misleading narratives circulating online. Prediction markets allow users to trade contracts based on the likelihood of real-world events, generating odds that reflect collective expectations. Supporters claim these platforms can be more accurate than traditional forecasting methods such as opinion polls. However, critics question whether the participants represent the broader public and whether their predictions can reliably reflect reality. Source:  Politico, D. Harty. CFTC’s Selig: Prediction markets can offer a check on ‘disinformation’ . [online] Published 9 March 2026. Available at: https://subscriber.politicopro.com/article/2026/03/cftcs-selig-prediction-markets-can-offer-a-check-on-disinformation-00819232 Top Of Page Detecting AI Content Farms and Online Disinformation NewsGuard   announced it has launched a new system to detect "AI Content Farms", websites that mass-produce AI-generated articles that are often inaccurate or misleading. Developed with Pangram Labs, the tool combines automated AI detection with expert human analysis to identify sites that publish large amounts of AI-written content without clearly disclosing it. 
So far, the system has identified more than 3,000 AI Content Farm websites, and the number is growing rapidly, with hundreds of new sites appearing each month. These websites frequently imitate legitimate news outlets and publish dozens of articles daily, often spreading false claims about public figures, companies, health issues, or politics. In some cases, disinformation from these sites has been amplified by larger media networks or social media accounts, increasing its reach. Some AI Content Farms have also been linked to foreign influence operations, including networks connected to Russia that publish misleading content disguised as local news sources in the United States and Europe. The new detection system aims to protect readers, advertisers, and online platforms from the growing flood of AI-generated disinformation and low-quality content. By identifying these sites in real time, companies can avoid placing advertisements on them and reduce the financial incentives that allow disinformation networks and propaganda operations to spread misleading narratives online. Source:  NewsGuard. NewsGuard Launches Real-time “AI Content Farm” Detection Datastream to Counter Onslaught of AI Slop in News . [online] Published 12 March 2026. Available at: https://www.newsguardtech.com/press/newsguard-launches-real-time-ai-content-farm-detection-datastream-to-counter-onslaught-of-ai-slop-in-news/ Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue escalating in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. 
Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Mad MAX: The Kremlin’s Attempt at Complete Information Control

Telegram and the Authoritarian Control Paradox  Authoritarian regimes increasingly seek information dominance by nationalizing or tightly regulating digital communication platforms. Yet attempts to exert comprehensive narrative control frequently encounter a structural constraint: the same platforms these regimes seek to suppress are operationally indispensable.   Russia’s ongoing effort to suppress Telegram exemplifies this paradox. The popular messaging platform simultaneously serves as Russia’s top propaganda distribution channel, a battlefield communication medium, and a vector for hostile influence campaigns (HICs). At the same time, it remains a largely uncontrolled information space capable of exposing clandestine military activity and information operations, facilitating internal dissent, and enabling intelligence collection.  As the Kremlin escalates pressure on Telegram, promoting a state-controlled alternative messenger platform, it exposes the inherent tension between operational pragmatism and the innate need for complete information control, which characterizes authoritarian regimes’ information governance policies.  Asserting Information Dominance  Modern information dominance is pursued through legal, technical, and infrastructural measures designed to consolidate control over the domestic information environment. China, Iran, and Russia have each implemented increasingly restrictive policies aimed at regulating digital communication. These measures include platform bans, strict censorship, content moderation, legislation that restricts online expression, and even outright internet shutdowns.  
Despite variations in implementation, these approaches all share three key strategic objectives:  Consolidating state oversight over the national information space  Limiting channels for adversarial influence and foreign intelligence activity  Restricting the outward flow of politically or militarily sensitive information  In that context, messaging platforms such as WhatsApp and Telegram have become frequent targets of regulatory limitations, technical disruption, or formal bans.  However, these same platforms are widely used by state actors and state-aligned threat actors to conduct influence operations, espionage, and cyber-enabled activities. Telegram in particular has emerged as a central infrastructure for HICs and hybrid operations, due to its popularity, anonymity features, and permissive moderation standards.  Russia vs. Telegram  Telegram occupies a uniquely ambiguous position in Russia’s information ecosystem. It functions simultaneously as a vibrant domestic media environment, a practical communications tool for military forces, and a major source of tactical intelligence.  Because of the platform’s unregulated nature, Russian authorities have increasingly moved to constrain it. As of February 2026, the state communications regulator Roskomnadzor introduced a series of phased restrictions against Telegram, citing the company’s alleged noncompliance with domestic legislation, refusal to remove content deemed extremist, and failure to comply with data localization requirements. As of early March 2026, Telegram still remains accessible to the overwhelming majority of Russian users. Authorities have demonstrated a sustained pattern of selective, regionally targeted throttling, consistent with their broader strategy of graduated digital coercion.  In response, Telegram's founder, Pavel Durov, publicly defied coercive regulatory pressure, reaffirming the platform's commitment to user privacy; Russian courts, in turn, imposed punitive financial penalties on Telegram.   
The escalation intensified on February 24, 2026, when Russian authorities opened a criminal case against Durov himself on charges of "aiding terrorism", a charge Durov publicly rejected on his @durov Telegram channel, where he described the Russian state's conduct as the daily fabrication of "new pretexts to restrict Russians' access to Telegram" in pursuit of suppressing "the right to privacy and free speech," characterizing it as "a sad spectacle of a state afraid of its own people".    Posts by Telegram founder Pavel Durov responding to Russia’s regulatory actions against the platform.  A State-Sponsored Usurper  In parallel with this pressure, the Kremlin has promoted a domestic alternative to foreign messaging platforms: MAX, a messenger platform developed by VK (formerly Mail.ru Group).  The MAX messenger app is listed on the Google Play Store, showing over 50 million downloads as of March 2026. Designed as a multifunctional “super-app” comparable to China’s WeChat, MAX integrates messaging, social networking, financial services, and e-government functions into a single state-aligned ecosystem. Since 2025, the application has been mandatorily preinstalled on new devices sold within Russia.  Feature comparison of Russia's Max (VK) and China's WeChat (Tencent) across communication capabilities, platform ecosystem, and international reach. Data current to early 2025.  In comparison with Telegram, assessments regarding MAX’s actual adoption within Russia differ greatly. VK has reported more than 50 million registered users by late 2025, roughly corresponding to the number of downloads recorded on the Google Play Store. Independent estimates, however, suggest that actual active usage may be significantly lower.  Comparison of four messaging platforms across privacy and state-control indicators. Data as of early 2025. 
Regardless of precise adoption rates, the Kremlin’s strategic objective is clear: replace unruly communication platforms with a domestically governed digital infrastructure that allows the state complete visibility and control over public discourse and financial transactions.  Why Telegram Still Matters Battlefield Communications and Intelligence   At the tactical level, Telegram has become a primary conduit for Russian battlefield coordination. In fact, it is so indispensable to Russian forces that the Minister for Digital Development explicitly exempted usage in warzone areas (i.e., the Ukrainian front) from the nationwide restrictions.  This dependency has recently been exposed and worsened by Ukraine's successful degradation of Russian Starlink access at the front, which deepened Russia's reliance on the platform at a moment when state authorities are escalating regulatory pressure.  Aside from its benefits for combat communications, Telegram is also used for intelligence-gathering purposes:  It is considered a feed of open-source intelligence (OSINT), where troop movements, battlefield imagery, and operational details circulate across channels accessible to any actor.   Its open-access structure, which permits users to contact any other user without prior connection, lends itself to the deployment of botnets for passive intelligence collection or proactive human intelligence (HUMINT) operations.  A Disinformation Hotbed  According to multiple reports, Telegram has become a primary distribution channel for Russian propaganda and HICs targeting both domestic and foreign information environments.  Top 5 monitored Russian bloggers’ Telegram channels, between March 2025 and March 2026. This view showcases the extensive available visibility into channels’ reach and engagement. Courtesy of Alliance for Securing Democracy at the German Marshall Fund.  
The European Union’s ban on several Russian state-aligned media outlets solidified Telegram’s role as a key platform for continuous narrative dissemination across European audiences. These ongoing activities are not limited to official state channels. A whole ecosystem of proxy accounts, alternative media networks, and coordinated messaging campaigns amplifies Russian narratives across multiple languages and platforms.  Network mapping of Russian-affiliated pseudo-Ukrainian Telegram channels (January 2026). Courtesy of OpenMinds. In October 2023, Foundation for Defense of Democracies (FDD) adjunct fellow Ari Ben Am uncovered a campaign that targeted Ukrainian soldiers with coordinated surrender narratives disseminated across Telegram, WhatsApp, Signal, and Facebook. According to his findings, the manipulative messaging originated from Russian Telegram channels and was subsequently amplified in both German and French, exemplifying the multilingual, cross-platform reach of Russia's Telegram channels.  Another prominent example is the Telegram channel “War on Fakes.” Although it presents itself as an independent fact-checking initiative debunking disinformation, the channel consistently promotes pro-Kremlin narratives. With more than 410,000 followers, the channel has repeatedly circulated misleading claims about the war in Ukraine.  "War on Fakes" active Telegram channels in Russian (left) and English (right). A Comparative Model  Like Russia, the Islamic Republic of Iran has also pursued a sovereign national internet model designed to reduce dependence on foreign digital infrastructure. However, Iran’s system still relies on certain Western services. A January 28, 2026, report by Filter Watch indicated that authorities gradually permitted access to select platforms such as Google, Bing, and ChatGPT due to the absence of domestic alternatives.  International platforms whitelisted by Iranian authorities, as of January 28, 2026. Courtesy of Filter Watch. 
However, according to Filter Watch: “Regarding social media and messaging platforms, Instagram, Telegram, YouTube, and X remain accessible only through circumvention tools and continue to suffer from instability.” Telegram itself has been formally banned in Iran since 2018 following its role in the 2017–2018 protest movement. Nevertheless, as in Russia, the platform has remained widely used among Iranian citizens.

Top 10 countries by number of Telegram channels and user count, as of March 2026. Courtesy of Telemetr.io.

Like Russia, Iran has also used Telegram for hostile influence efforts and intelligence gathering abroad. Following the October 7 attacks, Iranian-linked threat actors reportedly increased the scale of their Telegram-based HUMINT and hybrid operations targeting Israel.

The main difference between Russia and Iran lies in enforcement methods. Historically, Iran has had to resort to nationwide internet shutdowns to assert its information control in times of increased threat to the regime’s stability. The three total blackouts (enacted by Iran in June 2025, January 2026, and March 2026) are emblematic of Tehran’s concerns about uncontrolled information flows. Attempts by journalists and civil society actors to bypass Iran’s January 2026 blackout using Starlink terminals proved largely unsuccessful, illustrating the regime’s ability to enforce physical control over connectivity infrastructure.

In comparison, Moscow has mostly managed to avoid drastic and highly disruptive measures. Instead, it has opted for more targeted techniques (e.g., specific protocol blocking, traffic throttling, and regional restrictions). In addition, Russia has invested in developing substitutes for key foreign (and especially Western) digital platforms, including its own search engine, AI tools, and online communication apps. An important dimension of this infrastructural consolidation is the large-scale deployment of DNS tampering.
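From inside an affected network, this kind of DNS-level blocking is directly observable: a name query for a delisted domain simply returns no answer, as if the domain never existed. A minimal sketch of such a resolution check, using only Python's standard library (the domain list is purely illustrative, and results depend entirely on the resolver of the network you run it from):

```python
import socket

def resolves(domain: str) -> bool:
    """Return True if the local resolver can find any address for `domain`."""
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        # NXDOMAIN or resolver failure: the name is invisible to this resolver.
        return False

# Illustrative check list; outcomes vary by vantage point.
for name in ["example.com", "bbc.com"]:
    print(name, "resolves" if resolves(name) else "does not resolve")
```

Where a domain resolves normally from outside a country but returns no answer from within it, resolver-level tampering is one plausible explanation, though ordinary local resolver failures can produce the same symptom, so such measurements are typically corroborated from multiple vantage points.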
By systematically deleting domain records from the National Domain Name System (NSDI), Russian authorities render targeted domains, including bbc.com, youtube.com, and torproject[.]org, entirely non-existent within the Russian information space.

Strategic Liabilities and the Limits of Information Control

Despite its useful functionality, Telegram is perceived by Russian authorities as a threat. In the long term, it has the potential to facilitate sensitive information leaks while hosting the largest digital space outside the Kremlin’s control. At present, widespread usage of Telegram by the Russian populace creates valuable opportunities for external actors looking to inject narratives into Russian information spaces.

When comparing Russia and Iran, the two authoritarian states present distinct variations of the same structural dilemma. Russia currently preserves selective access to Telegram due to critical dependencies. Iran prioritizes stricter control, even at the cost of nationwide shutdowns and the economic and reputational disruption they generate.

For now, Telegram remains a contested arena within authoritarian information environments: a tool of state power with inherent strategic vulnerabilities for the regimes that rely on it.

The CRC would like to thank Ari Ben Am, FDD’s CCTI Adjunct Fellow and co-founder of Telemetry Data Labs, and Dr. Daria Dergacheva, for their contribution to this analysis.

[References:]

Alliance for Securing Democracy at the German Marshall Fund. (2026). The War in Ukraine: Military Bloggers Dashboard. Retrieved March 10, 2026, from: https://securingdemocracy.gmfus.org/war-in-ukraine-military-bloggers/

BBC, O. Chia & B. Tavener. Russia orders block on WhatsApp in messaging app crackdown. [online] Published 12 February 2026. Available at: https://www.bbc.com/news/articles/clygd10pg5lo

TIME, V. Bergengruen. Telegram Becomes a Digital Battlefield in Russia-Ukraine War.
[online] Published 21 March 2022. Available at: https://time.com/6158437/telegram-russia-ukraine-information-war/

Global Voices, D. Dergacheva. Inside-net: Russia is dismantling free internet connections. [online] Published 7 October 2025. Available at: https://globalvoices.org/2025/10/07/inside-net-russia-is-dismantling-free-internet-connections/

FDD, R. Easter. What Russia’s War on Telegram Means for the West. [online] Published 23 February 2026. Available at: https://www.fdd.org/analysis/2026/02/23/what-russias-war-on-telegram-means-for-the-west/

Filter Watch, N. Keshavarznia. Network Monitoring January 2026: Internet repression in times of protest. [online] Published 24 January 2026. Available at: https://filter.watch/english/2026/01/24/network-monitoring-january-2025-internet-repression-in-times-of-protest-2/

Filter Watch, N. Keshavarznia. A Month of Iran’s Internet: From Regional Disruptions to Total Blackout and Whitelisted Access. [online] Published 28 January 2026. Available at: https://filter.watch/english/2026/01/28/network-monitoring-january-2026-from-regional-disuptions-to-total-blackout-and-whitelisted-access/

Freedom House. Freedom on the Net 2025: Russia. [online] Published 2025. Available at: https://freedomhouse.org/country/russia/freedom-net/2025

Institute for National Security Studies (INSS), N. Yasur & D. Citrinowicz. Iranian Foreign Information Manipulation and Interference During the Swords of Iron War. [online] Published 12 November 2024. Available at: https://www.inss.org.il/publication/iran-influence/

IStories, L. Lemyasova. The MAX state messenger announced 45 million users. These are probably mostly “dead souls”. [online] Published 15 October 2025. Available at: https://istories.media/news/2025/10/15/gossmessendzher-max-zayavil-o-45-mln-polzovatelyakh-veroyatno-v-osnovnom-eto-mertvie-dushi

FDD, D. Shapiro & K. Korkiya. SpaceX Blocks Russia’s Starlink Access but Ubiquiti’s Tech Remains Ubiquitous.
[online] Published 6 February 2026. Available at: https://www.fdd.org/analysis/2026/02/06/spacex-blocks-russias-starlink-access-but-ubiquitis-tech-remains-ubiquitous/

OpenAI. Disrupting Malicious Uses of AI Report. [online] Published 25 February 2026. Available at: https://openai.com/index/disrupting-malicious-ai-uses/

OpenMinds. The Kremlin Connection: Mapping Telegram Networks in Russia, Ukraine, and Belarus. [online] Published 8 January 2026. Available at: https://telegram-network.openminds.ltd

M. Wang, P. Lin, J. Knockel, W. Greenberg, J. Mayer & P. Mittal. Proceedings on Privacy Enhancing Technologies, 2025(4), pp. 896–911. What WeChat knows: Pervasive first-party tracking in a billion-user super-app ecosystem. [online] Published 2025. Available at: https://petsymposium.org/popets/2025/popets-2025-0163.pdf

  • Cyber-based hostile influence campaigns 2nd - 8th March 2026

[Introduction]

Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]

[Introduction]
[Report Highlights]
[State Actors]
Russia
- Russia’s Information Control in Ukraine’s Occupied Territories
- Kremlin Used Iran War Disinformation to Target Ukraine and France
- Pro-Kremlin Disinformation Targeted Middle East War Narratives
China
- Leaked Documents Revealed AI-Driven Chinese Influence Operations
- A Case Study of PRC Influence in Malaysia and Singapore
Iran
- Misleading War Images Exaggerated Iran’s Military Success
- Cyber and Information Operations Expand Pressure on Iran’s Regime
- Iran Launched a Disinformation Campaign Following War with US and Israel
- AI Video Showing Massive Destruction in Tel-Aviv Spread by Pro-Iran Accounts
- The Online Information Environment Around the Iran War
- Hacktivist Cyber Campaign Escalates Following Operation Epic Fury
[AI Related Articles]
- Google’s AI Overviews Amplified Iran-Related Hoaxes
- AI Fueled Election Interference Across Three States
- AI-Assisted Russian Influence Network Targets Africa
[Cyfluence Attacks]
- Iran Expected to Launch Cyber Attacks Worldwide Following US and Israeli Strikes
- Middle East Conflict Triggered Surge in Global Cyber Activity
[Appendix - Frameworks to Counter Disinformation]
- EU and France Rethink Strategy to Counter Russian Disinformation
[CRC Glossary]

[Report Highlights]

In a recent report, DoubleThink Lab reveals that leaked internal documents from the Chinese technology company GoLaxy outline plans to build advanced AI-powered systems to automate the manipulation of information.
A study by DoubleThink Lab examining the People’s Republic of China’s influence in Southeast Asia found that both Malaysia and Singapore face growing exposure to Chinese political, economic, and technological influence.

Iranian state media have intensified disinformation efforts following the outbreak of war with the US and Israel, spreading false claims about battlefield successes and using manipulated or misleading visuals. As identified by NewsGuard’s Reality Check, at least 18 false claims have circulated since the conflict began.

A report issued by Radware states that following Operation Epic Fury, coordinated pro-Iranian and allied hacktivist groups launched widespread cyber campaigns, including DDoS attacks, phishing operations, and hack-and-leak claims, targeting government and critical infrastructure across the Middle East while amplifying political messaging online.

Graphika’s analysis finds that AI played a central role in influence operations targeting elections in Bangladesh, Colombia, and Tibet, enabling the rapid production and amplification of synthetic political content across online platforms.

Le Monde reports that the EU and France are reassessing their strategies to counter rising Russian disinformation ahead of major elections in 2027, including the French presidential vote and parliamentary elections in Italy.

[State Actors]

Russia

Russia’s Information Control in Ukraine’s Occupied Territories

An article by EUvsDisinfo reports that since the start of Russia’s occupation of parts of Ukraine in 2014 and the full-scale invasion in 2022, Russian-installed authorities have built a comprehensive system of information control in Ukraine’s occupied regions. An estimated 5–6 million residents now live in an information environment largely cut off from Ukrainian media and dominated by Russian state narratives.
Ukrainian outlets have been shut down or replaced with pro-Kremlin media, while telecommunications infrastructure has been “Russified”, shifting networks to Russian operators that enable traffic filtering, website blocking, and internet disruptions. Authorities have also restricted access to major platforms and promoted state-controlled messaging apps to expand surveillance and propaganda. Ukrainian journalists have been detained or forced to leave, while new media outlets and Telegram channels simulate “normal life” under occupation even as they spread Kremlin narratives. Russian authorities have also targeted language and education as tools of control, eliminating Ukrainian-language schooling, removing Ukrainian books from libraries, and imposing Russian as the dominant language in public life. Arrests, surveillance, and online monitoring have created a climate of fear that encourages widespread self-censorship among residents. Despite these pressures, forms of quiet resistance persist. Some residents use VPNs to access Ukrainian information, maintain Ukrainian language and culture at home, or continue education through Ukrainian online schools.

Source: EUvsDisinfo, E. Beketova. Russia’s Information Grip on Ukraine’s Occupied Territories. [online] Published 2 March 2026. Available at: https://euvsdisinfo.eu/russias-information-grip-on-ukraines-occupied-territories/

Kremlin Used Iran War Disinformation to Target Ukraine and France

As reported by NewsGuard’s Reality Check, the Russian influence operation Matryoshka has used the ongoing conflict involving Iran to spread fabricated reports aimed at discrediting Ukraine and its allies, particularly France. The campaign produced at least 9 false claims during the conflict triggered by coordinated strikes by the US and Israel against Iranian targets. The reports mimicked credible media outlets and research institutions to make the claims appear legitimate.
Among the false narratives was a fake video attributed to the Institute for the Study of War claiming that an Iranian missile killed 70 French soldiers at a base in Abu Dhabi. Another report falsely attributed to Le Point alleged that Ukrainian phone scammers stole millions of euros from French citizens by exploiting the crisis. Additional claims attempted to portray Ukrainian military leadership as corrupt, including a story falsely suggesting that Ukraine’s army chief, Oleksandr Syrskyi, owned a luxury property in Dubai that was destroyed by an Iranian missile.

Source: NewsGuard Reality Check, A. Lee. Russia Uses Iran War to Plant Fake Reports Discrediting Ukraine And France. [online] Published 4 March 2026. Available at: https://www.newsguardrealitycheck.com/p/kremlin-creates-iran-war-fakes-to

Pro-Kremlin Disinformation Targeted Middle East War Narratives

An article by EUvsDisinfo showed that pro-Kremlin media outlets have circulated several misleading narratives related to the war in the Middle East. One widely promoted claim, attributed to the Russian Foreign Intelligence Service, alleged that France and the UK planned to provide Ukraine with a nuclear or “dirty” bomb. Russian state media repeated the accusation across multiple languages, despite offering no evidence, while officials in Ukraine, France, and the UK rejected the claims. Other pro-Kremlin narratives accused Ukraine and the British intelligence agency MI6 of plotting to sabotage Black Sea energy infrastructure, including the Turkish Stream and Blue Stream gas pipelines. These allegations were largely based on statements by Putin and echoed previous unsupported accusations made by Moscow during the war in Ukraine. At the same time, pro-Kremlin outlets promoted the claim that Gulf states such as Qatar, Saudi Arabia, and the United Arab Emirates had asked Russia to serve as the guarantor of peace in the Middle East.
Reports indicated that the diplomatic contacts were initiated by Moscow itself.

Source: EUvsDisinfo. How Russia lies about the war in the Middle East. [online] Published 6 March 2026. Available at: https://euvsdisinfo.eu/how-russia-lies-about-the-war-in-the-middle-east/

China

Leaked Documents Revealed AI-Driven Chinese Influence Operations

A 399-page leak analyzed by DoubleThink Lab reveals that the Chinese technology company GoLaxy planned to build advanced AI-powered systems to automate the manipulation of foreign information. The documents describe a “smart propaganda” platform capable of monitoring political trends, identifying targets, generating tailored propaganda, and distributing it through large networks of realistic fake social media accounts. The company, which originated from the Institute of Computing Technology under the Chinese Academy of Sciences, aims to use AI and data analysis to influence public opinion abroad more effectively than previous campaigns, often associated with spam-like messaging. The system gathers large amounts of political and social data, profiles influential individuals and organizations, and generates customized content that can be automatically posted by thousands of “persona bots” across platforms such as Facebook, X, and Telegram. The documents also reveal detailed monitoring programs targeting regions and political issues sensitive for China, including extensive datasets on political figures and organizations in Taiwan, as well as monitoring of political actors and institutions in the US and across the Indo-Pacific. These systems are designed to track elections, analyze public sentiment, and identify opportunities to shape narratives or deepen political divisions. The leak also highlights cooperation between GoLaxy and organizations linked to the Chinese government and military, including the Cyberspace Administration of China and the Central Military Commission.

Source: DoubleThink Lab.
The Rise of AI in PRC Influence Operations: Nine Takeaways from the GoLaxy Documents. [online] Published 4 March 2026. Available at: https://medium.com/doublethinklab/the-rise-of-ai-in-prc-influence-operations-nine-takeaways-from-the-golaxy-documents-2d6617a753e5

A Case Study of PRC Influence in Malaysia and Singapore

A study by DoubleThink Lab examining the People’s Republic of China’s influence in Southeast Asia found that both Malaysia and Singapore face growing exposure to Chinese political, economic, and technological influence. According to the Doublethink Lab China Index 2024, several Southeast Asian countries ranked among the most affected globally. In Malaysia, ties with Beijing have strengthened under Prime Minister Anwar Ibrahim, including expanded economic cooperation, visa policies, and agreements in media, telecommunications, and technology. Public sentiment toward China has also grown more positive, partly due to geopolitical tensions such as the Gaza war. At the same time, concerns remain about economic competition, security risks, and territorial disputes in the South China Sea. Malaysia has signed multiple cooperation agreements with Chinese state-linked media and technology companies, raising questions about potential narrative influence and information sharing. In Singapore, influence efforts were evident in cultural, educational, and media exchanges, as well as on Chinese social media platforms and diaspora networks. Authorities have also faced cyber incidents attributed to Chinese-linked groups and instances of online disinformation, including AI-generated content spreading false claims about political divisions within the ruling People’s Action Party and its leader, Prime Minister Lawrence Wong. Despite expanding engagement with Beijing, both countries maintain complex and cautious relationships with China.
Singapore continues to balance ties with both China and the US, while Malaysia’s closer cooperation with Beijing coexists with concerns over sovereignty and economic dependence.

Source: DoubleThink Lab. PRC Influence on ASEAN Countries in 2025: The Case Study of Malaysia and Singapore. [online] Published 3 March 2026. Available at: https://medium.com/doublethinklab/prc-influence-on-asean-countries-in-2025-the-case-study-of-malaysia-and-singapore-48d84c1e71c1

Iran

Misleading War Images Exaggerated Iran’s Military Success

As coordinated strikes on 28 February 2026 escalated tensions between the US, Israel, and Iran, an analysis by NewsGuard’s Reality Check documented how misleading images and videos began circulating widely on social media. At least three viral pieces of visual misinformation collectively garnered more than 21.9 million views on X, falsely portraying Iran as achieving major military victories. In reality, each example relied on unrelated or old footage. A widely shared image allegedly showing the USS Abraham Lincoln sinking was actually a photo of the scuttling of the decommissioned USS Oriskany in 2006. Another viral video claimed to depict an Iranian strike on Israel’s Negev Nuclear Research Center but was traced to a 2017 ammunition depot explosion in Balakliya, Ukraine. A third clip circulated as footage of a missile strike on a CIA headquarters in Dubai, United Arab Emirates, but actually showed a residential tower fire in nearby Sharjah in 2015.

Source: NewsGuard Reality Check, L. Marchl & I. Blachez & S. Rubinson. Amid U.S. and Israeli Attack, Phony Images and Videos Make It Look Like Iran Has Had Big Wins. [online] Published 2 March 2026.
Available at: https://www.newsguardrealitycheck.com/p/iran-is-winning-according-to-misleading

Cyber and Information Operations Expand Pressure on Iran’s Regime

Alongside US and Israeli airstrikes on Iran, a parallel campaign has targeted the country’s digital infrastructure and information environment. A report by the Australian Strategic Policy Institute (ASPI) described this strategy as a fusion of cyber and kinetic operations designed to disrupt communications, weaken situational awareness, and pressure political decision-making. By targeting telecommunications, state media systems, and online services, cyber tools can create uncertainty within government institutions while simultaneously affecting how the public perceives events on the ground, though none of these effects guarantees regime collapse. The campaign reportedly included attempts to reach civilians directly through digital platforms. One example cited is the alleged hacking of the widely used Iranian prayer and calendar application BadeSaba, which sent push notifications urging users and military personnel to surrender. Similar approaches have appeared in the past, including in Russian digital influence operations.

Source: ASPI, J. Bassi & J. Corera. Beyond bombs: cyber and information operations targeting Iran’s regime. [online] Published 2 March 2026. Available at: https://www.aspi.org.au/strategist-posts/beyond-bombs-cyber-and-information-operations-targeting-irans-regime/

Iran Launched a Disinformation Campaign Following War with US and Israel

Iranian state media have intensified disinformation efforts following the outbreak of war with the US and Israel, spreading false claims about battlefield successes and using manipulated or misleading visuals.
As identified by NewsGuard’s Reality Check, at least 18 provably false war-related claims linked to Iran-aligned sources have circulated since the conflict began, more than triple the number recorded in the two weeks before the attacks. Several claims relied on altered or misleading images and videos. One example involved an AI-manipulated satellite image shared by the Iranian state outlet Tehran Times that falsely depicted the destruction of a US radar system at Al Udeid Air Base. In another case, Iranian channels circulated a video claiming that Iranian forces had shot down a US fighter jet over Tehran. However, the footage actually showed an Iranian aircraft destroyed by the Israeli Air Force. The spread of these narratives has been amplified by Iran’s near-total internet shutdown, which reduced connectivity by about 98 percent and limited access to outside information.

Source: NewsGuard Reality Check, W. Rizvi. Iran’s State Media Bombards Iranians with False Claims and Misleading Visuals. [online] Published 5 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iran-goes-on-a-disinformation-offensive

AI Video Showing Massive Destruction in Tel-Aviv Spread by Pro-Iran Accounts

An investigation by NewsGuard’s Reality Check found that a viral video circulating on social media falsely claimed to show a dozen Iranian missiles striking a residential neighborhood in Tel Aviv on 3 March 2026, supposedly bypassing Israel’s missile defense system and causing massive destruction. The footage spread widely across platforms, gaining tens of millions of views after being promoted by pro-Iran and anti-Israel accounts, some of which described it as verified “breaking” footage of a major Iranian attack. The investigation determined that the video had actually been generated using AI. Analysis using the AI-detection tool Hive, along with visual inconsistencies in the footage, revealed that it was fabricated.
For example, the street shown in the clip appeared completely deserted despite the supposed daytime missile barrage, and the siren heard in the background did not match authentic Israeli missile alert sirens. When queried about the clip, Grok described the video as authentic citizen footage, while Google AI Overviews suggested it depicted a real military engagement. In reality, while one Iranian missile did strike Tel Aviv on 28 February 2026, causing injuries and one death, the large-scale destruction shown in the viral video never occurred.

Source: NewsGuard Reality Check, I. Chomnalez. Debunk: Video Showing Massive Destruction in Tel Aviv Is an AI Fabrication Spread by Pro-Iran Accounts. [online] Published 6 March 2026. Available at: https://www.newsguardrealitycheck.com/p/dozens-of-missiles-struck-tel-aviv

The Online Information Environment Around the Iran War

Following US and Israeli airstrikes on Iran, researchers at Graphika tracked the evolving online information environment surrounding the conflict. Their monitoring highlighted several parallel developments, including influence operations, hacktivist mobilization, and the spread of false or misleading information across social media platforms. Pro-China state media outlets such as CGTN and China Daily, along with Chinese diplomatic accounts, have published videos, memes, and official statements portraying the US and Israel as aggressors while expressing support for Iran and calling for de-escalation. At the same time, disinformation has circulated widely online, including coordinated posts falsely claiming that Donald Trump and Benjamin Netanyahu had been killed in the conflict. These claims spread across multiple platforms using copied messages and repurposed images. Several pro-Iranian hacktivist groups linked to Iranian security services, including actors associated with the Islamic Revolutionary Guard Corps, reactivated and claimed attacks against regional and Western targets.
Meanwhile, a coalition of hacktivist groups mobilized within hours of the strikes, calling for cyber operations against US and Israeli interests, though many of the claimed attacks remain unverified. Researchers also observed pre-strike messaging from pro-Iranian and pro-Russian networks aimed at discouraging US military action and warning of possible retaliation.

Source: Graphika. Iran War: Monitoring the Online Threat Landscape. [online] Published 6 March 2026. Available at: https://graphika.com/posts/iran-war-monitoring-the-online-threat-landscape

Hacktivist Cyber Campaign Escalates Following Operation Epic Fury

A report issued by Radware states that following the launch of the U.S.–Israeli military offensive “Operation Epic Fury” on February 28, 2026, a rapid surge of hacktivist cyber activity emerged across the Middle East. According to the Radware report, hacktivist groups mobilized within nine hours of the initial strikes. They launched coordinated distributed denial-of-service (DDoS) campaigns targeting government institutions, financial systems, and telecommunications providers. The activity was highly concentrated, with two groups, Keymous+ and DieNet, responsible for nearly 70% of all attack claims between February 28 and March 2. In total, nine hacktivist collectives claimed 107 attacks against 81 organizations across eight countries, with government institutions accounting for approximately 53% of all targets. The campaign focused primarily on Kuwait, Israel, and Jordan, which together represented more than 76% of attack claims, indicating a strategic effort to disrupt state infrastructure and maximize political visibility during the regional escalation.
A separate threat analysis highlights the broader cyber and influence ecosystem surrounding the conflict, noting that although Iranian state cyber capabilities were likely temporarily constrained by severe internet outages and disruptions to command structures, external actors rapidly intensified cyber activity. According to the second report, numerous pro-Iranian, pro-Palestinian, and pro-Russian hacktivist groups, including Handala Hack, APT Iran, Cyber Islamic Resistance, Dark Storm Team, and 313 Team, conducted disruptive operations, including DDoS attacks, hack-and-leak campaigns, website defacements, and attempts to access industrial control systems. The report also identifies influence-oriented tactics, including phishing campaigns using a malicious replica of Israel’s Home Front Command RedAlert application to deliver surveillance malware, intimidation campaigns targeting critics of Iran, and public claims of system compromises intended to generate psychological pressure and shape public narratives during the conflict. Taken together, the two analyses describe a coordinated cyber escalation accompanying the kinetic conflict. Both reports emphasize that a small number of ideologically aligned hacktivist networks dominate the activity, often publicizing operations through Telegram and other online platforms to amplify political messaging. The participation of external actors, including pro-Russian groups such as NoName057(16), suggests the emergence of a broader transnational cyber campaign in which disruptive attacks, exaggerated breach claims, and information operations operate alongside military confrontation to destabilize infrastructure, influence public perception, and expand the digital dimension of the conflict.

Source: Radware. Retaliatory Hacktivist DDoS Activity Following Operation Epic Fury/Roaring Lion. [online] Published 3 March 2026.
Available at: https://www.radware.com/security/threat-advisories-and-attack-reports/ddos-activity-following-operation-epic-fury-roaring-lion/

Unit 42, Palo Alto Networks. Threat Brief: March 2026 Escalation of Cyber Risk Related to Iran. [online] Published 2 March 2026. Available at: https://unit42.paloaltonetworks.com/iranian-cyberattacks-2026/

[AI Related Articles]

Google’s AI Overviews Amplified Iran-Related Hoaxes

An investigation by NewsGuard’s Reality Check found that Google’s AI-generated search summaries were repeating false claims about images linked to the conflict involving Iran. In several cases, the summaries presented unverified claims as plausible explanations, even when the images or videos were unrelated to the conflict or created using AI. NewsGuard identified multiple examples. Google’s AI summary described a video from a 2015 residential fire in the United Arab Emirates as a possible missile strike on a CIA-linked building in Dubai. In another case, an AI-generated video allegedly showing multiple Iranian missiles striking Tel Aviv was summarized as depicting a real military escalation. Similarly, footage posted weeks earlier was mischaracterized as evidence that the US Embassy in Riyadh had been destroyed in a drone attack, despite verified footage showing only minor damage. Although Google’s reverse-image search can still help trace the origins of images through its “Exact Matches” feature, the AI-generated summary appears at the top of search results and may shape users’ first impressions.

Source: NewsGuard Reality Check, S. Rubinson & I. Chomnalez. Google’s AI Overviews Are Fooled by Iran-Related Visual Hoaxes in its Own Reverse-Image Searches. [online] Published 3 March 2026.
Available at: https://www.newsguardrealitycheck.com/p/googles-ai-overviews-supercharge

AI Fueled Election Interference Across Three States

An analysis by Graphika found that AI played a central role in influence operations targeting elections in Bangladesh, Colombia, and Tibet. Across the three cases, coordinated networks used AI tools to generate deepfake videos, produce political content in multiple languages, and automate the mass distribution of posts through inauthentic accounts designed to appear like genuine users. Ahead of Bangladesh’s February 2026 election, AI-generated videos circulated online showing fabricated authority figures accusing political actors; fact-checkers later identified the clips as fake. In the Tibetan diaspora election for the Central Tibetan Administration, accounts linked to the pro-China “Spamouflage” network published AI-generated articles and cartoons in both English and Tibetan to discredit candidates. Meanwhile, in Colombia’s parliamentary elections, a network of automated accounts flooded social media with thousands of posts targeting President Gustavo Petro, amplifying both pro- and anti-government narratives to create confusion and information overload. AI significantly lowered the cost and effort required to produce persuasive political content and coordinate large-scale influence campaigns. As a result, election interference is becoming easier to conduct, more difficult to detect, and increasingly focused on eroding public confidence in democratic institutions rather than simply promoting or attacking specific candidates.

Source: Graphika. Deepfakes, Noise, and Doubt: AI’s Role in Three Recent Elections. [online] Published 4 March 2026.
Available at: https://graphika.com/posts/deepfakes-noise-and-doubt-ai-s-role-in-three-recent-elections

AI-Assisted Russian Influence Network Targets Africa

A pro-Kremlin influence network linked to the media outlet Rybar used ChatGPT to support covert information operations targeting audiences in Africa, according to a threat report by OpenAI. The campaign, dubbed "Fish Food", relied on artificial intelligence to mass-produce multilingual social media content that was later disseminated across Telegram and X through accounts with no declared connection to the outlet. Researchers found that the system was used primarily as a content-generation hub, producing batches of comments and narratives, often from Russian-language prompts, designed to promote pro-Russian messaging, criticize Ukraine, and portray Western countries as interfering actors.

Beyond producing propaganda content, the network reportedly used AI tools to plan and structure influence operations. Prompts requested assistance drafting proposals for covert campaigns in Africa, including strategies to manage coordinated social media accounts, create a bilingual investigative journalism website, arrange paid media placements in French-language outlets, and organize offline events. Additional prompts sought information on electoral systems in Burundi, Cameroon, and Madagascar, including suggestions for mobilizing protests and developing local networks to support election-related influence activities.

Although some of the accounts involved appeared to originate in Russia and Rybar maintains a large Telegram audience, OpenAI stated it found limited evidence that the campaign achieved substantial amplification or measurable real-world impact.

Source: OpenAI. Disrupting malicious uses of AI. [online] Published 25 February 2026.
Available at: https://openai.com/index/disrupting-malicious-ai-uses/

[Cyfluence Attacks]

Iran Expected to Launch Cyber Attacks Worldwide Following US and Israeli Strikes

As reported by Infosecurity Magazine, Iran is highly likely to respond to recent US and Israeli air strikes with cyber-attacks targeting organizations across the Middle East and other regions, according to Google’s chief of cyber threat intelligence, John Hultquist. Speaking at an event hosted by the Royal United Services Institute (RUSI) in London, Hultquist warned that countries hosting US military bases, such as Qatar, Bahrain, Jordan, the United Arab Emirates, and Kuwait, could face particularly aggressive cyber operations. While Iran’s cyber tools may not be new, the scope of potential targets is expected to expand significantly, exposing organizations with weaker cybersecurity defenses.

Hultquist noted that Iran often operates through loosely connected cybercriminal or hacktivist groups, creating plausible deniability for state-backed actions. These groups may appear independent but could actually act as fronts for the Islamic Revolutionary Guard Corps (IRGC). As a result, some incidents labeled as hacktivism or ransomware may in fact be covert state-sponsored cyber campaigns aimed at governments, businesses, and critical infrastructure in the US, Gulf states, and other countries viewed as hostile to Iran.

Source: Infosecurity Magazine, D. Palmer. Expect Iran to Launch Cyber-Attacks Globally, Warns Google Head of Threat Intel. [online] Published 2 March 2026. Available at: https://www.infosecurity-magazine.com/news/iran-cyber-attacks-global-google/

Middle East Conflict Triggered Surge in Global Cyber Activity

Infosecurity Magazine describes the escalating conflict in the Middle East as entering a hybrid phase that combines military operations with large-scale cyber campaigns, raising concerns about spillover risks for organizations worldwide.
The shift followed joint Israeli and US strikes on Iran targeting Iranian leadership, military, and nuclear-related sites. At the same time, a major cyber operation reportedly disrupted Iran’s digital infrastructure, with internet connectivity dropping to about 4% of normal levels. Security experts warned that cyber retaliation is likely to intensify, potentially disguised as criminal activity such as ransomware attacks. Increased activity has already been observed from hacktivist and cybercriminal groups, including HydraC2, Handala, and Sicarii, alongside a surge in DDoS attacks, website defacements, and alleged data breaches targeting the government, banking, aviation, and telecom sectors. More than 150 hacktivist incidents were reported in the days following the strikes.

Experts noted that Iran has previously used cyber operations to retaliate against geopolitical pressure and may deploy tactics designed to obscure state involvement. These could include ransomware attacks followed by destructive data wiping, exploiting long-term espionage access for sabotage, operating through proxy cybercriminal groups, and publicly releasing stolen data to pressure victims.

Source: Infosecurity Magazine, A. Mascellino. Hybrid Middle East Conflict Triggers Surge in Global Cyber Activity. [online] Published 2 March 2026. Available at: https://www.infosecurity-magazine.com/news/middle-east-conflict-surge-global/

[Appendix - Frameworks to Counter Disinformation]

EU and France Rethink Strategy to Counter Russian Disinformation

Amid growing concerns about foreign interference ahead of major elections in 2027, including the French presidential election and the Italian parliamentary elections, an article in Le Monde outlines how the EU and France are reassessing their strategies to counter rising Russian disinformation.
Concerns have grown over interference campaigns ranging from propaganda websites and manipulated AI content to algorithm-driven influence on social media platforms such as TikTok and X. In response, the European Commission launched the European Center for Democratic Resilience as part of its “European Democratic Shield” initiative, aiming to coordinate member states’ responses and protect public debate. Both Brussels and Paris are shifting toward a society-wide approach that emphasizes public resilience rather than relying solely on removing false content. The strategy focuses on strengthening media literacy, empowering trusted community figures such as educators and librarians to help identify misinformation, and improving research and education around information manipulation. Finland is often cited as a successful model, in which media literacy is taught from an early age and public trust in institutions remains relatively high.

France has also developed specific tools to counter foreign interference. Since 2021, the government unit Viginum has monitored digital manipulation campaigns, while officials plan to expand research programs and public awareness initiatives ahead of the 2027 election cycle.

Source: Le Monde, W. Audureau. EU and France rethink their approach to combat Russian disinformation surge. [online] Published 4 March 2026. Available at: https://www.lemonde.fr/en/les-decodeurs/article/2026/03/04/eu-and-france-rethink-their-approach-to-combat-russian-disinformation-surge_6751095_8.html

[CRC Glossary]

The modern Information Environment is projected to grow ever more complex and sophisticated. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the expert community, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.
