Cyber-based hostile influence campaigns 9th - 15th March 2026
- CRC

[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.
During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This report summarizes what we regard as the week's main events.
[Contents]
[Social Media Platforms]
X
[State Actors]
Russia
The War in Ukraine
China
Iran
[AI Related Articles]
[General Reports]
[Appendix - Frameworks to Counter Disinformation]
[ Report Highlights]
In the wake of the conflict between the United States, Israel, and Iran that began on 28 February, a Wired report highlights how X became saturated with disinformation, much of it fueled by AI-generated content.
A NewsGuard report uncovers a Russian disinformation campaign seeking to influence France’s 2026 municipal elections by spreading false claims about political candidates.
Investigations by the Institute for Strategic Dialogue (ISD) across six EU countries and parts of the Sahel revealed that Russian-aligned disinformation campaigns have become more sophisticated since the 2022 invasion of Ukraine.
Chinese state media and pro-China social media accounts, as documented by NewsGuard’s Reality Check, spread false or misleading narratives about the war involving Iran to promote Iran’s military strength and undermine trust in the US and its allies.
A new study published on arXiv examined how AI agents could coordinate online information operations.
According to findings by Global Witness, coordinated disinformation campaigns on social media have been used to attack and criminalize human rights defenders in Guatemala.
[Social Media Platforms]
Bulgarian Disinformation Network Drives Traffic to Fabricated News Site
An article by the DFR Lab reports that the Bulgarian website dailystandart[.]com publishes fabricated and misleading political articles to generate revenue through the Adskeeper native advertising platform. Traffic to the site is largely driven by a coordinated Facebook network consisting of two pages, nine groups, and six accounts. This network demonstrates signs of coordinated inauthentic behavior, including accounts with stolen profile images, mismatched names and handles, and prolonged inactivity. Several of these accounts administer Facebook groups and repeatedly amplify links to the website, suggesting organized efforts to distribute disinformation.
The Facebook groups in the network are politically themed and often promote pro-Russian narratives. Together, they have over 211,600 members, with the largest groups explicitly supporting Russian President Vladimir Putin. Content from dailystandart is spread through direct links, screenshots of headlines, and posts with links in the comments. Evidence suggests the website itself relies on automated publishing: duplicated articles with identical URL structures, repeated timestamps, and clusters of posts published at regular intervals indicate programmatic content generation rather than manual journalism.
Stories falsely attribute statements to Russian officials and Bulgaria’s former president, Rumen Radev, exaggerate Russian military capabilities, and portray Ukraine as weak or defeated. The website also circulates fabricated stories during sensitive political periods in Bulgaria and employs AI-generated images and incomplete headlines to increase click-through rates.
Source: DFR Lab, S. Gelava. Coordinated Bulgarian Facebook network amplifies fabricated for-profit political content. [online] Published 9 March 2026. Available at: https://dfrlab.org/2026/03/09/coordinated-bulgarian-facebook-network-amplifies-fabricated-for-profit-political-content/
X
AI and Disinformation Surge on X During Iran Conflict
A Wired report states that, since the start of the conflict between the United States, Israel, and Iran on 28 February, X has been flooded with disinformation, much of it driven by AI-generated content. When disinformation researcher Tal Hagin asked X’s AI chatbot Grok to verify a video claiming Iranian missiles struck Tel Aviv, the chatbot repeatedly provided incorrect information about the video’s location and date and even attempted to support its claims using an AI-generated image.
AI-generated images and videos have increasingly been used to spread false or exaggerated claims about the conflict. Iranian officials, state media, and other accounts on X have circulated fabricated visuals depicting events such as destroyed military aircraft, captured U.S. soldiers, and large-scale military damage. Some of these posts gained millions of views before being removed. Researchers also found AI-generated content being used to promote propaganda and antisemitic narratives, showing how easily synthetic media can be deployed to manipulate public perception during wartime. At the same time, traditional disinformation continues to spread widely on the platform. Misleading videos and recycled footage have been used to push false claims about attacks during the conflict, including narratives about a deadly strike on a school in Minab.
Source: Wired. Fake AI Content About the Iran War Is All Over X. [online] Published 10 March 2026. Available at: https://www.wired.com/story/fake-ai-content-about-the-iran-war-is-all-over-x/
[State Actors]
Russia
Kremlin’s Myth of "Protecting Russian Compatriots"
The Atlantic Council highlights in its analysis that, since Russia’s full-scale invasion of Ukraine in 2022, public debate about the war has been shaped by a long-standing Kremlin narrative claiming that Moscow is protecting Russian populations and reclaiming historically Russian lands. According to the analysis, this narrative is a form of disinformation designed to justify imperial expansion rather than defend minority rights. The idea was heavily promoted during Russia’s 2014 annexation of Crimea and invasion of eastern Ukraine, when many observers accepted the misleading claim that Russian-speaking communities wanted to join Russia.
The Kremlin has repeatedly used the “protection of compatriots” narrative across the post-Soviet region, combining propaganda, cultural influence, passport distribution, and military intervention. Russia’s “compatriots” strategy has been observed in Transnistria, Moldova, regions of Georgia, and Ukraine. By broadly defining “Russian compatriots,” Moscow creates a justification for intervention even where local populations do not support Russian involvement. In Ukraine, the claim is contradicted by the fact that many Russian-speaking Ukrainians have resisted Russian forces, joined the Ukrainian military, or fled Russian occupation.
Despite clear evidence that the narrative functions as propaganda, elements of it continue to shape international discussions about ending the war. Some proposals for peace implicitly treat parts of Ukraine as areas where Russian claims may be legitimate.
Source: Atlantic Council, A. Grigas. Russian talk of protecting compatriots masks Putin’s imperial ambitions. [online] Published 10 March 2026. Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/russian-talk-of-protecting-compatriots-masks-putins-imperial-ambitions/
Russian Disinformation Targeting French Local Elections
Amid growing concerns about foreign interference, a NewsGuard report reveals that a Russian disinformation campaign is attempting to influence France’s 2026 municipal elections by spreading false claims about political candidates. One example involved a fake website that impersonated the campaign site of Paris mayoral candidate Pierre-Yves Bournazel. The site falsely claimed that Bournazel planned to convert the Centre Pompidou museum into a facility for homeless migrants. The fabricated story was amplified on social media by pro-Kremlin accounts through videos and posts, despite Bournazel’s campaign confirming that the claim and website were entirely false. The operation has been linked to the Russian influence network known as Storm-1516, believed to be connected to the Internet Research Agency. This network typically creates fake websites, forged content, and AI-generated materials around controversial topics such as immigration, then spreads them through coordinated social media activity. Another campaign, called Matryoshka, has also circulated fabricated videos that mimic credible media outlets such as Le Monde, TF1, and RTL. These fake reports claim that security threats linked to international conflicts could disrupt the elections. Similar tactics were also used in Germany, where fake news videos falsely accused Ukrainian refugees of planning attacks.
Source: NewsGuard, I. Blachez. Russian Disinformation Goes Local Ahead of French Elections, Targeting Paris Mayoral Candidate Backed by Macron’s Party. [online] Published 10 March 2026. Available at: https://www.newsguardrealitycheck.com/p/russia-has-its-eyes-on-paris
Disinformation Networks Linked to Russia Expand Across Europe and Africa
Investigations by the Institute for Strategic Dialogue (ISD) across six EU countries and parts of the Sahel revealed that Russian-aligned disinformation campaigns have become more sophisticated since the 2022 invasion of Ukraine. Instead of relying mainly on state media, these operations now use proxy websites, AI-generated content, bot networks, and local influencers to spread Kremlin narratives while bypassing sanctions. Researchers found that the strategy focuses on “laundering” propaganda through intermediaries, allowing misleading claims to spread widely within domestic information environments.
One major example involved a coordinated network promoting the false claim that Russia was rescuing Ukrainian children from an international sex-trafficking ring linked to the Epstein case. Posts spreading this narrative gained millions of views across Facebook, X, and TikTok, despite evidence that Russia has forcibly deported Ukrainian children, which led to international legal action.
Despite EU sanctions on Russian outlets such as RT, enforcement gaps allow sanctioned content to remain accessible through mirror sites and proxy platforms. Disinformation networks have exploited elections in countries like Germany and Poland by spreading fabricated videos, impersonating media outlets, and promoting false claims about politicians and Ukrainian refugees. Similar tactics are also appearing outside Europe, particularly in the Sahel, where Russian-aligned actors use local influencers and regional languages to present Kremlin narratives as authentic domestic opinions.
Source: ISD Global. ISD investigation exposes the sophistication of Russian interference in the EU and beyond. [online] Published 9 March 2026. Available at: https://www.isdglobal.org/media-mentions/the-consequences-are-extremely-serious-isd-investigation-exposes-sophistication-of-russian-interference-in-democracies-in-europe-and-beyond/
False Narratives About the War in the Middle East
EUvsDisinfo, in its article, describes how pro-Kremlin information outlets have incorporated the Middle East conflict into their broader disinformation strategy, seeking to link the war involving Iran to Ukraine and Western policies. Russian messaging has promoted misleading narratives suggesting that the 2014 Ukrainian Maidan revolution triggered wider global instability and contributed to the current crisis. Another widely circulated narrative claimed that Ukraine might stage a “provocation” to regain international attention diverted by the war in Iran. This allegation follows a familiar Kremlin pattern of accusing Ukraine of planning sabotage or false flag operations, while also promoting the claim that Western countries are “tired” of Ukraine. However, available polling data indicate that public support for Ukraine in Europe remains high. Moreover, a fabricated story falsely alleged that an Iranian missile destroyed a luxury property in Dubai belonging to an aide to Ukraine’s military leadership, attributing the story to a nonexistent report by Euronews. The claim originated on anonymous Telegram channels and was amplified by the pro-Kremlin Pravda disinformation network.
Source: EUvsDisinfo. Russian FIMI and the war in Iran. [online] Published 13 March 2026. Available at: https://euvsdisinfo.eu/how-russia-lies-about-the-war-in-the-middle-east-2/
The War in Ukraine
Historical Myths in Russia’s War Against Ukraine
EUvsDisinfo draws attention in its article to a persistent form of disinformation surrounding Russia’s war against Ukraine, which portrays Ukraine as historically marginal and naturally within Russia’s sphere of influence. Analysts argue that this narrative reflects a Kremlin-centered view of history that ignores Ukraine’s long-standing role in European development. Historically, the territory of modern Ukraine has been a crossroads of civilizations, trade routes, and cultural exchanges, making it central to European history. Presenting Ukraine as an artificial or secondary state helps justify Russian claims over its territory. Another key element of Kremlin disinformation is the myth that modern Russia directly inherits the legacy of medieval Rus’, implying that Ukraine and Russia are historically inseparable. This narrative is used to legitimize territorial ambitions and frame Russian aggression as a historical restoration. In reality, Ukraine’s history is complex and plural, shaped by multiple cultures and political traditions. The Kremlin has also promoted broader myths that reinforce its geopolitical goals, including the idea that Russia is militarily invincible and that nuclear weapons guarantee victory.
Source: EUvsDisinfo, T. Snyder. A Historian’s Big Picture. Russia’s war against Ukraine and how to end it in a right way. [online] Published 10 March 2026. Available at: https://euvsdisinfo.eu/a-historians-big-picture-russias-war-against-ukraine-and-how-to-end-it-in-a-right-way/
China
China Pushed Pro-Iran War Claims
As reported by NewsGuard's Reality Check, Chinese state media and pro-China social media accounts spread false or misleading narratives about the war involving Iran in an effort to promote Iran’s military strength and undermine trust in the US and its allies. The messaging targeted audiences in mainland China, Taiwan, and the global Chinese diaspora, reflecting the close strategic relationship between Beijing and Tehran. NewsGuard found that Chinese outlets have increasingly echoed Iranian claims about the conflict, including unverified reports of major US military losses and Iranian battlefield successes.
One widely circulated claim alleged that Iran shot down a US fighter jet near its border with Kuwait. The story was repeated dozens of times by Chinese state media and gained significant engagement on Chinese platforms such as Weibo and Douyin. However, US and Kuwaiti officials stated that the aircraft was not downed by Iran but was mistakenly shot down by Kuwaiti forces. Another false narrative promoted by pro-China accounts claimed that Israeli Prime Minister Benjamin Netanyahu had fled to Germany during the war, despite public records and media reports showing that he remained in Israel and continued making public appearances. Researchers also found that Chinese-linked content farms used AI-generated posts to spread similar narratives to Taiwanese audiences on Facebook. These accounts, which previously focused on lifestyle content, began publishing coordinated articles portraying the US and Israel as suffering heavy losses while depicting Iran as dominant in the conflict.
Source: NewsGuard, C. Lin. China Pumps Out War Disinformation to Boost Its Ally Iran and Undermine the West. [online] Published 12 March 2026. Available at: https://www.newsguardrealitycheck.com/p/china-pushes-pro-iran-war-claims
Iran
Disinformation Claims About Israeli Leaders During the Iran War
In its Reality Check, NewsGuard documents that, since the start of the Iran war on 28 February 2026, Iranian state media and pro-regime social media accounts have circulated disinformation that exaggerates or fabricates Iranian military successes. A major theme of these campaigns has been false claims that senior Israeli officials were killed or injured in Iranian missile strikes. These narratives spread widely online, gaining hundreds of thousands or even millions of views despite lacking credible evidence. Among the most prominent false claims were reports that Israeli Prime Minister Benjamin Netanyahu and Mossad director David Barnea had been killed in Iranian attacks. Iranian state media outlets and pro-Iran social media accounts posted messages suggesting Netanyahu was injured or dead, while others shared a video supposedly showing the aftermath of a missile strike that killed Barnea. In reality, no credible media reports confirmed these claims. Netanyahu was photographed publicly visiting Ashdod Port after the alleged attack, and the video linked to Barnea’s supposed death was actually from a 2024 suicide bombing in Tel Aviv, unrelated to the current conflict. Similar false claims also targeted Israeli National Security Minister Itamar Ben-Gvir and other officials and family members. None of these deaths was confirmed, and some officials publicly showed that they were alive.
Source: NewsGuard, L. Marchl. Iranian State Media Fabricate a Long List of Israeli Leaders Supposedly Killed by the Regime’s Missiles. [online] Published 11 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iranian-death-hoaxes-target-israeli
AI-Manipulated Image Used for Disinformation
As revealed by NewsGuard, pro-Iran social media accounts recently circulated disinformation using an AI-altered image of an Iranian missile allegedly inscribed with the phrase "In memory of the victims of Epstein Island". The posts claimed the message was written on a missile launched during the conflict with Israel, portraying Iran as acknowledging victims connected to the Jeffrey Epstein scandal and indirectly mocking U.S. politics. The image spread widely online, gaining millions of views across social media platforms and being shared by pro-Iran users and state-controlled media outlets. However, investigations found that the image was digitally manipulated and not authentic. The original photograph shows an Iranian Khorramshahr-4 ballistic missile without any inscription, and had circulated online weeks before the war began. AI analysis tools confirmed that the text referencing Epstein was added using an AI image editing system.
Source: NewsGuard, I. Chomnalez. Debunk: Iranian Missile With Epstein Inscription Is an AI Fabrication. [online] Published 13 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iranian-missile-dedicated-to-epstein
[AI Related Articles]
The Risk of Automation of Disinformation Campaigns
An arXiv study explores how AI agents could coordinate online information operations, showing that advances in large language models and generative AI may enable influence campaigns to become largely automated, with networks of AI agents producing and amplifying content with minimal human supervision. Traditionally, such operations relied on human organizers and simple tactics, such as synchronized posts or hashtag flooding, to create the illusion of widespread support.
Using a simulation, researchers created a social media environment in which AI agents promoted a political candidate and disseminated a campaign hashtag. The results showed that even minimal awareness among AI agents, such as knowing who their teammates are, can trigger strong coordination. The agents began amplifying one another’s posts, repeating similar narratives, and thereby increasing the spread and engagement of their messages. These behaviors mirror real-world disinformation tactics, where coordinated accounts attempt to manipulate algorithms and make certain views appear more popular or widely accepted than they actually are. The findings highlight the growing risk that automated networks of AI agents could scale disinformation campaigns more quickly and efficiently than traditional bot operations.
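The coordination dynamic the study describes can be illustrated with a toy simulation of our own (a minimal sketch, not the authors' code; all agent counts, probabilities, and the hashtag are illustrative assumptions). Campaign agents that merely know who their teammates are re-amplify one another's posts, and the hashtag's final-round reach grows well beyond what the same agents achieve without that awareness:

```python
import random

CAMPAIGN_IDS = set(range(5))      # five coordinated "campaign" agents
ORGANIC_IDS = set(range(5, 30))   # twenty-five organic users
HASHTAG = "#CandidateX"           # hypothetical campaign hashtag

def step(posts, aware):
    """One round. Each post is (text, author_id). When `aware` is True,
    campaign agents know their teammates and re-amplify teammate posts,
    mirroring the minimal team-awareness condition in the study."""
    new_posts = []
    for aid in CAMPAIGN_IDS:
        new_posts.append((HASHTAG, aid))                  # original push
        if aware:
            teammates = [p for p in posts
                         if p[1] in CAMPAIGN_IDS and p[1] != aid]
            new_posts.extend(teammates[:2])               # amplify up to 2
    for _ in ORGANIC_IDS:
        # organic users occasionally repost what they saw last round
        new_posts.extend(p for p in posts if random.random() < 0.03)
    return new_posts

def reach(aware, rounds=8, seed=1):
    """Deterministic toy run; returns final-round hashtag volume."""
    random.seed(seed)
    posts = []
    for _ in range(rounds):
        posts = step(posts, aware)
    return sum(1 for p in posts if p[0] == HASHTAG)

print("final-round reach, no team awareness:", reach(False))
print("final-round reach, team awareness:   ", reach(True))
```

Even in this crude setup, the awareness condition roughly triples reach: amplified teammate posts enlarge each round's pool, which in turn draws more organic reposts, the same compounding effect the researchers observed.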
Source: arXiv, G. M. Orlando, J. Ye, V. La Gatta, M. Saeedi, V. Moscato, E. Ferrara, L. Luceri. Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations. [online] Published 2025. Available at: https://arxiv.org/pdf/2510.25003
AI’s Influence on Journalism
The European External Action Service (EEAS) reports on an event organized by Europe House titled “Journalism and Technology: How is AI changing newsrooms?”, which explored how artificial intelligence is transforming the way news is produced, distributed, and consumed. While AI can assist journalists with tasks such as data analysis, translation, research, and content creation, experts warned that the rapidly evolving digital environment also increases the risk of inaccurate information and disinformation.
Participants emphasized that although AI can improve newsroom efficiency, human journalists remain essential for accountability, fact-checking, and ethical decision-making. Experts highlighted that AI should be used as a supporting tool rather than a replacement for core journalistic skills such as reporting, investigation, and verification. The discussion also focused on how news organizations must adapt to new technologies while protecting democratic values.
Source: EEAS. Europe House hosts discussion on the use of artificial intelligence in newsrooms and media ethics. [online] Published 11 March 2026. Available at: https://www.eeas.europa.eu/delegations/bosnia-and-herzegovina/europe-house-hosts-discussion-use-artificial-intelligence-newsrooms-and-media-ethics_en
[General Reports]
Disinformation Campaigns Target Human Rights Defenders in Guatemala
A Global Witness investigation documents how coordinated disinformation campaigns on social media have been used to attack and criminalize human rights defenders in Guatemala. A prominent example is the case of Luis Pacheco, a Maya K’iche’ leader and former deputy energy minister who was arrested in 2025 on charges of terrorism and sedition. Before and after his arrest, thousands of posts on platforms such as Facebook, X, and TikTok accused him of corruption, terrorism, and electoral fraud. Many of these allegations were spread by accounts known for disseminating false information, anonymous profiles, and figures associated with political and military elites. According to the report, these online campaigns aim to damage activists' reputations, create distrust within their communities, and make it easier to justify legal persecution. Disinformation has been particularly prominent during key political moments, including efforts to overturn the results of the 2023 national election and ongoing conflicts over Indigenous land rights. Critics argue that companies such as Meta, X, and TikTok have failed to prevent the weaponization of their platforms for harassment and misinformation, leaving activists more vulnerable to intimidation and criminalization.
Source: Global Witness. Weaponising social media. [online] Published 2026. Available at: https://gw.hacdn.io/media/documents/Weponising_Social_Media_Guatemala_-_March_2026.pdf
[Appendix - Frameworks to Counter Disinformation]
Study Proposed Aid System Resilient to Disinformation During Disasters
A study in Computers & Industrial Engineering shows how an optimization model for humanitarian logistics combines trucks and drones to deliver aid more fairly and efficiently in disaster zones. The system, called the Multi-Truck and UAV Routing Problem (MTURP), aims to minimize both the total travel distance and the time gap between the first and last deliveries, ensuring that assistance reaches all affected communities as evenly as possible. Trucks transport supplies and deploy drones to complete last-mile deliveries, enabling aid to reach areas where damaged infrastructure or flooded roads make ground transport difficult or impossible.
A key aspect of the model is its ability to operate under conditions of disruption and in the presence of uncertain or misleading information. The researchers highlighted that disasters can be accompanied by disinformation or false service requests, which may distort priorities and lead to misallocation of resources. To address this challenge, the framework incorporates community vulnerability data, using the Social Vulnerability Index, to guide equitable distribution and maintain fairness even when information reliability is uncertain. The model was tested in two case studies simulating flood scenarios in Hoboken, New Jersey, and Hopkins County, Kentucky. Results showed that the proposed optimization approach yields near-optimal routing solutions while remaining scalable for large-scale operations.
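To illustrate the paper's two objectives (this is our own sketch, not the authors' actual MTURP formulation; the coordinates, speeds, and weighting are hypothetical), a minimal example can enumerate truck-versus-drone assignments for a handful of delivery sites and pick the one minimizing a weighted sum of total travel and the first-to-last delivery gap:

```python
import math
from itertools import product

# Hypothetical layout: a depot and four delivery sites (km coordinates).
DEPOT = (0.0, 0.0)
SITES = {"A": (2.0, 1.0), "B": (1.0, 3.0), "C": (4.0, 0.5), "D": (0.5, 4.0)}
TRUCK_SPEED = 1.0
DRONE_SPEED = 2.0    # drones fly faster but make one sortie per site

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def evaluate(assignment):
    """assignment maps site -> 'truck' or 'drone'. Returns the two
    MTURP-style objectives: total travel distance, and the gap between
    the earliest and latest delivery times (a fairness proxy)."""
    times, pos, t, total = {}, DEPOT, 0.0, 0.0
    for s, mode in assignment.items():
        if mode == "truck":                # truck visits its sites in order
            d = dist(pos, SITES[s])
            total += d
            t += d / TRUCK_SPEED
            times[s] = t
            pos = SITES[s]
    for s, mode in assignment.items():
        if mode == "drone":                # drone sortie from the depot
            d = dist(DEPOT, SITES[s])
            total += 2 * d                 # out and back
            times[s] = d / DRONE_SPEED
    return total, max(times.values()) - min(times.values())

def best(weight=0.5):
    """Brute-force weighted-sum scalarization over all assignments."""
    options = [dict(zip(SITES, combo))
               for combo in product(("truck", "drone"), repeat=len(SITES))]
    return min(options, key=lambda a: weight * evaluate(a)[0]
                                      + (1 - weight) * evaluate(a)[1])

plan = best()
print(plan, evaluate(plan))
```

A real formulation would normalize the two objectives (they carry different units), model multiple trucks and drone launch points, and solve at scale with exact or metaheuristic methods rather than brute-force enumeration.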
Source: ScienceDirect, R. T. Khameneh & N. Ghorbani-Renani & J. E. Ramirez-Marquez, Computers & Industrial Engineering, Volume 213, 111786. Multi-objective optimization of a truck–drone delivery system for fair and efficient humanitarian logistics under disruption and disinformation. [online] Published March 2026.
CFTC’s Selig Said Prediction Markets Could Help Counter Disinformation
Politico reports that, speaking at an event organized by the Futures Industry Association in Florida, U.S. Commodity Futures Trading Commission Chair Michael Selig said prediction markets could serve as tools to counter disinformation by providing publicly generated forecasts about major events. He argued that more Americans are turning to prediction markets to understand issues ranging from elections to economic developments and suggested that combining these markets with blockchain technology could create decentralized systems that help verify information and challenge misleading narratives circulating online. Prediction markets allow users to trade contracts based on the likelihood of real-world events, generating odds that reflect collective expectations. Supporters claim these platforms can be more accurate than traditional forecasting methods such as opinion polls. However, critics question whether the participants represent the broader public and whether their predictions can reliably reflect reality.
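For readers unfamiliar with the mechanics: a binary prediction-market contract pays out 100 cents if the event occurs, so its trading price can be read directly as the crowd's implied probability. A minimal sketch (the `fee` parameter is a hypothetical illustration of a platform cut, not any specific market's rule):

```python
def implied_probability(price_cents, fee=0.0):
    """Read a binary contract price as the market's implied probability.
    The contract pays 100 cents if the event occurs; `fee` is a
    hypothetical platform fee fraction, included for illustration."""
    if not 0 < price_cents < 100:
        raise ValueError("price must be strictly between 0 and 100 cents")
    return (price_cents / 100.0) * (1 - fee)

# A contract trading at 62 cents implies roughly a 62% collective
# probability that the event occurs.
print(implied_probability(62))
```

This is the sense in which supporters describe such markets as aggregating dispersed information into a single public number; the critics' point is that the traders setting that price may not be representative of the public at large.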
Source: Politico, D. Harty. CFTC’s Selig: Prediction markets can offer a check on ‘disinformation’. [online] Published 9 March 2026. Available at: https://subscriber.politicopro.com/article/2026/03/cftcs-selig-prediction-markets-can-offer-a-check-on-disinformation-00819232
Detecting AI Content Farms and Online Disinformation
NewsGuard announced it has launched a new system to detect "AI Content Farms", websites that mass-produce AI-generated articles that are often inaccurate or misleading. Developed with Pangram Labs, the tool combines automated AI detection with expert human analysis to identify sites that publish large amounts of AI-written content without clearly disclosing it. So far, the system has identified more than 3,000 AI Content Farm websites, and the number is growing rapidly, with hundreds of new sites appearing each month.
These websites frequently imitate legitimate news outlets and publish dozens of articles daily, often spreading false claims about public figures, companies, health issues, or politics. In some cases, disinformation from these sites has been amplified by larger media networks or social media accounts, increasing its reach. Some AI Content Farms have also been linked to foreign influence operations, including networks connected to Russia that publish misleading content disguised as local news sources in the United States and Europe. The new detection system aims to protect readers, advertisers, and online platforms from the growing flood of AI-generated disinformation and low-quality content. By identifying these sites in real time, companies can avoid placing advertisements on them and reduce the financial incentives that allow disinformation networks and propaganda operations to spread misleading narratives online.
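One kind of signal such a detection system might use can be illustrated with a crude heuristic of our own (not NewsGuard's or Pangram Labs' actual method; the thresholds are arbitrary assumptions): sites that publish many articles at suspiciously regular intervals look machine-scheduled rather than human-run:

```python
from statistics import pstdev

def looks_programmatic(timestamps, min_posts=20, cv_threshold=0.2):
    """Crude illustrative heuristic: a site with many posts whose
    publishing intervals are unusually regular (low coefficient of
    variation) may be machine-scheduled. `timestamps` are publication
    times in seconds, sorted ascending."""
    if len(timestamps) < min_posts:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = sum(gaps) / len(gaps)
    if mean_gap == 0:
        return True                      # simultaneous bulk publishing
    cv = pstdev(gaps) / mean_gap         # relative spread of intervals
    return cv < cv_threshold

# A feed posting every 30 minutes on the dot is flagged; a schedule
# with human-like irregular gaps is not.
robotic = [i * 1800 for i in range(30)]
print(looks_programmatic(robotic))
```

A production system combines many such signals (and, as the report notes, human review) because cadence alone produces false positives, e.g. legitimate outlets with scheduled publishing queues.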
Source: NewsGuard. NewsGuard Launches Real-time “AI Content Farm” Detection Datastream to Counter Onslaught of AI Slop in News. [online] Published 12 March 2026. Available at: https://www.newsguardtech.com/press/newsguard-launches-real-time-ai-content-farm-detection-datastream-to-counter-onslaught-of-ai-slop-in-news/
[CRC Glossary]
The modern information environment continues to grow in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.