Cyber-based hostile influence campaigns 16th - 22nd March 2026
- CRC

[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.
During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Contents]
[Social Media Platforms]
TikTok
[State Actors]
Russia
The War in Ukraine
Iran
[General Reports]
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
Meta's H1 2026 Adversarial Threat Report describes a highly adaptive disinformation ecosystem in which state-linked and criminal actors use coordinated fake identities, AI-generated content, and cross-platform amplification to manipulate audiences, build false credibility, and evade detection, while integrating influence operations with broader fraud and exploitation tactics.
NewsGuard identified a coordinated network of AI-generated TikTok content and cross-platform disinformation campaigns that sought to manipulate Hungary’s 2026 elections by discrediting the opposition, amplifying fear-based narratives, and covertly promoting pro-Orbán messaging.
According to an article by EUvsDisinfo, Russia’s sustained attacks on Ukraine’s energy infrastructure during the winter of 2026 were accompanied by a parallel FIMI campaign.
Analysis from the Center for Strategic and International Studies (CSIS) characterizes Iran’s cyber operations as centered on disinformation, influence, and psychological impact, with the country relying on proxies, hacktivist networks, and narrative manipulation to project power while delivering only limited direct strategic effects in conflict.
A report by Resecurity highlighted how the Iran war has evolved into a multi-domain conflict where psychological warfare, disinformation, and hacktivism play a central role alongside military operations.
The 4th EEAS Report on Foreign Information Manipulation and Interference Threats provided a comprehensive assessment of FIMI activities worldwide, based on cases documented and investigated by the EEAS throughout 2025.
[Social Media Platforms]
Meta H1 2026 Adversarial Threat Report
The Meta report highlights a complex and rapidly evolving disinformation landscape driven by coordinated networks that exploit false identities, deceptive narratives, and cross-platform amplification. Central to this ecosystem is Coordinated Inauthentic Behavior (CIB), where actors, often linked to state or state-aligned entities in countries such as Iran, Russia, China, and Pakistan, deploy fabricated personas and covert networks to mislead audiences and manipulate public discourse. These operations rely on structured tactics, including the creation of credible “creator” personas supported by networks of “amplifier” accounts, which artificially inflate engagement and create the appearance of grassroots legitimacy. Actors frequently disguise their origins through proxy infrastructure and identity obfuscation while targeting specific linguistic and regional audiences with tailored messaging.
A defining feature of these campaigns is the increasing integration of artificial intelligence to enhance both scale and plausibility. Adversaries use AI-generated profile images, synthetic media, and large language models to produce coherent, multilingual, and culturally nuanced content, significantly lowering operational costs while improving effectiveness. These tools enable the rapid development of persuasive personas, automated engagement strategies, and consistent narrative framing across platforms. Tactics such as co-opting authentic content, impersonating journalists or activists, and constructing fictitious media brands further blur the line between legitimate discourse and manipulation. The use of AI also extends to evasion techniques, allowing actors to bypass detection systems and sustain operations despite enforcement efforts.
Disinformation activities are closely intertwined with broader fraud and scam operations, reflecting a convergence of influence and financial exploitation tactics. Criminal syndicates and influence actors alike employ social engineering methods, such as the impersonation of trusted professionals, the exploitation of emotionally vulnerable populations, and the migration to off-platform communication channels, to build trust and evade oversight. These campaigns demonstrate high levels of organization and adaptability, with actors continuously refining their tactics in response to enforcement pressure. Across both domains, common patterns include identity misrepresentation, coordinated amplification, and the iterative reconstitution of networks, underscoring a persistent and professionalized threat environment in which disinformation serves as both a strategic objective and an operational enabler.
Source: Meta, Adversarial Threat Report First Half 2026. [online] Published 2026. Available at: https://transparency.meta.com/sr/first-half-2026-Adversarial-threat-report/
TikTok
AI-Driven Disinformation TikTok Campaign Targets Hungarian Elections
A coordinated influence operation, identified by NewsGuard, leveraged 34 anonymous TikTok accounts to disseminate AI-generated content aimed at discrediting opposition candidate Péter Magyar and bolstering Prime Minister Viktor Orbán ahead of Hungary’s April 2026 elections. The campaign employed a range of tactics, including fabricated news reports featuring AI-generated anchors and non-existent experts, manipulated real footage to portray Magyar as aggressive, and emotionally charged narratives linking him to Ukraine, economic instability, and cultural threats. Additional amplification strategies included AI-generated celebrity endorsements, fictional scenarios, and viral-style content, collectively generating millions of views. The accounts exhibited clear signs of coordination, including synchronized creation dates, AI-generated profile images, and consistent messaging themes.
The TikTok campaign operated alongside broader cross-platform disinformation efforts, including activity attributed to the Russian-linked “Matryoshka” network on X and Telegram, which spread false claims targeting Ukraine to indirectly support Orbán. These narratives included fabricated cyberattacks, biological threats, and symbolic provocations, all intended to heighten fear and polarization. While attribution remains inconclusive, the timing, scale, and alignment with geopolitical interests suggest a structured and strategic intervention in Hungary’s electoral process. TikTok confirmed that the network was a covert influence operation and removed the accounts, highlighting ongoing platform-level responses amid broader concerns about foreign interference in democratic processes.
Source: NewsGuard, A. Lee & M. Roache, Influence Campaign on TikTok Uses AI Videos to Boost Hungary’s Orbán Ahead of Crucial Elections. [online] Published 20 March 2026. Available at: https://www.newsguardtech.com/special-reports/russian-influence-campaign-uses-ai-tiktok-videos-to-boost-hungarys-viktor-orban/
[State Actors]
Russia
EU Sanctions Targeted Disinformation and Propaganda
On the 16th of March 2026, the EU Council announced it had imposed sanctions on four individuals involved in Russia’s hybrid activities, with a strong focus on Foreign Information Manipulation and Interference. Among them is Sergey Klyuchenkov, a propagandist who has actively spread false narratives to justify Russia’s war against Ukraine, including calls for violence and misleading claims about territorial expansion. Similarly, news anchor Ernest Mackevičius has used state television to promote distorted accounts of the war, Ukraine, and its military.
The sanctions also targeted Graham Phillips and Adrien Bocquet, who were both accused of amplifying Kremlin propaganda. Phillips has produced misleading content from occupied Ukrainian territories, while Bocquet has acted as a key voice spreading disinformation across Europe through media appearances and social platforms.
Source: European External Action Service, Russian hybrid threats: four individuals added to EU sanctions list for information manipulation activities. [online] Published 2026. Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-four-individuals-added-eu-sanctions-list-information-manipulation-activities_en
Coordinated Russian Disinformation Targeted Armenian Elections
CivilNet revealed that a fake video circulated on X in March 2026 falsely claimed that Armenian political strategists had secretly traveled to France to learn vote manipulation techniques under a "secret agreement" between Emmanuel Macron and Nikol Pashinyan. The video attempted to appear credible by misusing the NewsGuard logo, despite no such report existing.
The video’s spread was coordinated via newly created or previously inactive accounts that were suddenly activated, suggesting manipulation by bots or hacked profiles. Similar campaigns have used fake logos from major outlets such as CNN and Reuters, along with fabricated newspaper front pages, to promote false claims about political figures and events. These efforts are linked to known disinformation strategies, such as the Russia-associated "Matryoshka" and "Doppelgänger" operations, which rely on impersonation and mass dissemination to amplify misleading content.
Source: CivilNet, L. Dubois-Harounyan, Fake Videos and Fabricated Headlines: A Coordinated Disinformation Campaign Targeting Armenia and France. [online] Published 18 March 2026. Available at: https://www.civilnet.am/en/news/1008717/fake-videos-and-fabricated-headlines-a-coordinated-disinformation-campaign-targeting-armenia-and-france/
Disinformation Network Amplified Through Romanian Platforms
A DFRLab report found that News Time Romania (NTR) operated as a disinformation amplifier by translating and redistributing content from Russian state-aligned and sanctioned sources on TikTok and Telegram. Rather than producing original journalism, it repackaged foreign propaganda into Romanian and presented it as independent media. Analysis showed that over half of its TikTok content originated from such sources, indicating a systematic effort to inject disinformation into the local information space. Key contributors to this content included sanctioned outlets like SouthFront and pro-Kremlin platforms such as Rybar, both known for spreading misleading narratives about military and geopolitical events. NTR also indirectly promoted the ideology of Alexander Dugin through intermediary platforms, allowing sanctioned or extremist viewpoints to circulate without direct attribution. This layered approach helped bypass restrictions while maintaining a steady flow of pro-Russian and anti-Western messaging.
Source: DFRLab, V. Olari, Romanian channel launders sanctioned Russian content for domestic audiences. [online] Published 20 March 2026. Available at: https://dfrlab.org/2026/03/20/romanian-channel-launders-sanctioned-russian-content-for-domestic-audiences/
The War in Ukraine
Russia Targeted Ukraine’s Infrastructure While Spreading Disinformation
According to an article by EUvsDisinfo, Russia’s sustained attacks on Ukraine’s energy infrastructure during the winter of 2026 were accompanied by a parallel FIMI campaign. As missile strikes and drone attacks left millions without electricity, heating, and water during extreme cold, pro-Kremlin outlets promoted narratives portraying Ukraine as divided and on the verge of collapse. These claims included false assertions that Ukrainians were ready to surrender territory or were protesting against their own government, as well as framing the strikes as justified retaliation rather than attacks primarily affecting civilians.
Another narrative targeted European support for Ukraine, with messaging repeatedly claiming that Europe had grown tired of supporting Ukraine. Recent polls showed that, while the countries closest to the conflict were beginning to experience war fatigue, overall support for Ukraine remained high. Moreover, the attacks on Ukrainian infrastructure spurred many grassroots fundraising campaigns across Europe. Russia has also leveraged energy insecurity as part of a broader influence strategy aimed at Europe, spreading claims that the continent cannot function without Russian energy supplies. At the same time, hybrid tactics, including cyberattacks on European energy systems, reinforced these narratives. On the day of the invasion of Ukraine, a hacker group linked to Moscow disrupted monitoring systems for thousands of wind turbines across Germany, and more recently, in December 2025, Poland reported attacks on its energy sector.
Source: EUvsDisinfo, Targeting the grid, shaping the story: Russia’s dual assault on Ukraine. [online] Published 16 March 2026. Available at: https://euvsdisinfo.eu/targeting-the-grid-shaping-the-story-russias-dual-assault-on-ukraine/
Kremlin Amplified Disinformation Through Fabricated Political Authority
As reported by StopFake, Kremlin-linked media outlets have been systematically spreading disinformation by promoting statements by Armando Mema, whom they falsely present as a "Finnish politician". Russian state media, including RIA Novosti and RT, repeatedly cite his social media posts to claim that Western countries are losing faith in Ukraine, calling for peace with Russia, or warning of an inevitable Russian victory. This creates the illusion of growing international support for pro-Kremlin narratives.
In reality, Mema is not a recognized political figure in Finland but a little-known individual with no electoral success and no significant public influence. By misrepresenting him as a credible politician, these outlets confer legitimacy and amplify fringe opinions, as if they reflected mainstream Western views.
Source: StopFake, The “Finnish politician”, an opponent of helping Ukraine, who is quoted daily by the Kremlin media, turned out to be a plumber without Finnish citizenship. [online] Published 18 March 2026. Available at: https://www.stopfake.org/ru/finskij-politik-protivnik-pomoshhi-ukraine-kotorogo-ezhednevno-tsitiruyut-kremlevskie-smi-okazalsya-santehnikom-bez-grazhdanstva-finlyandii/ (referring to: The Insider, Antifake investigation. [online] Published 2026. Available at: https://theins.ru/antifake/290378)
Iran
Iran Overstated Its Global Oil Disruption
Since the US and Israeli strikes, Iran has effectively blocked the Strait of Hormuz, the narrow waterway between Iran and Oman through which roughly 20 percent of the world’s oil supply passes. The International Energy Agency said the Middle East conflict was creating the largest supply disruption in the history of the global oil market.
As reported by NewsGuard's Reality Check, Iranian state-aligned media amplified disinformation that exaggerated the global impact of the Strait of Hormuz disruption, including false claims that Australia was on the verge of a fuel crisis. Reports circulated by pro-Iran outlets alleged that Australia had only 18 days of fuel reserves remaining, roughly half of its actual supply, citing fabricated figures attributed to government sources. The claim was accompanied by a manipulated graphic, likely edited using a Google AI tool, which altered authentic data from an Australian broadcaster to support the misleading narrative.
In reality, official statements from Australia confirmed significantly higher reserves, with around 36 days of petrol and substantial supplies of diesel and jet fuel. Alongside these narratives, Iranian outlets have warned of a looming "energy catastrophe" in the West, framing the crisis as evidence of Western economies' vulnerability despite more measured assessments from international energy authorities.
Source: NewsGuard, W. Rizvi, Iranian-Linked Media Falsely Claim Australia Has “Only 18 Days” of Fuel Reserves, Overstating the Impact of the War. [online] Published 17 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iran-overstates-its-global-oil-disruption
Video Showing Netanyahu Alive and Well Is Not an AI Fake
NewsGuard’s Reality Check highlights that a video showing Israeli Prime Minister Benjamin Netanyahu alive at a Jerusalem café was falsely claimed to be AI-generated, forming part of a broader disinformation narrative that he was killed or seriously injured in an Iranian missile strike. The video, posted on the 15th of March 2026, was intended to refute the rumors about Netanyahu's death, but pro-Iran and anti-Israel accounts quickly dismissed it as a deepfake, citing supposed anomalies in Netanyahu’s face, the level of the coffee in the cup, the lighting and shadows, and a perceived distortion in Netanyahu’s coat pocket.
Some posts shared screenshots from the AI-detection tool Hive as purported evidence, and Hive did indeed determine that the video was AI-generated. However, multiple verifications confirmed that the video is authentic. Independent analysis, including location matching and supporting images from the café itself, as well as assessments by AI-detection experts, found no signs of manipulation. NewsGuard has identified other viral instances in which social media users claimed that authentic war-related visuals were AI-generated: pro-Iran accounts falsely claimed that a video of Netanyahu speaking at a press conference on the 12th of March 2026 was AI-generated, and anti-regime social media users falsely claimed that The New York Times had published an AI-generated image of a large crowd celebrating the naming of the new Iranian supreme leader, Mojtaba Khamenei.
A separate report by Cybernews examined the claims about Netanyahu’s alleged death, which were amplified by social media accounts such as "Iran News Today" that shared images of an injured man resembling him and presented them as confirmation. Shortly after, Netanyahu posted a video released by Israel’s Government Press Office showing him alongside senior officials. While this footage should have dispelled the rumors, many users insisted it was part of a cover-up, claiming the video was recycled from a 2024 meeting or had been manipulated with AI. Some relied on AI tools like Google’s Gemini to "verify" these claims, despite contradictory evidence, such as newly published images confirming the meeting took place.
Sources:
NewsGuard, W. Rizvi, In this Case, Believe Your Eyes: Video Showing Netanyahu Alive and Well Is Not an AI Fake. [online] Published 16 March 2026. Available at: https://www.newsguardrealitycheck.com/p/netanyahus-proof-of-life-video-is
Cybernews, N. Ancell, Israeli Prime Minister Benjamin Netanyahu is trying to prove he's not an AI deepfake. [online] Published 18 March 2026. Available at: https://cybernews.com/news/benjamin-netanyahu-artificial-intelligence-video/
X, Iran News Today (@MCK_Kaif), X post. [online] Published 18 March 2026. Available at: https://x.com/MCK_Kaif/status/2034172316155056236
X, Benjamin Netanyahu (@netanyahu), X post (official account). [online] Published 17 March 2026. Available at: https://x.com/netanyahu/status/2034021266861003258
Disinformation and Hacktivism in the Iranian Conflict
A report by Resecurity highlighted how the Iran war has evolved into a multi-domain conflict in which psychological warfare, disinformation, and hacktivism play a central role alongside military operations. Pro-Iran hacktivists deliberately spread fear and confusion among targeted populations, while Iran also conducted internal psychological campaigns to keep its population motivated for the war and aligned with the regime. To counter this, the U.S. and its allies conducted attacks against broadcasting infrastructure and state media.
At the same time, AI-generated images, recycled footage, and even video game clips were widely circulated as real battlefield content, distorting public perception of events. In one such episode, a pro-Iranian group circulated messaging insisting that Khamenei had not been killed. Resecurity highlighted misinformation about both cyber activity and kinetic actions.
Pro-Iran hacktivist groups such as the Cyber Islamic Resistance, Fatimion Cyber Team, and Cyber Fattah were identified as key players. Their activities included reconnaissance, DDoS campaigns, website defacements, and data theft. Pro-Western hacktivist groups were also active, targeting Iranian religious apps and news sites; notably, these included Iranian opposition groups based outside Iran that targeted the regime. While many of the attacks were low-impact or exaggerated, they generated significant "noise", forcing defenders to divert resources and amplifying uncertainty.
Source: Resecurity, Iran War: Kinetic, Cyber, Electronic, and Psychological Warfare Convergence. [online] Published 17 March 2026. Available at: https://www.resecurity.com/blog/article/iran-war-kinetic-cyber-electronic-and-psychological-warfare-convergence
Justice Department Disrupted Iranian Cyber-Enabled Psychological Operations
As published by the Office of Public Affairs, a U.S. Justice Department operation seized four domains linked to Iran’s Ministry of Intelligence and Security (MOIS), which were used to conduct cyber-enabled psychological warfare campaigns. These websites functioned as part of a coordinated "faketivist" strategy, where Iranian actors claimed responsibility for cyberattacks, leaked or fabricated sensitive data, and amplified propaganda to intimidate adversaries. The platforms were used to target journalists, dissidents, and Israeli-linked individuals through doxxing, harassment, and threats, including calls for violence and financial bounties. The seized domains were Justicehomeland[.]org, Handala-Hack[.]to, Karmabelow80[.]org, and Handala-Redwanted[.]to.
For example, the MOIS used the Handala-hack[.]to domain to claim credit for a destructive malware attack in March 2026 against a U.S.-based multinational medical technology firm. These activities were often aimed less at technical disruption and more at psychological impact, spreading fear, damaging reputations, and deterring opposition. The use of fake hacktivist personas allowed Iranian actors to obscure attribution while amplifying their messaging across online platforms.
Source: U.S. Department of Justice, Justice Department Disrupts Iranian Cyber Enabled Psychological Operations. [online] Published 19 March 2026. Available at: https://www.justice.gov/opa/pr/justice-department-disrupts-iranian-cyber-enabled-psychological-operations
Iran Cyber Operations Prioritize Influence Over Impact
An article by the Center for Strategic and International Studies (CSIS) portrays Iran as a capable and adaptive cyber actor that integrates cyber operations into a broader hybrid warfare strategy, combining espionage, disruption, and influence campaigns to advance political objectives. Central to its approach is the use of cyber capabilities for symbolic impact and information dominance rather than decisive military effect. Iranian state-linked actors, including proxies such as hacktivists and cybercriminal networks, conduct operations that blend destructive attacks, reconnaissance, and influence efforts, often targeting critical infrastructure and foreign audiences. These actors employ tactics such as hack-and-leak campaigns, social media manipulation, and the mobilization of loosely affiliated cyber groups to amplify perceived reach, while maintaining plausible deniability and projecting power in the information domain.
Disinformation and influence operations emerge as a key battlefield, with Iran prioritizing narrative control, psychological pressure, and audience manipulation over high-impact cyber disruption. Tactics include exploiting social media to reach foreign populations, generating “noise” through coordinated low-level attacks, and leveraging AI and digital tools to shape perceptions and sow division. Domestically, information control is reinforced through internet shutdowns that isolate populations and suppress dissent. Internationally, Iran combines opportunistic cyber targeting with strategic messaging designed to exaggerate its capabilities, obscure attribution, and erode adversary cohesion. Overall, cyber operations primarily serve as enablers of information warfare, aimed at destabilizing opponents, influencing public opinion, and increasing the perceived costs of conflict, rather than delivering decisive operational outcomes.
Source: Center for Strategic and International Studies, N. Shah, Demystifying Iranian Cyber Operations in the U.S.-Iran Conflict. [online] Published 20 March 2026. Available at: https://www.csis.org/analysis/demystifying-iranian-cyber-operations-us-iran-conflict
[General Reports]
4th EEAS Report on Foreign Information Manipulation and Interference Threats
The 4th European External Action Service (EEAS) Report on Foreign Information Manipulation and Interference Threats provided a comprehensive assessment of FIMI activities worldwide, based on cases documented and investigated by the EEAS throughout 2025. A total of 540 incidents were identified, involving around 10,500 digital channels, with Ukraine remaining the primary target, followed by several European countries. Russia and China were the main attributed actors, though much of the activity relied on covert, unattributed networks that complicated detection and accountability. A key trend was the widespread use of artificial intelligence, with AI-generated text, audio, and video now routinely deployed to produce scalable, low-cost disinformation. FIMI was increasingly integrated into broader hybrid strategies, combining digital manipulation with geopolitical and physical actions, particularly around elections and regional conflicts.
A central innovation of the report was the FIMI Deterrence Playbook, which marked a shift from reactive responses to proactive disruption. Instead of focusing only on identifying threats, the EU aimed to target the underlying systems that enabled them, such as intermediaries, service providers, and financial and technical infrastructures. By leveraging sanctions, regulations, and law enforcement in a coordinated manner, the EU sought to raise the costs and reduce the effectiveness of FIMI operations. The report also mapped a complex global network, or "Galaxy", of interconnected channels, revealing a highly coordinated core infrastructure with regional clusters targeting Eastern Europe, Africa, and the Middle East.
To translate the assessment into practice, the report proposed a set of concrete operational follow-ups, such as strengthening data collection and information sharing, integrating a deterrence dimension into counter-FIMI work, and strengthening collective response and Member State engagement.
Source: EEAS, 4th EEAS Report on Foreign Information Manipulation and Interference Threats. [online] Published March 2026.
Developments in the FIMI Galaxy
A report by EUvsDisinfo highlighted how Foreign Information Manipulation and Interference (FIMI) have evolved into a dense, global ecosystem of interconnected actors, infrastructures, and narratives. The "FIMI Galaxy" has become more complex and active, with a growing number of incidents and operational channels. Threat actors increasingly reuse the same networks and tools across regions, reflecting a shift toward an organized, industry-like model of disinformation. The integration of AI has further accelerated this trend, enabling faster, cheaper, and more scalable production and distribution of manipulative content across multiple languages and platforms.
Another defining development is the expansion of the front lines in the information space. Almost every major geopolitical event now has a parallel dimension in the FIMI domain. The Galaxy mapped this ecosystem through three layers: threat actors (including Russia, China, and largely unattributed networks), manipulation infrastructures (known as Information Manipulation Sets), and targeted audiences. Notably, around 90 percent of the ecosystem operates through covert or proxy networks, making attribution difficult and allowing actors to coordinate influence campaigns while maintaining plausible deniability. These networks are highly adaptable, often shifting focus between regions, while using recurring tactics such as fake accounts, impersonation of media, and coordinated amplification.
Source: EUvsDisinfo, What’s new in the FIMI Galaxy?. [online] Published 20 March 2026. Available at: https://euvsdisinfo.eu/whats-new-in-the-fimi-galaxy/
PRC-Aligned Networks Exploit Philippine Divides to Shape Narratives
An article by Doublethink Lab argues that the Philippines faces a sustained Foreign Information Manipulation and Interference (FIMI) campaign centered on the West Philippine Sea and domestic political divisions, with the People’s Republic of China (PRC) state, state-controlled, and state-aligned actors, as well as proxy actors, playing a central role. Across the cases examined, these actors used coordinated narrative seeding and amplification to portray the United States as a destabilizing force, recast the Philippines as the aggressor in maritime confrontations, promote Sara Duterte as a preferable political alternative, agitate unrest within the armed forces, and frame Philippine defense cooperation with Washington as a path to proxy war. The campaign relied on a mix of official diplomatic and media channels, local PRC-aligned think tanks and influencers, pro-Duterte networks, and inauthentic accounts that amplified forged documents, misleading claims, and manipulated framings. These actors frequently presented themselves as experts, journalists, or credible public figures to lend legitimacy to their messaging and exploit existing domestic polarization to advance geopolitical objectives.
The tactics documented in the article reflect a deliberate effort to distort facts, degrade adversaries, cultivate support, and flood the information space across multiple platforms. These included forged memos alleging U.S. interference, early and coordinated framing of breaking maritime incidents, cross-posting across Chinese and Philippine digital ecosystems, AI-generated or suspicious account imagery, single-purpose amplification accounts, and the repeated use of local voices to normalize PRC-aligned narratives within Filipino discourse. The report concludes that these campaigns are not isolated disinformation incidents but rather components of a broader, persistent influence effort targeting the country’s cognitive security, democratic stability, and public understanding of national sovereignty issues. It emphasizes that the overlap between pro-PRC and pro-Duterte ecosystems, the growing role of locally based aligned actors, and the systematic use of coordinated inauthentic behavior make FIMI a national security and human rights concern that requires a coordinated, rights-based response.
Source: Doublethink Lab, A Bayanihan Initiative: Countering FIMI in the Philippines. [online] Published 20 March 2026. Available at: https://medium.com/doublethinklab/a-bayanihan-initiative-countering-fimi-in-the-philippines-15d14c4e2356
[Appendix - Frameworks to Counter Disinformation]
AI-Driven Disinformation Escalates Global Influence Warfare
The keynote speech by the High Representative of the Union for Foreign Affairs and Security Policy (HRVP), Kaja Kallas, at the 2026 Conference on Countering Foreign Information Manipulation and Interference frames FIMI as a central threat to democratic systems, driven primarily by state actors such as Russia and China, and increasingly enabled by artificial intelligence. These actors employ evolving tactics that move beyond traditional bot farms to large-scale, AI-driven influence operations, including deepfakes, coordinated bot amplification, and the use of paid influencers to manipulate public opinion. The example of Romania’s 2024 elections illustrates how Russia-backed networks used AI-generated content and influencer promotion to artificially boost a fringe candidate, demonstrating the ability of such campaigns to distort electoral processes. The speech also highlights the emergence of a "marketplace for manipulation", where influence operations are outsourced to private contractors, digital marketing actors, and influence-for-hire networks, often financed through opaque mechanisms such as cryptocurrency.
The address emphasizes that FIMI operates as a form of cognitive warfare aimed at eroding trust, polarizing societies, and shaping political outcomes across multiple domains: within the EU, in neighboring regions, and globally. Tactics include coordinated narrative amplification, financial support to proxy actors, early and sustained interference campaigns, and the exploitation of democratic openness to spread manipulative content. Case studies such as Moldova demonstrate how foreign actors deploy large-scale funding, vote-buying, and information campaigns to influence elections, intensifying their efforts against countries closer to the EU. The speech concludes that disinformation campaigns are persistent, adaptive, and globally networked, requiring coordinated responses that combine regulation, financial disruption of influence networks, cross-border cooperation, and strategic communication to counter manipulation and protect information integrity.
Source: European External Action Service, Keynote speech by HRVP Kaja Kallas at the 2026 Conference on Countering Foreign Information Manipulation and Interference: ‘From Insight to Impact’. [online] Published 17 March 2026.
AI Model Boosts Urdu Misinformation Detection Capabilities
An article by TechXplore describes the development of an advanced artificial intelligence system designed to detect misinformation in Urdu-language media, addressing a significant gap in the analysis of non-English disinformation. Developed using a dataset of over 14,000 Pakistani news articles spanning politically and socially sensitive topics, the model can identify fake, misleading, and partially true content with a reported accuracy of 96%. The system analyzes linguistic patterns, sentiment, and structural features to distinguish fabricated narratives from legitimate reporting, targeting a “polluted information ecosystem” where misinformation can influence public health, elections, and trust in institutions. This effort responds to the historical lack of comprehensive Urdu datasets, particularly on politically and religiously sensitive issues where disinformation is prevalent.
The research highlights both the potential and limitations of AI in countering information manipulation. While the model represents a significant advancement in detecting disinformation in a low-resource language, it also raises concerns about misclassification, particularly with satire or political dissent, and about its implications for content moderation and enforcement. The dataset has been made open access to support broader efforts to scale detection capabilities and adapt to evolving narratives. Overall, the system is positioned as a technical tool to strengthen resilience against misinformation campaigns targeting Urdu-speaking audiences, while acknowledging the ongoing challenges posed by dynamic and context-dependent disinformation tactics.
Source: Tech Xplore, Heriot-Watt University, AI model trained on 14,000 Urdu news stories spots misinformation with 96% accuracy. [online] Published 18 March 2026. Available at: https://techxplore.com/news/2026-03-ai-urdu-news-stories-misinformation.html
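The study's actual model, features, and dataset are not public, but the general pattern it describes, supervised classification of labeled news articles into categories such as fake, misleading, and true, can be illustrated with a minimal sketch. The toy Naive Bayes classifier below uses invented labels, tokens, and training examples purely for illustration; it is an assumption about the class of technique involved, not the researchers' implementation.

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (tokens, label). Returns (priors, word_counts, vocab)."""
    priors = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)  # per-label token frequencies
    vocab = set()
    for tokens, label in examples:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return priors, word_counts, vocab

def predict(tokens, priors, word_counts, vocab):
    """Pick the label maximizing log P(label) + sum of log P(token | label)."""
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label, prior in priors.items():
        lp = math.log(prior / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            # Laplace smoothing so unseen tokens do not zero out the score
            lp += math.log((word_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Illustrative (invented) training data -- not from the actual study
train_data = [
    (["miracle", "cure", "secret"], "fake"),
    (["shocking", "cure", "hidden"], "fake"),
    (["official", "report", "ministry"], "true"),
    (["ministry", "statement", "budget"], "true"),
]
model = train(train_data)
print(predict(["secret", "cure"], *model))  # → fake
```

A production system along the lines the article describes would replace these toy token lists with Urdu-specific tokenization and richer linguistic, sentiment, and structural features, and would likely use a far more capable model, but the train-on-labeled-articles, predict-a-category shape is the same.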
[CRC Glossary]
The modern Information Environment is projected to continue growing in sophistication and complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.