Cyber-based hostile influence campaigns 2nd - 8th March 2026
- CRC
- Mar 11
Updated: Mar 12

[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that amplify their effect.
During the last week, we observed, collected, and analyzed items of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Contents]
[State Actors]
Russia
China
Iran
[AI Related Articles]
[Cyfluence Attacks]
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
In a recent report, DoubleThink Lab reveals that leaked internal documents from the Chinese technology company GoLaxy outline plans to build advanced AI-powered systems to automate the manipulation of information.
A study by DoubleThink Lab examining the People’s Republic of China's influence in Southeast Asia found that both Malaysia and Singapore face growing exposure to Chinese political, economic, and technological influence.
Iranian state media have intensified disinformation efforts following the outbreak of war with the US and Israel, spreading false claims about battlefield successes and using manipulated or misleading visuals. As identified by NewsGuard's Reality Check, at least 18 false claims have circulated since the conflict began.
A report issued by Radware states that following Operation Epic Fury, coordinated pro-Iranian and allied hacktivist groups launched widespread cyber campaigns, including DDoS attacks, phishing operations, and hack-and-leak claims, targeting government and critical infrastructure across the Middle East while amplifying political messaging online.
Graphika’s analysis finds that AI played a central role in influence operations targeting elections in Bangladesh, Colombia, and Tibet, enabling the rapid production and amplification of synthetic political content across online platforms.
[State Actors]
Russia
Russia’s Information Control in Ukraine’s Occupied Territories
An article by EUvsDisinfo reports that since the start of Russia’s occupation of parts of Ukraine in 2014 and the full-scale invasion in 2022, Russian occupation authorities have built a comprehensive system of information control in Ukraine’s occupied regions. An estimated 5–6 million residents now live in an information environment largely cut off from Ukrainian media and dominated by Russian state narratives. Ukrainian outlets have been shut down or replaced with pro-Kremlin media, while telecommunications infrastructure has been “Russified”: networks have been shifted to Russian operators that enable traffic filtering, website blocking, and internet disruptions. Authorities have also restricted access to major platforms and promoted state-controlled messaging apps to expand surveillance and propaganda.
Ukrainian journalists have been detained or forced to leave, while new media outlets and Telegram channels simulate “normal life” under occupation and spread Kremlin narratives. Russian authorities have also targeted language and education as tools of control, eliminating Ukrainian-language schooling, removing Ukrainian books from libraries, and imposing Russian as the dominant language in public life. Arrests, surveillance, and online monitoring have created a climate of fear that encourages widespread self-censorship among residents. Despite these pressures, forms of quiet resistance persist: some residents use VPNs to access Ukrainian information, maintain Ukrainian language and culture at home, or continue education through Ukrainian online schools.
Source: EUvsDisinfo, E. Beketova. Russia’s Information Grip on Ukraine’s Occupied Territories. [online] Published 2 March 2026. Available at: https://euvsdisinfo.eu/russias-information-grip-on-ukraines-occupied-territories/
Kremlin Used Iran War Disinformation to Target Ukraine and France
As reported by NewsGuard's Reality Check, the Russian influence operation Matryoshka has used the ongoing conflict involving Iran to spread fabricated reports aimed at discrediting Ukraine and its allies, particularly France. The campaign produced at least 9 false claims during the conflict triggered by coordinated strikes by the US and Israel against Iranian targets. The reports mimicked credible media outlets and research institutions to make the claims appear legitimate.
Among the false narratives was a fake video attributed to the Institute for the Study of War claiming that an Iranian missile killed 70 French soldiers at a base in Abu Dhabi. Another report falsely attributed to Le Point alleged that Ukrainian phone scammers stole millions of euros from French citizens by exploiting the crisis. Additional claims attempted to portray Ukrainian military leadership as corrupt, including a story falsely suggesting that Ukraine’s army chief, Oleksandr Syrskyi, owned a luxury property in Dubai that was destroyed by an Iranian missile.
Source: NewsGuard Reality Check, A. Lee. Russia Uses Iran War to Plant Fake Reports Discrediting Ukraine And France. [online] Published 4 March 2026. Available at: https://www.newsguardrealitycheck.com/p/kremlin-creates-iran-war-fakes-to
Pro-Kremlin Disinformation Targeted Middle East War Narratives
An article by EUvsDisinfo showed that pro-Kremlin media outlets have circulated several misleading narratives related to the war in the Middle East. One widely promoted claim, attributed to the Russian Foreign Intelligence Service, alleged that France and the UK planned to provide Ukraine with a nuclear or "dirty" bomb. Russian state media repeated the accusation across multiple languages, despite offering no evidence, while officials in Ukraine, France, and the UK rejected the claims.
Other pro-Kremlin narratives accused Ukraine and the British intelligence agency MI6 of plotting to sabotage Black Sea energy infrastructure, including the Turkish Stream and Blue Stream gas pipelines. These allegations were largely based on statements by Putin and echoed previous unsupported accusations made by Moscow during the war in Ukraine. At the same time, pro-Kremlin outlets promoted the claim that Gulf states such as Qatar, Saudi Arabia, and the United Arab Emirates had asked Russia to serve as the guarantor of peace in the Middle East. Reports indicated that the diplomatic contacts were initiated by Moscow itself.
Source: EUvsDisinfo. How Russia lies about the war in the Middle East. [online] Published 6 March 2026. Available at: https://euvsdisinfo.eu/how-russia-lies-about-the-war-in-the-middle-east/
China
Leaked Documents Revealed AI-Driven Chinese Influence Operations
A 399-page leak analyzed by DoubleThink Lab reveals that the Chinese technology company GoLaxy planned to build advanced AI-powered systems to automate the manipulation of foreign information. The documents describe a “smart propaganda” platform capable of monitoring political trends, identifying targets, generating tailored propaganda, and distributing it through large networks of realistic fake social media accounts.
The company, which originated from the Institute of Computing Technology under the Chinese Academy of Sciences, aims to use AI and data analysis to influence public opinion abroad more effectively than previous campaigns, often associated with spam-like messaging. The system gathers large amounts of political and social data, profiles influential individuals and organizations, and generates customized content that can be automatically posted by thousands of “persona bots” across platforms such as Facebook, X, and Telegram.
The documents also reveal detailed monitoring programs targeting sensitive regions and political issues for China, including extensive datasets on political figures and organizations in Taiwan, as well as monitoring of political actors and institutions in the US and across the Indo-Pacific. These systems are designed to track elections, analyze public sentiment, and identify opportunities to shape narratives or deepen political divisions. The leak also highlights cooperation between GoLaxy and organizations linked to the Chinese government and military, including the Cyberspace Administration of China and the Central Military Commission.
Source: DoubleThink Lab. The Rise of AI in PRC Influence Operations: Nine Takeaways from the GoLaxy Documents. [online] Published 4 March 2026. Available at: https://medium.com/doublethinklab/the-rise-of-ai-in-prc-influence-operations-nine-takeaways-from-the-golaxy-documents-2d6617a753e5
A Case Study of PRC Influence in Malaysia and Singapore
A study by DoubleThink Lab examining the People’s Republic of China's influence in Southeast Asia found that both Malaysia and Singapore face growing exposure to Chinese political, economic, and technological influence. According to the Doublethink Lab China Index 2024, several Southeast Asian countries ranked among the most affected globally.
In Malaysia, ties with Beijing have strengthened under Prime Minister Anwar Ibrahim, including expanded economic cooperation, visa policies, and agreements in media, telecommunications, and technology. Public sentiment toward China has also grown more positive, partly due to geopolitical tensions, such as the Gaza war. At the same time, concerns remain about economic competition, security risks, and territorial disputes in the South China Sea. Malaysia has signed multiple cooperation agreements with Chinese state-linked media and technology companies, raising questions about potential narrative influence and information sharing.
In Singapore, influence efforts were evident in cultural, educational, and media exchanges, as well as on Chinese social media platforms and diaspora networks. Authorities have also faced cyber incidents attributed to Chinese-linked groups and instances of online disinformation, including AI-generated content spreading false claims about political divisions within the ruling People’s Action Party and its leader, Prime Minister Lawrence Wong. Despite expanding engagement with Beijing, both countries maintain complex and cautious relationships with China. Singapore continues to balance ties with both China and the US, while Malaysia’s closer cooperation with Beijing coexists with concerns over sovereignty and economic dependence.
Source: DoubleThink Lab. PRC Influence on ASEAN Countries in 2025: The Case Study of Malaysia and Singapore. [online] Published 3 March 2026. Available at: https://medium.com/doublethinklab/prc-influence-on-asean-countries-in-2025-the-case-study-of-malaysia-and-singapore-48d84c1e71c1
Iran
Misleading War Images Exaggerated Iran’s Military Success
As coordinated strikes on 28 February 2026 escalated tensions between the US, Israel, and Iran, an analysis by NewsGuard’s Reality Check documented how misleading images and videos began circulating widely on social media. At least three viral pieces of visual misinformation collectively garnered more than 21.9 million views on X, falsely portraying Iran as achieving major military victories.
In reality, each example relied on unrelated or old footage. A widely shared image allegedly showing the USS Abraham Lincoln sinking was actually a photo of the scuttling of the decommissioned USS Oriskany in 2006. Another viral video claimed to depict an Iranian strike on Israel’s Negev Nuclear Research Center but was traced to a 2017 ammunition depot explosion in Balakliya, Ukraine. A third clip circulated as footage of a missile strike on a CIA headquarters in Dubai, United Arab Emirates, but actually showed a residential tower fire in nearby Sharjah in 2015.
Source: NewsGuard Reality Check, L. Marchl & I. Blachez & S. Rubinson. Amid U.S. and Israeli Attack, Phony Images and Videos Make It Look Like Iran Has Had Big Wins. [online] Published 2 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iran-is-winning-according-to-misleading
Cyber and Information Operations Expand Pressure on Iran’s Regime
Alongside US and Israeli airstrikes on Iran, a parallel campaign has targeted the country’s digital infrastructure and information environment. A report by the Australian Strategic Policy Institute (ASPI) described this strategy as a fusion of cyber and kinetic operations designed to disrupt communications, weaken situational awareness, and pressure political decision-making. By targeting telecommunications, state media systems, and online services, cyber tools can create uncertainty within government institutions while simultaneously affecting how the public perceives events on the ground, though none of these measures guarantees regime collapse.
The campaign reportedly included attempts to reach civilians directly through digital platforms. One example cited is the alleged hacking of the widely used Iranian prayer and calendar application BadeSaba, which sent push notifications urging users and military personnel to surrender. Similar approaches have appeared in the past, including those used by Russia in digital influence operations.
Source: ASPI, J. Bassi & J. Corera. Beyond bombs: cyber and information operations targeting Iran’s regime. [online] Published 2 March 2026. Available at: https://www.aspi.org.au/strategist-posts/beyond-bombs-cyber-and-information-operations-targeting-irans-regime/
Iran Launched a Disinformation Campaign Following War with US and Israel
Iranian state media have intensified disinformation efforts following the outbreak of war with the US and Israel, spreading false claims about battlefield successes and using manipulated or misleading visuals. As identified by NewsGuard's Reality Check, at least 18 provably false war-related claims linked to Iran-aligned sources have circulated since the conflict began, more than triple the number recorded in the two weeks before the attacks.
Several claims relied on altered or misleading images and videos. One example involved an AI-manipulated satellite image shared by the Iranian state outlet Tehran Times that falsely depicted the destruction of a US radar system at Al Udeid Air Base. In another case, Iranian channels circulated a video claiming that Iranian forces had shot down a US fighter jet over Tehran. However, the footage actually showed an Iranian aircraft destroyed by the Israeli Air Force. The spread of these narratives has been amplified by Iran’s near-total internet shutdown, which reduced connectivity by about 98 percent and limited access to outside information.
Source: NewsGuard Reality Check, W. Rizvi. Iran’s State Media Bombards Iranians with False Claims and Misleading Visuals. [online] Published 5 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iran-goes-on-a-disinformation-offensive
AI Video Showing Massive Destruction in Tel Aviv Spread by Pro-Iran Accounts
An investigation by NewsGuard’s Reality Check found that a viral video circulating on social media falsely claimed to show a dozen Iranian missiles striking a residential neighborhood in Tel Aviv on 3 March 2026, supposedly bypassing Israel’s missile defense system and causing massive destruction. The footage spread widely across platforms, gaining tens of millions of views after being promoted by pro-Iran and anti-Israel accounts, some of which described it as verified “breaking” footage of a major Iranian attack.
The investigation determined that the video had actually been generated using AI. Analysis using the AI-detection tool Hive, along with visual inconsistencies in the footage, revealed that it was fabricated. For example, the street shown in the clip appeared completely deserted despite the supposed daytime missile barrage, and the siren heard in the background did not match authentic Israeli missile alert sirens.
AI chatbots compounded the confusion: Grok described the video as authentic citizen footage, while Google AI Overviews suggested it depicted a real military engagement. In reality, while one Iranian missile did strike Tel Aviv on 28 February 2026, causing injuries and one death, the large-scale destruction shown in the viral video never occurred.
Source: NewsGuard Reality Check, I. Chomnalez. Debunk: Video Showing Massive Destruction in Tel Aviv Is an AI Fabrication Spread by Pro-Iran Accounts. [online] Published 6 March 2026. Available at: https://www.newsguardrealitycheck.com/p/dozens-of-missiles-struck-tel-aviv
The Online Information Environment Around the Iran War
Following US and Israeli airstrikes on Iran, researchers at Graphika tracked the evolving online information environment surrounding the conflict. Their monitoring highlighted several parallel developments, including influence operations, hacktivist mobilization, and the spread of false or misleading information across social media platforms.
Pro-China state media outlets such as CGTN and China Daily, along with Chinese diplomatic accounts, have published videos, memes, and official statements portraying the US and Israel as aggressors while expressing support for Iran and calling for de-escalation. At the same time, disinformation has circulated widely online, including coordinated posts falsely claiming that Donald Trump and Benjamin Netanyahu had been killed in the conflict. These claims spread across multiple platforms using copied messages and repurposed images.
Several pro-Iranian hacktivist groups linked to Iranian security services, including actors associated with the Islamic Revolutionary Guard Corps, reactivated and claimed attacks against regional and Western targets. Meanwhile, a coalition of hacktivist groups mobilized within hours of the strikes, calling for cyber operations against US and Israeli interests, though many of the claimed attacks remain unverified. Researchers also observed pre-strike messaging from pro-Iranian and pro-Russian networks aimed at discouraging US military action and warning of possible retaliation.
Source: Graphika. Iran War: Monitoring the Online Threat Landscape. [online] Published 6 March 2026. Available at: https://graphika.com/posts/iran-war-monitoring-the-online-threat-landscape
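Coordinated campaigns of the kind described above are often flagged in part by near-identical "copypasta" messages posted across many accounts. The following is a minimal, hypothetical sketch of that idea using Jaccard similarity over word shingles; it is illustrative only and is not the methodology Graphika describes, and the example posts are invented.

```python
def shingles(text, n=3):
    """Lowercase word n-grams used as a crude fingerprint of a post."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def find_copied_posts(posts, threshold=0.7):
    """Return index pairs of posts whose texts are near-duplicates."""
    sigs = [shingles(p) for p in posts]
    pairs = []
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if jaccard(sigs[i], sigs[j]) >= threshold:
                pairs.append((i, j))
    return pairs

# Hypothetical posts: two near-identical claims and one unrelated message.
posts = [
    "BREAKING confirmed footage shows a major strike on the capital tonight",
    "BREAKING confirmed footage shows a major strike on the capital this evening",
    "Weather tomorrow will be sunny with light winds across the region",
]
print(find_copied_posts(posts))  # the near-duplicate pair: [(0, 1)]
```

Real platform-scale analysis would add locality-sensitive hashing to avoid the pairwise comparison, plus account-level signals (creation dates, posting cadence) before inferring coordination.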
Hacktivist Cyber Campaign Escalates Following Operation Epic Fury
A report issued by Radware states that following the launch of the U.S.–Israeli military offensive “Operation Epic Fury” on February 28, 2026, a rapid surge of hacktivist cyber activity emerged across the Middle East. According to the first report, hacktivist groups mobilized within nine hours of the initial strikes. They launched coordinated distributed denial-of-service (DDoS) campaigns targeting government institutions, financial systems, and telecommunications providers. The activity was highly concentrated, with two groups - Keymous+ and DieNet - responsible for nearly 70% of all attack claims between February 28 and March 2. In total, nine hacktivist collectives claimed 107 attacks against 81 organizations across eight countries, with government institutions accounting for approximately 53% of all targets. The campaign focused primarily on Kuwait, Israel, and Jordan, which together represented more than 76% of attack claims, indicating a strategic effort to disrupt state infrastructure and maximize political visibility during the regional escalation.
A separate threat analysis by Palo Alto Networks’ Unit 42 highlights the broader cyber and influence ecosystem surrounding the conflict, noting that although Iranian state cyber capabilities were likely temporarily constrained by severe internet outages and disruptions to command structures, external actors rapidly intensified cyber activity. According to that report, numerous pro-Iranian, pro-Palestinian, and pro-Russian hacktivist groups, including Handala Hack, APT Iran, Cyber Islamic Resistance, Dark Storm Team, and 313 Team, conducted disruptive operations, including DDoS attacks, hack-and-leak campaigns, website defacements, and attempts to access industrial control systems. The report also identifies influence-oriented tactics, including phishing campaigns using a malicious replica of Israel’s Home Front Command RedAlert application to deliver surveillance malware, intimidation campaigns targeting critics of Iran, and public claims of system compromises intended to generate psychological pressure and shape public narratives during the conflict.
Taken together, the two analyses describe a coordinated cyber escalation accompanying the kinetic conflict. Both reports emphasize that a small number of ideologically aligned hacktivist networks dominate the activity, often publicizing operations through Telegram and other online platforms to amplify political messaging. The participation of external actors, including pro-Russian groups such as NoName057(16), suggests the emergence of a broader transnational cyber campaign in which disruptive attacks, exaggerated breach claims, and information operations operate alongside military confrontation to destabilize infrastructure, influence public perception, and expand the digital dimension of the conflict.
Source:
Radware. Retaliatory Hacktivist DDoS Activity Following Operation Epic Fury/Roaring Lion. [online] Published 3 March 2026. Available at: https://www.radware.com/security/threat-advisories-and-attack-reports/ddos-activity-following-operation-epic-fury-roaring-lion/
Unit 42, Palo Alto Networks. Threat Brief: March 2026 Escalation of Cyber Risk Related to Iran. [online] Published 2 March 2026. Available at: https://unit42.paloaltonetworks.com/iranian-cyberattacks-2026/
[AI Related Articles]
Google’s AI Overviews Amplified Iran-Related Hoaxes
An investigation by NewsGuard’s Reality Check found that Google’s AI-generated search summaries were repeating false claims about images linked to the conflict involving Iran. In several cases, the summaries presented unverified claims as plausible explanations, even when the images or videos were unrelated to the conflict or created using AI.
NewsGuard identified multiple examples. Google’s AI summary described a video from a 2015 residential fire in the United Arab Emirates as a possible missile strike on a CIA-linked building in Dubai. In another case, an AI-generated video allegedly showing multiple Iranian missiles striking Tel Aviv was summarized as depicting a real military escalation. Similarly, footage posted weeks earlier was mischaracterized as evidence that the US Embassy in Riyadh had been destroyed in a drone attack, despite verified footage showing only minor damage. Although Google’s reverse-image search can still help trace the origins of images through its “Exact Matches” feature, the AI-generated summary appears at the top of search results and may shape users’ first impressions.
Source: NewsGuard Reality Check, S. Rubinson & I. Chomnalez. Google’s AI Overviews Are Fooled by Iran-Related Visual Hoaxes in its Own Reverse-Image Searches. [online] Published 3 March 2026. Available at: https://www.newsguardrealitycheck.com/p/googles-ai-overviews-supercharge
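The reverse-image tracing described above, which matches a recycled clip to its original source, commonly rests on perceptual hashing: visually similar frames produce similar fingerprints even after re-encoding or minor edits. Below is a minimal sketch of an average hash in pure Python over grayscale pixel grids; it is an illustration of the general technique, not Google's actual matching pipeline, and the tiny 4x4 "frames" are invented.

```python
def average_hash(pixels):
    """Compute a simple perceptual hash of a grayscale image.

    `pixels` is a 2D list of brightness values (0-255). Each bit records
    whether a pixel is brighter than the image's mean, so re-encoded or
    slightly edited copies of the same frame yield similar hashes.
    """
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests recycled imagery."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Hypothetical 4x4 frames: an original, a slightly brightened re-upload,
# and an unrelated image.
original = [[10, 10, 200, 200]] * 4
re_encoded = [[12, 14, 198, 205]] * 4
unrelated = [[200, 10, 200, 10]] * 4

h0, h1, h2 = (average_hash(img) for img in (original, re_encoded, unrelated))
print(hamming(h0, h1), hamming(h0, h2))  # → 0 8
```

Production systems downscale full frames (e.g. to 8x8) before hashing and index the fingerprints for fast lookup, but the core idea, thresholding pixels against the mean and comparing bit patterns, is the same.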
AI Fueled Election Interference Across Three States
An analysis by Graphika found that AI played a central role in influence operations targeting elections in Bangladesh, Colombia, and Tibet. Across the three cases, coordinated networks used AI tools to generate deepfake videos, produce political content in multiple languages, and automate the mass distribution of posts through inauthentic accounts designed to appear like genuine users.
Ahead of Bangladesh’s February 2026 election, AI-generated videos circulated online, showing fabricated authority figures accusing political actors, while fact-checkers later identified the clips as fake. In the Tibetan diaspora election for the Central Tibetan Administration, accounts linked to the pro-China "Spamouflage" network published AI-generated articles and cartoons in both English and Tibetan to discredit candidates. Meanwhile, in Colombia’s parliamentary elections, a network of automated accounts flooded social media with thousands of posts targeting President Gustavo Petro, amplifying both pro and anti-government narratives to create confusion and information overload.
AI significantly lowered the cost and effort required to produce persuasive political content and coordinate large-scale influence campaigns. As a result, election interference is becoming easier to conduct, more difficult to detect, and increasingly focused on eroding public confidence in democratic institutions rather than simply promoting or attacking specific candidates.
Source: Graphika. Deepfakes, Noise, and Doubt: AI's Role in Three Recent Elections. [online] Published 4 March 2026. Available at: https://graphika.com/posts/deepfakes-noise-and-doubt-ai-s-role-in-three-recent-elections
AI-Assisted Russian Influence Network Targets Africa
A pro-Kremlin influence network linked to the media outlet Rybar used ChatGPT to support covert information operations targeting audiences in Africa, according to a threat report by OpenAI. The campaign, dubbed "Fish Food", relied on artificial intelligence to mass-produce multilingual social media content that was later disseminated across Telegram and X through accounts with no declared connection to the outlet. Researchers found that the system was used primarily as a content-generation hub, producing batches of comments and narratives, often from Russian-language prompts, designed to promote pro-Russian messaging, criticize Ukraine, and portray Western countries as interfering actors.
Beyond producing propaganda content, the network reportedly used AI tools to plan and structure influence operations. Prompts requested assistance drafting proposals for covert campaigns in Africa, including strategies to manage coordinated social media accounts, create a bilingual investigative journalism website, arrange paid media placements in French-language outlets, and organize offline events. Additional prompts sought information on electoral systems in Burundi, Cameroon, and Madagascar, including suggestions for mobilizing protests and developing local networks to support election-related influence activities. Although some of the accounts involved appeared to originate in Russia and Rybar maintains a large Telegram audience, OpenAI stated it found limited evidence that the campaign achieved substantial amplification or measurable real-world impact.
Source: OpenAI. Disrupting malicious uses of AI. [online] Published 25 February 2026. Available at: https://openai.com/index/disrupting-malicious-ai-uses/
[Cyfluence Attacks]
Iran Expected to Launch Cyber Attacks Worldwide Following US and Israeli Strikes
As reported by Infosecurity Magazine, Iran is highly likely to respond to recent US and Israeli air strikes with cyber-attacks targeting organizations across the Middle East and other regions, according to Google’s chief of cyber threat intelligence, John Hultquist. Speaking at an event hosted by the Royal United Services Institute (RUSI) in London, Hultquist warned that countries hosting US military bases, such as Qatar, Bahrain, Jordan, the United Arab Emirates, and Kuwait, could face particularly aggressive cyber operations. While Iran’s cyber tools may not be new, the scope of potential targets is expected to expand significantly, exposing organizations with weaker cybersecurity defenses.
Hultquist noted that Iran often operates through loosely connected cybercriminal or hacktivist groups, creating plausible deniability for state-backed actions. These groups may appear independent, but could actually act as fronts for the Iranian Revolutionary Guard Corps (IRGC). As a result, some incidents labeled as hacktivism or ransomware may in fact be covert state-sponsored cyber campaigns aimed at governments, businesses, and critical infrastructure in the US, Gulf states, and other countries viewed as hostile to Iran.
Source: Infosecurity Magazine, D. Palmer. Expect Iran to Launch Cyber-Attacks Globally, Warns Google Head of Threat Intel. [online] Published 2 March 2026. Available at: https://www.infosecurity-magazine.com/news/iran-cyber-attacks-global-google/
Middle East Conflict Triggered Surge in Global Cyber Activity
Infosecurity Magazine describes the escalating conflict in the Middle East as entering a hybrid phase that combines military operations with large-scale cyber campaigns, raising concerns about spillover risks for organizations worldwide. The shift followed joint Israeli and US strikes on Iran targeting Iranian leadership, military, and nuclear-related sites. At the same time, a major cyber operation reportedly disrupted Iran’s digital infrastructure, with internet connectivity dropping to about 4% of normal levels.
Security experts warned that cyber retaliation is likely to intensify, potentially disguised as criminal activity such as ransomware attacks. Increased activity has already been observed from hacktivist and cybercriminal groups, including HydraC2, Handala, and Sicarii, alongside a surge in DDoS attacks, website defacements, and alleged data breaches targeting government, banking, aviation, and telecom sectors. More than 150 hacktivist incidents were reported in the days following the strikes. Experts noted that Iran has previously used cyber operations to retaliate against geopolitical pressure and may deploy tactics designed to obscure state involvement. These could include ransomware attacks followed by destructive data wiping, exploiting long-term espionage access for sabotage, operating through proxy cybercriminal groups, and publicly releasing stolen data to pressure victims.
Source: Infosecurity Magazine, A. Mascellino. Hybrid Middle East Conflict Triggers Surge in Global Cyber Activity. [online] Published 2 March 2026. Available at: https://www.infosecurity-magazine.com/news/middle-east-conflict-surge-global/
[Appendix - Frameworks to Counter Disinformation]
EU and France Rethink Strategy to Counter Russian Disinformation
Amid growing concerns about foreign interference ahead of major elections in 2027, an article in Le Monde outlines how the EU and France are reassessing their strategies to counter rising Russian disinformation. The elections include the French presidential election and the Italian parliamentary elections. Concerns have grown over interference campaigns ranging from propaganda websites and manipulated AI content to algorithm-driven influence on social media platforms such as TikTok and X. In response, the European Commission launched the European Center for Democratic Resilience as part of its “European Democratic Shield” initiative, aiming to coordinate member states’ responses and protect public debate.
Both Brussels and Paris are shifting toward a society-wide approach that emphasizes public resilience rather than relying solely on removing false content. The strategy focuses on strengthening media literacy, empowering trusted community figures such as educators and librarians to help identify misinformation, and improving research and education around information manipulation.
Finland is often cited as a successful model in which media literacy is taught from an early age and public trust in institutions remains relatively high. France has also developed specific tools to counter foreign interference. Since 2021, the government unit Viginum has monitored digital manipulation campaigns, while officials plan to expand research programs and public awareness initiatives ahead of the 2027 election cycle.
Source: Le Monde, W. Audureau. EU and France rethink their approach to combat Russian disinformation surge. [online] Published 4 March 2026. Available at: https://www.lemonde.fr/en/les-decodeurs/article/2026/03/04/eu-and-france-rethink-their-approach-to-combat-russian-disinformation-surge_6751095_8.html
[CRC Glossary]
The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.