
Cyber-based hostile influence campaigns, 23rd February – 1st March 2026



[Introduction]


Cyber-based hostile influence campaigns aim to shape the perceptions of target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.


During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This report summarizes what we regard as the main events.



[Contents]



[Social Media Platforms]


X


[State Actors]


Russia 

China


Iran



[AI Related Articles]


[General Reports]




[Report Highlights]









[Social Media Platforms]


X

Disinformation Spread on X After US and Israeli Strike on Iran

Following announcements by Donald Trump about a major U.S. and Israeli military operation against Iran, a wave of misleading and false content rapidly spread on X. An analysis by WIRED reviewed hundreds of viral posts containing inaccurate claims about the scale and locations of the attacks. Many widely shared videos and images were old footage, taken from unrelated conflicts, altered, or generated with AI. In some cases, video game clips were circulated as supposed evidence of real combat events.


Much of the viral misinformation came from accounts with paid verification badges, whose posts can generate revenue through engagement. Although some misleading posts later received contextual corrections through community notes, they often accumulated millions of views before being flagged. Examples included footage falsely claiming to show missiles over Dubai, alleged Israeli aircraft shootdowns without credible confirmation, and manipulated images depicting damage to Iranian or U.S. military assets.


Source: WIRED, D. Gilbert. X Is Drowning in Disinformation Following US and Israeli Attack on Iran. [online] Published 28 February 2026. Available at: https://www.wired.com/story/x-is-drowning-in-disinformation-following-us-and-israels-attack-on-iran/


White House Staffer Linked to Anonymous Pro-Trump X Account

A WIRED report described how a popular pro-Trump account on X, known as “Johnny MAGA,” with nearly 300,000 followers, appeared to be run by a White House rapid-response staffer named Garrett Wade. The account has frequently amplified messaging from the White House and defended Donald Trump's policies while presenting itself online as an independent supporter.


WIRED linked Wade to the anonymous account through publicly available records and a phone number associated with both him and the profile. The account has promoted administration narratives, attacked political opponents such as Gavin Newsom, and defended controversial content shared by Trump, including an AI-generated video depicting Barack Obama and Michelle Obama as apes. Disinformation researchers suggested that, if a government staffer is secretly operating an influential political account, it could blur the line between official government messaging and what appears to be organic digital support.


Source: WIRED, M. Kelly. A White House Staffer Appears to Run Massive Pro-Trump X Account. [online] Published 25 February 2026. Available at: https://www.wired.com/story/a-white-house-staffer-appears-to-run-massive-pro-trump-meme-page/


[State Actors]


Russia

Escalating Hybrid Warfare Risks in Europe

Russia has conducted increasingly aggressive hybrid warfare in NATO countries since its 2022 invasion of Ukraine, and a report by Recorded Future’s Insikt Group warns that it will likely intensify coordinated tactics such as cyberattacks, sabotage, propaganda, airspace violations, and energy pressure based on a New Generation Warfare doctrine. Unlike opportunistic actions, a full campaign would involve more frequent, simultaneous operations across multiple countries to undermine confidence in governments and strain NATO’s ability to respond.


These tactics could target critical infrastructure, including communications cables, transportation, and energy systems, while expanding disinformation campaigns to deepen political divisions and reduce public trust. The report warned that both governments and private companies would face disruption, reputational risks, and economic damage. Russia may also exploit Europe’s remaining dependence on Russian oil and gas and use emerging technologies such as AI to enhance propaganda and cyber operations. Putin is likely to assess over the next three to five years whether conditions are favorable for a military incursion into NATO territory, particularly in Eastern Europe.


Source: Recorded Future, Insikt Group, Preparing for Russia’s New Generation Warfare in Europe. [online] Published 24 February 2026. Available at: https://www.recordedfuture.com/research/preparing-for-russias-new-generation-warfare-in-europe


Russian Disinformation Wave Targeted the Olympics, France, and the Navalny Investigation

Disinformation campaigns targeting the Olympics, France, and the Navalny investigation intensified in the week of 23 February 2026, as described in an EUvsDisinfo article. On the Winter Olympics in Milan and Cortina, where Russia is banned from competing as a state due to doping scandals and its war against Ukraine, pro-Kremlin sources spread AI-enhanced false reports portraying Ukrainian athletes as aggressive or problematic. These narratives included fabricated news segments impersonating legitimate outlets such as Euronews, AFP, and CBC, with AI-generated voices and altered footage used to falsely claim that Ukrainian athletes were isolated from others because of “toxic behavior.”


Regarding France, pro-Kremlin outlets falsely claimed that President Emmanuel Macron intended to ban virtual private networks after France adopted new rules restricting social media access for children under 15. In reality, Macron stated that VPNs would not be banned and could even be used to bypass the restrictions.


Another disinformation effort sought to deflect attention from evidence that Russian opposition leader Alexey Navalny had been poisoned. Russian officials suggested that the findings of a multinational investigation were meant to distract from the release of files related to Jeffrey Epstein, despite no evidence linking the two events. In the past, pro-Kremlin FIMI (foreign information manipulation and interference) outlets have alleged that US special services killed Navalny, that the UK probably assassinated Navalny, and that Navalny died after using an mRNA vaccine supplied by Germany, among many other deceptions.


Source: EUvsDisinfo, Russia’s quest for disinformation gold. [online] Published 27 February 2026. Available at: https://euvsdisinfo.eu/russias-quest-for-disinformation-gold/


Russian Disinformation Campaign Targeted Ukraine’s European Allies

A report by NewsGuard found that the Russian influence operation Storm-1516 has increasingly shifted its focus from Ukraine itself to European countries supporting Ukraine’s defense, particularly France and Germany. Since January 2025, the network has circulated dozens of fabricated claims targeting leaders such as Emmanuel Macron and Friedrich Merz, generating hundreds of millions of views on X. The campaign intensified as European countries increased financial and military aid to Ukraine following a sharp decline in U.S. support.


The operation spread disinformation by publishing fabricated stories on fake websites that imitated legitimate news outlets, then amplified them through anonymous social media accounts and pro-Kremlin influencers. Many of the false narratives involved sensational personal accusations, fabricated documents, or manipulated videos aimed at damaging the reputations of political leaders and undermining trust in democratic institutions. In France, several false claims targeted Macron personally, including fabricated stories linking him to the Jeffrey Epstein scandal. In Germany, narratives portrayed Merz as corrupt or involved in criminal activity, while also spreading misleading claims about election integrity.


Source: NewsGuard, N. Huet & M. Wolf & E. Maitland, Russian Influence Campaign Shifts to Target Ukraine’s European Allies Funding its Defense. [online] Published 27 February 2026. Available at: https://www.newsguardrealitycheck.com/p/russian-influence-campaign-storm-1516-targets-france-germany


Russia Expands Influence in Africa Through Hybrid and Disinformation Tactics

Russia’s expanding footprint in Africa and the central role of influence operations in advancing Moscow’s strategic objectives were analyzed in a report by the Carnegie Endowment for International Peace. Beyond military deployments and security partnerships, Russia leverages coordinated information campaigns to portray itself as an anti-colonial partner and an alternative to Western engagement. Through state-backed media, diplomatic messaging, and amplification by local actors, pro-Kremlin narratives frame Western governments as destabilizing forces while depicting Russia as a defender of sovereignty and regime stability. These campaigns frequently exploit local grievances, governance failures, and dissatisfaction with Western security interventions to gain traction.

Information manipulation operates alongside security assistance, elite cultivation, and economic arrangements. Russian-linked actors, including private military companies and affiliated media platforms, support friendly political elites while shaping public discourse to legitimize Moscow’s presence. Disinformation and narrative amplification are used to undermine Western credibility, reinforce anti-Western sentiment, and present Russian engagement as both pragmatic and ideologically aligned with African sovereignty. In fragile or conflict-affected states, this hybrid approach has enabled Russia to secure influence disproportionate to its economic footprint.


However, the report highlighted significant constraints on Moscow’s long-term influence. Russia’s engagement is heavily concentrated in the security domain and relies on opportunistic partnerships rather than sustained institutional investment. Limited economic capacity, reputational risks associated with mercenary activity, and complex local political dynamics constrain the durability and scalability of its influence.


Source: Carnegie Endowment for International Peace, N. Reynolds, Russia in Africa: Examining Moscow’s Influence and Its Limits. [online] Published 26 February 2026. Available at: https://carnegieendowment.org/research/2026/02/russia-role-west-southern-africa-junta-wagner-africa-corps


China

Pro-China Bot Network Spread Propaganda on Threads Targeting Taiwanese Users

A coordinated influence campaign linked to pro-China actors using automated AI bots on Threads to spread propaganda aimed at Taiwanese audiences was uncovered in an investigation by NewsGuard. The campaign involved at least 10 accounts posing as Taiwanese users. Automated bots can generate massive amounts of content and engagement in near-real time. As a result, despite the accounts having relatively small follower bases, some of their posts have gone viral.


The bot accounts frequently posted in Traditional Chinese, following the writing conventions used in everyday communication in Taiwan. Their messages promoted narratives praising China while attacking Taiwan’s ruling Democratic Progressive Party and highlighting domestic grievances such as housing costs and governance. At the same time, the bots avoided politically sensitive topics banned by Beijing, including questions about the Tiananmen Square Massacre.


NewsGuard became aware of the campaign when Threads users with advanced programming knowledge tricked the bots into revealing their true nature. In one instance, a bot repeatedly followed instructions to repeat a phrase verbatim, and in another, it identified itself as OpenAI’s AI assistant (ChatGPT). In response to a comment by a Meta spokesperson questioning the accounts’ influence, NewsGuard argued that the appropriate way to measure their impact is not by follower counts but by the accounts’ reach.


Source: NewsGuard, C. Lin, Pro-China Influence Campaign Uses Automated Bots on Meta’s Threads to Target Taiwanese Users With Propaganda. [online] Published 25 February 2026. Available at: https://www.newsguardrealitycheck.com/p/automated-pro-china-bots-swarm-metas


Iran

Pro-Iran Networks Mobilized Hacktivists and Media to Control War Narratives

Coordination among pro-Iran media outlets, social media networks, and hacktivist groups during the June 2025 war between Israel and Iran was examined in a Graphika analysis (see also CRC Weekly Update, 15th–21st December 2025). After an initial delay, these actors amplified breaking-news-style claims, threats, and unverified “hack-and-leak” announcements designed to portray Iran as dominant online and to depict Israel as exposed and technically weak. Graphika highlighted the rapid promotion of a new actor, Cyber Isnaad Front, which was repeatedly boosted by outlets linked to the IRGC.


The analysis documented a large wave of mostly organic pro-Iran hacktivist activity, including Handala Hack, Gaza Children Hackers, CyberToufan, and Homeland Justice, claiming dozens of intrusions, DDoS attacks, doxxes, and data dumps. Some disruptive operations (and counter-claims) were attributed to other actors, such as Gonjeshke Darande. State media amplification, for example, by Tasnim News, helped inflate the visibility of these personas, but Graphika stressed that most public claims remained unproven.


Source: Graphika, Everything Everywhere All at Once: The Pro-Iran Playbook for Narrative Control – Part 2. [online] Published February 2026. Available at: https://public-assets.graphika.com/reports/Graphika_Report-Everything_Everywhere_All_at_Once_(part%202).pdf


Hacktivist Activity Linked to Iran War

CyberKnow posted that some 60 hacktivist groups were engaged in activities related to the Iran–US/Israel war. Most of Iran’s neighboring countries have been targeted. Pro-Russian groups have begun to join the fight in support of Iran, and more are expected to follow.


Source: X, CyberKnow (@Cyberknow20). Post on X. [online] Published 2 March 2026. Available at: https://x.com/Cyberknow20/status/2028454796077019583


[AI Related Articles]


AI-Enabled Scams and State-Linked Influence Operations Exposed in 2026 Threat Report

The February 2026 OpenAI update documented how threat actors across multiple regions leveraged AI models alongside traditional digital infrastructure to conduct scams, covert influence operations, and social engineering campaigns. The report highlighted that AI was typically integrated into broader operational workflows rather than acting as a standalone tool. Documented abuses included semi-automated romance and investment scams originating in Cambodia, fake “scam recovery” services impersonating U.S. law enforcement and law firms, and social engineering efforts, likely linked to China, targeting U.S. officials. These operations followed structured engagement funnels, often described as “ping, zing, sting,” using AI-generated content to initiate contact, build emotional manipulation, and ultimately extract funds. Distribution tactics relied heavily on social media ads, messaging apps, fake websites, impersonation of trusted institutions, and multilingual content generation to increase credibility and reach.


The report also detailed multiple covert influence operations linked to or aligned with state actors. A Russia-linked content farm associated with the "Rybar" network generated multilingual social media posts and coordinated amplification across Telegram and X, including proposals for election interference campaigns in Africa. Another Russia-linked campaign targeted African audiences with long-form geopolitical articles under fabricated academic personas, seeking to criticize Western governments and legitimize Russian positions. Most significantly, a ChatGPT account linked to an individual associated with Chinese law enforcement documented and planned large-scale "cyber special operations" aimed at suppressing dissent globally. These activities included coordinated harassment, hashtag campaigns, impersonation of officials, creation of fake accounts, document forgery, doxxing, smear campaigns, and attempts to trigger platform enforcement actions against dissidents.


Source: OpenAI. Disrupting malicious uses of our model: An update. [online] Published February 2026. Available at: https://cdn.openai.com/pdf/df438d70-e3fe-4a6c-a403-ff632def8f79/disrupting-malicious-uses-of-ai.pdf


Quarterly NewsGuard Audit Found Leading AI Chatbots Repeat False Claims at High Rate

A quarterly NewsGuard audit assessed the 11 leading generative AI tools and their propensity to repeat false claims on controversial topics in the news. In tests, major chatbots, including ChatGPT, Gemini, Copilot, Claude, Grok, and others, repeated false claims in 28.79 percent of responses to prompts about controversial news topics. The failure rate rose to 55.45 percent when prompts were designed to imitate malicious actors attempting to manipulate the systems.


The audit also found that chatbots often relied on unreliable online sources, including social media posts tied to foreign influence operations. In one case linked to the Russian operation Internet Research Agency and its offshoot, Storm-1516, several chatbots repeated a false claim that Armenian Prime Minister Nikol Pashinyan allowed Turkish companies to buy discounted gold from the Amulsar mine. Models such as ChatGPT, Grok, and Mistral even cited posts on X that spread the disinformation, inadvertently increasing their visibility. In another case, an AI system incorrectly confirmed a deepfake video of women protesters attacking a paramilitary vehicle in Iran. Some models performed better than others, most notably Claude, which correctly debunked all tested claims. NewsGuard sent emails to the companies that own the tools, but did not receive responses.


Source: NewsGuard, I. Blachez. January 2026 — AI False Claim Monitor. [online] Published 25 February 2026. Available at: https://www.newsguardtech.com/wp-content/uploads/2026/02/January-2026-Quarterly-AI-Audit.pdf


[General Reports]


AI-Driven Disinformation Amplifies Fear During Mexican Cartel Violence

Following a large-scale Mexican military operation targeting Jalisco New Generation Cartel leader Nemesio Oseguera Cervantes (“El Mencho”), Politico reported that widespread violence across approximately 20 states was accompanied by a parallel surge of online disinformation. As roadblocks, explosions, and shootings unfolded, social media platforms were flooded with AI-generated and manipulated videos and images portraying exaggerated or fabricated scenes of national collapse, including false claims that Guadalajara’s international airport had been seized and set ablaze. According to data presented by Mexican authorities citing Tecnológico de Monterrey, between 200 and 500 problematic posts circulated in the immediate aftermath, with 35% to 40% lacking context, at least 25% deemed misleading, and nearly 25% identified as AI-manipulated or fabricated content. Several posts garnered more than 100,000 views, amplifying panic amid acute public uncertainty.


The disinformation narratives included claims that a U.S. agent had strangled Oseguera Cervantes, that President Claudia Sheinbaum was hiding offshore, and that the killing was linked to unrelated geopolitical events. Although the university report did not attribute responsibility, organized crime experts suggested that actors linked to the Jalisco cartel may have contributed, noting the group’s prior investment in online influence and its growing technological sophistication. AI-generated footage, reportedly resembling drone imagery, was described as unusually polished for unfolding crisis conditions, enhancing its credibility and emotional impact. Authorities and the U.S. Embassy attempted to counter false claims, but residents reported difficulty distinguishing authentic reporting from fabricated content. The campaign significantly intensified public fear, reinforcing perceptions of chaos and demonstrating how digitally amplified disinformation can compound real-world violence during security crises.


Source: Politico, Associated Press. Online disinformation fueled panic after killing of Mexican drug lord. [online] Published 25 February 2026. Available at: https://www.politico.com/news/2026/02/25/online-disinformation-fueled-panic-after-killing-of-mexican-drug-lord-00799837


False Hostage Narrative Spread Widely After Killing of Cartel Leader

NewsGuard debunked a widely shared disinformation claim falsely alleging that a Mexican drug cartel had taken U.S. tourists hostage in Guadalajara, Mexico, following the death of cartel leader Nemesio Rubén Oseguera Cervantes. The rumor spread rapidly across social media, gaining nearly 27 million views. Influencers and websites claimed that members of the Jalisco New Generation Cartel had kidnapped an unspecified number of American tourists from airports, hotels, and highways.


The narrative was amplified by prominent figures, including Laura Loomer, whose posts on X reused real footage of chaotic scenes at Guadalajara airport but misrepresented them as evidence of cartel kidnappings. However, authorities found no proof that any tourists were taken hostage. The Mexican Embassy in the United States publicly denied the claims, stating that there had been no attacks on civilians or hostage situations at the airport.


Source: NewsGuard, L. Marchl, Debunk: No Laura Loomer, Claims that the Mexican Cartel Jalisco Took U.S. Tourists Hostage Are Baseless. [online] Published 26 February 2026. Available at: https://www.newsguardrealitycheck.com/p/us-tourists-held-hostage-in-mexico


The Islamic State’s Evolving Online Ecosystem in 2026

A report by the Institute for Strategic Dialogue (ISD) found that the digital ecosystem supporting the Islamic State continued to adapt and expand across multiple platforms in 2026 despite ongoing moderation efforts. Supporters increasingly used AI tools to convert written propaganda into audio, produce news-style content, and recreate deceased ideologues in videos. Researchers monitored around 500 accounts, channels, and websites across platforms such as Facebook, TikTok, Instagram, WhatsApp, Telegram, Element, and SimpleX Chat. These interconnected networks enabled supporters to quickly recreate accounts after removals and to coordinate activities across platforms.


The report highlighted how IS supporters exploited platform features to spread propaganda, recruit followers, and, in some cases, monetize content. Influencers and unofficial support groups often acted as hubs that redirected audiences between mainstream social media, encrypted messaging services, and niche platforms. On platforms such as Facebook and Instagram, some accounts used "digital creator" features to boost reach, while on TikTok, propaganda was sometimes repackaged in gamified formats inspired by games like Roblox and Minecraft to attract younger audiences.


Overall, the IS online ecosystem remained resilient due to its decentralized and cross-platform structure. The report recommended stronger cross-platform cooperation, more specialized moderation in different languages, and improved detection tools to address evolving tactics used by extremist networks online.


Source: ISD, M. Ayad, Coordinating Through Chaos: The State of the Islamic State Online in 2026. [online] Published February 2026. Available at: https://www.isdglobal.org/wp-content/uploads/2026/02/Coordinating-through-Chaos-FINAL.pdf


Online Amplification and Disinformation Escalated Leicester Unrest

Unrest in Leicester and its rapid transformation into a highly polarized national and transnational controversy through coordinated online amplification and misleading narratives were examined in a report by the Independent Commission of Inquiry into the 2022 Leicester violence. Localized incidents were reframed on social media as evidence of deep-seated ethnic and religious conflict, with posts frequently relying on decontextualized footage, recycled videos, and unverified claims. This content circulated widely before accurate information could stabilize public understanding, contributing to confusion and heightened tensions.


The study identifies the role of organized digital networks, including politically aligned influencers and diaspora-linked accounts, in accelerating and legitimizing inflammatory narratives. Through synchronized posting patterns, emotionally charged language, and identity-based framing, these actors amplified selective content to portray events as part of a broader civilizational struggle. Algorithm-driven engagement further boosted sensational material, enabling misleading or exaggerated claims to reach large audiences quickly and repeatedly.


The report underscores how digital manipulation shaped public perception as much as, if not more than, on-the-ground developments. By exploiting platform dynamics and social divisions, coordinated actors escalated a local dispute into a broader flashpoint, undermining trust in authorities and deepening polarization. The findings illustrate how hostile or opportunistic influence tactics can weaponize social media ecosystems to magnify unrest and distort democratic discourse.


Source: School of Oriental and African Studies (SOAS) - University of London, Independent Commission of Inquiry into the 2022 Leicester violence, C. Bhatt & V. Puri & S. F. Kamal & S. Patel & S. Sinha & S. Abraham-Gonsalves & S. Grover & L. Magarrell & J. E. Méndez & N. Sultan. Better Together: Understanding the 2022 Violence in Leicester. [online] Published 1 January 2026. Available at: https://static1.squarespace.com/static/6505d742fdd85426286c1396/t/699c3be7e6c285093f8f2541/1771846631816/Report+Leicester.pdf


[CRC Glossary]


The modern information environment is projected to grow steadily more complex and sophisticated. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.


To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.


As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.









