Cyber-based Hostile Influence Campaigns, 23-29 March 2026
[Introduction]
Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week, we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This report summarizes what we regard as the main events.

[Contents]
- Introduction
- Report Highlights
- State Actors
  - Russia: Russia's Promotion of Separatism Abroad; Russian Hybrid Tactics in Europe 2022 to 2025; Disinformation Campaign Targeting Kaja Kallas
  - China: Disinformation Denying Uyghur Forced Labor in China
  - Iran: Pro-Iranian Nasir Security Targets the Energy Sector in the Middle East; Disinformation Trends in the 2026 Iran War; Disinformation and Hybrid Coercion in Iran's War Strategy
- General Reports: AI-Generated YouTube Channels Spread Fake News Reports; Pierre Poilievre's Misinformation on Joe Rogan's Podcast; Disinformation and Climate Information Integrity in Australia; The Return of Claims that Trump's Assassination Attempt Was Staged; AI-Generated Audio of Clinton Criticizing the Iran War; Orbán Doubled Down on Anti-Ukrainian Campaign to Secure Reelection; Disinformation After the Bondi Attack; Orbán Spokesperson Misrepresented 2021 Lawsuit to Smear Journalist Catherine Belton
- Appendix - Frameworks to Counter Disinformation: Disrupting the Foundations of FIMI; Trump Administration Accused of Turning Voice of America into a Partisan Propaganda Outlet
- CRC Glossary

[Report Highlights]
A Center for Strategic and International Studies (CSIS) article outlined how Iran is conducting a multidomain "punishment campaign" that combines military, economic, cyber, and informational tactics to pressure the United States and Israel indirectly.
As revealed by DisinfoWatch, a post from the Chinese Embassy in Canada exemplifies a coordinated disinformation effort aimed at denying well-documented human rights abuses in Xinjiang.

Australia's Senate inquiry highlighted growing concern about the widespread impact of misinformation and disinformation on climate change and energy debates.

According to an article by The Jamestown Foundation, Hungary's ruling Fidesz party has intensified its anti-Ukrainian rhetoric ahead of the country's most competitive election in 16 years, using recent tensions with Kyiv to strengthen its campaign.

A federal lawsuit filed by VOA journalists and press freedom groups alleges that the Trump administration, through USAGM leadership, systematically converted Voice of America into a state propaganda outlet by mandating pro-White House messaging, suppressing independent coverage, and manipulating Persian-language broadcasts into Iran, undermining VOA's core mandate to serve as a credible alternative to authoritarian media.

[State Actors]

Russia

Russia's Promotion of Separatism Abroad
A report by EUvsDisinfo argued that the Kremlin promotes separatist movements abroad while harshly suppressing similar ideas at home. Russia has repeatedly supported or amplified secessionist narratives in Western countries such as the United States, Canada, and Spain, often through coordinated disinformation campaigns and online networks. Examples include backing "Texit" rhetoric, reviving Alberta independence claims, and spreading misinformation around Catalonia and Brexit. In places like Moldova and Georgia, Moscow maintains influence through breakaway territories such as Transnistria and South Ossetia, using them as tools of pressure. In Estonia, this has taken the form of a disinformation campaign targeting the border town of Narva, where a majority of the population is ethnically Russian.
However, inside Russia, any public support for separatism is criminalized, with activists facing imprisonment and organizations labeled as extremist or terrorist. Russia's talk of sovereignty and territorial integrity is pragmatic rather than principled: it is used to justify repression at home and to destabilize countries abroad.

Source: EUvsDisinfo. Secession for you, prison in Russia: Moscow's selective love for self-determination. [online] Published 24 March 2026. Available at: https://euvsdisinfo.eu/secession-for-you-prison-in-russia-moscows-selective-love-for-self-determination/

Russian Hybrid Tactics in Europe 2022 to 2025
A report by The Soufan Center analyzed 255 Russian hybrid operations across six European countries between 2022 and 2025, highlighting a strategy built on cost asymmetry. Russia conducts low-cost, deniable, and scalable actions, ranging from disinformation and espionage to sabotage and infrastructure probing, while forcing targeted countries to bear the financial, political, and security burden of responding. Russia adapts its tactics to local contexts. In Western Europe, it has focused on intelligence gathering, infrastructure surveillance, and symbolic acts designed to inflame tensions, while in countries like Moldova and Georgia, it has combined long-term influence operations with political interference to shape strategic outcomes. Estonia, despite facing continuous pressure, has shown resilience due to strong public awareness and institutional preparedness. A key shift over time has been the increased use of intermediaries, often low-level recruits, to maintain deniability, alongside a move toward more direct and physical actions in 2025. The report concluded that the impact of these operations depends less on Russia's capabilities and more on the strength of targeted societies. Countries with resilient institutions, transparent communication, and strong civil society are better able to withstand interference.
To counter this threat, Europe must both raise the cost of Russian actions through coordinated responses and reduce vulnerabilities by strengthening governance and social cohesion, while carefully balancing public communication to avoid amplifying the intended disruptive effects.

Source: The Soufan Center, C. Broekaert, N. Lyubarsky, C. Clarke & J. Shelzi. Priming, Destabilizing, Coercing: Russian Hybrid Tactics in Europe 2022-2025. [online] Published 2026. Available at: https://thesoufancenter.org/wp-content/uploads/2026/03/TSC-Report-Priming-Destabilizing-Coercing-Russian-Hybrid-Tactics-in-Europe-2022-2025.pdf

Disinformation Campaign Targeting Kaja Kallas
According to a report by DisinfoWatch, Kremlin-aligned actors are spreading a coordinated disinformation campaign that distorts history and attacks Estonian Prime Minister Kaja Kallas. Her warning that Russia's territorial demands follow a familiar pattern was deliberately twisted into claims that regions like Donbas are historically Russian and that Estonia's sovereignty is questionable. These narratives, amplified by state-linked media and proxy accounts, rely on insults, selective history, and imperial mythology rather than credible evidence or legal standing. The claims are demonstrably false: Estonia's independence was legally restored in 1991 based on continuity from its pre-Soviet statehood, and the Soviet annexation was widely recognized as illegal. Similarly, Russia's claims over Donetsk and Luhansk have been rejected by international bodies, including the UN and the European Council, which reaffirm Ukraine's territorial integrity. Outlets such as RT have also been identified and sanctioned for their role in spreading disinformation and conducting influence operations.

Source: DisinfoWatch. Kremlin-linked X cluster targets Estonia's sovereignty and Kaja Kallas. [online] Published 27 March 2026.
Available at: https://disinfowatch.org/disinfo/kremlin-linked-x-cluster-targets-estonias-sovereignty-and-kaja-kallas/

China

Disinformation Denying Uyghur Forced Labor in China
As revealed by DisinfoWatch, a post from the Chinese Embassy in Canada exemplifies a coordinated disinformation effort aimed at denying well-documented human rights abuses in Xinjiang. Triggered by Canadian MP Michael Ma's concerns about forced labor in Chinese EV production, the message dismisses such allegations as "blatant lies" spread by "anti-China" actors. This framing follows a familiar authoritarian pattern: discredit critics, label evidence as fabricated, and shift attention toward protecting trade relations. However, substantial evidence contradicts these claims. The Canadian government has acknowledged credible reports of forced labor and imposed import restrictions tied to Xinjiang. International bodies, including the UN, have also identified persistent patterns of abuse that may amount to crimes against humanity. Independent investigations have further linked Xinjiang-produced materials to global automotive supply chains, reinforcing concerns about forced labor in EV production. This narrative serves the strategic purpose of deflecting scrutiny, protecting China's economic interests, and reframing human rights concerns as politically motivated interference. It is part of a long-standing denial campaign that has consistently portrayed allegations of abuses in Xinjiang as fabricated.

Source: DisinfoWatch. Chinese Embassy in Canada Exploits Michael Ma comments to deny forced-labour. [online] Published 28 March 2026.
Available at: https://disinfowatch.org/disinfo/chinese-embassy-in-canada-exploits-michael-ma-comments-to-deny-forced-labour/

Iran

Pro-Iranian Nasir Security Targets the Energy Sector in the Middle East
A report by Resecurity highlighted the activities of Nasir Security, a relatively new and low-profile cyber group believed to be linked to Iran or its proxies. The group primarily targets the energy sector in the Middle East, focusing on supply chain vendors, including contractors in engineering, construction, and safety. Rather than targeting major energy companies directly, the actors exploit weaker third-party systems using techniques such as spear phishing, business email compromise, and cloud data exfiltration. The stolen data is often authentic but originates from vendors, obscuring the true source of the breach and creating confusion about the attack's scale. Nasir Security combines cyber operations with disinformation tactics, exaggerating the volume and impact of its alleged breaches. The group has claimed large-scale data theft from companies in the UAE, Oman, Iraq, and Saudi Arabia, but investigations suggested these claims are overstated and based on limited third-party compromises. Their activity appeared more ideological than financially motivated, aiming to project strength, fuel geopolitical narratives, and create uncertainty amid the ongoing conflict involving Iran. According to the report, numerous independent assessments confirmed that none of the Iran-linked, pro-Iranian, or state-sponsored groups are having any meaningful impact on the Iran conflict. At the same time, Resecurity highlighted the supply chain cybersecurity risks that Iran could exploit and recommended that enterprises stay vigilant and accelerate third-party cybersecurity monitoring and vendor risk assessments.

Source: Resecurity. Pro-Iranian Nasir Security is Targeting The Energy Sector in the Middle East. [online] Published 23 March 2026.
Available at: https://www.resecurity.com/blog/article/pro-iranian-nasir-security-is-targeting-the-energy-sector-in-the-middle-east

Disinformation Trends in the 2026 Iran War
According to NewsGuard's Reality Check, within the first 25 days of the Iran war, at least 53 false claims circulated online, attracting hundreds of millions of views and averaging roughly two disinformation posts per day. The disinformation shows three key patterns: a strong bias toward pro-Iran messaging, a shift from reused or misrepresented images to fully AI-generated visuals, and a growing tactic of dismissing legitimate journalism as fake or AI-generated. This last trend is particularly concerning, as it attempts to erode trust in credible media by falsely labeling accurate reporting as disinformation. A different NewsGuard report added that the vast majority (about 92%) of the claims promoted pro-Iran narratives, often exaggerating military successes or inventing major events, such as the destruction of Israeli strategic sites or the deaths of senior leaders like Benjamin Netanyahu. These claims were entirely baseless but aimed to shape public perception and morale. Although some false claims were amplified by Iranian-linked outlets, most originated from decentralized pro-Iran social media networks worldwide. The goal of this disinformation is not to inform but to influence emotions and shape perceptions before facts can be verified. For example, as reported in another NewsGuard Reality Check, a widely shared video claiming to show Iranian missiles striking a U.S. Navy ship in the Strait of Hormuz on 25 March 2026 has been debunked as false. The footage, circulated by pro-Iran social media accounts and viewed millions of times, actually originates from a video game, not a real military event.
Analysis of the video revealed several clear indicators of its artificial origin, including a visible mouse cursor, unrealistic visual effects, and the depiction of a ship class no longer in service.

Sources:
NewsGuard, L. Marchl, S. Rubinson, I. Chomnalez, J. Schofield, M. Wolf, I. Blachez, M. Calamaio, C. Lin & W. Rizvi. 50 False Claims and Counting: First 25 Days, Pro-Iran Disinformation Dominates the Online Battlefield. [online] Published 27 March 2026. Available at: https://www.newsguardtech.com/special-reports/2026-iran-war/
NewsGuard, L. Marchl. 50 False Claims and Counting: First 25 Days, Pro-Iran Disinformation Dominates the Online Battlefield. [online] Published 24 March 2026. Available at: https://www.newsguardtech.com/special-reports/2026-iran-war/
NewsGuard, J. Schofield. Debunk: Supposed War Footage Is Actually from a Video Game. [online] Published 27 March 2026. Available at: https://www.newsguardrealitycheck.com/p/pro-iran-war-footage-no-its-a-video

Disinformation and Hybrid Coercion in Iran's War Strategy
A Center for Strategic and International Studies (CSIS) article outlined how Iran is conducting a multidomain "punishment campaign" that combines military, economic, cyber, and informational tactics to pressure the United States and Israel indirectly. Disinformation plays a central role in this approach. Alongside missile and cyber operations, Iran deploys computational propaganda and targeted influence campaigns to magnify the psychological impact of disruptions. By targeting interconnected systems, such as energy, finance, and infrastructure, Iran amplifies both the material and informational effects of its actions. Strategically, this campaign aims to weaken coalition unity and pressure governments through economic and psychological strain rather than battlefield victory.
Countering this strategy requires not only military and defensive measures, but also active efforts to detect, expose, and disrupt false narratives that support Iran's broader coercive campaign.

Source: CSIS, B. Jensen. Iran's Next Move: How to Counter Tehran's Multidomain Punishment Campaign. [online] Published 23 March 2026. Available at: https://www.csis.org/analysis/irans-next-move-how-counter-tehrans-multidomain-punishment-campaign

[General Reports]

AI-Generated YouTube Channels Spread Fake News Reports
The Digital Forensic Research Lab (DFRLab) reports that a network of more than two dozen YouTube channels uses AI-generated content to mimic legitimate news reporting while inserting fabricated geopolitical events. These channels, publishing in English and Russian, combine synthetic anchors, automated narration, AI-generated visuals, and coordinated posting patterns to produce large volumes of content at low cost. Collectively, they have amassed nearly 2 billion views and nearly 2 million subscribers. The operation relies on sensationalist titles, uniform branding, and repeated content across channels, with clear signs of coordination such as synchronized uploads and thematic shifts. A key tactic is blending factual reporting with false claims in the same style, making it difficult for viewers to distinguish real from fabricated events. For example, some Ukraine-related videos falsely reported attacks on logistical infrastructure in Mykolaiv and alleged strikes on military infrastructure in the Polish city of Rzeszów. Other videos suggested imminent diplomatic ruptures between Russia and Azerbaijan and dramatized the US capture of Venezuelan President Nicolás Maduro using AI-generated footage. The network also shows signs of centralized production, including duplicated videos, shared assets, and minimal human oversight, with some content still containing visible AI-generation artifacts.
While it is unclear whether the channels are directly monetized, their content is eligible for advertising and benefits from algorithmic amplification. The report raises concerns about violations of YouTube's misinformation policies and broader regulatory implications, particularly under the EU Digital Services Act, as undisclosed synthetic media at scale poses risks to information integrity and public discourse.

Source: Digital Forensic Research Lab (DFRLab), I. Adam & E. Buziashvili. AI-generated YouTube channels co-opt war coverage to farm nearly two billion views. [online] Published 23 March 2026. Available at: https://dfrlab.org/2026/03/23/ai-generated-youtube-channels-co-opt-war-coverage-to-farm-nearly-two-billion-views/

Pierre Poilievre's Misinformation on Joe Rogan's Podcast
A report by The Conversation examined Pierre Poilievre's appearance on the controversial Joe Rogan Experience podcast and argued that the Canadian opposition leader spread or failed to challenge several misleading claims. Rogan's podcast is one of the world's longest-running, averaging 11 million listeners per episode, and the interview, aimed at a large international audience, exposed millions of listeners to disputed or inaccurate statements. Citing no evidence, Poilievre told Rogan that Canada admits one million immigrants per year, a figure significantly higher than the one stated on the Canadian government website. Inflating immigration numbers is a known rhetorical tactic in far-right online spaces, where it functions to fuel anxieties about demographic change. He also downplayed the environmental and health effects of Alberta's oil sands. Moreover, although Canada is the world's largest exporter of canola oil, Poilievre failed to push back against Rogan's health misinformation about seed oils.
Poilievre also repeated unsupported claims about Canada's safer supply drug program and about the impact of the Liberal government's actions on inflation during and after the COVID-19 pandemic. The report concluded that Poilievre's spread of false claims is dangerous because it fosters divisiveness and distrust among Canadians, particularly on immigration and public health.

Source: The Conversation, J. Hodson, B. I. Wiens, N. Ruest & S. MacDonald. Fact check: Pierre Poilievre's misinformation on Joe Rogan's podcast disrespects Canadians. [online] Published 24 March 2026. Available at: https://theconversation.com/fact-check-pierre-poilievres-misinformation-on-joe-rogans-podcast-disrespects-canadians-278864

Disinformation and Climate Information Integrity in Australia
Australia's Senate inquiry highlighted growing concern about the widespread impact of misinformation and disinformation on climate change and energy debates. Surveys show that a large majority of Australians encounter false or misleading information online, particularly on climate-related issues. Examples included claims that wind turbines harm whales or that community batteries pose major safety risks, which have influenced local decisions and fueled public anxiety. Disinformation is often strategically produced and amplified by powerful actors, including corporations, governments, and political groups. These campaigns frequently aim to delay climate action by spreading doubt about scientific evidence and promoting misleading narratives. Tactics include "astroturfing" (fake grassroots campaigns), the use of bots and trolls, and, increasingly, AI-generated content. The inquiry also highlighted how misinformation affects social cohesion and democratic processes. Climate-related falsehoods have contributed to division within communities, harassment of advocates, and confusion about scientific realities.
The report concluded that disinformation is not just about false facts but about manipulating public discourse. It exploits existing beliefs, polarizes opinions, and weakens trust in institutions and science. Addressing this challenge requires stronger regulation of digital platforms, greater transparency, and coordinated efforts to expose and counter deliberate falsehoods while preserving open democratic debate.

Source: The Senate Select Committee on Information Integrity on Climate Change and Energy. The Integrity Gap: Restoring Trust in the Climate and Energy Debate. [online] Published March 2026. Available at: https://apo.org.au/sites/default/files/resource-files/2026-03/apo-nid333872.pdf

The Return of Claims that Trump's Assassination Attempt Was Staged
According to an article by NewsGuard's Reality Check, a Washington Post report that Russian intelligence once considered staging an assassination attempt on Hungary's prime minister, Viktor Orbán, to boost his chances in Hungary's 12 April 2026 parliamentary election has reignited claims that Trump staged his shooting in July 2024. Following the publication of the report, anti-Trump social media users began claiming that the assassination attempt on Donald Trump was also staged to generate political sympathy. These claims quickly gained traction online, drawing significant engagement. In fact, there is no credible evidence to support assertions that the assassination attempt on Trump, in which a bullet grazed his ear, was staged. The Washington Post report was based on intelligence documents. The plan, described as a potential "gamechanger", aimed to shift the campaign away from economic concerns toward emotional themes such as security and stability. Although the proposal was never carried out and has been dismissed by the Kremlin as disinformation, it highlighted the strategic importance Moscow places on maintaining Orbán, one of its closest allies within the EU and NATO, in power.
Beyond this proposal, the report pointed to broader Russian efforts to influence Hungary's political landscape, including disinformation campaigns, support for pro-government narratives, and attempts to discredit opposition figures.

Sources:
NewsGuard, S. Howard. Possibility of Russia Faking an Assassination in Hungary to Help Orbán Leads Trump Critics to Resurrect Claim His Shooting Was Staged. [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/the-return-of-crazy-claims-that-trump
The Washington Post, C. Belton. To tilt Hungarian election, Russians proposed staging assassination attempt. [online] Published 21 March 2026. Available at: https://www.washingtonpost.com/world/2026/03/21/hungary-election-interference-russia-orban/

AI-Generated Audio of Clinton Criticizing the Iran War
As reported by NewsGuard's Reality Check, a network of YouTube channels has been using AI-generated audio to impersonate former U.S. President Bill Clinton, falsely portraying him as criticizing Donald Trump's handling of the war in Iran. 144 such videos have accumulated more than 10 million views, often featuring realistic voice imitations paired with static images. While some videos include small disclosures, many viewers appear to believe the content is genuine. NewsGuard also found Clinton deepfakes discussing topics such as state elections in Florida and Texas and U.S.-Canada relations. Similar AI-generated audio commentary on the Iran war and other political topics has also targeted other former presidents, including Barack Obama and George W. Bush. The report suggested that financial incentives, rather than purely political motives, are driving this activity. The videos generate advertising revenue through YouTube's monetization system, benefiting from high engagement and low production costs. YouTube has since removed several of these channels for violating its policies.

Source: NewsGuard, S. Rubinson.
AI YouTube Channels Put Words in Bill Clinton's Mouth About the Iran War, Drawing Millions of Views. [online] Published 25 March 2026. Available at: https://www.newsguardrealitycheck.com/p/bill-clinton-on-youtube-bashes-trump

Orbán Doubled Down on Anti-Ukrainian Campaign to Secure Reelection
As reported by The Jamestown Foundation, Hungary's ruling Fidesz party has intensified its anti-Ukrainian rhetoric ahead of the country's most competitive election in 16 years, using recent tensions with Kyiv to strengthen its campaign. Disputes over the Druzhba oil pipeline, controversial statements by Ukrainian officials, and unverified allegations of threats and financial interference have been used by the government and pro-government media to portray Ukraine as a hostile actor. The strategy appears aimed at mobilizing voters by exploiting existing skepticism toward Ukraine and fears of involvement in the war. The main opposition Tisza party has consolidated its rise in popularity despite multiple failed efforts by Fidesz to counter the new challenger. These efforts include a since-debunked AI-generated document promoted as Tisza's "secret austerity program", personal attacks on party leader Péter Magyar over his private life, and labeling him a Ukrainian agent.

Source: The Jamestown Foundation, P. Fazekas. Orbán Doubles Down On Anti-Ukrainian Campaign To Secure Reelection. [online] Published 25 March 2026. Available at: https://jamestown.org/orban-doubles-down-on-anti-ukrainian-campaign-to-secure-reelection/

Disinformation After the Bondi Attack
As reported by ABC News, following the Bondi attack in Sydney, a real image of survivor Arsen Ostrovsky was rapidly weaponized in a wave of disinformation. His selfie, sent to his wife, went viral but was quickly reframed by online conspiracy communities as "evidence" that the attack was staged.
False claims emerged suggesting his injuries were fake and that the attack was orchestrated by Israeli actors. These narratives relied on familiar tactics such as questioning victim behavior, introducing baseless links to intelligence agencies, and labeling victims as "crisis actors". The disinformation spread quickly across platforms like Telegram, X, and Reddit, evolving from speculation to more sophisticated manipulation. Within hours, AI-generated images were created to "prove" the conspiracy, showing Ostrovsky with fake blood being applied. These fabricated visuals were widely shared internationally, even among users who recognized them as false. At the same time, authentic reporting and real evidence were dismissed as fake. Importantly, this wave of disinformation appears to have been driven less by coordinated state actors and more by decentralized networks and "conspiracy entrepreneurs" seeking attention and profit.

Source: ABC News, J. Robertson & M. Connaughton. This man went viral after surviving Bondi. Then the internet took a dark turn. [online] Published 27 March 2026. Available at: https://www.abc.net.au/news/2026-03-28/how-bondi-beach-survivor-became-face-of-conspiracy-theory/106499580

Orbán Spokesperson Misrepresented 2021 Lawsuit to Smear Journalist Catherine Belton
As highlighted in a report by DisinfoWatch, Hungary's international spokesman Zoltán Kovács is misrepresenting a 2021 lawsuit involving sanctioned Russian oligarch Roman Abramovich to discredit journalist Catherine Belton. The case, brought against Belton and her publisher over her landmark book "Putin's People," was widely seen as a politically motivated SLAPP suit aimed at intimidating her and undermining her reporting on Vladimir Putin's network. Kovács is now reviving it to challenge her recent Washington Post reporting on Hungary's upcoming election and alleged Russian links. In reality, the 2021 case did not disprove Belton's work.
The lawsuit was settled without damages, only minor amendments were made to the book, and its central findings remained intact. Abramovich's close ties to Putin were later reaffirmed in EU sanctions records. Meanwhile, Belton's more recent reporting on Hungary and Russia has prompted broader international scrutiny, including follow-up coverage by AP, which reported that the European Commission sought clarification from Hungary after the Washington Post allegations, and by Reuters, which reported longstanding regional suspicions.

Source: DisinfoWatch. Orban spokesperson Recycles Oligarch SLAPP to Smear Journalist Catherine Belton. [online] Published 29 March 2026. Available at: https://disinfowatch.org/disinfo/orban-spokesperson-recycles-oligarch-slapp-to-smear-journalist-catherine-belton/

[Appendix - Frameworks to Counter Disinformation]

Disrupting the Foundations of FIMI
An analysis by EUvsDisinfo argues that foreign information manipulation and interference (FIMI) should be understood as a structured "supply chain" of deception, in which influence operations rely on coordinated resources, infrastructure, and intermediaries. These campaigns require funding, personnel, and technology, with actors often outsourcing activities to contractors and commercial providers to ensure plausible deniability and complicate attribution. The report highlights that FIMI ecosystems are highly interconnected, involving not only state and non-state actors but also overlaps with organized crime networks that provide technical infrastructure, global reach, and operational cover, such as hosting fake news websites or managing bot networks. To counter these threats, the analysis emphasizes the need to disrupt the underlying structures that enable FIMI rather than focusing solely on individual pieces of content.
This includes targeting financial flows, dismantling enabling infrastructure, and increasing the operational costs for perpetrators, reflecting a broader shift toward systemic and preventive approaches in countering information manipulation.

Source: EUvsDisinfo. Disrupting the foundations of FIMI. [online] Published 27 March 2026. Available at: https://euvsdisinfo.eu/disrupting-the-foundations-of-fimi/

Trump Administration Accused of Turning Voice of America into a Partisan Propaganda Outlet
As reported by The Hill, a coalition of current and former Voice of America (VOA) journalists, alongside the press freedom organizations PEN America and Reporters Without Borders, filed a federal lawsuit in the U.S. District Court for the District of Columbia against the Trump administration, the U.S. Agency for Global Media (USAGM), its acting CEO Michael Rigas, and former USAGM director Kari Lake. The plaintiffs allege that USAGM leadership sought to transform VOA's newsroom into a partisan instrument of the executive branch, compelling journalists to reproduce White House talking points nearly verbatim and to disseminate imagery of President Trump in a manner characteristic of authoritarian personality cults. The complaint frames censorship and propaganda as complementary tools of the same strategic objective, arguing that these directives violate both the federal statutes governing VOA's editorial independence and constitutional protections, and that they fundamentally undermine U.S. credibility among the foreign audiences VOA is mandated to serve. Among the concrete tactics alleged are the suppression of politically inconvenient coverage, the replacement of independent editorial judgment with state-directed messaging, and the cancellation of wire service agreements with the Associated Press and Reuters in favor of a proposed arrangement with the right-wing One America News Network.
The most operationally significant allegations concern VOA's Persian-language service, which broadcasts into Iran during the ongoing U.S.-Israeli military campaign. According to the lawsuit, transmissions to Iranian audiences have systematically omitted casualty figures from U.S. airstrikes, excluded perspectives from international leaders outside the administration, and minimized coverage of a strike on an elementary school, with a Lake-appointed official requiring pre-approval for all guest appearances across the Persian, Kurdish, and Afghan broadcast services. The plaintiffs further allege that Lake and Rigas suppressed interviews, video footage, and reporting on anti-government protests within Iran, and banned coverage critical of certain factions opposed to the Iranian regime from the Persian Service entirely. The lawsuit frames these interventions as particularly damaging given VOA's foundational mandate: to serve as an independent information source for audiences living under authoritarian media environments. By subordinating editorial independence to political messaging, the plaintiffs argue, VOA risks becoming indistinguishable from the state-controlled outlets its target audiences already contend with domestically, effectively neutralizing one of the United States' principal strategic communications assets. Sources: The Hill, A. Fields, VOA journalists sue, accusing US government of forcing censorship, propaganda. [online] Published 23 March 2026. Available at: https://thehill.com/homenews/media/5796635-voice-of-america-propaganda-lawsuit/ United States District Court for the District of Columbia, Complaint (Case No. 25 Civ.). [online] Published 2026. Available at: https://www.courthousenews.com/wp-content/uploads/2026/03/voa-journalists-sue-lake-usagm-over-censorship-propaganda.pdf Top Of Page [CRC Glossary] The modern Information Environment is projected to continue escalating in complexity and sophistication.
However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page
- Cyber-based hostile influence campaigns 16th - 22nd March 2026
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Facebook Meta H1 2026 Adversarial Threat Report TikTok AI-Driven Disinformation TikTok Campaign Targets Hungarian Elections [State Actors] Russia EU Sanctions Targeted Disinformation and Propaganda Coordinated Russian Disinformation Targeted Armenian Elections Disinformation Network Amplified Through Romanian Platforms The War in Ukraine Russia Targeted Ukraine’s Infrastructure While Spreading Disinformation Kremlin Amplified Disinformation Through Fabricated Political Authority Iran Iran Overstated Its Global Oil Disruption Video Showing Netanyahu Alive and Well Is Not an AI Fake Disinformation and Hacktivism in the Iranian Conflict Justice Department Disrupted Iranian Cyber-Enabled Psychological Operations Iran Cyber Operations Prioritize Influence Over Impact [General Reports] 4th EEAS Report on Foreign Information Manipulation and Interference Threats Developments in the FIMI Galaxy PRC-Aligned Networks Exploit Philippine Divides to Shape Narratives [Appendix - Frameworks to Counter Disinformation] AI-Driven Disinformation Escalates Global Influence Warfare AI Model Boosts Urdu Misinformation Detection Capabilities [CRC Glossary] [ Report Highlights] Meta H1 2026 Adversarial Threat Report describes a highly adaptive disinformation ecosystem in which state-linked and criminal actors use coordinated fake identities, AI-generated content, and cross-platform amplification to manipulate audiences, build false credibility, and evade 
detection while integrating influence operations with broader fraud and exploitation tactics. NewsGuard identified a coordinated network of AI-generated TikTok content and cross-platform disinformation campaigns that sought to manipulate Hungary’s 2026 elections by discrediting the opposition, amplifying fear-based narratives, and covertly promoting pro-Orbán messaging. According to an article by EUvsDisinfo, Russia’s sustained attacks on Ukraine’s energy infrastructure during the winter of 2026 were accompanied by a parallel FIMI campaign. Analysis from the Center for Strategic and International Studies (CSIS) characterizes Iran’s cyber operations as centered on disinformation, influence, and psychological impact, with the country relying on proxies, hacktivist networks, and narrative manipulation to project power while delivering only limited direct strategic effects in conflict. A report by Resecurity highlighted how the Iran war has evolved into a multi-domain conflict where psychological warfare, disinformation, and hacktivism play a central role alongside military operations. The 4th EEAS Report on Foreign Information Manipulation and Interference Threats provided a comprehensive assessment of FIMI activities worldwide, based on cases documented and investigated by the EEAS throughout 2025. [Social Media Platforms] Facebook Meta H1 2026 Adversarial Threat Report The Meta report highlights a complex and rapidly evolving disinformation landscape driven by coordinated networks that exploit false identities, deceptive narratives, and cross-platform amplification. Central to this ecosystem is Coordinated Inauthentic Behavior (CIB), where actors, often linked to state or state-aligned entities in countries such as Iran, Russia, China, and Pakistan, deploy fabricated personas and covert networks to mislead audiences and manipulate public discourse.
These operations rely on structured tactics, including the creation of credible “creator” personas supported by networks of “amplifier” accounts, which artificially inflate engagement and create the appearance of grassroots legitimacy. Actors frequently disguise their origins through proxy infrastructure and identity obfuscation while targeting specific linguistic and regional audiences with tailored messaging. A defining feature of these campaigns is the increasing integration of artificial intelligence to enhance both scale and plausibility. Adversaries use AI-generated profile images, synthetic media, and large language models to produce coherent, multilingual, and culturally nuanced content, significantly lowering operational costs while improving effectiveness. These tools enable the rapid development of persuasive personas, automated engagement strategies, and consistent narrative framing across platforms. Tactics such as co-opting authentic content, impersonating journalists or activists, and constructing fictitious media brands further blur the line between legitimate discourse and manipulation. The use of AI also extends to evasion techniques, allowing actors to bypass detection systems and sustain operations despite enforcement efforts. Disinformation activities are closely intertwined with broader fraud and scam operations, reflecting a convergence of influence and financial exploitation tactics. Criminal syndicates and influence actors alike employ social engineering methods, such as the impersonation of trusted professionals, the exploitation of emotionally vulnerable populations, and the migration to off-platform communication channels, to build trust and evade oversight. These campaigns demonstrate high levels of organization and adaptability, with actors continuously refining their tactics in response to enforcement pressure. 
Across both domains, common patterns include identity misrepresentation, coordinated amplification, and the iterative reconstitution of networks, underscoring a persistent and professionalized threat environment in which disinformation serves as both a strategic objective and an operational enabler. Source: Meta, Adversarial Threat Report First Half 2026. [online] Published 2026. Available at: https://transparency.meta.com/sr/first-half-2026-Adversarial-threat-report/ Top Of Page TikTok AI-Driven Disinformation TikTok Campaign Targets Hungarian Elections A coordinated influence operation, identified by NewsGuard, leveraged 34 anonymous TikTok accounts to disseminate AI-generated content aimed at discrediting opposition candidate Péter Magyar and bolstering Prime Minister Viktor Orbán ahead of Hungary’s April 2026 elections. The campaign employed a range of tactics, including fabricated news reports featuring AI-generated anchors and non-existent experts, manipulated real footage to portray Magyar as aggressive, and emotionally charged narratives linking him to Ukraine, economic instability, and cultural threats. Additional amplification strategies included AI-generated celebrity endorsements, fictional scenarios, and viral-style content, collectively generating millions of views. The accounts exhibited clear signs of coordination, including synchronized creation dates, AI-generated profile images, and consistent messaging themes. The TikTok campaign operated alongside broader cross-platform disinformation efforts, including activity attributed to the Russian-linked “Matryoshka” network on X and Telegram, which spread false claims targeting Ukraine to indirectly support Orbán. These narratives included fabricated cyberattacks, biological threats, and symbolic provocations, all intended to heighten fear and polarization.
While attribution remains inconclusive, the timing, scale, and alignment with geopolitical interests suggest a structured and strategic intervention in Hungary’s electoral process. TikTok confirmed that the network was a covert influence operation and removed the accounts, highlighting ongoing platform-level responses amid broader concerns about foreign interference in democratic processes. Source: NewsGuard, A. Lee & M. Roache, Influence Campaign on TikTok Uses AI Videos to Boost Hungary’s Orbán Ahead of Crucial Elections. [online] Published 20 March 2026. Available at: https://www.newsguardtech.com/special-reports/russian-influence-campaign-uses-ai-tiktok-videos-to-boost-hungarys-viktor-orban/ Top Of Page [State Actors] Russia EU Sanctions Targeted Disinformation and Propaganda On the 16th of March 2026, the EU Council announced it had imposed sanctions on four individuals involved in Russia’s hybrid activities, with a strong focus on Foreign Information Manipulation and Interference. Among them is Sergey Klyuchenkov, a propagandist who has actively spread false narratives to justify Russia’s war against Ukraine, including calls for violence and misleading claims about territorial expansion. Similarly, news anchor Ernest Mackevičius has used state television to promote distorted accounts of the war, Ukraine, and its military. The sanctions also targeted Graham Phillips and Adrien Bocquet, who were both accused of amplifying Kremlin propaganda. Phillips has produced misleading content from occupied Ukrainian territories, while Bocquet has acted as a key voice spreading disinformation across Europe through media appearances and social platforms. Source: European External Action Service, Russian hybrid threats: four individuals added to EU sanctions list for information manipulation activities. [online] Published 2026.
Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-four-individuals-added-eu-sanctions-list-information-manipulation-activities_en Top Of Page Coordinated Russian Disinformation Targeted Armenian Elections CivilNet revealed that a fake video circulated on X in March 2026 falsely claimed that Armenian political strategists had secretly traveled to France to learn vote manipulation techniques under a "secret agreement" between Emmanuel Macron and Nikol Pashinyan. The video attempted to appear credible by misusing the NewsGuard logo, despite no such report existing. The video’s spread was coordinated via newly created or previously inactive accounts that were suddenly activated, suggesting manipulation by bots or hacked profiles. Similar campaigns have used fake logos from major outlets such as CNN and Reuters, along with fabricated newspaper front pages, to promote false claims about political figures and events. These efforts are linked to known disinformation strategies, such as the Russia-associated "Matryoshka" and "Doppelgänger" operations, which rely on impersonation and mass dissemination to amplify misleading content. Source: CivilNet, L. Dubois-Harounyan, Fake Videos and Fabricated Headlines: A Coordinated Disinformation Campaign Targeting Armenia and France. [online] Published 18 March 2026. Available at: https://www.civilnet.am/en/news/1008717/fake-videos-and-fabricated-headlines-a-coordinated-disinformation-campaign-targeting-armenia-and-france/ Top Of Page Disinformation Network Amplified Through Romanian Platforms A DFRLab report found that News Time Romania (NTR) operated as a disinformation amplifier by translating and redistributing content from Russian state-aligned and sanctioned sources on TikTok and Telegram. Rather than producing original journalism, it repackaged foreign propaganda into Romanian and presented it as independent media.
Analysis showed that over half of its TikTok content originated from such sources, indicating a systematic effort to inject disinformation into the local information space. Key contributors to this content included sanctioned outlets like SouthFront and pro-Kremlin platforms such as Rybar, both known for spreading misleading narratives about military and geopolitical events. NTR also indirectly promoted the ideology of Alexander Dugin through intermediary platforms, allowing sanctioned or extremist viewpoints to circulate without direct attribution. This layered approach helped bypass restrictions while maintaining a steady flow of pro-Russian and anti-Western messaging. Source: DFRLab, V. Olari, Romanian channel launders sanctioned Russian content for domestic audiences. [online] Published 20 March 2026. Available at: https://dfrlab.org/2026/03/20/romanian-channel-launders-sanctioned-russian-content-for-domestic-audiences/ Top Of Page The War in Ukraine Russia Targeted Ukraine’s Infrastructure While Spreading Disinformation According to an article by EUvsDisinfo, Russia’s sustained attacks on Ukraine’s energy infrastructure during the winter of 2026 were accompanied by a parallel FIMI campaign. As missile strikes and drone attacks left millions without electricity, heating, and water during extreme cold, pro-Kremlin outlets promoted narratives portraying Ukraine as divided and on the verge of collapse. These claims included false assertions that Ukrainians were ready to surrender territory or were protesting against their own government, as well as framing the strikes as justified retaliation rather than attacks primarily affecting civilians. Another narrative targeted European support for Ukraine, with messaging repeatedly claiming that Europe has grown tired of supporting Ukraine. Recent polls showed that, while countries closest to the conflict are beginning to experience war fatigue, overall support for Ukraine remained high.
Moreover, the attacks on Ukrainian infrastructure have spurred many grassroots fundraising campaigns across Europe. Russia has also leveraged energy insecurity as part of a broader influence strategy aimed at Europe, spreading claims that the continent cannot function without Russian energy supplies. At the same time, hybrid tactics, including cyberattacks on European energy systems, reinforced these narratives. On the day of the invasion of Ukraine, a hacker group linked to Moscow disrupted monitoring systems for thousands of wind turbines across Germany, and more recently, in December 2025, Poland reported attacks on its energy sector. Source: EUvsDisinfo, Targeting the grid, shaping the story: Russia’s dual assault on Ukraine. [online] Published 16 March 2026. Available at: https://euvsdisinfo.eu/targeting-the-grid-shaping-the-story-russias-dual-assault-on-ukraine/ Top Of Page Kremlin Amplified Disinformation Through Fabricated Political Authority As reported by StopFake, Kremlin-linked media outlets have been systematically spreading disinformation by promoting statements from Armando Mema, falsely claimed to be a "Finnish politician". Russian state media, including RIA Novosti and RT, repeatedly cite his social media posts to claim that Western countries are losing faith in Ukraine, calling for peace with Russia, or warning of inevitable Russian victory. This creates the illusion of growing international support for pro-Kremlin narratives. In reality, Mema is not a recognized political figure in Finland but a little-known individual with no electoral success and no significant public influence. By misrepresenting him as a credible politician, these outlets confer legitimacy on fringe opinions and amplify them as if they reflected mainstream Western views. Source: StopFake, The “Finnish politician”, an opponent of helping Ukraine, who is quoted daily by the Kremlin media, turned out to be a plumber without Finnish citizenship. [online] Published 18 March 2026.
Available at: https://www.stopfake.org/ru/finskij-politik-protivnik-pomoshhi-ukraine-kotorogo-ezhednevno-tsitiruyut-kremlevskie-smi-okazalsya-santehnikom-bez-grazhdanstva-finlyandii/ (referring to: The Insider, Antifake investigation. [online] Published 2026. Available at: https://theins.ru/antifake/290378) Top Of Page Iran Iran Overstated Its Global Oil Disruption Since the US-Israel attack, Iran has effectively blocked the Strait of Hormuz, a narrow waterway between Iran and Oman through which roughly 20 percent of the world’s oil supply passes. The International Energy Agency said the Middle East conflict was creating the largest supply disruption in the history of the global oil market. As reported by NewsGuard's Reality Check, Iranian state-aligned media amplified disinformation that exaggerated the global impact of the Strait of Hormuz disruption, including false claims that Australia is on the verge of a fuel crisis. Reports circulated by pro-Iran outlets alleged that Australia has only 18 days of fuel reserves remaining, roughly half of its actual supply, citing fabricated figures attributed to government sources. The claim was accompanied by a manipulated graphic, likely edited using a Google AI tool, which altered authentic data from an Australian broadcaster to support the misleading narrative. In reality, official statements from Australia confirmed significantly higher reserves, with around 36 days of petrol and substantial supplies of diesel and jet fuel. Alongside these narratives, Iranian outlets have warned of a looming "energy catastrophe" in the West, framing the crisis as evidence of Western economies' vulnerability despite more measured assessments from international energy authorities. Source: NewsGuard, W. Rizvi, Iranian-Linked Media Falsely Claim Australia Has “Only 18 Days” of Fuel Reserves, Overstating the Impact of the War. [online] Published 17 March 2026.
Available at: https://www.newsguardrealitycheck.com/p/iran-overstates-its-global-oil-disruption Top Of Page Video Showing Netanyahu Alive and Well Is Not an AI Fake NewsGuard’s Reality Check highlights that a video showing Israeli Prime Minister Benjamin Netanyahu alive at a Jerusalem café was falsely claimed to be AI-generated, forming part of a broader disinformation narrative that he was killed or seriously injured in an Iranian missile strike. The video, posted on the 15th of March 2026, was intended to refute the rumors about Netanyahu's death, but pro-Iran and anti-Israel accounts quickly dismissed it as a deepfake, citing supposed anomalies in Netanyahu’s face, the level of the coffee in the cup, the lighting and shadows, and a perceived distortion in Netanyahu’s coat pocket. Some posts shared screenshots from the AI-detection tool Hive as purported evidence, and Hive did indeed flag the video as AI-generated. However, multiple verifications confirmed the video is authentic: independent analysis, including location matching and supporting images from the café itself, as well as assessments by AI-detection experts, found no signs of manipulation. NewsGuard has identified other viral instances in which social media users claim that authentic war-related visuals are AI-generated. Pro-Iran accounts falsely claimed that a video of Netanyahu speaking at a press conference on the 12th of March 2026 was AI-generated, and anti-regime social media users falsely claimed that The New York Times published an AI-generated image of a large crowd celebrating the naming of the new Iranian supreme leader, Mojtaba Khamenei. Another report by CyberNews presented the claims about Netanyahu’s alleged death that were amplified by social media accounts such as "Iran News Today", which shared images of an injured man resembling him and presented them as confirmation.
Shortly after, Netanyahu posted a video released by Israel’s Government Press Office showing him alongside senior officials. While this footage should have dispelled the rumors, many users insisted it was part of a cover-up, claiming the video was recycled from a 2024 meeting or had been manipulated with AI. Some relied on AI tools like Google’s Gemini to "verify" these claims, despite contradictory evidence, such as newly published images confirming the meeting took place. Sources: NewsGuard, W. Rizvi, In this Case, Believe Your Eyes: Video Showing Netanyahu Alive and Well Is Not an AI Fake. [online] Published 16 March 2026. Available at: https://www.newsguardrealitycheck.com/p/netanyahus-proof-of-life-video-is Cybernews, N. Ancell, Israeli Prime Minister Benjamin Netanyahu is trying to prove he's not an AI deepfake. [online] Published 18 March 2026. Available at: https://cybernews.com/news/benjamin-netanyahu-artificial-intelligence-video/ X, Iran News Today (@MCK_Kaif), X post. [online] Published 18 March 2026. Available at: https://x.com/MCK_Kaif/status/2034172316155056236 X, Benjamin Netanyahu (@netanyahu), X post (official account). [online] Published 17 March 2026. Available at: https://x.com/netanyahu/status/2034021266861003258 Top Of Page Disinformation and Hacktivism in the Iranian Conflict A report by Resecurity highlighted how the Iran war has evolved into a multi-domain conflict where psychological warfare, disinformation, and hacktivism play a central role alongside military operations. Pro-Iran hacktivists deliberately spread fear and confusion among targeted populations. Iran has also conducted internal psychological campaigns to keep the population motivated for the war and aligned with the existing regime. To counter that, the U.S. and its allies conducted attacks against broadcasting infrastructure and state media.
At the same time, AI-generated images, recycled footage, and even video game clips have been widely circulated as real battlefield content, distorting public perception of events. In one misinformation episode, a pro-Iranian group circulated messaging claiming that Khamenei had not been killed. Resecurity highlighted misinformation about both cyber activity and kinetic actions. Pro-Iran hacktivist groups like the Cyber Islamic Resistance, Fatimion Cyber Team, and Cyber Fattah had been identified as key players. Their activities included reconnaissance, DDoS campaigns, website defacements, and data theft. Pro-Western hacktivist groups have also been active and targeted Iranian religious apps and news sites. Notably, such actors included Iranian opposition groups residing outside Iran who targeted the existing regime. While many of the attacks were low-impact or exaggerated, they generated significant "noise", forcing defenders to divert resources and amplifying uncertainty. Source: Resecurity, Iran War: Kinetic, Cyber, Electronic, and Psychological Warfare Convergence. [online] Published 17 March 2026. Available at: https://www.resecurity.com/blog/article/iran-war-kinetic-cyber-electronic-and-psychological-warfare-convergence Top Of Page Justice Department Disrupted Iranian Cyber-Enabled Psychological Operations As published by the Office of Public Affairs, a U.S. Justice Department operation seized four domains linked to Iran’s Ministry of Intelligence and Security (MOIS), which were used to conduct cyber-enabled psychological warfare campaigns. These websites functioned as part of a coordinated "faketivist" strategy, where Iranian actors claimed responsibility for cyberattacks, leaked or fabricated sensitive data, and amplified propaganda to intimidate adversaries. The platforms were used to target journalists, dissidents, and Israeli-linked individuals through doxxing, harassment, and threats, including calls for violence and financial bounties.
The seized domains were Justicehomeland[.]org, Handala-Hack[.]to, Karmabelow80[.]org, and Handala-Redwanted[.]to. For example, the MOIS used the Handala-Hack[.]to domain to claim credit for a destructive malware attack in March 2026 against a U.S.-based multinational medical technology firm. These activities were often aimed less at technical disruption and more at psychological impact, spreading fear, damaging reputations, and deterring opposition. The use of fake hacktivist personas allowed Iranian actors to obscure attribution while amplifying their messaging across online platforms. Source: U.S. Department of Justice, Justice Department Disrupts Iranian Cyber Enabled Psychological Operations. [online] Published 19 March 2026. Available at: https://www.justice.gov/opa/pr/justice-department-disrupts-iranian-cyber-enabled-psychological-operations Top Of Page Iran Cyber Operations Prioritize Influence Over Impact An article by the Center for Strategic and International Studies (CSIS) portrays Iran as a capable and adaptive cyber actor that integrates cyber operations into a broader hybrid warfare strategy, combining espionage, disruption, and influence campaigns to advance political objectives. Central to its approach is the use of cyber capabilities for symbolic impact and information dominance rather than decisive military effect. Iranian state-linked actors, including proxies such as hacktivists and cybercriminal networks, conduct operations that blend destructive attacks, reconnaissance, and influence efforts, often targeting critical infrastructure and foreign audiences. These actors employ tactics such as hack-and-leak campaigns, social media manipulation, and the mobilization of loosely affiliated cyber groups to amplify perceived reach, while maintaining plausible deniability and projecting power in the information domain.
Disinformation and influence operations emerge as a key battlefield, with Iran prioritizing narrative control, psychological pressure, and audience manipulation over high-impact cyber disruption. Tactics include exploiting social media to reach foreign populations, generating “noise” through coordinated low-level attacks, and leveraging AI and digital tools to shape perceptions and sow division. Domestically, information control is reinforced through internet shutdowns that isolate populations and suppress dissent. Internationally, Iran combines opportunistic cyber targeting with strategic messaging designed to exaggerate its capabilities, obscure attribution, and erode adversary cohesion. Overall, cyber operations primarily serve as enablers of information warfare, aimed at destabilizing opponents, influencing public opinion, and increasing the perceived costs of conflict, rather than delivering decisive operational outcomes. Source: Center for Strategic and International Studies, N. Shah, Demystifying Iranian Cyber Operations in the U.S.-Iran Conflict. [online] Published 20 March 2026. Available at: https://www.csis.org/analysis/demystifying-iranian-cyber-operations-us-iran-conflict Top Of Page [General Reports] 4th EEAS Report on Foreign Information Manipulation and Interference Threats The 4th European External Action Service (EEAS) Report on Foreign Information Manipulation and Interference Threats provided a comprehensive assessment of FIMI activities worldwide, based on cases documented and investigated by the EEAS throughout 2025. A total of 540 incidents were identified, involving around 10,500 digital channels, with Ukraine remaining the primary target, followed by several European countries. Russia and China were the main attributed actors, though much of the activity relied on covert, unattributed networks that complicated detection and accountability.
A key trend was the widespread use of artificial intelligence, with AI-generated text, audio, and video now routinely deployed to produce scalable, low-cost disinformation. FIMI was increasingly integrated into broader hybrid strategies, combining digital manipulation with geopolitical and physical actions, particularly around elections and regional conflicts. A central innovation of the report was the FIMI Deterrence Playbook, which marked a shift from reactive responses to proactive disruption. Instead of focusing only on identifying threats, the EU aimed to target the underlying systems that enabled them, such as intermediaries, service providers, and financial and technical infrastructures. By leveraging sanctions, regulations, and law enforcement in a coordinated manner, the EU sought to raise the costs and reduce the effectiveness of FIMI operations. The report also mapped a complex global network, or "Galaxy", of interconnected channels, revealing a highly coordinated core infrastructure with regional clusters targeting Eastern Europe, Africa, and the Middle East. To translate the assessment into practice, the report suggested a set of concrete operational follow-ups, such as strengthening data collection and information sharing, integrating a deterrence dimension into counter-FIMI work, and strengthening collective response and Member State engagement. Source: EEAS, 4th EEAS Report on Foreign Information Manipulation and Interference Threats. [online] Published March 2026. Top Of Page Developments in the FIMI Galaxy A report by EUvsDisinfo highlighted how Foreign Information Manipulation and Interference (FIMI) has evolved into a dense, global ecosystem of interconnected actors, infrastructures, and narratives. The "FIMI Galaxy" has become more complex and active, with a growing number of incidents and operational channels.
Threat actors increasingly reuse the same networks and tools across regions, reflecting a shift toward an organized, industry-like model of disinformation. The integration of AI has further accelerated this trend, enabling faster, cheaper, and more scalable production and distribution of manipulative content across multiple languages and platforms. Another defining development is the expansion of the front lines in the information space: almost every major geopolitical event now has a parallel dimension in the FIMI domain. The Galaxy mapped this ecosystem through three layers: threat actors (including Russia, China, and largely unattributed networks), manipulation infrastructures (known as Information Manipulation Sets), and targeted audiences. Notably, around 90 percent of the ecosystem operates through covert or proxy networks, making attribution difficult and allowing actors to coordinate influence campaigns while maintaining plausible deniability. These networks are highly adaptable, often shifting focus between regions, while using recurring tactics such as fake accounts, impersonation of media, and coordinated amplification. Source: EUvsDisinfo, What’s new in the FIMI Galaxy? [online] Published 20 March 2026. Available at: https://euvsdisinfo.eu/whats-new-in-the-fimi-galaxy/ Top Of Page PRC-Aligned Networks Exploit Philippine Divides to Shape Narratives An article by Doublethink Lab argues that the Philippines faces a sustained Foreign Information Manipulation and Interference (FIMI) campaign centered on the West Philippine Sea and domestic political divisions, with People’s Republic of China (PRC) state, state-controlled, and state-aligned actors, as well as proxy actors, playing a central role.
Across the cases examined, these actors used coordinated narrative seeding and amplification to portray the United States as a destabilizing force, recast the Philippines as the aggressor in maritime confrontations, promote Sara Duterte as a preferable political alternative, agitate unrest within the armed forces, and frame Philippine defense cooperation with Washington as a path to proxy war. The campaign relied on a mix of official diplomatic and media channels, local PRC-aligned think tanks and influencers, pro-Duterte networks, and inauthentic accounts that amplified forged documents, misleading claims, and manipulated framings. These actors frequently presented themselves as experts, journalists, or credible public figures to lend legitimacy to their messaging and exploit existing domestic polarization to advance geopolitical objectives. The tactics documented in the article reflect a deliberate effort to distort facts, degrade adversaries, cultivate support, and flood the information space across multiple platforms. These included forged memos alleging U.S. interference, early and coordinated framing of breaking maritime incidents, cross-posting across Chinese and Philippine digital ecosystems, AI-generated or suspicious account imagery, single-purpose amplification accounts, and the repeated use of local voices to normalize PRC-aligned narratives within Filipino discourse. The report concludes that these campaigns are not isolated disinformation incidents but rather components of a broader, persistent influence effort targeting the country’s cognitive security, democratic stability, and public understanding of national sovereignty issues. It emphasizes that the overlap between pro-PRC and pro-Duterte ecosystems, the growing role of locally based aligned actors, and the systematic use of coordinated inauthentic behavior make FIMI a national security and human rights concern that requires a coordinated, rights-based response. 
Source: Doublethink Lab, A Bayanihan Initiative: Countering FIMI in the Philippines. [online] Published 20 March 2026. Available at: https://medium.com/doublethinklab/a-bayanihan-initiative-countering-fimi-in-the-philippines-15d14c4e2356 Top Of Page [Appendix - Frameworks to Counter Disinformation] AI-Driven Disinformation Escalates Global Influence Warfare A keynote speech by the High Representative of the Union for Foreign Affairs and Security Policy (HRVP), Kaja Kallas, at the 2026 Conference on Countering Foreign Information Manipulation and Interference frames Foreign Information Manipulation and Interference (FIMI) as a central threat to democratic systems, driven primarily by state actors such as Russia and China, and increasingly enabled by artificial intelligence. These actors employ evolving tactics that move beyond traditional bot farms to large-scale, AI-driven influence operations, including deepfakes, coordinated bot amplification, and the use of paid influencers to manipulate public opinion. The example of Romania’s 2024 elections illustrates how Russia-backed networks used AI-generated content and influencer promotion to artificially boost a fringe candidate, demonstrating the ability of such campaigns to distort electoral processes. The speech also highlights the emergence of a "marketplace for manipulation", where influence operations are outsourced to private contractors, digital marketing actors, and influence-for-hire networks, often financed through opaque mechanisms such as cryptocurrency. The address emphasizes that FIMI operates as a form of cognitive warfare aimed at eroding trust, polarizing societies, and shaping political outcomes across multiple domains: within the EU, in neighboring regions, and globally. Tactics include coordinated narrative amplification, financial support to proxy actors, early and sustained interference campaigns, and the exploitation of democratic openness to spread manipulative content. 
Case studies such as Moldova demonstrate how foreign actors deploy large-scale funding, vote-buying, and information campaigns to influence elections, while targeting countries closer to the EU with intensified efforts. The speech concludes that disinformation campaigns are persistent, adaptive, and globally networked, requiring coordinated responses that combine regulation, financial disruption of influence networks, cross-border cooperation, and strategic communication to counter manipulation and protect information integrity. Source: European External Action Service, Keynote speech by HRVP Kaja Kallas at the 2026 Conference on Countering Foreign Information Manipulation and Interference: ‘From Insight to Impact’ . [online] Published 17 March 2026. Available at: https://www.eeas.europa.eu/eeas/keynote-speech-hrvp-kaja-kallas-2026-conference-countering-foreign-information-manipulation-and_en Top Of Page AI Model Boosts Urdu Misinformation Detection Capabilities An article by TechXplore describes the development of an advanced artificial intelligence system designed to detect misinformation in Urdu-language media, addressing a significant gap in the analysis of non-English disinformation. Developed using a dataset of over 14,000 Pakistani news articles spanning politically and socially sensitive topics, the model can identify fake, misleading, and partially true content with reported accuracy of 96%. The system analyzes linguistic patterns, sentiment, and structural features to distinguish fabricated narratives from legitimate reporting, targeting a “polluted information ecosystem” where misinformation can influence public health, elections, and trust in institutions. This effort responds to the historical lack of comprehensive Urdu datasets, particularly on politically and religiously sensitive issues where disinformation is prevalent. The research highlights both the potential and limitations of AI in countering information manipulation. 
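The article does not disclose the Urdu model's architecture; purely as an illustration of the general approach (supervised text classification over labeled news articles), here is a minimal bag-of-words Naive Bayes sketch. The training examples, labels, and test sentence below are hypothetical English stand-ins, not drawn from the actual 14,000-article dataset.

```python
from collections import Counter, defaultdict
import math

# Hypothetical toy training data; a real system like the one described above
# would be trained on thousands of labeled news articles.
TRAIN = [
    ("official sources confirm the budget figures", "legit"),
    ("ministry publishes verified election results", "legit"),
    ("shocking secret cure doctors hide from you", "fake"),
    ("leaked miracle evidence they refuse to report", "fake"),
]

def train_nb(examples):
    """Fit a multinomial Naive Bayes model: per-label word counts and priors."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label with the highest log-posterior, with add-one smoothing."""
    total = sum(label_counts.values())
    best_label, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            lp += math.log((word_counts[label][word] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

model = train_nb(TRAIN)
print(classify("secret miracle cure leaked", *model))  # → fake
```

Production systems add far richer features (sentiment, structural cues, as the article notes) and much larger corpora, but the train/classify split above is the same basic pattern.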
While the model represents a significant advancement in detecting disinformation in a low-resource language, it also raises concerns about misclassification, particularly with satire or political dissent, and about its implications for content moderation and enforcement. The dataset has been made open access to support broader efforts to scale detection capabilities and adapt to evolving narratives. Overall, the system is positioned as a technical tool to strengthen resilience against misinformation campaigns targeting Urdu-speaking audiences, while acknowledging the ongoing challenges posed by dynamic and context-dependent disinformation tactics. Source: Tech Xplore, Heriot-Watt University, AI model trained on 14,000 Urdu news stories spots misinformation with 96% accuracy. [online] Published 18 March 2026. Available at: https://techxplore.com/news/2026-03-ai-urdu-news-stories-misinformation.html Top Of Page [CRC Glossary] The modern information environment is projected to keep growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page
- Cyber-based hostile influence campaigns 9th - 15th March 2026
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Facebook Bulgarian Disinformation Network Drives Traffic to Fabricated News Site X AI and Disinformation Surge on X During Iran Conflict [State Actors] Russia Kremlin’s Myth of "Protecting Russian Compatriots" Russian Disinformation Targeting French Local Elections Disinformation Networks Linked to Russia Expand Across Europe and Africa False Narratives About the War in the Middle East The War in Ukraine Historical Myths in Russia’s War Against Ukraine China China Pushed Pro-Iran War Claims Iran Disinformation Claims About Israeli Leaders During the Iran War AI-Manipulated Image Used for Disinformation [AI Related Articles] The Risk of Automation of Disinformation Campaigns AI’s Influence on Journalism [General Reports] Disinformation Campaigns Target Human Rights Defenders in Guatemala [Appendix - Frameworks to Counter Disinformation] Study Proposed Aid System Resilient to Disinformation During Disasters CFTC’s Selig Said Prediction Markets Could Help Counter Disinformation Detecting AI Content Farms and Online Disinformation [CRC Glossary] [ Report Highlights] In the wake of the conflict between the United States, Israel, and Iran on 28 February, a Wired report highlights how X became saturated with disinformation, much of it fueled by AI-generated content. A NewsGuard report uncovers a Russian disinformation campaign seeking to influence France’s 2026 municipal elections by spreading false claims about political candidates. 
Investigations by the Institute for Strategic Dialogue (ISD) across six EU countries and parts of the Sahel revealed that Russian-aligned disinformation campaigns have become more sophisticated since the 2022 invasion of Ukraine. Chinese state media and pro-China social media accounts, as documented by NewsGuard’s Reality Check, spread false or misleading narratives about the war involving Iran to promote Iran’s military strength and undermine trust in the US and its allies. A new arXiv study examined how AI agents could coordinate online information operations. According to findings by Global Witness, coordinated disinformation campaigns on social media have been used to attack and criminalize human rights defenders in Guatemala. [Social Media Platforms] Facebook Bulgarian Disinformation Network Drives Traffic to Fabricated News Site An article by the DFR Lab reports that the Bulgarian website dailystandart[.]com publishes fabricated and misleading political articles to generate revenue through the Adskeeper native advertising platform. Traffic to the site is largely driven by a coordinated Facebook network consisting of two pages, nine groups, and six accounts. This network demonstrates signs of coordinated inauthentic behavior, including accounts with stolen profile images, mismatched names and handles, and prolonged inactivity. Several of these accounts administer Facebook groups and repeatedly amplify links to the website, suggesting organized efforts to distribute disinformation. The Facebook groups in the network are politically themed and often promote pro-Russian narratives. Together, they have over 211,600 members, with the largest groups explicitly supporting Russian President Vladimir Putin. Content from dailystandart is spread through direct links, screenshots of headlines, and posts with links in the comments. 
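DFR Lab's actual methodology is not published as code; as a toy sketch of one coordination signal it describes (the same accounts repeatedly pushing links to a single domain), consider the following. The account names, URLs, and threshold are invented for illustration.

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical post log of (account, shared URL) pairs. Real investigations
# would pull this from platform data across pages, groups, and accounts.
POSTS = [
    ("acct_a", "http://dailystandart.example/story1"),
    ("acct_a", "http://dailystandart.example/story2"),
    ("acct_a", "http://dailystandart.example/story3"),
    ("acct_b", "http://dailystandart.example/story1"),
    ("acct_c", "http://othernews.example/item"),
]

def repeat_amplifiers(posts, threshold=3):
    """Flag accounts that link to the same domain at least `threshold` times."""
    counts = defaultdict(int)
    for account, url in posts:
        domain = urlparse(url).netloc
        counts[(account, domain)] += 1
    return sorted({acct for (acct, domain), n in counts.items() if n >= threshold})

print(repeat_amplifiers(POSTS))  # → ['acct_a']
```

In practice such a frequency signal would be combined with the other indicators the article lists (stolen profile images, mismatched handles, timing patterns) before drawing any conclusion about coordination.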
Evidence suggests the website itself relies on automated publishing: duplicated articles with identical URL structures, repeated timestamps, and clusters of posts published at regular intervals indicate programmatic content generation rather than manual journalism. Stories falsely attribute statements to Russian officials and Bulgaria’s former president, Rumen Radev, exaggerate Russian military capabilities, and portray Ukraine as weak or defeated. The website also circulates fabricated stories during sensitive political periods in Bulgaria and employs AI-generated images and incomplete headlines to increase click-through rates. Source: DFR Lab, S. Gelava. Coordinated Bulgarian Facebook network amplifies fabricated for-profit political content . [online] Published 9 March 2026. Available at: https://dfrlab.org/2026/03/09/coordinated-bulgarian-facebook-network-amplifies-fabricated-for-profit-political-content/ Top Of Page X AI and Disinformation Surge on X During Iran Conflict A Wired report states that, since the start of the conflict between the United States, Israel, and Iran on 28th of February, X has been flooded with disinformation, much of it driven by AI-generated content. When disinformation researcher Tal Hagin asked X’s AI chatbot Grok to verify a video claiming Iranian missiles struck Tel Aviv, the chatbot repeatedly provided incorrect information about the video’s location and date and even attempted to support its claims using an AI-generated image. AI-generated images and videos have increasingly been used to spread false or exaggerated claims about the conflict. Iranian officials, state media, and other accounts on X have circulated fabricated visuals depicting events such as destroyed military aircraft, captured U.S. soldiers, and large-scale military damage. Some of these posts gained millions of views before being removed. 
Researchers also found AI-generated content being used to promote propaganda and antisemitic narratives, showing how easily synthetic media can be deployed to manipulate public perception during wartime. At the same time, traditional disinformation continues to spread widely on the platform. Misleading videos and recycled footage have been used to push false claims about attacks during the conflict, including narratives about a deadly strike on a school in Minab. Source: Wired. Fake AI Content About the Iran War Is All Over X . [online] Published 10 March 2026. Available at: https://www.wired.com/story/fake-ai-content-about-the-iran-war-is-all-over-x/ Top Of Page [State Actors] Russia Kremlin’s Myth of "Protecting Russian Compatriots" The Atlantic Council highlights in its analysis that, since Russia’s full-scale invasion of Ukraine in 2022, public debate about the war has been shaped by a long-standing Kremlin narrative claiming that Moscow is protecting Russian populations and reclaiming historically Russian lands. According to the analysis, this narrative is a form of disinformation designed to justify imperial expansion rather than defend minority rights. The idea was heavily promoted during Russia’s 2014 annexation of Crimea and invasion of eastern Ukraine, when many observers accepted the misleading claim that Russian-speaking communities wanted to join Russia. The Kremlin has repeatedly used the “protection of compatriots” narrative across the post-Soviet region, combining propaganda, cultural influence, passport distribution, and military intervention. Russia’s “compatriots” strategy has been observed in Transnistria, Moldova, regions of Georgia, and Ukraine. By broadly defining “Russian compatriots,” Moscow creates a justification for intervention even where local populations do not support Russian involvement. 
In Ukraine, the claim is contradicted by the fact that many Russian-speaking Ukrainians have resisted Russian forces, joined the Ukrainian military, or fled Russian occupation. Despite clear evidence that the narrative functions as propaganda, elements of it continue to shape international discussions about ending the war. Some proposals for peace implicitly treat parts of Ukraine as areas where Russian claims may be legitimate. Source: Atlantic Council, A. Grigas. Russian talk of protecting compatriots masks Putin’s imperial ambitions . [online] Published 10 March 2026. Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/russian-talk-of-protecting-compatriots-masks-putins-imperial-ambitions/ Top Of Page Russian Disinformation Targeting French Local Elections Amid growing concerns about foreign interference, a NewsGuard report reveals that a Russian disinformation campaign is attempting to influence France’s 2026 municipal elections by spreading false claims about political candidates. One example involved a fake website that impersonated the campaign site of Paris mayoral candidate Pierre-Yves Bournazel. The site falsely claimed that Bournazel planned to convert the Centre Pompidou museum into a facility for homeless migrants. The fabricated story was amplified on social media by pro-Kremlin accounts through videos and posts, despite Bournazel’s campaign confirming that the claim and website were entirely false. The operation has been linked to the Russian influence network known as Storm-1516, believed to be connected to the Internet Research Agency. This network typically creates fake websites, forged content, and AI-generated materials around controversial topics such as immigration, then spreads them through coordinated social media activity. Another campaign, called Matryoshka, has also circulated fabricated videos that mimic credible media outlets such as Le Monde, TF1, and RTL. 
These fake reports claim that security threats linked to international conflicts could disrupt the elections. Similar tactics were also used in Germany, where fake news videos falsely accused Ukrainian refugees of planning attacks. Source: NewsGuard, I. Blachez. Russian Disinformation Goes Local Ahead of French Elections, Targeting Paris Mayoral Candidate Backed by Macron’s Party. [online] Published 10 March 2026. Available at: https://www.newsguardrealitycheck.com/p/russia-has-its-eyes-on-paris Top Of Page Disinformation Networks Linked to Russia Expand Across Europe and Africa Investigations by the Institute for Strategic Dialogue (ISD) across six EU countries and parts of the Sahel revealed that Russian-aligned disinformation campaigns have become more sophisticated since the 2022 invasion of Ukraine. Instead of relying mainly on state media, these operations now use proxy websites, AI-generated content, bot networks, and local influencers to spread Kremlin narratives while bypassing sanctions. Researchers found that the strategy focuses on “laundering” propaganda through intermediaries, allowing misleading claims to spread widely within domestic information environments. One major example involved a coordinated network promoting the false claim that Russia was rescuing Ukrainian children from an international sex-trafficking ring linked to the Epstein case. Posts spreading this narrative gained millions of views across Facebook, X, and TikTok, despite evidence that Russia has forcibly deported Ukrainian children, which led to international legal action. Despite EU sanctions on Russian outlets such as RT, enforcement gaps allow sanctioned content to remain accessible through mirror sites and proxy platforms. Disinformation networks have exploited elections in countries like Germany and Poland by spreading fabricated videos, impersonating media outlets, and promoting false claims about politicians and Ukrainian refugees. 
Similar tactics are also appearing outside Europe, particularly in the Sahel, where Russian-aligned actors use local influencers and regional languages to present Kremlin narratives as authentic domestic opinions. Source: ISD Global. ISD investigation exposes the sophistication of Russian interference in the EU and beyond. [online] Published 9 March 2026. Available at: https://www.isdglobal.org/media-mentions/the-consequences-are-extremely-serious-isd-investigation-exposes-sophistication-of-russian-interference-in-democracies-in-europe-and-beyond/ Top Of Page False Narratives About the War in the Middle East EUvsDisinfo, in its article, describes how pro-Kremlin information outlets have incorporated the Middle East conflict into their broader disinformation strategy, seeking to link the war involving Iran to Ukraine and Western policies. Russian messaging has promoted misleading narratives suggesting that the 2014 Ukrainian Maidan revolution triggered wider global instability and contributed to the current crisis. Another widely circulated narrative claimed that Ukraine might stage a “provocation” to regain international attention diverted by the war in Iran. This allegation follows a familiar Kremlin pattern of accusing Ukraine of planning sabotage or false flag operations, while also promoting the claim that Western countries are “tired” of Ukraine. However, available polling data indicate that public support for Ukraine in Europe remains high. Moreover, a fabricated story falsely alleged that an Iranian missile destroyed a luxury property in Dubai belonging to an aide to Ukraine’s military leadership, attributing the story to a nonexistent report by Euronews. The claim originated on anonymous Telegram channels and was amplified by the pro-Kremlin Pravda disinformation network. Source: EUvsDisinfo. Russian FIMI and the war in Iran. [online] Published 13 March 2026. 
Available at: https://euvsdisinfo.eu/how-russia-lies-about-the-war-in-the-middle-east-2/ Top Of Page The War in Ukraine Historical Myths in Russia’s War Against Ukraine EUvsDisinfo draws attention in its article to a persistent form of disinformation surrounding Russia’s war against Ukraine, which portrays Ukraine as historically marginal and naturally within Russia’s sphere of influence. Analysts argue that this narrative reflects a Kremlin-centered view of history that ignores Ukraine’s long-standing role in European development. Historically, the territory of modern Ukraine has been a crossroads of civilizations, trade routes, and cultural exchanges, making it central to European history. Presenting Ukraine as an artificial or secondary state helps justify Russian claims over its territory. Another key element of Kremlin disinformation is the myth that modern Russia directly inherits the legacy of medieval Rus’, implying that Ukraine and Russia are historically inseparable. This narrative is used to legitimize territorial ambitions and frame Russian aggression as a historical restoration. In reality, Ukraine’s history is complex and plural, shaped by multiple cultures and political traditions. The Kremlin has also promoted broader myths that reinforce its geopolitical goals, including the idea that Russia is militarily invincible and that nuclear weapons guarantee victory. Source: EUvsDisinfo, T. Snyder. A Historian’s Big Picture. Russia’s war against Ukraine and how to end it in a right way . [online] Published 10 March 2026. Available at: https://euvsdisinfo.eu/a-historians-big-picture-russias-war-against-ukraine-and-how-to-end-it-in-a-right-way/ Top Of Page China China Pushed Pro-Iran War Claims As reported by NewsGuard 's Reality Check, Chinese state media and pro-China social media accounts spread false or misleading narratives about the war involving Iran in an effort to promote Iran’s military strength and undermine trust in the US and its allies. 
The messaging targeted audiences in mainland China, Taiwan, and the global Chinese diaspora, reflecting the close strategic relationship between Beijing and Tehran. NewsGuard found that Chinese outlets have increasingly echoed Iranian claims about the conflict, including unverified reports of major US military losses and Iranian battlefield successes. One widely circulated claim alleged that Iran shot down a US fighter jet near its border with Kuwait. The story was repeated dozens of times by Chinese state media and gained significant engagement on Chinese platforms such as Weibo and Douyin. However, US and Kuwaiti officials stated that the aircraft was not downed by Iran but was mistakenly shot down by Kuwaiti forces. Another false narrative promoted by pro-China accounts claimed that Israeli Prime Minister Benjamin Netanyahu had fled to Germany during the war, despite public records and media reports showing that he remained in Israel and continued making public appearances. Researchers also found that Chinese-linked content farms used AI-generated posts to spread similar narratives to Taiwanese audiences on Facebook. These accounts, which previously focused on lifestyle content, began publishing coordinated articles portraying the US and Israel as suffering heavy losses while depicting Iran as dominant in the conflict. Source: NewsGuard, C. Lin. China Pumps Out War Disinformation to Boost Its Ally Iran and Undermine the West . [online] Published 12 March 2026. Available at: https://www.newsguardrealitycheck.com/p/china-pushes-pro-iran-war-claims Top Of Page Iran Disinformation Claims About Israeli Leaders During the Iran War In its Reality Check, NewsGuard documents that, since the start of the Iran war on 28th of February 2026, Iranian state media and pro-regime social media accounts have circulated disinformation that exaggerates or fabricates Iranian military successes. 
A major theme of these campaigns has been false claims that senior Israeli officials were killed or injured in Iranian missile strikes. These narratives spread widely online, gaining hundreds of thousands or even millions of views despite lacking credible evidence. Among the most prominent false claims were reports that Israeli Prime Minister Benjamin Netanyahu and Mossad director David Barnea had been killed in Iranian attacks. Iranian state media outlets and pro-Iran social media accounts posted messages suggesting Netanyahu was injured or dead, while others shared a video supposedly showing the aftermath of a missile strike that killed Barnea. In reality, no credible media reports confirmed these claims. Netanyahu was photographed publicly visiting Ashdod Port after the alleged attack, and the video linked to Barnea’s supposed death was actually from a 2024 suicide bombing in Tel Aviv, unrelated to the current conflict. Similar false claims also targeted Israeli National Security Minister Itamar Ben-Gvir and other officials and family members. None of these deaths was confirmed, and some officials publicly showed that they were alive. Source: NewsGuard, L. Marchl. Iranian State Media Fabricate a Long List of Israeli Leaders Supposedly Killed by the Regime’s Missiles. [online] Published 11 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iranian-death-hoaxes-target-israeli Top Of Page AI-Manipulated Image Used for Disinformation As revealed by NewsGuard, pro-Iran social media accounts recently circulated disinformation using an AI-altered image of an Iranian missile allegedly inscribed with the phrase "In memory of the victims of Epstein Island". The posts claimed the message was written on a missile launched during the conflict with Israel, portraying Iran as acknowledging victims connected to the Jeffrey Epstein scandal and indirectly mocking U.S. politics. 
The image spread widely online, gaining millions of views across social media platforms and being shared by pro-Iran users and state-controlled media outlets. However, investigations found that the image was digitally manipulated and not authentic. The original photograph shows an Iranian Khorramshahr-4 ballistic missile without any inscription, and had circulated online weeks before the war began. AI analysis tools confirmed that the text referencing Epstein was added using an AI image editing system. Source: NewsGuard, I. Chomnalez. Debunk: Iranian Missile With Epstein Inscription Is an AI Fabrication . [online] Published 13 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iranian-missile-dedicated-to-epstein Top Of Page [AI Related Articles] The Risk of Automation of Disinformation Campaigns An arXiv study explores how AI agents could coordinate online information operations, showing that advances in large language models and generative AI may enable influence campaigns to become largely automated, with networks of AI agents producing and amplifying content with minimal human supervision. Traditionally, such operations relied on human organizers and simple tactics, such as synchronized posts or hashtag flooding, to create the illusion of widespread support. Using a simulation, researchers created a social media environment in which AI agents promoted a political candidate and disseminated a campaign hashtag. The results showed that even minimal awareness among AI agents, such as knowing who their teammates are, can trigger strong coordination. The agents began amplifying one another’s posts, repeating similar narratives, and thereby increasing the spread and engagement of their messages. These behaviors mirror real-world disinformation tactics, where coordinated accounts attempt to manipulate algorithms and make certain views appear more popular or widely accepted than they actually are. 
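The study's simulation environment is far richer than any toy model; purely to illustrate the core dynamic it reports (minimal teammate awareness producing mutual amplification and outsized reach), here is a simplified sketch. The agent counts, reshare probability, and amplification rule are all invented for illustration, not taken from the paper.

```python
import random

random.seed(0)  # make the independent baseline reproducible

def simulate(n_agents, coordinated):
    """Return total reach when each agent posts once and may reshare others.

    Coordinated agents always amplify teammates' posts; independent agents
    reshare only occasionally (10% chance per post they see).
    """
    reach = {i: 1 for i in range(n_agents)}  # each agent's own post counts once
    for agent in range(n_agents):
        for other in range(n_agents):
            if agent == other:
                continue
            if coordinated or random.random() < 0.1:
                reach[other] += 1  # a reshare boosts the other post's reach
    return sum(reach.values())

independent = simulate(10, coordinated=False)
coordinated = simulate(10, coordinated=True)
print(independent, coordinated)  # coordinated reach is far higher
```

Even this crude rule reproduces the qualitative finding: once agents know who their teammates are and amplify them systematically, the team's messages dominate engagement without any single account posting more original content.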
The findings highlight the growing risk that automated networks of AI agents could scale disinformation campaigns more quickly and efficiently than traditional bot operations. Source: arXiv, G. M. Orlando, J. Ye, V. La Gatta, M. Saeedi, V. Moscato, E. Ferrara, L. Luceri. Emergent Coordinated Behaviors in Networked LLM Agents: Modeling the Strategic Dynamics of Information Operations. [online] Published 2025. Available at: https://arxiv.org/pdf/2510.25003 Top Of Page AI’s Influence on Journalism The European External Action Service (EEAS) reports on an event organised by Europe House titled “Journalism and Technology: How is AI changing newsrooms?”, which explored how artificial intelligence is transforming the way news is produced, distributed, and consumed. While AI can assist journalists with tasks such as data analysis, translation, research, and content creation, experts warned that the rapidly evolving digital environment also increases the risk of inaccurate information and disinformation. Participants emphasized that although AI can improve newsroom efficiency, human journalists remain essential for accountability, fact-checking, and ethical decision-making. Experts highlighted that AI should be used as a supporting tool rather than a replacement for core journalistic skills such as reporting, investigation, and verification. The discussion also focused on how news organizations must adapt to new technologies while protecting democratic values. Source: EEAS. Europe House hosts discussion on the use of artificial intelligence in newsrooms and media ethics. [online] Published 11 March 2026. 
Available at: https://www.eeas.europa.eu/delegations/bosnia-and-herzegovina/europe-house-hosts-discussion-use-artificial-intelligence-newsrooms-and-media-ethics_en Top Of Page [General Reports] Disinformation Campaigns Target Human Rights Defenders in Guatemala A Global Witness investigation documents how coordinated disinformation campaigns on social media have been used to attack and criminalize human rights defenders in Guatemala. A prominent example is the case of Luis Pacheco, a Maya K’iche’ leader and former deputy energy minister who was arrested in 2025 on charges of terrorism and sedition. Before and after his arrest, thousands of posts on platforms such as Facebook, X, and TikTok accused him of corruption, terrorism, and electoral fraud. Many of these allegations were spread by accounts known for disseminating false information, anonymous profiles, and figures associated with political and military elites. According to the report, these online campaigns aim to damage activists' reputations, create distrust within their communities, and make it easier to justify legal persecution. Disinformation has been particularly prominent during key political moments, including efforts to overturn the results of the 2023 national election and ongoing conflicts over Indigenous land rights. Critics argue that companies such as Meta, X, and TikTok have failed to prevent the weaponization of their platforms for harassment and misinformation, leaving activists more vulnerable to intimidation and criminalization. Source: Global Witness. Weaponising social media . [online] Published 2026. 
Available at: https://gw.hacdn.io/media/documents/Weponising_Social_Media_Guatemala_-_March_2026.pdf

[Appendix - Frameworks to Counter Disinformation]

Study Proposed Aid System Resilient to Disinformation During Disasters

A study in Computers & Industrial Engineering presents an optimization model for humanitarian logistics that combines trucks and drones to deliver aid more fairly and efficiently in disaster zones. The system, called the Multi-Truck and UAV Routing Problem (MTURP), aims to minimize both the total travel distance and the time gap between the first and last deliveries, ensuring that assistance reaches all affected communities as evenly as possible. Trucks transport supplies and deploy drones to complete last-mile deliveries, enabling aid to reach areas where damaged infrastructure or flooded roads make ground transport impossible. A key aspect of the model is its ability to operate under conditions of disruption and in the presence of uncertain or misleading information. The researchers highlighted that disasters can be accompanied by disinformation or false service requests, which may distort priorities and lead to misallocation of resources. To address this challenge, the framework incorporates community vulnerability data, using the Social Vulnerability Index, to guide equitable distribution and maintain fairness even when information reliability is uncertain. The model was tested in two case studies simulating flood scenarios in Hoboken, New Jersey, and Hopkins County, Kentucky. Results showed that the proposed optimization approach yields near-optimal routing solutions while remaining scalable for large-scale operations.

Source: ScienceDirect, R. T. Khameneh, N. Ghorbani-Renani & J. E. Ramirez-Marquez, Computers & Industrial Engineering, Volume 213, 111786. Multi-objective optimization of a truck–drone delivery system for fair and efficient humanitarian logistics under disruption and disinformation. [online] Published March 2026.
Available at: https://www.sciencedirect.com/science/article/abs/pii/S0360835225009325

CFTC's Selig Said Prediction Markets Could Help Counter Disinformation

Politico reports that, at an event organized by the Futures Industry Association in Florida, U.S. Commodity Futures Trading Commission Chair Michael Selig said prediction markets could serve as tools to counter disinformation by providing publicly generated forecasts about major events. He argued that more Americans are turning to prediction markets to understand issues ranging from elections to economic developments, and he suggested that combining these markets with blockchain technology could create decentralized systems that help verify information and challenge misleading narratives circulating online. Prediction markets allow users to trade contracts based on the likelihood of real-world events, generating odds that reflect collective expectations. Supporters claim these platforms can be more accurate than traditional forecasting methods such as opinion polls. However, critics question whether the participants represent the broader public and whether their predictions can reliably reflect reality.

Source: Politico, D. Harty. CFTC's Selig: Prediction markets can offer a check on 'disinformation'. [online] Published 9 March 2026. Available at: https://subscriber.politicopro.com/article/2026/03/cftcs-selig-prediction-markets-can-offer-a-check-on-disinformation-00819232

Detecting AI Content Farms and Online Disinformation

NewsGuard announced it has launched a new system to detect "AI Content Farms", websites that mass-produce AI-generated articles that are often inaccurate or misleading. Developed with Pangram Labs, the tool combines automated AI detection with expert human analysis to identify sites that publish large amounts of AI-written content without clearly disclosing it.
So far, the system has identified more than 3,000 AI Content Farm websites, and the number is growing rapidly, with hundreds of new sites appearing each month. These websites frequently imitate legitimate news outlets and publish dozens of articles daily, often spreading false claims about public figures, companies, health issues, or politics. In some cases, disinformation from these sites has been amplified by larger media networks or social media accounts, increasing its reach. Some AI Content Farms have also been linked to foreign influence operations, including networks connected to Russia that publish misleading content disguised as local news sources in the United States and Europe. The new detection system aims to protect readers, advertisers, and online platforms from the growing flood of AI-generated disinformation and low-quality content. By identifying these sites in real time, companies can avoid placing advertisements on them and reduce the financial incentives that allow disinformation networks and propaganda operations to spread misleading narratives online.

Source: NewsGuard. NewsGuard Launches Real-time "AI Content Farm" Detection Datastream to Counter Onslaught of AI Slop in News. [online] Published 12 March 2026. Available at: https://www.newsguardtech.com/press/newsguard-launches-real-time-ai-content-farm-detection-datastream-to-counter-onslaught-of-ai-slop-in-news/

[CRC Glossary]

The modern information environment is projected to keep growing in sophistication and complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability.
Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.
- Mad MAX: The Kremlin’s Attempt at Complete Information Control
Telegram and the Authoritarian Control Paradox

Authoritarian regimes increasingly seek information dominance by nationalizing or tightly regulating digital communication platforms. Yet attempts to exert comprehensive narrative control frequently encounter a structural constraint: the same platforms these regimes seek to suppress are operationally indispensable. Russia's ongoing effort to suppress Telegram exemplifies this paradox. The popular messaging platform simultaneously serves as Russia's top propaganda distribution channel, a battlefield communication medium, and a vector for hostile influence campaigns (HICs). At the same time, it remains a largely uncontrolled information space capable of exposing clandestine military activity and information operations, facilitating internal dissent, and enabling intelligence collection. As the Kremlin escalates pressure on Telegram while promoting a state-controlled alternative messenger, it exposes the inherent tension between operational pragmatism and the drive for complete information control that characterizes authoritarian information governance.

Asserting Information Dominance

Modern information dominance is pursued through legal, technical, and infrastructural measures designed to consolidate control over the domestic information environment. China, Iran, and Russia have each implemented increasingly restrictive policies aimed at regulating digital communication. These measures include platform bans, strict censorship, content moderation, legislation that restricts online expression, and even wholesale internet shutdowns.
Despite variations in implementation, these approaches all share three key strategic objectives:

- Consolidating state oversight over the national information space
- Limiting channels for adversarial influence and foreign intelligence activity
- Restricting the outward flow of politically or militarily sensitive information

In that context, messaging platforms such as WhatsApp and Telegram have become frequent targets of regulatory limitations, technical disruption, or formal bans. However, these same platforms are widely used by state actors and state-aligned threat actors to conduct influence operations, espionage, and cyber-enabled activities. Telegram in particular has emerged as a central infrastructure for HICs and hybrid operations, due to its popularity, anonymity features, and permissive moderation standards.

Russia vs. Telegram

Telegram occupies a uniquely ambiguous position in Russia's information ecosystem. It functions simultaneously as a vibrant domestic media environment, a practical communications tool for military forces, and a major source of tactical intelligence. Because of the platform's unregulated nature, Russian authorities have increasingly moved to constrain it. In February 2026, the state communications regulator Roskomnadzor introduced a series of phased restrictions against Telegram, citing the company's alleged noncompliance with domestic legislation, refusal to remove content deemed extremist, and failure to comply with data localization requirements. As of early March 2026, Telegram remains accessible to the overwhelming majority of Russian users. Authorities have demonstrated a sustained pattern of selective, regionally targeted throttling, consistent with their broader strategy of graduated digital coercion. In response, Telegram's founder, Pavel Durov, publicly defied the regulatory pressure, reaffirming the platform's commitment to user privacy, while Russian courts imposed punitive financial sanctions against Telegram.
The escalation intensified on February 24, 2026, when Russian authorities opened a criminal case against Durov himself on charges of "aiding terrorism". Durov publicly rejected the charge on his @durov Telegram channel, describing the Russian state's conduct as the daily fabrication of "new pretexts to restrict Russians' access to Telegram" in pursuit of suppressing "the right to privacy and free speech", and calling it "a sad spectacle of a state afraid of its own people".

Posts by Telegram founder Pavel Durov responding to Russia's regulatory actions against the platform.

A State-Sponsored Usurper

In parallel to this pressure, the Kremlin has promoted a domestic alternative to foreign messaging platforms: MAX, a messenger developed by VK (formerly Mail.ru Group). The MAX messenger app is listed on the Google Play Store, showing over 50 million downloads as of March 2026. Designed as a multifunctional "super-app" comparable to China's WeChat, MAX integrates messaging, social networking, financial services, and e-government functions into a single state-aligned ecosystem. Since 2025, the application has been mandatorily preinstalled on new devices sold within Russia.

Feature comparison of Russia's MAX (VK) and China's WeChat (Tencent) across communication capabilities, platform ecosystem, and international reach. Data current to early 2025.

Assessments of MAX's actual adoption relative to Telegram differ greatly. VK has reported more than 50 million registered users by late 2025, roughly corresponding to the number of downloads recorded on the Google Play Store. Independent estimates, however, suggest that actual active usage may be significantly lower.

Comparison of four messaging platforms across privacy and state-control indicators. Data as of early 2025.
Regardless of precise adoption rates, the Kremlin's strategic objective is clear: replace unruly communication platforms with a domestically governed digital infrastructure that gives the state complete visibility into, and control over, public discourse and financial transactions.

Why Telegram Still Matters

Battlefield Communications and Intelligence

At the tactical level, Telegram has become a primary conduit for Russian battlefield coordination. In fact, it is so indispensable to Russian forces that the Minister for Digital Development explicitly exempted usage in warzone areas (i.e., the Ukrainian front) from the nationwide restrictions. This dependency has recently been exposed and worsened by Ukraine's successful degradation of Russian Starlink access at the front, which deepened Russia's communication dependencies at a moment when state authorities are escalating regulatory pressure. Beyond combat communications, Telegram is also used for intelligence gathering:

- It is considered a feed of open-source intelligence (OSINT), where troop movements, battlefield imagery, and operational details circulate across channels accessible to any actor.
- Its open-access structure, which permits users to contact any other user without prior connection, lends itself to the deployment of botnets for passive intelligence collection or proactive human intelligence (HUMINT) operations.

A Disinformation Hotbed

According to multiple reports, Telegram has become a primary distribution channel for Russian propaganda and HICs targeting both domestic and foreign information environments.

Top 5 monitored Russian bloggers' Telegram channels, March 2025 to March 2026. This view showcases the extensive available visibility into channels' reach and engagement. Courtesy of the Alliance for Securing Democracy at the German Marshall Fund.
The European Union's ban on several Russian state-aligned media outlets solidified Telegram's role as a key platform for continuous narrative dissemination across European audiences. These activities are not limited to official state channels: a whole ecosystem of proxy accounts, alternative media networks, and coordinated messaging campaigns amplifies Russian narratives across multiple languages and platforms.

Network mapping of Russian-affiliated pseudo-Ukrainian Telegram channels (January 2026). Courtesy of OpenMinds.

In October 2023, Foundation for Defense of Democracies (FDD) adjunct fellow Ari Ben Am uncovered a campaign that targeted Ukrainian soldiers with coordinated surrender narratives disseminated across Telegram, WhatsApp, Signal, and Facebook. According to his findings, the manipulative messaging originated from Russian Telegram channels and was subsequently amplified in both German and French, exemplifying the multilingual, cross-platform reach of Russia's Telegram channels. Another prominent example is the Telegram channel "War on Fakes". Although it presents itself as an independent fact-checking initiative debunking disinformation, the channel consistently promotes pro-Kremlin narratives. With more than 410,000 followers, it has repeatedly circulated misleading claims about the war in Ukraine.

"War on Fakes" active Telegram channels in Russian (left) and English (right).

A Comparative Model

Like Russia, the Islamic Republic of Iran has pursued a sovereign national internet model designed to reduce dependence on foreign digital infrastructure. However, Iran's system still relies on certain Western services. A January 28, 2026, report by Filter Watch indicated that authorities gradually permitted access to select platforms such as Google, Bing, and ChatGPT due to the absence of domestic alternatives.

International platforms whitelisted by Iranian authorities, as of January 28, 2026. Courtesy of Filter Watch.
However, according to Filter Watch: "Regarding social media and messaging platforms, Instagram, Telegram, YouTube, and X remain accessible only through circumvention tools and continue to suffer from instability." Telegram itself has been formally banned in Iran since 2018, following its role in the 2017–2018 protest movement. Nevertheless, as in Russia, the platform has remained widely used among Iranian citizens.

Top 10 countries by number of Telegram channels and user count, as of March 2026. Courtesy of Telemetr.io.

Like Russia, Iran has used Telegram for hostile influence efforts and intelligence gathering abroad. Following the October 7 attacks, Iranian-linked threat actors reportedly increased the scale of their Telegram-based HUMINT and hybrid operations targeting Israel. The main difference between Russia and Iran lies in enforcement methods. Historically, Iran has had to resort to nationwide internet shutdowns to assert information control in times of heightened threat to the regime's stability. The three total blackouts (enacted in June 2025, January 2026, and March 2026) are emblematic of Tehran's concerns about uncontrolled information flows. Attempts by journalists and civil society actors to bypass Iran's January 2026 blackout using Starlink terminals proved largely unsuccessful, illustrating the regime's ability to enforce physical control over connectivity infrastructure. In comparison, Moscow has mostly managed to avoid such drastic and highly disruptive measures, opting instead for more targeted techniques (e.g., specific protocol blocking, traffic throttling, and regional restrictions). In addition, Russia has invested in developing substitutes for key foreign (and especially Western) digital platforms, including its own search engine, AI tools, and online communication apps. An important dimension of this infrastructural consolidation is the large-scale deployment of DNS tampering.
By systematically deleting domain records from the National Domain Name System (NSDI), Russian authorities render targeted domains, including bbc.com, youtube.com, and torproject[.]org, entirely non-existent within the Russian information space.

Strategic Liabilities and the Limits of Information Control

Despite its operational utility, Telegram is perceived by Russian authorities as a threat. In the long term, it has the potential to facilitate sensitive information leaks while hosting the largest digital space outside the Kremlin's control. At present, the widespread use of Telegram by the Russian populace creates valuable opportunities for external actors looking to inject narratives into Russian information spaces. Russia and Iran thus present distinct variations of the same structural dilemma: Russia currently preserves selective access to Telegram due to critical dependencies, while Iran prioritizes stricter control, even at the cost of nationwide shutdowns and the economic and reputational disruption they generate. For now, Telegram remains a contested arena within authoritarian information environments: a tool of state power with inherent strategic vulnerabilities for the regimes that rely on it.

The CRC would like to thank Ari Ben Am, FDD's CCTI Adjunct Fellow and co-founder of Telemetry Data Labs, and Dr. Daria Dergacheva for their contribution to this analysis.

[References:]

Alliance for Securing Democracy at the German Marshall Fund. The War in Ukraine: Military Bloggers Dashboard. [online] Retrieved March 10, 2026 from: https://securingdemocracy.gmfus.org/war-in-ukraine-military-bloggers/

BBC, O. Chia & B. Tavener. Russia orders block on WhatsApp in messaging app crackdown. [online] Published 12 February 2026. Available at: https://www.bbc.com/news/articles/clygd10pg5lo

TIME, V. Bergengruen. Telegram Becomes a Digital Battlefield in Russia-Ukraine War.
[online] Published 21 March 2022. Available at: https://time.com/6158437/telegram-russia-ukraine-information-war/

Global Voices, D. Dergacheva. Inside-net: Russia is dismantling free internet connections. [online] Published 7 October 2025. Available at: https://globalvoices.org/2025/10/07/inside-net-russia-is-dismantling-free-internet-connections/

FDD, R. Easter. What Russia's War on Telegram Means for the West. [online] Published 23 February 2026. Available at: https://www.fdd.org/analysis/2026/02/23/what-russias-war-on-telegram-means-for-the-west/

Filter Watch, N. Keshavarznia. Network Monitoring January 2026: Internet repression in times of protest. [online] Published 24 January 2026. Available at: https://filter.watch/english/2026/01/24/network-monitoring-january-2025-internet-repression-in-times-of-protest-2/

Filter Watch, N. Keshavarznia. A Month of Iran's Internet: From Regional Disruptions to Total Blackout and Whitelisted Access. [online] Published 28 January 2026. Available at: https://filter.watch/english/2026/01/28/network-monitoring-january-2026-from-regional-disuptions-to-total-blackout-and-whitelisted-access/

Freedom House. Freedom on the Net 2025: Russia. [online] Published 2025. Available at: https://freedomhouse.org/country/russia/freedom-net/2025

Institute for National Security Studies (INSS), N. Yasur & D. Citrinowicz. Iranian Foreign Information Manipulation and Interference During the Swords of Iron War. [online] Published 12 November 2024. Available at: https://www.inss.org.il/publication/iran-influence/

IStories, L. Lemyasova. The MAX state messenger announced 45 million users. These are probably mostly "dead souls". [online] Published 15 October 2025. Available at: https://istories.media/news/2025/10/15/gossmessendzher-max-zayavil-o-45-mln-polzovatelyakh-veroyatno-v-osnovnom-eto-mertvie-dushi

FDD, D. Shapiro & K. Korkiya. SpaceX Blocks Russia's Starlink Access but Ubiquiti's Tech Remains Ubiquitous.
[online] Published 6 February 2026. Available at: https://www.fdd.org/analysis/2026/02/06/spacex-blocks-russias-starlink-access-but-ubiquitis-tech-remains-ubiquitous/

OpenAI. Disrupting Malicious Uses of AI Report. [online] Published 25 February 2026. Available at: https://openai.com/index/disrupting-malicious-ai-uses/

OpenMinds. The Kremlin Connection: Mapping Telegram Networks in Russia, Ukraine, and Belarus. [online] Published 8 January 2026. Available at: https://telegram-network.openminds.ltd

M. Wang, P. Lin, J. Knockel, W. Greenberg, J. Mayer & P. Mittal. Proceedings on Privacy Enhancing Technologies, 2025(4), pp. 896–911. What WeChat knows: Pervasive first-party tracking in a billion-user super-app ecosystem. [online] Published 2025. Available at: https://petsymposium.org/popets/2025/popets-2025-0163.pdf
- Cyber-based hostile influence campaigns 2nd - 8th March 2026
[Introduction]

Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]

[Introduction]
[Report Highlights]
[State Actors]
Russia
- Russia's Information Control in Ukraine's Occupied Territories
- Kremlin Used Iran War Disinformation to Target Ukraine and France
- Pro-Kremlin Disinformation Targeted Middle East War Narratives
China
- Leaked Documents Revealed AI-Driven Chinese Influence Operations
- A Case Study of PRC Influence in Malaysia and Singapore
Iran
- Misleading War Images Exaggerated Iran's Military Success
- Cyber and Information Operations Expand Pressure on Iran's Regime
- Iran Launched a Disinformation Campaign Following War with US and Israel
- AI Video Showing Massive Destruction in Tel-Aviv Spread by Pro-Iran Accounts
- The Online Information Environment Around the Iran War
- Hacktivist Cyber Campaign Escalates Following Operation Epic Fury
[AI Related Articles]
- Google's AI Overviews Amplified Iran-Related Hoaxes
- AI Fueled Election Interference Across Three States
- AI-Assisted Russian Influence Network Targets Africa
[Cyfluence Attacks]
- Iran Expected to Launch Cyber Attacks Worldwide Following US and Israeli Strikes
- Middle East Conflict Triggered Surge in Global Cyber Activity
[Appendix - Frameworks to Counter Disinformation]
- EU and France Rethink Strategy to Counter Russian Disinformation
[CRC Glossary]

[Report Highlights]

In a recent report, DoubleThink Lab reveals that leaked internal documents from the Chinese technology company GoLaxy outline plans to build advanced AI-powered systems to automate the manipulation of information.
A study by DoubleThink Lab examining the People's Republic of China's influence in Southeast Asia found that both Malaysia and Singapore face growing exposure to Chinese political, economic, and technological influence.

Iranian state media have intensified disinformation efforts following the outbreak of war with the US and Israel, spreading false claims about battlefield successes and using manipulated or misleading visuals. As identified by NewsGuard's Reality Check, at least 18 false claims have circulated since the conflict began.

A report issued by Radware states that following Operation Epic Fury, coordinated pro-Iranian and allied hacktivist groups launched widespread cyber campaigns, including DDoS attacks, phishing operations, and hack-and-leak claims, targeting government and critical infrastructure across the Middle East while amplifying political messaging online.

Graphika's analysis finds that AI played a central role in influence operations targeting elections in Bangladesh, Colombia, and Tibet, enabling the rapid production and amplification of synthetic political content across online platforms.

Le Monde reports that the EU and France are reassessing their strategies to counter rising Russian disinformation ahead of major elections in 2027, including the French presidential vote and parliamentary elections in Italy.

[State Actors]

Russia

Russia's Information Control in Ukraine's Occupied Territories

An article by EUvsDisinfo reports that since the start of Russia's occupation of parts of Ukraine in 2014 and the full-scale invasion in 2022, Russian occupation authorities have built a comprehensive system of information control in Ukraine's occupied regions. An estimated 5–6 million residents now live in an information environment largely cut off from Ukrainian media and dominated by Russian state narratives.
Ukrainian outlets have been shut down or replaced with pro-Kremlin media, while telecommunications infrastructure has been "Russified", shifting networks to Russian operators that enable traffic filtering, website blocking, and internet disruptions. Authorities have also restricted access to major platforms and promoted state-controlled messaging apps to expand surveillance and propaganda. Ukrainian journalists have been detained or forced to leave, while new media outlets and Telegram channels simulate "normal life" under occupation even as they spread Kremlin narratives. Russian authorities have also targeted language and education as tools of control, eliminating Ukrainian-language schooling, removing Ukrainian books from libraries, and imposing Russian as the dominant language in public life. Arrests, surveillance, and online monitoring have created a climate of fear that encourages widespread self-censorship among residents. Despite these pressures, forms of quiet resistance persist. Some residents use VPNs to access Ukrainian information, maintain Ukrainian language and culture at home, or continue education through Ukrainian online schools.

Source: EUvsDisinfo, E. Beketova. Russia's Information Grip on Ukraine's Occupied Territories. [online] Published 2 March 2026. Available at: https://euvsdisinfo.eu/russias-information-grip-on-ukraines-occupied-territories/

Kremlin Used Iran War Disinformation to Target Ukraine and France

As reported by NewsGuard's Reality Check, the Russian influence operation Matryoshka has used the ongoing conflict involving Iran to spread fabricated reports aimed at discrediting Ukraine and its allies, particularly France. The campaign produced at least nine false claims during the conflict triggered by coordinated strikes by the US and Israel against Iranian targets. The reports mimicked credible media outlets and research institutions to make the claims appear legitimate.
Among the false narratives was a fake video attributed to the Institute for the Study of War claiming that an Iranian missile killed 70 French soldiers at a base in Abu Dhabi. Another report, falsely attributed to Le Point, alleged that Ukrainian phone scammers stole millions of euros from French citizens by exploiting the crisis. Additional claims attempted to portray Ukrainian military leadership as corrupt, including a story falsely suggesting that Ukraine's army chief, Oleksandr Syrskyi, owned a luxury property in Dubai that was destroyed by an Iranian missile.

Source: NewsGuard Reality Check, A. Lee. Russia Uses Iran War to Plant Fake Reports Discrediting Ukraine And France. [online] Published 4 March 2026. Available at: https://www.newsguardrealitycheck.com/p/kremlin-creates-iran-war-fakes-to

Pro-Kremlin Disinformation Targeted Middle East War Narratives

An article by EUvsDisinfo showed that pro-Kremlin media outlets have circulated several misleading narratives related to the war in the Middle East. One widely promoted claim, attributed to the Russian Foreign Intelligence Service, alleged that France and the UK planned to provide Ukraine with a nuclear or "dirty" bomb. Russian state media repeated the accusation across multiple languages, despite offering no evidence, while officials in Ukraine, France, and the UK rejected the claims. Other pro-Kremlin narratives accused Ukraine and the British intelligence agency MI6 of plotting to sabotage Black Sea energy infrastructure, including the Turkish Stream and Blue Stream gas pipelines. These allegations were largely based on statements by Putin and echoed previous unsupported accusations made by Moscow during the war in Ukraine. At the same time, pro-Kremlin outlets promoted the claim that Gulf states such as Qatar, Saudi Arabia, and the United Arab Emirates had asked Russia to serve as the guarantor of peace in the Middle East.
Reports indicated, however, that the diplomatic contacts were initiated by Moscow itself.

Source: EUvsDisinfo. How Russia lies about the war in the Middle East. [online] Published 6 March 2026. Available at: https://euvsdisinfo.eu/how-russia-lies-about-the-war-in-the-middle-east/

China

Leaked Documents Revealed AI-Driven Chinese Influence Operations

A 399-page leak analyzed by DoubleThink Lab reveals that the Chinese technology company GoLaxy planned to build advanced AI-powered systems to automate the manipulation of foreign information environments. The documents describe a "smart propaganda" platform capable of monitoring political trends, identifying targets, generating tailored propaganda, and distributing it through large networks of realistic fake social media accounts. The company, which originated from the Institute of Computing Technology under the Chinese Academy of Sciences, aims to use AI and data analysis to influence public opinion abroad more effectively than previous campaigns, which were often associated with spam-like messaging. The system gathers large amounts of political and social data, profiles influential individuals and organizations, and generates customized content that can be automatically posted by thousands of "persona bots" across platforms such as Facebook, X, and Telegram. The documents also reveal detailed monitoring programs targeting regions and political issues sensitive for China, including extensive datasets on political figures and organizations in Taiwan, as well as monitoring of political actors and institutions in the US and across the Indo-Pacific. These systems are designed to track elections, analyze public sentiment, and identify opportunities to shape narratives or deepen political divisions. The leak also highlights cooperation between GoLaxy and organizations linked to the Chinese government and military, including the Cyberspace Administration of China and the Central Military Commission.

Source: DoubleThink Lab.
The Rise of AI in PRC Influence Operations: Nine Takeaways from the GoLaxy Documents . [online] Published 4 March 2026. Available at: https://medium.com/doublethinklab/the-rise-of-ai-in-prc-influence-operations-nine-takeaways-from-the-golaxy-documents-2d6617a753e5 Top Of Page A Case Study of PRC Influence in Malaysia and Singapore A study by DoubleThink Lab examining the People’s Republic of China's influence in Southeast Asia found that both Malaysia and Singapore face growing exposure to Chinese political, economic, and technological influence. According to the Doublethink Lab China Index 2024, several Southeast Asian countries ranked among the most affected globally. In Malaysia, ties with Beijing have strengthened under Prime Minister Anwar Ibrahim, including expanded economic cooperation, visa policies, and agreements in media, telecommunications, and technology. Public sentiment toward China has also grown more positive, partly due to geopolitical tensions, such as the Gaza war. At the same time, concerns remained about economic competition, security risks, and territorial disputes in the South China Sea. Malaysia has signed multiple cooperation agreements with Chinese state-linked media and technology companies, raising questions about potential narrative influence and information sharing. In Singapore, influence efforts were evident in cultural, educational, and media exchanges, as well as on Chinese social media platforms and diaspora networks. Authorities have also faced cyber incidents attributed to Chinese-linked groups and instances of online disinformation, including AI-generated content spreading false claims about political divisions within the ruling People’s Action Party and its leader, Prime Minister Lawrence Wong. Despite expanding engagement with Beijing, both countries maintain complex and cautious relationships with China. 
Singapore continues to balance ties with both China and the US, while Malaysia’s closer cooperation with Beijing coexists with concerns over sovereignty and economic dependence. Source: DoubleThink Lab. PRC Influence on ASEAN Countries in 2025: The Case Study of Malaysia and Singapore. [online] Published 3 March 2026. Available at: https://medium.com/doublethinklab/prc-influence-on-asean-countries-in-2025-the-case-study-of-malaysia-and-singapore-48d84c1e71c1 Top Of Page Iran Misleading War Images Exaggerated Iran’s Military Success As coordinated strikes on 28 February 2026 escalated tensions between the US, Israel, and Iran, an analysis by NewsGuard’s Reality Check documented how misleading images and videos began circulating widely on social media. At least three viral pieces of visual misinformation collectively garnered more than 21.9 million views on X, falsely portraying Iran as achieving major military victories. In reality, each example relied on unrelated or old footage. A widely shared image allegedly showing the USS Abraham Lincoln sinking was actually a photo of the scuttling of the decommissioned USS Oriskany in 2006. Another viral video claimed to depict an Iranian strike on Israel’s Negev Nuclear Research Center but was traced to a 2017 ammunition depot explosion in Balakliya, Ukraine. A third clip circulated as footage of a missile strike on a CIA headquarters in Dubai, United Arab Emirates, but actually showed a residential tower fire in nearby Sharjah in 2015. Source: NewsGuard Reality Check, L. Marchl, I. Blachez & S. Rubinson. Amid U.S. and Israeli Attack, Phony Images and Videos Make It Look Like Iran Has Had Big Wins. [online] Published 2 March 2026.
Available at: https://www.newsguardrealitycheck.com/p/iran-is-winning-according-to-misleading Top Of Page Cyber and Information Operations Expand Pressure on Iran’s Regime Alongside US and Israeli airstrikes on Iran, a parallel campaign has targeted the country’s digital infrastructure and information environment. A report by the Australian Strategic Policy Institute (ASPI) described this strategy as a fusion of cyber and kinetic operations designed to disrupt communications, weaken situational awareness, and pressure political decision-making. By targeting telecommunications, state media systems, and online services, cyber tools can create uncertainty within government institutions while simultaneously affecting how the public perceives events on the ground, though none of this guarantees regime collapse. The campaign reportedly included attempts to reach civilians directly through digital platforms. One example cited is the alleged hacking of the widely used Iranian prayer and calendar application BadeSaba, which sent push notifications urging users and military personnel to surrender. Similar approaches have appeared in the past, including those used by Russia in digital influence operations. Source: ASPI, J. Bassi & J. Corera. Beyond bombs: cyber and information operations targeting Iran’s regime. [online] Published 2 March 2026. Available at: https://www.aspi.org.au/strategist-posts/beyond-bombs-cyber-and-information-operations-targeting-irans-regime/ Top Of Page Iran Launched a Disinformation Campaign Following War with US and Israel Iranian state media have intensified disinformation efforts following the outbreak of war with the US and Israel, spreading false claims about battlefield successes and using manipulated or misleading visuals.
As identified by NewsGuard's Reality Check, at least 18 provably false war-related claims linked to Iran-aligned sources have circulated since the conflict began, more than triple the number recorded in the two weeks before the attacks. Several claims relied on altered or misleading images and videos. One example involved an AI-manipulated satellite image shared by the Iranian state outlet Tehran Times that falsely depicted the destruction of a US radar system at Al Udeid Air Base. In another case, Iranian channels circulated a video claiming that Iranian forces had shot down a US fighter jet over Tehran. However, the footage actually showed an Iranian aircraft destroyed by the Israeli Air Force. The spread of these narratives has been amplified by Iran’s near-total internet shutdown, which reduced connectivity by about 98 percent and limited access to outside information. Source: NewsGuard Reality Check, W. Rizvi. Iran’s State Media Bombards Iranians with False Claims and Misleading Visuals. [online] Published 5 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iran-goes-on-a-disinformation-offensive Top Of Page AI Video Showing Massive Destruction in Tel Aviv Spread by Pro-Iran Accounts An investigation by NewsGuard’s Reality Check found that a viral video circulating on social media falsely claimed to show a dozen Iranian missiles striking a residential neighborhood in Tel Aviv on 3 March 2026, supposedly bypassing Israel’s missile defense system and causing massive destruction. The footage spread widely across platforms, gaining tens of millions of views after being promoted by pro-Iran and anti-Israel accounts, some of which described it as verified “breaking” footage of a major Iranian attack. The investigation determined that the video had actually been generated using AI. Analysis using the AI-detection tool Hive, along with visual inconsistencies in the footage, revealed that it was fabricated.
For example, the street shown in the clip appeared completely deserted despite the supposed daytime missile barrage, and the siren heard in the background did not match authentic Israeli missile alert sirens. AI systems compounded the error: Grok described the video as authentic citizen footage, while Google AI Overviews suggested it depicted a real military engagement. In reality, while one Iranian missile did strike Tel Aviv on 28 February 2026, causing injuries and one death, the large-scale destruction shown in the viral video never occurred. Source: NewsGuard Reality Check, I. Chomnalez. Debunk: Video Showing Massive Destruction in Tel Aviv Is an AI Fabrication Spread by Pro-Iran Accounts. [online] Published 6 March 2026. Available at: https://www.newsguardrealitycheck.com/p/dozens-of-missiles-struck-tel-aviv Top Of Page The Online Information Environment Around the Iran War Following US and Israeli airstrikes on Iran, researchers at Graphika tracked the evolving online information environment surrounding the conflict. Their monitoring highlighted several parallel developments, including influence operations, hacktivist mobilization, and the spread of false or misleading information across social media platforms. Pro-China state media outlets such as CGTN and China Daily, along with Chinese diplomatic accounts, have published videos, memes, and official statements portraying the US and Israel as aggressors while expressing support for Iran and calling for de-escalation. At the same time, disinformation has circulated widely online, including coordinated posts falsely claiming that Donald Trump and Benjamin Netanyahu had been killed in the conflict. These claims spread across multiple platforms using copied messages and repurposed images. Several pro-Iranian hacktivist groups linked to Iranian security services, including actors associated with the Islamic Revolutionary Guard Corps, reactivated and claimed attacks against regional and Western targets.
Meanwhile, a coalition of hacktivist groups mobilized within hours of the strikes, calling for cyber operations against US and Israeli interests, though many of the claimed attacks remain unverified. Researchers also observed pre-strike messaging from pro-Iranian and pro-Russian networks aimed at discouraging US military action and warning of possible retaliation. Source: Graphika. Iran War: Monitoring the Online Threat Landscape. [online] Published 6 March 2026. Available at: https://graphika.com/posts/iran-war-monitoring-the-online-threat-landscape Top Of Page Hacktivist Cyber Campaign Escalates Following Operation Epic Fury A report issued by Radware states that following the launch of the U.S.–Israeli military offensive “Operation Epic Fury” on February 28, 2026, a rapid surge of hacktivist cyber activity emerged across the Middle East. According to the first report, hacktivist groups mobilized within nine hours of the initial strikes. They launched coordinated distributed denial-of-service (DDoS) campaigns targeting government institutions, financial systems, and telecommunications providers. The activity was highly concentrated, with two groups, Keymous+ and DieNet, responsible for nearly 70% of all attack claims between February 28 and March 2. In total, nine hacktivist collectives claimed 107 attacks against 81 organizations across eight countries, with government institutions accounting for approximately 53% of all targets. The campaign focused primarily on Kuwait, Israel, and Jordan, which together represented more than 76% of attack claims, indicating a strategic effort to disrupt state infrastructure and maximize political visibility during the regional escalation.
A separate threat analysis highlights the broader cyber and influence ecosystem surrounding the conflict, noting that although Iranian state cyber capabilities were likely temporarily constrained by severe internet outages and disruptions to command structures, external actors rapidly intensified cyber activity. According to the second report, numerous pro-Iranian, pro-Palestinian, and pro-Russian hacktivist groups, including Handala Hack, APT Iran, Cyber Islamic Resistance, Dark Storm Team, and 313 Team, conducted disruptive operations, including DDoS attacks, hack-and-leak campaigns, website defacements, and attempts to access industrial control systems. The report also identifies influence-oriented tactics, including phishing campaigns using a malicious replica of Israel’s Home Front Command RedAlert application to deliver surveillance malware, intimidation campaigns targeting critics of Iran, and public claims of system compromises intended to generate psychological pressure and shape public narratives during the conflict. Taken together, the two analyses describe a coordinated cyber escalation accompanying the kinetic conflict. Both reports emphasize that a small number of ideologically aligned hacktivist networks dominate the activity, often publicizing operations through Telegram and other online platforms to amplify political messaging. The participation of external actors, including pro-Russian groups such as NoName057(16), suggests the emergence of a broader transnational cyber campaign in which disruptive attacks, exaggerated breach claims, and information operations operate alongside military confrontation to destabilize infrastructure, influence public perception, and expand the digital dimension of the conflict. Source: Radware. Retaliatory Hacktivist DDoS Activity Following Operation Epic Fury/Roaring Lion . [online] Published 3 March 2026. 
Available at: https://www.radware.com/security/threat-advisories-and-attack-reports/ddos-activity-following-operation-epic-fury-roaring-lion/ Unit 42, Palo Alto Networks. Threat Brief: March 2026 Escalation of Cyber Risk Related to Iran. [online] Published 2 March 2026. Available at: https://unit42.paloaltonetworks.com/iranian-cyberattacks-2026/ Top Of Page [AI Related Articles] Google’s AI Overviews Amplified Iran-Related Hoaxes An investigation by NewsGuard’s Reality Check found that Google’s AI-generated search summaries were repeating false claims about images linked to the conflict involving Iran. In several cases, the summaries presented unverified claims as plausible explanations, even when the images or videos were unrelated to the conflict or created using AI. NewsGuard identified multiple examples. Google’s AI summary described a video from a 2015 residential fire in the United Arab Emirates as a possible missile strike on a CIA-linked building in Dubai. In another case, an AI-generated video allegedly showing multiple Iranian missiles striking Tel Aviv was summarized as depicting a real military escalation. Similarly, footage posted weeks earlier was mischaracterized as evidence that the US Embassy in Riyadh had been destroyed in a drone attack, despite verified footage showing only minor damage. Although Google’s reverse-image search can still help trace the origins of images through its “Exact Matches” feature, the AI-generated summary appears at the top of search results and may shape users’ first impressions. Source: NewsGuard Reality Check, S. Rubinson & I. Chomnalez. Google’s AI Overviews Are Fooled by Iran-Related Visual Hoaxes in its Own Reverse-Image Searches. [online] Published 3 March 2026.
Available at: https://www.newsguardrealitycheck.com/p/googles-ai-overviews-supercharge Top Of Page AI Fueled Election Interference Across Three States An analysis by Graphika found that AI played a central role in influence operations targeting elections in Bangladesh, Colombia, and Tibet. Across the three cases, coordinated networks used AI tools to generate deepfake videos, produce political content in multiple languages, and automate the mass distribution of posts through inauthentic accounts designed to appear like genuine users. Ahead of Bangladesh’s February 2026 election, AI-generated videos circulated online, showing fabricated authority figures accusing political actors, while fact-checkers later identified the clips as fake. In the Tibetan diaspora election for the Central Tibetan Administration, accounts linked to the pro-China "Spamouflage" network published AI-generated articles and cartoons in both English and Tibetan to discredit candidates. Meanwhile, in Colombia’s parliamentary elections, a network of automated accounts flooded social media with thousands of posts targeting President Gustavo Petro, amplifying both pro- and anti-government narratives to create confusion and information overload. AI significantly lowered the cost and effort required to produce persuasive political content and coordinate large-scale influence campaigns. As a result, election interference is becoming easier to conduct, more difficult to detect, and increasingly focused on eroding public confidence in democratic institutions rather than simply promoting or attacking specific candidates. Source: Graphika. Deepfakes, Noise, and Doubt: AI's Role in Three Recent Elections. [online] Published 4 March 2026.
Available at: https://graphika.com/posts/deepfakes-noise-and-doubt-ai-s-role-in-three-recent-elections Top Of Page AI-Assisted Russian Influence Network Targets Africa A pro-Kremlin influence network linked to the media outlet Rybar used ChatGPT to support covert information operations targeting audiences in Africa, according to a threat report by OpenAI . The campaign, dubbed "Fish Food", relied on artificial intelligence to mass-produce multilingual social media content that was later disseminated across Telegram and X through accounts with no declared connection to the outlet. Researchers found that the system was used primarily as a content-generation hub, producing batches of comments and narratives, often from Russian-language prompts, designed to promote pro-Russian messaging, criticize Ukraine, and portray Western countries as interfering actors. Beyond producing propaganda content, the network reportedly used AI tools to plan and structure influence operations. Prompts requested assistance drafting proposals for covert campaigns in Africa, including strategies to manage coordinated social media accounts, create a bilingual investigative journalism website, arrange paid media placements in French-language outlets, and organize offline events. Additional prompts sought information on electoral systems in Burundi, Cameroon, and Madagascar, including suggestions for mobilizing protests and developing local networks to support election-related influence activities. Although some of the accounts involved appeared to originate in Russia and Rybar maintains a large Telegram audience, OpenAI stated it found limited evidence that the campaign achieved substantial amplification or measurable real-world impact. Source: OpenAI. Disrupting malicious uses of AI . [online] Published 25 February 2026. 
Available at: https://openai.com/index/disrupting-malicious-ai-uses/ Top Of Page [Cyfluence Attacks] Iran Expected to Launch Cyber Attacks Worldwide Following US and Israeli Strikes As reported by Infosecurity Magazine , Iran is highly likely to respond to recent US and Israeli air strikes with cyber-attacks targeting organizations across the Middle East and other regions, according to Google’s chief of cyber threat intelligence, John Hultquist. Speaking at an event hosted by the Royal United Services Institute (RUSI) in London, Hultquist warned that countries hosting US military bases, such as Qatar, Bahrain, Jordan, the United Arab Emirates, and Kuwait, could face particularly aggressive cyber operations. While Iran’s cyber tools may not be new, the scope of potential targets is expected to expand significantly, exposing organizations with weaker cybersecurity defenses. Hultquist noted that Iran often operates through loosely connected cybercriminal or hacktivist groups, creating plausible deniability for state-backed actions. These groups may appear independent, but could actually act as fronts for the Iranian Revolutionary Guard Corps (IRGC). As a result, some incidents labeled as hacktivism or ransomware may in fact be covert state-sponsored cyber campaigns aimed at governments, businesses, and critical infrastructure in the US, Gulf states, and other countries viewed as hostile to Iran. Source: Infosecurity Magazine, D. Palmer. Expect Iran to Launch Cyber-Attacks Globally, Warns Google Head of Threat Intel . [online] Published 2 March 2026. Available at: https://www.infosecurity-magazine.com/news/iran-cyber-attacks-global-google/ Top Of Page Middle East Conflict Triggered Surge in Global Cyber Activity Infosecurity Magazine describes the escalating conflict in the Middle East as entering a hybrid phase that combines military operations with large-scale cyber campaigns, raising concerns about spillover risks for organizations worldwide. 
The shift followed joint Israeli and US strikes on Iran targeting Iranian leadership, military, and nuclear-related sites. At the same time, a major cyber operation reportedly disrupted Iran’s digital infrastructure, with internet connectivity dropping to about 4% of normal levels. Security experts warned that cyber retaliation is likely to intensify, potentially disguised as criminal activity such as ransomware attacks. Increased activity has already been observed from hacktivist and cybercriminal groups, including HydraC2, Handala, and Sicarii, alongside a surge in DDoS attacks, website defacements, and alleged data breaches targeting government, banking, aviation, and telecom sectors. More than 150 hacktivist incidents were reported in the days following the strikes. Experts noted that Iran has previously used cyber operations to retaliate against geopolitical pressure and may deploy tactics designed to obscure state involvement. These could include ransomware attacks followed by destructive data wiping, exploiting long-term espionage access for sabotage, operating through proxy cybercriminal groups, and publicly releasing stolen data to pressure victims. Source: Infosecurity Magazine, A. Mascellino. Hybrid Middle East Conflict Triggers Surge in Global Cyber Activity . [online] Published 2 March 2026. Available at: https://www.infosecurity-magazine.com/news/middle-east-conflict-surge-global/ Top Of Page [Appendix - Frameworks to Counter Disinformation] EU and France Rethink Strategy to Counter Russian Disinformation Amid growing concerns about foreign interference ahead of major elections in 2027, an article in Le Monde outlines how the EU and France are reassessing their strategies to counter rising Russian disinformation. The elections include the French presidential election and the Italian parliamentary elections. 
Concerns have grown over interference campaigns ranging from propaganda websites and manipulated AI content to algorithm-driven influence on social media platforms such as TikTok and X. In response, the European Commission launched the European Center for Democratic Resilience as part of its “European Democratic Shield” initiative, aiming to coordinate member states’ responses and protect public debate. Both Brussels and Paris are shifting toward a society-wide approach that emphasizes public resilience rather than relying solely on removing false content. The strategy focuses on strengthening media literacy, empowering trusted community figures such as educators and librarians to help identify misinformation, and improving research and education around information manipulation. Finland is often cited as a successful model in which media literacy is taught from an early age and public trust in institutions remains relatively high. France has also developed specific tools to counter foreign interference. Since 2021, the government unit Viginum has monitored digital manipulation campaigns, while officials plan to expand research programs and public awareness initiatives ahead of the 2027 election cycle. Source: Le Monde, W. Audureau. EU and France rethink their approach to combat Russian disinformation surge. [online] Published 4 March 2026. Available at: https://www.lemonde.fr/en/les-decodeurs/article/2026/03/04/eu-and-france-rethink-their-approach-to-combat-russian-disinformation-surge_6751095_8.html Top Of Page [CRC Glossary] The modern information environment is projected to grow ever more complex. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page
- Cyber-based hostile influence campaigns 23rd February - 1st March 2026
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] X Disinformation Spread on X After US and Israeli Strike on Iran White House Staffer Linked to Anonymous Pro-Trump X Account [State Actors] Russia Escalating Hybrid Warfare Risks in Europe Russian Disinformation Wave Targeted the Olympics, France, and the Navalny Investigation Russian Disinformation Campaign Targeted Ukraine’s European Allies Russia Expands Influence in Africa Through Hybrid and Disinformation Tactics China Pro-China Bot Network Spread Propaganda on Threads Targeting Taiwanese Users Iran Pro-Iran Networks Mobilized Hacktivists and Media to Control War Narratives Hacktivist Activity Linked to Iran War [AI Related Articles] AI-Enabled Scams and State-Linked Influence Operations Exposed in 2026 Threat Report Quarterly NewsGuard Audit Found Leading AI Chatbots Repeat False Claims at High Rate [General Reports] AI-Driven Disinformation Amplifies Fear During Mexican Cartel Violence False Hostage Narrative Spread Widely After Killing of Cartel Leader The Islamic State’s Evolving Online Ecosystem in 2026 Online Amplification and Disinformation Escalated Leicester Unrest [CRC Glossary] [Report Highlights] Russia has conducted increasingly aggressive hybrid warfare in NATO countries since its 2022 invasion of Ukraine, and according to a report by Recorded Future, it will likely intensify coordinated tactics such as cyberattacks, sabotage, propaganda, airspace violations, and energy pressure based on a New Generation Warfare
doctrine. The Carnegie Endowment for International Peace report found that Russia advances its influence in Africa through coordinated security partnerships and strategic information campaigns that exploit local grievances and anti-Western sentiment, though its long-term impact is constrained by limited economic capacity and structural vulnerabilities. NewsGuard analysis shows that the Russian influence operation Storm-1516 has increasingly shifted its focus from Ukraine itself to European countries supporting Ukraine’s defense, particularly France and Germany. According to an analysis by Graphika, during the June 2025 war between Israel and Iran, pro-Iran media outlets, social media networks, and hacktivist groups coordinated efforts to shape public perception and promote unified narratives. OpenAI's February 2026 update detailed how criminal networks and state-linked actors leveraged AI models to scale scams, social engineering, and coordinated influence operations, combining automated content generation with fake personas, cross-platform amplification, and targeted harassment campaigns. A report by the Independent Commission of Inquiry into the 2022 Leicester violence found that coordinated online networks used misleading content and identity-based narratives to amplify and distort unrest in Leicester, escalating local tensions into a broader, polarized crisis. [Social Media Platforms] X Disinformation Spread on X After US and Israeli Strike on Iran Following announcements by Donald Trump about a major U.S. and Israeli military operation against Iran, a wave of misleading and false content rapidly spread on X. An analysis by WIRED reviewed hundreds of viral posts containing inaccurate claims about the scale and locations of the attacks. Many widely shared videos and images were old footage, taken from unrelated conflicts, altered, or generated with AI. In some cases, video game clips were circulated as supposed evidence of real combat events.
Much of the viral misinformation came from accounts with paid verification badges, whose posts can generate revenue through engagement. Although some misleading posts later received contextual corrections through community notes, they often accumulated millions of views before being flagged. Examples included footage falsely claiming to show missiles over Dubai, alleged Israeli aircraft shootdowns without credible confirmation, and manipulated images depicting damage to Iranian or U.S. military assets. Source: WIRED, D. Gilbert. X Is Drowning in Disinformation Following US and Israeli Attack on Iran . [online] Published 28 February 2026. Available at: https://www.wired.com/story/x-is-drowning-in-disinformation-following-us-and-israels-attack-on-iran/ Top Of Page White House Staffer Linked to Anonymous Pro-Trump X Account A WIRED report described how a popular pro-Trump account on X, known as “Johnny MAGA,” with nearly 300,000 followers, appeared to be run by a White House rapid-response staffer named Garrett Wade. The account has frequently amplified messaging from the White House and defended Donald Trump's policies while presenting itself online as an independent supporter. WIRED linked Wade to the anonymous account through publicly available records and a phone number associated with both him and the profile. The account has promoted administration narratives, attacked political opponents such as Gavin Newsom, and defended controversial content shared by Trump, including an AI-generated video depicting Barack Obama and Michelle Obama as apes. Disinformation researchers suggested that, if a government staffer is secretly operating an influential political account, it could blur the line between official government messaging and what appears to be organic digital support. Source: WIRED, M. Kelly. A White House Staffer Appears to Run Massive Pro-Trump X Account . [online] Published 25 February 2026. 
Available at: https://www.wired.com/story/a-white-house-staffer-appears-to-run-massive-pro-trump-meme-page/ Top Of Page [State Actors] Russia Escalating Hybrid Warfare Risks in Europe Russia has conducted increasingly aggressive hybrid warfare in NATO countries since its 2022 invasion of Ukraine, and a report by Recorded Future’s Insikt Group warns that it will likely intensify coordinated tactics such as cyberattacks, sabotage, propaganda, airspace violations, and energy pressure based on a New Generation Warfare doctrine. Unlike opportunistic actions, a full campaign would involve more frequent, simultaneous operations across multiple countries to undermine confidence in governments and strain NATO’s ability to respond. These tactics could target critical infrastructure, including communications cables, transportation, and energy systems, while expanding disinformation campaigns to deepen political divisions and reduce public trust. The report warned that both governments and private companies would face disruption, reputational risks, and economic damage. Russia may also exploit Europe’s remaining dependence on Russian oil and gas and use emerging technologies such as AI to enhance propaganda and cyber operations. Putin is likely to assess over the next three to five years whether conditions are favorable for a military incursion into NATO territory, particularly in Eastern Europe. Source: Recorded Future, Insikt Group, Preparing for Russia’s New Generation Warfare in Europe . [online] Published 24 February 2026. Available at: https://www.recordedfuture.com/research/preparing-for-russias-new-generation-warfare-in-europe Top Of Page Russian Disinformation Wave Targeted the Olympics, France, and the Navalny Investigation Disinformation campaigns targeting the Olympics, France, and the Navalny investigation intensified in the week of 23 February 2026, as described in an EUvsDisinfo article . 
On the Winter Olympics in Milan and Cortina, where Russia is banned from competing as a state due to doping scandals and its war against Ukraine, pro-Kremlin sources spread AI-enhanced false reports portraying Ukrainian athletes as aggressive or problematic. These narratives included fabricated news segments impersonating legitimate outlets such as Euronews, AFP, and CBC, with AI-generated voices and altered footage used to falsely claim that Ukrainian athletes were isolated from others because of “toxic behavior.” Regarding France, pro-Kremlin outlets falsely claimed that President Emmanuel Macron intended to ban virtual private networks after France adopted new rules restricting social media access for children under 15. In reality, Macron stated that VPNs would not be banned and could even be used to bypass the restrictions. Another disinformation effort sought to deflect attention from evidence that Russian opposition leader Alexey Navalny had been poisoned. Russian officials suggested that the findings of a multinational investigation were meant to distract from the release of files related to Jeffrey Epstein, despite no evidence linking the two events. In the past, pro-Kremlin FIMI outlets have alleged that US special services killed Navalny, that the UK probably assassinated Navalny, and that Navalny died after using an mRNA vaccine supplied by Germany, among many other deceptions. Source: EUvsDisinfo, Russia’s quest for disinformation gold. [online] Published 27 February 2026. Available at: https://euvsdisinfo.eu/russias-quest-for-disinformation-gold/ Russian Disinformation Campaign Targeted Ukraine’s European Allies A report by NewsGuard found that the Russian influence operation Storm-1516 has increasingly shifted its focus from Ukraine itself to European countries supporting Ukraine’s defense, particularly France and Germany.
Since January 2025, the network has circulated dozens of fabricated claims targeting leaders such as Emmanuel Macron and Friedrich Merz, generating hundreds of millions of views on X. The campaign intensified as European countries increased financial and military aid to Ukraine following a sharp decline in U.S. support. The operation spread disinformation by publishing fabricated stories on fake websites that imitated legitimate news outlets, then amplified them through anonymous social media accounts and pro-Kremlin influencers. Many of the false narratives involved sensational personal accusations, fabricated documents, or manipulated videos aimed at damaging the reputations of political leaders and undermining trust in democratic institutions. In France, several false claims targeted Macron personally, including fabricated stories linking him to the Jeffrey Epstein scandal. In Germany, narratives portrayed Merz as corrupt or involved in criminal activity, while also spreading misleading claims about election integrity. Source: NewsGuard, N. Huet & M. Wolf & E. Maitland, Russian Influence Campaign Shifts to Target Ukraine’s European Allies Funding its Defense. [online] Published 27 February 2026. Available at: https://www.newsguardrealitycheck.com/p/russian-influence-campaign-storm-1516-targets-france-germany Russia Expands Influence in Africa Through Hybrid and Disinformation Tactics Russia’s expanding footprint in Africa and the central role of influence operations in advancing Moscow’s strategic objectives were analyzed in a report by the Carnegie Endowment for International Peace. Beyond military deployments and security partnerships, Russia leverages coordinated information campaigns to portray itself as an anti-colonial partner and an alternative to Western engagement.
Through state-backed media, diplomatic messaging, and amplification by local actors, pro-Kremlin narratives frame Western governments as destabilizing forces while depicting Russia as a defender of sovereignty and regime stability. These campaigns frequently exploit local grievances, governance failures, and dissatisfaction with Western security interventions to gain traction. Information manipulation operates alongside security assistance, elite cultivation, and economic arrangements. Russian-linked actors, including private military companies and affiliated media platforms, support friendly political elites while shaping public discourse to legitimize Moscow’s presence. Disinformation and narrative amplification are used to undermine Western credibility, reinforce anti-Western sentiment, and present Russian engagement as both pragmatic and ideologically aligned with African sovereignty. In fragile or conflict-affected states, this hybrid approach has enabled Russia to secure influence disproportionate to its economic footprint. However, the report highlighted significant constraints on Moscow’s long-term influence. Russia’s engagement is heavily concentrated in the security domain and relies on opportunistic partnerships rather than sustained institutional investment. Limited economic capacity, reputational risks associated with mercenary activity, and complex local political dynamics constrain the durability and scalability of its influence. Source: Carnegie Endowment for International Peace, N. Reynolds, Russia in Africa: Examining Moscow’s Influence and Its Limits. [online] Published 26 February 2026.
Available at: https://carnegieendowment.org/research/2026/02/russia-role-west-southern-africa-junta-wagner-africa-corps China Pro-China Bot Network Spread Propaganda on Threads Targeting Taiwanese Users A coordinated influence campaign linked to pro-China actors using automated AI bots on Threads to spread propaganda aimed at Taiwanese audiences was uncovered in an investigation by NewsGuard. The campaign involved at least 10 accounts posing as Taiwanese users. Automated bots can generate massive amounts of content and engagement in near-real time. As a result, despite the accounts having relatively small follower bases, some of their posts have gone viral. The bot accounts frequently posted in Traditional Chinese following Taiwan’s standard conventions, the writing style used in Taiwan for everyday communication. Their messages promoted narratives praising China while attacking Taiwan’s ruling Democratic Progressive Party and highlighting domestic grievances such as housing costs and governance. At the same time, the bots avoided politically sensitive topics banned by Beijing, including questions about the Tiananmen Square Massacre. NewsGuard became aware of the campaign when Threads users with advanced programming knowledge tricked the bots into revealing their true nature. In one instance, a bot repeatedly followed instructions to repeat a phrase verbatim, and in another, it identified itself as OpenAI’s AI assistant (ChatGPT). In response to a comment by a Meta spokesperson questioning the accounts’ influence, NewsGuard argued that the appropriate way to measure their impact is not by follower counts but by the accounts’ reach. Source: NewsGuard, C. Lin, Pro-China Influence Campaign Uses Automated Bots on Meta’s Threads to Target Taiwanese Users With Propaganda. [online] Published 25 February 2026.
Available at: https://www.newsguardrealitycheck.com/p/automated-pro-china-bots-swarm-metas Iran Pro-Iran Networks Mobilized Hacktivists and Media to Control War Narratives Coordination among pro-Iran media outlets, social media networks, and hacktivist groups during the June 2025 war between Israel and Iran was examined in a Graphika analysis (also see CRC Weekly Update, 15th - 21st December 2025). After an initial delay, these actors amplified breaking-news-style claims, threats, and unverified “hack-and-leak” announcements designed to portray Iran as dominant online and to depict Israel as exposed and technically weak. Graphika highlighted the rapid promotion of a new actor, Cyber Isnaad Front, which was repeatedly boosted by outlets linked to the IRGC. The analysis documented a large wave of mostly organic pro-Iran hacktivist activity, including Handala Hack, Gaza Children Hackers, CyberToufan, and Homeland Justice, claiming dozens of intrusions, DDoS attacks, doxxes, and data dumps. Some disruptive operations (and counter-claims) were attributed to other actors, such as Gonjeshke Darande. State media amplification, for example, by Tasnim News, helped inflate the visibility of these personas, but Graphika stressed that most public claims remained unproven. Source: Graphika, Everything Everywhere All at Once: The Pro-Iran Playbook for Narrative Control – Part 2. [online] Published February 2026. Available at: https://public-assets.graphika.com/reports/Graphika_Report-Everything_Everywhere_All_at_Once_(part%202).pdf Hacktivist Activity Linked to Iran War CyberKnow posted that 60 hacktivist groups are engaged in activities related to the Iran-US/Israel war. Most of Iran’s neighboring countries have been targeted. Pro-Russian groups have begun joining the fight in support of Iran, and more are expected to follow. Source: X, CyberKnow (@Cyberknow20). Post on X. [online] Published 2 March 2026.
Available at: https://x.com/Cyberknow20/status/2028454796077019583 [AI Related Articles] AI-Enabled Scams and State-Linked Influence Operations Exposed in 2026 Threat Report The February 2026 OpenAI update documented how threat actors across multiple regions leveraged AI models alongside traditional digital infrastructure to conduct scams, covert influence operations, and social engineering campaigns. The report highlighted that AI was typically integrated into broader operational workflows rather than acting as a standalone tool. Documented abuses include semi-automated romance and investment scams originating in Cambodia, fake "scam recovery" services impersonating U.S. law enforcement and law firms, and social engineering efforts likely linked to China targeting U.S. officials. These operations followed structured engagement funnels, often described as "ping, zing, sting," using AI-generated content to initiate contact, build emotional manipulation, and ultimately extract funds. Distribution tactics relied heavily on social media ads, messaging apps, fake websites, impersonation of trusted institutions, and multilingual content generation to increase credibility and reach. The report also detailed multiple state-linked or state-aligned covert influence operations. A Russia-linked content farm associated with the "Rybar" network generated multilingual social media posts and coordinated amplification across Telegram and X, including proposals for election interference campaigns in Africa. Another Russia-linked campaign targeted African audiences with long-form geopolitical articles under fabricated academic personas, seeking to criticize Western governments and legitimize Russian positions. Most significantly, a ChatGPT account linked to an individual associated with Chinese law enforcement documented and planned large-scale "cyber special operations" aimed at suppressing dissent globally.
These activities included coordinated harassment, hashtag campaigns, impersonation of officials, creation of fake accounts, document forgery, doxxing, smear campaigns, and attempts to trigger platform enforcement actions against dissidents. Source: OpenAI. Disrupting malicious uses of our model: An update. [online] Published February 2026. Available at: https://cdn.openai.com/pdf/df438d70-e3fe-4a6c-a403-ff632def8f79/disrupting-malicious-uses-of-ai.pdf Quarterly NewsGuard Audit Found Leading AI Chatbots Repeat False Claims at High Rate A quarterly audit by NewsGuard assessed the 11 leading generative AI tools and their propensity to repeat false claims on controversial topics in the news. In tests, major chatbots, including ChatGPT, Gemini, Copilot, Claude, Grok, and others, repeated false claims in 28.79 percent of responses to prompts about controversial news topics. The failure rate rose to 55.45 percent when prompts were designed to imitate malicious actors attempting to manipulate the systems. The audit also found that chatbots often relied on unreliable online sources, including social media posts tied to foreign influence operations. In one case linked to the Russian operation Internet Research Agency and its offshoot, Storm-1516, several chatbots repeated a false claim that Armenian Prime Minister Nikol Pashinyan allowed Turkish companies to buy discounted gold from the Amulsar mine. Models such as ChatGPT, Grok, and Mistral even cited posts on X that spread the disinformation, inadvertently increasing their visibility. In another case, an AI system incorrectly confirmed a deepfake video of women protesters attacking a paramilitary vehicle in Iran. Some models performed better than others, most notably Claude, which correctly debunked all tested claims. NewsGuard sent emails to the companies that own the tools, but did not receive responses. Source: NewsGuard, I. Blachez. January 2026 — AI False Claim Monitor. [online] Published 25 February 2026.
Available at: https://www.newsguardtech.com/wp-content/uploads/2026/02/January-2026-Quarterly-AI-Audit.pdf [General Reports] AI-Driven Disinformation Amplifies Fear During Mexican Cartel Violence Following a large-scale Mexican military operation targeting Jalisco New Generation Cartel leader Nemesio Oseguera Cervantes (“El Mencho”), Politico reported that widespread violence across approximately 20 states was accompanied by a parallel surge of online disinformation. As roadblocks, explosions, and shootings unfolded, social media platforms were flooded with AI-generated and manipulated videos and images portraying exaggerated or fabricated scenes of national collapse, including false claims that Guadalajara’s international airport had been seized and set ablaze. According to data presented by Mexican authorities citing Tecnológico de Monterrey, between 200 and 500 problematic posts circulated in the immediate aftermath, with 35% to 40% lacking context, at least 25% deemed misleading, and nearly 25% identified as AI-manipulated or fabricated content. Several posts garnered more than 100,000 views, amplifying panic amid acute public uncertainty. The disinformation narratives included claims that a U.S. agent had strangled Oseguera Cervantes, that President Claudia Sheinbaum was hiding offshore, and that the killing was linked to unrelated geopolitical events. Although the university report did not attribute responsibility, organized crime experts suggested that actors linked to the Jalisco cartel may have contributed, noting the group’s prior investment in online influence and its growing technological sophistication. AI-generated footage, reportedly resembling drone imagery, was described as unusually polished for unfolding crisis conditions, enhancing its credibility and emotional impact. Authorities and the U.S. Embassy attempted to counter false claims, but residents reported difficulty distinguishing authentic reporting from fabricated content.
The campaign significantly intensified public fear, reinforcing perceptions of chaos and demonstrating how digitally amplified disinformation can compound real-world violence during security crises. Source: Politico, Associated Press. Online disinformation fueled panic after killing of Mexican drug lord. [online] Published 25 February 2026. Available at: https://www.politico.com/news/2026/02/25/online-disinformation-fueled-panic-after-killing-of-mexican-drug-lord-00799837 False Hostage Narrative Spread Widely After Killing of Cartel Leader NewsGuard debunked a widely shared disinformation claim falsely alleging that a Mexican drug cartel had taken U.S. tourists hostage in Guadalajara, Mexico, following the death of cartel leader Nemesio Rubén Oseguera Cervantes. The rumor spread rapidly across social media, gaining nearly 27 million views. Influencers and websites claimed that members of the Jalisco New Generation Cartel had kidnapped an unspecified number of American tourists from airports, hotels, and highways. The narrative was amplified by prominent figures, including Laura Loomer, whose posts on X reused real footage of chaotic scenes at Guadalajara airport but misrepresented them as evidence of cartel kidnappings. However, authorities found no proof that any tourists were taken hostage. The Mexican Embassy in the United States publicly denied the claims, stating that there had been no attacks on civilians or hostage situations at the airport. Source: NewsGuard, L. Marchl, Debunk: No Laura Loomer, Claims that the Mexican Cartel Jalisco Took U.S. Tourists Hostage Are Baseless. [online] Published 26 February 2026.
Available at: https://www.newsguardrealitycheck.com/p/us-tourists-held-hostage-in-mexico The Islamic State’s Evolving Online Ecosystem in 2026 A report by the Institute for Strategic Dialogue (ISD) found that the digital ecosystem supporting the Islamic State continued to adapt and expand across multiple platforms in 2026 despite ongoing moderation efforts. Supporters increasingly used AI tools to convert written propaganda into audio, produce news-style content, and recreate deceased ideologues in videos. Researchers monitored around 500 accounts, channels, and websites across platforms such as Facebook, TikTok, Instagram, WhatsApp, Telegram, Element, and SimpleX Chat. These interconnected networks enabled supporters to quickly recreate accounts after removals and to coordinate activities across platforms. The report highlighted how IS supporters exploited platform features to spread propaganda, recruit followers, and, in some cases, monetize content. Influencers and unofficial support groups often acted as hubs that redirected audiences between mainstream social media, encrypted messaging services, and niche platforms. On platforms such as Facebook and Instagram, some accounts used "digital creator" features to boost reach, while on TikTok, propaganda was sometimes repackaged in gamified formats inspired by games like Roblox and Minecraft to attract younger audiences. Overall, the IS online ecosystem remained resilient due to its decentralized and cross-platform structure. The report recommended stronger cross-platform cooperation, more specialized moderation in different languages, and improved detection tools to address evolving tactics used by extremist networks online. Source: ISD, M. Ayad, Coordinating Through Chaos: The State of the Islamic State Online in 2026. [online] Published February 2026.
Available at: https://www.isdglobal.org/wp-content/uploads/2026/02/Coordinating-through-Chaos-FINAL.pdf Online Amplification and Disinformation Escalated Leicester Unrest Unrest in Leicester and its rapid transformation into a highly polarized national and transnational controversy through coordinated online amplification and misleading narratives were examined in a report by the Independent Commission of Inquiry into the 2022 Leicester violence. Localized incidents were reframed on social media as evidence of deep-seated ethnic and religious conflict, with posts frequently relying on decontextualized footage, recycled videos, and unverified claims. This content circulated widely before accurate information could stabilize public understanding, contributing to confusion and heightened tensions. The study identified the role of organized digital networks, including politically aligned influencers and diaspora-linked accounts, in accelerating and legitimizing inflammatory narratives. Through synchronized posting patterns, emotionally charged language, and identity-based framing, these actors amplified selective content to portray events as part of a broader civilizational struggle. Algorithm-driven engagement further boosted sensational material, enabling misleading or exaggerated claims to reach large audiences quickly and repeatedly. The report underscored how digital manipulation shaped public perception as much as, if not more than, on-the-ground developments. By exploiting platform dynamics and social divisions, coordinated actors escalated a local dispute into a broader flashpoint, undermining trust in authorities and deepening polarization. The findings illustrated how hostile or opportunistic influence tactics can weaponize social media ecosystems to magnify unrest and distort democratic discourse. Source: School of Oriental and African Studies (SOAS) - University of London, Independent Commission of Inquiry into the 2022 Leicester violence, C.
Bhatt & V. Puri & S. F. Kamal & S. Patel & S. Sinha & S. Abraham-Gonsalves & S. Grover & L. Magarrell & J. E. Méndez & N. Sultan. Better Together: Understanding the 2022 Violence in Leicester. [online] Published 1 January 2026. Available at: https://static1.squarespace.com/static/6505d742fdd85426286c1396/t/699c3be7e6c285093f8f2541/1771846631816/Report+Leicester.pdf [CRC Glossary] The modern Information Environment is expected to keep growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.
- Cyber-based hostile influence campaigns 16th - 22nd February 2026
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [State Actors] Russia Recurring False Narratives Used by the Kremlin War in Ukraine Kremlin Rewrites Ukraine’s Identity China Chinese Globalization of State-Directed Disinformation [AI Related Articles] Audit Found High Disinformation Rates in AI Voice Responses AI Revives Racist Stereotypes and Misinformation [General Reports] False Claim that Mamdani Allowed Five Daily Muslim Prayer Broadcasts in NYC Uganda's 2026 Election Marked by Repression and Internet Shutdown Voters Misidentify Political Disinformation UN Committee Condemns Disinformation Campaign Targeting Special Rapporteur AI-Generated Disinformation Surges Ahead of Brazil’s Elections Australian Senate Probes AI-Driven Climate Disinformation and Fossil Fuel Influence Networks EU Warns of Coordinated Disinformation Targeting West Africa [Appendix - Frameworks to Counter Disinformation] The French response to Digital Foreign Interference Questionable Efforts to Fight Online Financial Scams Building Immunity Against the Disinformation ‘Virus’ EU and Spain Escalate Fight Against Climate Disinformation [CRC Glossary] [Report Highlights] A report by EU vs. Disinfo described how Russia has used disinformation as a long-term political tool to reshape Ukrainian history and identity. Research by the China Media Project argued that China’s rapid expansion in artificial intelligence is closely linked to its long-standing strategy of information control and global influence.
A study by Harvard Kennedy School’s Misinformation Review examined how Australians identified political misinformation during the 2025 federal election. In a post on X, the European Union delegation in Ghana warned that coordinated disinformation campaigns, documented across 13 West African countries, including Ghana, pose a destabilizing regional threat and require heightened public awareness. According to an article by the United Nations Office for Disaster Risk Reduction (UNDRR), following climate-related disinformation after a major blackout, the EU and Spain advanced regulatory and legal measures, including enforcement of the Digital Services Act and endorsement of the UN Declaration on Climate Information Integrity, to curb algorithmic amplification and hold platforms accountable for spreading false climate narratives. [State Actors] Russia Recurring False Narratives Used by the Kremlin Since its full-scale invasion of Ukraine in 2022, Russia has intensified its disinformation campaigns, using false narratives as a central tool in its confrontation with Ukraine and the West. In an article, EU vs. Disinfo presented recurring false narratives used by the Kremlin. One key claim falsely portrayed the European Union as prolonging the war out of "Russophobia" and military ambition, while another framed the conflict as a NATO proxy war against Russia, presenting Moscow as the victim rather than the aggressor. The Kremlin also promoted misleading narratives that EU sanctions have devastated Europe’s economy, despite evidence that sanctions were imposed in response to Russia’s invasion and have also strained Russia’s own economy. Additional disinformation sought to discredit Ukraine by portraying its leadership as corrupt, illegitimate, or even "Nazi", often relying on fabricated stories and fake sources. Source: EUvsDisinfo, What the Kremlin wants you to believe about its war against Ukraine. [online] Published 20 February 2026.
Available at: https://euvsdisinfo.eu/what-the-kremlin-wants-you-to-believe-about-its-war-against-ukraine/ The War in Ukraine Kremlin Rewrites Ukraine’s Identity In a detailed analysis, EUvsDisinfo highlighted how Russia has used disinformation as a long-term political tool to reshape Ukrainian history and identity. The Kremlin has repeatedly distorted historical facts, manipulated archives, and reframed narratives to fit an imperial vision. Ukrainian aspirations for freedom and independence are treated in Moscow not as historical facts but as provocations. This strategy continues a pattern from both imperial and Soviet times, when Ukrainian distinctiveness was absorbed into a broader Russian story. A key example is Putin’s 2025 Decree No. 858, which sets a goal for 95% of residents in occupied Ukrainian regions to identify as Russian by 2036. While written in neutral bureaucratic language, the decree reflects a clear disinformation objective: to redefine identity through policy. After 2014, Russian narratives shifted from describing Ukrainians and Russians as “brotherly peoples” to claiming they are “one people,” effectively denying Ukraine’s separate existence. Any resistance to this narrative is blamed on interference by a hostile “collective West.” In this framework, control over territory is matched by control over memory, education, and public symbols. Disinformation is used not only to justify military actions but also to reshape identity itself. Source: EUvsDisinfo, Total Recall: How Russia tried to erase the Ukrainian identity. [online] Published 22 February 2026. Available at: https://euvsdisinfo.eu/total-recall-how-russia-tried-to-erase-the-ukrainian-identity/ China Chinese Globalization of State-Directed Disinformation Research by the China Media Project argues that China’s rapid expansion in artificial intelligence is closely linked to its long-standing strategy of information control and global influence.
The Chinese Communist Party (CCP) views AI not only as a driver of economic and technological power but also as a tool for shaping international narratives. Chinese large language models (LLMs) are increasingly adopted worldwide due to their high performance and low cost, yet testing shows that many embed “information guidance” aligned with official state positions. This goes beyond simple censorship, as models subtly promote pro-government narratives on issues such as human rights, territorial claims, religion, and China’s global reputation. The spread of these models creates new risks for FIMI. Developers building on Chinese foundation models often struggle to remove embedded narrative biases, leading users to encounter CCP-aligned messaging without realizing its origin. In some cases, governments in countries such as India, Malaysia, and the UAE have adapted Chinese-based chatbots to restrict discussion of domestic human rights issues. The report also highlighted security concerns. Open-source Chinese models have shown vulnerability to “jailbreaking,” allowing users to extract dangerous information, including instructions related to weapons or toxic substances. At the same time, the geopolitical risks of narrative influence remain underexamined compared to debates over technical AI safety. The authors concluded that without stronger transparency rules and awareness of disinformation tactics embedded in AI systems, the global adoption of Chinese models could normalize state-directed information control and weaken information integrity worldwide. Source: China Media Project, A. Colville & Y. Mengli & J. Haime & D. Bandurski, Guided Intelligence: China’s AI Strategy and the Global Information Space. [online] Published 2025.
Available at: https://mpf.se/download/18.4626748819c47b558e3716/1770885147750/Guided-Intelligence.pdf [AI Related Articles] Audit Found High Disinformation Rates in AI Voice Responses An audit by NewsGuard found that OpenAI’s ChatGPT Voice and Google’s Gemini Live frequently generated convincing audio content repeating false claims: 45 percent of the time for ChatGPT and 50 percent of the time for Gemini Live when prompted to do so. These AI voice tools were especially vulnerable when asked to produce radio-style segments advancing disinformation, including pro-Kremlin narratives and fabricated political stories. The audit tested the bots using false claims related to health, politics, global news, and foreign disinformation. Both ChatGPT Voice and Gemini Live were more likely to repeat political and foreign disinformation than health falsehoods, sometimes presenting fabricated events as breaking news. In contrast, Amazon’s Alexa+ refused to repeat any false claims and often explained why they were incorrect, achieving a 0 percent failure rate. Source: NewsGuard, I. Blachez & I. Chomnalez & L. Marchl, Risky AI: ChatGPT and Gemini Readily Produce False Audio Claims, While Alexa+ Declines. [online] Published 19 February 2026. Available at: https://www.newsguardtech.com/special-reports/chatgpt-and-gemini-readily-produce-false-audio-claims-while-alexa-declines/ AI Revives Racist Stereotypes and Misinformation Highlighting the darker side of technological progress, The Conversation reported that in 2025, advances in artificial intelligence transformed public communication while also accelerating the spread of disinformation rooted in racism. AI-generated content, including lifelike videos produced by tools such as Sora, enabled harmful narratives to circulate faster than fact-checkers could respond. During the U.S.
government shutdown, fabricated videos depicting Black women abusing Supplemental Nutrition Assistance Program (SNAP) benefits went viral. Although clearly labeled as AI-generated, many viewers accepted them as truthful because they reinforced the long-standing racist trope of the "Black welfare queen." These clips, described as "digital blackface," demonstrate how AI can modernize and amplify deeply embedded stereotypes. A similar pattern emerged in the revival of a Minnesota fraud case involving members of the Somali community. Despite prior legal resolutions, political rhetoric reignited the story, and AI-generated videos spread caricatures such as the "Somali pirate" narrative. Rather than encouraging informed debate about fraud or policy, the disinformation campaign fueled anti-Black, anti-immigrant, and Islamophobic sentiments. Even when investigations disproved widespread wrongdoing, the false framing persisted. The article argued that AI does not create racism but accelerates its reach. Fact-checking alone proves insufficient when audiences are predisposed to believe familiar stereotypes. Disinformation succeeds because it taps into existing racial anxieties and political agendas, transforming old prejudices into viral digital spectacles. Source: The Conversation, N. N. Ali, How AI resurrects racist stereotypes and disinformation — and why fact-checking isn’t enough. [online] Published 22 February 2026. Available at: https://theconversation.com/how-ai-resurrects-racist-stereotypes-and-disinformation-and-why-fact-checking-isnt-enough-270000 [General Reports] False Claim that Mamdani Allowed Five Daily Muslim Prayer Broadcasts in NYC NewsGuard identified a widely shared allegation as its False Claim of the Week: that Zohran Mamdani allowed mosques in New York City to broadcast the Islamic call to prayer five times daily, starting at 5 a.m.
Users widely circulated two videos of New York City, with the call to prayer heard in the background, claiming the scenes showed the Adhan being broadcast at 5 a.m., thereby disturbing residents. The claim spread rapidly on social media, driven by anti-Islam activists such as Amy Mek and commentators including Eric Daugherty of Florida’s Voice News, gaining millions of views. In reality, there was no evidence that Mamdani introduced any such change. The existing policy, introduced in 2023 under former mayor Eric Adams, allows mosques to broadcast the call to prayer without a permit only once on Friday afternoons and at sunset during Ramadan. NewsGuard found no official announcements, media reports, or public complaints indicating any expansion of broadcasts. Even the viral videos appeared inconsistent with winter conditions and sunrise times. Mamdani’s press office did not respond to requests for comment, and a representative of the NYPD declined to comment. Source: NewsGuard, L. Marchl, Debunk: Mamdani Did Not Permit Mosques to Broadcast Calls to Prayer Five Times a Day . [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/mamdani-lets-nyc-mosques-broadcast Top Of Page Uganda's 2026 Election Marked by Repression and Internet Shutdown Examining the political climate surrounding Uganda’s January 2026 presidential election, the London School of Economics and Political Science (LSE) reported that the vote was characterized by repression, silence, and widespread disillusionment rather than the energetic opposition seen in previous cycles. Long-time President Yoweri Museveni secured a seventh term with 71.65 percent of the vote, but turnout fell to just 52.5 percent, reflecting deep public distrust. The campaign period saw intensified judicial harassment, detention of opposition figures, and the forced hiding or exile of critics, including opposition leader Bobi Wine. 
A key feature of the election was a nationwide internet shutdown from 13 to 18 January 2026, which cut off more than 45 million people. Authorities justified the blackout as a measure against misinformation, but critics said it was intended to suppress dissent and disrupt opposition coordination. The shutdown severely affected political participation, economic activity, and the ability to report human rights abuses, while exemptions for banks and government services ensured economic continuity and reduced backlash from elites. Marginalized groups dependent on mobile money were hit hardest. Source: London School of Economics and Political Science, M. Daka & K. Rukirabashaija, The 2026 Ugandan election was marked by digital strangulation rather than hope . [online] Published 20 February 2026. Available at: https://blogs.lse.ac.uk/africaatlse/2026/02/20/the-2026-ugandan-election-was-marked-by-digital-strangulation-rather-than-hope/ Top Of Page Voters Misidentify Political Disinformation A study published by the Harvard Kennedy School’s (HKS) Misinformation Review examined how Australians identified political misinformation during the 2025 federal election. Using a digital diary method, 38 participants collected 254 examples they believed were misinformation. However, independent fact-checking revealed a significant gap: only 10 percent of the examples were found to be false, while 37 percent were verified as true. Participants most often labeled content as misinformation when it contradicted their existing beliefs (28 percent) or appeared biased (22 percent). Very few (4 percent) relied on external fact-checking. The findings suggested that perceptions of disinformation are strongly shaped by partisan worldviews rather than objective verification. The research also showed that most people do little when encountering suspected misinformation. The most common response was “doing nothing” (44 percent), followed by discussing it with others (21 percent). 
Information overload, political burnout, and low media literacy discourage active fact-checking. Psychological factors, such as the “illusory truth effect,” in which repeated claims feel true, further complicate the landscape. Participants were most confident in judging content false when it conflicted with their prior knowledge, even when professional fact-checkers later confirmed the information was accurate. Nearly half of the examples identified as misinformation were found on social media, and many originated from political actors or paid advertisements. The study highlighted a critical problem: what citizens perceive as disinformation often does not align with factual assessments. This mismatch allows misleading claims to circulate freely during elections, as voters rarely verify information and frequently reject accurate reporting that challenges their beliefs. Source: Harvard Kennedy School, S. Park & J. Y. Lee & K. McGuinness & C. Fisher & J. Fulton, People rely on their existing political beliefs to identify election misinformation . [online] Published 2026. Available at: https://misinforeview.hks.harvard.edu/article/people-rely-on-their-existing-political-beliefs-to-identify-election-misinformation/ Top Of Page UN Committee Condemns Disinformation Campaign Targeting Special Rapporteur The Coordination Committee of the Special Procedures of the UN Human Rights Council (OHCHR) condemned what it described as a coordinated disinformation campaign by several State Ministers targeting Francesca Albanese, the Special Rapporteur on human rights in the Palestinian territories occupied since 1967. The Committee said ministers relied on “manufactured facts,” attributing statements to Albanese that she did not make at the 17th Al Jazeera Forum, as part of a broader effort to discredit her work. 
It linked these actions to an organized campaign that previously resulted in what it called illegal sanctions against Albanese, with serious personal consequences, and warned of a growing pattern of politically motivated and malicious attacks against independent human rights experts, UN officials, and international judges engaged in accountability efforts. The Committee emphasized that Albanese’s mandate is to investigate alleged violations of international law in the occupied Palestinian territories and argued that calls for her resignation, alongside persistent intimidation and coordinated personal attacks, undermine independent oversight mechanisms. It urged states to redirect their efforts toward accountability for alleged war crimes and crimes against humanity, rather than targeting investigators, and framed the attacks as contributing to a broader erosion of respect for international law and institutional integrity. Source: OHCHR, Special Procedures’ Coordination Committee condemns attacks against Francesca Albanese by State representatives: “Promote justice, not disinformation” . [online] Published 17 February 2026. Available at: https://www.ohchr.org/en/press-releases/2026/02/special-procedures-coordination-committee-condemns-attacks-against-francesca Top Of Page AI-Generated Disinformation Surges Ahead of Brazil’s Elections LatAm Journalism Review (LJR) reported that Brazil is experiencing a sharp rise in AI-generated disinformation, characterized by hybrid content that blends fabricated elements with authentic media to enhance credibility and evade detection. A recent example involved a TikTok video falsely claiming that a new virus had reached Brazil ahead of Carnaval, created by stitching an AI-generated news anchor onto a legitimate broadcast while omitting contextual details that contradicted the alarmist claim. 
Fact-checkers identified the manipulation, but the case illustrates how increasingly sophisticated AI tools enable the production of realistic, emotionally charged narratives. According to Observatório Lupa’s February report, AI-generated fake content has increased by 308 percent since 2024, shifting from predominantly scam-related material to politically focused disinformation ahead of Brazil’s October general elections. Researchers note that these productions often combine compelling visuals, fabricated audio, and selective editing to distort public understanding. Experts warn that the accessibility of generative AI tools, integrated into mainstream platforms and requiring minimal technical expertise, has dramatically lowered the barrier to producing manipulated content, including fake interviews, simulated conversations, and fabricated “selfie” videos attributed to public figures. Detection tools struggle to keep pace with the realism of these outputs, complicating efforts by media organizations and fact-checking teams. Brazil’s Superior Electoral Court is considering financial penalties for AI-generated fake news, reflecting institutional concern about electoral manipulation. However, analysts caution that traditional rebuttals may fail to counter viral falsehoods effectively, as AI-driven disinformation thrives on speed, emotional appeal, and algorithmic amplification that often outpaces corrective communication. Source : LatAm Journalism Review, L. Coelho, AI-fueled disinformation surges ahead of Brazil’s elections . [online] Published 18 February 2026. 
Available at: https://latamjournalismreview.org/articles/ai-fueled-disinformation-surges-ahead-of-brazils-elections/ Top Of Page Australian Senate Probes AI-Driven Climate Disinformation and Fossil Fuel Influence Networks ABC reported that Australia’s Senate Select Committee on Information Integrity heard evidence that coordinated disinformation campaigns, amplified by AI tools, bots, trolls, and engagement-driven social media algorithms, are delaying action on climate and renewable energy. Witnesses described how “AI-slopaganda,” astroturfing, and opaque third-party political campaigning contribute to the pollution of online information channels. Human Rights Commissioner Lorraine Finlay warned that algorithms prioritizing engagement over accuracy elevate extreme or sensational content, while foreign interference, deepfakes, and emerging technologies further erode public trust. Meta representatives stated that the company removes bots and dismantles coordinated inauthentic behavior but does not censor politicians unless they incite violence, prompting concern from senators about the platform’s responsibility in limiting harmful falsehoods. The hearings also examined fossil fuel-linked lobbying and influence networks. Coal Australia defended transferring nearly $4 million, approximately 95 percent of Australians for Prosperity’s funding, to a third-party group that campaigned against Labor, the Greens, and independent candidates during the 2025 federal election, rejecting claims of astroturfing despite questions over transparency and voter awareness. Academic testimony highlighted the role of the Atlas Network, a global consortium of more than 500 free-market think tanks, in supporting decades-long, fossil fuel-funded campaigns promoting climate denial and anti-renewable narratives while obscuring funding origins.
The inquiry further underscored how public debate is distorted by persistent climate science denial, exemplified by exchanges rejecting scientific consensus, and how correcting misinformation is resource-intensive compared to producing it. The committee’s findings point to a complex ecosystem in which digital manipulation, coordinated influence networks, and political amplification collectively undermine climate policy discourse and democratic trust. Source: ABC, G. Hutchens. Senate committee features climate disinformation, the Atlas Network, and Dr Karl’s clash with One Nation . [online] Published 18 February 2026. Available at: https://www.abc.net.au/news/2026-02-19/climate-disinformation-senate-committee-dr-karl-atlas-network/106350002 Top Of Page EU Warns of Coordinated Disinformation Targeting West Africa In a post on X, the European Union in Ghana stated that disinformation campaigns constitute a documented and ongoing risk across West Africa, noting that Ghana is among 13 countries in the region affected by at least 72 identified campaigns aimed at destabilization. The message frames these efforts as deliberate actions by “bad actors,” highlighting the scale and coordinated nature of the threat to regional stability. By emphasizing that awareness matters, the post underscores the importance of public recognition of hostile information activities as a first line of defense. The communication situates disinformation as a systemic regional challenge rather than isolated incidents, reinforcing the need for vigilance against organized influence operations targeting West African states. Source: X, European Union in Ghana (@EuropeInGhana) [online] Published 23 February 2026. 
Available at: https://x.com/EuropeInGhana/status/2025937364342681926 Top Of Page [Appendix - Frameworks to Counter Disinformation] The French response to Digital Foreign Interference In a report , the Australian Strategic Policy Institute (ASPI) argued that the Western alliance faces a growing threat not only from adversaries such as Russia but also from misinformation circulating within allied political ecosystems. Examples included Trump-aligned accounts spreading false claims about France, questioning Europe’s military value, and misrepresenting NATO’s collective role in Afghanistan. In response, France developed a proactive strategy to counter information manipulation. It established VIGINUM, a specialized agency that monitors and exposes foreign digital interference and trains diplomats to actively engage in information battles. French authorities also launched the “French Response” social media account, which uses humor, irony, and facts to debunk false claims and reach wider audiences. In addition, legal action against platforms accused of enabling political interference signaled France’s willingness to hold technology companies accountable and defend its information space. The report concluded that France’s approach offers lessons for allies facing similar challenges. Effective responses include monitoring misinformation from both adversaries and partners, coordinating factual rebuttals, and reinforcing shared values such as burden-sharing within alliances like NATO. Middle powers, including Australia, can support efforts to build information resilience by sharing their experience of what works. Source: Australian Strategic Policy Institute, E. Frécon & Fitriani, France strikes to address misinformation weakening Western alliance . [online] Published 20 February 2026. 
Available at: https://www.aspi.org.au/strategist-posts/france-strikes-to-address-misinformation-weakening-western-alliance/ Top Of Page Questionable Efforts to Fight Online Financial Scams The European Commission announced that it has launched formal investigations under the Digital Services Act into major online platforms such as Apple, Google, Microsoft, and Booking[.]com to assess their efforts to prevent financial scams. The move comes amid a sharp rise in online fraud, with billions of dollars lost annually in the United States alone. Regulators are concerned that fraudulent app store products, fake hotel listings, and manipulated search results are enabling large-scale deception. While the EU’s approach seeks accountability from very large online platforms, critics question whether focusing primarily on these companies addresses the deeper roots of the problem. A major driver of today’s fraud surge is AI-powered disinformation, particularly deepfakes and voice cloning. The number of deepfake files online has grown dramatically, and such tools are increasingly used to impersonate executives, clone voices, and create convincing fake websites. In this environment, disinformation is not just political—it has become a powerful tool for financial crime. The essay argued that responsibility should extend beyond large platforms to the broader digital infrastructure where fraudulent content originates. Online marketplaces, domain registrars, and hosting services often serve as the entry points for scammers but face less scrutiny. A stronger focus on proactive identity verification could disrupt fraud at its source. Source : The Hill, I. Bondar-Mucci, Efforts to fight online financial scams are missing the mark . [online] Published 22 February 2026. 
Available at: https://thehill.com/opinion/technology/5748005-online-platforms-fraud-prevention/ Top Of Page Building Immunity Against the Disinformation ‘Virus’ An article by the University of Lodz compared disinformation to a virus that spreads rapidly through everyday information channels. According to Dr. Krzysztof Grzegorzewski of the University of Lodz, disinformation is the deliberate spread of false information designed to appear true, while misinformation is shared in good faith, and malinformation involves the selective or harmful use of real facts. Unlike harmless gossip, disinformation is often intentional and strategically amplified, functioning like a “megaphone” that manipulates perception and distorts public debate. The spread of this “information virus” is accelerated by clickbait and platform algorithms. Sensational headlines trigger dopamine-driven curiosity, encouraging users to click and consume more emotionally charged content. Algorithms then personalize and intensify this exposure, making disinformation travel faster than ever. Because technology evolves more quickly than human habits, individuals often struggle to adapt, allowing false or misleading narratives to circulate widely before they are critically examined. To counter this pandemic of fake news, the expert emphasized early media education, critical thinking, and consistent fact-checking. Reliable journalism and science communication are presented as essential “vaccines” that strengthen public resilience. Initiatives such as “Science Inspires” promote clear, contextualized knowledge rather than sensationalism, helping audiences distinguish trustworthy information from manipulative content. Source: University of Lodz, Disinformation Is a Virus. How to Fight the Fake News Pandemic? Polish Science Day . [online] Published 2026. 
Available at: https://www.uni.lodz.pl/en/news/details/disinformation-is-a-virus-how-to-fight-the-fake-news-pandemic-polish-science-day Top Of Page EU and Spain Escalate Fight Against Climate Disinformation Climate-related disinformation spread rapidly online after a ten-hour power outage in the Iberian Peninsula on April 28, 2025, the United Nations Office for Disaster Risk Reduction (UNDRR) reported , echoing earlier false narratives that blamed renewable energy for the 2021 Texas blackout. The article argues that such disinformation is deliberately engineered by public relations actors and professional propagandists and amplified by social media platform algorithms that prioritize engagement-driven content, including outrage and falsehoods. It frames climate disinformation as both a commercial and political enterprise, alleging that Big Tech platforms profit from algorithmic amplification while coordinated actors exploit crises to undermine trust in public institutions, science, and democratic governance. The piece also claims that European Union digital accountability efforts, particularly the Digital Services Act (DSA), are facing political pushback from MAGA-aligned U.S. lawmakers seeking to weaken regulatory safeguards. In response, the European Union’s endorsement of the UN Declaration on Information Integrity on Climate Change is presented as a structural countermeasure, committing governments to protect science-based information, support independent journalism, and counter disinformation that distorts public understanding of climate issues. The article highlights enforcement actions under the DSA, fossil fuel advertising bans in cities such as The Hague, Amsterdam, and Florence, and Spain’s proposed reforms to hold platform executives legally accountable, criminalize algorithmic amplification of illegal content, monitor polarization, and enhance protections for minors. 
These measures are portrayed as a coordinated effort by European actors to shift from reactive moderation to systemic accountability, positioning digital regulation as a central tool in countering climate-related disinformation and safeguarding democratic stability. Source: UNDRR, D. Schran & P. Newell, Spain to protect public against climate disinformation. [online] Published 19 February 2026. Available at: https://www.preventionweb.net/news/spain-protect-public-against-climate-disinformation Top Of Page [CRC Glossary] The modern Information Environment is projected to continue growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page
- Cyber-based hostile influence campaigns 9th - 15th February 2026
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Facebook Meta's Community Notes System Showed Limits in Countering Misleading Content X X Users Asked Grok to Unblur Photos of Children from Epstein Files Grok Raised Regulatory Concerns About Misinformation and Platform Risk Iranian Officials Received Premium Services on X During Protest Crackdown [State Actors] Russia The Russian Federation Used Video Games to Spread State Propaganda Russian Influence Operation Targeted Ukrainians During the Winter Olympics Check First Unveiled GRU's Information Operations Troops with OSINT and Phaleristics France's Municipal Elections and the Risk of Russian Influence Disinformation Continued After the New START Treaty Expired Report Attributes Coordinated Disinformation Campaigns to Russian State-Aligned Actors [General Reports] False Allegations Labeled Epstein an Israeli Spy Epstein Document Dump Revived the Long-Debunked Pizzagate Conspiracy Afghan Taliban Launched Digital Campaign Against Pakistan After Alleged Airstrikes India Mandates AI Content Labeling and Rapid Takedowns to Curb Synthetic Misinformation Foreign Hybrid Campaigns Intensify Across the European Union TikTok Algorithms Amplify Disinformation and Youth Vulnerabilities in Taiwan and Myanmar G7 Ranks Cyberattacks and Disinformation Among Top Security Threats in 2026 Index Coordinated Disinformation Campaign Fuels Anti-Migrant Vigilantism in Poland [Appendix - Frameworks to Counter Disinformation] Design Thinking Workshop Informed New 
Approaches to Counter Disinformation and FIMI EU Launches Initiative to Counter Online Polarization and Disinformation [CRC Glossary] [Report Highlights] An article by Bellingcat reports that, after the US Department of Justice released 3.5 million pages of documents related to convicted sex offender Jeffrey Epstein, users on X began asking Grok to “unblur” images in which the faces of children and women had been redacted for privacy. As published by RAND, controversy surrounding Grok has drawn regulatory scrutiny from Ofcom and the European Commission, highlighting growing concerns about misinformation and systemic platform risks. NewsGuard’s Reality Check reports that a Kremlin-linked influence operation known as Matryoshka has circulated at least 28 fabricated news reports targeting Ukraine and Ukrainians during the 2026 Winter Olympics in Italy. A report by the Ukrainian Centre for Strategic Communications and the NATO Strategic Communications Centre of Excellence concludes that coordinated, state-aligned Russian actors conducted structured, cross-platform disinformation campaigns that employed forged content, narrative laundering, and synchronized amplification to undermine Ukraine and advance the Kremlin's geopolitical objectives. The latest Institute for Strategic Dialogue (ISD) publication finds that Russia, alongside China and Iran, is conducting coordinated hybrid campaigns across the EU that combine disinformation, cyberattacks, political subversion, and kinetic actions to undermine democratic institutions, inflame societal divisions, and weaken European unity. The 2026 Munich Security Index, as covered by Infosecurity Magazine, shows that G7 nations rank cyberattacks as their top security threat for the second consecutive year, with disinformation campaigns also identified as a leading concern, underscoring heightened awareness of digital and information-based hostile influence risks.
[Social Media Platforms] Facebook Meta's Community Notes System Showed Limits in Countering Misleading Content An article by Politico reports that, in the wake of the fatal shootings of Alex Pretti and Nicole Renee Good in Minneapolis, Meta’s new volunteer-driven fact-checking system faced a major test as misleading and AI-generated posts flooded Facebook, Instagram, and Threads. The company adopted this model after its CEO, Mark Zuckerberg, ended partnerships with professional U.S. fact-checkers, citing perceived political bias. Inspired by X’s Community Notes system, Meta decided to rely on volunteers to propose and rate notes that add context to questionable posts. Only notes that receive broad support across differing viewpoints are published, meaning users typically see comments that reflect cross-partisan agreement. However, behind the scenes, the process is often marked by partisan conflict. A review of Meta’s closed rollout found that many proposed notes were themselves politically charged, echoing concerns raised by professional fact-checkers that volunteers may be motivated by ideology rather than neutrality. While overtly one-sided notes are usually filtered out by low ratings, critics question whether the system can effectively counter misinformation, especially on platforms such as Facebook, where content spreads within private networks rather than in high-visibility public debates. Meta said the program is expanding rapidly, with tens of thousands of contributors and thousands of published notes. Yet only a small fraction of proposed notes ultimately appear publicly. Source: Politico, A. Ng., Meta’s answer to ‘politically biased’ fact-checkers: Let them fight . [online] Published 10 February 2026. 
Available at: https://www.politico.com/newsletters/digital-future-daily/2026/02/10/metas-answer-to-politically-biased-fact-checkers-let-them-fight-00773633 Top Of Page X X Users Asked Grok to Unblur Photos of Children from Epstein Files In coverage by Bellingcat, users on X were reported to have asked Grok to “unblur” images in which the faces of children and women had been redacted for privacy, after the US Department of Justice released 3.5 million pages of documents related to convicted sex offender Jeffrey Epstein. The documents were already criticized by survivors, 18 of whom issued a joint statement condemning the release for exposing identifying details. Thousands of documents were later removed due to flawed redactions that reportedly compromised the identities of nearly 100 survivors. Between the 30th of January and the 5th of February 2026, researchers reviewed 31 requests in which Grok was asked to remove redactions or identify individuals in protected images. Although Grok often acknowledged that minors’ faces were blurred for legal and ethical reasons, it still generated AI images in response to most requests, producing fabricated faces that ranged from realistic to clearly distorted. Some of these posts attracted millions of views, amplified by X’s monetization system. Following media inquiries, stricter guardrails appeared to be implemented. In subsequent days, Grok largely refused such requests or instead generated unrelated images. X did not respond to requests for comment. Grok has also been used to create manipulated images tied to other incidents, sometimes falsely identifying individuals. Investigations have been launched in countries including the UK and France over the chatbot’s role in generating non-consensual sexual deepfakes, including child abuse imagery. Source: Bellingcat, K. Koltai, Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children. [online] Published 10 February 2026.
Available at: https://www.bellingcat.com/news/2026/02/10/grok-epstein-photos/ Top Of Page Grok Raised Regulatory Concerns About Misinformation and Platform Risk As published by Rand , controversy surrounding Grok has drawn regulatory scrutiny from Ofcom and the European Commission, highlighting growing concerns about misinformation and systemic platform risks. The case emerged at a critical moment when UK and EU digital regulations shifted from voluntary compliance to active enforcement. Regulators increasingly rejected claims that harmful outputs were isolated glitches, instead focusing on whether platforms had proactively identified and mitigated foreseeable systemic risks, including misinformation, abuse, and radicalization. Generative AI poses distinct risks of misinformation because it engages users in personalized, iterative conversations that, over time, can reinforce false narratives or extremist views. Unlike standalone AI tools, Grok operates within X, where its conversational responses can be rapidly amplified and repurposed. The interactions occur privately and cumulatively, making harm less visible, harder to evidence, and more difficult to audit using tools designed for posts, shares, or recommendations. The report concluded that Grok raises fundamental questions about whether platforms can credibly self-govern where systemic risks are foreseeable, and whether governments can meaningfully enforce laws designed to protect users, democratic processes, and the integrity of the information environment in a fragmented, cross-border digital ecosystem. Source: RAND, D. Toro, Grok Isn’t a Glitch—It Is a Regulatory Reckoning . [online] Published 9 February 2026. 
Available at: https://www.rand.org/pubs/commentary/2026/02/grok-isnt-a-glitch-it-is-a-regulatory-reckoning.html Top Of Page Iranian Officials Received Premium Services on X During Protest Crackdown The Tech Transparency Project reports that X provided paid premium subscriptions to dozens of Iranian officials, state agencies, and state-controlled media outlets, including individuals subject to U.S. sanctions. These premium accounts, marked with blue checkmarks, offered enhanced features, including increased visibility, longer posts, and monetization tools. Many of these accounts were used to spread government messaging and propaganda during nationwide protests and internet blackouts in Iran, including threats of harsher repression against demonstrators. The findings raised concerns that X may have violated US sanctions, which allow free access to communication platforms but prohibit charging sanctioned entities for additional services. Some premium badges were removed after media inquiries, but others remained active. The report highlighted that Elon Musk appeared to show support for the recent anti-government protests in Iran, taunting Khamenei on X, changing the Iranian flag emoji on X to the pre-revolutionary version, and allowing Iranians to access his Starlink satellite service for free amid an internet blackout in the country. Source : Tech Transparency Project, Iranian Officials Get Premium Treatment on X Amid Protester Crackdown . [online] Published 12 February 2026. Available at: https://www.techtransparencyproject.org/articles/iranian-officials-get-premium-treatment-on-x-amid-protester-crackdown Top Of Page [State Actors] Russia The Russian Federation Used Video Games to Spread State Propaganda A report by EU VS Disinfo revealed that the Russian Federation recognized the potential of video games as a tool of ideological influence and used them to spread propaganda. 
Unlike passive media, games immerse players in interactive worlds in which ideological messages are embedded in narratives and mechanics. With 3.3 billion players globally, many aged 18–34, games can shape perceptions through community dynamics, repetition of symbols, and the illusion of personal choice. The Kremlin has systematically integrated the gaming industry into its state information strategy under a model of “digital authoritarianism.” Through state funding, censorship, and the development of domestic platforms and engines, Russian authorities have promoted “patriotic” content aligned with official narratives. Organisations such as the Institute for Internet Development provide substantial subsidies to ideologically aligned projects, while youth movements such as “Yunarmiya” incorporate military-themed games into their activities. Some titles, reportedly recommended for cadets, recreate battles in Ukraine with overt pro-Kremlin symbolism, presenting war as heroic and justified. Gaming platforms such as Steam further enable the spread of Z-propaganda and extremist content within echo chambers, normalizing militarism and dehumanizing narratives under the guise of humor or community culture. Source: EU VS Disinfo, A. Vyshniakova, New weapon in the shadows: how the Kremlin uses video games for war propaganda . [online] Published 11 February 2026. Available at: https://euvsdisinfo.eu/new-weapon-in-the-shadows-how-the-kremlin-uses-video-games-for-war-propaganda/ Top Of Page Russian Influence Operation Targeted Ukrainians During the Winter Olympics Findings by NewsGuard’s Reality Check show that a Kremlin-linked influence operation known as Matryoshka circulated at least 28 fabricated news reports between 30 January and 10 February 2026, targeting Ukraine and Ukrainians during the 2026 Winter Olympics in Italy.
The fake materials, shared mainly on Telegram and viewed approximately two million times, mimicked respected outlets such as Reuters, Euronews, CBC, and Italian institutions. Among the allegations were claims that Ukrainian staff spat in food, that anti-doping rules were relaxed for Ukrainian athletes, and that Ukrainians were segregated in the Olympic Village due to aggressive behavior, assertions denied by official sources. Other viral claims accused Ukrainians of vandalizing Rome’s Colosseum, staging obscene protests, creating fraudulent hotel booking websites, or participating in the Olympics to avoid military service. As the Games approached, the disinformation gained broader traction, spreading beyond pro-Kremlin Telegram channels to multilingual websites in the Pravda network and even being repeated by mainstream Russian media. Source: NewsGuard, A. Lee. Russia Targets Ukraine at the Winter Olympics with Flood of Fabricated News Reports . [online] Published 10 February 2026. Available at: https://www.newsguardrealitycheck.com/p/russias-olympian-disinformation-game Top Of Page CheckFirst Unveiled GRU's Information Operations Troops with OSINT and Phaleristics Building on a previous investigation into the FSB’s 16th Centre, CheckFirst's report applied OSINT and phaleristics, the study of military decorations and insignia, to uncover the structure of the GRU’s Information Operations Troops (VIO). By analyzing 118 photographs of insignia, patches, and pennants linked to the GRU, researchers reconstructed key elements of the VIO’s organization, chain of command, and likely geographic footprint. Although the VIO has been publicly acknowledged since 2014, its internal structure and missions have remained opaque. The study suggested that the VIO was formalized around 2014, likely under Military Unit 55111, consolidating pre-existing GRU units into a unified command.
The VIO comprises at least fifteen units spanning three core specializations: encryption and cryptanalysis, computer network operations, and psychological operations. Some units, such as 26165 and 74455, previously linked to the APT28 and Sandworm cyber groups, are well documented, while others, including Units 20766, 48707, and 20978, have received little or no prior public attention. While identifying operational locations proved challenging, the researchers triangulated data from investigative journalism, leaked materials, Russian public records, and social media platforms to assess probable sites. The report concluded that combining phaleristics with OSINT offers a replicable and effective method for illuminating clandestine Russian intelligence structures, helping to clarify the scale and evolution of the GRU’s information and cyber warfare capabilities amid ongoing conflict with Ukraine and increased operations targeting Europe. Sources: CheckFirst, Unveiling FSB’s 16th Center SIGINT Capabilities . [online] Published July 2025. Available at: https://checkfirst.network/wp-content/uploads/2025/07/OSINT_Phaleristics_Unveiling_FSB_16th_Center_SIGINT_Capabilities.pdf CheckFirst, OSINT & Phaleristics: Unveiling GRU’s Information Operations Troops (VIO) . [online] Published February 2026. Available at: https://checkfirst.network/2026/02/OSINT_Phaleristics_Unveiling_GRUs_Information_Operations_Troops.pdf Top Of Page France's Municipal Elections and the Risk of Russian Influence An article published by Le Monde reports that a network of more than 80 fake French-language news websites, identified by Reporters Without Borders and reportedly linked to the Russian group Storm-1516, has raised concerns about potential foreign interference ahead of France’s March 2026 municipal elections. These sites imitate local media and promote pro-Russian narratives. 
Similar tactics have been observed in countries such as Moldova, where pro-Russian disinformation targeted legislative elections, though pro-European candidates ultimately prevailed. The political choices of major metropolitan areas are of interest to foreign powers, sometimes for primarily economic reasons; at the same time, these local races are particularly difficult for foreign actors to read. Russia’s past information campaigns in France have focused more on national themes, such as criticizing French leadership or promoting narratives of Western decline, rather than attempting to shape specific local races, including high-profile contests in Paris involving allies and opponents of Emmanuel Macron. In practice, most manipulation in municipal elections has come from domestic actors rather than foreign states. Cases in multiple countries have involved candidates spreading false information, using doctored media, or engaging in questionable campaign funding. Source: Le Monde, D. Leloup, France's municipal elections at risk of foreign interference . [online] Published 10 February 2026. Available at: https://www.lemonde.fr/en/opinion/article/2026/02/10/france-s-municipal-elections-at-risk-of-foreign-interference_6750325_23.html Top Of Page Disinformation Continued After the New START Treaty Expired EU VS Disinfo reports that the New START Treaty expired on 6 February 2026, bringing to an end the last remaining nuclear arms control agreement between Russia and the United States. In response, pro-Kremlin information campaigns sought to deflect blame from Moscow by portraying Russia as a responsible actor whose extension efforts were ignored by Washington, while warning of nuclear catastrophe and promoting the idea of a new multipolar nuclear order. These narratives aimed to frame the treaty’s collapse as inevitable and as the result of Western actions, despite the Kremlin’s own role in weakening the agreement over several years.
The treaty, which entered into force in 2011, limited strategic nuclear arsenals and included transparency measures such as on-site inspections. However, Russia suspended inspections in 2022, citing Western sanctions after it invaded Ukraine, and formally halted its participation in 2023. At the same time, pro-Kremlin outlets intensified messaging accusing Western countries and NATO of provoking nuclear confrontation, spreading false claims about Ukrainian nuclear threats, biological weapons, and Western sabotage. Source: EU VS Disinfo, As New START ends, disinformation about it continues . [online] Published 13 February 2026. Available at: https://euvsdisinfo.eu/as-new-start-ends-disinformation-about-it-continues/ Top Of Page Report Attributes Coordinated Disinformation Campaigns to Russian State-Aligned Actors The joint report by the Ukrainian Centre for Strategic Communications and the NATO Strategic Communications Centre of Excellence develops and tests a structured Information Influence Attribution Framework (IIAF) to assess Russian information influence operations targeting Ukraine and neighbouring regions. Focusing on campaigns linked to Russian state and state-aligned actors, including RT, Sputnik, pro-Kremlin Telegram networks, and proxy outlets, the report documents coordinated disinformation efforts built on falsified content, impersonation of trusted media brands, source laundering, and synchronized cross-platform amplification. Case studies demonstrate how Russian actors deploy forged documents, deceptively edited images and videos, bot networks, and multilingual content tailored to specific audiences to advance narratives portraying Ukraine as corrupt, unstable, or manipulated by the West. 
The analysis shows that these campaigns rely on layered tactics: technical infrastructure reuse (shared domains, hosting, analytics IDs), behavioural coordination (near-simultaneous reposting, template-based comment flooding, and tasking via channels such as the “Digital Army of Russia”), and contextual alignment with Kremlin geopolitical objectives. Disinformation narratives are strategically timed to exploit political events, social tensions, and policy debates, while narrative laundering techniques obscure their origins by staging dissemination across both inauthentic outlets and overtly state-linked media ecosystems. Applying a spectrum of state responsibility, the report concludes that several examined operations fall between “state-shaped” and “state-coordinated,” reflecting high-confidence assessments of structured, state-aligned influence activity rather than organic or rogue behaviour. Source: The Ukrainian Centre for Strategic Communications & NATO Strategic Communications Centre of Excellence, V. Smith & J. Pamment & S. Dikhtiarenko & B. Heap & D. Tsurtsumia & A. Maunder, Attributing Russian Information Influence Operations . [online] Published January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Attribution_Russian_Information_Influence.pdf?zoom=page-fit Top Of Page [General Reports] False Allegations Labeled Epstein an Israeli Spy In its latest debunking , NewsGuard’s Reality Check examines false claims that emerged after the January 2026 release of millions of Justice Department documents related to Jeffrey Epstein, alleging that the files proved he was an Israeli spy. The allegation centered on an October 2020 FBI document included in the release, which summarized statements from an unnamed source who described Epstein as a “co-opted Mossad agent.” Anti-Israel accounts on X amplified screenshots of the document, asserting it confirmed that Epstein had trained as a spy under former Israeli Prime Minister Ehud Barak.
Posts spreading the claim gained millions of views and tens of thousands of engagements. However, the document in question is an FD-1023 form, which records raw, unverified information from confidential sources and does not constitute confirmed findings or investigative conclusions. The FBI has explicitly stated that such forms do not validate or establish the credibility of the claims they contain. No credible evidence has emerged showing that Epstein was affiliated with Israeli intelligence. In response to similar allegations made previously, former Israeli Prime Minister Naftali Bennett publicly denied that Epstein had ever worked for the Mossad. Source : NewsGuard, J. Schofield, Debunk: Unverified Allegations Used to Label Epstein an Israeli Spy . [online] Published 9 February 2026. Available at: https://www.newsguardrealitycheck.com/p/was-epstein-an-israeli-spy-newsguards Top Of Page Epstein Document Dump Revived the Long-Debunked Pizzagate Conspiracy Following the Justice Department’s January 2026 release of documents related to Jeffrey Epstein, NewsGuard’s Reality Check reports that the long-debunked “Pizzagate” conspiracy theory, falsely claiming that top Democrats ran a child sex-trafficking ring out of Comet Ping Pong, has resurfaced. Online users cited hundreds of references to the word “pizza” in the files, alleging that it was coded language for abuse. Social media posts promoting the claim surged dramatically, and the commentator Tucker Carlson amplified the theory in a YouTube episode, suggesting it might be real despite having previously rejected it. However, a review of the documents found no evidence supporting the conspiracy. References to “pizza” were largely mundane, such as casual emails discussing restaurants or kitchen equipment. The pizzeria and its owner were mentioned only briefly, including in an article about the conspiracy itself and an unverified tip. 
Authorities have repeatedly debunked allegations that figures such as Hillary Clinton were involved. Although “pizza” appears in 842 documents, the files contain more references to other foods, such as “salad” and “soup.” The conspiracy has previously had dangerous real-world consequences, including a 2016 incident in which an armed man entered the restaurant believing he was rescuing children. Source : NewsGuard, M. Wollen & S. Howard, Epstein Document Dump Breathes New Life into Long-Debunked Pizzagate Conspiracy — Pushed Now By Tucker Carlson . [online] Published 11 February 2026. Available at: https://www.newsguardrealitycheck.com/p/epstein-files-revive-pizzagate-with Top Of Page Afghan Taliban Launched Digital Campaign Against Pakistan After Alleged Airstrikes An article by The Jamestown Foundation reports that, following alleged Pakistani airstrikes near Kabul in October 2025 targeting leaders of Tehreek-e-Taliban Pakistan (TTP), the Taliban launched a coordinated digital campaign to counter Pakistan’s accusations and reshape regional narratives. Using social media, affiliated news outlets, militant songs, and propaganda videos, the Taliban sought to portray Pakistan as the aggressor while denying claims that it sheltered militant groups. A widely circulated video featuring TTP leader Mufti Noor Wali Mehsud claimed he was not in Afghanistan, aiming to undermine Pakistan’s justification for the strikes and weaken its credibility. Territorial narratives rejecting the Durand Line border and circulated maps depicting parts of Pakistan as belonging to a so-called Greater Afghanistan were also promoted, reinforcing nationalist and revisionist claims. Source : The Jamestown Foundation, R. Nasar, The Afghan Taliban’s ‘Digital War’ Against Pakistan . [online] Published 12 February 2026. 
Available at: https://jamestown.org/the-afghan-talibans-digital-war-against-pakistan// Top Of Page India Mandates AI Content Labeling and Rapid Takedowns to Curb Synthetic Misinformation According to a TechXplore article , India has amended its 2021 information technology regulations to tighten oversight of artificial intelligence–generated content, requiring social media platforms to clearly and permanently label synthetic media and comply with government takedown requests within three hours, replacing the previous 36-hour window. The updated rules, effective February 20, apply broadly to content “created, generated, modified, or altered” through computer resources, excluding routine or good-faith editing. Synthetic content is defined as material that appears real or is likely to be perceived as indistinguishable from an actual person or event, a definition that directly targets the growing spread of AI-generated misinformation among India’s more than one billion internet users. The regulations mandate that platforms such as X, Instagram, and Facebook deploy automated tools to prevent the dissemination of illegal content, including false documents and sexually abusive material, and ensure that AI-manipulated content carries indelible markings that cannot be removed or suppressed. The government has also operationalized enforcement through the Sahyog portal, which automates and streamlines the submission of official takedown notices. The announcement precedes a major global AI summit in New Delhi, underscoring India’s intent to assert regulatory control over synthetic media and to hold platforms accountable amid rapid AI expansion and rising disinformation risks. Source : TechXplore, A. Zinin, India ramps up AI rules for social media platforms . [online] Published 10 February 2026.
Available at: https://techxplore.com/news/2026-02-india-ramps-ai-social-media.html Top Of Page Foreign Hybrid Campaigns Intensify Across the European Union The latest Institute for Strategic Dialogue (ISD) publication documents a sustained campaign of hybrid activity targeting all 27 EU member states since Russia’s full-scale invasion of Ukraine in February 2022. Russia is identified as the primary threat actor in most cases, with China and Iran also implicated. The report categorizes hybrid threats into information operations, cyber operations, kinetic actions, political and civil society subversion, and malign finance. Across the cases, adversarial states deploy coordinated disinformation campaigns, inauthentic news sites, bot amplification networks, deceptive branding, and impersonation tactics to manipulate public opinion, inflame ethnic and political tensions, interfere in elections, and erode trust in democratic institutions. These efforts are often synchronized with cyberattacks, espionage, and covert financing of political actors to maximize destabilizing impact. The analysis highlights a notable escalation in kinetic and disruptive tactics accompanying information manipulation, including sabotage of critical infrastructure, arson, drone incursions, assassination plots, and the recruitment of local proxies through encrypted platforms. Many incidents combine physical acts, such as vandalism or migrant weaponization, with coordinated narrative amplification through state media, Telegram channels, and bot networks to magnify psychological and political effects. The report concludes that hybrid operations increasingly blur the line between foreign and domestic actors, leveraging criminal networks, diaspora communities, and local intermediaries to obscure attribution while advancing strategic goals of undermining European unity, weakening institutional trust, and challenging liberal democratic governance. Source : Institute for Strategic Dialogue, K. Sikora & L. 
Savoia & B. Schafer, Europe’s Other Battlefields: Foreign Hybrid Threats in the EU . [online] Published 12 February 2026. Available at: https://www.isdglobal.org/publication/europes-other-battlefields-foreign-hybrid-threats-in-the-eu/ Top Of Page TikTok Algorithms Amplify Disinformation and Youth Vulnerabilities in Taiwan and Myanmar A comparative study by Doublethink Lab examines TikTok’s impact on adolescents in Taiwan and Myanmar, highlighting its role as both a vector for disinformation and a platform vulnerable to algorithmically amplified influence operations. In Myanmar, following the 2021 military coup and restrictions on Facebook and VPNs, propaganda and disinformation campaigns shifted toward Telegram and TikTok, which were perceived as having weaker moderation. Youth reported exposure to political propaganda, hate speech, and misinformation, often responding by skipping or reporting content but facing structural challenges in verification. In Taiwan, concerns extend beyond harmful content to the platform’s potential role in amplifying narratives aligned with the People’s Republic of China (PRC). During the 2024 presidential election, adolescents reported increased exposure to short, decontextualized political videos that criticized Taiwan’s democratic institutions and subtly promoted favorable perceptions of PRC governance. A 2025 nationwide study cited in the report found that active TikTok users showed higher agreement with conspiracy-like and PRC-aligned narratives, suggesting a measurable relationship between platform engagement and increased populist skepticism toward democratic institutions. The report highlights TikTok’s highly personalized recommendation algorithm as a central driver of user exposure. Adolescents in both contexts described the platform as “precise” and immersive, with rapid adaptation to user behavior reinforcing content themes and potentially narrowing informational diversity. 
In Taiwan, researchers documented algorithmic promotion of borderline sexually suggestive content involving minors, alongside accounts linked to potential exploitation and scam networks, while in Myanmar, the primary risks centered on political manipulation and harassment. Across both contexts, the study identifies algorithmic amplification, subtle embedding of political messaging within entertainment content, and the use of emotionally charged, fragmented narratives as key tactics. The findings underscore how platform design, limited transparency, and weak regulatory oversight can facilitate information manipulation, normalize harmful behaviors, and erode digital literacy, particularly among adolescents in politically sensitive environments. Source : Doublethink Lab, E. Hsu & Iris & P. S. S. Thant, Algorithms, Digital literacy, and Social influence: TikTok use among youths of Taiwan and Myanmar . [online] Published 13 February 2026. Available at: https://medium.com/doublethinklab/algorithms-digital-literacy-and-social-influence-tiktok-use-among-youths-of-taiwan-and-myanmar-5ac4ee947b8c Top Of Page G7 Ranks Cyberattacks and Disinformation Among Top Security Threats in 2026 Index As reported by Infosecurity Magazine , the Munich Security Index (MSI) 2026, released at the Munich Security Conference, identifies cyberattacks as the top perceived national security risk among G7 countries for the second consecutive year, followed by economic or financial crises and disinformation campaigns from adversaries. The ranking reflects a sustained rise in the prominence of cyber threats in G7 risk assessments, with concern most pronounced in Germany, the UK, and Japan. Disinformation campaigns are widely recognized as a major hostile tactic, a core security concern alongside cyber operations, indicating growing awareness of information manipulation as a strategic threat vector.
In contrast, among Brazil, India, China, and South Africa (BICS), cyber threats have declined in perceived urgency, while disinformation ranks seventh, suggesting differing threat prioritization across geopolitical blocs. The report is based on a November 2025 survey of more than 11,000 respondents across G7 and BICS countries, employing representative sampling and multiple evaluative criteria to assess perceived risk severity and imminence. While most countries reported declining threat perceptions overall, concerns have intensified in the UK, the US, and India, particularly regarding economic instability, democratic fragility, civil unrest, and political violence, factors that can intersect with cyber and disinformation campaigns in destabilizing democratic systems. The authors caution that polling in autocratic contexts, particularly in China, may limit respondents’ willingness to express views freely, highlighting methodological constraints in assessing perceptions of information and cyber threats across political systems. Source: Infosecurity Magazine, K. Poireault, Munich Security Conference: Cyber Threats Lead G7 Risk Index, Disinformation Ranks Third . [online] Published 13 February 2026. Available at: https://www.infosecurity-magazine.com/news/munich-security-index-cyberattacks/ Top Of Page Coordinated Disinformation Campaign Fuels Anti-Migrant Vigilantism in Poland An investigation by the DFRLab found that citizen patrols organized along the Polish-German border in 2025 were accompanied by a coordinated online disinformation campaign that amplified anti-migration rhetoric, legitimized vigilante activity, and portrayed the Polish government as weak. 
Inauthentic Facebook pages, AI-generated images, recycled protest photos, and copy-paste narratives were used to stoke fear and frame migrants as a violent “invasion.” Right-wing actors, including members of the Confederation Party and Law and Justice (PiS), boosted this content, at times sharing AI-manipulated visuals and unverified claims. Political posts amplified incidents, such as a soldier’s injury later clarified as accidental, to fuel calls for violence against migrants and accusations that border officials were “traitors.” The investigation identified coordinated networks of Facebook pages posting identical content within seconds or minutes of one another, often with large follower counts but minimal engagement, suggesting likely inauthentic amplification. The PATRIOCI Facebook group emerged as a key amplification node, administered by pages impersonating media outlets and linked to previously dormant or disinformation-associated entities, including accounts that had shared content from Russian state-controlled RT. These networks promoted articles from outlets such as Tysol[.]pl in synchronized bursts and targeted Polish border guards with harassment and doxxing-style identification efforts. AI-generated banners depicting aggressive “foreign” figures and violent imagery were used to mobilize offline protests, while the absence of platform labeling for synthetic media enabled emotionally manipulative visuals to circulate unchecked. The report concludes that coordinated digital manipulation, leveraging algorithmic amplification, opaque platform governance, and political endorsement, distorted Poland’s migration debate, normalized extremist rhetoric, and blurred the line between grassroots activism and orchestrated influence operations. Source : DFRLab, G. Gigitashvili & S. Gelava, Inauthentic networks fuel Poland’s border vigilante movement . [online] Published 11 February 2026. 
Available at: https://dfrlab.org/2026/02/11/inauthentic-networks-fuel-polands-border-vigilante-movement/ Top Of Page [Appendix - Frameworks to Counter Disinformation] Design Thinking Workshop Informed New Approaches to Counter Disinformation and FIMI Based on a design thinking workshop, a paper published by DoubleThink Lab argued that participatory design methods can play a constructive, bounded role in counter-FIMI policy development. The workshop, which involved civil society, defense practitioners, and journalists, noted that traditional top-down, reactive policies often fail to keep pace with fast-moving influence operations. Instead, participatory and human-centered approaches can help policymakers better understand the emotional, institutional, and operational pressures caused by FIMI, including declining trust, decision-making under uncertainty, and professional burnout. The workshop reframed FIMI not only as a problem of false content but also as a broader human and systems problem. Several tensions, such as speed versus due process, control versus trust, and short-term response versus long-term sustainability, appeared across all groups. Based on these insights, the paper proposed exploratory policy recommendations using a four-part framework: Education, Engineering, Enforcement, and Ethics (4E). These include improving media and crisis literacy, investing in shared verification tools and secure communication systems, clarifying institutional roles, strengthening legal protections, and ensuring responses respect rights and wellbeing. The paper concluded that design thinking can play a valuable but limited role in counter-FIMI policymaking. While it does not measure effectiveness or replace legal and evidence-based analysis, it helps policymakers surface hidden assumptions, understand trade-offs, and design more adaptable and human-centered responses.
The recommendations were presented as starting points that require further testing, legal review, and empirical validation. Source : Doublethink Lab, D. V. Ligot, Crafting Policy Recommendations for Countering Disinformation and FIMI . [online] Published 11 February 2026. Available at: https://medium.com/doublethinklab/crafting-policy-recommendations-for-countering-disinformation-and-fimi-952825b9aa2b Top Of Page EU Launches Initiative to Counter Online Polarization and Disinformation The European Commission has launched a €785,000 pilot call for proposals to strengthen social cohesion and improve understanding of online political polarisation, recognizing the growing threat that digitally amplified division poses to democratic resilience in the European Union. The initiative highlights how algorithm-driven platforms, while central to civic engagement, can also amplify extreme, divisive, or manipulative content, creating conditions in which coordinated malicious influence operations and disinformation campaigns can thrive. Such dynamics risk eroding trust in democratic institutions and distorting public understanding of societal issues. The project seeks to build cross-sector and cross-border partnerships to design, test, and scale mitigation strategies that counter harmful polarization without restricting freedom of expression. Proposed activities may include multilingual data collection; the development of AI- and IT-based tools; practitioner toolkits; policy and technical interventions; and the establishment of durable stakeholder networks. By engaging civil society, researchers, media actors, and technology companies, the initiative aims to produce actionable responses that align with the European Democracy Shield and reinforce long-term democratic resilience while safeguarding open and pluralistic debate. Source : European Commission, New call for proposals to advance social cohesion amid online polarisation . [online] Published 12 February 2026. 
Available at: https://digital-strategy.ec.europa.eu/en/news/new-call-proposals-advance-social-cohesion-face-online-polarisation Top Of Page [CRC Glossary] The modern Information Environment is projected to grow ever more complex and sophisticated. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page
- Cyber-based hostile influence campaigns 2nd February - 8th February 2026
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [State Actors] Russia Pro-Kremlin Network Promoted Alberta Separatism to U.S. Audiences Italy Accused Russia-Linked Hackers of Cyberattacks Ahead of 2026 Winter Olympics Russia intensified disinformation as the EU tightened scrutiny of its shadow fleet [General Reports] France Caught Between Russian and U.S.-Linked Disinformation Campaigns Nearly Half of Americans Believed Top False Claims in 2025 Social Media Users Circulated Fake Images Linking Politicians to Epstein Files Pro-IS Propaganda Circulated Online Following Syrian Detention Camp Takeover Digital Risks for the 2026 Winter Olympics Gabbard’s Seizure of Voting Machines Raised Fears of Election Interference The Rise of Violent Rhetoric Targeting US Public Officials Foreign State Disinformation and Cyber Influence Threaten Norway’s Information Space in 2026 [CRC Glossary] [ Report Highlights] An article by EU VS Disinfo showed that Russia has intensified disinformation efforts as the EU tightened scrutiny of its so-called "shadow fleet", hundreds of ageing oil tankers that operate under false flags to evade sanctions on Russian oil exports. According to an article by Le Monde , France has increasingly become a target of overlapping disinformation efforts from pro-Russian networks and Trump-aligned American accounts, creating a complex and unprecedented challenge. 
Ahead of the Milano Cortina 2026 Winter Olympics, Graphika analyzed online conversations from the past 60 days to assess potential digital risks surrounding the Games and highlighted how major sporting events attract influence operations and hacktivist activity. New analysis by the Institute for Strategic Dialogue (ISD) found that violent online rhetoric targeting US public officials more than tripled between 2021 and 2025. The Norwegian Police Security Service’s (PST) “National Threat Assessment 2026” warns that Russia, China, and Iran are intensifying cyber-enabled disinformation, influence operations, and proxy activities to undermine public trust, suppress dissent, and exploit extremist and digital ecosystems within the Norwegian information space. [State Actors] Russia Pro-Kremlin Network Promoted Alberta Separatism to U.S. Audiences As reported by DisinfoWatch , a pro-Kremlin outlet in the Pravda News Network amplified claims that Alberta was moving toward independence and that its secession would deliver "two red senators and electoral votes" to the United States. Identified by France’s Viginum agency as part of the Russian “Portal Kombat” disinformation ecosystem, the network repackaged real but limited developments, including meetings between Alberta separatist figures and U.S. officials, into a misleading narrative. While such meetings did occur, U.S. officials have denied offering any support or commitments. Alberta is a Canadian province, not a U.S. state, and there is no legal or political pathway for it to gain U.S. senators or electoral votes. Alberta is also not institutionally “moving toward independence.” Public support remains a minority position, and current activity is limited to the early stages of a referendum petition process. Even a hypothetical referendum would not result in immediate independence, as Canadian law requires a clear majority, a clear question, and negotiations. Source: DisinfoWatch.
Moscow to MAGA: Pro-Kremlin Pravda Network Amplifying Alberta Separatism . [online] Published 2 February 2026. Available at: https://disinfowatch.org/disinfo/moscow-to-maga-pro-kremlin-pravda-network-amplifying-alberta-separatism/ Italy Accused Russia-Linked Hackers of Cyberattacks Ahead of 2026 Winter Olympics An article by The Record reports that Foreign Minister Antonio Tajani said Italy has blocked a wave of cyberattacks of “Russian origin” targeting its diplomatic missions abroad and infrastructure linked to the 2026 Winter Olympics. The attempted attacks began with Italy’s embassy in Washington and extended to consulates in Sydney, Toronto, and Paris, as well as to hotels associated with the Games in Milan and the Alpine resort of Cortina d’Ampezzo. Around 120 targets were hit in total, but authorities reported no significant disruption. The pro-Russian hacker group NoName057(16) claimed responsibility, describing the campaign as retaliation for Italy’s support for Ukraine. Active since Russia’s full-scale invasion of Ukraine in 2022, the group has repeatedly targeted European countries backing Kyiv using relatively simple but disruptive distributed denial-of-service (DDoS) attacks. Italian officials noted that Russia has been barred from competing as a nation in the 2026 Winter Olympics, a context that echoes previous incidents in which Russia-linked hackers targeted countries hosting major sporting events. Russian state officials have not publicly commented on the allegations. Source: The Record, D. Antoniuk. Italy blames Russia-linked hackers for cyberattacks ahead of Winter Olympics . [online] Published 5 February 2026.
Available at: https://therecord.media/italy-blames-russia-linked-hackers-winter-games-cyberattack Russia intensified disinformation as the EU tightened scrutiny of its shadow fleet EUvsDisinfo reports that Russia has intensified disinformation efforts as the EU tightened scrutiny of its so-called “shadow fleet”: hundreds of ageing oil tankers that operate under false flags to evade sanctions on Russian oil exports. These vessels, often with obscured ownership and inadequate insurance, transport oil and other goods to countries such as China and India, and are also suspected of carrying stolen Ukrainian property or military cargo. EU member states expanded sanctions, now covering nearly 600 vessels, and debated an even tougher 20th sanctions package. Recent interceptions of suspected sanctions-busting tankers in the North Sea and Mediterranean, including a January seizure by the French navy, have triggered a wave of false Kremlin-backed narratives. Pro-Kremlin outlets have portrayed lawful inspections and seizures as “piracy” or a covert naval blockade, falsely claiming Europe is provoking open confrontation. In parallel, deterrence messaging has become increasingly explicit, hinting at retaliation against Western shipping or undersea infrastructure and warning of instability in the Baltic Sea. These claims have no basis in reality, as neither the EU nor NATO is preparing a blockade or military escalation. Source: EUvsDisinfo. Sailing under false flag: Moscow’s ‘shadow fleet’ meets Europe’s resolve . [online] Published 6 February 2026. Available at: https://euvsdisinfo.eu/sailing-under-false-flag-moscows-shadow-fleet-meets-europes-resolve/ [General Reports] France Caught Between Russian and U.S.-Linked Disinformation Campaigns France has increasingly become a target of overlapping disinformation efforts from pro-Russian networks and Trump-aligned American accounts, as detailed by Le Monde , creating a complex and unprecedented challenge.
Since early 2026, Russian-linked actors have continued to circulate fabricated stories aimed at discrediting Ukraine and France, including a false claim that French and Ukrainian officials embezzled billions of euros intended for Rafale fighter jets. Investigations by French and international fact-checkers confirmed the story was entirely fabricated and likely originated from “Storm-1516”, a Russian propaganda group active in France since 2025. At the same time, French authorities have had to counter misleading narratives spreading from U.S. sources, including false claims about President Emmanuel Macron’s role in drug pricing and France’s military record in Afghanistan. This dual threat poses a new challenge for France, as Russia and the United States rely on radically different methods. Russian disinformation operates as a long-term, organised background campaign, often using fake news sites and AI-generated content to subtly influence public opinion. By contrast, Trump-aligned messaging relies on high-visibility, immediate amplification through personal social media accounts, normalising distortions of fact through repetition and provocation. Adding to the challenge is a convergence between pro-Russian and MAGA-aligned narratives, which share anti-elite, anti-EU, anti-Macron, and anti-Ukraine themes and often relay each other’s claims. French authorities have responded by more actively debunking falsehoods on social media, including through the Foreign Ministry’s French Response account and direct interventions from the Élysée. Source: Le Monde, W. Audureau. France faces a crossfire of Russian and American disinformation . [online] Published 2 February 2026. 
Available at: https://www.lemonde.fr/en/les-decodeurs/article/2026/02/02/france-faces-a-crossfire-of-russian-and-american-disinformation_6750034_8.html Top Of Page Nearly Half of Americans Believed Top False Claims in 2025 NewsGuard’s Reality Gap Index found that, from June to December 2025, an average of 46 percent of Americans believed at least one major false claim circulating in the news. In the first six months of tracking, belief levels averaged 50 percent before dipping slightly in December. Fluctuations largely reflected the virality and sensationalism of particular hoaxes, with peaks in midsummer driven by widely shared false claims, such as reports that President Donald Trump had declared martial law in Washington, D.C., or that a Florida detention center was surrounded by an alligator-filled moat. The index, based on monthly YouGov surveys, measured the share of Americans who believed at least one of the top three false claims identified each month by NewsGuard. The surveys also revealed widespread confusion around AI-generated and manipulated media. Large shares of respondents either believed in or were unsure about the authenticity of fake images, videos, and audio, including AI-generated visuals falsely linking Trump to Jeffrey Epstein and fabricated audio clips purporting to capture him berating Cabinet members. Differences by age and political affiliation were present but modest. Younger adults were more likely to express uncertainty, while older respondents showed greater confidence but were not consistently better at identifying false claims. Across all age groups, only about 8–9 percent correctly identified all false claims in a given month. Politically, Republicans were slightly more likely than Democrats and Independents to believe at least one false claim, but the overall difficulty in consistently spotting misinformation cut across party lines. Source: NewsGuard Reality Check, S. Tanner. 
Nearly Half of Americans in 2025 Believed False Claims Across Seven Months of Surveys . [online] Published 2 February 2026. Available at: https://www.newsguardrealitycheck.com/p/nearly-half-of-americans-believed Top Of Page Social Media Users Circulated Fake Images Linking Politicians to Epstein Files Following the U.S. Department of Justice’s release of millions of documents related to Jeffrey Epstein in January 2026, NewsGuard’s Reality Check reported that social media users began circulating AI-generated and digitally manipulated images falsely linking prominent politicians to the convicted sex offender. NewsGuard identified at least seven such images, which together amassed more than 21 million views on X. The images targeted figures across the political spectrum, including New York City Mayor Zohran Mamdani, former South Carolina governor Nikki Haley, and Venezuelan opposition leader Maria Corina Machado, with users claiming the materials were part of the newly released Epstein files. In one widely shared case, fabricated images purported to show Epstein posing with Mamdani as a child and with his mother, filmmaker Mira Nair. Although DOJ files confirm Nair once attended an event hosted by Epstein associate Ghislaine Maxwell, NewsGuard found the images themselves were AI-generated, bearing digital watermarks from Google’s image-generation tools. Other posts falsely claimed Haley had emailed Epstein to arrange flights with her children, but no such message exists in the DOJ database, and the alleged screenshot contained clear inconsistencies. Similarly, an image linking Machado to Epstein was shown to be a manipulated version of an authentic photo that did not include her. As described in a related article , a “red-teaming” audit by NewsGuard of three leading AI image generators found that X’s Grok Imagine consistently produced false images depicting prominent politicians hanging out with convicted sex offender Jeffrey Epstein. 
Google Gemini’s Nano Banana Pro produced convincing visuals as well, but only after some prodding, while OpenAI’s ChatGPT flatly declined all prompts seeking to place Epstein with public figures. Sources: NewsGuard Reality Check, L. Marchl & J. Schofield & I. Chomnalez. AI and Manipulated Images Falsely Implicate Politicians in the Epstein Files . [online] Published 3 February 2026. Available at: https://www.newsguardrealitycheck.com/p/these-shocking-images-were-not-in NewsGuard Reality Check, I. Chomnalez. Prompted to Create Photos of Politicians with Jeffrey Epstein, X’s ‘Grok Imagine’ Produced Convincing Fakes in Seconds, Google’s Gemini Hesitated but Then Complied, while OpenAI’s ChatGPT Refused . [online] Published 5 February 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-tools-willingly-generate-fake Top Of Page Pro-IS Propaganda Circulated Online Following Syrian Detention Camp Takeover An investigation by the Centre for Information Resilience (CIR) found a sharp increase in pro-Islamic State (IS) activity on Facebook following the Syrian government’s takeover of the Al-Hol detention camp in late January 2026. After Kurdish-led forces withdrew from north-eastern Syria, including Al-Hol, IS supporters used social media to spread propaganda and coordinate logistical and financial assistance for detainees attempting to escape. The activity emerged within days of the transfer of control and coincided with President Ahmed al-Sharaa’s efforts to reassert state authority one year after the fall of the Assad regime. CIR documented dozens of Facebook accounts openly calling for violence against Syrian government forces and urging armed resistance. Some posts explicitly encouraged supporters to attack security personnel, while others focused on practical support, such as requests for vehicles to transport detainees’ families out of the camp. 
Financial appeals were also widespread, with pro-IS users soliciting donations from supporters abroad and directing them to private Telegram channels for coordination. Source: CIR Centre for Information Resilience. Pro-IS propaganda circulates online following Syrian detention camp takeover . [online] Published 5 February 2026. Available at: https://www.info-res.org/cir/articles/pro-is-propaganda-circulates-online-following-syrian-detention-camp-takeover/ Digital Risks for the 2026 Winter Olympics In a Graphika Blog post, the Graphika Team analyzed online conversations from the past 60 days to assess potential digital risks ahead of the Milano Cortina 2026 Winter Olympics, highlighting how major sporting events attract influence operations and hacktivist activity. While online discussion remained largely neutral or supportive of the Games, the shift toward controversy-focused narratives created openings for coordinated influence campaigns. IO actors use high-profile events to further their ideological messages or attract engagement. Graphika has not yet seen any known IOs weighing in on the upcoming Games but will continue to monitor signals related to this threat; its monitoring and public reporting during the 2024 Paris Games lead it to expect that IOs will exploit the Milano Cortina Games. Potential threats include the spread of false or misleading claims about political, security, and even accommodation-related developments, which may be supported by AI-generated content. Additionally, no major hacktivist campaigns have yet targeted the 2026 Winter Olympics, but past events indicate a strong likelihood of cyber disruptions, data leaks, doxing, and DDoS attacks. During the Paris 2024 Olympics, “pro-Russia” and “anti-Zionist” hacktivist groups leaked sensitive athlete data and attacked national websites.
Politico reported that Italian authorities have already thwarted hacking attempts against several embassies and Olympic venues and attributed a series of DDoS attacks to a pro-Kremlin gang that claimed responsibility on Telegram. Security groups tracking these threats have also warned that hackers may be preparing to spread disinformation online to discredit the Games or to launch further cyberattacks on Olympic sites ahead of the Opening Ceremony. The Italian National Cybersecurity Agency (ACN) has embedded experts inside the Milano-Cortina security center, with additional staff at ACN HQ, while coordinating with the IOC, CISA, and other international partners to share real-time threat intelligence. Source: Graphika Blog, The Graphika Team. Slippery Slopes: Online Risk and the 2026 Winter Olympics . [online] Published 3 February 2026. Available at: https://graphika.com/posts/slippery-slopes-online-risk-and-the-2026-winter-olympics Politico, M. Miller & D. Nickel. ‘Irresistible stage’: Olympic authorities on high alert for cyberattacks as winter games kick off . [online] Published 6 February 2026. Available at: https://www.politico.com/news/2026/02/06/olympics-high-alert-cyberattacks-00767434 Gabbard’s Seizure of Voting Machines Raised Fears of Election Interference As reported by Politico , as the U.S. midterm elections approached, President Donald Trump intensified his questioning of election integrity, reviving baseless claims about a "rigged" 2020 vote and urging federal involvement in election administration. These concerns have deepened following revelations that Director of National Intelligence Tulsi Gabbard oversaw the seizure of voting machines in Puerto Rico and appeared at an FBI raid tied to past election fraud allegations.
Democratic lawmakers argued that these actions blur the line between foreign intelligence oversight and domestic election matters, raising concerns that the administration may be laying the groundwork to challenge future election outcomes. Gabbard’s office claimed the investigation uncovered serious cybersecurity vulnerabilities, including cellular modems that could connect machines to foreign networks. However, election security experts and bipartisan lawmakers noted that such vulnerabilities have been publicly documented for years and are extremely difficult to exploit without physical access. They emphasized there is no evidence these issues have ever altered U.S. election results. Critics also pointed to recent cuts to federal programs designed to protect election infrastructure and counter foreign threats to U.S. democracy. Source: Politico, M. Miller & D. Nickel. Gabbard’s seizure of voting machines heightens concerns of midterm meddling . [online] Published 6 February 2026. Available at: https://www.politico.com/news/2026/02/06/gabbard-trump-election-midterms-voting-integrity-00769768 Top Of Page The Rise of Violent Rhetoric Targeting US Public Officials New analysis by the Institute for Strategic Dialogue (ISD) found that violent online rhetoric targeting US public officials more than tripled between 2021 and 2025, rising by 241 percent overall with a median monthly increase of 5 percent. Threats intensified around major political events and continued to rise after the 2024 election and presidential transition. Republicans were disproportionately targeted, driven largely by threats against President Donald Trump, who accounted for 47 percent of all violent rhetoric in the dataset. While threats against Democrats also rose, the growth rate for Republicans was far steeper, especially following the July 2024 assassination attempt on Trump. 
The study found that most violent rhetoric did not originate from organized extremist groups but from partisan individuals reacting to high-profile news and political controversy. Only a small fraction of cases were linked to extremist organizations, pointing instead to a shift toward decentralized, post-organizational political violence. Inflammatory statements by public officials themselves often acted as catalysts, with supporters directing threats at perceived opponents and fueling cycles of escalation across party lines. Real-world violence further intensified this dynamic, triggering waves of online threats, celebration of attacks, or calls for retaliation. Violent rhetoric remained widespread across major platforms, much of it still accessible despite clear policy violations. Platform dynamics shaped who was targeted: left-leaning platforms such as Reddit and Bluesky hosted more threats against right-wing officials, while X contained a disproportionate share of threats against left-wing figures. Source: ISD, N. Doctor & K. Keneally & C. Zoschak. ‘Tick tock traitor:’ The rise of violent rhetoric targeting US public officials . [online] Published 3 February 2026. Available at: https://www.isdglobal.org/digital-dispatch/tick-tock-traitor-the-rise-of-violent-rhetoric-targeting-us-public-officials/ Top Of Page Foreign State Disinformation and Cyber Influence Threaten Norway’s Information Space in 2026 The Norwegian Police Security Service’s (PST) National Threat Assessment 2026 outlines a deteriorating security and information environment in Norway, driven largely by hostile state actors employing influence operations, cyber activity, and disinformation as core tools. 
Russia, China, and Iran are identified as the primary actors, with Russia expected to intensify cyber and influence operations aimed at weakening Norwegian public support for Ukraine, NATO, and Western cooperation, including through targeted disinformation, proxy sabotage, and recruitment via digital platforms. China’s activities are assessed as increasingly cyber-centric, combining cyber espionage, covert influence, and transnational repression to silence critics of the Chinese Communist Party, including through malware disguised as legitimate apps and pressure on diaspora communities. Iran is expected to continue intelligence and influence operations using proxy actors and criminal networks, alongside cyber intrusions and intimidation campaigns targeting dissidents, journalists, and Western- or Israeli-linked interests in Norway. The report emphasizes that disinformation and influence operations increasingly intersect with extremist ecosystems and digital radicalisation dynamics. Foreign state actors exploit online platforms, fake accounts, AI-generated content, and proxy networks to spread polarising narratives, undermine trust in democratic institutions, and amplify conspiracy theories, sometimes leveraging existing right-wing or anti-government extremist forums as dissemination channels. These activities contribute to a more diffuse and unpredictable threat landscape, where state-driven influence operations, cyber-enabled repression, and extremist propaganda reinforce one another. The assessment highlights that such tactics risk eroding public trust, increasing societal polarisation, and complicating attribution, thereby strengthening hostile actors’ ability to manipulate the information space while remaining below the threshold of overt conflict. Source: Norwegian Police Security Service (PST). National Threat Assessment 2026 . [online] Published February 2026. 
Available at: https://www.pst.no/wp-content/uploads/2026/02/National-Threat-Assessment-2026.pdf [CRC Glossary] The modern information environment is projected to grow ever more sophisticated and complex. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.
- Cyber-based hostile influence campaigns 26th January - 1st February 2026
[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [State Actors] Russia Pro-Kremlin Network Spread Fake Media Reports to Undermine EU Unity After Davos NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network Russian Pravda Network Amplified Alberta Secessionist Content Kremlin claimed Russia and the US won’t talk to EU politicians EU Sanctioned Six Individuals Over Russian Information Manipulation Activities Latvia Claimed Russia Remained its Top Cyber Threat as Attacks Hit Record High The War in Ukraine Fake Videos Targeted Ukrainian Refugees in France China China’s Influence in the Nordic–Baltic Information Environment in Denmark and Lithuania Pro-China AI Videos Falsely Claim Taiwanese Support for Unification [General Reports] Disinformation Surrounding the Shooting of Alex Pretti Disinformation As a Systemic Threat to Democratic Resilience Disinformation Vortex Around Minnesota ICE Protests ICE Surveillance Sparks Online Counterattacks [Appendix - Frameworks to Counter Disinformation] UK Warned It Risks Absorbing Cyber and Hybrid Attacks Without Deterrence Commercialized Social Media Manipulation and Disinformation Amplification EU-Supported Media Literacy Effort Against Disinformation in Kosovo [CRC Glossary] [ Report Highlights] The European External Action Service (EEAS) reported that the EU Council imposed sanctions on six additional individuals for their involvement in Russian hybrid activities, particularly foreign information manipulation and interference targeting 
the EU, its member states, and partners. In its annual report, Latvia’s Constitution Protection Bureau (SAB) said 2025 marked an all-time high in registered cyber threats targeting the country, while Russia remained the primary source of cyber risk. A report by NATO Strategic Communications Centre of Excellence examined how Beijing has sought to shape the Nordic-Baltic information environment, documenting a shift from earlier partnership approaches to a more cautious, contested relationship. According to an EUvsDisinfo article, global risk assessments increasingly identify foreign information manipulation, disinformation, and misinformation as structural threats that undermine democracy, human rights, economic stability, and crisis response, making investment in trustworthy public-interest media and stronger policy safeguards essential to protecting information integrity. A Politico article describes a growing digital struggle in which expanded federal surveillance and data use for deportations is met by activist and hacker efforts to track, leak, and disrupt ICE operations, fueling an increasingly contested and misinformation-prone information environment. An article from the NATO Strategic Communications Centre of Excellence explains how coordinated actors exploit social media platforms through fake accounts, automated amplification, and targeted narrative tactics to manipulate public opinion, spread disinformation, and undermine trust in democratic information environments. [State Actors] Russia Pro-Kremlin Network Spread Fake Media Reports to Undermine EU Unity After Davos NewsGuard’s Reality Check reveals how a pro-Kremlin influence operation sought to undermine the European Union during and immediately after the World Economic Forum in Davos by circulating fabricated videos that impersonated trusted outlets.
The fake content included a fabricated Reuters video alleging that France and Germany were preparing to leave the EU, a bogus Gallup report claiming that most young Europeans supported exiting the union, and a falsified Economist video attributing anti-EU remarks to former German Chancellor Angela Merkel. All of these claims were demonstrably false, and none appeared on the outlets’ official platforms. The individuals cited consistently expressed strong pro-EU positions. Reuters explicitly confirmed that the video attributed to it was fake, while independent polling showed strong EU support among European youth. The campaign also promoted additional fabricated quotes, anti-EU statements attributed to global leaders, claims about fabricated anti-EU protests, and reports of escalating tensions among EU member states. Although individual videos achieved only modest reach, collectively they garnered thousands of views across platforms such as Telegram, where audiences have limited tools for verifying authenticity. Source: NewsGuard, A. Lee. Post Davos, Russian Influence Operation Uses Phony Videos Impersonating Reuters, Gallup and the Economist to Foment EU Discord . [online] Published 28 January 2026. Available at: https://www.newsguardrealitycheck.com/p/russians-seed-fake-reports-to-sow Top Of Page NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network Yevgeny Shevchenko was designated by NewsGuard’s Reality Check as its 2025 Disinformer of the Year for his role in building the Pravda network, one of the most prolific pro-Kremlin disinformation operations globally. The network comprises hundreds of automated, news-style websites published in 49 languages, collectively producing approximately 6.3 million articles in 2025 alone. These sites repeatedly amplified false claims aligned with Kremlin narratives, targeting topics such as the war in Ukraine, European and U.S. 
elections, public health, and geopolitics, while using domain names designed to appear legitimate and local. A key impact of the Pravda network was its success in polluting search results and influencing generative AI systems. NewsGuard audits found that while some AI systems successfully debunked false claims sourced directly from Pravda articles, others reproduced them at significant rates. Shevchenko, a Crimea-based web entrepreneur and founder of the company TigerWeb, has kept a low public profile despite the network’s reach. The Pravda network expanded rapidly after Russia’s full-scale invasion of Ukraine in 2022 and was sanctioned by the European Union in July 2025 for coordinated information manipulation. Source: NewsGuard, A. Lee & E. Maitland. NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network . [online] Published 27 January 2026. Available at: https://www.newsguardrealitycheck.com/p/newsguards-2025-disinformer-of-the Russian Pravda Network Amplified Alberta Secessionist Content As reported by DisinfoWatch , Russia’s Pravda News Network published what appeared to be an AI-generated video promoting an Alberta secessionist rally scheduled for Monday, January 26, on the Russian social media platform VK. The Pravda network, also known as Portal Kombat, was first identified by France’s VIGINUM agency as a coordinated pro-Kremlin disinformation ecosystem that aggregates and republishes content from Russian state media, official channels, and aligned online sources rather than producing original reporting. The network operates a Canada-focused site that republishes material daily, drawing heavily from outlets such as RT, TASS, and the Russian Embassy in Canada. Canadian civil society group Cyber Alberta has warned that Pravda is targeting Canadian interests. Source: DisinfoWatch, Russian Pravda News Platform Amplifying Alberta Secessionist Events . [online] Published 25 January 2026.
Available at: https://disinfowatch.org/disinfo/russian-pravda-news-platform-amplifying-alberta-secessionist-events/ Top Of Page Kremlin Claimed Russia and the US won’t talk to EU politicians DisinfoWatch documents how Russian state outlet RT reported claims by Kremlin spokesperson Dmitry Peskov asserting that Russia would not engage with EU foreign policy chief Kaja Kallas and that it was “obvious” the United States would also refuse to engage with her. The claim sought to widen EU–US fractures amid UAE-hosted talks. Kallas is the EU’s High Representative for Foreign Affairs and Security Policy and a Commission Vice-President, a role appointed through formal EU processes and documented in EU institutional records. The report relied on delegitimizing language and provided no evidence to support the assertion that Washington shared Moscow’s position. Source: DisinfoWatch, Kremlin Claims Russia and US won’t talk to EU . [online] Published 26 January 2026. Available at: https://disinfowatch.org/disinfo/kremlin-claims-russia-and-us-wont-talk-to-eu/ Top Of Page EU Sanctioned Six Individuals Over Russian Information Manipulation Activities The European External Action Service (EEAS) reported that the EU Council imposed sanctions on six additional individuals for their involvement in Russian hybrid activities, particularly foreign information manipulation and interference targeting the EU, its member states, and partners. Those sanctioned include prominent Russian television presenters Dmitry Guberniev, Ekaterina Andreeva, and Maria Sittel; propagandist Pavel Zarubin; actor Roman Chumakov; and ballet dancer Sergey Polunin. The Council said these figures have actively promoted pro-Kremlin disinformation, anti-Ukraine and anti-Western narratives, and, in some cases, helped raise funds for the Russian armed forces, directly contributing to Russia’s war effort against Ukraine. With this decision, EU restrictive measures now apply to 65 individuals and 17 entities. 
Sanctions include asset freezes, bans on EU citizens and companies providing funds or economic resources to those listed, and travel restrictions preventing entry into or transit through EU territory. The legal acts formalizing the measures have been published in the Official Journal of the European Union. Source: EEAS, Russian hybrid threats: Council sanctions six individuals over information manipulation activities. [online] Published 30 January 2026. Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-council-sanctions-six-individuals-over-information-manipulation-activities_en Top Of Page Latvia Claimed Russia Remained its Top Cyber Threat as Attacks Hit Record High In its annual report, Latvia’s Constitution Protection Bureau (SAB) said that 2025 marked an all-time high in the number of registered cyber threats targeting the country, while Russia remained the primary source of cyber risk. While most incidents involved cybercrime and digital fraud, state-linked threats remained elevated, particularly from Russia. From a national security perspective, the most significant risks included intrusion attempts, malware, system compromises, and distributed denial-of-service (DDoS) attacks. Latvian authorities noted that effective defensive measures were in place, and that CERT.LV in particular helped limit the impact of many attacks, including during politically sensitive events such as municipal elections. A key concern highlighted in the report was the growing role of Russian hacktivists, who have demonstrated both intent and capability to target critical and industrial systems across Latvia and other Western countries. These actors aim to disrupt essential services, intimidate populations, punish support for Ukraine, and deter further assistance.
Examples cited included hacktivist attacks on operational technologies, such as dams and power plants, in Norway and Poland, where weak security controls enabled attackers to manipulate industrial control systems and, in one case, shut down a hydroelectric facility. Although Latvia has so far avoided major incidents affecting critical infrastructure, vulnerabilities in operational technologies remain a significant risk. Russian DDoS campaigns continued to target Latvian government institutions, municipalities, and critical infrastructure, often timed to coincide with political decisions or symbolic events. In most cases, DDoS attacks had little or no effect on services’ availability. To counter this threat, Latvia has invested in centralized, state-funded DDoS protection for public institutions and strengthened oversight of ICT critical infrastructure through new cybersecurity laws and regulations. Source: SAB, Annual Report 2025. [online] Published January 2026. Available at: https://www.sab.gov.lv/files/uploads/2026/01/SABs-annual-report_2025_ENG.pdf Top Of Page War in Ukraine Fake Videos Targeted Ukrainian Refugees in France StopFake reports that a series of fake videos on Telegram falsely alleged that Ukrainian refugees in France committed mass crimes, including murder, theft, drug distribution, and even terrorism. These videos, which mimicked the logos and formats of prominent French and international media outlets, including Le Parisien, Le Figaro, L’Équipe, Reuters, Le Point, and Euronews, were part of a coordinated disinformation campaign. Analysis showed that the videos were released over a short period, from January 12 to 16, 2026, via at least three anonymous Telegram channels, and were then widely amplified across pro-Russian networks. Fact-checks confirmed that none of the alleged crimes or news stories were real. Official websites, social media accounts, and publications from the cited media outlets contain no reports that match the videos’ claims.
France also does not maintain official statistics on crime specifically among Ukrainian refugees, and available data suggest that Ukrainians do not pose a higher criminal threat than other migrant groups. Source: StopFake, Фейк: Мировые СМИ сообщили о массовых преступлениях украинских беженцев во Франции [Fake: World media reported mass crimes by Ukrainian refugees in France]. [online] Published 28 January 2026. Available at: https://www.stopfake.org/ru/fejk-mirovye-smi-soobshhili-o-massovyh-prestupleniyah-ukrainskih-bezhentsev-vo-frantsii/ Top Of Page China China’s Influence in the Nordic–Baltic Information Environment in Denmark and Lithuania The NATO Strategic Communications Centre of Excellence examined, in a report on China’s influence in the Nordic–Baltic information environment, how Beijing has sought to shape the region’s information space, documenting a shift from earlier partnership approaches to a more cautious, contested relationship. It mapped China’s objectives (protecting core interests, acquiring technology, and improving perceptions), described eight avenues of influence, and analysed official PRC frames and their resonance in local media using country case studies of Lithuania and Denmark. In Lithuania, relations with China deteriorated sharply after 2019, culminating in Vilnius's withdrawal from the China-CEEC format and the authorization of Taiwan to open a representative office under its own name. China responded with economic pressure and a coordinated diplomatic and information campaign. However, the study found that China’s influence in Lithuania’s media space remained limited. Chinese narratives had little resonance, partly due to the absence of strong Chinese media channels and partly because Beijing’s coercive tactics proved counterproductive, reinforcing public skepticism rather than shaping opinion. Media debates largely reflected domestic political dissatisfaction and broader geopolitical shocks, particularly Russia’s invasion of Ukraine, rather than successful Chinese messaging.
In Denmark, the report identified a different pattern, described as “Shadow Wolf Warrior” diplomacy. Rather than relying on aggressive public messaging, China relied more on backstage influence through elite networks, business ties, and United Front activities, while maintaining a low public profile. Although Danish public discourse was generally skeptical of Chinese frames, and official messaging failed to gain broad traction, the report warned that China’s covert channels and long-standing institutional ties create durable influence. Source: NATO Strategic Communications Centre of Excellence, M. Lanteigne, L. Stünkel, K. Andrijauskas & A. K. Jakobsson. China’s Influence in the Nordic–Baltic Information Environment: Denmark and Lithuania. [online] Published 28 January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Chinas-Influence-in-the-Nordic-Baltic---Denmark-Lithuania-FINAL-FILE.pdf?zoom=page-fit Top Of Page Pro-China AI Videos Falsely Claim Taiwanese Support for Unification NewsGuard’s Reality Check has tracked how, since December 2025, pro-China sources have circulated AI-generated videos showing people purportedly from Taiwan speaking Mandarin with authentic Taiwanese accents and calling for unification with China. One account on the Chinese platform RedNote, “Taiwanese come home,” posted 35 such videos featuring teachers, doctors, police officers, firefighters, and students, garnering over 21,000 likes. NewsGuard confirmed that the videos were AI-generated using OpenAI’s Sora 2 tool, and that the individuals depicted do not exist or were misrepresented. For example, Taipei Municipal Chien Kuo High School, shown in one video, has never admitted female students, and the National Taiwan University professors depicted in another video were entirely fabricated.
These AI-generated videos are part of a broader cognitive warfare effort by China, which has commissioned companies like Magic Data and iFlytek to create voice databases of native Taiwanese speakers in Mandarin, Hokkien, and Hakka. These databases are intended to lend authenticity to fabricated pro-China messaging. Source: NewsGuard, C. Lin. Pro-China AI-Generated Videos Use Databanks of Taiwanese Accents to Fake Calls for Taiwan-China Unification. [online] Published 29 January 2026. Available at: https://www.newsguardrealitycheck.com/p/pro-china-ai-fakes-a-taiwanese-accent Top Of Page [General Reports] Disinformation Surrounding the Shooting of Alex Pretti NewsGuard’s Reality Check designated the claim that Alex Pretti pulled a gun on federal agents before being fatally shot in Minneapolis in January 2026 as its “False Claim of the Week,” citing its rapid spread, high engagement, and promotion by high-profile figures. Following the January 2026 shooting, Trump administration officials and conservative commentators alleged that Pretti brandished a firearm and posed an imminent threat, framing the killing as justified. Statements from the Department of Homeland Security, Homeland Security Secretary Kristi Noem, and White House aide Stephen Miller were widely echoed across social media and partisan websites, drawing millions of views. However, a detailed review of eyewitness video footage from five angles by NewsGuard, alongside reporting from major outlets including Reuters, CNN, The New York Times, and ABC News, found no evidence that Pretti pulled or reached for a gun before he was shot. The footage showed Pretti holding a phone, with no weapon visible, as officers confronted him. He was pepper-sprayed, tackled, and pinned to the ground before an officer removed a concealed handgun from his waistband.
Authorities later confirmed that Pretti was legally carrying a concealed firearm with a permit, but video analysis indicated it remained holstered and hidden throughout the initial encounter. Additionally, NewsGuard’s Reality Check reported that shortly after the shooting, an AI-manipulated image circulated widely on social media, falsely claiming to show Pretti holding a gun at the moment he was shot, with posts reaching millions of views within hours. Investigators and journalists confirmed the image was fabricated. Sources: NewsGuard, C. Vercellone, Reality Check. Debunk: Pretti Didn’t Pull Out a Firearm, Contrary to the Trump Administration’s Claims. [online] Published 30 January 2026. Available at: https://www.newsguardrealitycheck.com/p/did-alex-pretti-brandish-a-gun-newsguards NewsGuard, M. Calamaio, Reality Check. AI-Manipulated Image Cited as False Evidence that Victim in Latest ICE Shooting Was Brandishing a Gun. [online] Published 26 January 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-manipulated-image-shows-gun-not Top Of Page Disinformation As a Systemic Threat to Democratic Resilience EUvsDisinfo argues that recent global risk assessments characterize foreign information manipulation and interference (FIMI), disinformation, and misinformation as systemic threats that undermine democratic resilience worldwide. Reports from the World Economic Forum, the United Nations, and the European External Action Service highlight how these campaigns deepen societal divides, erode trust in institutions, and weaken crisis response by undermining the shared evidence base required for collective decision-making. The Human Rights Council further warns that FIMI increasingly targets marginalized communities, independent media, and human rights defenders, demonstrating that information disorder is not a marginal issue but a global risk multiplier affecting governance, security, and social cohesion.
The article also emphasizes the economic and structural damage caused by disinformation, noting that misleading narratives can destabilize markets, distort financial expectations, and erode long-term investment and policy stability. Conflict-related and climate-focused manipulation campaigns can incite hatred, obstruct humanitarian efforts, and delay sustainable development by casting doubt on scientific consensus or promoting false solutions. As safeguards, it points to robust public-interest media ecosystems, independent journalism, and media literacy, alongside initiatives such as the European Democracy Shield, which seeks to strengthen election integrity and counter FIMI through detection, cooperation, and proactive investment in trustworthy information spaces. Source: EUvsDisinfo, FIMI and disinformation as global threats. [online] Published 30 January 2026. Available at: https://euvsdisinfo.eu/fimi-and-disinformation-as-global-threats/ Top Of Page Disinformation Vortex Around Minnesota ICE Protests A podcast episode of Uncanny Valley by Wired describes a fragmented, high-tempo information environment surrounding intensified ICE activity in Minnesota, in which far-right and pro-administration messaging rapidly shaped and distorted public understanding of events. Hosts discuss how a right-wing influencer, Nick Shirley, promoted an unproven claim that Somali-run daycare centers in Minneapolis misappropriated millions in a Medicaid-related fraud narrative; they link this amplification to subsequent harassment and violence dynamics, including an attack on Rep. Ilhan Omar and attempts to frame the incident as staged.
They also describe a rolling cycle of narrative shifts and reputational smears after the killing of protester Alex Pretti, moving from claims of an assassination attempt and “terrorist” labeling to alternative insinuations (e.g., immigration status) and finally to blame-shifting arguments about protest behavior. This “spin-to-fit” approach, they argue, prioritizes ideological utility over verifiable facts, including through repetition by figures at the highest levels of the administration. The episode then broadens to platform governance and credibility crises as accelerants for misinformation and perceived censorship. It highlights how users interpreted TikTok outages and content-performance changes as politically motivated suppression following a U.S. ownership restructuring, noting that distrust is compounded by opaque, personalized recommendation systems that are difficult to audit externally. The hosts suggest that even subtle algorithmic tweaks can influence which narratives gain traction without leaving clear evidence. They further note that TikTok’s updated terms request more granular location permissions and enable the broader collection of user input for AI features, raising concerns about surveillance, targeting, and the erosion of user trust at a moment when many already suspect political capture of major information channels. Source: WIRED, B. Barrett, Z. Schiffer & T. Marchman. Uncanny Valley: Minneapolis Misinformation, TikTok’s New Owners, and Moltbot Hype. [online] Published 29 January 2026.
Available at: https://www.wired.com/story/uncanny-valley-podcast-ice-minneapolis-tiktok-moltbot/ Top Of Page ICE Surveillance Sparks Online Counterattacks Politico reports an escalating digital information conflict surrounding the Trump administration’s mass deportation agenda, in which federal agencies have expanded domestic surveillance capabilities while online activists and hacker groups deploy countermeasures to track and expose immigration enforcement operations. ICE has reportedly increased its use of advanced surveillance tools and data access, including contracts with firms such as Paragon and Palantir, forensic phone-cracking technologies, facial recognition systems, and data brokers collecting sensitive personal information. The administration has also granted ICE access to large federal datasets from agencies like the IRS, Medicaid, and Social Security. In response, activists have used encrypted messaging platforms, social media, and community-built tools to report raid locations, map surveillance infrastructure, and identify agents, while cybercriminal collectives have escalated tactics by leaking names and personal details of ICE and DHS officials online. The article highlights how these developments create fertile ground for disinformation, coercive influence, and contested narratives over legitimacy and safety. Digital tools intended to document or resist enforcement actions have prompted aggressive efforts by the government and major technology companies to suppress information sharing, including app removals, Meta’s content restrictions, and federal investigations into encrypted communications. Officials have framed ICE-tracking platforms as threats to agent security, while critics argue these actions represent intimidation and censorship aimed at silencing opposition. 
The environment is characterized by breaches, doxxing, surveillance expansion, and attempts to control online discourse, illustrating how both state and non-state actors use digital tactics to influence public perception, disrupt organizing, and shape the information space around immigration enforcement. Source: Politico, D. Nickel & A. Ng. ICE has expanded its mass surveillance efforts. Online activists are fighting back. [online] Published 29 January 2026. Available at: https://www.politico.com/news/2026/01/29/ice-tracking-tools-protesters-00755703 Top Of Page [Appendix - Frameworks to Counter Disinformation] UK Warned It Risks Absorbing Cyber and Hybrid Attacks Without Deterrence Warnings from UK security leaders, reported by The Record, highlighted that Britain risks exposing itself to cyberattacks, sabotage, and disinformation campaigns unless it develops credible offensive deterrence alongside defensive resilience. Former national security adviser Lord Sedwill told a parliamentary hearing that resilience measures alone would not discourage hostile states. The warnings came as ministers defended plans agreed at last year’s NATO summit to raise total security spending to 5 percent of GDP within a decade, including 1.5 percent for indirect defense and resilience such as cybersecurity. Committee members questioned whether this resilience funding would deliver new capabilities or merely repackage existing spending, given the lack of clear NATO definitions. Sedwill expressed concern that creative accounting could undermine the effort, urging ministers to clarify the additional capacity that would be delivered in the coming years. Ministers acknowledged that cyber incidents and hybrid attacks below the threshold of armed conflict are already having serious strategic effects.
The government plans to publish a revised National Cyber Action Plan, shifting from a strategic framework to an operational plan focused on countering threats, strengthening resilience, and supporting economic growth. Source: The Record, A. Martin. UK leaders warned country risks 'absorbing' cyber and hybrid attacks without offensive deterrence. [online] Published 28 January 2026. Available at: https://therecord.media/uk-government-warned-cyber-hybrid-threats-offensive-operations Top Of Page Commercialized Social Media Manipulation and Disinformation Amplification An experiment conducted in 2025 by the NATO Strategic Communications Centre of Excellence examined how major social media platforms detect and counter commercially purchased inauthentic engagement. Despite regulatory advances such as the EU Digital Services Act, manipulation services remain widely accessible and inexpensive, allowing actors to buy fake likes, comments, shares, and followers at scale. The study found that more than 30,000 inauthentic accounts generated more than 100,000 units of engagement, with enforcement varying significantly across platforms: X and YouTube removed a larger share of fake activity, whereas Instagram, TikTok, and others left most purchased engagement intact. The experiment also showed that manipulation is not confined to organic posts, as paid advertising systems can be exploited to distribute inauthentic narratives to targeted audiences. The report highlights a shift toward more sophisticated influence tactics, including AI-enabled bot networks designed to blend into authentic conversations rather than relying on overt spam. These bots increasingly amplify politically sensitive and military-related narratives, including pro-Kremlin and pro-China themes, while commercial providers use cryptocurrency payments to obscure traceability and sustain a resilient ecosystem of manipulation.
Overall, the findings underscore how hostile actors can exploit low-cost, automated tools to shape discourse, erode trust, and embed disinformation within legitimate online communities, emphasizing the need for behavioural detection, financial disruption, and stronger cross-platform accountability. Source: NATO Strategic Communications Centre of Excellence, Social Media Manipulation for Sale: 2025 Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement. [online] Published 30 January 2026. Available at: https://stratcomcoe.org/publications/social-media-manipulation-for-sale-2025-experiment-on-platform-capabilities-to-detect-and-counter-inauthentic-social-media-engagement/338 Top Of Page EU-Supported Media Literacy Effort Against Disinformation in Kosovo As published by the EEAS, digital and AI-driven technologies have increased vulnerability to misinformation and information manipulation across the Western Balkans, prompting the European Union and its local European Houses to support public resilience initiatives. At the launch of the exhibition The Glass Room: Misinformation Edition in Pristina, EU Ambassador Aivo Orav emphasized the shared challenge of safeguarding societies from disinformation and the EU’s commitment to equipping citizens with tools to recognize and counter misleading narratives. With EU support, the exhibition toured multiple towns in Kosovo between October and December 2025, using posters, interactive applications, and animations to explain how misinformation spreads, why it is persuasive, and how everyday online behaviors such as clicks, likes, and shares amplify false content. The tour also included capacity-building workshops led by experts Kreshnik Gashi and Darko Dimitrijević, which addressed risks such as deepfakes, algorithmic bias, and the influence of digital design on public opinion, while promoting source verification and privacy awareness as key defenses against hostile information dynamics.
Source: EEAS, Press and information team of the EU Office/EU Special Representative in Kosovo. The Kosovo Journey of The Glass Room – Misinformation Exhibition Edition. [online] Published 29 January 2026. Available at: https://www.eeas.europa.eu/delegations/kosovo/kosovo-journey-glass-room-%E2%80%93-misinformation-exhibition-edition_en [CRC Glossary] The modern information environment is projected to continue growing in complexity and sophistication. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative; contributions are welcome via the CRC website. Top Of Page