
  • Cyber-based hostile influence campaigns 2nd - 8th March 2026

    [Introduction]

    Cyber-based hostile influence campaigns aim to influence target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect. During the last week, we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

    [Contents]

    [Introduction]
    [Report Highlights]
    [State Actors]
    Russia
      Russia’s Information Control in Ukraine’s Occupied Territories
      Kremlin Used Iran War Disinformation to Target Ukraine and France
      Pro-Kremlin Disinformation Targeted Middle East War Narratives
    China
      Leaked Documents Revealed AI-Driven Chinese Influence Operations
      A Case Study of PRC Influence in Malaysia and Singapore
    Iran
      Misleading War Images Exaggerated Iran’s Military Success
      Cyber and Information Operations Expand Pressure on Iran’s Regime
      Iran Launched a Disinformation Campaign Following War with US and Israel
      AI Video Showing Massive Destruction in Tel-Aviv Spread by Pro-Iran Accounts
      The Online Information Environment Around the Iran War
      Hacktivist Cyber Campaign Escalates Following Operation Epic Fury
    [AI Related Articles]
      Google’s AI Overviews Amplified Iran-Related Hoaxes
      AI Fueled Election Interference Across Three States
      AI-Assisted Russian Influence Network Targets Africa
    [Cyfluence Attacks]
      Iran Expected to Launch Cyber Attacks Worldwide Following US and Israeli Strikes
      Middle East Conflict Triggered Surge in Global Cyber Activity
    [Appendix - Frameworks to Counter Disinformation]
      EU and France Rethink Strategy to Counter Russian Disinformation
    [CRC Glossary]

    [Report Highlights]

    In a recent report, DoubleThink Lab reveals that leaked internal documents from the Chinese technology company GoLaxy outline plans to build advanced AI-powered systems to automate the manipulation of information.
    A study by DoubleThink Lab examining the People’s Republic of China’s influence in Southeast Asia found that both Malaysia and Singapore face growing exposure to Chinese political, economic, and technological influence.

    Iranian state media have intensified disinformation efforts following the outbreak of war with the US and Israel, spreading false claims about battlefield successes and using manipulated or misleading visuals. As identified by NewsGuard’s Reality Check, at least 18 false claims have circulated since the conflict began.

    A report issued by Radware states that following Operation Epic Fury, coordinated pro-Iranian and allied hacktivist groups launched widespread cyber campaigns, including DDoS attacks, phishing operations, and hack-and-leak claims, targeting government and critical infrastructure across the Middle East while amplifying political messaging online.

    Graphika’s analysis finds that AI played a central role in influence operations targeting elections in Bangladesh, Colombia, and Tibet, enabling the rapid production and amplification of synthetic political content across online platforms.

    Le Monde reports that the EU and France are reassessing their strategies to counter rising Russian disinformation ahead of major elections in 2027, including the French presidential vote and parliamentary elections in Italy.

    [State Actors]

    Russia

    Russia’s Information Control in Ukraine’s Occupied Territories

    An article by EUvsDisinfo reports that since the start of Russia’s occupation of parts of Ukraine in 2014 and the full-scale invasion in 2022, Russian-installed authorities have built a comprehensive system of information control in Ukraine’s occupied regions. An estimated 5–6 million residents now live in an information environment largely cut off from Ukrainian media and dominated by Russian state narratives.
    Ukrainian outlets have been shut down or replaced with pro-Kremlin media, while telecommunications infrastructure has been “Russified”, shifting networks to Russian operators that enable traffic filtering, website blocking, and internet disruptions. Authorities have also restricted access to major platforms and promoted state-controlled messaging apps to expand surveillance and propaganda. Ukrainian journalists have been detained or forced to leave, while new media outlets and Telegram channels simulate “normal life” under occupation even as they spread Kremlin narratives.

    Russian authorities have also targeted language and education as tools of control, eliminating Ukrainian-language schooling, removing Ukrainian books from libraries, and imposing Russian as the dominant language in public life. Arrests, surveillance, and online monitoring have created a climate of fear that encourages widespread self-censorship among residents. Despite these pressures, forms of quiet resistance persist: some residents use VPNs to access Ukrainian information, maintain Ukrainian language and culture at home, or continue education through Ukrainian online schools.

    Source: EUvsDisinfo, E. Beketova. Russia’s Information Grip on Ukraine’s Occupied Territories. [online] Published 2 March 2026. Available at: https://euvsdisinfo.eu/russias-information-grip-on-ukraines-occupied-territories/

    Kremlin Used Iran War Disinformation to Target Ukraine and France

    As reported by NewsGuard’s Reality Check, the Russian influence operation Matryoshka has used the ongoing conflict involving Iran to spread fabricated reports aimed at discrediting Ukraine and its allies, particularly France. The campaign produced at least nine false claims during the conflict triggered by coordinated strikes by the US and Israel against Iranian targets. The reports mimicked credible media outlets and research institutions to make the claims appear legitimate.
    Among the false narratives was a fake video attributed to the Institute for the Study of War claiming that an Iranian missile killed 70 French soldiers at a base in Abu Dhabi. Another report falsely attributed to Le Point alleged that Ukrainian phone scammers stole millions of euros from French citizens by exploiting the crisis. Additional claims attempted to portray Ukrainian military leadership as corrupt, including a story falsely suggesting that Ukraine’s army chief, Oleksandr Syrskyi, owned a luxury property in Dubai that was destroyed by an Iranian missile.

    Source: NewsGuard Reality Check, A. Lee. Russia Uses Iran War to Plant Fake Reports Discrediting Ukraine and France. [online] Published 4 March 2026. Available at: https://www.newsguardrealitycheck.com/p/kremlin-creates-iran-war-fakes-to

    Pro-Kremlin Disinformation Targeted Middle East War Narratives

    An article by EUvsDisinfo showed that pro-Kremlin media outlets have circulated several misleading narratives related to the war in the Middle East. One widely promoted claim, attributed to the Russian Foreign Intelligence Service, alleged that France and the UK planned to provide Ukraine with a nuclear or "dirty" bomb. Russian state media repeated the accusation across multiple languages, despite offering no evidence, while officials in Ukraine, France, and the UK rejected the claims.

    Other pro-Kremlin narratives accused Ukraine and the British intelligence agency MI6 of plotting to sabotage Black Sea energy infrastructure, including the Turkish Stream and Blue Stream gas pipelines. These allegations were largely based on statements by Putin and echoed previous unsupported accusations made by Moscow during the war in Ukraine. At the same time, pro-Kremlin outlets promoted the claim that Gulf states such as Qatar, Saudi Arabia, and the United Arab Emirates had asked Russia to serve as the guarantor of peace in the Middle East.
    Reports indicated, however, that the diplomatic contacts were initiated by Moscow itself.

    Source: EUvsDisinfo. How Russia lies about the war in the Middle East. [online] Published 6 March 2026. Available at: https://euvsdisinfo.eu/how-russia-lies-about-the-war-in-the-middle-east/

    China

    Leaked Documents Revealed AI-Driven Chinese Influence Operations

    A 399-page leak analyzed by DoubleThink Lab reveals that the Chinese technology company GoLaxy planned to build advanced AI-powered systems to automate the manipulation of foreign information. The documents describe a “smart propaganda” platform capable of monitoring political trends, identifying targets, generating tailored propaganda, and distributing it through large networks of realistic fake social media accounts. The company, which originated from the Institute of Computing Technology under the Chinese Academy of Sciences, aims to use AI and data analysis to influence public opinion abroad more effectively than previous campaigns, which were often associated with spam-like messaging.

    The system gathers large amounts of political and social data, profiles influential individuals and organizations, and generates customized content that can be automatically posted by thousands of “persona bots” across platforms such as Facebook, X, and Telegram. The documents also reveal detailed monitoring programs targeting regions and political issues sensitive to China, including extensive datasets on political figures and organizations in Taiwan, as well as monitoring of political actors and institutions in the US and across the Indo-Pacific. These systems are designed to track elections, analyze public sentiment, and identify opportunities to shape narratives or deepen political divisions. The leak also highlights cooperation between GoLaxy and organizations linked to the Chinese government and military, including the Cyberspace Administration of China and the Central Military Commission.

    Source: DoubleThink Lab.
The Rise of AI in PRC Influence Operations: Nine Takeaways from the GoLaxy Documents. [online] Published 4 March 2026. Available at: https://medium.com/doublethinklab/the-rise-of-ai-in-prc-influence-operations-nine-takeaways-from-the-golaxy-documents-2d6617a753e5

    A Case Study of PRC Influence in Malaysia and Singapore

    A study by DoubleThink Lab examining the People’s Republic of China’s influence in Southeast Asia found that both Malaysia and Singapore face growing exposure to Chinese political, economic, and technological influence. According to the Doublethink Lab China Index 2024, several Southeast Asian countries ranked among the most affected globally.

    In Malaysia, ties with Beijing have strengthened under Prime Minister Anwar Ibrahim, including expanded economic cooperation, visa policies, and agreements in media, telecommunications, and technology. Public sentiment toward China has also grown more positive, partly due to geopolitical tensions such as the Gaza war. At the same time, concerns remain about economic competition, security risks, and territorial disputes in the South China Sea. Malaysia has signed multiple cooperation agreements with Chinese state-linked media and technology companies, raising questions about potential narrative influence and information sharing.

    In Singapore, influence efforts were evident in cultural, educational, and media exchanges, as well as on Chinese social media platforms and diaspora networks. Authorities have also faced cyber incidents attributed to Chinese-linked groups and instances of online disinformation, including AI-generated content spreading false claims about political divisions within the ruling People’s Action Party and its leader, Prime Minister Lawrence Wong. Despite expanding engagement with Beijing, both countries maintain complex and cautious relationships with China.
    Singapore continues to balance ties with both China and the US, while Malaysia’s closer cooperation with Beijing coexists with concerns over sovereignty and economic dependence.

    Source: DoubleThink Lab. PRC Influence on ASEAN Countries in 2025: The Case Study of Malaysia and Singapore. [online] Published 3 March 2026. Available at: https://medium.com/doublethinklab/prc-influence-on-asean-countries-in-2025-the-case-study-of-malaysia-and-singapore-48d84c1e71c1

    Iran

    Misleading War Images Exaggerated Iran’s Military Success

    As coordinated strikes on 28 February 2026 escalated tensions between the US, Israel, and Iran, an analysis by NewsGuard’s Reality Check documented how misleading images and videos began circulating widely on social media. At least three viral pieces of visual misinformation collectively garnered more than 21.9 million views on X, falsely portraying Iran as achieving major military victories. In reality, each example relied on unrelated or old footage.

    A widely shared image allegedly showing the USS Abraham Lincoln sinking was actually a photo of the scuttling of the decommissioned USS Oriskany in 2006. Another viral video claimed to depict an Iranian strike on Israel’s Negev Nuclear Research Center but was traced to a 2017 ammunition depot explosion in Balakliya, Ukraine. A third clip circulated as footage of a missile strike on a CIA headquarters in Dubai, United Arab Emirates, but actually showed a residential tower fire in nearby Sharjah in 2015.

    Source: NewsGuard Reality Check, L. Marchl, I. Blachez & S. Rubinson. Amid U.S. and Israeli Attack, Phony Images and Videos Make It Look Like Iran Has Had Big Wins. [online] Published 2 March 2026.
Available at: https://www.newsguardrealitycheck.com/p/iran-is-winning-according-to-misleading

    Cyber and Information Operations Expand Pressure on Iran’s Regime

    Alongside US and Israeli airstrikes on Iran, a parallel campaign has targeted the country’s digital infrastructure and information environment. A report by the Australian Strategic Policy Institute (ASPI) described this strategy as a fusion of cyber and kinetic operations designed to disrupt communications, weaken situational awareness, and pressure political decision-making. By targeting telecommunications, state media systems, and online services, cyber tools can create uncertainty within government institutions while simultaneously affecting how the public perceives events on the ground, though none of these effects guarantees regime collapse.

    The campaign reportedly included attempts to reach civilians directly through digital platforms. One example cited is the alleged hacking of the widely used Iranian prayer and calendar application BadeSaba, which sent push notifications urging users and military personnel to surrender. Similar approaches have appeared in the past, including in Russian digital influence operations.

    Source: ASPI, J. Bassi & J. Corera. Beyond bombs: cyber and information operations targeting Iran’s regime. [online] Published 2 March 2026. Available at: https://www.aspi.org.au/strategist-posts/beyond-bombs-cyber-and-information-operations-targeting-irans-regime/

    Iran Launched a Disinformation Campaign Following War with US and Israel

    Iranian state media have intensified disinformation efforts following the outbreak of war with the US and Israel, spreading false claims about battlefield successes and using manipulated or misleading visuals.
    As identified by NewsGuard’s Reality Check, at least 18 provably false war-related claims linked to Iran-aligned sources have circulated since the conflict began, more than triple the number recorded in the two weeks before the attacks. Several claims relied on altered or misleading images and videos. One example involved an AI-manipulated satellite image shared by the Iranian state outlet Tehran Times that falsely depicted the destruction of a US radar system at Al Udeid Air Base. In another case, Iranian channels circulated a video claiming that Iranian forces had shot down a US fighter jet over Tehran; the footage actually showed an Iranian aircraft destroyed by the Israeli Air Force. The spread of these narratives has been amplified by Iran’s near-total internet shutdown, which reduced connectivity by about 98 percent and limited access to outside information.

    Source: NewsGuard Reality Check, W. Rizvi. Iran’s State Media Bombards Iranians with False Claims and Misleading Visuals. [online] Published 5 March 2026. Available at: https://www.newsguardrealitycheck.com/p/iran-goes-on-a-disinformation-offensive

    AI Video Showing Massive Destruction in Tel-Aviv Spread by Pro-Iran Accounts

    An investigation by NewsGuard’s Reality Check found that a viral video circulating on social media falsely claimed to show a dozen Iranian missiles striking a residential neighborhood in Tel Aviv on 3 March 2026, supposedly bypassing Israel’s missile defense system and causing massive destruction. The footage spread widely across platforms, gaining tens of millions of views after being promoted by pro-Iran and anti-Israel accounts, some of which described it as verified “breaking” footage of a major Iranian attack. The investigation determined that the video had been generated using AI: analysis with the AI-detection tool Hive, along with visual inconsistencies in the footage, revealed that it was fabricated.
    For example, the street shown in the clip appeared completely deserted despite the supposed daytime missile barrage, and the siren heard in the background does not match authentic Israeli missile alert sirens. AI tools compounded the confusion: Grok described the video as authentic citizen footage, while Google AI Overviews suggested it depicted a real military engagement. In reality, while one Iranian missile did strike Tel Aviv on 28 February 2026, causing injuries and one death, the large-scale destruction shown in the viral video never occurred.

    Source: NewsGuard Reality Check, I. Chomnalez. Debunk: Video Showing Massive Destruction in Tel Aviv Is an AI Fabrication Spread by Pro-Iran Accounts. [online] Published 6 March 2026. Available at: https://www.newsguardrealitycheck.com/p/dozens-of-missiles-struck-tel-aviv

    The Online Information Environment Around the Iran War

    Following US and Israeli airstrikes on Iran, researchers at Graphika tracked the evolving online information environment surrounding the conflict. Their monitoring highlighted several parallel developments, including influence operations, hacktivist mobilization, and the spread of false or misleading information across social media platforms.

    Pro-China state media outlets such as CGTN and China Daily, along with Chinese diplomatic accounts, have published videos, memes, and official statements portraying the US and Israel as aggressors while expressing support for Iran and calling for de-escalation. At the same time, disinformation has circulated widely online, including coordinated posts falsely claiming that Donald Trump and Benjamin Netanyahu had been killed in the conflict. These claims spread across multiple platforms using copied messages and repurposed images. Several pro-Iranian hacktivist groups linked to Iranian security services, including actors associated with the Islamic Revolutionary Guard Corps, reactivated and claimed attacks against regional and Western targets.
    Meanwhile, a coalition of hacktivist groups mobilized within hours of the strikes, calling for cyber operations against US and Israeli interests, though many of the claimed attacks remain unverified. Researchers also observed pre-strike messaging from pro-Iranian and pro-Russian networks aimed at discouraging US military action and warning of possible retaliation.

    Source: Graphika. Iran War: Monitoring the Online Threat Landscape. [online] Published 6 March 2026. Available at: https://graphika.com/posts/iran-war-monitoring-the-online-threat-landscape

    Hacktivist Cyber Campaign Escalates Following Operation Epic Fury

    A report issued by Radware states that following the launch of the US–Israeli military offensive “Operation Epic Fury” on 28 February 2026, a rapid surge of hacktivist cyber activity emerged across the Middle East. According to the first report, hacktivist groups mobilized within nine hours of the initial strikes, launching coordinated distributed denial-of-service (DDoS) campaigns targeting government institutions, financial systems, and telecommunications providers. The activity was highly concentrated, with two groups, Keymous+ and DieNet, responsible for nearly 70% of all attack claims between 28 February and 2 March. In total, nine hacktivist collectives claimed 107 attacks against 81 organizations across eight countries, with government institutions accounting for approximately 53% of all targets. The campaign focused primarily on Kuwait, Israel, and Jordan, which together represented more than 76% of attack claims, indicating a strategic effort to disrupt state infrastructure and maximize political visibility during the regional escalation.
    A separate threat analysis highlights the broader cyber and influence ecosystem surrounding the conflict, noting that although Iranian state cyber capabilities were likely temporarily constrained by severe internet outages and disruptions to command structures, external actors rapidly intensified cyber activity. According to the second report, numerous pro-Iranian, pro-Palestinian, and pro-Russian hacktivist groups, including Handala Hack, APT Iran, Cyber Islamic Resistance, Dark Storm Team, and 313 Team, conducted disruptive operations, including DDoS attacks, hack-and-leak campaigns, website defacements, and attempts to access industrial control systems. The report also identifies influence-oriented tactics, including phishing campaigns using a malicious replica of Israel’s Home Front Command RedAlert application to deliver surveillance malware, intimidation campaigns targeting critics of Iran, and public claims of system compromises intended to generate psychological pressure and shape public narratives during the conflict.

    Taken together, the two analyses describe a coordinated cyber escalation accompanying the kinetic conflict. Both reports emphasize that a small number of ideologically aligned hacktivist networks dominate the activity, often publicizing operations through Telegram and other online platforms to amplify political messaging. The participation of external actors, including pro-Russian groups such as NoName057(16), suggests the emergence of a broader transnational cyber campaign in which disruptive attacks, exaggerated breach claims, and information operations operate alongside military confrontation to destabilize infrastructure, influence public perception, and expand the digital dimension of the conflict.

    Source: Radware. Retaliatory Hacktivist DDoS Activity Following Operation Epic Fury/Roaring Lion. [online] Published 3 March 2026.
Available at: https://www.radware.com/security/threat-advisories-and-attack-reports/ddos-activity-following-operation-epic-fury-roaring-lion/

    Unit 42, Palo Alto Networks. Threat Brief: March 2026 Escalation of Cyber Risk Related to Iran. [online] Published 2 March 2026. Available at: https://unit42.paloaltonetworks.com/iranian-cyberattacks-2026/

    [AI Related Articles]

    Google’s AI Overviews Amplified Iran-Related Hoaxes

    An investigation by NewsGuard’s Reality Check found that Google’s AI-generated search summaries were repeating false claims about images linked to the conflict involving Iran. In several cases, the summaries presented unverified claims as plausible explanations, even when the images or videos were unrelated to the conflict or created using AI.

    NewsGuard identified multiple examples. Google’s AI summary described a video from a 2015 residential fire in the United Arab Emirates as a possible missile strike on a CIA-linked building in Dubai. In another case, an AI-generated video allegedly showing multiple Iranian missiles striking Tel Aviv was summarized as depicting a real military escalation. Similarly, footage posted weeks earlier was mischaracterized as evidence that the US Embassy in Riyadh had been destroyed in a drone attack, despite verified footage showing only minor damage. Although Google’s reverse-image search can still help trace the origins of images through its “Exact Matches” feature, the AI-generated summary appears at the top of search results and may shape users’ first impressions.

    Source: NewsGuard Reality Check, S. Rubinson & I. Chomnalez. Google’s AI Overviews Are Fooled by Iran-Related Visual Hoaxes in its Own Reverse-Image Searches. [online] Published 3 March 2026.
Available at: https://www.newsguardrealitycheck.com/p/googles-ai-overviews-supercharge

    AI Fueled Election Interference Across Three States

    An analysis by Graphika found that AI played a central role in influence operations targeting elections in Bangladesh, Colombia, and Tibet. Across the three cases, coordinated networks used AI tools to generate deepfake videos, produce political content in multiple languages, and automate the mass distribution of posts through inauthentic accounts designed to appear like genuine users.

    Ahead of Bangladesh’s February 2026 election, AI-generated videos circulated online showing fabricated authority figures accusing political actors; fact-checkers later identified the clips as fake. In the Tibetan diaspora election for the Central Tibetan Administration, accounts linked to the pro-China "Spamouflage" network published AI-generated articles and cartoons in both English and Tibetan to discredit candidates. Meanwhile, in Colombia’s parliamentary elections, a network of automated accounts flooded social media with thousands of posts targeting President Gustavo Petro, amplifying both pro- and anti-government narratives to create confusion and information overload.

    AI significantly lowered the cost and effort required to produce persuasive political content and coordinate large-scale influence campaigns. As a result, election interference is becoming easier to conduct, more difficult to detect, and increasingly focused on eroding public confidence in democratic institutions rather than simply promoting or attacking specific candidates.

    Source: Graphika. Deepfakes, Noise, and Doubt: AI's Role in Three Recent Elections. [online] Published 4 March 2026.
Available at: https://graphika.com/posts/deepfakes-noise-and-doubt-ai-s-role-in-three-recent-elections

    AI-Assisted Russian Influence Network Targets Africa

    A pro-Kremlin influence network linked to the media outlet Rybar used ChatGPT to support covert information operations targeting audiences in Africa, according to a threat report by OpenAI. The campaign, dubbed "Fish Food", relied on artificial intelligence to mass-produce multilingual social media content that was later disseminated across Telegram and X through accounts with no declared connection to the outlet. Researchers found that the system was used primarily as a content-generation hub, producing batches of comments and narratives, often from Russian-language prompts, designed to promote pro-Russian messaging, criticize Ukraine, and portray Western countries as interfering actors.

    Beyond producing propaganda content, the network reportedly used AI tools to plan and structure influence operations. Prompts requested assistance drafting proposals for covert campaigns in Africa, including strategies to manage coordinated social media accounts, create a bilingual investigative journalism website, arrange paid media placements in French-language outlets, and organize offline events. Additional prompts sought information on electoral systems in Burundi, Cameroon, and Madagascar, including suggestions for mobilizing protests and developing local networks to support election-related influence activities. Although some of the accounts involved appeared to originate in Russia and Rybar maintains a large Telegram audience, OpenAI stated it found limited evidence that the campaign achieved substantial amplification or measurable real-world impact.

    Source: OpenAI. Disrupting malicious uses of AI. [online] Published 25 February 2026.
Available at: https://openai.com/index/disrupting-malicious-ai-uses/

    [Cyfluence Attacks]

    Iran Expected to Launch Cyber Attacks Worldwide Following US and Israeli Strikes

    As reported by Infosecurity Magazine, Iran is highly likely to respond to recent US and Israeli air strikes with cyber-attacks targeting organizations across the Middle East and other regions, according to Google’s chief of cyber threat intelligence, John Hultquist. Speaking at an event hosted by the Royal United Services Institute (RUSI) in London, Hultquist warned that countries hosting US military bases, such as Qatar, Bahrain, Jordan, the United Arab Emirates, and Kuwait, could face particularly aggressive cyber operations. While Iran’s cyber tools may not be new, the scope of potential targets is expected to expand significantly, exposing organizations with weaker cybersecurity defenses.

    Hultquist noted that Iran often operates through loosely connected cybercriminal or hacktivist groups, creating plausible deniability for state-backed actions. These groups may appear independent but could actually act as fronts for the Iranian Revolutionary Guard Corps (IRGC). As a result, some incidents labeled as hacktivism or ransomware may in fact be covert state-sponsored cyber campaigns aimed at governments, businesses, and critical infrastructure in the US, Gulf states, and other countries viewed as hostile to Iran.

    Source: Infosecurity Magazine, D. Palmer. Expect Iran to Launch Cyber-Attacks Globally, Warns Google Head of Threat Intel. [online] Published 2 March 2026. Available at: https://www.infosecurity-magazine.com/news/iran-cyber-attacks-global-google/

    Middle East Conflict Triggered Surge in Global Cyber Activity

    Infosecurity Magazine describes the escalating conflict in the Middle East as entering a hybrid phase that combines military operations with large-scale cyber campaigns, raising concerns about spillover risks for organizations worldwide.
    The shift followed joint Israeli and US strikes on Iran targeting Iranian leadership, military, and nuclear-related sites. At the same time, a major cyber operation reportedly disrupted Iran’s digital infrastructure, with internet connectivity dropping to about 4% of normal levels. Security experts warned that cyber retaliation is likely to intensify, potentially disguised as criminal activity such as ransomware attacks. Increased activity has already been observed from hacktivist and cybercriminal groups, including HydraC2, Handala, and Sicarii, alongside a surge in DDoS attacks, website defacements, and alleged data breaches targeting the government, banking, aviation, and telecom sectors. More than 150 hacktivist incidents were reported in the days following the strikes.

    Experts noted that Iran has previously used cyber operations to retaliate against geopolitical pressure and may deploy tactics designed to obscure state involvement. These could include ransomware attacks followed by destructive data wiping, exploiting long-term espionage access for sabotage, operating through proxy cybercriminal groups, and publicly releasing stolen data to pressure victims.

    Source: Infosecurity Magazine, A. Mascellino. Hybrid Middle East Conflict Triggers Surge in Global Cyber Activity. [online] Published 2 March 2026. Available at: https://www.infosecurity-magazine.com/news/middle-east-conflict-surge-global/

    [Appendix - Frameworks to Counter Disinformation]

    EU and France Rethink Strategy to Counter Russian Disinformation

    Amid growing concerns about foreign interference ahead of major elections in 2027, an article in Le Monde outlines how the EU and France are reassessing their strategies to counter rising Russian disinformation. The elections include the French presidential election and the Italian parliamentary elections.
    Concerns have grown over interference campaigns ranging from propaganda websites and manipulated AI content to algorithm-driven influence on social media platforms such as TikTok and X. In response, the European Commission launched the European Center for Democratic Resilience as part of its “European Democratic Shield” initiative, aiming to coordinate member states’ responses and protect public debate.

    Both Brussels and Paris are shifting toward a society-wide approach that emphasizes public resilience rather than relying solely on removing false content. The strategy focuses on strengthening media literacy, empowering trusted community figures such as educators and librarians to help identify misinformation, and improving research and education around information manipulation. Finland is often cited as a successful model, in which media literacy is taught from an early age and public trust in institutions remains relatively high.

    France has also developed specific tools to counter foreign interference. Since 2021, the government unit Viginum has monitored digital manipulation campaigns, while officials plan to expand research programs and public awareness initiatives ahead of the 2027 election cycle.

    Source: Le Monde, W. Audureau. EU and France rethink their approach to combat Russian disinformation surge. [online] Published 4 March 2026. Available at: https://www.lemonde.fr/en/les-decodeurs/article/2026/03/04/eu-and-france-rethink-their-approach-to-combat-russian-disinformation-surge_6751095_8.html

    [CRC Glossary]

    The nature and sophistication of the modern information environment are projected to continue escalating in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cyber-based hostile influence campaigns 23rd February - 1st March 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] X Disinformation Spread on X After US and Israeli Strike on Iran White House Staffer Linked to Anonymous Pro-Trump X Account [State Actors] Russia   Escalating Hybrid Warfare Risks in Europe Russian Disinformation Wave Targeted the Olympics, France, and the Navalny Investigation Russian Disinformation Campaign Targeted Ukraine’s European Allies Russia Expands Influence in Africa Through Hybrid and Disinformation Tactics China Pro-China Bot Network Spread Propaganda on Threads Targeting Taiwanese Users Iran Pro-Iran Networks Mobilized Hacktivists and Media to Control War Narratives Hacktivist Activity Linked to Iran War [AI Related Articles] AI-Enabled Scams and State-Linked Influence Operations Exposed in 2026 Threat Report Quarterly NewsGuard Audit Found Leading AI Chatbots Repeat False Claims at High Rate [General Reports] AI-Driven Disinformation Amplifies Fear During Mexican Cartel Violence False Hostage Narrative Spread Widely After Killing of Cartel Leader The Islamic State’s Evolving Online Ecosystem in 2026 Online Amplification and Disinformation Escalated Leicester Unrest [CRC Glossary] [ Report Highlights] Russia has conducted increasingly aggressive hybrid warfare in NATO countries since its 2022 invasion of Ukraine, and according to a report by Recorded Future , it will likely intensify coordinated tactics such as cyberattacks, sabotage, propaganda, airspace violations, and energy pressure based on a New Generation 
Warfare doctrine. The Carnegie Endowment for International Peace report found that Russia advances its influence in Africa through coordinated security partnerships and strategic information campaigns that exploit local grievances and anti-Western sentiment, though its long-term impact is constrained by limited economic capacity and structural vulnerabilities. NewsGuard analysis shows that the Russian influence operation Storm-1516 has increasingly shifted its focus from Ukraine itself to European countries supporting Ukraine’s defense, particularly France and Germany. According to an analysis by Graphika , during the June 2025 war between Israel and Iran, pro-Iran media outlets, social media networks, and hacktivist groups coordinated efforts to shape public perception and promote unified narratives. OpenAI’s February 2026 update detailed how criminal networks and state-linked actors leveraged AI models to scale scams, social engineering, and coordinated influence operations, combining automated content generation with fake personas, cross-platform amplification, and targeted harassment campaigns. A report by the Independent Commission of Inquiry into the 2022 Leicester violence finds that coordinated online networks used misleading content and identity-based narratives to amplify and distort unrest in Leicester, escalating local tensions into a broader, polarized crisis. [Social Media Platforms] X Disinformation Spread on X After US and Israeli Strike on Iran Following announcements by Donald Trump about a major U.S. and Israeli military operation against Iran, a wave of misleading and false content rapidly spread on X. An analysis by WIRED reviewed hundreds of viral posts containing inaccurate claims about the scale and locations of the attacks. Many widely shared videos and images were old footage, taken from unrelated conflicts, altered, or generated with AI. In some cases, video game clips were circulated as supposed evidence of real combat events. 
Much of the viral misinformation came from accounts with paid verification badges, whose posts can generate revenue through engagement. Although some misleading posts later received contextual corrections through community notes, they often accumulated millions of views before being flagged. Examples included footage falsely claiming to show missiles over Dubai, alleged Israeli aircraft shootdowns without credible confirmation, and manipulated images depicting damage to Iranian or U.S. military assets. Source:  WIRED, D. Gilbert. X Is Drowning in Disinformation Following US and Israeli Attack on Iran . [online] Published 28 February 2026. Available at: https://www.wired.com/story/x-is-drowning-in-disinformation-following-us-and-israels-attack-on-iran/ Top Of Page White House Staffer Linked to Anonymous Pro-Trump X Account A WIRED report described how a popular pro-Trump account on X, known as “Johnny MAGA,” with nearly 300,000 followers, appeared to be run by a White House rapid-response staffer named Garrett Wade. The account has frequently amplified messaging from the White House and defended Donald Trump's policies while presenting itself online as an independent supporter. WIRED linked Wade to the anonymous account through publicly available records and a phone number associated with both him and the profile. The account has promoted administration narratives, attacked political opponents such as Gavin Newsom, and defended controversial content shared by Trump, including an AI-generated video depicting Barack Obama and Michelle Obama as apes. Disinformation researchers suggested that, if a government staffer is secretly operating an influential political account, it could blur the line between official government messaging and what appears to be organic digital support. Source:  WIRED, M. Kelly. A White House Staffer Appears to Run Massive Pro-Trump X Account . [online] Published 25 February 2026. 
Available at: https://www.wired.com/story/a-white-house-staffer-appears-to-run-massive-pro-trump-meme-page/ Top Of Page [State Actors] Russia Escalating Hybrid Warfare Risks in Europe Russia has conducted increasingly aggressive hybrid warfare in NATO countries since its 2022 invasion of Ukraine, and a report by Recorded Future’s Insikt Group  warns that it will likely intensify coordinated tactics such as cyberattacks, sabotage, propaganda, airspace violations, and energy pressure based on a New Generation Warfare doctrine. Unlike opportunistic actions, a full campaign would involve more frequent, simultaneous operations across multiple countries to undermine confidence in governments and strain NATO’s ability to respond. These tactics could target critical infrastructure, including communications cables, transportation, and energy systems, while expanding disinformation campaigns to deepen political divisions and reduce public trust. The report warned that both governments and private companies would face disruption, reputational risks, and economic damage. Russia may also exploit Europe’s remaining dependence on Russian oil and gas and use emerging technologies such as AI to enhance propaganda and cyber operations. Putin is likely to assess over the next three to five years whether conditions are favorable for a military incursion into NATO territory, particularly in Eastern Europe. Source:  Recorded Future, Insikt Group, Preparing for Russia’s New Generation Warfare in Europe . [online] Published 24 February 2026. Available at: https://www.recordedfuture.com/research/preparing-for-russias-new-generation-warfare-in-europe Top Of Page Russian Disinformation Wave Targeted the Olympics, France, and the Navalny Investigation Disinformation campaigns targeting the Olympics, France, and the Navalny investigation intensified in the week of 23 February 2026, as described in an EUvsDisinfo   article . 
On the Winter Olympics in Milan and Cortina, where Russia is banned from competing as a state due to doping scandals and its war against Ukraine, pro-Kremlin sources spread AI-enhanced false reports portraying Ukrainian athletes as aggressive or problematic. These narratives included fabricated news segments impersonating legitimate outlets such as Euronews, AFP, and CBC, with AI-generated voices and altered footage used to falsely claim that Ukrainian athletes were isolated from others because of “toxic behavior.” Regarding France, pro-Kremlin outlets falsely claimed that President Emmanuel Macron intended to ban virtual private networks after France adopted new rules restricting social media access for children under 15. In reality, Macron stated that VPNs would not be banned and could even be used to bypass the restrictions. Another disinformation effort sought to deflect attention from evidence that Russian opposition leader Alexey Navalny had been poisoned. Russian officials suggested that the findings of a multinational investigation were meant to distract from the release of files related to Jeffrey Epstein, despite no evidence linking the two events. In the past, pro-Kremlin FIMI outlets have alleged that US special services killed Navalny, that the UK probably assassinated Navalny, and that Navalny died after using an mRNA vaccine supplied by Germany, among many other deceptions. Source:  EUvsDisinfo, Russia’s quest for disinformation gold  [online] Published 27 February 2026. Available at: https://euvsdisinfo.eu/russias-quest-for-disinformation-gold/ Top Of Page Russian Disinformation Campaign Targeted Ukraine’s European Allies A report by NewsGuard  found that the Russian influence operation Storm-1516 has increasingly shifted its focus from Ukraine itself to European countries supporting Ukraine’s defense, particularly France and Germany. 
Since January 2025, the network has circulated dozens of fabricated claims targeting leaders such as Emmanuel Macron and Friedrich Merz, generating hundreds of millions of views on X. The campaign intensified as European countries increased financial and military aid to Ukraine following a sharp decline in U.S. support. The operation spread disinformation by publishing fabricated stories on fake websites that imitated legitimate news outlets, then amplified them through anonymous social media accounts and pro-Kremlin influencers. Many of the false narratives involved sensational personal accusations, fabricated documents, or manipulated videos aimed at damaging the reputations of political leaders and undermining trust in democratic institutions. In France, several false claims targeted Macron personally, including fabricated stories linking him to the Jeffrey Epstein scandal. In Germany, narratives portrayed Merz as corrupt or involved in criminal activity, while also spreading misleading claims about election integrity. Source:  NewsGuard, N. Huet & M. Wolf & E. Maitland, Russian Influence Campaign Shifts to Target Ukraine’s European Allies Funding its Defense . [online] Published 27 February 2026. Available at: https://www.newsguardrealitycheck.com/p/russian-influence-campaign-storm-1516-targets-france-germany Top Of Page Russia Expands Influence in Africa Through Hybrid and Disinformation Tactics Russia’s expanding footprint in Africa and the central role of influence operations in advancing Moscow’s strategic objectives were analyzed in a report by the Carnegie Endowment for International Peace . Beyond military deployments and security partnerships, Russia leverages coordinated information campaigns to portray itself as an anti-colonial partner and an alternative to Western engagement. 
Through state-backed media, diplomatic messaging, and amplification by local actors, pro-Kremlin narratives frame Western governments as destabilizing forces while depicting Russia as a defender of sovereignty and regime stability. These campaigns frequently exploit local grievances, governance failures, and dissatisfaction with Western security interventions to gain traction. Information manipulation operates alongside security assistance, elite cultivation, and economic arrangements. Russian-linked actors, including private military companies and affiliated media platforms, support friendly political elites while shaping public discourse to legitimize Moscow’s presence. Disinformation and narrative amplification are used to undermine Western credibility, reinforce anti-Western sentiment, and present Russian engagement as both pragmatic and ideologically aligned with African sovereignty. In fragile or conflict-affected states, this hybrid approach has enabled Russia to secure influence disproportionate to its economic footprint. However, the report highlighted significant constraints on Moscow’s long-term influence. Russia’s engagement is heavily concentrated in the security domain and relies on opportunistic partnerships rather than sustained institutional investment. Limited economic capacity, reputational risks associated with mercenary activity, and complex local political dynamics constrain the durability and scalability of its influence. Source : Carnegie Endowment for International Peace, N. Reynolds, Russia in Africa: Examining Moscow’s Influence and Its Limits . [online] Published 26 February 2026. 
Available at: https://carnegieendowment.org/research/2026/02/russia-role-west-southern-africa-junta-wagner-africa-corps Top Of Page China Pro-China Bot Network Spread Propaganda on Threads Targeting Taiwanese Users A coordinated influence campaign linked to pro-China actors using automated AI bots on Threads to spread propaganda aimed at Taiwanese audiences was uncovered in an investigation by NewsGuard.  The campaign involved at least 10 accounts posing as Taiwanese users. Automated bots can generate massive amounts of content and engagement in near-real time. As a result, despite the accounts having relatively small follower bases, some of their posts have gone viral. The bot accounts frequently posted in Traditional Chinese, following Taiwan’s standard conventions, a style of Chinese writing used uniquely in Taiwan for everyday communication. Their messages promoted narratives praising China while attacking Taiwan’s ruling Democratic Progressive Party and highlighting domestic grievances such as housing costs and governance. At the same time, the bots avoided politically sensitive topics banned by Beijing, including questions about the Tiananmen Square Massacre. NewsGuard became aware of the campaign when Threads users with advanced programming knowledge tricked the bots into revealing their true nature. In one instance, a bot repeatedly followed instructions to repeat a phrase verbatim, and in another, it identified itself as OpenAI’s AI assistant (ChatGPT). In response to a comment by a Meta spokesperson questioning the accounts’ influence, NewsGuard argued that the appropriate way to measure their impact is not by follower counts but by the accounts’ reach. Source : NewsGuard, C. Lin, Pro-China Influence Campaign Uses Automated Bots on Meta’s Threads to Target Taiwanese Users With Propaganda . [online] Published 25 February 2026. 
Available at: https://www.newsguardrealitycheck.com/p/automated-pro-china-bots-swarm-metas Top Of Page Iran Pro-Iran Networks Mobilized Hacktivists and Media to Control War Narratives Coordination among pro-Iran media outlets, social media networks, and hacktivist groups during the June 2025 war between Israel and Iran was examined in a Graphika analysis (also see CRC Weekly Update, 15th - 21st December 2025). After an initial delay, these actors amplified breaking-news-style claims, threats, and unverified “hack-and-leak” announcements designed to portray Iran as dominant online and to depict Israel as exposed and technically weak. Graphika highlighted the rapid promotion of a new actor, Cyber Isnaad Front, which was repeatedly boosted by outlets linked to the IRGC. The analysis documented a large wave of mostly organic pro-Iran hacktivist activity, including Handala Hack, Gaza Children Hackers, CyberToufan, and Homeland Justice, claiming dozens of intrusions, DDoS attacks, doxxes, and data dumps. Some disruptive operations (and counter-claims) were attributed to other actors, such as Gonjeshke Darande. State media amplification, for example, by Tasnim News, helped inflate the visibility of these personas, but Graphika stressed that most public claims remained unproven. Source:  Graphika, Everything Everywhere All at Once: The Pro-Iran Playbook for Narrative Control – Part 2 . [online] Published February 2026. Available at: https://public-assets.graphika.com/reports/Graphika_Report-Everything_Everywhere_All_at_Once_(part%202).pdf Top Of Page Hacktivist Activity Linked to Iran War CyberKnow posted that 60 hacktivist groups are engaged in activities related to the Iran-US/Israel war. Most of Iran's neighboring countries have been targeted. Pro-Russian groups have started to join the fight in support of Iran, and more are expected to follow. Source:  X, CyberKnow (@Cyberknow20). Post on X . [online] Published 2 March 2026. 
Available at: https://x.com/Cyberknow20/status/2028454796077019583 Top Of Page [AI Related Articles] AI-Enabled Scams and State-Linked Influence Operations Exposed in 2026 Threat Report The February 2026 OpenAI   update documented how threat actors across multiple regions leveraged AI models alongside traditional digital infrastructure to conduct scams, covert influence operations, and social engineering campaigns. The report highlighted that AI was typically integrated into broader operational workflows rather than acting as a standalone tool. Documented abuses include semi-automated romance and investment scams originating in Cambodia, fake "scam recovery" services impersonating U.S. law enforcement and law firms, and social engineering efforts likely linked to China targeting U.S. officials. These operations followed structured engagement funnels, often described as "ping, zing, sting", using AI-generated content to initiate contact, build emotional manipulation, and ultimately extract funds. Distribution tactics relied heavily on social media ads, messaging apps, fake websites, impersonation of trusted institutions, and multilingual content generation to increase credibility and reach. The report also detailed multiple covert influence operations linked to or aligned with the state. A Russia-linked content farm associated with the "Rybar" network generated multilingual social media posts and coordinated amplification across Telegram and X, including proposals for election interference campaigns in Africa. Another Russia-linked campaign targeted African audiences with long-form geopolitical articles under fabricated academic personas, seeking to criticize Western governments and legitimize Russian positions. Most significantly, a ChatGPT account linked to an individual associated with Chinese law enforcement documented and planned large-scale "cyber special operations" aimed at suppressing dissent globally. 
These activities included coordinated harassment, hashtag campaigns, impersonation of officials, creation of fake accounts, document forgery, doxxing, smear campaigns, and attempts to trigger platform enforcement actions against dissidents. Source:  OpenAI. Disrupting malicious uses of our model: An update . [online] Published February 2026. Available at: https://cdn.openai.com/pdf/df438d70-e3fe-4a6c-a403-ff632def8f79/disrupting-malicious-uses-of-ai.pdf Top Of Page Quarterly NewsGuard Audit Found Leading AI Chatbots Repeat False Claims at High Rate A quarterly audit by NewsGuard  examined the 11 leading generative AI tools and their propensity to repeat false claims on controversial topics in the news. In tests, major chatbots, including ChatGPT, Gemini, Copilot, Claude, Grok, and others, repeated false claims in 28.79 percent of responses to prompts about controversial news topics. The failure rate rose to 55.45 percent when prompts were designed to imitate malicious actors attempting to manipulate the systems. The audit also found that chatbots often relied on unreliable online sources, including social media posts tied to foreign influence operations. In one case linked to the Russian operation Internet Research Agency and its offshoot, Storm-1516, several chatbots repeated a false claim that Armenian Prime Minister Nikol Pashinyan allowed Turkish companies to buy discounted gold from the Amulsar mine. Models such as ChatGPT, Grok, and Mistral even cited posts on X that spread the disinformation, inadvertently increasing their visibility. In another case, an AI system incorrectly confirmed a deepfake video of women protesters attacking a paramilitary vehicle in Iran. Some models performed better than others, most notably Claude, which correctly debunked all tested claims. NewsGuard sent emails to the companies that own the tools, but did not receive responses. Source:  NewsGuard, I. Blachez. January 2026 — AI False Claim Monitor . [online] Published 25 February 2026. 
Available at: https://www.newsguardtech.com/wp-content/uploads/2026/02/January-2026-Quarterly-AI-Audit.pdf Top Of Page [General Reports] AI-Driven Disinformation Amplifies Fear During Mexican Cartel Violence Following a large-scale Mexican military operation targeting Jalisco New Generation Cartel leader Nemesio Oseguera Cervantes (“El Mencho”), Politico   reported that widespread violence across approximately 20 states was accompanied by a parallel surge of online disinformation. As roadblocks, explosions, and shootings unfolded, social media platforms were flooded with AI-generated and manipulated videos and images portraying exaggerated or fabricated scenes of national collapse, including false claims that Guadalajara’s international airport had been seized and set ablaze. According to data presented by Mexican authorities citing Tecnológico de Monterrey, between 200 and 500 problematic posts circulated in the immediate aftermath, with 35% to 40% lacking context, at least 25% deemed misleading, and nearly 25% identified as AI-manipulated or fabricated content. Several posts garnered more than 100,000 views, amplifying panic amid acute public uncertainty. The disinformation narratives included claims that a U.S. agent had strangled Oseguera Cervantes, that President Claudia Sheinbaum was hiding offshore, and that the killing was linked to unrelated geopolitical events. Although the university report did not attribute responsibility, organized crime experts suggested that actors linked to the Jalisco cartel may have contributed, noting the group’s prior investment in online influence and its growing technological sophistication. AI-generated footage, reportedly resembling drone imagery, was described as unusually polished for unfolding crisis conditions, enhancing its credibility and emotional impact. Authorities and the U.S. Embassy attempted to counter false claims, but residents reported difficulty distinguishing authentic reporting from fabricated content. 
The campaign significantly intensified public fear, reinforcing perceptions of chaos and demonstrating how digitally amplified disinformation can compound real-world violence during security crises. Source:  Politico, Associated Press. Online disinformation fueled panic after killing of Mexican drug lord . [online] Published 25 February 2026. Available at: https://www.politico.com/news/2026/02/25/online-disinformation-fueled-panic-after-killing-of-mexican-drug-lord-00799837 Top Of Page False Hostage Narrative Spread Widely After Killing of Cartel Leader NewsGuard   debunked a widely shared disinformation claim falsely alleging that a Mexican drug cartel had taken U.S. tourists hostage in Guadalajara, Mexico, following the death of cartel leader Nemesio Rubén Oseguera Cervantes. The rumor spread rapidly across social media, gaining nearly 27 million views. Influencers and websites claimed that members of the Jalisco New Generation Cartel had kidnapped an unspecified number of American tourists from airports, hotels, and highways. The narrative was amplified by prominent figures, including Laura Loomer, whose posts on X reused real footage of chaotic scenes at Guadalajara airport but misrepresented them as evidence of cartel kidnappings. However, authorities found no proof that any tourists were taken hostage. The Mexican Embassy in the United States publicly denied the claims, stating that there had been no attacks on civilians or hostage situations at the airport. Source:  NewsGuard, L. Marchl, Debunk: No Laura Loomer, Claims that the Mexican Cartel Jalisco Took U.S. Tourists Hostage Are Baseless . [online] Published 26 February 2026. 
Available at: https://www.newsguardrealitycheck.com/p/us-tourists-held-hostage-in-mexico Top Of Page The Islamic State’s Evolving Online Ecosystem in 2026 A report by the Institute for Strategic Dialogue (ISD)  found that the digital ecosystem supporting the Islamic State continued to adapt and expand across multiple platforms in 2026 despite ongoing moderation efforts. Supporters increasingly used AI tools to convert written propaganda into audio, produce news-style content, and recreate deceased ideologues in videos. Researchers monitored around 500 accounts, channels, and websites across platforms such as Facebook, TikTok, Instagram, WhatsApp, Telegram, Element, and SimpleX Chat. These interconnected networks enabled supporters to quickly recreate accounts after removals and to coordinate activities across platforms. The report highlighted how IS supporters exploited platform features to spread propaganda, recruit followers, and, in some cases, monetize content. Influencers and unofficial support groups often acted as hubs that redirected audiences between mainstream social media, encrypted messaging services, and niche platforms. On platforms such as Facebook and Instagram, some accounts used "digital creator" features to boost reach, while on TikTok, propaganda was sometimes repackaged in gamified formats inspired by games like Roblox and Minecraft to attract younger audiences. Overall, the IS online ecosystem remained resilient due to its decentralized and cross-platform structure. The report recommended stronger cross-platform cooperation, more specialized moderation in different languages, and improved detection tools to address evolving tactics used by extremist networks online. Source:  ISD, M. Ayad, Coordinating Through Chaos: The State of the Islamic State Online in 2026 . [online] Published February 2026. 
Available at: https://www.isdglobal.org/wp-content/uploads/2026/02/Coordinating-through-Chaos-FINAL.pdf Top Of Page Online Amplification and Disinformation Escalated Leicester Unrest Unrest in Leicester and its rapid transformation into a highly polarized national and transnational controversy through coordinated online amplification and misleading narratives were examined in a report  by the Independent Commission of Inquiry into the 2022 Leicester violence . Localized incidents were reframed on social media as evidence of deep-seated ethnic and religious conflict, with posts frequently relying on decontextualized footage, recycled videos, and unverified claims. This content circulated widely before accurate information could stabilize public understanding, contributing to confusion and heightened tensions. The study identifies the role of organized digital networks, including politically aligned influencers and diaspora-linked accounts, in accelerating and legitimizing inflammatory narratives. Through synchronized posting patterns, emotionally charged language, and identity-based framing, these actors amplified selective content to portray events as part of a broader civilizational struggle. Algorithm-driven engagement further boosted sensational material, enabling misleading or exaggerated claims to reach large audiences quickly and repeatedly. The report underscores how digital manipulation shaped public perception as much as, if not more than, on-the-ground developments. By exploiting platform dynamics and social divisions, coordinated actors escalated a local dispute into a broader flashpoint, undermining trust in authorities and deepening polarization. The findings illustrate how hostile or opportunistic influence tactics can weaponize social media ecosystems to magnify unrest and distort democratic discourse. Source:  School of Oriental and African Studies (SOAS) - University of London, Independent Commission of Inquiry into the 2022 Leicester violence, C. 
Bhatt & V. Puri & S. F. Kamal & S. Patel & S. Sinha & S. Abraham-Gonsalves & S. Grover & L. Magarrell & J. E. Méndez & N. Sultan. Better Together: Understanding the 2022 Violence in Leicester . [online] Published 1 January 2026. Available at: https://static1.squarespace.com/static/6505d742fdd85426286c1396/t/699c3be7e6c285093f8f2541/1771846631816/Report+Leicester.pdf Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cyber-based hostile influence campaigns 16th - 22nd February 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [State Actors] Russia   Recurring False Narratives Used by the Kremlin War in Ukraine Kremlin Rewrites Ukraine’s Identity China Chinese Globalization of State-Directed Disinformation [AI Related Articles] Audit Found High Disinformation Rates in AI Voice Responses AI Revives Racist Stereotypes and Misinformation [General Reports] False Claim that Mamdani Allowed Five Daily Muslim Prayer Broadcasts in NYC Uganda's 2026 Election Marked by Repression and Internet Shutdown Voters Misidentify Political Disinformation UN Committee Condemns Disinformation Campaign Targeting Special Rapporteur AI-Generated Disinformation Surges Ahead of Brazil’s Elections Australian Senate Probes AI-Driven Climate Disinformation and Fossil Fuel Influence Networks EU Warns of Coordinated Disinformation Targeting West Africa [Appendix - Frameworks to Counter Disinformation] The French response to Digital Foreign Interference Questionable Efforts to Fight Online Financial Scams Building Immunity Against the Disinformation ‘Virus’ EU and Spain Escalate Fight Against Climate Disinformation [CRC Glossary] [ Report Highlights] A report by EU vs. Disinfo described how Russia has used disinformation as a long-term political tool to reshape Ukrainian history and identity. Research by the China Media Project argued that China’s rapid expansion in artificial intelligence is closely linked to its long-standing strategy of information control and global influence. 
A study by Harvard Kennedy School’s Misinformation Review examined how Australians identified political misinformation during the 2025 federal election. In a post on X, the European Union in Ghana warned that coordinated disinformation campaigns, documented across 13 West African countries, including Ghana, pose a destabilizing regional threat and require heightened public awareness. According to an article by the United Nations Office for Disaster Risk Reduction (UNDRR), following climate-related disinformation after a major blackout, the EU and Spain advanced regulatory and legal measures, including enforcement of the Digital Services Act and endorsement of the UN Declaration on Climate Information Integrity, to curb algorithmic amplification and hold platforms accountable for spreading false climate narratives. [State Actors] Russia Recurring False Narratives Used by the Kremlin Since its full-scale invasion of Ukraine in 2022, Russia has intensified its disinformation campaigns, using false narratives as a central tool in its confrontation with Ukraine and the West. In an article, EU vs. Disinfo presented recurring false narratives used by the Kremlin. One key claim falsely portrayed the European Union as prolonging the war out of "Russophobia" and military ambition, while another framed the conflict as a NATO proxy war against Russia, presenting Moscow as the victim rather than the aggressor. The Kremlin also promoted misleading narratives that EU sanctions have devastated Europe’s economy, despite evidence that sanctions were imposed in response to Russia’s invasion and have also strained Russia’s own economy. Additional disinformation sought to discredit Ukraine by portraying its leadership as corrupt, illegitimate, or even "Nazi", often relying on fabricated stories and fake sources. Source: EUvsDisinfo, What the Kremlin wants you to believe about its war against Ukraine. [online] Published 20 February 2026. 
Available at: https://euvsdisinfo.eu/what-the-kremlin-wants-you-to-believe-about-its-war-against-ukraine/ Top Of Page The War in Ukraine Kremlin Rewrites Ukraine’s Identity In a detailed analysis, EUvsDisinfo highlighted how Russia has used disinformation as a long-term political tool to reshape Ukrainian history and identity. The Kremlin has repeatedly distorted historical facts, manipulated archives, and reframed narratives to fit an imperial vision. Ukrainian aspirations for freedom and independence are treated in Moscow not as historical facts but as provocations. This strategy continues a pattern from both imperial and Soviet times, when Ukrainian distinctiveness was absorbed into a broader Russian story. A key example is Putin’s 2025 Decree No. 858, which sets a goal for 95% of residents in occupied Ukrainian regions to identify as Russian by 2036. While written in neutral bureaucratic language, the decree reflects a clear disinformation objective: to redefine identity through policy. After 2014, Russian narratives shifted from describing Ukrainians and Russians as “brotherly peoples” to claiming they are “one people,” effectively denying Ukraine’s separate existence. Any resistance to this narrative is blamed on interference by a hostile “collective West.” In this framework, control over territory is matched by control over memory, education, and public symbols. Disinformation is used not only to justify military actions but also to reshape identity itself. Source: EUvsDisinfo, Total Recall: How Russia tried to erase the Ukrainian identity. [online] Published 22 February 2026. Available at: https://euvsdisinfo.eu/total-recall-how-russia-tried-to-erase-the-ukrainian-identity/ Top Of Page China Chinese Globalization of State-Directed Disinformation Research by the China Media Project argues that China’s rapid expansion in artificial intelligence is closely linked to its long-standing strategy of information control and global influence. 
The Chinese Communist Party (CCP) views AI not only as a driver of economic and technological power but also as a tool for shaping international narratives. Chinese large language models (LLMs) are increasingly adopted worldwide due to their high performance and low cost, yet testing shows that many embed “information guidance” aligned with official state positions. This goes beyond simple censorship, as models subtly promote pro-government narratives on issues such as human rights, territorial claims, religion, and China’s global reputation. The spread of these models creates new risks of foreign information manipulation and interference (FIMI). Developers building on Chinese foundation models often struggle to remove embedded narrative biases, leading users to encounter CCP-aligned messaging without realizing its origin. In some cases, governments in countries such as India, Malaysia, and the UAE have adapted Chinese-based chatbots to restrict discussion of domestic human rights issues. The report also highlighted security concerns. Open-source Chinese models have shown vulnerability to “jailbreaking,” allowing users to extract dangerous information, including instructions related to weapons or toxic substances. At the same time, the geopolitical risks of narrative influence remain underexamined compared to debates over technical AI safety. The authors concluded that without stronger transparency rules and awareness of disinformation tactics embedded in AI systems, the global adoption of Chinese models could normalize state-directed information control and weaken information integrity worldwide. Source: China Media Project, A. Colville & Y. Mengli & J. Haime & D. Bandurski, Guided Intelligence: China’s AI Strategy and the Global Information Space. [online] Published 2025. 
Available at: https://mpf.se/download/18.4626748819c47b558e3716/1770885147750/Guided-Intelligence.pdf Top Of Page [AI Related Articles] Audit Found High Disinformation Rates in AI Voice Responses An audit  by NewsGuard  found that OpenAI’s ChatGPT Voice and Google’s Gemini Live frequently generated convincing audio content repeating false claims—45 percent of the time for ChatGPT and 50 percent of the time for Gemini Live when prompted to do so. These AI voice tools were especially vulnerable when asked to produce radio-style segments advancing disinformation, including pro-Kremlin narratives and fabricated political stories. The audit tested the bots using false claims related to health, politics, global news, and foreign disinformation. Both ChatGPT Voice and Gemini Live were more likely to repeat political and foreign disinformation than health falsehoods, sometimes presenting fabricated events as breaking news. In contrast, Amazon’s Alexa+ refused to repeat any false claims and often explained why they were incorrect, achieving a 0 percent failure rate. Source:  NewsGuard, I. Blachez & I. Chomnalez & L. Marchl, Risky AI: ChatGPT and Gemini Readily Produce False Audio Claims, While Alexa+ Declines . [online] Published 19 February 2026. Available at: https://www.newsguardtech.com/special-reports/chatgpt-and-gemini-readily-produce-false-audio-claims-while-alexa-declines/ Top Of Page AI Revives Racist Stereotypes and Misinformation Highlighting the darker side of technological progress, The Conversation   reported  that in 2025, advances in artificial intelligence transformed public communication while also accelerating the spread of disinformation rooted in racism. AI-generated content, including lifelike videos produced by tools such as Sora, enabled harmful narratives to circulate faster than fact-checkers could respond. During the U.S. 
government shutdown, fabricated videos depicting Black women abusing Supplemental Nutrition Assistance Program (SNAP) benefits went viral. Although clearly labeled as AI-generated, many viewers accepted them as truthful because they reinforced the long-standing racist trope of the "Black welfare queen." These clips, described as "digital blackface," demonstrate how AI can modernize and amplify deeply embedded stereotypes. A similar pattern emerged in the revival of a Minnesota fraud case involving members of the Somali community. Despite prior legal resolutions, political rhetoric reignited the story, and AI-generated videos spread caricatures such as the "Somali pirate" narrative. Rather than encouraging informed debate about fraud or policy, the disinformation campaign fueled anti-Black, anti-immigrant, and Islamophobic sentiments. Even when investigations disproved widespread wrongdoing, the false framing persisted. The article argued that AI does not create racism but accelerates its reach. Fact-checking alone proves insufficient when audiences are predisposed to believe familiar stereotypes. Disinformation succeeds because it taps into existing racial anxieties and political agendas, transforming old prejudices into viral digital spectacles. Source: The Conversation, N. N. Ali, How AI resurrects racist stereotypes and disinformation — and why fact-checking isn’t enough. [online] Published 22 February 2026. Available at: https://theconversation.com/how-ai-resurrects-racist-stereotypes-and-disinformation-and-why-fact-checking-isnt-enough-270000 Top Of Page [General Reports] False Claim that Mamdani Allowed Five Daily Muslim Prayer Broadcasts in NYC NewsGuard identified a widely shared allegation as its False Claim of the Week: that Zohran Mamdani allowed mosques in New York City to broadcast the Islamic call to prayer five times daily, starting at 5 a.m. 
Users widely circulated two videos of New York City, with the call to prayer heard in the background, claiming the scenes showed the Adhan being broadcast at 5 a.m., thereby disturbing residents. The claim spread rapidly on social media, driven by anti-Islam activists such as Amy Mek and commentators including Eric Daugherty of Florida’s Voice News, gaining millions of views. In reality, there was no evidence that Mamdani introduced any such change. The existing policy, introduced in 2023 under former mayor Eric Adams, allows mosques to broadcast the call to prayer without a permit only once on Friday afternoons and at sunset during Ramadan. NewsGuard found no official announcements, media reports, or public complaints indicating any expansion of broadcasts. Even the viral videos appeared inconsistent with winter conditions and sunrise times. Mamdani’s press office did not respond to requests for comment, and a representative of the NYPD declined to comment. Source:  NewsGuard, L. Marchl, Debunk: Mamdani Did Not Permit Mosques to Broadcast Calls to Prayer Five Times a Day . [online] Published 2026. Available at: https://www.newsguardrealitycheck.com/p/mamdani-lets-nyc-mosques-broadcast Top Of Page Uganda's 2026 Election Marked by Repression and Internet Shutdown Examining the political climate surrounding Uganda’s January 2026 presidential election, the London School of Economics and Political Science (LSE)   reported  that the vote was characterized by repression, silence, and widespread disillusionment rather than the energetic opposition seen in previous cycles. Long-time President Yoweri Museveni secured a seventh term with 71.65 percent of the vote, but turnout fell to just 52.5 percent, reflecting deep public distrust. The campaign period saw intensified judicial harassment, detention of opposition figures, and the forced hiding or exile of critics, including opposition leader Bobi Wine. 
A key feature of the election was a nationwide internet shutdown from 13 to 18 January 2026, which cut off more than 45 million people. Authorities justified the blackout as a measure against misinformation, but critics said it was intended to suppress dissent and disrupt opposition coordination. The shutdown severely affected political participation, economic activity, and the ability to report human rights abuses, while exemptions for banks and government services ensured economic continuity and reduced backlash from elites. Marginalized groups dependent on mobile money were hit hardest. Source:  London School of Economics and Political Science, M. Daka & K. Rukirabashaija, The 2026 Ugandan election was marked by digital strangulation rather than hope . [online] Published 20 February 2026. Available at: https://blogs.lse.ac.uk/africaatlse/2026/02/20/the-2026-ugandan-election-was-marked-by-digital-strangulation-rather-than-hope/ Top Of Page Voters Misidentify Political Disinformation A study  published by the Harvard Kennedy School’s (HKS)  Misinformation Review examined how Australians identified political misinformation during the 2025 federal election. Using a digital diary method, 38 participants collected 254 examples they believed were misinformation. However, independent fact-checking revealed a significant gap: only 10 percent of the examples were found to be false, while 37 percent were verified as true. Participants most often labeled content as misinformation when it contradicted their existing beliefs (28 percent) or appeared biased (22 percent). Very few (4 percent) relied on external fact-checking. The findings suggested that perceptions of disinformation are strongly shaped by partisan worldviews rather than objective verification. The research also showed that most people do little when encountering suspected misinformation. The most common response was “doing nothing” (44 percent), followed by discussing it with others (21 percent). 
Information overload, political burnout, and low media literacy discourage active fact-checking. Psychological factors, such as the “illusory truth effect,” in which repeated claims feel true, further complicate the landscape. Participants were most confident in judging content false when it conflicted with their prior knowledge, even when professional fact-checkers later confirmed the information was accurate. Nearly half of the examples identified as misinformation were found on social media, and many originated from political actors or paid advertisements. The study highlighted a critical problem: what citizens perceive as disinformation often does not align with factual assessments. This mismatch allows misleading claims to circulate freely during elections, as voters rarely verify information and frequently reject accurate reporting that challenges their beliefs. Source:  Harvard Kennedy School, S. Park & J. Y. Lee & K. McGuinness & C. Fisher & J. Fulton, People rely on their existing political beliefs to identify election misinformation . [online] Published 2026. Available at: https://misinforeview.hks.harvard.edu/article/people-rely-on-their-existing-political-beliefs-to-identify-election-misinformation/ Top Of Page UN Committee Condemns Disinformation Campaign Targeting Special Rapporteur The Coordination Committee of the Special Procedures of the UN Human Rights Council (OHCHR)   condemned  what it described as a coordinated disinformation campaign by several State Ministers targeting Francesca Albanese, the Special Rapporteur on human rights in the Palestinian territories occupied since 1967. The Committee said ministers relied on “manufactured facts,” attributing statements to Albanese that she did not make at the 17th Al Jazeera Forum, as part of a broader effort to discredit her work. 
It linked these actions to an organized campaign that previously resulted in what it called illegal sanctions against Albanese, with serious personal consequences, and warned of a growing pattern of politically motivated and malicious attacks against independent human rights experts, UN officials, and international judges engaged in accountability efforts. The Committee emphasized that Albanese’s mandate is to investigate alleged violations of international law in the occupied Palestinian territories and argued that calls for her resignation, alongside persistent intimidation and coordinated personal attacks, undermine independent oversight mechanisms. It urged states to redirect their efforts toward accountability for alleged war crimes and crimes against humanity, rather than targeting investigators, and framed the attacks as contributing to a broader erosion of respect for international law and institutional integrity. Source:  OHCHR, Special Procedures’ Coordination Committee condemns attacks against Francesca Albanese by State representatives: “Promote justice, not disinformation” . [online] Published 17 February 2026. Available at: https://www.ohchr.org/en/press-releases/2026/02/special-procedures-coordination-committee-condemns-attacks-against-francesca Top Of Page AI-Generated Disinformation Surges Ahead of Brazil’s Elections LatAm Journalism Review (LJR)   reported  that Brazil is experiencing a sharp rise in AI-generated disinformation, characterized by hybrid content that blends fabricated elements with authentic media to enhance credibility and evade detection. A recent example involved a TikTok video falsely claiming that a new virus had reached Brazil ahead of Carnaval, created by stitching an AI-generated news anchor onto a legitimate broadcast while omitting contextual details that contradicted the alarmist claim. 
Fact-checkers identified the manipulation, but the case illustrates how increasingly sophisticated AI tools enable the production of realistic, emotionally charged narratives. According to Observatório Lupa’s February report, AI-generated fake content has increased by 308 percent since 2024, shifting from predominantly scam-related material to politically focused disinformation ahead of Brazil’s October general elections. Researchers note that these productions often combine compelling visuals, fabricated audio, and selective editing to distort public understanding. Experts warn that the accessibility of generative AI tools, integrated into mainstream platforms and requiring minimal technical expertise, has dramatically lowered the barrier to producing manipulated content, including fake interviews, simulated conversations, and fabricated “selfie” videos attributed to public figures. Detection tools struggle to keep pace with the realism of these outputs, complicating efforts by media organizations and fact-checking teams. Brazil’s Superior Electoral Court is considering financial penalties for AI-generated fake news, reflecting institutional concern about electoral manipulation. However, analysts caution that traditional rebuttals may fail to counter viral falsehoods effectively, as AI-driven disinformation thrives on speed, emotional appeal, and algorithmic amplification that often outpaces corrective communication. Source : LatAm Journalism Review, L. Coelho, AI-fueled disinformation surges ahead of Brazil’s elections . [online] Published 18 February 2026. 
Available at: https://latamjournalismreview.org/articles/ai-fueled-disinformation-surges-ahead-of-brazils-elections/ Top Of Page Australian Senate Probes AI-Driven Climate Disinformation and Fossil Fuel Influence Networks ABC reported that Australia’s Senate Select Committee on Information Integrity heard evidence during its hearings that coordinated disinformation campaigns, amplified by AI tools, bots, trolls, and engagement-driven social media algorithms, are delaying action on climate and renewable energy. Witnesses described how “AI-slopaganda,” astroturfing, and opaque third-party political campaigning contribute to the pollution of online information channels. Human Rights Commissioner Lorraine Finlay warned that algorithms prioritizing engagement over accuracy elevate extreme or sensational content, while foreign interference, deepfakes, and emerging technologies further erode public trust. Meta representatives stated that the company removes bots and dismantles coordinated inauthentic behavior but does not censor politicians unless they incite violence, prompting concern from senators about the platform’s responsibility in limiting harmful falsehoods. The hearings also examined fossil fuel-linked lobbying and influence networks. Coal Australia defended transferring nearly $4 million, approximately 95 percent of Australians for Prosperity’s funding, to a third-party group that campaigned against Labor, the Greens, and independent candidates during the 2025 federal election, rejecting claims of astroturfing despite questions over transparency and voter awareness. Academic testimony highlighted the role of the Atlas Network, a global consortium of more than 500 free-market think tanks, in supporting decades-long, fossil fuel-funded campaigns promoting climate denial and anti-renewable narratives while obscuring funding origins. 
The inquiry further underscored how public debate is distorted by persistent climate science denial, exemplified by exchanges rejecting scientific consensus, and how correcting misinformation is resource-intensive compared to producing it. The committee’s findings point to a complex ecosystem in which digital manipulation, coordinated influence networks, and political amplification collectively undermine climate policy discourse and democratic trust. Source:  ABC, G. Hutchens. Senate committee features climate disinformation, the Atlas Network, and Dr Karl’s clash with One Nation . [online] Published 18 February 2026. Available at: https://www.abc.net.au/news/2026-02-19/climate-disinformation-senate-committee-dr-karl-atlas-network/106350002 Top Of Page EU Warns of Coordinated Disinformation Targeting West Africa In a post  on X, the European Union in Ghana  stated that disinformation campaigns constitute a documented and ongoing risk across West Africa, noting that Ghana is among 13 countries in the region affected by at least 72 identified campaigns aimed at destabilization. The message frames these efforts as deliberate actions by “bad actors,” highlighting the scale and coordinated nature of the threat to regional stability. By emphasizing that awareness matters, the post underscores the importance of public recognition of hostile information activities as a first line of defense. The communication situates disinformation as a systemic regional challenge rather than isolated incidents, reinforcing the need for vigilance against organized influence operations targeting West African states. Source:  X, European Union in Ghana (@EuropeInGhana) [online] Published 23 February 2026. 
Available at: https://x.com/EuropeInGhana/status/2025937364342681926 Top Of Page [Appendix - Frameworks to Counter Disinformation] The French response to Digital Foreign Interference In a report , the Australian Strategic Policy Institute (ASPI)  argued that the Western alliance faces a growing threat not only from adversaries such as Russia but also from misinformation circulating within allied political ecosystems. Examples included Trump-aligned accounts spreading false claims about France, questioning Europe’s military value, and misrepresenting NATO’s collective role in Afghanistan. In response, France developed a proactive strategy to counter information manipulation. It established VIGINUM, a specialized agency that monitors and exposes foreign digital interference and trains diplomats to actively engage in information battles. French authorities also launched the “French Response” social media account, which uses humor, irony, and facts to debunk false claims and reach wider audiences. In addition, legal action against platforms accused of enabling political interference signaled France’s willingness to hold technology companies accountable and defend its information space. The report concluded that France’s approach offers lessons for allies facing similar challenges. Effective responses include monitoring misinformation from both adversaries and partners, coordinating factual rebuttals, and reinforcing shared values such as burden-sharing within alliances like NATO. Middle powers, including Australia, can support efforts to build information resilience by sharing their experience of what works. Source:  Australian Strategic Policy Institute, E. Frécon & Fitriani, France strikes to address misinformation weakening Western alliance . [online] Published 20 February 2026. 
Available at: https://www.aspi.org.au/strategist-posts/france-strikes-to-address-misinformation-weakening-western-alliance/ Top Of Page Questionable Efforts to Fight Online Financial Scams The European Commission announced that it had launched formal investigations under the Digital Services Act into major online platforms such as Apple, Google, Microsoft, and Booking[.]com to assess their efforts to prevent financial scams. The move came amid a sharp rise in online fraud, with billions of dollars lost annually in the United States alone. Regulators are concerned that fraudulent app store products, fake hotel listings, and manipulated search results are enabling large-scale deception. While the EU’s approach seeks accountability from very large online platforms, critics question whether focusing primarily on these companies addresses the deeper roots of the problem. A major driver of today’s fraud surge is AI-powered disinformation, particularly deepfakes and voice cloning. The number of deepfake files online has grown dramatically, and such tools are increasingly used to impersonate executives, clone voices, and create convincing fake websites. In this environment, disinformation is not just political: it has become a powerful tool for financial crime. The essay argued that responsibility should extend beyond large platforms to the broader digital infrastructure where fraudulent content originates. Online marketplaces, domain registrars, and hosting services often serve as the entry points for scammers but face less scrutiny. A stronger focus on proactive identity verification could disrupt fraud at its source. Source: The Hill, I. Bondar-Mucci, Efforts to fight online financial scams are missing the mark. [online] Published 22 February 2026. 
Available at: https://thehill.com/opinion/technology/5748005-online-platforms-fraud-prevention/ Top Of Page Building Immunity Against the Disinformation ‘Virus’ An article by the University of Lodz compared disinformation to a virus that spreads rapidly through everyday information channels. According to Dr. Krzysztof Grzegorzewski of the University of Lodz, disinformation is the deliberate spread of false information designed to appear true, while misinformation is shared in good faith, and malinformation involves the selective or harmful use of real facts. Unlike harmless gossip, disinformation is often intentional and strategically amplified, functioning like a “megaphone” that manipulates perception and distorts public debate. The spread of this “information virus” is accelerated by clickbait and platform algorithms. Sensational headlines trigger dopamine-driven curiosity, encouraging users to click and consume more emotionally charged content. Algorithms then personalize and intensify this exposure, making disinformation travel faster than ever. Because technology evolves more quickly than human habits, individuals often struggle to adapt, allowing false or misleading narratives to circulate widely before they are critically examined. To counter this pandemic of fake news, the expert emphasized early media education, critical thinking, and consistent fact-checking. Reliable journalism and science communication are presented as essential “vaccines” that strengthen public resilience. Initiatives such as “Science Inspires” promote clear, contextualized knowledge rather than sensationalism, helping audiences distinguish trustworthy information from manipulative content. Source:  University of Lodz, Disinformation Is a Virus. How to Fight the Fake News Pandemic? Polish Science Day . [online] Published 2026. 
Available at: https://www.uni.lodz.pl/en/news/details/disinformation-is-a-virus-how-to-fight-the-fake-news-pandemic-polish-science-day Top Of Page EU and Spain Escalate Fight Against Climate Disinformation Climate-related disinformation spread rapidly online after a ten-hour power outage in the Iberian Peninsula on April 28, 2025, the United Nations Office for Disaster Risk Reduction (UNDRR) reported , echoing earlier false narratives that blamed renewable energy for the 2021 Texas blackout. The article argues that such disinformation is deliberately engineered by public relations actors and professional propagandists and amplified by social media platform algorithms that prioritize engagement-driven content, including outrage and falsehoods. It frames climate disinformation as both a commercial and political enterprise, alleging that Big Tech platforms profit from algorithmic amplification while coordinated actors exploit crises to undermine trust in public institutions, science, and democratic governance. The piece also claims that European Union digital accountability efforts, particularly the Digital Services Act (DSA), are facing political pushback from MAGA-aligned U.S. lawmakers seeking to weaken regulatory safeguards. In response, the European Union’s endorsement of the UN Declaration on Information Integrity on Climate Change is presented as a structural countermeasure, committing governments to protect science-based information, support independent journalism, and counter disinformation that distorts public understanding of climate issues. The article highlights enforcement actions under the DSA, fossil fuel advertising bans in cities such as The Hague, Amsterdam, and Florence, and Spain’s proposed reforms to hold platform executives legally accountable, criminalize algorithmic amplification of illegal content, monitor polarization, and enhance protections for minors. 
These measures are portrayed as a coordinated effort by European actors to shift from reactive moderation to systemic accountability, positioning digital regulation as a central tool in countering climate-related disinformation and safeguarding democratic stability. Source: UNDRR, D. Schran & P. Newell, Spain to protect public against climate disinformation. [online] Published 19 February 2026. Available at: https://www.preventionweb.net/news/spain-protect-public-against-climate-disinformation Top Of Page [CRC Glossary] The modern Information Environment is projected to grow steadily more complex and sophisticated. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cyber-based hostile influence campaigns 9th - 15th February 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Facebook Meta's Community Notes System Showed Limits in Countering Misleading Content X X Users Asked Grok to Unblur Photos of Children from Epstein Files Grok Raised Regulatory Concerns About Misinformation and Platform Risk Iranian Officials Received Premium Services on X During Protest Crackdown [State Actors] Russia   The Russian Federation Used Video Games to Spread State Propaganda Russian Influence Operation Targeted Ukrainians During the Winter Olympics Check First Unveiled GRU's Information Operations Troops with OSINT and Phaleristics France's Municipal Elections and the Risk of Russian Influence Disinformation Continued After the New START Treaty Expired Report Attributes Coordinated Disinformation Campaigns to Russian State-Aligned Actors [General Reports] False Allegations Labeled Epstein an Israeli Spy Epstein Document Dump Revived the Long-Debunked Pizzagate Conspiracy Afghan Taliban Launched Digital Campaign Against Pakistan After Alleged Airstrikes India Mandates AI Content Labeling and Rapid Takedowns to Curb Synthetic Misinformation Foreign Hybrid Campaigns Intensify Across the European Union TikTok Algorithms Amplify Disinformation and Youth Vulnerabilities in Taiwan and Myanmar G7 Ranks Cyberattacks and Disinformation Among Top Security Threats in 2026 Index Coordinated Disinformation Campaign Fuels Anti-Migrant Vigilantism in Poland [Appendix - Frameworks to Counter Disinformation] Design Thinking Workshop 
Informed New Approaches to Counter Disinformation and FIMI EU Launches Initiative to Counter Online Polarization and Disinformation [CRC Glossary] [Report Highlights] An article by Bellingcat reports that, after the US Department of Justice released 3.5 million pages of documents related to convicted sex offender Jeffrey Epstein, users on X began asking Grok to “unblur” images in which the faces of children and women had been redacted for privacy.
As published by Rand, controversy surrounding Grok has drawn regulatory scrutiny from Ofcom and the European Commission, highlighting growing concerns about misinformation and systemic platform risks.
NewsGuard’s Reality Check reports that a Kremlin-linked influence operation known as Matryoshka has circulated at least 28 fabricated news reports targeting Ukraine and Ukrainians during the 2026 Winter Olympics in Italy.
A report by the Ukrainian Centre for Strategic Communications and the NATO Strategic Communications Centre of Excellence concludes that coordinated, state-aligned Russian actors conducted structured, cross-platform disinformation campaigns that employed forged content, narrative laundering, and synchronized amplification to undermine Ukraine and advance the Kremlin's geopolitical objectives.
The latest Institute for Strategic Dialogue (ISD) publication finds that Russia, alongside China and Iran, is conducting coordinated hybrid campaigns across the EU that combine disinformation, cyberattacks, political subversion, and kinetic actions to undermine democratic institutions, inflame societal divisions, and weaken European unity.
The 2026 Munich Security Index, as covered by Infosecurity Magazine, shows that G7 nations rank cyberattacks as their top security threat for the second consecutive year, with disinformation campaigns also identified as a leading concern, underscoring heightened awareness of digital and information-based hostile influence risks. 
[Social Media Platforms] Facebook Meta's Community Notes System Showed Limits in Countering Misleading Content An article by Politico  reports that, in the wake of the fatal shootings of Alex Pretti and Nicole Renee Good in Minneapolis, Meta’s new volunteer-driven fact-checking system faced a major test as misleading and AI-generated posts flooded Facebook, Instagram, and Threads. The company adopted this model after its CEO, Mark Zuckerberg, ended partnerships with professional U.S. fact-checkers, citing perceived political bias. Inspired by X’s Community Notes system, Meta decided to rely on volunteers to propose and rate notes that add context to questionable posts. Only notes that receive broad support across differing viewpoints are published, meaning users typically see comments that reflect cross-partisan agreement. However, behind the scenes, the process is often marked by partisan conflict. A review of Meta’s closed rollout found that many proposed notes were themselves politically charged, echoing concerns raised by professional fact-checkers that volunteers may be motivated by ideology rather than neutrality. While overtly one-sided notes are usually filtered out by low ratings, critics question whether the system can effectively counter misinformation, especially on platforms such as Facebook, where content spreads within private networks rather than in high-visibility public debates. Meta said the program is expanding rapidly, with tens of thousands of contributors and thousands of published notes. Yet only a small fraction of proposed notes ultimately appear publicly. Source:  Politico, A. Ng., Meta’s answer to ‘politically biased’ fact-checkers: Let them fight . [online] Published 10 February 2026. 
Available at: https://www.politico.com/newsletters/digital-future-daily/2026/02/10/metas-answer-to-politically-biased-fact-checkers-let-them-fight-00773633 Top Of Page X X Users Asked Grok to Unblur Photos of Children from Epstein Files In coverage by Bellingcat , users on X were reported to have asked Grok to “unblur” images after the US Department of Justice released 3.5 million pages of documents related to convicted sex offender Jeffrey Epstein, in which the faces of children and women had been redacted for privacy. The documents were already criticized by survivors, 18 of whom issued a joint statement condemning the release for exposing identifying details. Thousands of documents were later removed due to flawed redactions that reportedly compromised the identities of nearly 100 survivors. Between the 30th of January and the 5th of February 2026, researchers reviewed 31 requests in which Grok was asked to remove redactions or identify individuals in protected images. Although Grok often acknowledged that minors’ faces were blurred for legal and ethical reasons, it still generated AI images in response to most requests, producing fabricated faces that ranged from realistic to clearly distorted. Some of these posts attracted millions of views, amplified by X’s monetization system. Following media inquiries, stricter guardrails appeared to be implemented. In subsequent days, Grok largely refused such requests or instead generated unrelated images. X did not respond to requests for comment. Grok has also been used to create manipulated images tied to other incidents, sometimes falsely identifying individuals. Investigations have been launched in countries including the UK and France over the chatbot’s role in generating non-consensual sexual deepfakes, including child abuse imagery. Source:  Bellingcat, K. Koltai, Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children . [online] Published 10 February 2026. 
Available at: https://www.bellingcat.com/news/2026/02/10/grok-epstein-photos/ Top Of Page Grok Raised Regulatory Concerns About Misinformation and Platform Risk As published by Rand , controversy surrounding Grok has drawn regulatory scrutiny from Ofcom and the European Commission, highlighting growing concerns about misinformation and systemic platform risks. The case emerged at a critical moment when UK and EU digital regulations shifted from voluntary compliance to active enforcement. Regulators increasingly rejected claims that harmful outputs were isolated glitches, instead focusing on whether platforms had proactively identified and mitigated foreseeable systemic risks, including misinformation, abuse, and radicalization. Generative AI poses distinct risks of misinformation because it engages users in personalized, iterative conversations that, over time, can reinforce false narratives or extremist views. Unlike standalone AI tools, Grok operates within X, where its conversational responses can be rapidly amplified and repurposed. The interactions occur privately and cumulatively, making harm less visible, harder to evidence, and more difficult to audit using tools designed for posts, shares, or recommendations. The report concluded that Grok raises fundamental questions about whether platforms can credibly self-govern where systemic risks are foreseeable, and whether governments can meaningfully enforce laws designed to protect users, democratic processes, and the integrity of the information environment in a fragmented, cross-border digital ecosystem. Source:  RAND, D. Toro, Grok Isn’t a Glitch—It Is a Regulatory Reckoning . [online] Published 9 February 2026. 
Available at: https://www.rand.org/pubs/commentary/2026/02/grok-isnt-a-glitch-it-is-a-regulatory-reckoning.html Top Of Page Iranian Officials Received Premium Services on X During Protest Crackdown The Tech Transparency Project   reports that X provided paid premium subscriptions to dozens of Iranian officials, state agencies, and state-controlled media outlets, including individuals subject to U.S. sanctions. These premium accounts, marked with blue checkmarks, offered enhanced features, including increased visibility, longer posts, and monetization tools. Many of these accounts were used to spread government messaging and propaganda during nationwide protests and internet blackouts in Iran, including threats of harsher repression against demonstrators. The findings raised concerns that X may have violated US sanctions, which allow free access to communication platforms but prohibit charging sanctioned entities for additional services. Some premium badges were removed after media inquiries, but others remained active. The report highlighted that Elon Musk appeared to show support for the recent anti-government protests in Iran, taunting Khamenei on X, changing the Iranian flag emoji on X to the pre-revolutionary version, and allowing Iranians to access his Starlink satellite service for free amid an internet blackout in the country. Source : Tech Transparency Project, Iranian Officials Get Premium Treatment on X Amid Protester Crackdown . [online] Published 12 February 2026. Available at: https://www.techtransparencyproject.org/articles/iranian-officials-get-premium-treatment-on-x-amid-protester-crackdown Top Of Page [State Actors] Russia The Russian Federation Used Video Games to Spread State Propaganda A report by EU VS Disinfo  revealed that the Russian Federation recognized the potential of video games as a tool of ideological influence and used them to spread propaganda. 
Unlike passive media, games immerse players in interactive worlds in which ideological messages are embedded in narratives and mechanics. With 3.3 billion players globally, many aged 18–34, games can shape perceptions through community dynamics, repetition of symbols, and the illusion of personal choice. The Kremlin has systematically integrated the gaming industry into its state information strategy under a model of “digital authoritarianism.” Through state funding, censorship, and the development of domestic platforms and engines, Russian authorities have promoted “patriotic” content aligned with official narratives. Organisations such as the Institute for Internet Development provide substantial subsidies to ideologically aligned projects, while youth movements such as “Yunarmiya” incorporate military-themed games into their activities. Some titles, reportedly recommended for cadets, recreate battles in Ukraine with overt pro-Kremlin symbolism, presenting war as heroic and justified. Gaming platforms such as Steam further enable the spread of Z-propaganda and extremist content within echo chambers, normalizing militarism and dehumanizing narratives under the guise of humor or community culture. Source:  EU VS Disinfo, A. Vyshniakova, New weapon in the shadows: how the Kremlin uses video games for war propaganda . [online] Published 11 February 2026. Available at: https://euvsdisinfo.eu/new-weapon-in-the-shadows-how-the-kremlin-uses-video-games-for-war-propaganda/ Top Of Page Russian Influence Operation Targeted Ukrainians During the Winter Olympics Findings by NewsGuard’s Reality Check show that a Kremlin-linked influence operation known as Matryoshka circulated at least 28 fabricated news reports between the 30th of January and the 10th of February 2026, targeting Ukraine and Ukrainians during the 2026 Winter Olympics in Italy. 
The fake materials, shared mainly on Telegram and viewed approximately two million times, mimicked respected outlets such as Reuters, Euronews, CBC, and Italian institutions. Among the allegations were claims that Ukrainian staff spat in food, that anti-doping rules were relaxed for Ukrainian athletes, and that Ukrainians were segregated in the Olympic Village due to aggressive behavior, assertions denied by official sources. Other viral claims accused Ukrainians of vandalizing Rome’s Colosseum, staging obscene protests, creating fraudulent hotel booking websites, or participating in the Olympics to avoid military service. As the Games approached, the disinformation gained broader traction, spreading beyond pro-Kremlin Telegram channels to multilingual websites in the Pravda network and even being repeated by mainstream Russian media. Source:  NewsGuard, A. Lee. Russia Targets Ukraine at the Winter Olympics with Flood of Fabricated News Reports . [online] Published 10 February 2026. Available at: https://www.newsguardrealitycheck.com/p/russias-olympian-disinformation-game Top Of Page CheckFirst Unveiled GRU's Information Operations Troops with OSINT and Phaleristics Building on a previous investigation into the FSB’s 16th Centre, CheckFirst's report applied OSINT and phaleristics, the study of military awards and decorations, to uncover the structure of the GRU’s Information Operations Troops (VIO). By analyzing 118 photographs of insignia, patches, and pennants linked to the GRU, researchers reconstructed key elements of the VIO’s organization, chain of command, and likely geographic footprint. Although the VIO has been publicly acknowledged since 2014, its internal structure and missions have remained opaque. The study suggested that the VIO was formalized around 2014, likely under Military Unit 55111, consolidating pre-existing GRU units into a unified command. 
The VIO comprises at least fifteen units spanning three core specializations: encryption and cryptanalysis, computer network operations, and psychological operations. Some units, such as 26165 and 74455, previously linked to the APT28 and Sandworm cyber groups, are well documented, while others, including Units 20766, 48707, and 20978, have received little or no prior public attention. While identifying operational locations proved challenging, the researchers triangulated data from investigative journalism, leaked materials, Russian public records, and social media platforms to assess probable sites. The report concluded that combining phaleristics with OSINT offers a replicable and effective method for illuminating clandestine Russian intelligence structures, helping to clarify the scale and evolution of the GRU’s information and cyber warfare capabilities amid ongoing conflict with Ukraine and increased operations targeting Europe. Sources:   CheckFirst, Unveiling FSB’s 16th Center SIGINT Capabilities . [online] Published July 2025. Available at: https://checkfirst.network/wp-content/uploads/2025/07/OSINT_Phaleristics_Unveiling_FSB_16th_Center_SIGINT_Capabilities.pdf CheckFirst, OSINT & Phaleristics: Unveiling GRU’s Information Operations Troops (VIO) . [online] Published February 2026. Available at: https://checkfirst.network/2026/02/OSINT_Phaleristics_Unveiling_GRUs_Information_Operations_Troops.pdf Top Of Page France's Municipal Elections and the Risk of Russian Influence An article published by Le Monde  reports that a network of more than 80 fake French-language news websites, identified by Reporters Without Borders and reportedly linked to the Russian group Storm-1516, has raised concerns about potential foreign interference ahead of France’s March 2026 municipal elections. These sites imitate local media and promote pro-Russian narratives. 
Similar tactics have been observed in countries such as Moldova, where pro-Russian disinformation targeted legislative elections, though pro-European candidates ultimately prevailed. The political choices of major metropolitan areas are of interest to foreign powers, sometimes for primarily economic reasons; however, these elections are also particularly difficult for foreign actors to grasp. Russia’s past information campaigns in France have focused more on national themes, such as criticizing French leadership or promoting narratives of Western decline, than on attempting to shape specific local races, even high-profile contests in Paris involving allies and opponents of Emmanuel Macron. In practice, most manipulation in municipal elections has come from domestic actors rather than foreign states. Cases in multiple countries have involved candidates spreading false information, using doctored media, or engaging in questionable campaign funding. Source:  Le Monde, D. Leloup, France's municipal elections at risk of foreign interference . [online] Published 10 February 2026. Available at: https://www.lemonde.fr/en/opinion/article/2026/02/10/france-s-municipal-elections-at-risk-of-foreign-interference_6750325_23.html Top Of Page Disinformation Continued After the New START Treaty Expired EU VS Disinfo reports that the New START Treaty expired on 6 February 2026, bringing to an end the last remaining nuclear arms control agreement between Russia and the United States. In response, pro-Kremlin information campaigns sought to deflect blame from Moscow by portraying Russia as a responsible actor whose extension efforts were ignored by Washington, while warning of nuclear catastrophe and promoting the idea of a new multipolar nuclear order. These narratives aimed to frame the treaty’s collapse as inevitable and as the result of Western actions, despite the Kremlin’s own role in weakening the agreement over several years. 
The treaty, which entered into force in 2011, limited strategic nuclear arsenals and included transparency measures such as on-site inspections. However, Russia suspended inspections in 2022, citing Western sanctions after it invaded Ukraine, and formally halted its participation in 2023. At the same time, pro-Kremlin outlets intensified messaging accusing Western countries and NATO of provoking nuclear confrontation, spreading false claims about Ukrainian nuclear threats, biological weapons, and Western sabotage. Source: EU VS Disinfo, As New START ends, disinformation about it continues . [online] Published 13 February 2026. Available at: https://euvsdisinfo.eu/as-new-start-ends-disinformation-about-it-continues/ Top Of Page Report Attributes Coordinated Disinformation Campaigns to Russian State-Aligned Actors The joint report by the Ukrainian Centre for Strategic Communications  and the NATO Strategic Communications Centre   of Excellence  develops and tests a structured Information Influence Attribution Framework (IIAF) to assess Russian information influence operations targeting Ukraine and neighbouring regions. Focusing on campaigns linked to Russian state and state-aligned actors, including RT, Sputnik, pro-Kremlin Telegram networks, and proxy outlets, the report documents coordinated disinformation efforts built on falsified content, impersonation of trusted media brands, source laundering, and synchronized cross-platform amplification. Case studies demonstrate how Russian actors deploy forged documents, deceptively edited images and videos, bot networks, and multilingual content tailored to specific audiences to advance narratives portraying Ukraine as corrupt, unstable, or manipulated by the West. 
The analysis shows that these campaigns rely on layered tactics: technical infrastructure reuse (shared domains, hosting, analytics IDs), behavioural coordination (near-simultaneous reposting, template-based comment flooding, and tasking via channels such as the “Digital Army of Russia”), and contextual alignment with Kremlin geopolitical objectives. Disinformation narratives are strategically timed to exploit political events, social tensions, and policy debates, while narrative laundering techniques obscure their origins by staging dissemination across both inauthentic and overtly state-linked media ecosystems. Applying a spectrum of state responsibility, the report concludes that several examined operations fall between “state-shaped” and “state-coordinated,” reflecting high-confidence assessments of structured, state-aligned influence activity rather than organic or rogue behaviour. Source:  The Ukrainian Centre for Strategic Communications & NATO Strategic Communications Centre of Excellence, V. Smith & J. Pamment & S. Dikhtiarenko & B. Heap & D. Tsurtsumia & A. Maunder, Attributing Russian Information Influence Operations . [online] Published January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Attribution_Russian_Information_Influence.pdf?zoom=page-fit Top Of Page [General Reports] False Allegations Labeled Epstein an Israeli Spy In its latest debunking, NewsGuard’s Reality Check examines false claims that emerged after the January 2026 release of millions of Justice Department documents related to Jeffrey Epstein, alleging that the files proved he was an Israeli spy. The allegation centered on an October 2020 FBI document included in the release, which summarized statements from an unnamed source who described Epstein as a “co-opted Mossad agent.” Anti-Israel accounts on X amplified screenshots of the document, asserting it confirmed that Epstein had trained as a spy under former Israeli Prime Minister Ehud Barak. 
Posts spreading the claim gained millions of views and tens of thousands of engagements. However, the document in question is an FD-1023 form, which records raw, unverified information from confidential sources and does not constitute confirmed findings or investigative conclusions. The FBI has explicitly stated that such forms do not validate or establish the credibility of the claims they contain. No credible evidence has emerged showing that Epstein was affiliated with Israeli intelligence. In response to similar allegations made previously, former Israeli Prime Minister Naftali Bennett publicly denied that Epstein had ever worked for the Mossad. Source : NewsGuard, J. Schofield, Debunk: Unverified Allegations Used to Label Epstein an Israeli Spy . [online] Published 9 February 2026. Available at: https://www.newsguardrealitycheck.com/p/was-epstein-an-israeli-spy-newsguards Top Of Page Epstein Document Dump Revived the Long-Debunked Pizzagate Conspiracy Following the Justice Department’s January 2026 release of documents related to Jeffrey Epstein, NewsGuard’s Reality Check   reports that the long-debunked “Pizzagate” conspiracy theory, falsely claiming that top Democrats ran a child sex-trafficking ring out of Comet Ping Pong, has resurfaced. Online users cited hundreds of references to the word “pizza” in the files, alleging that it was coded language for abuse. Social media posts promoting the claim surged dramatically, and the commentator Tucker Carlson amplified the theory in a YouTube episode, suggesting it might be real despite having previously rejected it. However, a review of the documents found no evidence supporting the conspiracy. References to “pizza” were largely mundane, such as casual emails discussing restaurants or kitchen equipment. The pizzeria and its owner were mentioned only briefly, including in an article about the conspiracy itself and an unverified tip. 
Authorities have repeatedly debunked allegations that figures such as Hillary Clinton were involved. Although “pizza” appears in 842 documents, the files contain more references to other foods, such as “salad” and “soup.” The conspiracy has previously had dangerous real-world consequences, including a 2016 incident in which an armed man entered the restaurant believing he was rescuing children. Source : NewsGuard, M. Wollen & S. Howard, Epstein Document Dump Breathes New Life into Long-Debunked Pizzagate Conspiracy — Pushed Now By Tucker Carlson . [online] Published 11 February 2026. Available at: https://www.newsguardrealitycheck.com/p/epstein-files-revive-pizzagate-with Top Of Page Afghan Taliban Launched Digital Campaign Against Pakistan After Alleged Airstrikes An article by The Jamestown Foundation  reports that, following alleged Pakistani airstrikes near Kabul in October 2025 targeting leaders of Tehreek-e-Taliban Pakistan (TTP), the Taliban launched a coordinated digital campaign to counter Pakistan’s accusations and reshape regional narratives. Using social media, affiliated news outlets, militant songs, and propaganda videos, the Taliban sought to portray Pakistan as the aggressor while denying claims that it sheltered militant groups. A widely circulated video featuring TTP leader Mufti Noor Wali Mehsud claimed he was not in Afghanistan, aiming to undermine Pakistan’s justification for the strikes and weaken its credibility. Territorial narratives rejecting the Durand Line border and circulated maps depicting parts of Pakistan as belonging to a so-called Greater Afghanistan were also promoted, reinforcing nationalist and revisionist claims. Source : The Jamestown Foundation, R. Nasar, The Afghan Taliban’s ‘Digital War’ Against Pakistan . [online] Published 12 February 2026. 
Available at: https://jamestown.org/the-afghan-talibans-digital-war-against-pakistan// Top Of Page India Mandates AI Content Labeling and Rapid Takedowns to Curb Synthetic Misinformation According to a TechXplore article , India has amended its 2021 information technology regulations to tighten oversight of artificial intelligence–generated content, requiring social media platforms to clearly and permanently label synthetic media and comply with government takedown requests within three hours, replacing the previous 36-hour window. The updated rules, effective February 20, apply broadly to content “created, generated, modified, or altered” through computer resources, excluding routine or good-faith editing. Synthetic media is defined as material that appears real or is likely to be perceived as indistinguishable from an actual person or event; the rules directly target the growing spread of AI-generated misinformation among India’s more than one billion internet users. The regulations mandate that platforms such as X, Instagram, and Facebook deploy automated tools to prevent the dissemination of illegal content, including false documents and sexually abusive material, and ensure that AI-manipulated content carries indelible markings that cannot be removed or suppressed. The government has also operationalized enforcement through the Sahyog portal, which automates and streamlines the submission of official takedown notices. The announcement precedes a major global AI summit in New Delhi, underscoring India’s intent to assert regulatory control over synthetic media and to hold platforms accountable amid rapid AI expansion and rising disinformation risks. Source : TechXplore, A. Zinin, India ramps up AI rules for social media platforms . [online] Published 10 February 2026. 
Available at: https://techxplore.com/news/2026-02-india-ramps-ai-social-media.html Top Of Page Foreign Hybrid Campaigns Intensify Across the European Union The latest Institute for Strategic Dialogue (ISD)   publication documents a sustained campaign of hybrid activity targeting all 27 EU member states since Russia’s full-scale invasion of Ukraine in February 2022. Russia is identified as the primary threat actor in most cases, with China and Iran also implicated. The report categorizes hybrid threats into information operations, cyber operations, kinetic actions, political and civil society subversion, and malign finance. Across the cases, adversarial states deploy coordinated disinformation campaigns, inauthentic news sites, bot amplification networks, deceptive branding, and impersonation tactics to manipulate public opinion, inflame ethnic and political tensions, interfere in elections, and erode trust in democratic institutions. These efforts are often synchronized with cyberattacks, espionage, and covert financing of political actors to maximize destabilizing impact. The analysis highlights a notable escalation in kinetic and disruptive tactics accompanying information manipulation, including sabotage of critical infrastructure, arson, drone incursions, assassination plots, and the recruitment of local proxies through encrypted platforms. Many incidents combine physical acts, such as vandalism or migrant weaponization, with coordinated narrative amplification through state media, Telegram channels, and bot networks to magnify psychological and political effects. The report concludes that hybrid operations increasingly blur the line between foreign and domestic actors, leveraging criminal networks, diaspora communities, and local intermediaries to obscure attribution while advancing strategic goals of undermining European unity, weakening institutional trust, and challenging liberal democratic governance. Source : Institute for Strategic Dialogue, K. 
Sikora & L. Savoia & B. Schafer, Europe’s Other Battlefields: Foreign Hybrid Threats in the EU . [online] Published 12 February 2026. Available at: https://www.isdglobal.org/publication/europes-other-battlefields-foreign-hybrid-threats-in-the-eu/ Top Of Page TikTok Algorithms Amplify Disinformation and Youth Vulnerabilities in Taiwan and Myanmar A comparative study by Doublethink Lab  examines TikTok’s impact on adolescents in Taiwan and Myanmar, highlighting its role as both a vector for disinformation and a platform vulnerable to algorithmically amplified influence operations. In Myanmar, following the 2021 military coup and restrictions on Facebook and VPNs, propaganda and disinformation campaigns shifted toward Telegram and TikTok, which were perceived as having weaker moderation. Youth reported exposure to political propaganda, hate speech, and misinformation, often responding by skipping or reporting content but facing structural challenges in verification. In Taiwan, concerns extend beyond harmful content to the platform’s potential role in amplifying narratives aligned with the People’s Republic of China (PRC). During the 2024 presidential election, adolescents reported increased exposure to short, decontextualized political videos that criticized Taiwan’s democratic institutions and subtly promoted favorable perceptions of PRC governance. A 2025 nationwide study cited in the report found that active TikTok users showed higher agreement with conspiracy-like and PRC-aligned narratives, suggesting a measurable relationship between platform engagement and increased populist skepticism toward democratic institutions. The report highlights TikTok’s highly personalized recommendation algorithm as a central driver of user exposure. Adolescents in both contexts described the platform as “precise” and immersive, with rapid adaptation to user behavior reinforcing content themes and potentially narrowing informational diversity. 
In Taiwan, researchers documented algorithmic promotion of borderline sexually suggestive content involving minors, alongside accounts linked to potential exploitation and scam networks, while in Myanmar, the primary risks centered on political manipulation and harassment. Across both contexts, the study identifies algorithmic amplification, subtle embedding of political messaging within entertainment content, and the use of emotionally charged, fragmented narratives as key tactics. The findings underscore how platform design, limited transparency, and weak regulatory oversight can facilitate information manipulation, normalize harmful behaviors, and erode digital literacy, particularly among adolescents in politically sensitive environments. Source : Doublethink Lab, E. Hsu & Iris & P. S. S. Thant, Algorithms, Digital literacy, and Social influence: TikTok use among youths of Taiwan and Myanmar . [online] Published 13 February 2026. Available at: https://medium.com/doublethinklab/algorithms-digital-literacy-and-social-influence-tiktok-use-among-youths-of-taiwan-and-myanmar-5ac4ee947b8c Top Of Page G7 Ranks Cyberattacks and Disinformation Among Top Security Threats in 2026 Index As reported by Infosecurity Magazine , the Munich Security Index (MSI) 2026, released at the Munich Security Conference, identifies cyberattacks as the top perceived national security risk among G7 countries for the second consecutive year, followed by economic or financial crises and disinformation campaigns from adversaries. The ranking reflects a sustained rise in the prominence of cyber threats in G7 risk assessments, with concern especially pronounced in Germany, the UK, and Japan. Disinformation campaigns are widely recognized as a major hostile tactic and a core security concern alongside cyber operations, indicating growing awareness of information manipulation as a strategic threat vector. 
In contrast, among Brazil, India, China, and South Africa (BICS), cyber threats have declined in perceived urgency, while disinformation ranks seventh, suggesting differing threat prioritization across geopolitical blocs. The report is based on a November 2025 survey of more than 11,000 respondents across G7 and BICS countries, employing representative sampling and multiple evaluative criteria to assess perceived risk severity and imminence. While most countries reported declining threat perceptions overall, concerns have intensified in the UK, the US, and India, particularly regarding economic instability, democratic fragility, civil unrest, and political violence, factors that can intersect with cyber and disinformation campaigns in destabilizing democratic systems. The authors caution that polling in autocratic contexts, particularly in China, may limit respondents’ willingness to express views freely, highlighting methodological constraints in assessing perceptions of information and cyber threats across political systems. Source:  Infosecurity Magazine, K. Poireault, Munich Security Conference: Cyber Threats Lead G7 Risk Index, Disinformation Ranks Third . [online] Published 13 February 2026. Available at: https://www.infosecurity-magazine.com/news/munich-security-index-cyberattacks/ Top Of Page Coordinated Disinformation Campaign Fuels Anti-Migrant Vigilantism in Poland An investigation by the DFRLab  found that citizen patrols organized along the Polish-German border in 2025 were accompanied by a coordinated online disinformation campaign that amplified anti-migration rhetoric, legitimized vigilante activity, and portrayed the Polish government as weak. 
Inauthentic Facebook pages, AI-generated images, recycled protest photos, and copy-paste narratives were used to stoke fear and frame migrants as a violent “invasion.” Right-wing actors, including members of the Confederation Party and Law and Justice (PiS), boosted this content, at times sharing AI-manipulated visuals and unverified claims. Political posts amplified incidents, such as a soldier’s injury later clarified as accidental, to fuel calls for violence against migrants and accusations that border officials were “traitors.” The investigation identified coordinated networks of Facebook pages posting identical content within seconds or minutes of one another, often with large follower counts but minimal engagement, suggesting likely inauthentic amplification. The PATRIOCI Facebook group emerged as a key amplification node, administered by pages impersonating media outlets and linked to previously dormant or disinformation-associated entities, including accounts that had shared content from Russian state-controlled RT. These networks promoted articles from outlets such as Tysol[.]pl in synchronized bursts and targeted Polish border guards with harassment and doxxing-style identification efforts. AI-generated banners depicting aggressive “foreign” figures and violent imagery were used to mobilize offline protests, while the absence of platform labeling for synthetic media enabled emotionally manipulative visuals to circulate unchecked. The report concludes that coordinated digital manipulation, leveraging algorithmic amplification, opaque platform governance, and political endorsement, distorted Poland’s migration debate, normalized extremist rhetoric, and blurred the line between grassroots activism and orchestrated influence operations. Source : DFRLab, G. Gigitashvili & S. Gelava, Inauthentic networks fuel Poland’s border vigilante movement . [online] Published 11 February 2026. 
Available at: https://dfrlab.org/2026/02/11/inauthentic-networks-fuel-polands-border-vigilante-movement/ Top Of Page [Appendix - Frameworks to Counter Disinformation] Design Thinking Workshop Informed New Approaches to Counter Disinformation and FIMI Based on a design thinking workshop, a paper published by DoubleThink Lab  argued that participatory design methods can play a constructive, bounded role in counter-FIMI policy development. The workshop, which involved civil society, defense practitioners, and journalists, noted that traditional top-down, reactive policies often fail to keep pace with fast-moving influence operations. Instead, participatory and human-centered approaches can help policymakers better understand the emotional, institutional, and operational pressures caused by FIMI, including declining trust, decision-making under uncertainty, and professional burnout. The workshop reframed FIMI not only as a problem of false content but also as a broader human and systems problem. Several tensions, such as speed versus due process, control versus trust, and short-term response versus long-term sustainability, appeared across all groups. Based on these insights, the paper proposed exploratory policy recommendations using a four-part framework: Education, Engineering, Enforcement, and Ethics (4E). These include improving media and crisis literacy, investing in shared verification tools and secure communication systems, clarifying institutional roles, strengthening legal protections, and ensuring responses respect rights and wellbeing. The paper concluded that design thinking can play a valuable but limited role in counter-FIMI policymaking. While it does not measure effectiveness or replace legal and evidence-based analysis, it helps policymakers surface hidden assumptions, understand trade-offs, and design more adaptable and human-centered responses. 
The recommendations were presented as starting points that require further testing, legal review, and empirical validation. Source : Doublethink Lab, D. V. Ligot, Crafting Policy Recommendations for Countering Disinformation and FIMI . [online] Published 11 February 2026. Available at: https://medium.com/doublethinklab/crafting-policy-recommendations-for-countering-disinformation-and-fimi-952825b9aa2b Top Of Page EU Launches Initiative to Counter Online Polarization and Disinformation The European Commission  has launched a €785,000 pilot call for proposals to strengthen social cohesion and improve understanding of online political polarisation, recognizing the growing threat that digitally amplified division poses to democratic resilience in the European Union. The initiative highlights how algorithm-driven platforms, while central to civic engagement, can also amplify extreme, divisive, or manipulative content, creating conditions in which coordinated malicious influence operations and disinformation campaigns can thrive. Such dynamics risk eroding trust in democratic institutions and distorting public understanding of societal issues. The project seeks to build cross-sector and cross-border partnerships to design, test, and scale mitigation strategies that counter harmful polarization without restricting freedom of expression. Proposed activities may include multilingual data collection; the development of AI- and IT-based tools; practitioner toolkits; policy and technical interventions; and the establishment of durable stakeholder networks. By engaging civil society, researchers, media actors, and technology companies, the initiative aims to produce actionable responses that align with the European Democracy Shield and reinforce long-term democratic resilience while safeguarding open and pluralistic debate. Source : European Commission, New call for proposals to advance social cohesion amid online polarisation . [online] Published 12 February 2026. 
Available at: https://digital-strategy.ec.europa.eu/en/news/new-call-proposals-advance-social-cohesion-face-online-polarisation Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue to grow in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cyber-based hostile influence campaigns 2nd February - 8th February 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [State Actors] Russia   Pro-Kremlin Network Promoted Alberta Separatism to U.S. Audiences Italy Accused Russia-Linked Hackers of Cyberattacks Ahead of 2026 Winter Olympics Russia intensified disinformation as the EU tightened scrutiny of its shadow fleet [General Reports] France Caught Between Russian and U.S.-Linked Disinformation Campaigns Nearly Half of Americans Believed Top False Claims in 2025 Social Media Users Circulated Fake Images Linking Politicians to Epstein Files Pro-IS Propaganda Circulated Online Following Syrian Detention Camp Takeover Digital Risks for the 2026 Winter Olympics Gabbard’s Seizure of Voting Machines Raised Fears of Election Interference The Rise of Violent Rhetoric Targeting US Public Officials Foreign State Disinformation and Cyber Influence Threaten Norway’s Information Space in 2026 [CRC Glossary] [ Report Highlights] An article by EU VS Disinfo showed that Russia has intensified disinformation efforts as the EU tightened scrutiny of its so-called "shadow fleet", hundreds of ageing oil tankers that operate under false flags to evade sanctions on Russian oil exports. According to an article by Le Monde , France has increasingly become a target of overlapping disinformation efforts from pro-Russian networks and Trump-aligned American accounts, creating a complex and unprecedented challenge. 
Ahead of the Milano Cortina 2026 Winter Olympics, Graphika analyzed online conversations from the past 60 days to assess potential digital risks surrounding the Games and highlighted how major sporting events attract influence operations and hacktivist activity. New analysis by the Institute for Strategic Dialogue (ISD)  found that violent online rhetoric targeting US public officials more than tripled between 2021 and 2025. The Norwegian Police Security Service's (PST) "National Threat Assessment 2026" warns that Russia, China, and Iran are intensifying cyber-enabled disinformation, influence operations, and proxy activities to undermine public trust, suppress dissent, and exploit extremist and digital ecosystems within the Norwegian information space. [State Actors] Russia Pro-Kremlin Network Promoted Alberta Separatism to U.S. Audiences As reported by DisinfoWatch , a pro-Kremlin outlet in the Pravda News Network amplified claims that Alberta was moving toward independence and that its secession would deliver "two red senators and electoral votes" to the United States. Identified by France's Viginum agency as part of the Russian "Portal Kombat" disinformation ecosystem, the network repackaged real but limited developments, including meetings between Alberta separatist figures and U.S. officials, into a misleading narrative. While such meetings did occur, U.S. officials have denied offering any support or commitments. Alberta is a Canadian province, not a U.S. state, and there is no legal or political pathway for it to gain U.S. senators or electoral votes. Alberta is also not institutionally "moving toward independence." Public support remains a minority position, and current activity is limited to the early stages of a referendum petition process. Even a hypothetical referendum would not result in immediate independence, as Canadian law requires a clear majority, a clear question, and negotiations. Source:  DisinfoWatch. 
Moscow to MAGA: Pro-Kremlin Pravda Network Amplifying Alberta Separatism . [online] Published 2 February 2026. Available at: https://disinfowatch.org/disinfo/moscow-to-maga-pro-kremlin-pravda-network-amplifying-alberta-separatism/ Top Of Page Italy Accused Russia-Linked Hackers of Cyberattacks Ahead of 2026 Winter Olympics An article by The Record  reports that Foreign Minister Antonio Tajani said Italy has blocked a wave of cyberattacks of "Russian origin" targeting its diplomatic missions abroad and infrastructure linked to the 2026 Winter Olympics. The attempted attacks began with Italy's embassy in Washington and extended to consulates in Sydney, Toronto, and Paris, as well as to hotels associated with the Games in Milan and the Alpine resort of Cortina d'Ampezzo. Around 120 targets were hit in total, but authorities reported no significant disruption. The pro-Russian hacker group NoName057(16) claimed responsibility, describing the campaign as retaliation for Italy's support for Ukraine. Active since Russia's full-scale invasion of Ukraine in 2022, the group has repeatedly targeted European countries backing Kyiv using relatively simple but disruptive distributed denial-of-service (DDoS) attacks. Italian officials noted that Russia has been barred from competing as a nation in the 2026 Winter Olympics, a context that echoes previous incidents in which Russia-linked hackers targeted countries hosting major sporting events. Russian state officials have not publicly commented on the allegations. Source:  The Record, D. Antoniuk. Italy blames Russia-linked hackers for cyberattacks ahead of Winter Olympics . [online] Published 5 February 2026. 
Available at: https://therecord.media/italy-blames-russia-linked-hackers-winter-games-cyberattack Top Of Page Russia intensified disinformation as the EU tightened scrutiny of its shadow fleet EUvsDisinfo reports that Russia has intensified disinformation efforts as the EU tightened scrutiny of its so-called “shadow fleet”, hundreds of ageing oil tankers that operate under false flags to evade sanctions on Russian oil exports. These vessels, often with obscured ownership and inadequate insurance, transport oil and other goods to countries such as China and India, and are also suspected of carrying stolen Ukrainian property or military cargo. EU member states expanded sanctions, now covering nearly 600 vessels, and debated an even tougher 20th sanctions package. Recent interceptions of suspected sanctions-busting tankers in the North Sea and Mediterranean, including a January seizure by the French navy, have triggered a wave of false Kremlin-backed narratives. Pro-Kremlin outlets have portrayed lawful inspections and seizures as “piracy” or a covert naval blockade, falsely claiming Europe is provoking open confrontation. In parallel, deterrence messaging has become increasingly explicit, hinting at retaliation against Western shipping or undersea infrastructure and warning of instability in the Baltic Sea—claims that have no basis in reality, as neither the EU nor NATO is preparing a blockade or military escalation. Source:  EUvsDisinfo. Sailing under false flag: Moscow’s ‘shadow fleet’ meets Europe’s resolve . [online] Published 6 February 2026. Available at: https://euvsdisinfo.eu/sailing-under-false-flag-moscows-shadow-fleet-meets-europes-resolve/ Top Of Page [General Reports] France Caught Between Russian and U.S.-Linked Disinformation Campaigns France has increasingly become a target of overlapping disinformation efforts from pro-Russian networks and Trump-aligned American accounts, as detailed by Le Monde , creating a complex and unprecedented challenge. 
Since early 2026, Russian-linked actors have continued to circulate fabricated stories aimed at discrediting Ukraine and France, including a false claim that French and Ukrainian officials embezzled billions of euros intended for Rafale fighter jets. Investigations by French and international fact-checkers confirmed the story was entirely fabricated and likely originated from “Storm-1516”, a Russian propaganda group active in France since 2025. At the same time, French authorities have had to counter misleading narratives spreading from U.S. sources, including false claims about President Emmanuel Macron’s role in drug pricing and France’s military record in Afghanistan. This dual threat poses a new challenge for France, as Russia and the United States rely on radically different methods. Russian disinformation operates as a long-term, organised background campaign, often using fake news sites and AI-generated content to subtly influence public opinion. By contrast, Trump-aligned messaging relies on high-visibility, immediate amplification through personal social media accounts, normalising distortions of fact through repetition and provocation. Adding to the challenge is a convergence between pro-Russian and MAGA-aligned narratives, which share anti-elite, anti-EU, anti-Macron, and anti-Ukraine themes and often relay each other’s claims. French authorities have responded by more actively debunking falsehoods on social media, including through the Foreign Ministry’s French Response account and direct interventions from the Élysée. Source:  Le Monde, W. Audureau. France faces a crossfire of Russian and American disinformation . [online] Published 2 February 2026. 
Available at: https://www.lemonde.fr/en/les-decodeurs/article/2026/02/02/france-faces-a-crossfire-of-russian-and-american-disinformation_6750034_8.html Top Of Page Nearly Half of Americans Believed Top False Claims in 2025 NewsGuard’s Reality Gap   Index   found that, from June to December 2025, an average of 46 percent of Americans believed at least one major false claim circulating in the news. In the first six months of tracking, belief levels averaged 50 percent before dipping slightly in December. Fluctuations largely reflected the virality and sensationalism of particular hoaxes, with peaks in midsummer driven by widely shared false claims, such as reports that President Donald Trump had declared martial law in Washington, D.C., or that a Florida detention center was surrounded by an alligator-filled moat. The index, based on monthly YouGov surveys, measured the share of Americans who believed at least one of the top three false claims identified each month by NewsGuard. The surveys also revealed widespread confusion around AI-generated and manipulated media. Large shares of respondents either believed in or were unsure about the authenticity of fake images, videos, and audio, including AI-generated visuals falsely linking Trump to Jeffrey Epstein and fabricated audio clips purporting to capture him berating Cabinet members. Differences by age and political affiliation were present but modest. Younger adults were more likely to express uncertainty, while older respondents showed greater confidence but were not consistently better at identifying false claims. Across all age groups, only about 8–9 percent correctly identified all false claims in a given month. Politically, Republicans were slightly more likely than Democrats and Independents to believe at least one false claim, but the overall difficulty in consistently spotting misinformation cut across party lines. Source:  NewsGuard Reality Check, S. Tanner. 
Nearly Half of Americans in 2025 Believed False Claims Across Seven Months of Surveys . [online] Published 2 February 2026. Available at: https://www.newsguardrealitycheck.com/p/nearly-half-of-americans-believed Top Of Page Social Media Users Circulated Fake Images Linking Politicians to Epstein Files Following the U.S. Department of Justice’s release of millions of documents related to Jeffrey Epstein in January 2026, NewsGuard’s Reality Check   reported that social media users began circulating AI-generated and digitally manipulated images falsely linking prominent politicians to the convicted sex offender. NewsGuard identified at least seven such images, which together amassed more than 21 million views on X. The images targeted figures across the political spectrum, including New York City Mayor Zohran Mamdani, former South Carolina governor Nikki Haley, and Venezuelan opposition leader Maria Corina Machado, with users claiming the materials were part of the newly released Epstein files. In one widely shared case, fabricated images purported to show Epstein posing with Mamdani as a child and with his mother, filmmaker Mira Nair. Although DOJ files confirm Nair once attended an event hosted by Epstein associate Ghislaine Maxwell, NewsGuard found the images themselves were AI-generated, bearing digital watermarks from Google’s image-generation tools. Other posts falsely claimed Haley had emailed Epstein to arrange flights with her children, but no such message exists in the DOJ database, and the alleged screenshot contained clear inconsistencies. Similarly, an image linking Machado to Epstein was shown to be a manipulated version of an authentic photo that did not include her. As described in a related article , a “red-teaming” audit by NewsGuard  of three leading AI image generators found that X’s Grok Imagine consistently produced false images depicting prominent politicians hanging out with convicted sex offender Jeffrey Epstein. 
Google Gemini’s Nano Banana Pro produced convincing visuals as well, but only after some prodding, while OpenAI’s ChatGPT flatly declined all prompts seeking to place Epstein with public figures. Sources: NewsGuard Reality Check, L. Marchl & J. Schofield & I. Chomnalez. AI and Manipulated Images Falsely Implicate Politicians in the Epstein Files . [online] Published 3 February 2026. Available at: https://www.newsguardrealitycheck.com/p/these-shocking-images-were-not-in NewsGuard Reality Check, I. Chomnalez. Prompted to Create Photos of Politicians with Jeffrey Epstein, X’s ‘Grok Imagine’ Produced Convincing Fakes in Seconds, Google’s Gemini Hesitated but Then Complied, while OpenAI’s ChatGPT Refused . [online] Published 5 February 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-tools-willingly-generate-fake Top Of Page Pro-IS Propaganda Circulated Online Following Syrian Detention Camp Takeover An investigation by the Centre for Information Resilience (CIR)  found a sharp increase in pro-Islamic State (IS) activity on Facebook following the Syrian government’s takeover of the Al-Hol detention camp in late January 2026. After Kurdish-led forces withdrew from north-eastern Syria, including Al-Hol, IS supporters used social media to spread propaganda and coordinate logistical and financial assistance for detainees attempting to escape. The activity emerged within days of the transfer of control and coincided with President Ahmed al-Sharaa’s efforts to reassert state authority one year after the fall of the Assad regime. CIR documented dozens of Facebook accounts openly calling for violence against Syrian government forces and urging armed resistance. Some posts explicitly encouraged supporters to attack security personnel, while others focused on practical support, such as requests for vehicles to transport detainees’ families out of the camp. 
Financial appeals were also widespread, with pro-IS users soliciting donations from supporters abroad and directing them to private Telegram channels for coordination. Source: CIR Centre for Information Resilience. Pro-IS propaganda circulates online following Syrian detention camp takeover . [online] Published 5 February 2026. Available at: https://www.info-res.org/cir/articles/pro-is-propaganda-circulates-online-following-syrian-detention-camp-takeover/ Top Of Page Digital Risks for the 2026 Winter Olympics Ahead of the Milano Cortina 2026 Winter Olympics, Graphika reported in a blog post that it analyzed online conversations from the past 60 days to assess potential digital risks surrounding the Games and how major sporting events attract influence operations and hacktivist activity. While online discussion remained largely neutral or supportive of the Games, the shift toward controversy-focused narratives created openings for coordinated influence campaigns. IO actors use high-profile events to further their ideological messages or attract engagement. Graphika has not yet seen known IO actors weighing in on the upcoming Games but will continue to monitor signals related to this threat; based on its monitoring and public reporting during the 2024 Paris Games, it expects IOs to attempt to exploit the Milano Cortina Games. Potential threats include the spread of false or misleading claims about political, security, and even accommodation-related developments, which may be supported by AI-generated content. Additionally, no major hacktivist campaigns have yet targeted the 2026 Winter Olympics. Past events indicate a strong likelihood of cyber disruptions, data leaks, doxing, and DDoS attacks. During the Paris 2024 Olympics, "pro-Russia" and "anti-Zionist" hacktivist groups leaked sensitive athlete data and attacked national websites. 
Politico   reported that Italian authorities have already thwarted hacking attempts against several embassies and Olympic venues and attributed a series of DDoS attacks to a pro-Kremlin gang that claimed responsibility on Telegram. Security groups tracking these threats have also warned that hackers may be preparing to spread disinformation online to discredit the Games or to launch further cyberattacks on Olympic sites ahead of the Opening Ceremony. The Italian National Cybersecurity Agency (ACN) has embedded experts inside the Milano-Cortina security center, with additional staff at ACN HQ, while coordinating with the IOC, CISA, and other international partners to share real-time threat intelligence. Source: Graphika Blog, The Graphika Team. Slippery Slopes: Online Risk and the 2026 Winter Olympics . [online] Published 3 February 2026. Available at: https://graphika.com/posts/slippery-slopes-online-risk-and-the-2026-winter-olympics Politico, M. Miller & D. Nickel. 'Irresistible stage': Olympic authorities on high alert for cyberattacks as winter games kick off . [online] Published 6 February 2026. Available at: https://www.politico.com/news/2026/02/06/olympics-high-alert-cyberattacks-00767434 Top Of Page Gabbard's Seizure of Voting Machines Raised Fears of Election Interference As reported by Politico , as the U.S. midterm elections approached, President Donald Trump intensified his questioning of election integrity, reviving baseless claims about a "rigged" 2020 vote and urging federal involvement in election administration. These concerns have deepened following revelations that Director of National Intelligence Tulsi Gabbard oversaw the seizure of voting machines in Puerto Rico and appeared at an FBI raid tied to past election fraud allegations. 
Democratic lawmakers argued that these actions blur the line between foreign intelligence oversight and domestic election matters, raising concerns that the administration may be laying the groundwork to challenge future election outcomes. Gabbard’s office claimed the investigation uncovered serious cybersecurity vulnerabilities, including cellular modems that could connect machines to foreign networks. However, election security experts and bipartisan lawmakers noted that such vulnerabilities have been publicly documented for years and are extremely difficult to exploit without physical access. They emphasized there is no evidence these issues have ever altered U.S. election results. Critics also pointed to recent cuts to federal programs designed to protect election infrastructure and counter foreign threats to U.S. democracy. Source: Politico, M. Miller & D. Nickel. Gabbard’s seizure of voting machines heightens concerns of midterm meddling . [online] Published 6 February 2026. Available at: https://www.politico.com/news/2026/02/06/gabbard-trump-election-midterms-voting-integrity-00769768 Top Of Page The Rise of Violent Rhetoric Targeting US Public Officials New analysis by the Institute for Strategic Dialogue (ISD)  found that violent online rhetoric targeting US public officials more than tripled between 2021 and 2025, rising by 241 percent overall with a median monthly increase of 5 percent. Threats intensified around major political events and continued to rise after the 2024 election and presidential transition. Republicans were disproportionately targeted, driven largely by threats against President Donald Trump, who accounted for 47 percent of all violent rhetoric in the dataset. While threats against Democrats also rose, the growth rate for Republicans was far steeper, especially following the July 2024 assassination attempt on Trump. 
The study found that most violent rhetoric did not originate from organized extremist groups but from partisan individuals reacting to high-profile news and political controversy. Only a small fraction of cases were linked to extremist organizations, pointing instead to a shift toward decentralized, post-organizational political violence. Inflammatory statements by public officials themselves often acted as catalysts, with supporters directing threats at perceived opponents and fueling cycles of escalation across party lines. Real-world violence further intensified this dynamic, triggering waves of online threats, celebration of attacks, or calls for retaliation. Violent rhetoric remained widespread across major platforms, much of it still accessible despite clear policy violations. Platform dynamics shaped who was targeted: left-leaning platforms such as Reddit and Bluesky hosted more threats against right-wing officials, while X contained a disproportionate share of threats against left-wing figures. Source: ISD, N. Doctor & K. Keneally & C. Zoschak. ‘Tick tock traitor:’ The rise of violent rhetoric targeting US public officials . [online] Published 3 February 2026. Available at: https://www.isdglobal.org/digital-dispatch/tick-tock-traitor-the-rise-of-violent-rhetoric-targeting-us-public-officials/ Top Of Page Foreign State Disinformation and Cyber Influence Threaten Norway’s Information Space in 2026 The Norwegian Police Security Service’s (PST)   National Threat Assessment 2026   outlines a deteriorating security and information environment in Norway, driven largely by hostile state actors employing influence operations, cyber activity, and disinformation as core tools. 
Russia, China, and Iran are identified as the primary actors, with Russia expected to intensify cyber and influence operations aimed at weakening Norwegian public support for Ukraine, NATO, and Western cooperation, including through targeted disinformation, proxy sabotage, and recruitment via digital platforms. China’s activities are assessed as increasingly cyber-centric, combining cyber espionage, covert influence, and transnational repression to silence critics of the Chinese Communist Party, including through malware disguised as legitimate apps and pressure on diaspora communities. Iran is expected to continue intelligence and influence operations using proxy actors and criminal networks, alongside cyber intrusions and intimidation campaigns targeting dissidents, journalists, and Western- or Israeli-linked interests in Norway. The report emphasizes that disinformation and influence operations increasingly intersect with extremist ecosystems and digital radicalisation dynamics. Foreign state actors exploit online platforms, fake accounts, AI-generated content, and proxy networks to spread polarising narratives, undermine trust in democratic institutions, and amplify conspiracy theories, sometimes leveraging existing right-wing or anti-government extremist forums as dissemination channels. These activities contribute to a more diffuse and unpredictable threat landscape, where state-driven influence operations, cyber-enabled repression, and extremist propaganda reinforce one another. The assessment highlights that such tactics risk eroding public trust, increasing societal polarisation, and complicating attribution, thereby strengthening hostile actors’ ability to manipulate the information space while remaining below the threshold of overt conflict. Source:  Norwegian Police Security Service (PST). National Threat Assessment 2026 . [online] Published February 2026. 
Available at: https://www.pst.no/wp-content/uploads/2026/02/National-Threat-Assessment-2026.pdf [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue to grow in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Cyber-based hostile influence campaigns 26th January - 1st February 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [State Actors] Russia   Pro-Kremlin Network Spread Fake Media Reports to Undermine EU Unity After Davos NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network Russian Pravda Network Amplified Alberta Secessionist Content Kremlin claimed Russia and the US won’t talk to EU politicians EU Sanctioned Six Individuals Over Russian Information Manipulation Activities Latvia Claimed Russia Remained its Top Cyber Threat as Attacks Hit Record High The War in Ukraine Fake Videos Targeted Ukrainian Refugees in France China China’s Influence in the Nordic–Baltic Information Environment in Denmark and Lithuania Pro-China AI Videos Falsely Claim Taiwanese Support for Unification [General Reports] Disinformation Surrounding the Shooting of Alex Pretti   Disinformation As a Systemic Threat to Democratic Resilience Disinformation Vortex Around Minnesota ICE Protests ICE Surveillance Sparks Online Counterattacks [Appendix - Frameworks to Counter Disinformation] UK Warned It Risks Absorbing Cyber and Hybrid Attacks Without Deterrence Commercialized Social Media Manipulation and Disinformation Amplification EU-Supported Media Literacy Effort Against Disinformation in Kosovo [CRC Glossary] [ Report Highlights] The European External Action Service (EEAS)  reported that the EU Council imposed sanctions on six additional individuals for their involvement in Russian hybrid activities, particularly foreign information manipulation and interference 
targeting the EU, its member states, and partners. In its annual report, Latvia’s “Constitution Protection Bureau” (SAB) said 2025 marked an all-time high in registered cyber threats targeting the country, while Russia remained the primary source of cyber risk. A report by the NATO Strategic Communications Centre of Excellence examined how Beijing has sought to shape the Nordic-Baltic information environment, documenting a shift from earlier partnership approaches to a more cautious, contested relationship. According to an EUvsDisinfo article, global risk assessments increasingly identify foreign information manipulation, disinformation, and misinformation as structural threats that undermine democracy, human rights, economic stability, and crisis response, making investment in trustworthy public-interest media and stronger policy safeguards essential to protecting information integrity. A Politico article describes a growing digital struggle in which expanded federal surveillance and data use for deportations is met by activist and hacker efforts to track, leak, and disrupt ICE operations, fueling an increasingly contested and misinformation-prone information environment. An article from the NATO Strategic Communications Centre of Excellence explains how coordinated actors exploit social media platforms through fake accounts, automated amplification, and targeted narrative tactics to manipulate public opinion, spread disinformation, and undermine trust in democratic information environments. [State Actors] Russia Pro-Kremlin Network Spread Fake Media Reports to Undermine EU Unity After Davos NewsGuard’s Reality Check reveals how a pro-Kremlin influence operation sought to undermine the European Union during and immediately after the World Economic Forum in Davos by circulating fabricated videos that impersonated trusted outlets. 
The fake content included a fabricated Reuters video alleging that France and Germany were preparing to leave the EU, a bogus Gallup report claiming that most young Europeans supported exiting the union, and a falsified Economist video attributing anti-EU remarks to former German Chancellor Angela Merkel. All of these claims were demonstrably false, and none appeared on the outlets’ official platforms. The individuals cited consistently expressed strong pro-EU positions. Reuters explicitly confirmed that the video attributed to it was fake, while independent polling showed strong EU support among European youth. The campaign also promoted additional fabricated quotes, anti-EU statements attributed to global leaders, claims about fabricated anti-EU protests, and reports of escalating tensions among EU member states. Although individual videos achieved only modest reach, collectively they garnered thousands of views across platforms such as Telegram, where audiences have limited tools for verifying authenticity. Source:  NewsGuard, A. Lee. Post Davos, Russian Influence Operation Uses Phony Videos Impersonating Reuters, Gallup and the Economist to Foment EU Discord . [online] Published 28 January 2026. Available at: https://www.newsguardrealitycheck.com/p/russians-seed-fake-reports-to-sow Top Of Page NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network Yevgeny Shevchenko was designated by NewsGuard’s Reality Check  as its 2025 Disinformer of the Year for his role in building the Pravda network, one of the most prolific pro-Kremlin disinformation operations globally. The network comprises hundreds of automated, news-style websites published in 49 languages, collectively producing approximately 6.3 million articles in 2025 alone. These sites repeatedly amplified false claims aligned with Kremlin narratives, targeting topics such as the war in Ukraine, European and U.S. 
elections, public health, and geopolitics, while using domain names designed to appear legitimate and local. A key impact of the Pravda network was its success in polluting search results and influencing generative AI systems. NewsGuard audits found that while some AI systems successfully debunked false claims sourced directly from Pravda articles, others reproduced them at significant rates. Shevchenko, a Crimea-based web entrepreneur and founder of the company TigerWeb, has kept a low public profile despite the network’s reach. The Pravda network expanded rapidly after Russia’s full-scale invasion of Ukraine in 2022 and was sanctioned by the European Union in July 2025 for coordinated information manipulation. Source:  NewsGuard, A. Lee & E. Maitland. NewsGuard’s 2025 Disinformer of the Year: Yevgeny Shevchenko, Creator of the Pravda Network . [online] Published 27 January 2026. Available at: https://www.newsguardrealitycheck.com/p/newsguards-2025-disinformer-of-the Top Of Page Russian Pravda Network Amplified Alberta Secessionist Content As reported by DisinfoWatch , Russia’s Pravda News Network published what appeared to be an AI-generated video promoting an Alberta secessionist rally scheduled for Monday, January 26, on its Russian VK social media platform. The Pravda network, also known as Portal Kombat, was first identified by France’s VIGINUM agency as a coordinated pro-Kremlin disinformation ecosystem that aggregates and republishes content from Russian state media, official channels, and aligned online sources rather than producing original reporting. The network operates a Canada-focused site that republishes material daily, drawing heavily from outlets such as RT, TASS, and the Russian Embassy in Canada. Canadian civil society group Cyber Alberta has warned that Pravda is targeting Canadian interests. Source:  DisinfoWatch, Russian Pravda News Platform Amplifying Alberta Secessionist Events . [online] Published 25 January 2026. 
Available at: https://disinfowatch.org/disinfo/russian-pravda-news-platform-amplifying-alberta-secessionist-events/ Top Of Page Kremlin Claimed Russia and the US won’t talk to EU politicians DisinfoWatch   documents how Russian state outlet RT reported claims by Kremlin spokesperson Dmitry Peskov asserting that Russia would not engage with EU foreign policy chief Kaja Kallas and that it was “obvious” the United States would also refuse to engage with her. The claim sought to widen EU–US fractures amid UAE-hosted talks. Kallas is the EU’s High Representative for Foreign Affairs and Security Policy and a Commission Vice-President, a role appointed through formal EU processes and documented in EU institutional records. The report relied on delegitimizing language and provided no evidence to support the assertion that Washington shared Moscow’s position. Source:  DisinfoWatch, Kremlin Claims Russia and US won’t talk to EU . [online] Published 26 January 2026. Available at: https://disinfowatch.org/disinfo/kremlin-claims-russia-and-us-wont-talk-to-eu/ Top Of Page EU Sanctioned Six Individuals Over Russian Information Manipulation Activities The European External Action Service (EEAS) reported that the EU Council imposed sanctions on six additional individuals for their involvement in Russian hybrid activities, particularly foreign information manipulation and interference targeting the EU, its member states, and partners. Those sanctioned include prominent Russian television presenters Dmitry Guberniev, Ekaterina Andreeva, and Maria Sittel; propagandist Pavel Zarubin; actor Roman Chumakov; and ballet dancer Sergey Polunin. The Council said these figures have actively promoted pro-Kremlin disinformation, anti-Ukraine and anti-Western narratives, and, in some cases, helped raise funds for the Russian armed forces, directly contributing to Russia’s war effort against Ukraine. With this decision, EU restrictive measures now apply to 65 individuals and 17 entities. 
Sanctions include asset freezes, bans on EU citizens and companies providing funds or economic resources to those listed, and travel restrictions preventing entry into or transit through EU territory. The legal acts formalizing the measures have been published in the Official Journal of the European Union. Source: EEAS, Russian hybrid threats: Council sanctions six individuals over information manipulation activities. [online] Published 30 January 2026. Available at: https://www.eeas.europa.eu/delegations/ukraine/russian-hybrid-threats-council-sanctions-six-individuals-over-information-manipulation-activities_en Top Of Page Latvia Claimed Russia Remained its Top Cyber Threat as Attacks Hit Record High In its annual report, Latvia’s Constitution Protection Bureau (SAB) said that 2025 marked an all-time high in the number of registered cyber threats targeting the country, while Russia remained the primary source of cyber risk. While most incidents involved cybercrime and digital fraud, state-linked threats remained elevated, particularly from Russia. From a national security perspective, the most significant risks included intrusion attempts, malware, system compromises, and distributed denial-of-service (DDoS) attacks. Latvian authorities noted that effective defensive measures were in place, and that CERT.LV in particular helped limit the impact of many attacks, including during politically sensitive events such as municipal elections. A key concern highlighted in the report was the growing role of Russian hacktivists, who have demonstrated both intent and capability to target critical and industrial systems across Latvia and other Western countries. These actors aim to disrupt essential services, intimidate populations, punish support for Ukraine, and deter further assistance. 
Examples cited included hacktivist attacks on operational technologies, such as dams and power plants, in Norway and Poland, where weak security controls enabled attackers to manipulate industrial control systems and, in one case, shut down a hydroelectric facility. Although Latvia has so far avoided major incidents affecting critical infrastructure, vulnerabilities in operational technologies remain a significant risk. Russian DDoS campaigns continued to target Latvian government institutions, municipalities, and critical infrastructure, often timed to coincide with political decisions or symbolic events. In most cases, DDoS attacks had little or no effect on services’ availability. To counter this threat, Latvia has invested in centralized, state-funded DDoS protection for public institutions and strengthened oversight of ICT critical infrastructure through new cybersecurity laws and regulations. Source:  SAB, Annual Report 2025 . [online] Published January 2026. Available at: https://www.sab.gov.lv/files/uploads/2026/01/SABs-annual-report_2025_ENG.pdf Top Of Page War in Ukraine Fake Videos Targeted Ukrainian Refugees in France StopFake   reports that a series of fake videos on Telegram falsely alleged that Ukrainian refugees in France committed mass crimes, including murder, theft, drug distribution, and even terrorism. These videos, which mimicked the logos and formats of prominent French and international media outlets, including Le Parisien, Le Figaro, L’Équipe, Reuters, Le Point, and Euronews, were part of a coordinated disinformation campaign. Analysis showed that the videos were released over a short period, from January 12 to 16, 2026, via at least three anonymous Telegram channels, and were then widely amplified across pro-Russian networks. Fact-checks confirmed that none of the alleged crimes or news stories were real. 
Official websites, social media accounts, and publications from the cited media outlets contain no reports that match the videos’ claims. France also does not maintain official statistics on crime specifically among Ukrainian refugees, and available data suggest that Ukrainians do not pose a higher criminal threat than other migrant groups. Source: StopFake, Фейк: Мировые СМИ сообщили о массовых преступлениях украинских беженцев во Франции [Fake: World Media Reported Mass Crimes by Ukrainian Refugees in France]. [online] Published 28 January 2026. Available at: https://www.stopfake.org/ru/fejk-mirovye-smi-soobshhili-o-massovyh-prestupleniyah-ukrainskih-bezhentsev-vo-frantsii/ Top Of Page China China’s Influence in the Nordic–Baltic Information Environment in Denmark and Lithuania The NATO Strategic Communications Centre of Excellence examined, in a report on China’s influence in the Nordic–Baltic information environment, how Beijing has sought to shape the region’s information space, documenting a shift from earlier partnership approaches to a more cautious, contested relationship. It mapped China’s objectives (protecting core interests, acquiring technology, and improving perceptions), described eight avenues of influence, and analysed official PRC frames and their resonance in local media using country case studies of Lithuania and Denmark. In Lithuania, relations with China deteriorated sharply after 2019, culminating in Vilnius’s withdrawal from the China-CEEC format and the authorization of Taiwan to open a representative office under its own name. China responded with economic pressure and a coordinated diplomatic and information campaign. However, the study found that China’s influence in Lithuania’s media space remained limited. Chinese narratives had little resonance, partly due to the absence of strong Chinese media channels and partly because Beijing’s coercive tactics proved counterproductive, reinforcing public skepticism rather than shaping opinion. 
Media debates largely reflected domestic political dissatisfaction and broader geopolitical shocks, particularly Russia’s invasion of Ukraine, rather than successful Chinese messaging. In Denmark, the report identified a different pattern, described as “Shadow Wolf Warrior” diplomacy. Rather than relying on aggressive public messaging, China relied more on backstage influence through elite networks, business ties, and United Front activities, while maintaining a low public profile. Although Danish public discourse was generally skeptical of Chinese frames, and official messaging failed to gain broad traction, the report warned that China’s covert channels and long-standing institutional ties create a durable influence. Source: NATO Strategic Communications Centre of Excellence, M. Lanteigne & L. Stünkel & K. Andrijauskas & A. K. Jakobsson. China’s Influence in the Nordic – Baltic Information Environment: Denmark and Lithuania . [online] Published 28 January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Chinas-Influence-in-the-Nordic-Baltic---Denmark-Lithuania-FINAL-FILE.pdf?zoom=page-fit Top Of Page Pro-China AI Videos Falsely Claim Taiwanese Support for Unification NewsGuard’s Reality Check  has tracked how, since December 2025, pro-China sources have circulated AI-generated videos showing people purportedly from Taiwan speaking Mandarin with authentic Taiwanese accents and calling for unification with China. One account on the Chinese platform RedNote, “Taiwanese come home,” posted 35 such videos featuring teachers, doctors, police officers, firefighters, and students, garnering over 21,000 likes. NewsGuard confirmed that the videos were AI-generated using OpenAI’s Sora 2 tool, and that the individuals depicted do not exist or were misrepresented. 
For example, Taipei Municipal Chien Kuo High School, shown in one video, has never admitted female students, and the National Taiwan University professors depicted in another video were entirely fabricated. These AI-generated videos are part of a broader cognitive warfare effort by China, which has commissioned companies like Magic Data and iFlytek to create voice databases of native Taiwanese speakers in Mandarin, Hokkien, and Hakka. These databases are intended to lend authenticity to fabricated pro-China messaging. Source: NewsGuard, C. Lin. Pro-China AI-Generated Videos Use Databanks of Taiwanese Accents to Fake Calls for Taiwan-China Unification. [online] Published 29 January 2026. Available at: https://www.newsguardrealitycheck.com/p/pro-china-ai-fakes-a-taiwanese-accent Top Of Page [General Reports] Disinformation Surrounding the Shooting of Alex Pretti NewsGuard’s Reality Check designated the claim that Alex Pretti pulled a gun on federal agents before being fatally shot in Minneapolis in January 2026 as its “False Claim of the Week,” citing its rapid spread, high engagement, and promotion by high-profile figures. Following the January 2026 shooting, Trump administration officials and conservative commentators alleged that Pretti brandished a firearm and posed an imminent threat, framing the killing as justified. Statements from the Department of Homeland Security, Homeland Security Secretary Kristi Noem, and White House aide Stephen Miller were widely echoed across social media and partisan websites, drawing millions of views. However, a detailed review of eyewitness video footage from five angles by NewsGuard, alongside reporting from major outlets including Reuters, CNN, The New York Times, and ABC News, found no evidence that Pretti pulled or reached for a gun before he was shot. The footage showed Pretti holding a phone, with no weapon visible, as officers confronted him. 
He was pepper-sprayed, tackled, and pinned to the ground before an officer removed a concealed handgun from his waistband. Authorities later confirmed that Pretti was legally carrying a concealed firearm with a permit, but video analysis indicated it remained holstered and hidden throughout the initial encounter. Additionally, NewsGuard’s Reality Check   reported that shortly after the shooting, an AI-manipulated image circulated widely on social media, falsely claiming to show Pretti holding a gun at the moment he was shot, with posts reaching millions of views within hours. Investigators and journalists confirmed the image was fabricated. Sources:   NewsGuard, C. Vercellone, Reality Check. Debunk: Pretti Didn’t Pull Out a Firearm, Contrary to the Trump Administration’s Claims . [online] Published 30 January 2026. Available at: https://www.newsguardrealitycheck.com/p/did-alex-pretti-brandish-a-gun-newsguards NewsGuard, M. Calamaio, Reality Check. AI-Manipulated Image Cited as False Evidence that Victim in Latest ICE Shooting Was Brandishing a Gun . [online] Published 26 January 2026. Available at: https://www.newsguardrealitycheck.com/p/ai-manipulated-image-shows-gun-not Top Of Page Disinformation As a Systemic Threat to Democratic Resilience EUvsDisinfo   argues that recent global risk assessments characterize foreign information manipulation and interference (FIMI), disinformation, and misinformation as systemic threats that undermine democratic resilience worldwide. Reports from the World Economic Forum, the United Nations, and the European External Action Service highlight how these campaigns deepen societal divides, erode trust in institutions, and weaken crisis response by undermining the shared evidence base required for collective decision-making. 
The Human Rights Council further warns that FIMI increasingly targets marginalized communities, independent media, and human rights defenders, demonstrating that information disorder is not a marginal issue but a global risk multiplier affecting governance, security, and social cohesion. The article also emphasizes the economic and structural damage caused by disinformation, noting that misleading narratives can destabilize markets, distort financial expectations, and erode long-term investment and policy stability. Conflict-related and climate-focused manipulation campaigns can incite hatred, obstruct humanitarian efforts, and delay sustainable development by casting doubt on scientific consensus or promoting false solutions. As a safeguard, the article emphasizes robust public-interest media ecosystems, independent journalism, and media literacy, alongside initiatives such as the European Democracy Shield, which seeks to strengthen election integrity and counter FIMI through detection, cooperation, and proactive investment in trustworthy information spaces. Source:  EUvsDisinfo, FIMI and disinformation as global threats . [online] Published 30 January 2026. Available at: https://euvsdisinfo.eu/fimi-and-disinformation-as-global-threats/ Top Of Page Disinformation Vortex Around Minnesota ICE Protests A podcast episode of Uncanny Valley   by Wired   describes a fragmented, high-tempo information environment surrounding intensified ICE activity in Minnesota, in which far-right and pro-administration messaging rapidly shaped and distorted public understanding of events. Hosts discuss how a right-wing influencer, Nick Shirley, promoted an unproven claim that Somali-run daycare centers in Minneapolis misappropriated millions in a Medicaid-related fraud narrative; they link this amplification to subsequent harassment and violence dynamics, including an attack on Rep. Ilhan Omar and attempts to frame the incident as staged. 
They also describe a rolling cycle of narrative shifts and reputational smears after the killing of protester Alex Pretti, moving from claims of an assassination attempt and “terrorist” labeling to alternative insinuations (e.g., immigration status) and finally to blame-shifting arguments about protest behavior, illustrating a “spin-to-fit” approach that prioritizes ideological utility over verifiable facts, including repetition by figures at the highest levels of the administration. The episode then broadens to platform governance and credibility crises as accelerants for misinformation and perceived censorship. It highlights how users interpreted TikTok outages and content-performance changes as politically motivated suppression following a U.S. ownership restructuring, noting that distrust is compounded by opaque, personalized recommendation systems that are difficult to audit externally. The hosts suggest that even subtle algorithmic tweaks can influence which narratives gain traction without leaving clear evidence. They further note that TikTok’s updated terms request more granular location permissions and enable the broader collection of user input for AI features, raising concerns about surveillance, targeting, and the erosion of user trust at a moment when many already suspect political capture of major information channels. Source:  WIRED, B. Barrett & Z. Schiffer, & T. Marchman. Uncanny Valley: Minneapolis Misinformation, TikTok’s New Owners, and Moltbot Hype . [online] Published 29 January 2026. 
Available at: https://www.wired.com/story/uncanny-valley-podcast-ice-minneapolis-tiktok-moltbot/ Top Of Page ICE Surveillance Sparks Online Counterattacks Politico   reports an escalating digital information conflict surrounding the Trump administration’s mass deportation agenda, in which federal agencies have expanded domestic surveillance capabilities while online activists and hacker groups deploy countermeasures to track and expose immigration enforcement operations. ICE has reportedly increased its use of advanced surveillance tools and data access, including contracts with firms such as Paragon and Palantir, forensic phone-cracking technologies, facial recognition systems, and data brokers collecting sensitive personal information. The administration has also granted ICE access to large federal datasets from agencies like the IRS, Medicaid, and Social Security. In response, activists have used encrypted messaging platforms, social media, and community-built tools to report raid locations, map surveillance infrastructure, and identify agents, while cybercriminal collectives have escalated tactics by leaking names and personal details of ICE and DHS officials online. The article highlights how these developments create fertile ground for disinformation, coercive influence, and contested narratives over legitimacy and safety. Digital tools intended to document or resist enforcement actions have prompted aggressive efforts by the government and major technology companies to suppress information sharing, including app removals, Meta’s content restrictions, and federal investigations into encrypted communications. Officials have framed ICE-tracking platforms as threats to agent security, while critics argue these actions represent intimidation and censorship aimed at silencing opposition. 
The environment is characterized by breaches, doxxing, surveillance expansion, and attempts to control online discourse, illustrating how both state and non-state actors use digital tactics to influence public perception, disrupt organizing, and shape the information space around immigration enforcement. Source : Politico, D. Nickel & A. Ng. ICE has expanded its mass surveillance efforts. Online activists are fighting back. [online] Published 29 January 2026. Available at: https://www.politico.com/news/2026/01/29/ice-tracking-tools-protesters-00755703 Top Of Page [Appendix - Frameworks to Counter Disinformation] UK Warned It Risks Absorbing Cyber and Hybrid Attacks Without Deterrence Warnings from UK security leaders, reported by The Record , highlighted that Britain risks exposing itself to cyberattacks, sabotage, and disinformation campaigns unless it develops credible offensive deterrence alongside defensive resilience. Former national security adviser Lord Sedwill told a parliamentary hearing that resilience measures alone would not discourage hostile states. The warnings came as ministers defended plans agreed at last year’s NATO summit to raise total security spending to 5 percent of GDP within a decade, including 1.5 percent for indirect defense and resilience such as cybersecurity. Committee members questioned whether this resilience funding would deliver new capabilities or merely repackage existing spending, given the lack of clear NATO definitions. Sedwill expressed concern that creative accounting could undermine the effort, urging ministers to clarify the additional capacity that would be delivered in the coming years. Ministers acknowledged that cyber incidents and hybrid attacks below the threshold of armed conflict are already having serious strategic effects. 
The government plans to publish a revised National Cyber Action Plan, shifting from a strategic framework to an operational plan focused on countering threats, strengthening resilience, and supporting economic growth. Source: The Record, A. Martin. UK leaders warned country risks 'absorbing' cyber and hybrid attacks without offensive deterrence. [online] Published 28 January 2026. Available at: https://therecord.media/uk-government-warned-cyber-hybrid-threats-offensive-operations Top Of Page Commercialized Social Media Manipulation and Disinformation Amplification An experiment conducted in 2025 by the NATO Strategic Communications Centre of Excellence examined how major social media platforms detect and counter commercially purchased inauthentic engagement. Despite regulatory advances such as the EU Digital Services Act, manipulation services remain widely accessible and inexpensive, allowing actors to buy fake likes, comments, shares, and followers at scale. The study found that more than 30,000 inauthentic accounts generated more than 100,000 units of engagement, with enforcement varying significantly across platforms: X and YouTube removed a larger share of fake activity, whereas Instagram, TikTok, and others left most purchased engagement intact. The experiment also showed that manipulation is not confined to organic posts, as paid advertising systems can be exploited to distribute inauthentic narratives to targeted audiences. The report highlights a shift toward more sophisticated influence tactics, including AI-enabled bot networks designed to blend into authentic conversations rather than relying on overt spam. These bots increasingly amplify politically sensitive and military-related narratives, including pro-Kremlin and pro-China themes, while commercial providers use cryptocurrency payments to obscure traceability and sustain a resilient ecosystem of manipulation. 
Overall, the findings underscore how hostile actors can exploit low-cost, automated tools to shape discourse, erode trust, and embed disinformation within legitimate online communities, emphasizing the need for behavioural detection, financial disruption, and stronger cross-platform accountability. Source: NATO Strategic Communications Centre of Excellence , Social Media Manipulation for Sale: 2025 Experiment on Platform Capabilities to Detect and Counter Inauthentic Social Media Engagement. [online] Published 30 January 2026. Available at: https://stratcomcoe.org/publications/social-media-manipulation-for-sale-2025-experiment-on-platform-capabilities-to-detect-and-counter-inauthentic-social-media-engagement/338 Top Of Page EU-Supported Media Literacy Effort Against Disinformation in Kosovo As published by the EEAS , digital and AI-driven technologies have increased vulnerability to misinformation and information manipulation across the Western Balkans, prompting the European Union and its local European Houses to support public resilience initiatives. At the launch of the exhibition The Glass Room: Misinformation Edition in Pristina, EU Ambassador Airo Orav emphasized the shared challenge of safeguarding societies from disinformation and the EU’s commitment to equipping citizens with tools to recognize and counter misleading narratives. With EU support, the exhibition toured multiple towns in Kosovo between October and December 2025, using posters, interactive applications, and animations to explain how misinformation spreads, why it is persuasive, and how everyday online behaviors such as clicks, likes, and shares amplify false content. The tour also included capacity-building workshops led by experts Kreshnik Gashi and Darko Dimitrijević, which addressed risks such as deepfakes, algorithmic bias, and the influence of digital design on public opinion, while promoting source verification and privacy awareness as key defenses against hostile information dynamics. 
Source: EEAS, Press and information team of the EU Office/EU Special Representative in Kosovo. The Kosovo Journey of The Glass Room – Misinformation Exhibition Edition . [online] Published 29 January 2026. Available at: https://www.eeas.europa.eu/delegations/kosovo/kosovo-journey-glass-room-%E2%80%93-misinformation-exhibition-edition_en [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC   website   Top Of Page

  • Cyber-based hostile influence campaigns 19th-25th January 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] X Grok’s Deepfake Scandal and Its Regulatory Conclusions [State Actors] Russia   Pro-Russian Monuments Were Used to Manipulate History and Justify War Russian Hacktivists Increased Disruptive Cyber Pressure on UK Organizations Disinformation in Lavrov’s January Press Conference Kremlin Disinformation Mechanism and Debunk The War in Ukraine Disinformation Targeting Greenland and Ukraine Aid Disinformation Around Russia’s Winter Attacks on Ukraine Iran Iranian State TV Feed Reportedly Hijacked to Air Anti-Regime Messages [AI Related Articles] The Silicon Gaze and Geographic Bias in AI Systems AI Realistic Deepfake Videos New Influence Risks Follow OpenAI’s Move to Ads [Cyfluence Attacks] Sandworm-Linked Cyberattack Targets Poland’s Energy Infrastructure [General Reports] Anti-Israel Hoaxes Targeted Israeli Citizens Abroad Old Military Exercise Footage Misrepresented as US Seizure of Russia-Linked Tanker False Claims Misrepresented European Response to U.S. Remarks on Greenland Psychological Defence Against Hostile Disinformation and Influence [Appendix - Frameworks to Counter Disinformation] Countering Information Influence Operations in the Nordic-Baltic Region [CRC Glossary] [ Report Highlights] A report by EU vs Disinfo examined how Russia used cultural heritage as a tool of FIMI, a strategy described as Cultural Heritage Exploitation. 
Infosecurity Magazine reports that UK authorities warned of increased disruptive cyber activity by Russian-aligned hacktivist groups against UK organizations. Oxford and Kentucky researchers introduced the concept of the “silicon gaze” to explain how large language models, such as ChatGPT, systematically reproduce and amplify long-standing spatial inequalities. According to Infosecurity Magazine, ESET attributes a late-December 2025 cyberattack on Poland’s energy infrastructure to the GRU-linked Sandworm group. The attack involved DynoWiper malware but caused no confirmed disruption. Poland is moving ahead with stricter cybersecurity regulations in response. The Psychological Defence Agency (PDA) has developed an approach to countering disinformation and hostile influence in democratic societies, threats that are often amplified through social media and AI. It focuses on a whole-of-society psychological defence that strengthens citizen resilience, media and information literacy, and institutional readiness, favoring transparency, fact-based communication, and attribution over censorship. A report by NATO's Strategic Communications Centre of Excellence examined how the eight Nordic-Baltic countries (NB8) have developed strategies to counter information influence operations amid a deteriorating security environment. [Social Media Platforms] X Grok’s Deepfake Scandal and Its Regulatory Conclusions Following the latest events regarding Grok, in which the platform allowed users to create realistic, fake sexual images (for further information, see the W02 January 2026 Cyfluence Report here), TechXplore reported that Musk has promised UK authorities that the platform will block such capabilities to comply with local law. However, this response appears limited to the UK and came only after investigations were launched. Current laws in jurisdictions such as New Zealand also struggle to address these problems, particularly when images are fully synthetic.
Voluntary codes and light-touch regulation allow platforms to claim compliance while still enabling predictable misuse of AI tools. Nevertheless, the article argued that focusing only on criminalizing users comes too late and fails to address the systemic roots of AI-enabled disinformation. Without robust regulation that compels companies to assess risks and implement safeguards, platforms will continue to prioritize speed and engagement over safety. Source:  TechXplore, C. Mudgway & A. Lensen. Sexualized deepfakes on X are a sign of things to come. NZ law is already way behind. [online] Published 20 January 2026. Available at: https://techxplore.com/news/2026-01-sexualized-deepfakes-nz-law.html Top Of Page [State Actors] Russia Pro-Russian Monuments Were Used to Manipulate History and Justify War A report by EU vs Disinfo  examined how Russia used cultural heritage as a tool of FIMI, in a strategy described as Cultural Heritage Exploitation (CHX). Through CHX, pro-Kremlin historical narratives are deliberately embedded in physical cultural objects, especially monuments, to legitimize Russia’s war against Ukraine and its territorial claims. These monuments promote recurring propaganda themes, including claims that Ukraine is controlled by Nazis, that Russian-speaking communities are under threat, and that Ukrainian statehood is artificial. Thousands of new pro-Russian monuments have been erected in Russia, occupied Ukrainian territories, and other regions, often placed in highly visible public spaces such as parks, schools, and religious sites. Research showed these monuments are not isolated acts but part of a coordinated system involving Kremlin-linked monument manufacturers, the Russian military, the Russian Orthodox Church, and local authorities. Beyond their physical presence, these monuments have been digitally amplified through state media, diplomatic channels, exhibitions abroad, and social media. Source:  EU vs Disinfo, F. Greenland. 
Built to lie: how new pro-Russian monuments exploit cultural heritage. [online] Published 20 January 2026. Available at: https://euvsdisinfo.eu/built-to-lie-how-new-pro-russian-monuments-exploit-cultural-heritage/ Top Of Page Russian Hacktivists Increased Disruptive Cyber Pressure on UK Organizations Infosecurity Magazine reported that UK authorities warned Russian-aligned hacktivist groups had intensified disruptive cyber activity against UK organizations, with local government bodies and critical national infrastructure as key targets. A new alert from the UK National Cyber Security Centre (NCSC) stated that these actors focused primarily on DoS attacks designed to overwhelm websites and online systems, temporarily taking services offline and limiting public access to essential functions. The NCSC noted that the attacks are ideologically motivated rather than financially driven, linked to perceived Western support for Ukraine. One prominent actor, NoName057(16), has been active since 2022 and has repeatedly targeted UK local authorities, NATO members, and other European states perceived as opposing Russian geopolitical interests. The group coordinated activities via Telegram channels and distributed tools and tactics on platforms such as GitHub, enabling supporters to participate in coordinated disruption efforts. Gary Barlet, Public Sector CTO at Illumio, stated that modern supply chains and critical infrastructure are deeply interconnected, making disruption easier than ever. The agency recommends that organizations review their DoS protections, improve resilience, and engage with its heightened cyber threat collection efforts. The Record also noted that the warning echoed a separate advisory issued in December by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), which said pro-Russian hacktivists were conducting opportunistic, low-impact attacks against global critical infrastructure.
While these operations are typically less sophisticated than those conducted by state-sponsored hackers, U.S. officials cautioned that they can still cause real-world disruption and, in some cases, physical damage. Sectors targeted have included water and food systems, agriculture, and energy. Sources: Infosecurity Magazine, A. Mascellino. Russian Hacktivists Intensify Disruptive Cyber Pressure on UK Orgs.  [online] Published 19 January 2026. Available at: https://www.infosecurity-magazine.com/news/russia-cyber-pressure-uk-orgs/ The Record, D. Antoniuk. UK warns of sustained cyberthreat from pro-Russian hacktivists.  [online] Published 20 January 2026. Available at: https://therecord.media/uk-ncsc-warning-russia-aligned-hacktivist-groups Top Of Page Disinformation in Lavrov’s January Press Conference  In its analysis of Russia’s information tactics, EU vs Disinfo framed Foreign Minister Sergey Lavrov’s January 2026 press conference as less a diplomatic briefing and more a coordinated exercise in foreign information manipulation and interference. Over several hours, he recycled well-known Kremlin narratives aimed at undermining trust in the EU, portraying Europe as aggressive and unstable, and presenting Russia as a besieged but lawful actor. Personal attacks on European leaders, rather than engagement with policy, were used to undermine public confidence in democratic decision-making and to frame EU leadership as reckless and dangerous. Lavrov relied heavily on distortion and false equivalence. He invoked Nazi-era imagery to discredit Germany’s defensive military posture, falsely depicted the EU as a coercive and illegitimate institution, and repeated long-debunked claims about Ukraine being run by Nazis, repressing Russian culture, and provoking the war. 
Beyond the EU, Lavrov targeted the Baltic states and Moldova with narratives designed to delegitimise their security concerns and European aspirations, while also adopting anti-colonial language to justify Russia’s own territorial expansion in Ukraine. Source: EU vs Disinfo. Lavrov’s 2026 presser: a three-hour FIMI offensive against Europe and its leaders. [online] Published 23 January 2026. Available at: https://euvsdisinfo.eu/lavrovs-2026-presser-a-three-hour-fimi-offensive-against-europe-and-its-leaders/ Top Of Page Kremlin Disinformation Mechanism and Debunk NATO's Strategic Communications Centre of Excellence research shows that Russian propaganda thrives on contradiction, presenting Russia at once as a defender of conservative Christian Europe and as an anti-colonial champion of the Global South. Studies focusing on Bulgaria show that exposing these logical inconsistencies is not enough: disinformation succeeds not because it is coherent, but because it resonates emotionally with specific social groups. Rather than persuading through facts, Kremlin narratives exploit grievances, identity politics, and distrust of liberal elites, making simple fact-checking an inadequate response. A key disinformation technique is what researchers call “confirmation through denial”, where denials are framed in ways that indirectly reinforce threats or culpability. Kremlin messaging is also rarely openly pro-Russian; instead, it embeds itself within national-populist and anti-democratic movements across Europe. Its core goal is not to promote Russia, but to sow polarisation, weaken trust in democratic institutions, and link toxic identity narratives to political actors hostile to liberal democracy and European unity. The Bulgarian case also highlights how social media algorithms, opaque networks of websites, and resistance to regulation amplify disinformation, while efforts to govern the digital space are reframed as “censorship”.
Source:  NATO Strategic Communications Centre of Excellence, M. Iakimova & B. Dimitrova & V. Dimitrova & O. Nanova & L. Donchev & V. Valkanov & D. Vatsov, Defence Strategic Communications, Volume 16. Bulgaria as a Russian Propaganda Laboratory.  [online] Published 2025. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/DSC_NATO_journal_V16_05_Milena-Iakimova.pdf?zoom=page-fit Top Of Page War in Ukraine Disinformation Targeting Greenland and Ukraine Aid As stated in an article by NewsGuard , Pro-Kremlin sources are spreading fabricated videos that imitate trusted European and international media to claim that support for Ukraine has left Europe too weak to defend Greenland from supposed U.S. annexation plans. These fake reports exploit recent remarks by U.S. President Donald Trump about acquiring Greenland, using them as a hook to advance disinformation that portrays European assistance to Ukraine as reckless and self-damaging. The false content includes manipulated videos allegedly from Danish broadcaster TV2 Nord, France’s Le Figaro , and the U.S. think tank Institute for the Study of War. In each case, the videos were entirely fabricated. The outlets never published the reports, and the individuals quoted never made the statements attributed to them. Linguistic inconsistencies, direct denials from media editors, and the absence of any credible records confirm that the material is fake. Despite this, the claims were widely amplified by the Russian state and pro-Kremlin media networks. Source:  NewsGuard, E. Maitland. Pro-Kremlin Sources Impersonate Credible Media to Suggest Western Support for Ukraine Harms Europe and Greenland.  [online] Published 21 January 2026. Available at: https://www.newsguardrealitycheck.com/p/russian-fakes-make-the-most-of-europes Top Of Page Disinformation Around Russia’s Winter Attacks on Ukraine EU vs. 
Disinfo   reported that as Russia intensified winter strikes on Ukraine’s energy infrastructure, causing blackouts and heating shortages during extreme cold, pro-Kremlin information channels launched a parallel disinformation campaign. While Russian missiles and drones were the direct cause of the outages, these narratives sought to obscure responsibility by downplaying the attacks and shifting blame onto Ukraine’s leadership for “refusing peace.” A core tactic has been the inversion of responsibility. Pro-Russian commentators express performative sympathy for civilians while portraying the destruction of energy infrastructure as a legitimate tool to force Ukraine into surrender. By describing the war as a “technical” or “energy” problem and avoiding mention of Russian strikes, these narratives whitewash the causes of the crisis and present capitulation on Moscow’s terms as the only path to relief. At the same time, propaganda outlets openly weaponize hardship, celebrating the impact of cold and blackouts as pressure on Ukrainian society. Western reporting on civilian suffering is selectively quoted or distorted to suggest that Ukrainians are losing the will to resist and are ready to accept territorial concessions. Source:  EU vs Disinfo. Weaponising winter: how pro-Russian outlets justify strikes against Ukrainian energy infrastructure. [online] Published 23 January 2026. Available at: https://euvsdisinfo.eu/weaponising-winter-how-pro-russian-outlets-justify-strikes-against-ukrainian-energy-infrastructure/ Top Of Page Iran Iranian State TV Feed Reportedly Hijacked to Air Anti-Regime Messages The Record   stated that several Iranian state television channels were briefly hijacked on 18 January 2026, interrupting regular programming to broadcast protest footage and anti-regime messages. 
The affected channels, transmitted via the Badr satellite and used by Iran’s state broadcaster for provincial stations, reportedly aired Farsi-language calls for continued demonstrations, footage of solidarity rallies abroad, and statements attributed to international supporters. The broadcast also featured Reza Pahlavi, the U.S.-based son of Iran’s last shah, urging further protests and calling on Iran’s military and security forces to back demonstrators. Clips of the incident were shared by Iran International, local media, and Pahlavi’s media team, though the footage could not be independently verified. Reports indicate that the unauthorized transmission lasted approximately 10 minutes, and neither the group responsible nor Iranian authorities have publicly commented on the incident. The apparent breach came amid unrest that began in late December 2025, following a severe economic downturn marked by high inflation and rising food prices. Protests have increasingly targeted alleged government corruption and mismanagement, with some demonstrators calling for political change. The government also imposed a near-total shutdown of internet and mobile communications for nearly two weeks, though restrictions may soon be eased. Source:  The Record, D. Antoniuk. Iranian state TV feed reportedly hijacked to air anti-regime messages.  [online] Published 19 January 2026. Available at: https://therecord.media/iran-state-television-reported-hack-opposition Top Of Page [AI Related Articles] The Silicon Gaze and Geographic Bias in AI Systems Drawing on a new analysis by Oxford and Kentucky researchers , the concept of the “silicon gaze” was introduced to explain how large language models (LLMs), such as ChatGPT, systematically reproduce and amplify long-standing spatial inequalities. 
Based on a large-scale audit of more than 20 million geographically framed queries, the study showed that bias in generative AI is not a technical flaw that can be easily corrected, but a structural feature rooted in historically uneven data production, model design, and platform power. Because LLMs are trained primarily on English-language and Global North–centric sources, they tend to privilege already visible, wealthy, and well-documented places while rendering others vague, stereotyped, or invisible. The authors developed a five-part typology to describe how these biases operate: availability bias (uneven data coverage), pattern bias (reliance on frequent language associations), averaging bias (flattening complexity into dominant narratives), trope bias (recycling cultural stereotypes), and proxy bias (substituting measurable indicators for subjective qualities). Together, these mechanisms shape how AI evaluates places across scales, from countries to neighborhoods, often reinforcing racial, economic, and geographic hierarchies. Rankings that appear neutral or authoritative, such as those that label places as “smarter”, “happier”, or “more cultured”, frequently reflect dominant media narratives and global metrics rather than lived realities. The report concluded that addressing these issues requires moving beyond technical debiasing toward a power-aware, relational understanding of AI systems. Bias in LLMs reflects centuries of unequal knowledge production and cannot be solved by more data alone. Instead, the authors call for greater transparency, independent auditing, stronger governance, and critical literacy among users to question what, and who, AI systems leave out. Without such shifts, generative AI risks hard-coding historical inequalities into everyday decision-making, shaping how places are imagined, valued, and acted upon in the future. Source:  SAGE Journals, F. W. Kerche & M. Zook & M. Graham. 
The silicon gaze: A typology of biases and inequality in LLMs through the lens of place.  [online] Published 20 January 2026. Available at: https://journals.sagepub.com/doi/10.1177/29768624251408919 Top Of Page AI Realistic Deepfake Videos NewsGuard  testing demonstrated that OpenAI’s video generator Sora can produce fake videos so convincing that both humans and leading AI chatbots struggle to identify them as false. When Sora videos had their watermarks removed, ChatGPT, Grok, and Gemini failed to recognize them as AI-generated in most cases, often confidently describing fabricated scenes as real and supported by “news sources.” This creates a serious disinformation risk, as realistic AI videos can easily circulate without clear signals that they are fake. Even when Sora’s watermark was visible, detection was inconsistent. While watermarks improved accuracy, ChatGPT and Grok still misidentified some videos, and all three systems sometimes treated fabricated footage as legitimate reporting. Compounding the problem, watermark removal tools are widely available, making it easy to strip away the main visual indicator that a video is AI-generated. As a result, false videos promoting provably untrue claims can spread while appearing credible to both users and automated systems. The tests also revealed a broader transparency issue. The chatbots rarely warned users that they lack reliable AI-detection capabilities and instead often expressed unwarranted confidence in their incorrect assessments. Source: NewsGuard, I. Chomnalez & L. Marchl. AI Fools Itself: Top Chatbots Don’t Recognize AI-Generated Videos.  [online] Published 22 January 2026. 
Available at: https://www.newsguardtech.com/special-reports/top-ai-chatbots-dont-recognize-ai-generated-videos/ Top Of Page New Influence Risks Follow OpenAI’s Move to Ads As TechXplore   highlights , OpenAI raised new influence risks by announcing plans to introduce advertising in ChatGPT in the United States, showing ads to users on the free version and the low-cost Go tier, while excluding Pro, Business, and Enterprise subscribers. The company said ads will be clearly separated from chatbot responses, will not affect answers, and will respect user privacy, including opt-outs for personalized ads and bans on ads for minors or around sensitive topics. Still, the move has raised concerns among some users. The key question is whether OpenAI’s voluntary safeguards will hold once advertising becomes central to its business. Running large-scale AI systems is extremely costly, and OpenAI expects significant losses over the coming years. History has shown that targeted advertising is the most scalable way to monetize large user bases, as seen with Google and Facebook. Experience has also shown how commercial pressure can erode self-imposed guardrails, leading to harmful outcomes such as political manipulation, data misuse, and the promotion of damaging products. Concerns are heightened because chatbots are used in more personal and intimate ways than social media. Ads placed alongside private, trust-based conversations may exert a stronger influence on user decisions, even if formally separated from AI responses. The report argued that this creates a structural conflict between public interest and advertising incentives, raising the risk of manipulation by advertisers, scammers, or political actors. Source:  TechXplore, R. F. Ciriello & K. Backholer. OpenAI will put ads in ChatGPT, opening a new door for dangerous influence. [online] Published 25 January 2026. 
Available at: https://techxplore.com/news/2026-01-openai-ads-chatgpt-door-dangerous.html Top Of Page [Cyfluence Attacks] Sandworm-Linked Cyberattack Targets Poland’s Energy Infrastructure In an Infosecurity Magazine  article, a coordinated cyberattack targeting Poland’s energy infrastructure on 29th–30th of December 2025 has been attributed by ESET to the Russia-aligned APT group Sandworm (also tracked as UAC-0113, APT44, and Seashell Blizzard), which is believed to be linked to Russia’s military intelligence service, the GRU. ESET reported that the attackers deployed a data-wiping malware strain known as DynoWiper, though the company stated it was not aware of any successful disruption to energy services. ESET assessed the attribution with medium confidence, citing a strong overlap between the malware’s behavior and Sandworm’s previous wiper-related tactics, techniques, and procedures. The incident remains under investigation, but ESET suggested the timing may have been deliberate, coinciding with the 10th anniversary of Sandworm’s 2015 cyberattack on Ukraine’s power grid. Polish Prime Minister Donald Tusk stated that Poland successfully repelled the late-2025 attack and that critical infrastructure was not threatened, while also emphasizing the government’s push to finalize the National Cybersecurity System Act, which implements NIS2 to strengthen risk management, IT/OT security, and incident response. The report notes that the attack apparently targeted two combined heat and power plants and a renewable energy system, reflecting continued Russian-linked focus on energy assets as a high-impact target set. Source:  Infosecurity Magazine, P. Muncaster. Wiper Attack on Polish Power Grid Linked to Russia’s Sandworm.  [online] Published 26 January 2026. 
Available at: https://www.infosecurity-magazine.com/news/wiper-attack-polish-power-grid/ Top Of Page [General Reports] Anti-Israel Hoaxes Targeted Israeli Citizens Abroad According to a report by NewsGuard's Reality Check , since March 2025, anti-Israel disinformation has increasingly shifted from focusing on the war in Gaza to targeting Israeli citizens abroad on a personal level. At least eight false narratives have circulated widely on social media, particularly on X, collectively generating more than 16 million views. Several viral posts falsely claimed that the Russian government has begun deporting all Israeli citizens from Russia, and that Japan has banned Israeli tourists from entering the country. In reality, no such policies exist, and none of the claims were supported by official statements or credible media reporting. Similar false narratives have also referenced supposed travel bans or punitive measures targeting Israelis in China, France, Spain, and South Korea. Other hoaxes alleged criminal behavior by Israeli tourists abroad, including a widely shared claim that Israelis were involved in an attempted rape in Thailand. Thai authorities confirmed that the suspects were not Israeli. Another narrative falsely claimed that Israeli tourists set fire to a Christmas tree in China. NewsGuard sent an email to X, the platform on which these false claims have spread most, regarding these findings but did not receive a response. Source:  NewsGuard’s Reality Check, M. Calamaio & G. Pozzi. False Claims Target Israeli Citizens Abroad with Fabricated Travel Bans, Deportations, and Misconduct Allegations.  [online] Published. 
Available at: https://www.newsguardrealitycheck.com/p/anti-israel-hoaxes-get-personal Top Of Page Old Military Exercise Footage Misrepresented as US Seizure of Russia-Linked Tanker Yahoo News   described how two videos widely shared on social media and presented as footage of a recent US boarding of a Russia-linked oil tanker were actually recordings of earlier US military exercises. Reverse-image and keyword searches traced one clip to a casualty-evacuation drill during Exercise Balikatan 25 in Subic Bay, and the other to a SEAL demonstration during the Titans of the Sea Presidential Review. The circulating videos, however, differ from the grainy footage of the tanker's seizure published by DVIDS and by US Department of Homeland Security Secretary Kristi Noem on her official X account , and archival checks showed the viral material predated the operation by months. The mis-captioned videos spread across X, TikTok, Instagram, and other platforms shortly after U.S. authorities said they had seized the tanker in the North Atlantic for alleged sanctions violations. Washington said the tanker was part of a shadow fleet that carries oil for countries such as Venezuela, Russia, and Iran in violation of US sanctions, and seized it despite the ship being escorted by the Russian navy. Source:   Yahoo News, S. Heung. Months-old videos show US military exercises, not seizure of Russia-linked vessel. [online] Published 21 January 2026. Available at: https://www.yahoo.com/news/articles/months-old-videos-show-us-024450959.html?guccounter=2   X, K. Noem. Post on X (@Sec_Noem).  [online] Published 7 January 2026. Available at: https://x.com/Sec_Noem/status/2008907194691915945 Top Of Page False Claims Misrepresented European Response to U.S. Remarks on Greenland NewsGuard’s Reality Check: “False Claim of the Week”  focused on a widely shared false narrative alleging that Italian Prime Minister Giorgia Meloni said Europe would shut U.S. 
military bases, scrap trade agreements, and boycott McDonald’s in response to U.S. threats to acquire Greenland. The claim, which circulated widely on social media, gained millions of views after anti-Trump users misrepresented a short video clip from Meloni’s press conference on the 9th of January 2026. In reality, Meloni’s comments were rhetorical questions intended to criticize commentators who advocated distancing Italy from its allies. At the press conference, she reaffirmed Italy’s commitment to the European Union and the Atlantic alliance, emphasizing continued cooperation with the United States. NewsGuard found no credible news reports that Italy is considering leaving NATO, closing U.S. bases, or encouraging the boycott of McDonald’s. Source : NewsGuard’s Reality Check. Europe Shutting U.S. Bases, Scrapping Trade Deals and Boycotting McDonald’s? NewsGuard’s False Claim of the Week.  [online] Published 23 January 2026. Available at: https://www.newsguardrealitycheck.com/p/europe-shutting-us-bases-scrapping Top Of Page Psychological Defence Against Hostile Disinformation and Influence Psychological Defence Agency (PDA)   presents Psychological Defence and Information Influence  as a comprehensive framework for understanding and countering hostile information activities aimed at undermining democratic societies. It explains how foreign and domestic actors exploit openness, free expression, cognitive biases, media systems, and emerging technologies, particularly social media and artificial intelligence, to conduct disinformation, propaganda, and broader malign influence operations. These campaigns aim to distort public debate, polarize societies, erode trust in institutions, and weaken democratic decision-making by manipulating attention, emotions, and perceptions rather than relying solely on overt coercion. 
Drawing on theory, case studies, and policy practice, especially in the Swedish and Nordic context, the article outlines how democratic states can respond without undermining their own values. It emphasizes a whole-of-society psychological defence, combining citizen resilience, media and information literacy, institutional preparedness, intelligence coordination, and legally grounded countermeasures. Rather than censorship or state control of truth, the approach prioritizes transparency, factual communication, attribution of hostile actors, and ethical deterrence, framing the defence against disinformation as a long-term effort to strengthen democratic norms, societal trust, and resistance to hostile influence. Source:  Psychological Defence Agency, J. Falkheimer & J. Pamment. Psychological Defence and Information Influence – A Textbook on Theory and Practice.  [online] Published 2026. Available at: https://mpf.se/psychological-defence-agency/publications/archive/2026-01-19-psychological-defence-and-information-influence----a-textbook-on-theory-and-practice Top Of Page [Appendix - Frameworks to Counter Disinformation] Countering Information Influence Operations in the Nordic-Baltic Region A report by NATO Strategic Communications Centre of Excellence examined how the eight Nordic-Baltic countries (NB8) have developed strategies to counter information influence operations amid a deteriorating security environment. Drawing on shared democratic values, historical experience with foreign interference, and close regional cooperation, the countries have built national frameworks that emphasize resilience, public awareness, and coordinated responses. Since 2014, and especially following Russia’s full-scale invasion of Ukraine, information influence operations (IIO) have increasingly been paired with cyberattacks, sabotage, and pressure on critical infrastructure, making them a central national and regional security concern. 
Across the region, resilience is the cornerstone of counter-IIO efforts. This includes educating citizens through media literacy and critical thinking, strengthening institutional preparedness, and involving civil society, independent media, and fact-checking organizations. Defence and intelligence bodies conduct monitoring and situational analysis, while governments rely on a mix of strategic communications, case-by-case responses, and, increasingly, legal and regulatory measures such as sanctions, media restrictions, and amendments to the criminal code. Attribution and response thresholds vary by country, but international cooperation through NATO, the EU, and bilateral partnerships is a consistent feature. The report concluded that the NB8 countries are among the most advanced globally in countering IIO, largely due to their whole-of-society approach and strong coordination culture. However, challenges remain, including resource constraints, fragmented legal tools, duplication across international forums, and the need to balance free expression with disruptive measures. The authors identified significant potential for deeper regional cooperation, including joint training, shared capability development, and coordinated response projects, to further strengthen resilience against increasingly sophisticated influence operations. Source:  NATO Strategic Communications Centre of Excellence, J. Lindgren & J. Pamment & A. Palmer & S. Svetoka & E. Lange-Ionatamišvili. Countering Information Influence Operations in the Nordic-Baltic Region.  [online] Published 22 January 2026. Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/Countering-IIOs-in-the-Nordic-Baltic-FINAL-FILE.pdf?zoom=page-fit Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity. 
However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 22nd - 28th December 2025

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Meta Meta Report Warns of Persistent State-Linked Disinformation and AI-Accelerated Influence Operations [State Actors] Russia Disinformation in Putin’s “Direct Line” Broadcast Russia’s Historical Disinformation Campaign Against Finland The War in Ukraine The Expansion of Russia’s AI-Driven Disinformation Campaigns Disinformation Campaigns Targeting Ukrainian Refugees False Claims About Microchipped Vaccines in Ukraine Coordinated Disinformation Fuels Anti-Ukrainian Sentiment in Poland China China’s Cognitive Warfare Strategy Seeks Narrative Dominance Through Disinformation China Index Events Highlight PRC Influence [AI Related Articles] AI and LLM Disinformation Risks [Appendix - Frameworks to Counter Disinformation] EU Sanctions Target Diana Panchenko U.S. Action Targets Foreign Disinformation and Censorship Agents US Judge Blocks Deportation of Anti-Disinformation Advocate The WikiSignals Project Strengthens Wikipedia Against Disinformation EU Sanctions Act against Russian Aggression Uzbekistan Strengthens Youth Fact-Checking to Combat Disinformation EU - Western Balkans Conference Strengthens Fight Against Disinformation [CRC Glossary] [Report Highlights] Meta’s Threat Report finds that state-linked actors continue to run coordinated disinformation campaigns using fake media, local intermediaries, and AI tools to scale deceptive narratives and evade detection. 
According to the Cyfluence Research Center report, China conducts coordinated cognitive warfare by pairing gray-zone military actions with disinformation, legal manipulation, and psychological operations to reshape narratives of sovereignty and achieve strategic dominance without open conflict. A report by the Psychological Defence Agency described a sustained Kremlin-led disinformation campaign that uses falsified history as a strategic tool against Finland. An article by StopFake outlined a Russian disinformation strategy that portrays Ukrainians abroad as traitors or threats. As published by DoubleThink Lab, the China Index Regional Event Fund, launched in 2025, has supported local events worldwide to discuss Chinese influence and disinformation, helping communities understand the PRC’s global strategies. The European Union announced it has renewed its restrictive measures against Russia for six more months, until July 31, 2026, in response to Moscow’s ongoing destabilization of Ukraine. [Social Media Platforms] Meta Meta Report Warns of Persistent State-Linked Disinformation and AI-Accelerated Influence Operations Meta’s Q2–Q3 2025 Adversarial Threat Report outlines how hostile actors continue to exploit online platforms through coordinated inauthentic behavior (CIB), fraud, and AI-enabled manipulation, with a particular focus on state-linked influence operations and transnational criminal networks. The report documents how adversaries use false identities, deceptive media outlets, and coordinated amplification to mislead audiences, obscure attribution, and advance geopolitical or financial objectives, while deliberately adapting tactics to evade detection. 
The report highlights several state-linked influence campaigns, including a long-running Iranian operation known as Endless Mayfly , attributed to the International Union of Virtual Media , which used fake news sites, impersonated journalists, and cross-platform amplification to launder misleading narratives into public discourse. It also details Russian-linked operations that increasingly rely on authentic local intermediaries, particularly in sub-Saharan Africa, where freelancers and local media managers were covertly used to promote pro-Russian narratives and undermine Western partnerships—blurring the line between authentic and inauthentic behavior. In parallel, Meta reports that AI has become an accelerant rather than a fundamentally new threat, enabling adversaries to scale deception through AI-generated personas, content, translations, and impersonation while maintaining familiar behavioral patterns. These techniques are used across influence operations and scams, including celebrity impersonation and the production of fake news, reinforcing the need for behavior-based detection, cross-sector intelligence sharing, and coordinated defenses to counter evolving disinformation and hybrid threats. Source: Meta, Adversarial Threat Report: Second–Third Quarter . [online] Published December 2025. Available at: https://transparency.meta.com/sr/Q2-Q3-2025-Adversarial-threat-report/ Top Of Page [State Actors] Russia Disinformation in Putin’s “Direct Line” Broadcast According to an article  by EUvsDisinfo , Russia’s annual “Direct Line with Vladimir Putin,” first held in 2001, functions as a central instrument of state-led disinformation. In the broadcast that aired on December 19, Putin created an illusion of openness while legitimizing false narratives at the highest political level. Emotional appeals, personalized anecdotes, and claims of hands-on leadership were used to humanize the president and distract from systemic problems. 
At the same time, inconvenient realities were downplayed or presented as already resolved. A key tactic was the management of criticism through false equivalence and projection. By allowing Western journalists to ask questions, the Kremlin sought to showcase what it claimed was “free debate,” while dismissing criticism as biased and hostile. Putin repeated long-standing disinformation narratives about Ukraine, portraying Russia as seeking peace, shifting blame for the war onto Ukraine, the EU, and NATO, and insisting that Russia did not start the conflict. He also relied on cherry-picked facts and outright denial to claim military success and economic resilience, despite sanctions and battlefield realities. Assertions that Ukraine lacks the capacity to resist, that Russian advances are inevitable, and that Western actions amount to “robbery” or provocation were used to reinforce a victimhood narrative. Source:  EUvsDisinfo, Controlled questions, crafted lies: inside Putin’s year-end messaging machine . [online] Published 23 December 2025. Available at: https://euvsdisinfo.eu/controlled-questions-crafted-lies-inside-putins-year-end-messaging-machine/ Top Of Page Russia’s Historical Disinformation Campaign Against Finland A report  by the Psychological Defence Agency  described a sustained Kremlin-led disinformation campaign that uses falsified history as a strategic tool against Finland. Russian authorities and state-controlled media portray Finland as a Nazi collaborator responsible for war crimes, reviving and distorting Second World War narratives to undermine Finland’s legitimacy, damage its international reputation, and justify future pressure or aggression. This historical revisionism is embedded in Russia’s broader hybrid warfare, in which disinformation, fear-mongering, and victim narratives are coordinated from the highest political level and amplified through state media such as “TASS”. 
A central element of the campaign is “lawfare”: Russian courts and prosecutors issue rulings and indictments that falsely accuse Finland of genocide and complicity in the siege of Leningrad. These legal actions create a façade of legitimacy for potential reparations or territorial claims. They are reinforced by physical acts, including the vandalism and removal of Finnish memorials on Russian territory, as well as staged provocations involving nationalist activists. The campaign follows an escalatory logic in which rhetoric, legal measures, and physical acts combine to create conditions for renewed aggression. The report concluded that history is no longer treated as an academic debate but as an active weapon of security policy, used to mobilize the Russian population, portray Russia as a victim, and frame Finland as an enemy. Countering this threat requires coordinated, long-term, and fact-based international responses, as fragmented or reactive approaches allow disinformation to set the terms of historical and political debate. Source:  Psychological Defence Agency Report Series, P. Oksanen, History as a Battlefield: Russia’s Information War Against Finland 2025 . [online] Published 2025. Available at: https://mpf.se/download/18.6888ebfe19b2bdfbd24ae4/1766154667213/History_as_a_Battlefield-tga.pdf Top Of Page The War in Ukraine The Expansion of Russia’s AI-Driven Disinformation Campaigns NewsGuard has debunked  400 false claims related to the Russia–Ukraine war promoted by Russian actors. The most prominent of these operations is Storm-1516, which Microsoft and NewsGuard have identified as the fastest-growing and most prolific campaign. Rather than relying on sanctioned outlets like RT and Sputnik, the Kremlin is increasingly using unofficial, harder-to-trace networks to spread false narratives. 
Storm-1516 specialises in accusing Ukraine and its allies of corruption and criminal activity, using AI-generated news websites, deepfake videos, and fake social media accounts. These false claims often achieve substantial reach, drawing millions of views across platforms such as X and Telegram. The campaign has rapidly scaled its output since 2023, surpassing not only other influence operations but also RT and Sputnik combined in the number of false claims spread in 2025. Similar operations, such as Matryoshka and the Foundation to Battle Injustice, further amplify disinformation through fake media outlets. A defining feature of this new phase is the use of AI to increase speed, scale, and credibility. Storm-1516 has impersonated trusted news organisations, fabricated official-looking videos, and exploited real corruption investigations to inject false claims about President Volodymyr Zelensky. Source:  NewsGuard Reality Check, E. Maitland & A. Lee, 400 and Counting: A Russian Influence Operation Overtakes Official State Media in Spreading Russia-Ukraine False Claims . [online] Published 22 December 2025. Available at: https://www.newsguardrealitycheck.com/p/a-russian-influence-operation-surpasses Top Of Page Disinformation Campaigns Targeting Ukrainian Refugees An article  published by StopFake outlines a long-standing Russian disinformation strategy that depicts Ukrainians abroad as traitors or threats—an approach with roots in Soviet-era propaganda that intensified after Russia’s full-scale invasion of Ukraine. As millions of Ukrainians were forced to flee, Russian information attacks increasingly shifted blame onto the victims of the war, using fabricated stories to distort public perception. StopFake reports a sharp rise in such narratives since 2022, describing a sustained effort to damage Ukraine’s image and weaken international support. 
Russian disinformation about Ukrainian refugees tends to follow three core narratives: portraying them as criminals, as an economic and social burden, or as people who will be forcibly deported and sent back to fight. These claims commonly rely on invented stories, manipulated photos and videos, and forged websites that impersonate trusted Western media—a tactic associated with the “Doppelganger” operation. Refugees are falsely accused of violence, spreading disease, abusing welfare systems, or receiving preferential treatment, and many of these stories are amplified by far-right actors and fringe outlets to gain political traction. Over time, the emphasis of these narratives has shifted from petty crime stories in 2022 to economic strain and “war fatigue” messaging in 2023, and then to fears of forced mobilisation in 2024–2025. Countries hosting large numbers of Ukrainians, especially Poland and Germany, have been frequent targets, with disinformation shaping public debate and, in some cases, election campaigning. Source:  StopFake, Как Россия ведет войну против украинских беженцев . [online] Published 23 December 2025. Available at: https://www.stopfake.org/ru/kak-rossiya-vedet-vojnu-protiv-ukrainskih-bezhentsev/ Top Of Page False Claims About Microchipped Vaccines in Ukraine NewsGuard’s analysis  described a fabricated “investigation” by the so-called Foundation to Battle Injustice, which claimed that Ukrainian authorities are injecting soldiers with COVID-19 vaccines containing microchips that can remotely paralyse or kill them. Widely circulated by pro-Kremlin social media accounts in December 2025, the allegation asserted that President Volodymyr Zelensky had issued secret orders and cited an unnamed “leak” as evidence. In reality, the Foundation to Battle Injustice is a known front with documented ties to pro-Kremlin influence efforts and a record of spreading disinformation about Ukraine. 
There is no evidence that Ukraine mandates COVID-19 vaccination for soldiers, nor that vaccines can contain microchips. Medical experts and health authorities have repeatedly stated that such technology does not exist, and that vaccine components are far smaller than any microchip. Ukrainian officials have explicitly denied the claim, calling it false. In addition, a member of the press department of the Ukrainian President’s Office told NewsGuard in an email: “This information is fake.” Source:  NewsGuard Reality Check, I. Blachez, Russia Fabricates Report that Ukraine Injects Soldiers with Vaccines Containing Microchips that Can Kill Them for Disobeying Orders . [online] Published 23 December 2025. Available at: https://www.newsguardrealitycheck.com/p/no-ukraine-doesnt-have-a-microchip Top Of Page Coordinated Disinformation Fuels Anti-Ukrainian Sentiment in Poland A DFRLab   report detailed a surge in anti-Ukrainian disinformation in Poland following two triggering events: President Karol Nawrocki’s veto of expanded benefits for Ukrainian refugees and Russian drone incursions into Polish airspace. Both events were exploited by coordinated online networks that pushed misleading narratives portraying Ukrainians as economic burdens and security threats, while downplaying or denying Russia’s role. These information attacks sought to fracture Polish public support for Ukraine and weaken the Polish–Ukrainian alliance at a critical moment. Despite robust economic data showing that most Ukrainians in Poland are employed and contribute far more in taxes than they receive in benefits, false claims depicting refugees as “freeloaders,” “parasites,” or criminals spread widely on social media. Investigations identified coordinated amplification by networks of Facebook pages, fringe groups, clickbait websites, and monetized platforms, often using synchronized posting, misleading page names, and AI-generated videos. 
These narratives were also injected into non-political spaces—such as hobby and lifestyle groups—making them harder to detect and counter. Disinformation intensified further after Russian drones entered Polish airspace, with conspiratorial narratives falsely blaming Ukraine for staging provocations to drag Poland and NATO into the war. These claims circulated across Facebook, TikTok, and the comment sections of major Polish media outlets, often originating from suspicious or inauthentic accounts. One example involved the site udostepnij[.]pl and a network of Facebook pages with innocuous, non-political names that simultaneously shared identical posts about a deportation case, exaggerating and misrepresenting the facts. With hundreds of thousands of followers, these pages systematically amplified similar content, monetized audience attention through ads and YouTube, and redirected users to clusters of right-wing websites. Source:  DFRLab, G. Gigitashvili & S. Gelava, How social media manipulation fuels anti-Ukraine sentiment in Poland . [online] Published 21 December 2025. Available at: https://dfrlab.org/2025/12/21/how-social-media-manipulation Top Of Page China China’s Cognitive Warfare Strategy Seeks Narrative Dominance Through Disinformation The Cyfluence Research Center (CRC) has published a piece by Athena Tong that casts China’s doctrine of cognitive warfare as a tightly integrated form of hostile influence—melding physical actions with sustained narrative manipulation to reshape perceptions of sovereignty and legitimacy across East Asia. Maritime manoeuvres in the South China Sea, the East China Sea, and around Taiwan are embedded in coordinated foreign information manipulation and interference (FIMI) campaigns, using state media, legal instruments, environmental claims, and diplomatic messaging to portray Beijing as restrained, lawful, and environmentally responsible while framing rivals as provocateurs or foreign proxies. 
The piece explains how maritime incidents are systematically distorted to advance strategic narratives. Collisions, water-cannon attacks, and blockades are rapidly reframed through manipulated images, recycled footage, and fabricated environmental scandals—such as claims of ecological harm caused by Philippine forces—to blur accountability and confuse audiences. This narrative engineering is reinforced through lawfare, including the designation of disputed features such as Scarborough Shoal as Chinese “nature reserves” and selective appeals to international law at the United Nations to recast sovereignty disputes as matters of development and stewardship rather than territorial rights. Beyond the South China Sea, Beijing employs historical revisionism, digital disinformation, and psychological operations to erode Japanese and Taiwanese sovereignty narratives. Coordinated social media campaigns amplify divisive local grievances in Okinawa, seed fake grassroots movements, and link disputes over the Senkaku Islands to Taiwan’s status, while misrepresenting UN Resolution 2758 to legitimise claims over Taiwan. Collectively, these tactics constitute a comprehensive political-warfare playbook that combines media warfare, lawfare, and psychological pressure to normalise grey-zone aggression, weaken the adversary's resolve, and establish narrative dominance without direct military confrontation. Source:  Cyfluence Research, A. Tong, Cognitive Warfare Masterclass: China’s Doctrine for Strategic Narrative Superiority . [online] Published 29 December 2025. 
Available at: https://www.cyfluence-research.org/post/cognitive-warfare-masterclass-china-s-doctrine-for-strategic-narrative-superiority Top Of Page China Index Events Highlight PRC Influence As published  by DoubleThink  Lab, the China Index Regional Event Fund, launched in 2025 , has supported local events worldwide to foster discussion about Chinese influence and disinformation, helping communities better understand the PRC’s global strategies. Events ranged from policy roundtables and workshops for journalists to university sessions for students. In the UK, discussions focused on how the Chinese Communist Party's influence affects higher education, including concerns about surveillance and self-censorship among Chinese students. Meanwhile, Italian forums highlighted the PRC’s impact on politics, media, and democracy in Europe. In South and Southeast Asia, workshops emphasised the role of online information operations and PRC-backed initiatives in shaping public opinion and reinforcing control. At universities in India, Thailand, and Pakistan, participants learned to detect patterns of influence, assess risks linked to initiatives such as the China–Pakistan Economic Corridor, and understand how surveillance technology and economic dependencies can be used to shape behaviour. Discussions highlighted how disinformation and selective narratives are deployed to manipulate populations. Across all events, the China Index facilitated evidence-based discussions on disinformation and broader PRC influence, illustrating how academic research can inform public debate, policymaking, and media coverage. The initiative aimed to strengthen regional understanding of authoritarian digital strategies, raise awareness of state-driven propaganda, and provide practical tools to detect and mitigate the effects of information operations globally. Source:  Medium (Doublethink Lab), From Global Insight to Local Action: The 2025 China Index Regional Event Fund Report . 
[online] Published 23 December 2025. Available at: https://medium.com/doublethinklab/from-global-insight-to-local-action-the-2025-china-index-regional-event-fund-report-f7f1bacd6800 [AI Related Articles] AI and LLM Disinformation Risks Concerns about digital misinformation have flared up again following Meta’s decision to wind down professional fact-checking, a development covered by TechXplore (for further information, see the W02 January 2025 Cyfluence Report here). Still, the larger challenge lies in the subtler influence of AI large language models (LLMs) on public perception. These models increasingly generate news summaries, headlines, and content across platforms before traditional moderation can respond, shaping not only what information people see but also how it is framed. Even when the underlying facts are accurate, LLMs can foreground certain viewpoints while downplaying others, creating communication bias that quietly steers opinions and amplifies particular narratives at the expense of others. This bias is distinct from outright disinformation, but it can produce similar effects on public understanding. Research suggests that LLMs exhibit persona-based steerability, aligning outputs with perceived user expectations. For example, the same query about legislation may yield responses tailored to an environmental activist versus a business owner—emphasising different angles while remaining factually accurate. When a small number of dominant companies control most widely used AI models, even minor biases can scale rapidly, shaping public discourse without users fully realizing the extent of that influence. Current regulatory frameworks, such as the EU AI Act and the Digital Services Act, prioritise transparency, pre-launch audits, and post-deployment oversight. 
However, they remain limited in their ability to address subtle communication bias. Effective mitigation, therefore, requires more than regulation: fostering competition, transparency, and meaningful user participation is essential to ensuring that AI systems reflect diverse perspectives. Source:  TechXplore, A. Kuenzler, People are getting their news from AI—and it's altering their views . [online] Published 22 December 2025. Available at: https://techxplore.com/news/2025-12-people-news-ai-views.html Top Of Page [Appendix - Frameworks to Counter Disinformation] EU Sanctions Target Diana Panchenko EUvsDisinfo notes that the European Union has introduced a new round of sanctions aimed at individuals and entities linked to coordinated Russian disinformation and cyber operations (for further information, see the W51 December 2025 Cyfluence Report here). One of the listed individuals is Diana Panchenko, a Ukrainian-born journalist whom EUvsDisinfo describes as promoting pro-Kremlin, anti-Ukrainian, and anti-NATO narratives while presenting this material as “investigative journalism” in ways intended to mislead audiences and lend credibility to false claims. Her output blends the aesthetics of professional news production with content filmed in Russian-occupied territories, amplifying narratives aligned with Russia’s war messaging. She uses platforms such as Telegram and X to reach large audiences, including English-speaking users. She relies on techniques often associated with influence operations: clickbait framing, coordinated amplification, and the systematic promotion of pro-Russian accounts, rather than conventional independent reporting. Ukrainian authorities and disinformation researchers have warned that her content contributes to justifications for Russian abuses and undermines Ukraine’s security and stability. Source:  EUvsDisinfo, From “journalism” to FIMI: EU sanctions Diana Panchenko . [online] Published 22 December 2025. 
Available at: https://euvsdisinfo.eu/from-journalism-to-fimi-eu-sanctions-diana-panchenko/ Top Of Page U.S. Action Targets Foreign Disinformation and Censorship Agents The U.S. State Department has imposed restrictions on five individuals identified as leading organised efforts to pressure American platforms to censor, demonetise, or suppress viewpoints they oppose. These actors—including radical activists and weaponised NGOs—have advanced foreign-backed censorship campaigns targeting American speakers and companies. By imposing these restrictions, the State Department aims to prevent foreign disinformation networks from influencing domestic discourse and manipulating online platforms. In some cases, the Department of Homeland Security can initiate removal proceedings under U.S. law against individuals engaged in such activities, underscoring the seriousness with which the U.S. treats extraterritorial disinformation and censorship campaigns. By curbing the reach of these actors, the U.S. seeks to safeguard free expression, protect American companies from coercion, and mitigate the impact of disinformation campaigns. Source:  U.S. Department of State, Announcement of Actions to Combat the Global Censorship-Industrial Complex  (Press Statement by Marco Rubio, Secretary of State). [online] Published 23 December 2025. Available at: https://www.state.gov/releases/office-of-the-spokesperson/2025/12/announcement-of-actions-to-combat-the-global-censorship-industrial-complex/ Top Of Page US Judge Blocks Deportation of Anti-Disinformation Advocate ABC News reported that a U.S. judge has temporarily prevented the Trump administration from detaining Imran Ahmed, a British anti-disinformation campaigner and chief executive of the Center for Countering Digital Hate, over visa restrictions aimed at five Europeans accused of targeting U.S. tech companies or censoring speech. Ahmed, a permanent U.S. 
resident, fears deportation that would separate him from his U.S.-citizen family while he continues his work combating online hate and misinformation. The visa bans sparked criticism from European governments, which argued that Ahmed’s work enhances online safety by exposing disinformation, hate speech, and child-exploitation content—highlighting the tension between foreign-policy concerns and anti-disinformation efforts. Ahmed filed a lawsuit alleging violations of his free-speech and due-process rights, prompting U.S. District Judge Vernon Broderick to issue a temporary restraining order halting any detention or removal until the case can be heard. Source:  ABC News, US judge blocks deportation of British anti-disinformation activist . [online] Published 26 December 2025. Available at: https://www.abc.net.au/news/2025-12-26/us-barred-from-deporting-campaigner-imran-ahmed/106179794 Top Of Page The WikiSignals Project Strengthens Wikipedia Against Disinformation MisinfoCon’s coverage of WikiCredCon 2025 highlights a core reality for Wikipedia: credibility depends not only on accurate articles, but on the trustworthiness of the sources behind them. WikiSignals brings together organisations, including Veri.FYI, Factiverse, SimPPL, AfroCrowd, Iffy.news, and the Internet Archive Reference Explorer. The WikiSignals project is developing tools to help editors assess the reliability of references by compiling data from press associations, fact-checking ratings, domain registration records, media indexes, and lists of predatory or hijacked journals. This resource—now covering 70,000 websites—provides editors with a searchable credibility score for each source, helping prevent disinformation from entering Wikipedia. WikiSignals also offers a practical tool that lets editors evaluate a URL’s credibility and format it as a Wikipedia citation. 
The project integrates machine-harvestable data to estimate reliability programmatically, supporting both manual and automated efforts to counter misleading or biased sources. Input from active Wikipedia editors has guided the development of additional resources, including a compendium of favourite editing tools and an expanded index of predatory journals. Source:  MisinfoCon, B. Golding, WikiSignals: Strengthening the Credibility of Wikipedia Citations . [online] Published 24 October 2025. Available at: https://misinfocon.com/wikisignals-strengthening-the-credibility-of-wikipedia-citations-8531d2673099 Top Of Page EU Sanctions Act against Russian Aggression The European External Action Service (EEAS) announced it has renewed its restrictive measures against Russia for six more months, until 31 July 2026, in response to Moscow’s ongoing destabilization of Ukraine. These sanctions, first imposed in 2014 and significantly expanded after Russia’s 2022 invasion, cover a wide range of sectors, including trade, finance, energy, technology, and luxury goods. The measures include suspending the broadcasting activities and licenses of Kremlin-backed disinformation outlets within the EU, directly targeting coordinated propaganda campaigns aimed at influencing European public opinion. The EU’s sanctions also include asset freezes, travel bans, and restrictions on economic activity in Crimea, Sevastopol, and non-government-controlled areas of Ukraine. Source:  EEAS, Press and information team of the Delegation to Ukraine, Russia’s war of aggression against Ukraine: Council extends economic sanctions for a further 6 months . [online] Published 22 December 2025. 
Available at: https://www.eeas.europa.eu/delegations/ukraine/russia%E2%80%99s-war-aggression-against-ukraine-council-extends-economic-sanctions-further-6-months_en Top Of Page Uzbekistan Strengthens Youth Fact-Checking to Combat Disinformation A report by the EEAS revealed that on December 25, 2025, Uzbekistan inaugurated a Fact-Checking Club at the Youth Affairs Agency in Tashkent under the EU-funded “Digital Detectives” project, in collaboration with the “Qizlar Ovozi” (Voices of Girls) initiative. The event emphasized the importance of promoting media literacy, critical thinking, and responsible information consumption among young people, while linking fact-checking to gender equality and inclusive public discourse. Discussions highlighted the societal role of fact-checkers, the dangers of misinformation, and the negative impact of gender stereotypes. The project has trained 10 new fact-checking trainers and expanded a network of freelance contributors, delivering workshops and regional sessions that reached over 220 participants from schools, universities, and youth centers. Between October and December 2025, participants produced more than 50 fact-checking articles, approximately 20 videos, and multiple awareness campaigns addressing disinformation, cyberbullying, gender equality, and public health. Fact-checking clubs are now being launched in multiple institutions, including the University of Journalism and Mass Communications, and in Nukus, Karakalpakstan. Source:  EEAS, Press and information team of the Delegation to Uzbekistan, EU-Funded Project “Digital Detectives” Launches Fact-Checking Club at the Youth Affairs Agency . [online] Published 25 December 2025. 
Available at: https://www.eeas.europa.eu/delegations/uzbekistan/eu-funded-project-%E2%80%9Cdigital-detectives%E2%80%9D-launches-fact-checking-club-youth-affairs-agency_en Top Of Page EU - Western Balkans Conference Strengthens Fight Against Disinformation According to the EEAS publication, the EU–Western Balkans Media Literacy Conference in Skopje on 17 November convened representatives from across the region, EU Member States, and international partners to address the growing threat of disinformation. The conference emphasized media literacy as a cornerstone of informed citizenship and democracy. Panels and workshops focused on key challenges in the regional information landscape, including social media’s role in spreading and countering manipulated content, gendered disinformation, digital hate, and the specific vulnerabilities of younger audiences. Sessions also explored how EU policy frameworks, such as the Digital Services Act, can support human-rights-based, whole-of-society approaches to combating misinformation and influence operations. Participants exchanged practical strategies for detecting, mitigating, and preventing disinformation while enhancing institutional capacity. Source: European External Action Service (EEAS), Press and information team of the EU Delegation to the Republic of North Macedonia, EU–Western Balkans Media Literacy Conference 2025 strengthens regional cooperation to counter disinformation. [online] Published 25 December 2025. Available at: https://www.eeas.europa.eu/delegations/north-macedonia/eu%E2%80%93western-balkans-media-literacy-conference-2025-strengthens-regional-cooperation-counter_en Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity.
However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 29th December 2025 - 04th January 2026

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] TikTok EU Probes TikTok Over AI-Driven “Polexit” Disinformation Campaign [State Actors] Russia RT Falsely Blames Ukraine for Mariupol Theatre Destruction RT Pushes Evidence-Free Narrative Around Alleged Drone Attack on Putin's Residence How Russia Sought to Win the Narrative in 2025 China Taiwan Warns of Coordinated Chinese Disinformation During Military Exercises [AI Related Articles] The US Invasion of Venezuela Exposed Limitations in AI Chatbot Reporting [Appendix - Frameworks to Counter Disinformation] Social Media Flooded with Disinformation After Maduro’s Capture South Korea’s Anti-Disinformation Law Sparks Free Speech and Diplomatic Concerns [CRC Glossary] [Report Highlights] Heise Online reports that the European Commission has launched an investigation into TikTok over the spread of AI-generated videos promoting Poland’s exit ("Polexit") from the European Union, amid concerns that the platform may have breached its obligations under the Digital Services Act. Throughout 2025, EUvsDisinfo identified a consistent pattern in Russian FIMI, marked by the deliberate construction of an image of strength despite mounting military, economic, and political constraints. Biometric Update covers legal developments in South Korea.
The country has enacted a law imposing heavy penalties for spreading false or AI-manipulated information, prompting domestic concerns over censorship and international criticism from the United States over its broader impact on media freedom and digital regulation. Wired reports that, following the US announcement that Venezuelan President Nicolás Maduro and his wife, Cilia Flores, had been captured, disinformation about the operation flooded social media. AI-generated images and videos, repurposed old footage, and doctored posts quickly spread across TikTok, Instagram, and X. [Social Media Platforms] TikTok EU Probes TikTok Over AI-Driven “Polexit” Disinformation Campaign Heise Online reports that the European Commission has launched an investigation into TikTok over the spread of AI-generated videos promoting Poland’s exit (“Polexit”) from the European Union, amid concerns that the platform may have breached its obligations under the Digital Services Act. The probe was triggered after Poland’s Deputy Minister for Digitalization formally alerted Brussels to a coordinated wave of content that mimicked the style of the UK’s Brexit campaign and targeted young voters. The Commission confirmed it is working closely with Polish authorities to assess whether TikTok adequately identified and mitigated the risks posed by this material. The videos, which have gained significant traction on TikTok, often featured young women in patriotic clothing delivering far-right, anti-EU messages. Polish officials describe the effort as a deliberate disinformation campaign, while investigations suggest that the content exhibits characteristics of Russian influence, as the texts sometimes contain distinctly Russian syntax. While TikTok has stated that it removed some content for violating its own rules, it has not provided detailed responses to the broader allegations.
The case follows earlier scrutiny of the platform, including a formal DSA procedure opened in late 2024 over alleged failures to curb manipulation during Romania’s presidential election. The investigation comes as the EU braces for a dense election calendar in 2026, raising fears that AI-driven disinformation could distort democratic processes across multiple member states. Under the DSA, very large platforms face fines of up to 6% of their global turnover if they fail to address systemic risks.   Source: Heise Online, S. Krempl. AI disinformation on TikTok: EU Commission examines "Polexit" campaign . [online] Published 3 January 2026. Available at: https://www.heise.de/en/news/AI-disinformation-on-TikTok-EU-Commission-examines-Polexit-campaign-11127844.html Top Of Page [State Actors] Russia RT Falsely Blames Ukraine for Mariupol Theatre Destruction A report by DisinfoWatch  notes that the Russian state outlet RT has claimed Mariupol’s Drama Theatre has reopened nearly three years after what it described as a “Ukrainian Armed Forces’ terrorist attack,” a framing that directly contradicts extensive independent investigations. Multiple credible inquiries by Amnesty International, the OSCE, Human Rights Watch, and the Associated Press concluded that the theater was destroyed in March 2022 by a Russian airstrike, despite clear indications that it was being used as a civilian shelter. Large signs reading “CHILDREN” were painted outside the building to prevent an attack, yet the strike killed hundreds of civilians, with AP estimates placing the death toll at around 600. The theater’s reopening under Russian occupation authorities has been widely described as a propaganda exercise aimed at normalizing Moscow’s control of Mariupol and erasing evidence of a likely war crime. Source:  DisinfoWatch, RT claims Mariupol Theatre reopens “after the Ukrainian Armed Forces’ terrorist attack”. [online] Published 2025. 
Available at: https://disinfowatch.org/disinfo/rt-claims-mariupol-theatre-reopens-after-the-ukrainian-armed-forces-terrorist-attack/ Top Of Page RT Pushes Evidence-Free Narrative Around Alleged Drone Attack on Putin's Residence DisinfoWatch reported that RT presented as fact an unverified claim that Ukraine carried out a large-scale drone attack on a residence linked to Vladimir Putin, despite offering no evidence that the incident occurred. The coverage asserted that Ukraine launched the "terror attack" using 91 long-range drones and then amplified supposed foreign reactions and Kremlin warnings of a tougher, non-diplomatic response. Independent reporting directly contradicts these claims. Multiple credible international media outlets note that Russia has provided no visual, local, or official corroboration consistent with past verified Ukrainian strikes, and the Ukrainian government has categorically denied the attack. Analysis by the Institute for the Study of War highlighted inconsistencies between Kremlin statements and Russian Defense Ministry figures, as well as the total absence of open-source evidence such as footage, damage reports, or local authority accounts. Source: DisinfoWatch, Lavrov and Russian state propagandist allege Ukrainian attack on Putin’s residence without evidence. [online] Published 29 December 2025. Available at: https://disinfowatch.org/disinfo/lavrov-and-russian-state-propagandist-allege-ukrainian-attack-on-putins-residence-without-evidence/ Top Of Page How Russia Sought to Win the Narrative in 2025 Throughout 2025, EUvsDisinfo identified a consistent pattern in Russian FIMI, marked by the deliberate construction of an image of strength despite mounting military, economic, and political constraints. Kremlin-aligned outlets repeatedly exaggerated battlefield developments in Ukraine, portraying limited or ambiguous advances as decisive victories and promoting a narrative of Russian invincibility.
This messaging drew heavily on historical revisionism and wartime glorification, amplified through “mapaganda” and symbolic moments such as Victory Day, to mask Russia’s failure to achieve key strategic objectives, including air superiority or sustained territorial control. In reality, Russian forces have suffered significant losses, failed to hold major regional capitals in the long term, and relied on external support, even as propaganda continued to present the war as steadily progressing in Moscow’s favor. Beyond Ukraine, 2025 also highlighted the growing scale and sophistication of Russian FIMI operations. EUvsDisinfo documented coordinated interference efforts in Moldova, Ukraine, and Georgia, as well as influence activities extending to Africa. The year saw the maturation of a disinformation-for-hire industry, increasingly professionalized and amplified by AI and large language model grooming. In response, the EU advanced new analytical tools, such as the “iceberg” and “galaxy” frameworks, to expose both visible narratives and hidden infrastructures behind disinformation, while launching the Democracy Shield to strengthen resilience against manipulation. Crucially, the Kremlin’s inflated claims of success served not only to bolster domestic morale but also to advance its strategic objectives ahead of potential negotiations. Source:  EUvsDisinfo, 2025 in review: winning the narrative. [online] Published 29 December 2025. Available at: https://euvsdisinfo.eu/2025-in-review-winning-the-narrative/ Top Of Page China Taiwan Warns of Coordinated Chinese Disinformation During Military Exercises In a Focus Taiwan article , Taiwan’s Ministry of National Defense warned of a coordinated wave of Chinese disinformation accompanying Beijing’s large-scale military exercises around the island, aimed at undermining public morale and confidence in Taiwan’s institutions and international partners. 
Officials identified 46 false or misleading narratives, many of which targeted government policies, the armed forces, and trust in the United States as part of broader psychological pressure during the drills. The disinformation included fabricated claims by Chinese state media that ports had been blockaded and that PLA forces had advanced close to Taiwan’s shores, as well as misleading video content suggesting that Chinese aircraft or drones had flown near Taipei 101. Taiwanese authorities rejected these claims and stressed that such information operations have become a routine component of PLA exercises, prompting ongoing monitoring, public corrections, and calls from political leaders for societal unity to counter disinformation. Source: Focus Taiwan, S. Lin. Taiwan's military highlights Chinese disinformation during large-scale drill. [online] Published 30 December 2025. Available at: https://focustaiwan.tw/politics/202512300030 Top Of Page [AI Related Articles] The US Invasion of Venezuela Exposed Limitations in AI Chatbot Reporting A Wired report highlighted how leading AI chatbots diverged sharply when asked about an alleged U.S. invasion of Venezuela and the capture of President Nicolás Maduro—an event publicly claimed by U.S. officials but denied by some AI systems. When tested, Google’s Gemini and Anthropic’s Claude provided timely summaries of the reported events, citing multiple news sources and acknowledging competing narratives. By contrast, ChatGPT and Perplexity flatly rejected the premise, insisting that no such invasion or capture had occurred and attributing the claims to misinformation or sensational headlines. The discrepancy underscores a core limitation of large language models: their reliance on training cutoffs and, in some cases, the absence of real-time search.
ChatGPT’s response reflected outdated knowledge and expressed high confidence in an incorrect conclusion, while Perplexity later acknowledged routing the query to a lower-tier model after flagging it as “likely fraud.” Experts note that models with live web access are better equipped to handle fast-moving events; however, accuracy still depends on data sourcing and system design. Source: Wired, B. Barrett. Business. The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees. [online] Published 3 January 2026. Available at: https://www.wired.com/story/us-invaded-venezuela-and-captured-nicolas-maduro-chatgpt-disagrees/ Top Of Page [General Reports] Social Media Flooded with Disinformation After Maduro’s Capture Wired reported that, following the US announcement that Venezuelan President Nicolás Maduro and his wife, Cilia Flores, had been captured, disinformation about the operation flooded social media. AI-generated images and videos, repurposed old footage, and doctored posts quickly spread across TikTok, Instagram, and X, shaping public perception of the operation faster than verified reporting could keep up. Source: Wired, B. Barrett. Business. The US Invaded Venezuela and Captured Nicolás Maduro. ChatGPT Disagrees.
[online] Published 3 January 2026. Available at: https://www.wired.com/story/us-invaded-venezuela-and-captured-nicolas-maduro-chatgpt-disagrees/ Top Of Page South Korea’s Anti-Disinformation Law Sparks Free Speech and Diplomatic Concerns As published  by Biometric Update , South Korea has passed a new law aimed at curbing false and fabricated information, including deepfakes and synthetic media, reflecting growing concern over the impact of disinformation on public order and democratic accountability. The legislation allows courts to impose punitive damages of up to five times proven losses on traditional media outlets and large online channels, including YouTube creators, found to have disseminated illegal or false content that causes measurable harm. It also empowers the country’s media regulator to fine repeat offenders up to 1 billion won if courts confirm that they have repeatedly distributed manipulated information. The law has triggered domestic and international controversy, with journalists’ unions and civil society groups warning that vague definitions of “false” or “fabricated” information could encourage self-censorship and enable abuse. Critics have urged the adoption of clearer standards and have called on the president to veto the bill. The debate is shaped by South Korea’s recent experience with disinformation, including the circulation of misleading YouTube content that preceded the 2024 declaration of martial law by then-President Yoon Suk Yeol. The legislation has also drawn criticism from the United States, which views such digital regulations as a threat to free expression and to U.S.-based technology companies. U.S. officials have expressed concern that the revised network law extends beyond addressing deepfakes and could jeopardize technological cooperation, echoing similar disputes over digital regulation in Europe and elsewhere. 
Despite these objections, the bill passed the National Assembly by a wide margin, highlighting the growing global tension between countering disinformation and preserving media freedom in an era of AI-driven content manipulation. Source:  Biometric Update, J. McConvey. South Korea law targeting deepfakes, disinformation gets frosty reception from US. [online] Published 2 January 2026. Available at: https://www.biometricupdate.com/202601/south-korea-law-targeting-deepfakes-disinformation-gets-frosty-reception-from-us Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC   website   Top Of Page

  • Weekly: Cyber-based hostile influence campaigns 05th - 11th January 2026

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect.   During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] Disinformation and Digital Distortion of Geopolitics Through Social Media X Grok Spreads AI-Generated Sexualized Images [State Actors] Russia   Russian Influence Networks Flood U.S. Discourse After Maduro’s Capture Disinformation Campaign Targeting Estonian History on Wikipedia Report Warns of Growing Russia–China Coordination in Technology-Driven Hybrid and Disinformation Operations The War in Ukraine Kremlin Disinformation Framing Ukraine and its Supporters as “Neo-Nazi” China China’s AI and Disinformation Campaign Following Maduro’s Capture [AI Related Articles] AI Turbocharges Hoaxes About the Minneapolis ICE Shooting Deepfakes as Disinformation Threats to Digital Identity and Trust AI-Generated Disinformation on TikTok Targeting Ukraine AI Is Becoming a Force Multiplier for Disinformation and Democratic Manipulation [General Reports] Disinformation Surge After U.S. Raid in Venezuela Climate Disinformation Identified as a Long-Term Influence Threat in Europe Social Media Amplifies Disinformation and Polarization Around Venezuela Crisis AI-Driven Disinformation Exploits Maduro Arrest to Shape Global Narratives [Appendix - Frameworks to Counter Disinformation] U.S. 
Withdrawal from Cyber and Information Security Forums Raises Disinformation Concerns [CRC Glossary] [ Report Highlights] A Jamestown Foundation publication shows how Russia and China are increasingly fusing cyber capabilities, AI, and information warfare into coordinated hybrid operations aimed at eroding Western resilience through disinformation, infrastructure disruption, and strategic ambiguity. Reporting by The Record indicates that the European Commission is weighing further action against platform X after its AI tool Grok was used to generate sexualized images of a minor. A Carnegie Endowment for International Peace report finds that AI is accelerating disinformation and hostile influence by enabling the scalable production of synthetic content, deepfakes, and automated narratives that undermine elections, public trust, and democratic institutions. The HEAT report describes climate-related disinformation in Europe as persistent and adaptive, driven by conspiratorial and anti-elite narratives that undermine trust in science, institutions, and environmental policy, and amplified by fringe actors and cross-platform dynamics that push these claims into the mainstream. WIRED argues that the U.S. intervention in Venezuela illustrates how social media accelerates disinformation and polarization by prioritizing emotional, simplified, and propagandistic content over verified reporting, reshaping public understanding, and weakening citizens’ ability to distinguish truth from manipulation. A CyberScoop article warns that the Trump administration’s withdrawal from key international cybersecurity and counter-disinformation bodies risks weakening global coordination against hybrid threats and opening space for expanded authoritarian influence. 
[Social Media Platforms] Disinformation and Digital Distortion of Geopolitics Through Social Media A Wired   article examines how social media has transformed geopolitics into short, viral videos that simplify and distort complex events. The recent US attack on Venezuela illustrated that while serious reporting detailed casualties, the capture of President Maduro, and broader geopolitical consequences, social platforms such as TikTok and Instagram circulated fragmented, emotionally charged narratives. Users encounter simplified, highly partisan content that frames reality in dramatic, often misleading ways, creating an illusion of immediate understanding while omitting nuance and context. This environment allows disinformation—whether intentional or algorithmically amplified—to spread rapidly, shaping public perception before facts can be verified. Experts have noted that digital disintermediation has worsened the problem. Each user effectively becomes a media outlet, selecting content that confirms preexisting beliefs. Emotional reactions, memes, and viral sketches often dominate the discourse, drowning out fact-based reporting. The result is a polarized, reactive public that interprets events through ideological lenses rather than verified evidence. The Venezuelan case highlighted how social media algorithms reward engagement over accuracy, creating echo chambers that reinforce biases and erode critical judgment. Source:  Wired, A. Lagos. Culture. The Danger of Reducing America’s Venezuela Invasion to a 60-Second Video . [online] Published 11 January 2026. Available at: https://www.wired.com/story/the-danger-of-reducing-a-americas-venezuela-invasion-to-a-60-second-video/ Top Of Page X Grok Spreads AI-Generated Sexualized Images The European Commission is weighing further action against social media platform X, The Record   reports , after its AI tool Grok was used to generate sexualized images of a minor. 
The Commission described Grok’s output as illegal and unacceptable. EU officials noted that Grok has previously been linked to the spread of other unlawful content, including material undermining recognition of the Holocaust, which is criminalized in several European countries. Under the Digital Services Act (DSA), platforms are required to protect users from illegal content and systemic risks. Following these developments, X has come under urgent review by the UK media regulator Ofcom, while Australia’s online safety regulator has also opened an investigation, as reported by   CyberNews  citing The Guardian . Governments have warned X to comply with digital safety laws or face serious consequences, including potential blocking in the UK. UK Technology Secretary Liz Kendall said she would support blocking the platform if it refuses to comply with the Online Safety Act, and The Record   noted that three U.S. senators have asked Google and Apple to remove the Grok and X apps from their app stores. Politico   also reported  that users targeted political figures such as Sweden’s deputy prime minister, Ebba Busch, whose manipulated image was later withheld in the European Union under local laws. X has responded by framing EU enforcement actions as “political censorship” and an attack on free speech. Nevertheless, it has limited Grok’s image-generation feature to paying users, presenting this as a temporary safeguard following recent misuse. This step is widely viewed as damage control rather than a solution to the systemic risks posed by AI-driven disinformation and manipulation. As CyberNews   reported , experts and regulators argue that restricting access fails to address the deeper problem: Grok’s lack of effective safeguards against abuse and deception. Sources : The Record (Recorded Future), A. Martin. Government. EU looking ‘very seriously’ at taking action against X over Grok . [online] Published 5 January 2026. 
Available at: https://therecord.media/eu-grok-regulation-deepfake CyberNews , A. Zhadan. UK considers blocking X over Grok’s AI-generated sexualized images . [online] Published 10 January 2026. Available at: https://cybernews.com/ai-news/uk-considers-blocking-x-over-groks-ai-generated-sexualized-images/ The Record  (Recorded Future), S. Smalley. Lawmakers call on app stores to remove Grok, X over sexualized deepfakes . [online] Published 9 January 2026. Available at: https://therecord.media/lawmakers-call-on-app-stores-to-remove-grok-x Politico , M. Pollet & P. Haeck. Elon Musk’s Grok chatbot limits access to image generator that put women in bikinis . [online] Published 9 January 2026. Available at: https://www.politico.eu/article/elon-musk-x-grok-chatbot-limits-access-image-generator-women-bikini-deepfakes/ CyberNews , N. Ancell. Elon Musk limits access to Grok as experts criticize his childlike behaviour over deepfake nudes . [online] Published 9 January 2026. Available at: https://cybernews.com/ai-news/musk-limits-access-grok-deepfake-nudes/ Top Of Page [State Actors] Russia Russian Influence Networks Flood U.S. Discourse After Maduro’s Capture DFR Lab   details how , following the capture of Venezuelan President Nicolás Maduro by U.S. forces, a broad Russia-linked influence infrastructure rapidly mobilized to target U.S. information spaces. Unable to materially protect a key Latin American ally, Moscow pivoted to information warfare, flooding social media and fringe news ecosystems with contradictory narratives, conspiracy theories, pseudo–fact checks, and AI-generated or manipulated visuals. Rather than promoting a single coherent story, Russian-linked actors focused on saturating the discourse to confuse audiences and reframe the episode as evidence of U.S. instability and unreliability as an ally, particularly for Europe and NATO. 
A central role was played by Storm-1516, a Russia-linked operation that leveraged influencer networks, fabricated media sites, and cross-platform amplification to push mutually conflicting claims. These ranged from false-flag allegations and antisemitic conspiracies to recycled or AI-generated footage portraying Venezuelan reactions in misleading ways. The contradictions were often deliberate, with some accounts promoting AI-generated videos while others “debunked” the same content. Parallel efforts came from the Pravda Network, which translated and laundered Kremlin-aligned material into U.S.-facing websites and pro-Trump or QAnon-adjacent Telegram ecosystems, ensuring steady reinforcement of Russian state narratives for English-speaking audiences. In Russian-language spaces, state-aligned actors and military bloggers emphasized geopolitical consequences, portraying Maduro as a legitimate leader and framing his capture as an attack on the rules-based international order. Commentary amplified figures such as Viktor Bout to cast U.S. courts as tools of political repression and to deflect attention from Russia’s limited response. Source: DFR Lab, E. Buziashvili. How Russia’s influence machine mobilized immediately after Maduro’s capture. [online] Published 10 January 2026. Available at: https://dfrlab.org/2026/01/10/how-russias-influence-machine-mobilized-immediately-after-maduros-capture/ Top Of Page Disinformation Campaign Targeting Estonian History on Wikipedia StopFake reports that Estonian authorities and Wikimedia Estonia are meeting to address growing concerns about historical disinformation on Wikipedia, particularly in the English-language edition.
Hundreds of biographies and history articles have been altered to falsely portray Estonia and other Baltic states as integral parts of the Soviet Union or Russia, downplaying their occupation, independence, and defensive struggles, such as Estonia’s War of Independence. These edits misrepresent historical facts and shape misleading narratives for international readers. Wikimedia Estonia believes the changes are largely driven by a single Russian-speaking editor promoting pro-Kremlin interpretations of history. While most questionable edits are quickly detected and reversed, officials warn that such coordinated manipulation reflects long-standing information-warfare tactics against Estonia dating back to 2007. The goal, they say, is not accuracy but narrative distortion—rewriting history to legitimize Russian geopolitical claims. Source: StopFake, V. Troyan (for IMI). Estonia Records Attempts to Distort the Country’s History on Wikipedia [В Эстонии фиксируют попытки искажения истории страны в Википедии]. [online] Published 9 January 2026. Available at: https://www.stopfake.org/ru/v-estonii-fiksiruyut-popytki-iskazheniya-istorii-strany-v-vikipedii/ Top Of Page Report Warns of Growing Russia–China Coordination in Technology-Driven Hybrid and Disinformation Operations A Jamestown Foundation publication outlines how Russia and the People’s Republic of China (PRC) are increasingly integrating technology into coordinated hybrid operations, combining cyber activity, information warfare, and emerging military technologies to pressure Western states and U.S. allies. It emphasizes that digital influence campaigns, cyberattacks, and electromagnetic disruption are now integral to this hybrid approach, alongside conventional military power. These activities rely on ambiguity and deniability, enabling both actors to undermine adversaries while avoiding clear escalation thresholds. A central finding is the growing role of artificial intelligence, cyber capabilities, and information dominance in shaping the operational environment.
Russia has accelerated AI integration following lessons learned in Ukraine, while the PRC is adapting AI systems—often built on Western technologies—to support “intelligentized warfare” that prioritizes algorithms, data control, and narrative influence. Cyber and information operations are portrayed as tools to exploit societal and institutional vulnerabilities, particularly through disinformation, cyber sabotage, and pressure on critical infrastructure, including NATO frontline states. The article also highlights targeted hybrid tactics against infrastructure and information systems, including cyberattacks, GPS spoofing, and suspected sabotage of undersea cables in the Baltic, the Arctic, and near Taiwan, raising concerns about potential coordination between Russia and the PRC. These actions illustrate how technology-enabled hybrid operations blend physical disruption with information effects to create insecurity, weaken trust, and strain Western resilience. Overall, the report frames these activities as part of a long-term strategy to reshape the strategic environment through sustained technological, cyber, and informational pressure rather than overt conflict. Source: The Jamestown Foundation. Strategic Snapshot: Russia–PRC Technology and Hybrid Operations. [online] Published 11 January 2026. Available at: https://jamestown.org/strategic-snapshot-russia-prc-technology-and-hybrid-operations/ Top Of Page War in Ukraine Kremlin Disinformation Framing Ukraine and its Supporters as “Neo-Nazi” DisinfoWatch reported that Russian state outlet RT is using a real event to spread a misleading and recycled disinformation narrative. While Ukrainian President Volodymyr Zelenskyy did appoint Canada’s Chrystia Freeland as a non-staff economic adviser, RT distorted this fact by reviving the long-standing Kremlin trope equating Ukraine with Nazism. The outlet relied on comments by Russian official Maria Zakharova and used insinuation rather than evidence to push this claim.
RT alleged that Freeland’s appointment proves Ukraine is a “haven for neo-Nazis,” pointing to the historical fact that her grandfather edited a newspaper under Nazi occupation during World War II. A documented family history was presented as misleading proof of Freeland’s beliefs or of the present-day Ukrainian government’s ideology, while no evidence was offered to connect this historical detail to current Ukrainian policy or leadership. In a separate article, DisinfoWatch added that Russian state media TASS and the Russian Foreign Ministry presented a “counter-sanctions” list barring 28 Canadian citizens from entering Russia, falsely accusing them of promoting neo-Nazi ideology and framing even Western support for Ukraine as Nazism. In reality, the list was not evidence-based at all. Since 2022, Moscow has repeatedly issued symbolic entry bans against Canadians in response to Ottawa’s sanctions, routinely using “neo-Nazi” rhetoric without substantiation. Those named include respected academics, civil society activists, journalists, and NGO leaders, many of whom are linked to Ukrainian studies or human rights work. There is no credible evidence that any of them supports neo-Nazism. Sources: DisinfoWatch. RT uses Freeland appointment to launder “Nazi Ukraine” trope. [online] Published 5 January 2026. Available at: https://disinfowatch.org/disinfo/rt-uses-freeland-appointment-to-launder-nazi-ukraine-trope/ DisinfoWatch. Russia bans 28 Canadians, falsely branding them as “neo-Nazis”. [online] Published 5 January 2026.
Available at: https://disinfowatch.org/disinfo/russia-bans-28-canadians-falsely-branding-them-as-neo-nazis/ Top Of Page China China’s AI and Disinformation Campaign Following Maduro’s Capture DFR Lab   described how Chinese state-linked social media accounts deployed a distinctive strategic response to the US capture of Venezuelan President Nicolás Maduro, relying heavily on AI-generated memes, parody videos, and hashtag campaigns to undermine US credibility. Chinese accounts adopted an irreverent, digitally native style—mirroring the aesthetic often used by U.S. President Donald Trump—to criticize U.S. foreign interventions and portray Washington as a unilateral power that leaves disorder in its wake. Official statements from Beijing, echoed across state media and social platforms, framed China as a defender of sovereignty and international law, in contrast to what it described as U.S. hegemony. China’s domestic information ecosystem reinforced this messaging at scale. On platforms such as Weibo and Zhihu, hashtags and discussions tied to the foreign ministry’s response dominated trending lists, accumulating tens of millions of views. Popular posts questioned the legitimacy of the U.S. arrest of a foreign leader and warned that regime change in Venezuela could set a dangerous precedent for smaller states. In parallel, inauthentic accounts on X, exhibiting traits associated with the Spamouflage network, amplified narratives accusing the United States of acting out of economic self-interest, particularly in the oil sector. Chinese state media also experimented with English-language AI-generated meme videos to reach global audiences, though these saw limited organic engagement outside China. Source: DFR Lab, K. Thibaut. AI, memes, and hashtags: How China is battling the US online over Venezuela . [online] Published 10 January 2026. 
Available at: https://dfrlab.org/2026/01/10/how-china-is-battling-us-online-over-venezuela/ Top Of Page [AI Related Articles] AI Turbocharges Hoaxes About the Minneapolis ICE Shooting NewsGuard   reported that, in the immediate aftermath of the 7 January 2026 shooting of 37-year-old Renee Nicole Good by an ICE agent in Minneapolis, social media was rapidly flooded with AI-generated and AI-altered misinformation. Within 24 hours, at least six false claims circulated widely, four of them relying on AI to fabricate images or distort the circumstances of the shooting, collectively drawing more than 4 million views across X, Instagram, Threads, and TikTok. These hoaxes included AI-enhanced images falsely claiming to reveal the masked agent’s identity and face, as well as fabricated visuals portraying the encounter as calm and non-confrontational, contradicting verified video footage. Some posts even misidentified the agent as Steve Grove, the CEO of the Minnesota Star Tribune, forcing the newspaper to publicly deny any connection. Investigations found that the viral images were digitally altered or entirely generated by AI tools, including content produced directly by X’s chatbot Grok, which later acknowledged that the images were speculative and not based on real data. To demonstrate how easily such hoaxes can be created, NewsGuard conducted a “red-teaming” audit of OpenAI’s Sora 2 and xAI’s Grok Imagine. Both tools generated convincing fake videos that advanced false claims about the shooting in roughly 75 percent of tests, including fabricated news reports and invented identities, and sometimes added realistic but false details on their own. Source:  NewsGuard's Reality Check, C. Vercellone & N. Huet. AI Used to ‘Unmask’ ICE Agent, in Latest Case of ‘AI Enhancement’ of Breaking News . [online] Published 8 January 2026. 
Available at: https://www.newsguardrealitycheck.com/p/minneapolis-ice-shooting-gets-ai Top Of Page Deepfakes as Disinformation Threats to Digital Identity and Trust Research by the Cybercrime Atlas  research group analyzed 17 face-swapping tools and eight camera injection tools to assess whether they effectively enable KYC (know-your-customer) bypass and to characterize the current deepfake landscape. The paper found that the misuse of deepfakes in KYC and remote verification processes enables attackers to inject false identities into financial systems, leading to fraud, impersonation, and reputational harm. By combining face-swapping technology, camera injection tools, and stolen or AI-generated documents, threat actors can present synthetic identities as real, undermining trust in digital verification at individual, organizational, and systemic levels. The research shows that while many deepfake tools remain imperfect, some already enable real-time, high-fidelity impersonation that can bypass biometric checks under certain conditions. This form of identity-based disinformation is becoming more scalable and dangerous as advanced AI tools become widely available and regulation remains fragmented. The study stressed that countering deepfake-driven disinformation requires coordinated, multi-layered defenses. Stronger liveness detection, continuous monitoring, risk-based fraud analysis, and shared governance standards are essential to prevent synthetic identities from becoming embedded in digital systems. Source:  World Economic Forum, N. Umansky. Unmasking Cybercrime: Strengthening Digital Identity Verification against Deepfakes . [online] Published January 2026. 
Available at: https://reports.weforum.org/docs/WEF_Unmasking_Cybercrime_Strengthening_Digital_Identity_Verification_against_Deepfakes_2026.pdf Top Of Page AI-Generated Disinformation on TikTok Targeting Ukraine StopFake   has documented  a recent surge of AI-generated videos pushing pro-Russian disinformation on Ukrainian TikTok. These clips, frequently depicting staged mass protests in Kyiv, Odesa, and Kharkiv, have accumulated millions of views. Leveraging realistic AI models such as OpenAI’s Sora, propagandists fabricate reporters, crowds, and events that never took place, combining emotionally charged delivery with vague, alarming claims. Several accounts play a particularly prominent role in these campaigns. They circulate AI-generated footage showing crowds chanting pro-Russian slogans, waving Russian flags, or portraying Ukrainian citizens as eager to join Russia. While some videos reference real concerns such as corruption or a desire for peace, these themes are manipulated to advance the false narrative that Russia offers the solution. Advances in AI now allow such actors to bypass traditional production barriers, resulting in content that appears highly realistic and is more difficult to detect than archival or staged footage. Source: StopFake, A. Korol & S. Mikhalkov. для Текстів. «Россия, помоги!» ИИ-видео с протестами в Украине набирают миллионы просмотров в TikTok . [online] Published 8 January 2026. Available at: https://www.stopfake.org/ru/rossiya-pomogi-ii-video-s-protestami-v-ukraine-nabirayut-milliony-prosmotrov-v-tiktok/ Top Of Page AI Is Becoming a Force Multiplier for Disinformation and Democratic Manipulation As a Carnegie Endowment for International Peace   report notes, artificial intelligence is increasingly intersecting with democratic systems in ways that both amplify disinformation risks and create new avenues for political influence. 
The report identifies AI-generated misinformation, deepfakes, and automated content as growing tools used by domestic and foreign actors to manipulate elections, distort public debate, and undermine trust in democratic institutions, particularly during election cycles. These tactics exploit AI’s ability to scale persuasive, personalized, and synthetic content across digital platforms, complicating efforts to verify information and safeguard information integrity. The analysis maps four key domains where these risks manifest: elections and campaigns, citizen deliberation, government institutions and services, and social cohesion. Across these areas, hostile influence actors can leverage AI to spread misleading narratives, suppress or confuse voters, and erode confidence in governance, while authoritarian governments may deploy AI for surveillance, censorship, and state-led disinformation. The report notes that existing countermeasures, such as content labeling, monitoring, and platform policies, remain fragmented and insufficient given the speed and adaptability of AI-enabled influence operations. At the same time, the report emphasizes that responses to AI-driven disinformation depend heavily on the actors involved, including governments, technology companies, civil society, and political campaigns. It argues that without stronger coordination, transparency, and guardrails, AI’s role in the information environment risks deepening polarization and weakening democratic resilience. Addressing hostile influence, therefore, requires not only technical solutions but also institutional capacity, cross-sector cooperation, and sustained oversight to prevent AI from becoming a force multiplier for disinformation campaigns. Source:  Carnegie Endowment for International Peace, R. George & I. Klaus. AI and Democracy: Mapping the Intersections . [online] Published 8 January 2026. 
Available at: https://carnegieendowment.org/research/2026/01/ai-and-democracy-mapping-the-intersections?lang=en Top Of Page [General Reports] Disinformation Surge After U.S. Raid in Venezuela CyberScoop  reported that, following the surprise U.S. operation in Caracas that led to the capture of Venezuelan President Nicolás Maduro, a lack of clear and timely information from the Trump administration created an information vacuum. This was quickly filled by disinformation online, including realistic but fake images and videos, recycled footage from past conflicts, and AI-generated content falsely portraying Maduro’s arrest. Social media platforms were flooded with manipulated visuals, some of which even inserted other world leaders into fabricated “perp walk” scenes, amplifying confusion during a fast-moving news event. At the same time, coordinated narratives framed the U.S. action as imperialist aggression driven by a desire to seize Venezuela’s oil. Spanish- and English-language online groups widely spread claims that the raid was theft rather than a law enforcement operation, echoing long-standing anti-U.S. themes promoted by actors aligned with Russia and China. The disinformation also revived debunked domestic conspiracy theories. Right-wing influencers falsely linked Maduro’s capture to claims that Venezuela helped manipulate the 2020 U.S. election—allegations that have been repeatedly disproven and legally repudiated. Experts stressed that much of the AI-driven, conspiratorial content was not intended to persuade skeptics but to reinforce existing beliefs and mobilize supporters. Source:  CyberScoop, D. B. Johnson. AI, voting machine conspiracies fill information vacuum around Venezuela operation . [online] Published 5 January 2026. 
Available at: https://cyberscoop.com/ai-voting-machine-conspiracies-fill-information-vacuum-venezuela-raid/ Top Of Page Climate Disinformation Identified as a Long-Term Influence Threat in Europe The Harmful Environmental Agendas and Tactics (HEAT) post-project reflection report analyzes how climate-related misinformation and hostile narratives circulate across France, Germany, and the Netherlands, undermining trust in climate science, environmental policy, and public institutions. The research identifies persistent narrative clusters, including climate denial, anti-elite and anti-government framing, and conspiracy theories portraying environmental action as coercive, manipulative, or part of hidden global agendas. These narratives are shown to be resilient, adaptive, and responsive to political events, extreme weather, and public anxiety, allowing them to re-emerge repeatedly across different national contexts. The report highlights key tactics and actors involved in amplifying harmful environmental narratives. Fringe influencers, partisan commentators, alternative media outlets, and loosely coordinated online communities act as initial catalysts, introducing misleading claims that later migrate into more mainstream discourse when they align with salient political debates. Cross-platform dissemination enables these narratives to move between fringe and mainstream spaces, while cross-border information flows allow conspiratorial content—often originating in English-language ecosystems—to be adapted to local grievances in each country. Although national contexts differ, common structures persist: attacks on scientific authority, portrayals of climate policy as economically punitive, and claims of climate manipulation or geoengineering.
Overall, the HEAT project concludes that environmental misinformation functions as a long-term influence vector, embedded within broader patterns of institutional distrust, identity politics, and socio-economic anxiety rather than isolated false claims. The findings stress that harmful narratives do not dissipate after exposure but evolve alongside political and social developments, requiring sustained monitoring and comparative analysis. The report’s dissemination and uptake by media, researchers, and international institutions underscore its relevance for understanding how climate disinformation intersects with democratic resilience and policy legitimacy across Europe. Source:  Logically & EU DisinfoLab. Prepared for the European Media and Information Fund (EMIF). HEAT: Harmful Environmental Agendas & Tactics: A look at France, Germany, and the Netherlands . [online] Published 23 June 2025. Available at: https://www.disinfo.eu/wp-content/uploads/2025/06/20250623-heat-harmful-environmental-agendas-tactics.pdf Top Of Page Social Media Amplifies Disinformation and Polarization Around Venezuela Crisis A WIRED   article examines how social media–driven information dynamics have reshaped the interpretation of major geopolitical events, accelerating the spread of misleading, polarized, and propagandistic narratives surrounding the U.S. military intervention in Venezuela. Experts cited in the piece argue that the collapse of traditional fact-checking timelines has allowed platforms to function as real-time constructors of reality, privileging speed, emotion, and simplification over verification. Political actors, most notably U.S. President Donald Trump, used social media to frame the intervention as legitimate and triumphant, while competing narratives from Venezuelan officials, activists, and commentators circulated simultaneously, producing confusion rather than clarity. 
Disinformation and distortion did not rely solely on outright falsehoods but also on selective framing, satire, memes, and emotionally charged short-form videos that spread rapidly across platforms such as TikTok, Instagram, WhatsApp, and X. Accounts such as Historia Para Tontos illustrated how simplified, ironic content can go viral and shape geopolitical understanding, while comment sections became arenas for ideological confrontation, often reinforcing pre-existing beliefs. Scholars and journalists highlighted how algorithms reward outrage and immediacy, amplifying polarized narratives and reducing space for nuance, dialogue, or contextual reporting. The article further underscores how information control, digital blackouts, and fear of persecution in Venezuela have driven citizens toward fragmented, private communication channels, particularly WhatsApp, making coordinated, high-quality information dissemination more difficult. Independent media collectives and diaspora voices attempt to counter misinformation, but the overall environment remains saturated with noise, bias, and competing propaganda. Analysts warn that this disintermediated ecosystem, in which every user acts as a broadcaster, creates fertile ground for manipulation, echo chambers, and the erosion of the public’s capacity to distinguish truth from fabricated or legitimized narratives, turning geopolitical crises into ongoing cognitive battles rather than discrete events. Source: Wired, A. Lagos. The Danger of Reducing America’s Venezuela Invasion to a 60-Second Video. [online] Published 11 January 2026.
Available at: https://www.wired.com/story/the-danger-of-reducing-a-americas-venezuela-invasion-to-a-60-second-video/ Top Of Page AI-Driven Disinformation Exploits Maduro Arrest to Shape Global Narratives Graphika found that the arrest and extradition of Venezuelan President Nicolás Maduro generated a highly volatile information environment that was quickly filled with misleading and synthetic content across major platforms. Within 48 hours, Graphika identified AI-generated videos, fabricated images, and repurposed footage circulating widely on TikTok, Facebook, Instagram, X, and YouTube. This content was shared by both engagement-driven accounts seeking virality and ideologically motivated users advancing political narratives, illustrating how disinformation thrives when facts are still emerging and verification lags behind real-time sharing. Graphika identified three dominant disinformation tactics. First, AI-generated videos falsely depicting Venezuelans celebrating Maduro’s arrest spread quickly, often posted by newly created TikTok accounts that had previously shared other emotionally charged synthetic content, suggesting opportunistic engagement farming rather than a single coordinated political campaign. Second, fabricated booking images showing Maduro in a U.S.-style mugshot circulated widely, despite clear visual inconsistencies indicating manipulation. These images were initially shared by generic “breaking news” accounts and later amplified by opposition voices, who framed them as proof of justice, eventually appearing in mainstream media coverage and demonstrating how unverified social media content can be laundered to appear legitimate. Third, authentic but unrelated footage was strategically repurposed to advance misleading narratives: ideologically aligned accounts, including pro-Maduro and anti-U.S. actors across multiple languages, reused videos from prior U.S. protests and falsely presented them as contemporary demonstrations against U.S.
intervention in Venezuela. Some versions deliberately obscured original logos to conceal provenance. Together, these cases show how generative AI, recycled media, and platform incentives enable rapid narrative shaping without centralized coordination—reinforcing ideological positions, confusing audiences, and allowing misleading content to penetrate mainstream discourse in ways that mirror tactics used in elections, scams, and foreign influence campaigns beyond this specific event. Source: Graphika. How Graphika Tracked Synthetic Content After Maduro's Arrest. [online] Published 9 January 2026. Available at: https://graphika.com/posts/venezuela Top Of Page [Appendix - Frameworks to Counter Disinformation] U.S. Withdrawal from Cyber and Information Security Forums Raises Disinformation Concerns The Trump administration is pulling the United States out of several international organizations focused on cybersecurity cooperation, online freedom, and countering hybrid threats, CyberScoop reports, including the Global Forum on Cyber Expertise, the Freedom Online Coalition, and the European Centre of Excellence for Countering Hybrid Threats. The move is framed by the administration as part of a broader retreat from multilateral institutions it views as mismanaged, ideologically driven, or incompatible with U.S. sovereignty. Critics warn that the decision risks creating a leadership vacuum in areas central to countering cyber threats, disinformation, and hybrid influence campaigns. Experts cited in the article argue that these organizations play a critical role in strengthening collective defenses against cybercrime, infrastructure attacks, and information manipulation, and that U.S. withdrawal weakens international coordination at a time of escalating cyber and disinformation threats.
The Global Forum on Cyber Expertise supports capacity building on cybersecurity and emerging technologies, while the Freedom Online Coalition promotes digital rights and opposes cyber-enabled repression. The European Centre of Excellence focuses on countering hybrid threats that blend cyber operations, disinformation, and other non-military tactics targeting democratic states. Former officials, academics, and civil society leaders caution that disengagement from these forums undermines the rules-based international order and reduces democratic states' ability to respond collectively to hostile influence operations. They warn that diminished U.S. participation could enable authoritarian actors to expand their influence, weaken protections for free expression online, and create conditions in which disinformation, cyber repression, and hybrid threats spread with fewer constraints. Source: CyberScoop, T. Starks. Trump pulls US out of international cyber orgs. [online] Published 8 January 2026. Available at: https://cyberscoop.com/trump-pulls-us-out-of-international-cyber-orgs/ Top Of Page [CRC Glossary] The modern Information Environment is projected to grow steadily more complex. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus.
We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page
