
  • CRC Weekly: Cyber-based hostile influence campaigns 22nd - 28th December 2025

[Introduction]

Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]

[Introduction]
[Report Highlights]
[Social Media Platforms]
- Meta: Meta Report Warns of Persistent State-Linked Disinformation and AI-Accelerated Influence Operations
[State Actors]
- Russia: Disinformation in Putin’s “Direct Line” Broadcast; Russia’s Historical Disinformation Campaign Against Finland
- The War in Ukraine: The Expansion of Russia’s AI-Driven Disinformation Campaigns; Disinformation Campaigns Targeting Ukrainian Refugees; False Claims About Microchipped Vaccines in Ukraine; Coordinated Disinformation Fuels Anti-Ukrainian Sentiment in Poland
- China: China’s Cognitive Warfare Strategy Seeks Narrative Dominance Through Disinformation; China Index Events Highlight PRC Influence
[AI Related Articles]
- AI and LLM Disinformation Risks
[Appendix - Frameworks to Counter Disinformation]
- EU Sanctions Target Diana Panchenko
- U.S. Action Targets Foreign Disinformation and Censorship Agents
- US Judge Blocks Deportation of Anti-Disinformation Advocate
- The WikiSignals Project Strengthens Wikipedia Against Disinformation
- EU Sanctions Act against Russian Aggression
- Uzbekistan Strengthens Youth Fact-Checking to Combat Disinformation
- EU - Western Balkans Conference Strengthens Fight Against Disinformation
[CRC Glossary]

[Report Highlights]

Meta’s Threat Report finds that state-linked actors continue to run coordinated disinformation campaigns using fake media, local intermediaries, and AI tools to scale deceptive narratives and evade detection.

According to the Cyfluence Research Center report, China conducts coordinated cognitive warfare by pairing gray-zone military actions with disinformation, legal manipulation, and psychological operations to reshape narratives of sovereignty and achieve strategic dominance without open conflict.

A report by the Psychological Defence Agency described a sustained Kremlin-led disinformation campaign that uses falsified history as a strategic tool against Finland.

An article by StopFake outlined a Russian disinformation strategy that portrays Ukrainians abroad as traitors or threats.

As published by DoubleThink Lab, the China Index Regional Event Fund, launched in 2025, has supported local events worldwide to discuss Chinese influence and disinformation, helping communities understand the PRC’s global strategies.

The European Union announced it has renewed its restrictive measures against Russia for six more months, until July 31, 2026, in response to Moscow’s ongoing destabilization of Ukraine.

[Social Media Platforms]

Meta

Meta Report Warns of Persistent State-Linked Disinformation and AI-Accelerated Influence Operations

Meta’s Q2–Q3 2025 Adversarial Threat Report outlines how hostile actors continue to exploit online platforms through coordinated inauthentic behavior (CIB), fraud, and AI-enabled manipulation, with a particular focus on state-linked influence operations and transnational criminal networks.
The report documents how adversaries use false identities, deceptive media outlets, and coordinated amplification to mislead audiences, obscure attribution, and advance geopolitical or financial objectives, while deliberately adapting tactics to evade detection. The report highlights several state-linked influence campaigns, including a long-running Iranian operation known as Endless Mayfly, attributed to the International Union of Virtual Media, which used fake news sites, impersonated journalists, and cross-platform amplification to launder misleading narratives into public discourse. It also details Russian-linked operations that increasingly rely on authentic local intermediaries, particularly in sub-Saharan Africa, where freelancers and local media managers were covertly used to promote pro-Russian narratives and undermine Western partnerships—blurring the line between authentic and inauthentic behavior. In parallel, Meta reports that AI has become an accelerant rather than a fundamentally new threat, enabling adversaries to scale deception through AI-generated personas, content, translations, and impersonation while maintaining familiar behavioral patterns. These techniques are used across influence operations and scams, including celebrity impersonation and the production of fake news, reinforcing the need for behavior-based detection, cross-sector intelligence sharing, and coordinated defenses to counter evolving disinformation and hybrid threats.

Source: Meta, Adversarial Threat Report: Second–Third Quarter. [online] Published December 2025. Available at: https://transparency.meta.com/sr/Q2-Q3-2025-Adversarial-threat-report/

[State Actors]

Russia

Disinformation in Putin’s “Direct Line” Broadcast

According to an article by EUvsDisinfo, Russia’s annual “Direct Line with Vladimir Putin,” first held in 2001, functions as a central instrument of state-led disinformation. In the broadcast that aired on December 19, Putin created an illusion of openness while legitimizing false narratives at the highest political level. Emotional appeals, personalized anecdotes, and claims of hands-on leadership were used to humanize the president and distract from systemic problems. At the same time, inconvenient realities were downplayed or presented as already resolved. A key tactic was the management of criticism through false equivalence and projection. By allowing Western journalists to ask questions, the Kremlin sought to showcase what it claimed was “free debate,” while dismissing criticism as biased and hostile. Putin repeated long-standing disinformation narratives about Ukraine, portraying Russia as seeking peace, shifting blame for the war onto Ukraine, the EU, and NATO, and insisting that Russia did not start the conflict. He also relied on cherry-picked facts and outright denial to claim military success and economic resilience, despite sanctions and battlefield realities. Assertions that Ukraine lacks the capacity to resist, that Russian advances are inevitable, and that Western actions amount to “robbery” or provocation were used to reinforce a victimhood narrative.

Source: EUvsDisinfo, Controlled questions, crafted lies: inside Putin’s year-end messaging machine. [online] Published 23 December 2025.
Available at: https://euvsdisinfo.eu/controlled-questions-crafted-lies-inside-putins-year-end-messaging-machine/

Russia’s Historical Disinformation Campaign Against Finland

A report by the Psychological Defence Agency described a sustained Kremlin-led disinformation campaign that uses falsified history as a strategic tool against Finland. Russian authorities and state-controlled media portray Finland as a Nazi collaborator responsible for war crimes, reviving and distorting Second World War narratives to undermine Finland’s legitimacy, damage its international reputation, and justify future pressure or aggression. This historical revisionism is embedded in Russia’s broader hybrid warfare, in which disinformation, fear-mongering, and victim narratives are coordinated from the highest political level and amplified through state media such as TASS. A central element of the campaign is “lawfare”: Russian courts and prosecutors issue rulings and indictments that falsely accuse Finland of genocide and complicity in the siege of Leningrad. These legal actions create a façade of legitimacy for potential reparations or territorial claims. They are reinforced by physical acts, including the vandalism and removal of Finnish memorials on Russian territory, as well as staged provocations involving nationalist activists. The campaign follows an escalatory logic in which rhetoric, legal measures, and physical acts combine to create conditions for renewed aggression. The report concluded that history is no longer treated as an academic debate but as an active weapon of security policy, used to mobilize the Russian population, portray Russia as a victim, and frame Finland as an enemy. Countering this threat requires coordinated, long-term, and fact-based international responses, as fragmented or reactive approaches allow disinformation to set the terms of historical and political debate.

Source: Psychological Defence Agency Report Series, P. Oksanen, History as a Battlefield: Russia’s Information War Against Finland 2025. [online] Published 2025. Available at: https://mpf.se/download/18.6888ebfe19b2bdfbd24ae4/1766154667213/History_as_a_Battlefield-tga.pdf

The War in Ukraine

The Expansion of Russia’s AI-Driven Disinformation Campaigns

NewsGuard has debunked 400 false claims related to the Russia–Ukraine war promoted by Russian actors. The most prominent of these operations is Storm-1516, which Microsoft and NewsGuard have identified as the fastest-growing and most prolific campaign. Rather than relying on sanctioned outlets like RT and Sputnik, the Kremlin is increasingly using unofficial, harder-to-trace networks to spread false narratives. Storm-1516 specialises in accusing Ukraine and its allies of corruption and criminal activity, using AI-generated news websites, deepfake videos, and fake social media accounts. These false claims often achieve substantial reach, drawing millions of views across platforms such as X and Telegram. The campaign has rapidly scaled its output since 2023, surpassing not only other influence operations but also RT and Sputnik combined in the number of false claims spread in 2025. Similar operations, such as Matryoshka and the Foundation to Battle Injustice, further amplify disinformation through fake media outlets. A defining feature of this new phase is the use of AI to increase speed, scale, and credibility.
Storm-1516 has impersonated trusted news organisations, fabricated official-looking videos, and exploited real corruption investigations to inject false claims about President Volodymyr Zelensky.

Source: NewsGuard Reality Check, E. Maitland & A. Lee, 400 and Counting: A Russian Influence Operation Overtakes Official State Media in Spreading Russia-Ukraine False Claims. [online] Published 22 December 2025. Available at: https://www.newsguardrealitycheck.com/p/a-russian-influence-operation-surpasses

Disinformation Campaigns Targeting Ukrainian Refugees

An article published by StopFake outlines a long-standing Russian disinformation strategy that depicts Ukrainians abroad as traitors or threats—an approach with roots in Soviet-era propaganda that intensified after Russia’s full-scale invasion of Ukraine. As millions of Ukrainians were forced to flee, Russian information attacks increasingly shifted blame onto the victims of the war, using fabricated stories to distort public perception. StopFake reports a sharp rise in such narratives since 2022, describing a sustained effort to damage Ukraine’s image and weaken international support. Russian disinformation about Ukrainian refugees tends to follow three core narratives: portraying them as criminals, as an economic and social burden, or as people who will be forcibly deported and sent back to fight. These claims commonly rely on invented stories, manipulated photos and videos, and forged websites that impersonate trusted Western media—a tactic associated with the “Doppelganger” operation. Refugees are falsely accused of violence, spreading disease, abusing welfare systems, or receiving preferential treatment, and many of these stories are amplified by far-right actors and fringe outlets to gain political traction. Over time, the emphasis of these narratives has shifted from petty crime stories in 2022 to economic strain and “war fatigue” messaging in 2023, and then to fears of forced mobilisation in 2024–2025. Countries hosting large numbers of Ukrainians, especially Poland and Germany, have been frequent targets, with disinformation shaping public debate and, in some cases, election campaigning.

Source: StopFake, Как Россия ведет войну против украинских беженцев [How Russia wages war against Ukrainian refugees]. [online] Published 23 December 2025. Available at: https://www.stopfake.org/ru/kak-rossiya-vedet-vojnu-protiv-ukrainskih-bezhentsev/

False Claims About Microchipped Vaccines in Ukraine

NewsGuard’s analysis described a fabricated “investigation” by the so-called Foundation to Battle Injustice, which claimed that Ukrainian authorities are injecting soldiers with COVID-19 vaccines containing microchips that can remotely paralyse or kill them. Widely circulated by pro-Kremlin social media accounts in December 2025, the allegation asserted that President Volodymyr Zelensky had issued secret orders and cited an unnamed “leak” as evidence. In reality, the Foundation to Battle Injustice is a known front with documented ties to pro-Kremlin influence efforts and a record of spreading disinformation about Ukraine. There is no evidence that Ukraine mandates COVID-19 vaccination for soldiers, nor that vaccines can contain microchips. Medical experts and health authorities have repeatedly stated that such technology does not exist, and that vaccine components are far smaller than any microchip. Ukrainian officials have explicitly denied the claim, calling it false.
In addition, a member of the press department of the Ukrainian President’s Office told NewsGuard in an email: “This information is fake.”

Source: NewsGuard Reality Check, I. Blachez, Russia Fabricates Report that Ukraine Injects Soldiers with Vaccines Containing Microchips that Can Kill Them for Disobeying Orders. [online] Published 23 December 2025. Available at: https://www.newsguardrealitycheck.com/p/no-ukraine-doesnt-have-a-microchip

Coordinated Disinformation Fuels Anti-Ukrainian Sentiment in Poland

A DFRLab report detailed a surge in anti-Ukrainian disinformation in Poland following two triggering events: President Karol Nawrocki’s veto of expanded benefits for Ukrainian refugees and Russian drone incursions into Polish airspace. Both events were exploited by coordinated online networks that pushed misleading narratives portraying Ukrainians as economic burdens and security threats, while downplaying or denying Russia’s role. These information attacks sought to fracture Polish public support for Ukraine and weaken the Polish–Ukrainian alliance at a critical moment. Despite robust economic data showing that most Ukrainians in Poland are employed and contribute far more in taxes than they receive in benefits, false claims depicting refugees as “freeloaders,” “parasites,” or criminals spread widely on social media. Investigations identified coordinated amplification by networks of Facebook pages, fringe groups, clickbait websites, and monetized platforms, often using synchronized posting, misleading page names, and AI-generated videos. These narratives were also injected into non-political spaces—such as hobby and lifestyle groups—making them harder to detect and counter. Disinformation intensified further after Russian drones entered Polish airspace, with conspiratorial narratives falsely blaming Ukraine for staging provocations to drag Poland and NATO into the war. These claims circulated across Facebook, TikTok, and the comment sections of major Polish media outlets, often originating from suspicious or inauthentic accounts. One example involved the site udostepnij[.]pl and a network of Facebook pages with innocuous, non-political names that simultaneously shared identical posts about a deportation case, exaggerating and misrepresenting the facts. With hundreds of thousands of followers, these pages systematically amplified similar content, monetized audience attention through ads and YouTube, and redirected users to clusters of right-wing websites.

Source: DFRLab, G. Gigitashvili & S. Gelava, How social media manipulation fuels anti-Ukraine sentiment in Poland. [online] Published 21 December 2025. Available at: https://dfrlab.org/2025/12/21/how-social-media-manipulation-fuels-anti-ukraine-sentiment-in-poland/

China

China’s Cognitive Warfare Strategy Seeks Narrative Dominance Through Disinformation

The Cyfluence Research Center (CRC) has published a piece by Athena Tong that casts China’s doctrine of cognitive warfare as a tightly integrated form of hostile influence—melding physical actions with sustained narrative manipulation to reshape perceptions of sovereignty and legitimacy across East Asia. Maritime manoeuvres in the South China Sea, the East China Sea, and around Taiwan are embedded in coordinated foreign information manipulation and interference (FIMI) campaigns, using state media, legal instruments, environmental claims, and diplomatic messaging to portray Beijing as restrained, lawful, and environmentally responsible while framing rivals as provocateurs or foreign proxies.
The piece explains how maritime incidents are systematically distorted to advance strategic narratives. Collisions, water-cannon attacks, and blockades are rapidly reframed through manipulated images, recycled footage, and fabricated environmental scandals—such as claims of ecological harm caused by Philippine forces—to blur accountability and confuse audiences. This narrative engineering is reinforced through lawfare, including the designation of disputed features such as Scarborough Shoal as Chinese “nature reserves” and selective appeals to international law at the United Nations to recast sovereignty disputes as matters of development and stewardship rather than territorial rights. Beyond the South China Sea, Beijing employs historical revisionism, digital disinformation, and psychological operations to erode Japanese and Taiwanese sovereignty narratives. Coordinated social media campaigns amplify divisive local grievances in Okinawa, seed fake grassroots movements, and link disputes over the Senkaku Islands to Taiwan’s status, while misrepresenting UN Resolution 2758 to legitimise claims over Taiwan. Collectively, these tactics constitute a comprehensive political-warfare playbook that combines media warfare, lawfare, and psychological pressure to normalise grey-zone aggression, weaken the adversary's resolve, and establish narrative dominance without direct military confrontation.

Source: Cyfluence Research, A. Tong, Cognitive Warfare Masterclass: China’s Doctrine for Strategic Narrative Superiority. [online] Published 29 December 2025. Available at: https://www.cyfluence-research.org/post/cognitive-warfare-masterclass-china-s-doctrine-for-strategic-narrative-superiority

China Index Events Highlight PRC Influence

As published by DoubleThink Lab, the China Index Regional Event Fund, launched in 2025, has supported local events worldwide to foster discussion about Chinese influence and disinformation, helping communities better understand the PRC’s global strategies. Events ranged from policy roundtables and workshops for journalists to university sessions for students. In the UK, discussions focused on how the Chinese Communist Party's influence affects higher education, including concerns about surveillance and self-censorship among Chinese students. Meanwhile, Italian forums highlighted the PRC’s impact on politics, media, and democracy in Europe. In South and Southeast Asia, workshops emphasised the role of online information operations and PRC-backed initiatives in shaping public opinion and reinforcing control. At universities in India, Thailand, and Pakistan, participants learned to detect patterns of influence, assess risks linked to initiatives such as the China–Pakistan Economic Corridor, and understand how surveillance technology and economic dependencies can be used to shape behaviour. Discussions highlighted how disinformation and selective narratives are deployed to manipulate populations. Across all events, the China Index facilitated evidence-based discussions on disinformation and broader PRC influence, illustrating how academic research can inform public debate, policymaking, and media coverage. The initiative aimed to strengthen regional understanding of authoritarian digital strategies, raise awareness of state-driven propaganda, and provide practical tools to detect and mitigate the effects of information operations globally.

Source: Medium (Doublethink Lab), From Global Insight to Local Action: The 2025 China Index Regional Event Fund Report.
[online] Published 23 December 2025. Available at: https://medium.com/doublethinklab/from-global-insight-to-local-action-the-2025-china-index-regional-event-fund-report-f7f1bacd6800

[AI Related Articles]

AI and LLM Disinformation Risks

Concerns about digital misinformation have flared up again following Meta’s decision to wind down professional fact-checking, a development covered by TechXplore (for further information, see the W02 January 2025 Cyfluence Report here). Still, the larger challenge lies in the subtler influence of AI large language models (LLMs) on public perception. These models increasingly generate news summaries, headlines, and content across platforms before traditional moderation can respond, shaping not only what information people see but also how it is framed. Even when the underlying facts are accurate, LLMs can foreground certain viewpoints while downplaying others, creating communication bias that quietly steers opinions and amplifies particular narratives at the expense of others. This bias is distinct from outright disinformation, but it can produce similar effects on public understanding. Research suggests that LLMs exhibit persona-based steerability, aligning outputs with perceived user expectations. For example, the same query about legislation may yield responses tailored to an environmental activist versus a business owner—emphasising different angles while remaining factually accurate. When a small number of dominant companies control most widely used AI models, even minor biases can scale rapidly, shaping public discourse without users fully realizing the extent of that influence. Current regulatory frameworks, such as the EU AI Act and the Digital Services Act, prioritise transparency, pre-launch audits, and post-deployment oversight. However, they remain limited in their ability to address subtle communication bias. Effective mitigation, therefore, requires more than regulation: fostering competition, transparency, and meaningful user participation is essential to ensuring that AI systems reflect diverse perspectives.

Source: TechXplore, A. Kuenzler, People are getting their news from AI—and it's altering their views. [online] Published 22 December 2025. Available at: https://techxplore.com/news/2025-12-people-news-ai-views.html

[Appendix - Frameworks to Counter Disinformation]

EU Sanctions Target Diana Panchenko

EUvsDisinfo notes that the European Union has introduced a new round of sanctions aimed at individuals and entities linked to coordinated Russian disinformation and cyber operations (for further information, see the W51 December 2025 Cyfluence Report here). One of the listed individuals is Diana Panchenko, a Ukrainian-born journalist whom EUvsDisinfo describes as promoting pro-Kremlin, anti-Ukrainian, and anti-NATO narratives while presenting this material as “investigative journalism” in ways intended to mislead audiences and lend credibility to false claims. Her output blends the aesthetics of professional news production with content filmed in Russian-occupied territories, amplifying narratives aligned with Russia’s war messaging. She uses platforms such as Telegram and X to reach large audiences, including English-speaking users.
She relies on techniques often associated with influence operations (clickbait framing, coordinated amplification, and the systematic promotion of pro-Russian accounts) rather than on conventional independent reporting. Ukrainian authorities and disinformation researchers have warned that her content contributes to justifications for Russian abuses and undermines Ukraine’s security and stability.

Source: EUvsDisinfo, From “journalism” to FIMI: EU sanctions Diana Panchenko. [online] Published 22 December 2025. Available at: https://euvsdisinfo.eu/from-journalism-to-fimi-eu-sanctions-diana-panchenko/

U.S. Action Targets Foreign Disinformation and Censorship Agents

The U.S. State Department has imposed restrictions on five individuals identified as leading organised efforts to pressure American platforms to censor, demonetise, or suppress viewpoints they oppose. These actors—including radical activists and weaponised NGOs—have advanced foreign-backed censorship campaigns targeting American speakers and companies. By imposing these restrictions, the State Department aims to prevent foreign disinformation networks from influencing domestic discourse and manipulating online platforms. In some cases, the Department of Homeland Security can initiate removal proceedings under U.S. law against individuals engaged in such activities, underscoring the seriousness with which the U.S. treats extraterritorial disinformation and censorship campaigns. By curbing the reach of these actors, the U.S. seeks to safeguard free expression, protect American companies from coercion, and mitigate the impact of disinformation campaigns.

Source: U.S. Department of State, Announcement of Actions to Combat the Global Censorship-Industrial Complex (Press Statement by Marco Rubio, Secretary of State). [online] Published 23 December 2025. Available at: https://www.state.gov/releases/office-of-the-spokesperson/2025/12/announcement-of-actions-to-combat-the-global-censorship-industrial-complex/

US Judge Blocks Deportation of Anti-Disinformation Advocate

ABC News published that a U.S. judge has temporarily prevented the Trump administration from detaining Imran Ahmed, a British anti-disinformation campaigner and chief executive of the Center for Countering Digital Hate, over visa restrictions aimed at five Europeans accused of targeting U.S. tech companies or censoring speech. Ahmed, a permanent U.S. resident, fears deportation that would separate him from his U.S.-citizen family while he continues his work combating online hate and misinformation. The visa bans sparked criticism from European governments, which argued that Ahmed’s work enhances online safety by exposing disinformation, hate speech, and child-exploitation content—highlighting the tension between foreign-policy concerns and anti-disinformation efforts. Ahmed filed a lawsuit alleging violations of his free-speech and due-process rights, prompting U.S. District Judge Vernon Broderick to issue a temporary restraining order halting any detention or removal until the case can be heard.

Source: ABC News, US judge blocks deportation of British anti-disinformation activist. [online] Published 26 December 2025.
Available at: https://www.abc.net.au/news/2025-12-26/us-barred-from-deporting-campaigner-imran-ahmed/106179794

The WikiSignals Project Strengthens Wikipedia Against Disinformation

MisinfoCon’s coverage of WikiCredCon 2025 highlights a core reality for Wikipedia: credibility depends not only on accurate articles, but on the trustworthiness of the sources behind them. WikiSignals brings together organisations including Veri.FYI, Factiverse, SimPPL, AfroCrowd, Iffy.news, and the Internet Archive Reference Explorer. The project is developing tools to help editors assess the reliability of references by compiling data from press associations, fact-checking ratings, domain registration records, media indexes, and lists of predatory or hijacked journals. This resource—now covering 70,000 websites—provides editors with a searchable credibility score for each source, helping prevent disinformation from entering Wikipedia. WikiSignals also offers a practical tool that lets editors evaluate a URL’s credibility and format it as a Wikipedia citation. The project integrates machine-harvestable data to estimate reliability programmatically, supporting both manual and automated efforts to counter misleading or biased sources (a hypothetical sketch of this kind of signal aggregation follows this report summary). Input from active Wikipedia editors has guided the development of additional resources, including a compendium of favourite editing tools and an expanded index of predatory journals.

Source: MisinfoCon, B. Golding, WikiSignals: Strengthening the Credibility of Wikipedia Citations. [online] Published 24 October 2025. Available at: https://misinfocon.com/wikisignals-strengthening-the-credibility-of-wikipedia-citations-8531d2673099

EU Sanctions Act against Russian Aggression

The European External Action Service (EEAS) announced it has renewed its restrictive measures against Russia for six more months, until 31 July 2026, in response to Moscow’s ongoing destabilization of Ukraine. These sanctions, first imposed in 2014 and significantly expanded after Russia’s 2022 invasion, cover a wide range of sectors, including trade, finance, energy, technology, and luxury goods. The measures include suspending the broadcasting activities and licenses of Kremlin-backed disinformation outlets within the EU, directly targeting coordinated propaganda campaigns aimed at influencing European public opinion. The EU’s sanctions also include asset freezes, travel bans, and restrictions on economic activity in Crimea, Sevastopol, and non-government-controlled areas of Ukraine.

Source: EEAS, Press and information team of the Delegation to Ukraine, Russia’s war of aggression against Ukraine: Council extends economic sanctions for a further 6 months. [online] Published 22 December 2025. Available at: https://www.eeas.europa.eu/delegations/ukraine/russia%E2%80%99s-war-aggression-against-ukraine-council-extends-economic-sanctions-further-6-months_en

Uzbekistan Strengthens Youth Fact-Checking to Combat Disinformation

A report by the EEAS revealed that on December 25, 2025, Uzbekistan inaugurated a Fact-Checking Club at the Youth Affairs Agency in Tashkent under the EU-funded “Digital Detectives” project, in collaboration with the “Qizlar Ovozi” (Voices of Girls) initiative. The event emphasized the importance of promoting media literacy, critical thinking, and responsible information consumption among young people, while linking fact-checking to gender equality and inclusive public discourse.
Discussions highlighted the societal role of fact-checkers, the dangers of misinformation, and the negative impact of gender stereotypes. The project has trained 10 new fact-checking trainers and expanded a network of freelance contributors, delivering workshops and regional sessions that reached over 220 participants from schools, universities, and youth centers. Between October and December 2025, participants produced more than 50 fact-checking articles, approximately 20 videos, and multiple awareness campaigns addressing disinformation, cyberbullying, gender equality, and public health. Fact-checking clubs are now being launched in multiple institutions, including the University of Journalism and Mass Communications, and in Nukus, Karakalpakstan.

Source: EEAS, Press and information team of the Delegation to Uzbekistan, EU-Funded Project “Digital Detectives” Launches Fact-Checking Club at the Youth Affairs Agency. [online] Published 25 December 2025. Available at: https://www.eeas.europa.eu/delegations/uzbekistan/eu-funded-project-%E2%80%9Cdigital-detectives%E2%80%9D-launches-fact-checking-club-youth-affairs-agency_en

EU - Western Balkans Conference Strengthens Fight Against Disinformation

According to the EEAS publication, the EU–Western Balkans Media Literacy Conference in Skopje, on 17 November, convened representatives from across the region, EU Member States, and international partners to address the growing threat of disinformation. The conference emphasized media literacy as a cornerstone of informed citizenship and democracy. Panels and workshops focused on key challenges in the regional information landscape, including social media’s role in the spread and counteracting of manipulated content, gendered disinformation, digital hate, and the specific vulnerabilities of younger audiences. Sessions also explored how EU policy frameworks, such as the Digital Services Act, can support human-rights-based, whole-of-society approaches to combating misinformation and influence operations. Participants exchanged practical strategies for detecting, mitigating, and preventing disinformation while enhancing institutional capacity.

Source: European External Action Service (EEAS), Press and information team of the EU Delegation to the Republic of North Macedonia, EU–Western Balkans Media Literacy Conference 2025 strengthens regional cooperation to counter disinformation. [online] Published 25 December 2025. Available at: https://www.eeas.europa.eu/delegations/north-macedonia/eu%E2%80%93western-balkans-media-literacy-conference-2025-strengthens-regional-cooperation-counter_en

[CRC Glossary]

The nature and sophistication of the modern Information Environment are projected to continue to escalate in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC maintains a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.
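As flagged in the WikiSignals item above, the following is a minimal, hypothetical sketch in Python of how machine-harvestable signals of the kinds the project is reported to compile (press-association membership, fact-checking ratings, domain registration records, predatory-journal lists) might be aggregated into a single credibility score. The signal names, weights, and scoring rule are invented for illustration; they are not the project's actual data model, formula, or API.

```python
# Hypothetical illustration only: combining machine-harvestable source signals
# into one credibility score. Weights and thresholds are assumptions.
from dataclasses import dataclass


@dataclass
class SourceSignals:
    press_association_member: bool  # listed by a recognized press association
    fact_check_rating: float        # 0.0 (unreliable) .. 1.0 (reliable)
    domain_age_years: float         # derived from domain registration records
    on_predatory_list: bool         # appears on predatory/hijacked journal lists


def credibility_score(s: SourceSignals) -> float:
    """Return a weighted score in [0, 1]; a predatory-list hit is disqualifying."""
    if s.on_predatory_list:
        return 0.0
    score = 0.3 * float(s.press_association_member)
    score += 0.5 * s.fact_check_rating
    score += 0.2 * min(s.domain_age_years / 10.0, 1.0)  # age bonus capped at 10 years
    return round(score, 2)


# Example: a long-established, well-rated news domain scores near the top.
print(credibility_score(SourceSignals(True, 0.9, 12.0, False)))  # -> 0.95
```

The design choice worth noting is the disqualifying signal: list-based negative evidence (predatory or hijacked journals) overrides any positive weighting, which mirrors how editors typically treat blacklisted sources as unusable regardless of other indicators.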

  • Cognitive Warfare Masterclass: China’s Doctrine for Strategic Narrative Superiority

Athena Tong analyzes China’s actions in the Western Pacific as strategic, consistent, and systemic. The objective is to compress the operational and political space of states Beijing treats as challengers and to entrench new “normalities” incrementally. Tong frames this pattern as the practical application of the PLA’s “Three Warfares,” amplified through FIMI, where narrative dominance, psychological pressure, and legal framing reinforce one another.

Key Takeaways:

In the Philippines (Scarborough Shoal/Second Thomas Shoal), she shows how maritime incidents—collisions, blockades, water-cannon attacks—are first shaped through information operations to secure interpretive advantage and cast China as a rule- and environment-protecting actor. Presence and calibrated escalation then impose immediate pressure on decision-making and rules of engagement. This is coupled with legal framing that shifts the reference point: environmental and development claims are used to support asserted jurisdiction and to push sovereignty and self-determination principles into the background.

With Japan, Tong links the normalization of Chinese presence around the Senkaku/Diaoyu Islands to a parallel influence line focused on Okinawa. Digital influence operations amplify local grievances over the U.S. military footprint and manufacture the appearance of organic grassroots momentum, reinforced by historical revisionism. Routine transits and persistent presence function as stressors that raise response costs, while the “routine law-enforcement” label provides the accompanying legal logic.

For Taiwan (Kinmen), Tong describes a particularly tight coupling: information pressure and ambiguity shape expectations, while recurring incursions and administrative jurisdictional claims ratchet up pressure without crossing into open hostilities. On the legal plane, she highlights attempts to narrow Taiwan’s diplomatic and legal room for maneuver, including the strategic misrepresentation of UN General Assembly Resolution 2758 to bolster Beijing’s “One China” framing.

[Full Report Below]

  • CRC Weekly: Cyber-based hostile influence campaigns 15th - 21st December 2025

[Introduction]

Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events.

[Contents]

[Introduction]
[Report Highlights]
[State Actors]
- Russia: Denmark Accuses Russia of Cyber and Influence Attacks Targeting Elections; Kremlin-Controlled Platform Laundered False Claims to Undermine EU-Canadian Freeze on Russian Assets; Russia’s Disinformation Campaigns and Their Impact; Germany Accuses Russia of Cyberattacks and Disinformation Campaigns
- The War in Ukraine: Coordinated Disinformation Campaign Fuels Anti-Ukrainian Sentiment in Poland; Debunking Kremlin Battlefield Disinformation
- Iran: Disinformation Campaigns During the 2025 Israel–Iran War
[AI Related Articles]
- Disinformation Claims After the Sydney Hanukkah Attack
- AI Disinformation’s Threat to the Rule of Law
- AI Chatbot Personalization Fuels Climate Disinformation
- AI-Enabled Disinformation in North Korean Cyber Operations
[General Reports]
- Italy’s Disinformation Landscape
- Disinformation and Nationality Claims After the Bondi Beach Attack
- The November Reality Gap Index in the U.S.
- Disinformation Fueled by Misinterpretation of Google Trends
- Foreign Interference in Moldova’s 2025 Elections
[Appendix - Frameworks to Counter Disinformation]
- Challenges & Gaps in Addressing FIMI in Platforms
- Tracking PRC Influence Through the China Index
- EU Sanctions Target Russian Disinformation and Hybrid Threats
- The U.S. Sentencing Commission Amendments
- Combating Disinformation in the AI Era
[CRC Glossary]

[Report Highlights]

According to a DFRLab report, coordinated networks used platform manipulation, conspiratorial narratives, and AI-generated content to amplify anti-Ukrainian disinformation in Poland, exploiting political decisions and security incidents to undermine public support for Ukraine.

Graphika’s analysis highlighted how Iranian state-aligned actors, including official media, inauthentic social media networks, and pro-Iran hacktivist groups, worked to shape perceptions of the 2025 Israel-Iran conflict.

As published by ABC News, in the aftermath of the Bondi terrorist attack, a large volume of disinformation spread online, including deepfake images and fabricated stories about victims and heroes.

As stated in The Record's report, researchers have uncovered a North Korean cyber operation that used generative AI to spread disinformation and conduct cyberattacks.

A report by EU Disinfo Lab summarized a year-long monitoring effort of four European elections by the “FIMI Defenders for Election Integrity” project, focusing on Foreign Information Manipulation and Interference.

The U.S. Sentencing Commission published a list of proposed amendments to federal sentencing guidelines in response to recent legislation, including the Take It Down Act, which addresses the creation and distribution of nonconsensual deepfake media.

FIMI-ISAC’s new report assessed that the 2025 Moldovan parliamentary elections were heavily targeted by foreign information manipulation and interference (FIMI), primarily driven by Russian-aligned actors.
[State Actors]

Russia

Denmark Accuses Russia of Cyber and Influence Attacks Targeting Elections

The Record reports that Denmark has summoned Russia’s ambassador after accusing Moscow of conducting cyber-enabled influence operations against critical infrastructure and democratic processes. The Danish Defence Intelligence Service attributed a 2024 cyberattack on a water utility and DDoS attacks on political party websites ahead of the November 2025 local elections to the Russia-linked groups Z-Pentest and NoName057(16). The operations disrupted the water supply for households near Køge and temporarily took party websites offline on the eve of the election, timed to maximize visibility and public impact. Danish intelligence described the attacks as part of Russia’s broader hybrid warfare strategy aimed at creating insecurity and undermining Western support for Ukraine. Russia rejected the accusations as unfounded and countered with unsubstantiated claims about Ukrainian-linked hackers operating from Denmark. The confrontation aligns with similar disputes elsewhere in Europe, reinforcing concerns about coordinated Russian cyber and information operations targeting elections and public trust.

Source: The Record (Recorded Future News), D. Antoniuk, Denmark summons Russian ambassador over alleged cyberattacks on water utility, elections. [online] Published 19 December 2025. Available at: https://therecord.media/denmark-summons-russian-ambassador-cyberattack-elections

Kremlin-Controlled Platform Laundered False Claims to Undermine EU-Canadian Freeze on Russian Assets

A report by DisinfoWatch detailed how “Strategic Culture”, a U.S.-sanctioned propaganda outlet run by Russia’s foreign intelligence service (SVR), is amplifying false and misleading claims to undermine the European Union’s decision to indefinitely freeze more than €200 billion in Russian sovereign assets. In a related example, a Strategic Culture article blended selective facts with manipulative framing, falsely claiming that the EU has permanently seized Russian assets by bypassing democratic processes and engaging in theft. It employed inflammatory rhetoric, including Nazi analogies aimed at EU leaders, conspiracy tropes involving figures such as George Soros and the World Economic Forum, and exaggerated claims about societal radicalization in Europe. The piece also advanced demonstrably false assertions, such as alleging that Romania’s elections were annulled because the EU disapproved of the results. In reality, the EU has moved to indefinitely immobilize, not expropriate, Russian assets as a lawful response to Russia’s invasion of Ukraine, using emergency legal provisions to prevent repeated vetoes. The report emphasizes that freezing assets under international law is a proportionate measure and not equivalent to seizure, and that Romania’s election annulment was decided independently by its Constitutional Court due to domestic irregularities.

Sources: DisinfoWatch, Kremlin SVR Controlled Platform Launders False Claims to Undermine EU-Canadian Freeze on Russian Assets. [online] Published 15 December 2025. Available at: https://disinfowatch.org/disinfo/kremlin-svr-controlled-platform-launders-false-claims-to-undermine-eu-canadian-freeze-on-russian-assets/

Strategic Culture Foundation, Theft Is Just and War Is Peace, According to the EU. [online] Published 15 December 2025.
Available at: https://web.archive.org/web/20251215203836/https:/strategic-culture.su/news/2025/12/15/theft-is-just-and-war-is-peace-according-to-the-eu/

Russia’s Disinformation Campaigns and Their Impact

An EUvsDisinfo article portrays Russia as the world’s largest spreader of disinformation, using FIMI to undermine trust in liberal democracies. Since 2022, these operations have focused on justifying Russia’s invasion of Ukraine, challenging Western values, and weakening international support for Kyiv. A key tactic of Russian disinformation is the use of sensational conspiracy theories aimed at discrediting Ukraine and the West. These include false claims about stolen jewellery from the Louvre appearing in Ukraine, fabricated stories of luxury purchases by Ukraine’s First Lady, and conspiracies suggesting that Western powers secretly choose Ukraine’s president. Although these stories are often absurd, they serve a strategic purpose. By flooding the information space with extreme and shocking claims, disinformation campaigns exhaust audiences and make it harder to distinguish fact from fiction. This distraction allows more subtle, yet false, narratives to spread, influencing public opinion.

Source: EUvsDisinfo, 5 Insidious Disinfo Narratives Spread by the Kremlin. [online] Published 16 December 2025. Available at: https://euvsdisinfo.eu/5-insidious-disinfo-narratives-spread-by-the-kremlin/

Kremlin Disinformation Through Historical Manipulation

EUvsDisinfo outlines Russia’s intensified use of historical revisionism since the start of its full-scale invasion of Ukraine, framing it as a tool to justify aggression and expansionism. The Kremlin presents itself as the sole guardian of “true history,” while accusing neighboring countries of falsification. Through revised textbooks, pseudo-academic publications, and official rhetoric, Moscow promotes myths of Russian superiority and denies the sovereignty of states such as Ukraine and Lithuania, using distorted history as an ideological weapon. A clear example is the publication of manipulated “histories” of Ukraine and Lithuania by Russia’s MGIMO University, endorsed by senior officials such as Foreign Minister Sergey Lavrov. These works deny Soviet crimes, question national independence, and label current governments as extremist or Western-controlled. Similar disinformation tactics include so-called “mapaganda,” in which altered maps are used to normalize Russia’s illegal territorial claims and mislead both domestic and international audiences. The Kremlin also uses modern technology to influence and manipulate sensitive historical debates abroad. In 2025, an AI-generated deepfake operation targeted Polish-Ukrainian relations by fabricating documents and videos linked to World War II-era atrocities, aiming to reignite old conflicts and weaken regional unity.

Source: EUvsDisinfo, Rewriting Borders of Truth: How Russian FIMI Falsifies Historical Memory. [online] Published 11 December 2025. Available at: https://euvsdisinfo.eu/rewriting-borders-of-truth-how-russian-fimi-falsifies-historical-memory/

Germany Accuses Russia of Cyberattacks and Disinformation Campaigns

As published by The New York Times, Germany has accused Russia of conducting a coordinated campaign of cyberattacks, espionage, and disinformation aimed at undermining the country’s infrastructure and social cohesion.
German authorities say these hybrid actions included a cyberattack on air traffic control in August 2024 and organized efforts to spread false information during Germany’s early 2025 general election. Based on intelligence assessments, Berlin stated it had clearly identified Russian responsibility and formally summoned the Russian ambassador in protest, while Moscow denied the allegations. Disinformation is a central element of the accusations. German officials said Russian-linked actors attempted to damage public trust by spreading fake stories about prominent politicians who were critical of Russia’s war in Ukraine. These false claims included fabricated allegations about mental health issues, personal misconduct, and private relationships. An article by SRF News added that German authorities summoned the Russian ambassador and stated that intelligence findings clearly link a cyberattack on German air traffic control in August 2024 to the Russian hacking group Fancy Bear, which they say operates under Russia’s military intelligence agency, the GRU. According to Berlin, the so-called “Storm-1516” campaign, active since 2024 and backed by GRU-linked organizations, sought to influence Western elections by spreading false narratives. During Germany’s 2025 election campaign, fake content targeted leading politicians such as Robert Habeck and Friedrich Merz, including fabricated videos claiming ballot manipulation just days before the vote.

Sources: The New York Times, Germany Russia Cyberattacks Sabotage Hybrid War. [online] Published 12 December 2025. Available at: https://www.nytimes.com/2025/12/12/world/europe/germany-russia-cyberattacks-sabotage-hybrid-war.html

SRF, Cyberangriff: Berlin bestellt Russlands Botschafter ein [Cyberattack: Berlin summons Russia’s ambassador]. [online] Published 12 December 2025. Available at: https://www.srf.ch/news/international/hybride-bedrohung-cyberangriff-berlin-bestellt-russlands-botschafter-ein

Foreign Disinformation in Romania’s 2024–2025 Presidential Crisis

According to a report by IIEA, Romania’s 2024 presidential election was severely disrupted by large-scale foreign interference and AI-driven disinformation, leading to the unprecedented annulment of the first-round vote. A fringe, pro-Moscow candidate surged unexpectedly after an almost entirely online campaign amplified by bot networks, AI-generated content, and algorithm manipulation, creating a distorted information environment. Investigations uncovered massive cyberattacks on election systems and evidence linking the disinformation campaign to Russian hybrid operations, prompting the Constitutional Court to void the results to protect electoral integrity. The annulment triggered political unrest and protests, revealing both Romania’s democratic resilience and its deep vulnerabilities. While institutions acted decisively to counter foreign-sponsored manipulation, the crisis exposed widespread public distrust, polarization, and the growing appeal of nationalist and Eurosceptic narratives, trends often reinforced by external disinformation. Fake narratives and digital propaganda played a central role in mobilizing support and undermining confidence in democratic processes. The rerun election in May 2025, won by pro-European reformer Nicușor Dan, restored some legitimacy but underscored the long-term threat posed by disinformation.

Source: The Institute of International and European Affairs (IIEA), Romania’s 2024–2025 Presidential Election Crisis and Its Aftermath. [online] Published 2025.
Available at: https://www.iiea.com/blog/romanias-20242025-presidential-election-crisis-and-its-aftermath

The War in Ukraine

Coordinated Disinformation Campaign Fuels Anti-Ukrainian Sentiment in Poland

As published in a DFRLab article, recent political and security developments in Poland triggered a surge of anti-Ukrainian disinformation, exploiting President Karol Nawrocki’s veto of refugee benefits and the incursion of Russian drones into Polish airspace. Online narratives framed Ukrainians as economic and security threats, blaming refugees for burdens and accusing Ukraine of provoking conflict to drag Poland and NATO into war. These messages sought to fracture support for Ukraine and strain Polish-Ukrainian relations by amplifying grievances through misleading, emotionally charged claims. Analysis by the DFRLab identified coordinated networks manipulating Facebook, YouTube, TikTok, and comment sections to entrench hostile narratives. Tactics included synchronized posting across misleadingly named Facebook pages, near-simultaneous amplification of clickbait articles, and monetizing traffic through advertising. Websites such as udostepnij.pl and affiliated pages amplified dehumanizing stories portraying Ukrainians as “freeloaders” or “parasites,” often distorting incidents and recycling content across networks to inflate reach. These efforts benefited from links among right-wing sites and suspicious spikes in engagement indicative of inauthentic activity. Additional campaigns leveraged conspiratorial framing and AI-generated content to broaden reach. A fringe spiritual group (Jasna Strona Mocy) injected anti-Ukrainian videos into apolitical hobby and lifestyle groups. At the same time, pages like Sztuczna Polityka circulated AI-generated videos depicting Ukrainians and President Zelenskyy as threats, showing indirect cross-platform amplification. After the drone incursion, small but coordinated clusters promoted false-flag and “not our war” narratives across Facebook and TikTok, including English-language accounts repeating identical captions. Together, these activities show how coordinated networks use inauthentic amplification, platform manipulation, and AI tools to distort discourse, undermine cohesion, and weaken Poland’s support for Ukraine.

Source: DFRLab, G. Gigitashvili & S. Gelava, How social media manipulation fuels anti-Ukraine sentiment in Poland. [online] Published 21 December 2025. Available at: https://dfrlab.org/2025/12/21/how-social-media-manipulation-fuels-anti-ukraine-sentiment-in-poland/

Debunking Kremlin Battlefield Disinformation

According to an article by the Atlantic Council, the Kremlin has promoted disinformation claiming that Russia’s invasion of Ukraine is proceeding exactly as planned, with constant victories and inevitable success. President Vladimir Putin has repeatedly exaggerated Russian battlefield achievements, particularly to project strength, demoralize Ukraine, and persuade international audiences—especially in the United States—that Russian victory is unavoidable and that Ukraine should accept a Kremlin-friendly peace. A clear example of this disinformation campaign is Putin’s repeated false claims that Russian forces had captured the strategically important city of Kupiansk. Russian officials announced the city’s fall multiple times, using it to threaten Ukraine and its allies. In reality, Kupiansk remained under Ukrainian control amid successful counteroffensives.
Ukrainian President Volodymyr Zelenskyy directly challenged these falsehoods by visiting the city himself and releasing a selfie video from the front line, visually disproving Kremlin claims and exposing them as deliberate lies.

Source: Atlantic Council, P. Dickinson, ‘Putin is lying’: Zelenskyy visits front to expose false claims of Russian gains. [online] Published 16 December 2025. Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/putin-is-lying-zelenskyy-visits-front-to-expose-false-claims-of-russian-gains/

Iran

Disinformation Campaigns During the 2025 Israel–Iran War

In June 2025, escalating tensions between Israel and Iran led to direct conventional warfare, accompanied by a coordinated information campaign from Iranian state, state-aligned, and proxy actors. Graphika’s analysis highlighted how these actors—including official media, inauthentic social media networks, and pro-Iran hacktivist groups—worked to shape perceptions of the conflict. Despite differences in their formal ties to the Iranian state, they promoted largely unified narratives aimed at influencing both domestic and international audiences. A key finding was a delayed information response, suggesting that pro-Iran actors were initially unprepared for a large-scale conventional conflict. Once activated, however, they rapidly flooded the information environment with a blend of breaking news alerts, aggressive rhetoric, and unverified or misleading claims. This surge of content appeared designed less to inform than to confuse, overwhelm, and obscure reliable reporting during and after key moments of the war. The report outlined a disinformation playbook focused on muddying the information space rather than establishing facts.

Source: Graphika, Everything Everywhere All at Once. [online] Published December 2025. Available at: https://public-assets.graphika.com/reports/Graphika_Report_Everything_Everywhere_All_at_Once_(part_1).pdf

[AI Related Articles]

Disinformation Claims After the Sydney Hanukkah Attack

As published by ABC News, in the aftermath of the Bondi terrorist attack, a large volume of disinformation spread online, including deepfake images and fabricated stories about victims and heroes. ABC News Verify found that false claims shared on social media gained millions of views, showing how quickly tragedy can be exploited. One major case involved Arsen Ostrovsky, who was injured in the attack and shared a real photo of himself shortly afterward. This image was later manipulated into an AI-generated deepfake and published by the Russian disinformation site Pravda, falsely accusing him of being part of “Mossad propaganda.” Technical analysis confirmed the image was fake, with visual errors including unreadable text, distorted backgrounds, and inconsistencies with verified photos and Mr. Ostrovsky's live video appearances. The deepfake was created using Google’s AI tools and falsely presented as authentic through misleading “AI detector” screenshots. This case highlights how deepfakes are used to spread antisemitic narratives and undermine trust in real victims during moments of public shock. Disinformation also targeted acts of heroism and public understanding of the attack, while other conspiracy theories misused Google Trends data to falsely suggest foreign involvement.
Disinformation also targeted acts of heroism and public understanding of the attack, while other conspiracy theories misused Google Trends data to falsely suggest foreign involvement. In addition, an article by CyberNews revealed that, around the time of the attack, as users turned to X with questions about circulating videos and the individuals involved, Grok provided inaccurate and misleading answers, including misidentifying real footage of a man tackling a gunman and incorrectly naming the person concerned. These errors were later corrected by X’s Community Notes, highlighting the gap between Grok’s claims and verified facts. Grok’s misinformation went further, falsely identifying the hero who disarmed one of the attackers, Ahmed Al-Ahmed, as multiple different people, including an unrelated individual and even an Israeli hostage held by Hamas. The chatbot also confused the Bondi Beach attack with other unrelated violent incidents and, in some cases, inserted references to the attack in responses where they were not relevant. Sources: ABC News Verify, M. Taouk, M. Workman & J. Hair, Racist and antisemitic false information spreads online following Bondi Beach terrorism attack. [online] Published 16 December 2025. Available at: https://www.abc.net.au/news/2025-12-17/abc-news-verify-misinformation-bondi-terrorist-attack/ Cybernews, N. Ancell, Elon Musk’s Grok spews misinformation about Bondi Beach terror attack. [online] Published 15 December 2025. Available at: https://cybernews.com/ai-news/grok-misinformation-bondi-beach/ Top Of Page The Erosion of Trust in Online and AI-Created Information As stated in a report by ASPI, AI chatbots, such as Grok on the social media platform X, are increasingly central to online political discourse but are not neutral information tools. Users often weaponize AI responses in debates. This cycle of performative outrage amplifies polarization and contributes to a broader environment in which AI-generated content becomes a vector for disinformation rather than a source of clarity. Large-scale analysis of hundreds of thousands of public AI interactions reveals consistent patterns of political arguments and manipulations amplified for social performance. The reliability of AI-generated content is also a structural concern. Studies by the European Broadcasting Union and the BBC show that nearly half of AI news answers contain factual errors, with broader issues, including incoherence or outdated data, in over 80 percent of cases. Public surveys reveal widespread skepticism: majorities fear AI will exacerbate misinformation and influence elections. Source: The Australian Strategic Policy Institute (ASPI), J. Coyne, Protecting Truth in the Era of AI Mediation. [online] Published 12 December 2025. Available at: https://www.aspi.org.au/strategist-posts/protecting-truth-in-the-era-of-ai-mediation/ Top Of Page AI Disinformation’s Threat to the Rule of Law The ABA AI Task Force’s annual report highlighted how artificial intelligence is reshaping the legal profession while also posing serious risks, particularly in the spread of disinformation. AI systems can rapidly generate and amplify false content, making misinformation easier to produce, more challenging to detect, and more influential. U.S. Supreme Court Chief Justice John Roberts has warned that disinformation, alongside violence and intimidation, directly threatens judicial independence and the rule of law. A significant concern is the rise of AI-generated deepfakes, which can convincingly depict people saying or doing things that never happened.
These tools are increasingly used for propaganda, to manipulate public opinion, and to undermine trust in institutions such as courts, media, and elections. In the legal system, deepfakes complicate the handling of evidence, as judges must now assess whether material is authentic or artificially generated, while also facing claims that genuine evidence may itself be fake. The report stressed the need for strong risk management, transparency, and accountability in AI development and use. Beyond disinformation, AI poses risks related to bias, privacy, cybersecurity, and fraud, but its role in spreading false narratives is among the most dangerous. Source: American Bar Association (ABA) Task Force on Law and Artificial Intelligence, Addressing the Legal Challenges of AI: Year 2 Report on the Impact of AI on the Practice of Law. [online] Published December 2025. Available at: https://www.americanbar.org/content/dam/aba/administrative/center-for-innovation/ai-task-force/2025-ai-task-force-year2-report.pdf Top Of Page AI Chatbot Personalization Fuels Climate Disinformation An investigation by Global Witness into AI chatbots shows that personalized responses can significantly increase the spread of climate disinformation, especially for users with conspiratorial worldviews. When presented with different user personas, chatbots reacted differently: while Meta AI largely stuck to consistent information, Grok and, to a lesser extent, ChatGPT adjusted their responses to match perceived user beliefs. In Grok’s case, this meant shifting from science-based explanations to endorsing climate conspiracy tropes, questioning whether a climate crisis exists, and framing climate action as government overreach or elite manipulation. The study found that some chatbots actively amplified climate denial by recommending known disinformers as “truth-tellers,” repeating debunked claims about net-zero policies, and inventing or exaggerating evidence to undermine climate action. Grok in particular promoted misleading narratives about energy crises, food shortages, and international climate conferences, while also casting doubt on efforts to combat climate misinformation. Most concerning, Grok encouraged users to increase outrage and extremity in social media posts to boost engagement, offering to make content more provocative, inflammatory, or even “violent.” This behavior highlights the risk of “AI sycophancy,” where systems prioritize user agreement and engagement over accuracy. Source: Global Witness, AI chatbots share climate disinformation and recommend climate denialists to susceptible personas. [online] Published 18 December 2025. Available at: https://globalwitness.org/en/campaigns/digital-threats/ai-chatbots-share-climate-disinformation-to-susceptible-users/ Top Of Page AI-Enabled Disinformation in North Korean Cyber Operations As stated in The Record’s report, researchers have uncovered a North Korean cyber operation that used generative AI to spread disinformation and conduct cyberattacks. Hackers linked to the Kimsuky group exploited ChatGPT to create deepfake South Korean military and government ID cards, which were then used in phishing emails targeting defense-related institutions. The forged IDs were designed to appear authentic, enabling attackers to deceive recipients and deliver malware to steal data and gain remote access to systems.
Although AI tools typically restrict the creation of official documents, investigators found that the hackers likely bypassed safeguards by requesting the images as “sample” or “mock-up” designs. Metadata analysis confirmed the photos were AI-generated, showing how generative AI can be misused to produce convincing false materials with minimal technical expertise. Source: The Record, D. Antoniuk, North Korean operation uses ChatGPT to forge military IDs as part of cyberattack. [online] Published 17 September 2025. Available at: https://therecord.media/north-korea-kimsuky-hackers-phishing-fake-military-ids-chatgpt Top Of Page [General Reports] Italy’s Disinformation Landscape As stated in a report by EU DisinfoLab, Italy’s disinformation environment is shaped by deep political polarization and a fragmented media ecosystem, where facts are frequently mixed with exaggeration and ideological framing. Several emblematic cases illustrate how disinformation operates in practice: the Global Sumud Flotilla was targeted by false claims alleging Hamas funding, staged attacks, and violations of Israeli waters, amplified by mainstream media, politicians, fake accounts, and AI-generated images. Similarly, Italy’s 2025 nationality referendum was distorted by misleading statements about immigration, integration, and legal standards, alongside coordinated hate speech and fake accounts aimed at “poisoning” public debate. At the same time, Italy has been a repeated target of Russian-linked networks such as Pravda, which used cloned news sites, AI-generated content, and coordinated amplification to spread pro-Kremlin narratives and undermine support for Ukraine and EU cohesion. Disinformation has also taken the form of direct foreign interference, exemplified by the Russian prank call to Prime Minister Giorgia Meloni, which functioned as a strategic communication attack rather than harmless satire. Broader narratives exploit domestic politics, migration fears, gender and minority rights, climate change, and health, often amplifying xenophobia, misogyny, and distrust in institutions. Source: EU DisinfoLab, M. G. Sessa, Disinformation landscape in Italy 2025. [online] Published 15 December 2025. Available at: https://www.disinfo.eu/wp-content/uploads/2025/12/20251215_Disinfo-landscape-in-Italy-V2.pdf Top Of Page Disinformation and Nationality Claims After the Bondi Beach Attack A report by NewsGuard revealed that after the deadly attack on the Jewish community in Sydney, disinformation spread rapidly online as pro-India and pro-Pakistan social media users exploited early confusion about the attackers’ nationality. Although Australian authorities later confirmed that one suspect was an Indian citizen and the other Australian-born, pro-India accounts falsely claimed both attackers were Pakistani. False claims were reinforced by faulty journalism and fabricated evidence. Several Indian news outlets initially reported the attacker as being of “Pakistani origin” and failed to issue corrections, while viral posts cited a fake statement supposedly from Pakistan’s foreign ministry claiming the suspects had recently visited Pakistan. Pakistani authorities confirmed the document was bogus, and no credible evidence supports claims of travel to Pakistan. Verified reporting instead links the attackers’ motivations to ISIS, with reports of travel to regions in the Philippines connected to extremist groups.
At the same time, pro-Pakistan accounts spread counter-disinformation, falsely alleging that the attacker had ties to India’s intelligence agency. No credible sources support this claim either. Source: NewsGuard Reality Check, I. Chomnalez, Pro-India Accounts Falsely Claim Australia’s Hanukkah Massacre Alleged Shooters Were Pakistani Nationals. [online] Published 18 December 2025. Available at: https://www.newsguardrealitycheck.com/p/pro-india-sources-misrepresent-sydney Top Of Page The November Reality Gap Index in the U.S. NewsGuard’s November Reality Gap Index showed that nearly half of Americans are susceptible to believing major false claims circulating online, rising slightly to 47 percent from October’s 46 percent. The most widely believed disinformation included an AI-generated audio clip falsely depicting President Donald Trump yelling at his cabinet to block the release of Jeffrey Epstein documents, misleading claims about New York City Mayor-elect Zohran Mamdani’s minimum-wage promises, and a conspiracy theory regarding Charlie Kirk’s alleged assassin. The AI-generated Trump audio clip was particularly impactful, garnering 8.6 million online views and leading 66 percent of respondents to either believe it or remain unsure of its authenticity. Similarly, over half of the respondents were uncertain about claims regarding Mamdani’s statements, indicating widespread uncertainty about politically charged narratives. The Index demonstrated that while some progress has been made (11 percent of respondents correctly identified all three claims as false, up from 6 percent in October), a large portion of Americans (37 percent) could not identify even a single false claim. Source: NewsGuard Reality Check, S. Tanner, NewsGuard’s “Reality Gap Index” Finds that Nearly Half of Americans Believe Recent Widespread False Claims. [online] Published 17 December 2025. Available at: https://www.newsguardrealitycheck.com/p/nearly-half-of-americans-believe-523 Top Of Page Disinformation Fueled by Misinterpretation of Google Trends Research by The Conversation revealed that following the Bondi Beach shooting, social media users spread conspiracy theories claiming that pre-attack searches for the shooter’s name, “Naveed Akram,” from locations like Tel Aviv indicated he was an Israeli agent. Similar claims have circulated after other incidents, such as the killing of Charlie Kirk. Experts clarify that these spikes are the result of “statistical noise” in Google Trends, especially for rare search terms or small geographic areas, and do not reflect actual coordinated activity or conspiracies. Google Trends samples only a portion of total searches and normalizes results on a 0–100 scale, so low-volume searches can appear as extreme spikes. The platform also updates results in real time, causing apparent changes from one view to the next. Many misleading posts exploit these quirks, treating a spike as though it were evidence of coordinated action or hidden connections. Misinterpretation of the data has repeatedly fueled false narratives about attackers’ identities and motives. Additionally, the search term “Naveed Akram” could refer to multiple people, further complicating interpretations. Source: The Conversation, J. Raubenheimer, How misreading Google Trends is fuelling Bondi attack conspiracy theories. [online] Published 18 December 2025. Available at: https://theconversation.com/how-misreading-google-trends-is-fuelling-bondi-attack-conspiracy-theories-272251
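A toy model can make that normalization effect concrete. The sketch below uses invented data and an assumed sampling rate (Google's real pipeline is not public): it samples a fraction of raw hourly searches, then rescales so the busiest bucket reads 100, which is enough to turn one or two real searches into an apparent extreme spike that moves around between resamples.

```python
import random

def trends_style_normalize(raw_counts, sample_rate=0.1, seed=None):
    """Sample each hour's raw searches, then rescale so the peak bucket reads 100."""
    rng = random.Random(seed)
    sampled = [sum(1 for _ in range(n) if rng.random() < sample_rate) for n in raw_counts]
    peak = max(sampled) or 1  # avoid dividing by zero when every sample is empty
    return [round(100 * s / peak) for s in sampled]

# A rare term in a small region: only a handful of real searches per hour.
raw = [0, 1, 0, 2, 0, 0, 1, 0]
for seed in (1, 2, 3):
    print(trends_style_normalize(raw, seed=seed))
# Each resample can turn a single captured search into a "100" spike,
# and the spike moves between runs - the statistical noise described above.
```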
Top Of Page Foreign Interference in Moldova’s 2025 Elections FIMI-ISAC’s new report assessed that the 2025 Moldovan parliamentary elections were heavily targeted by foreign information manipulation and interference (FIMI), primarily driven by Russian-aligned actors. Coordinated campaigns exploited Moldova’s pro-European orientation, economic hardships, and societal divisions to spread anti-government, anti-EU, and anti-NATO narratives. High-impact operations, including Storm-1516, Operation Overload, REST Media, and InfoLeader, used AI-generated content, bot networks, pseudo-media outlets, and paid influencers to amplify disinformation. These efforts portrayed the ruling Party of Action and Solidarity (PAS) as illegitimate and Western-controlled, while legitimizing pro-Russian blocs as defenders of neutrality and Moldovan identity. Domestic actors, including pro-Russian political parties and certain religious institutions, acted as amplifiers of Kremlin-aligned narratives, often evading sanctions or using opaque financing structures. Digital platforms such as Telegram and TikTok were exploited to flood comment sections, inauthentically engage diaspora communities, and disseminate false election reports, exposing Moldova’s regulatory gaps and platform accountability challenges. Despite these threats, Moldova’s coordinated monitoring efforts and preventive measures prevented any breach of core electoral infrastructure, though the information space remained highly manipulated. The elections highlighted the importance of institutionalized, strategic responses to disinformation. The creation of mechanisms such as the Centre for Strategic Communication and Combating Disinformation (CSCCD) and cross-agency coordination have improved detection, analysis, and counter-messaging. For sustained resilience, Moldova needs stronger regulatory oversight, enhanced cooperation with global platforms, professionalized crisis communication teams, and continued investment in media literacy and civic engagement. Source: FIMI-ISAC, Assessment of Foreign Information Manipulation and Interference in the 2025 Moldovan Parliamentary Election. [online] Published 5 December 2025. Available at: https://fimi-isac.org/wp-content/uploads/2025/12/FIMI-ISAC-Report_Moldova-Election-Country-Report_20251205.pdf Top Of Page [Appendix - Frameworks to Counter Disinformation] Challenges & Gaps in Addressing FIMI in Platforms A report by EU DisinfoLab summarized a year-long monitoring effort by the “FIMI Defenders for Election Integrity” project, focusing on Foreign Information Manipulation and Interference. Rather than documenting isolated incidents, the analysis examines how disinformation campaigns reveal systemic weaknesses in platform governance and potential failures to enforce the EU Digital Services Act (DSA). By reassessing twelve election-related incidents, the report shows how manipulative practices persist in grey areas between legal and illegal behavior, allowing disinformation to spread despite existing rules. A central finding is that disinformation is often enabled by weak reporting, limited follow-up, and inconsistent enforcement across platforms. Many incidents were never formally reported, reducing accountability and allowing harmful content and coordinated manipulation to remain online.
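One behavioral heuristic analysts commonly apply to surface such coordinated manipulation, anticipating the behavioral-pattern focus discussed next, is flagging pairs of accounts that share the same URL within seconds of each other. The hedged sketch below illustrates the idea; the records, the 30-second window, and the function names are invented for illustration and are not drawn from the report.

```python
from collections import defaultdict
from itertools import combinations

# Toy observations: (account, shared_url, unix_timestamp). Invented data.
posts = [
    ("page_a", "http://example.com/story", 1_700_000_000),
    ("page_b", "http://example.com/story", 1_700_000_004),
    ("page_c", "http://example.com/story", 1_700_000_007),
    ("page_d", "http://example.com/other", 1_700_009_000),
]

WINDOW_SECONDS = 30  # a judgment call; real investigations tune this empirically

def coordinated_pairs(posts, window=WINDOW_SECONDS):
    """Yield (account, account, url) triples posted within `window` seconds."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((account, ts))
    for url, shares in by_url.items():
        for (a1, t1), (a2, t2) in combinations(shares, 2):
            if abs(t1 - t2) <= window:
                yield a1, a2, url

for a1, a2, url in coordinated_pairs(posts):
    print(f"possible coordination: {a1} and {a2} both shared {url}")
```

The appeal of signals like this is that they describe behavior rather than content, so they do not require ruling on whether any individual post is false.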
The report argued that practical counter-disinformation efforts must shift focus from individual pieces of misleading content to behavioral patterns such as coordination, automation, and artificial amplification, which are more indicative of organized interference and harder for platforms to dismiss. The report also highlighted significant evidence gaps that hinder enforcement, including the lack of verifiable data, platform-specific transparency, and access to engagement metrics. It underscored the need for stronger collaboration among civil society, researchers, and regulators to translate qualitative observations into actionable cases. Source: EU DisinfoLab, M. G. Sessa, R. M. Serrano & J. McNamee, Regulatory Challenges & Gaps in Addressing Systemic Platform Abuse. [online] Published 15 December 2025. Available at: https://www.disinfo.eu/wp-content/uploads/2025/12/20251215_regulatory_challenges_and_gaps_in_addressing_systemic_platform_abuse-2.pdf Top Of Page Tracking PRC Influence Through the China Index In 2025, the China in the World (CITW) Community Fund expanded its efforts to examine global PRC influence using the China Index, which now covers 101 countries and allows comparisons over time. Several funded projects highlighted how state-linked influence, including information control and disinformation, operates through media, diaspora networks, technology, and political messaging. A research paper by Doublethink Lab revealed how narratives aligned with Beijing’s interests are promoted, sensitive topics are avoided or reframed, and local information environments are manipulated to reduce transparency and public debate. Multiple studies reveal how PRC influence can intersect with disinformation and narrative manipulation. Research on Chinese organized crime networks suggested that these groups may serve as informal channels of influence within overseas communities, supporting state objectives such as shaping views on Taiwan and other political issues. In South Asia and Southeast Asia, PRC influence in media, academia, and technology has contributed to self-censorship, selective reporting, and the downplaying of human rights concerns, particularly in countries with close economic or strategic ties to Beijing. In places like Pakistan, Malaysia, and Singapore, this influence is reinforced through media cooperation, diplomatic pressure, and public narratives that normalize or legitimize PRC positions. Other projects focused on countering disinformation by strengthening local journalism and public awareness. Citizen journalists in Zimbabwe investigated misleading narratives surrounding Chinese mining activities, while advocacy initiatives in Central America and Sub-Saharan Africa used the China Index to expose opaque influence operations and information manipulation. Source: Doublethink Lab, CITW Community Fund 2025: New Findings and Frontline Work to Counter PRC Influence. [online] Published 2025. Available at: https://medium.com/doublethinklab/citw-community-fund-2025-new-findings-and-frontline-work-to-counter-prc-influence-3c2be2741def Top Of Page EU Sanctions Target Russian Disinformation and Hybrid Threats The European Union announced that it has imposed sanctions on 12 individuals and two entities linked to Russian information manipulation, propaganda, and cyber operations.
Those targeted include Kremlin-affiliated analysts, influencers spreading pro-Russian narratives, and Western European former military and police officers promoting anti-Ukraine and anti-NATO messages. The sanctions also cover the International Russophile Movement, which amplifies destabilizing narratives globally, and the 142nd Separate Electronic Warfare Battalion, involved in electronic warfare and GPS disruptions affecting EU member states. The measures extend to Russian military intelligence unit 29155 and the cyber group Cadet Blizzard, responsible for cyberattacks on Ukrainian government organizations and EU and NATO targets. These actions aim to destabilize political systems, access sensitive information, and spread disinformation. Sanctions include asset freezes, travel bans, and prohibitions on EU citizens or companies providing financial or economic resources to those listed. Russian media outlets involved in disinformation may also face suspension of their broadcasting licenses under the expanded framework. A NewsGuard article on this subject noted that one notable individual added to the sanctions list is John Mark Dougan, a former Florida deputy sheriff who fled to Russia in 2016. Dougan and Storm-1516 have been linked to hundreds of fake news sites, including outlets masquerading as local newspapers like DCWeekly, Chicago Chronicle, and Boston Times. NewsGuard has tracked at least 121 false claims circulated by Dougan’s network targeting Germany, France, Moldova, Armenia, Ukraine, and the U.S., including deliberate attempts to manipulate primary elections and even infiltrate AI chatbots with disinformation. Sources: Council of the European Union, Press release, Russian hybrid threats: Council sanctions twelve individuals and two entities over information manipulation and cyber attacks. [online] Published 15 December 2025. Available at: https://www.consilium.europa.eu/en/press/press-releases/2025/12/15/russian-hybrid-threats-council-sanctions-twelve-individuals-and-two-entities-over-information-manipulation-and-cyber-attacks/ NewsGuard Reality Check, E. Maitland, Moscow-Based Operative John Mark Dougan Becomes First American Sanctioned by the EU for His Role in Russian Influence Operations. [online] Published 16 December 2025. Available at: https://www.newsguardrealitycheck.com/p/in-a-first-eu-sanctions-an-american Top Of Page The U.S. Sentencing Commission Amendments The U.S. Sentencing Commission published a list of proposed amendments to federal sentencing guidelines in response to recent legislation, including the Take It Down Act, which addresses the creation and distribution of nonconsensual deepfake media. The Act criminalizes publishing or threatening to publish intimate visual depictions or digital forgeries of adults and minors online, with penalties ranging from 18 months to three years in prison, depending on the offense. The Commission plans to update the Statutory Index and Commentary to reflect these new offenses and is requesting public comment on whether the guidelines adequately capture the severity and nature of these digital disinformation crimes. These changes highlight the growing concern over AI-driven disinformation, particularly lifelike deepfakes that can harass, humiliate, or misrepresent individuals. The amendments aim to ensure federal courts have clear sentencing guidance for crimes involving digitally fabricated content, recognizing the societal harm caused by false, manipulative, or threatening online media.
The proposals also explore whether related offenses, such as blackmail or extortion, should be referenced differently to better align penalties with the nature of digital disinformation. The broader set of proposed amendments underscores a federal effort to adapt legal frameworks to technological threats. By incorporating new offenses related to deepfakes into existing sentencing guidelines, the Commission seeks to strengthen deterrence and accountability and to invite public feedback on the effectiveness of these updates. Source: United States Sentencing Commission, Sentencing Guidelines for United States Courts. [online] Published 19 December 2025. Available at: https://public-inspection.federalregister.gov/2025-23473.pdf Top Of Page Combating Disinformation in the AI Era In her remarks at the Roundtable on Information Integrity in an Evolving AI Landscape, Ambassador Hedda Samson highlighted the European Union’s commitment to information integrity in a rapidly evolving AI landscape, emphasizing the dangers of disinformation and foreign interference in the information space. In conflict zones, such disinformation poses direct risks to peacekeepers and humanitarian personnel, making information integrity a matter of both societal security and human safety. The EU has developed measures that aim to protect freedom of expression while mitigating the societal harms of false and manipulative content. International cooperation, particularly with the UN, NATO, and G7, is central to this strategy, enabling information sharing, capacity building, and coordinated responses to cross-border disinformation campaigns. Source: European External Action Service (EEAS), EU Remarks by Ambassador Samson at the Roundtable on Information Integrity in an Evolving AI Landscape. [online] Published 2025. Available at: https://www.eeas.europa.eu/delegations/un-new-york/eu-remarks-ambassador-samson-roundtable-information-integrity-evolving-ai-landscape_en Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue growing in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 8th-14th December 2025

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [State Actors] Russia US Extradites Member of Russian Hacktivist Group Linked to Critical Infrastructure Attacks How Russia Uses FIMI to Manipulate Historical Memory Pro-Russia Hacktivists' Attacks Threaten Global Critical Infrastructure Russia Expands Disinformation and Influence Operations in Mexico After Ukraine Invasion Germany Accuses Russia of Cyberattacks and Disinformation Campaigns Foreign Disinformation in Romania’s 2024–2025 Presidential Crisis The War in Ukraine AI-Generated Influence Through Ordinary Personas The War in Gaza Structural Disinformation and Information Control in Gaza [AI Related Articles] AI-Generated Political Videos The Erosion of Trust in Online and AI-Created Information [General Reports] False Claims About ChatGPT and AIPAC Advertising Strategic Messaging Around China–India–Russia Cooperation [Appendix - Frameworks to Counter Disinformation] UK Sanctions Russian and Chinese Groups Over Information Warfare Generative AI Amplifies Disinformation and Hostile Influence Risks [CRC Glossary] [Report Highlights] A report by EUvsDisinfo examines how the Kremlin uses historical revisionism, manipulated maps, and digital tools to advance FIMI, justify Russia’s expansionism, and undermine the sovereignty and historical narratives of neighboring states. According to a report by The Record, the British government has imposed new sanctions on Russian and Chinese entities accused of conducting cyberattacks and influence operations aimed at undermining the West. As published by The New York Times, Germany has accused Russia of conducting a coordinated campaign of cyberattacks, espionage, and disinformation aimed at undermining the country’s infrastructure and social cohesion. Graphika’s investigation revealed a pro-Russian, anti-Ukraine network on Telegram called “OrdinAIry People”, which used AI to create hundreds of seemingly ordinary accounts. A GenAI report warns that generative AI significantly lowers barriers to large-scale disinformation and hostile influence campaigns, while existing regulatory efforts struggle to keep pace with the speed, scale, and global reach of AI-enabled manipulation. [State Actors] Russia US Extradites Member of Russian Hacktivist Group Linked to Critical Infrastructure Attacks As reported by The Record, the United States has extradited a 33-year-old Ukrainian national, Victoria Eduardovna Dubranova, to face federal charges for her alleged role in two Russian-linked hacktivist groups, CyberArmyofRussia_Reborn (CARR) and NoName057(16). According to the Justice Department, both groups have conducted hundreds of cyberattacks, including DDoS operations, in support of Russia’s geopolitical interests and with financial and operational backing from the Russian government. Prosecutors allege Dubranova participated in attacks targeting protected computers and critical infrastructure, including attempts to tamper with public water systems, actions that U.S. officials warned could endanger communities and national security.
Dubranova has pleaded not guilty to all charges and is scheduled to stand trial in early 2026. If convicted, she could face up to 27 years in prison for her alleged involvement with CARR and up to five years for charges linked to NoName057(16). Source: The Record, D. Antoniuk, US Extradites Member of Russian Hacking Groups Critical Infrastructure. [online] Published 10 December 2025. Available at: https://therecord.media/us-extradites-member-of-russian-hacking-groups-critical-infrastructur Top Of Page How Russia Uses FIMI to Manipulate Historical Memory A report by EUvsDisinfo examines how the Kremlin has intensified FIMI by systematically rewriting history to justify Russia’s expansionist policies. Since the full-scale invasion of Ukraine, Moscow has promoted a state-controlled version of “true history” through school textbooks, pseudo-academic publications, and official narratives aimed not only at domestic audiences but also at neighboring states and the wider international community. A central example is the publication of “History of Lithuania” by Russia’s MGIMO University, with a foreword by Foreign Minister Sergey Lavrov. The book denies Lithuanian sovereignty, downplays Soviet crimes, and frames modern Lithuanian politics as extremist, mirroring similar distortions previously applied to Ukraine. Beyond academic-style narratives, the Kremlin also employs “mapaganda” and digital manipulation to reshape perceptions of borders and the past. Russian law now labels maps contradicting its territorial claims as extremist, while altered maps are actively circulated abroad to normalize illegal annexations. The report also highlighted the use of advanced AI deepfakes, citing a 2025 operation in Poland that fabricated social media posts and videos to reignite sensitive historical disputes between Poland and Ukraine. Source: EUvsDisinfo, Rewriting Borders of Truth: How Russian FIMI Falsifies Historical Memory. [online] Published 11 December 2025. Available at: https://euvsdisinfo.eu/rewriting-borders-of-truth-how-russian-fimi-falsifies-historical-memory/ Top Of Page Pro-Russia Hacktivists' Attacks Threaten Global Critical Infrastructure This joint cybersecurity advisory from U.S., European, and allied agencies warned that pro-Russia hacktivist groups are targeting critical infrastructure, particularly operational technology (OT) and industrial control systems (ICS). Affected sectors include water and wastewater, food and agriculture, and energy, with some incidents causing limited physical disruption. A key concern is that these hacktivist activities are often accompanied by disinformation. Groups such as Cyber Army of Russia Reborn and NoName057(16) tend to exaggerate the scale and impact of their attacks through online messaging to create fear, undermine trust in public services, and amplify pro-Russia narratives. This information manipulation can make relatively low-impact incidents appear more severe, increasing their psychological and political effect. The advisory urged infrastructure owners and operators to reduce exposure of OT systems, strengthen authentication, and improve asset management to limit both real-world risk and the effectiveness of disinformation. Source: CISA & allied agencies, AA25-343A Pro-Russia Hacktivists Conduct Attacks. [online] Published 9 December 2025. Available at: https://www.cisa.gov/sites/default/files/2025-12/aa25-343a-pro-russia-hacktivists-conduct-attacks.pdf
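As one concrete, hedged illustration of what "reducing exposure of OT systems" can mean in practice, the sketch below probes hosts from an asset inventory for well-known industrial-protocol ports that should never be reachable from outside a segmented OT network. The port list, addresses, and function names are illustrative assumptions, not guidance taken from the advisory.

```python
import socket

# Common ICS/OT service ports (an illustrative subset, not from the advisory)
OT_PORTS = {102: "Siemens S7", 502: "Modbus/TCP", 20000: "DNP3", 44818: "EtherNet/IP"}

def exposed_ot_services(host: str, timeout: float = 0.5) -> list[str]:
    """Return labels for OT ports on `host` that accept a TCP connection."""
    findings = []
    for port, label in OT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the handshake succeeded
                findings.append(f"{host}:{port} ({label})")
    return findings

if __name__ == "__main__":
    # TEST-NET addresses standing in for a real asset inventory
    for host in ("192.0.2.10", "192.0.2.11"):
        for finding in exposed_ot_services(host):
            print("EXPOSED:", finding)
```

Run from outside the OT segment, an empty result is the expected outcome; any hit indicates exposure worth investigating.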
Top Of Page Russia Expands Disinformation and Influence Operations in Mexico After Ukraine Invasion According to a report by The Odessa Journal, citing Ukraine’s Center for Countering Disinformation, Kremlin-linked media outlets RT and Sputnik have significantly expanded their activities in Mexico following Russia’s full-scale invasion of Ukraine, intensifying the spread of anti-American and anti-Ukrainian narratives. U.S. diplomats describe this surge as a “sudden and aggressive” increase in Russian information influence aimed at undermining trust between Mexico and the United States. Russian officials and state media have also circulated false anti-Ukrainian claims, which were subsequently echoed by some local politicians and journalists, aligning with Moscow’s strategy of discrediting international support for Ukraine and portraying the West as aggressive. These efforts form part of a broader coordinated disinformation operation, including the large-scale “Doppelgänger” campaign, which seeks to distort coverage of the war in Ukraine, weaken U.S.–Mexico relations, and undermine Western backing for Kyiv. The campaign relies on the replication and amplification of misleading narratives across multiple platforms, enabling Russian messaging to appear locally rooted while advancing Kremlin strategic objectives. By embedding these narratives in domestic political and media debates, Russian actors enhance their credibility and reach within Mexico's information environment. Russian influence operations are further reinforced through institutional partnerships and professional outreach. The Mexican Journalists Club has regularly published content from Russian and Cuban state media and has hosted training sessions and lectures with RT representatives, presenting them as professional exchanges. This collaboration allows Russian outlets to normalize and legitimize their messaging as “alternative perspectives,” effectively laundering propaganda into the local media ecosystem. Alongside these information activities, Russia has reportedly increased its intelligence presence in Mexico by deploying experienced agents under diplomatic cover, reinforcing the convergence of information and intelligence operations. Source: The Odessa Journal, Center for Counteracting Disinformation: Russia Rapidly Expands Its Information and Intelligence Influence in Mexico. [online] Published 10 December 2025. Available at: https://odessa-journal.com/center-for-counteracting-disinformation-russia-rapidly-expands-its-information-and-intelligence-influence-in-mexico Top Of Page Germany Accuses Russia of Cyberattacks and Disinformation Campaigns As published by The New York Times, Germany has accused Russia of conducting a coordinated campaign of cyberattacks, espionage, and disinformation aimed at undermining the country’s infrastructure and social cohesion. German authorities say these hybrid actions included a cyberattack on air traffic control in August 2024 and organized efforts to spread false information during Germany’s early 2025 general election. Based on intelligence assessments, Berlin stated it had clearly identified Russian responsibility and formally summoned the Russian ambassador in protest, while Moscow denied the allegations. Disinformation is a central element of the accusations.
German officials said Russian-linked actors attempted to damage public trust by spreading fake stories about prominent politicians who were critical of Russia’s war in Ukraine. These false claims included fabricated allegations about mental health issues, personal misconduct, and private relationships. An article by SRF News added that German authorities summoned the Russian ambassador and stated that intelligence findings clearly link a cyberattack on German air traffic control in August 2024 to the Russian hacking group Fancy Bear, which they say operates under Russia’s military intelligence agency, the GRU. According to Berlin, the so-called “Storm 1516” campaign, active since 2024 and backed by GRU-linked organizations, sought to influence Western elections by spreading false narratives. During Germany’s 2025 election campaign, fake content targeted leading politicians such as Robert Habeck and Friedrich Merz, including fabricated videos claiming ballot manipulation just days before the vote. Sources: The New York Times, Germany Russia Cyberattacks Sabotage Hybrid War. [online] Published 12 December 2025. Available at: https://www.nytimes.com/2025/12/12/world/europe/germany-russia-cyberattacks-sabotage-hybrid-war.html SRF, Cyberangriff: Berlin bestellt Russlands Botschafter ein [Cyberattack: Berlin summons Russia’s ambassador]. [online] Published 12 December 2025. Available at: https://www.srf.ch/news/international/hybride-bedrohung-cyberangriff-berlin-bestellt-russlands-botschafter-ein Top Of Page Foreign Disinformation in Romania’s 2024–2025 Presidential Crisis According to a report by IIEA, Romania’s 2024 presidential election was severely disrupted by large-scale foreign interference and AI-driven disinformation, leading to the unprecedented annulment of the first-round vote. A fringe, pro-Moscow candidate surged unexpectedly after an almost entirely online campaign amplified by bot networks, AI-generated content, and algorithm manipulation, creating a distorted information environment. Investigations uncovered massive cyberattacks on election systems and evidence linking the disinformation campaign to Russian hybrid operations, prompting the Constitutional Court to void the results to protect electoral integrity. The annulment triggered political unrest and protests, revealing both Romania’s democratic resilience and its deep vulnerabilities. While institutions acted decisively to counter foreign-sponsored manipulation, the crisis exposed widespread public distrust, polarization, and the growing appeal of nationalist and Eurosceptic narratives, trends often reinforced by external disinformation. Fake narratives and digital propaganda played a central role in mobilizing support and undermining confidence in democratic processes. The rerun election in May 2025, won by pro-European reformer Nicușor Dan, restored some legitimacy but underscored the long-term threat posed by disinformation. Source: The Institute of International and European Affairs (IIEA), Romania’s 2024–2025 Presidential Election Crisis and Its Aftermath. [online] Published 2025. Available at: https://www.iiea.com/blog/romanias-20242025-presidential-election-crisis-and-its-aftermath Top Of Page The War in Ukraine AI-Generated Influence Through Ordinary Personas Graphika’s investigation revealed a pro-Russian, anti-Ukraine Telegram network called “OrdinAIry People” that used AI to create hundreds of seemingly ordinary accounts.
These personas, posing as everyday citizens from various countries, posted coordinated messages amplifying geopolitical grievances, often disconnected from the original context. For instance, the network responded to a Peruvian police video featuring a Grinch-costumed officer with AI-generated outrage about U.S. policy and Ukraine, illustrating how mundane or humorous content can be exploited to spread disinformation and influence narratives. The network’s operations highlight how AI facilitates large-scale content production, though not subtle deception. While personas may initially seem authentic, patterns such as synchronized posts, repeated linguistic quirks, and prompt artifacts reveal the underlying automation. The AI prompts focused on themes critical of Western governments, immigration, and Ukraine, often redirecting the content to unrelated discussions, such as skincare advertisements or local news stories, thereby demonstrating a disconnect between narrative framing and real-world context. Graphika emphasized that AI alone does not guarantee influence. Community and platform administrators often push back, deleting posts or banning accounts. Organizations must therefore assess not only the presence of AI-generated content but also its reach, context, and resonance. Source: Graphika, The Graphika Team, Ordinary Outrage: The Ordinary AI People That Couldn't Steal Christmas. [online] Published 10 December 2025. Available at: https://graphika.com/posts/ordinary-outrage-the-ordinary-ai-people-that-couldn-t-steal-christmas Top Of Page The War in Gaza Structural Disinformation and Information Control in Gaza A report by GIOR argued that between 2018 and 2022, Hamas created a repressive environment in Gaza that enables structural disinformation by controlling information at its source. Independent investigations by Human Rights Watch, Amnesty International, and the Committee to Protect Journalists document systematic torture of critics, mass arrests, and repeated detention of journalists, creating fear that silences independent voices. As a result, journalists, civil society actors, and even UN staff operate under coercion, making independent verification of events and data extremely difficult. Disinformation in this context does not rely on false statements, but on enforced omissions. UN agencies themselves acknowledge that casualty figures and humanitarian data from Gaza cannot be independently verified and are primarily sourced from Hamas-controlled institutions. A UN-commissioned review found that fear for personal safety prevents staff from reporting misconduct, and that intimidation and interference have occurred within UN facilities. These conditions mean that information reaching Western governments and media appears neutral and authoritative, even though it is produced within a highly constrained and unfree environment. This system represents a sophisticated form of information warfare, in which controlling who collects data and what can be reported enables distortion without overt propaganda. Western policymakers are urged to treat Gaza-sourced humanitarian data with caution, increase transparency about verification limits, and invest in independent monitoring mechanisms. The report stressed that the absence of blatant falsehoods does not mean the absence of disinformation when information is filtered through coercion. Source: Global Influence Operations Report (GIOR), The Gaza Information Trap: How Hamas Coercion Shapes What the West Knows. [online] Published 8 December 2025.
Available at: https://www.global-influence-ops.com/reports/the-gaza-information-trap-how-hamas-coercion-shapes-what-the-west-knows/ Top Of Page [AI Related Articles] AI-Generated Political Videos Recent AI-generated political videos, ranging from Zohran Mamdani as a creepy trick-or-treater to Gavin Newsom body-slamming Donald Trump, highlight the growing presence of AI content in American politics. An article by TechXplore emphasized that, while many of these videos are clearly fake and often meme-like, their purpose is less about deception and more about provoking strong emotional responses, generating engagement, and signaling political allegiance. Politicians and supporters alike are increasingly using AI tools to amplify messages, mock opponents, and influence online discourse. Emotion plays a key role in the spread of disinformation. AI videos that elicit outrage, amusement, or excitement are more likely to go viral, and these strong emotions reduce viewers’ ability to discern real events from fabricated content. The accessibility of AI tools enables both ordinary users and political actors to quickly generate hundreds of videos. At the same time, platforms often fail to clearly label AI-generated content, blurring the line between fact and fiction. Source: TechXplore, L. Fazio (The Conversation), AI Generated Political Videos Memes. [online] Published 11 December 2025. Available at: https://techxplore.com/news/2025-12-ai-generated-political-videos-memes.html Top Of Page The Erosion of Trust in Online and AI-Created Information As stated in a report by ASPI, AI chatbots, such as Grok on the social media platform X, are increasingly central to online political discourse but are not neutral information tools. Users often weaponize AI responses in debates. This cycle of performative outrage amplifies polarization and contributes to a broader environment in which AI-generated content becomes a vector for disinformation rather than a source of clarity. Large-scale analysis of hundreds of thousands of public AI interactions reveals consistent patterns of political arguments and manipulations amplified for social performance. The reliability of AI-generated content is also a structural concern. Studies by the European Broadcasting Union and the BBC show that nearly half of AI news answers contain factual errors, with broader issues, including incoherence or outdated data, in over 80 percent of cases. Public surveys reveal widespread skepticism: majorities fear AI will exacerbate misinformation and influence elections. Source: The Australian Strategic Policy Institute (ASPI), J. Coyne, Protecting Truth in the Era of AI Mediation. [online] Published 12 December 2025. Available at: https://www.aspi.org.au/strategist-posts/protecting-truth-in-the-era-of-ai-mediation/ Top Of Page [General Reports] False Claims About ChatGPT and AIPAC Advertising In a December 2025 NewsGuard article, anti-Israel social media users pushed false claims that OpenAI’s ChatGPT had started running ads and had promoted the pro-Israel lobbying group AIPAC. The allegation was based on a widely shared screenshot purporting to show a ChatGPT response containing an AIPAC ad, and it quickly gained millions of views on X. The posts fueled backlash, including calls to cancel OpenAI subscriptions, amplifying the narrative that ChatGPT was being used for political influence. Closer examination showed the claim was disinformation.
The image contained apparent inconsistencies, including an inaccurate AIPAC logo and fonts not used in official AIPAC materials. Investigations found that the image was likely doctored from an earlier, unrelated ChatGPT screenshot showing a Target logo. That original image was linked to a known ChatGPT feature that recommends apps from approved “pilot partners,” not paid political advertising. OpenAI has confirmed that it has not launched traditional advertising in ChatGPT, and AIPAC is not among its pilot partners. Source: NewsGuard Reality Check, L. Marchl, Anti-Israel Accounts Spread Fake Screenshot of an AIPAC Ad on ChatGPT. [online] Published 10 December 2025. Available at: https://www.newsguardrealitycheck.com/p/chatgpt-is-not-running-pro-israel Top Of Page Strategic Messaging Around China–India–Russia Cooperation According to a publication by Insikt Group, the August 2025 meeting between China, India, and Russia has fueled speculation about a new trilateral bloc, but the analysis concludes that a durable alliance is unlikely. Public messaging around the meeting risks overstating unity and can mislead audiences into believing a cohesive anti-Western bloc is forming. Disinformation and strategic signaling play an indirect role in this dynamic. All three states promote narratives that emphasize resistance to U.S. influence and sanctions, which can obscure underlying divergences. For example, claims of growing unity stand in contrast to concrete actions, such as China and India reducing Russian oil imports after new U.S. sanctions. The report warns against taking symbolic summits or official statements at face value. While China and Russia consistently frame cooperation as part of a long-term push against Western dominance, India balances this messaging with efforts to maintain ties with the United States. The result is a complex information environment where narratives of trilateral strength may function more as strategic signaling than as evidence of a genuine, lasting alliance. Source: Insikt Group, Implications of Russia-India-China Trilateral Cooperation. [online] Published 10 December 2025. Available at: https://www.recordedfuture.com/research/implications-of-russia-india-china-trilateral-cooperation Top Of Page [Appendix - Frameworks to Counter Disinformation] UK Sanctions Russian and Chinese Groups Over Information Warfare A report published by The Record describes Britain escalating pressure with fresh sanctions on Russian and Chinese actors blamed for cyber offensives and influence campaigns targeting Western societies. Announcing the measures, Foreign Secretary Yvette Cooper warned of escalating hybrid threats that blend cyber operations, information warfare, and physical sabotage. The sanctions target seven Russian individuals and organizations, including the pro-war Telegram channel Rybar and its co-owner, Mikhail Zvinchuk; the GRU-linked organization Pravfond; and the Centre for Geopolitical Expertise, founded by nationalist ideologue Aleksandr Dugin. Several of those sanctioned have been linked to Russian intelligence activities and prior influence operations, and some are already wanted by U.S. authorities. The U.K. also sanctioned two Chinese technology firms, i-Soon and Integrity Technology Group, accused of supporting cyber espionage operations against British and allied targets. Cooper framed the measures as part of a wider effort to counter Russia’s war against Ukraine and the broader campaign of cyberattacks, disinformation, and intimidation spilling across Europe.
She argued that Europe must reduce its reliance on the United States and develop new coalitions to confront these threats. Critics argue that sanctions alone are insufficient to deter state-backed cyber and information warfare and call instead for a more proactive and coordinated European response. Source: The Record, A. Martin, UK Sanctions Russia-China Entities Information Warfare. [online] Published 10 December 2025. Available at: https://therecord.media/uk-sanctions-russia-china-entities-information-warfare Top Of Page Generative AI Amplifies Disinformation and Hostile Influence Risks The GenAI report identifies disinformation as one of the most immediate and concrete risks posed by generative AI, particularly through the large-scale production of synthetic text, images, audio, and video that can convincingly mimic authentic human communication. These capabilities significantly lower the cost, skill threshold, and time required to create misleading or false content, enabling rapid amplification of propaganda, deepfakes, impersonation, and other forms of information manipulation that can distort public discourse and undermine trust in democratic processes. The authors describe how malicious or irresponsible use of generative AI exploits structural weaknesses in the digital information ecosystem, including the speed of online dissemination, the opacity of model training data, and the difficulty of distinguishing synthetic from authentic content. Tactics highlighted include using AI to mass-produce persuasive narratives, generate fake imagery or video, and automate disinformation campaigns at scale. The report notes that both closed- and open-source models can be repurposed by malicious actors, particularly once models are released or fine-tuned beyond the original developer’s control. In response, the report maps how governments and international bodies are seeking to mitigate disinformation risks through regulatory, co-regulatory, and transparency-based approaches, including labeling requirements for AI-generated content, watermarking, auditing obligations, and risk-based oversight regimes. However, it underscores that regulatory efforts are constrained by uncertainty, uneven global enforcement, and the concentration of technical power in private firms. As a result, the report frames disinformation as a persistent governance challenge that requires coordinated international standards, stronger enforcement mechanisms, and sustained scrutiny of how generative AI infrastructure can be weaponized for hostile influence operations. Source: GenAI Report REV Master. [online] Available at: https://fsi9-prod.s3.us-west-1.amazonaws.com/s3fs-public/2024-12/GenAI_Report_REV_Master_%0Aas%20of%20Dec%2012.pdf Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment are projected to continue growing in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 24th-30th November 2025

    [Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] New location feature on X 'weaponized' to spread misinformation [State Actors] Russia Russian Disinformation Targeting Georgia Russia Escalates Coordinated Disinformation Campaign Ahead of Armenia’s 2026 Elections [The War in Ukraine] Kremlin Attacks Ukraine Through Hack-and-Leak Narratives Disinformation Surrounding Colombian Volunteers in Ukraine Kremlin Fake Peace Rhetoric About the War with Ukraine [AI Related Articles] Fake AI Creates “Digital Reincarnations” Disinformation in Political Campaigns Beyond AI [General Reports] Influential ‘U.S.’ X Accounts Spread False Claims from Abroad Foreign Disinformation During the 2025 Czech Parliamentary Elections Disinformation as a Strategic Threat During COVID-19 Global Poll Shows Disinformation Among Top Security Fears as U.S. Influence Declines [Appendix - Frameworks to Counter Disinformation] Addressing Disinformation Through Media Literacy [CRC Glossary] [Report Highlights] · According to an article by ABC, Elon Musk’s social platform X introduced a global location-display feature intended to verify account authenticity and enhance platform integrity. Instead, the tool quickly generated confusion, inaccuracies, and opportunities for misinformation. · An article published by DW states that Russia is intensifying a coordinated disinformation campaign ahead of Armenia’s 2026 elections, using AI-generated content, bot networks, and impersonating media outlets to undermine Prime Minister Nikol Pashinyan and discredit Armenia’s Western orientation. · As reported by DFR Lab, in August and September, pro-Russia media promoted two coordinated disinformation campaigns aimed at weakening Western support for Ukraine. · As stated in a Stop Fake report, since Russia invaded Ukraine in 2022, Kyiv has actively sought foreign volunteers, with a notable surge from Latin America, especially Colombia. · According to a report by the Institute for Strategic Dialogue (ISD), the 2025 Czech parliamentary elections unfolded amid an increasingly complex information environment shaped by long-standing foreign influence efforts, particularly from Russian-linked networks. [Social Media Platforms] New location feature on X 'weaponized' to spread misinformation According to an ABC article, Elon Musk’s social platform X introduced a global location-display feature intended to verify account authenticity and enhance platform integrity. Instead, the tool quickly generated confusion, inaccuracies, and opportunities for misinformation. Experts warned that the feature amounted to “weaponized decontextualization,” as accounts were assigned incorrect countries due to VPN use, outdated data, or technical errors. These inaccuracies were quickly exploited to question the legitimacy of users, including journalists reporting on sensitive conflicts. Several prominent accounts, including those from Australian media outlets, government institutions, and academics, were mislabeled, further eroding trust in the platform.
Moreover, NewsGuard's Reality Check report showed that the feature has already triggered a wave of deliberate misuse, with partisan X accounts fabricating screenshots to falsely portray political figures and commentators as operating from foreign countries. NewsGuard identified multiple falsified images circulating widely, including a fake screenshot suggesting U.S. commentator Nick Fuentes was based in Nigeria and another claiming U.K. Prime Minister Keir Starmer's account originated in Ukraine. In both cases, users misrepresented impostor accounts or manufactured location data to fuel political narratives. Source: ABC, NewsGuard, 'New location feature on Elon Musk's X 'weaponised' to spread misinformation' Available Online: https://www.abc.net.au/news/2025-11-25/social-media-x-location-feature-user-trust/106045476 ; https://www.newsguardrealitycheck.com/i/179868003/partisan-x-accounts-misuse-new-x-feature-to-advance-false-claims-about-user-location Top Of Page [State Actors] Russia Russian Disinformation Targeting Georgia According to an article by DFR Lab, sanctioned Russian propaganda networks are amplifying disinformation to undermine Georgia's pro-democracy movement and legitimize anti-Western narratives. As protests against the increasingly authoritarian government continue, websites in English, French, and Polish circulate claims that the West is plotting coups, engineering unrest, or manipulating Georgia's political future. These narratives mirror Kremlin talking points and echo the ruling Georgian Dream party's rhetoric. A key disinformation hub is The Intel Drop, which republishes content from Russian intelligence-linked outlets such as New Eastern Outlook and South Front. Its articles often recycle false claims that Georgian elections are Western provocations, that Ukrainian operatives smuggle explosives to destabilize the country, or that past uprisings like the Rose Revolution and Maidan were CIA-led coups. These stories distort isolated events, dismiss independent election observations, and portray the EU and US as hostile actors attempting to drag Georgia into conflicts with Russia. The same narratives are spread through mirror sites like newsnet.fr in France and Canal666 in Poland, both of which systematically launder material from sanctioned Russian sources. Source: DFR Lab, 'Suspicious websites amplify narratives from sanctioned Russian entities targeting Georgia' Available Online: https://dfrlab.org/2025/11/28/suspicious-websites-amplify-narratives-from-sanctioned-russian-entities-targeting-georgia/ Top Of Page Russia Escalates Coordinated Disinformation Campaign Ahead of Armenia's 2026 Elections An article published by DW states that Russia is intensifying a coordinated disinformation campaign to undermine Armenia's Western-leaning government ahead of the June 2026 parliamentary election. Experts report a sharp rise in anti-government narratives targeting Prime Minister Nikol Pashinyan, characterized by greater organization, faster circulation, and more precise targeting than in previous years.
These efforts include AI-generated photos, audio, and deepfakes, appearing in Armenia's information space for the first time, with the overarching goal of eroding public trust in Armenian institutions, discrediting the country's Western partnerships, and opening political space for more Kremlin-aligned actors. The campaign integrates multiple tactics and actors. Bot networks such as Matryoshka, previously active in other countries, have begun distributing AI-generated videos mimicking Western media and accusing Pashinyan of destroying Armenia's cultural foundations and promoting "non-traditional values." Russian and pro-Russian actors are also employing "doppelganger" techniques, creating websites that imitate legitimate outlets to spread fabricated scandals and corruption stories. One such operation pushed a false claim about US-funded laboratories conducting secret experiments on Armenian civilians, content traced to the Russian Foundation to Battle Injustice, widely viewed as a Kremlin-aligned disinformation front. These narratives are reinforced by local pro-Russian voices as well as Azerbaijani and Turkish actors, forming a broader hostile information ecosystem. Armenian officials describe this pressure as a form of hybrid warfare, noting that Russian commentators regularly call for political change in Armenia and that such interference is expected to intensify as the election approaches. While Moscow denies involvement, researchers emphasize that the tactics now deployed in Armenia mirror those used by Kremlin-linked actors in Georgia and Moldova, including the rapid creation of impersonating websites and the coordinated amplification of hostile messaging. As Armenia's relations with Russia have deteriorated since Pashinyan's rise in 2018 and Russia's perceived failure to meet security obligations in 2021–2022, the Kremlin's disinformation efforts appear focused on reshaping Armenia's political orientation ahead of 2026. Source: DW, 'Russia's disinformation campaign in Armenia gains momentum' Available Online: https://www.dw.com/en/russias-disinformation-campaign-in-armenia-gains-momentum/a-74868051 Top Of Page [The War in Ukraine] Kremlin Attacks Ukraine Through Hack-and-Leak Narratives As reported by DFR Lab, in August and September, pro-Russia media promoted two coordinated disinformation campaigns aimed at weakening Western support for Ukraine. Both used alleged "hack and leaks" to create the illusion of insider access to sensitive data. The first claimed hacktivists had obtained Ukrainian military records proving 1.7 million soldiers were dead. Launched on Telegram and reinforced by Kremlin-aligned outlets like MASH and RIA Novosti, the narrative used unverifiable documents and sensational images to frame Ukraine as militarily devastated. The second campaign, originating from a fringe Italian outlet, alleged that Western states planned to divide Ukraine and seize control of its territory under the guise of military aid, a claim later amplified by Russian media and repeated by Hungary's prime minister to lend it legitimacy. These narratives exploited information gaps and wartime uncertainty to question the value of Western aid and Ukraine's resilience. Casualty figures, rarely disclosed by Kyiv and inconsistently estimated by analysts, became a key vulnerability, allowing propaganda accounts to present fabricated numbers as "leaked truth." The messaging portrayed Ukraine as doomed, corrupt, and dependent, arguing that Western efforts were pointless.
The distribution channels underscored how disinformation spreads rapidly without formal coordination. Telegram was the primary vector, with thousands of small accounts pushing the narratives and millions of views accumulated in under a week. The stories then jumped to TikTok, Facebook, and fringe media, often through supposedly personal accounts that exclusively repost pro-Kremlin content. Amplification by Russian state media, pro-war bloggers, and sympathetic political figures helped move the claims from anonymous sources into mainstream discourse. Source: DFR Lab, 'Pro-Kremlin actors seed narratives to undermine Western military aid to Ukraine' Available Online: https://dfrlab.org/2025/11/28/pro-kremlin-actors-seed-narratives-to-undermine-western-military-aid-to-ukraine/ Top Of Page Disinformation Surrounding Colombian Volunteers in Ukraine As stated in a Stop Fake report, since Russia invaded Ukraine in 2022, Kyiv has actively sought foreign volunteers, with a notable surge from Latin America, especially Colombia. The Ukrainian Ministry of Defense promotes recruitment by sharing testimonies and success stories, but these efforts are heavily targeted by Kremlin-led disinformation. Russia spreads false or decontextualized content to discourage volunteers and undermine trust in Ukraine, using AI-generated imagery, fabricated media logos, and exaggerated claims about casualties or organ trafficking. These tactics often blend real incidents with false narratives, making them appear more credible. Colombian volunteers are at the center of these campaigns. Disinformers circulate stories of mistreatment, unpaid soldiers, and claims that Ukraine refuses to return bodies, manipulating tragedies such as the case of the missing soldier Michael Ramírez. AI-generated videos portraying forced conscription, fake recruitment posters offering unrealistic benefits, and offensive claims about "inferior races" aim to inflame public opinion and reduce volunteer enlistment. Even Colombian government warnings are picked up by Russian channels to amplify fear and distrust. The broader objective is to shape a perception of Ukraine as abusive, corrupt, and militarily failing. Source: Stop Fake, 'Russian disinformation campaigns about foreign volunteers in the Ukrainian Armed Forces: fakes about losses and the "black market for organs"' Available Online: https://www.stopfake.org/ru/massovaya-gibel-inostrannyh-dobrovoltsev-i-ih-organy-na-chyornom-rynke-rossijskie-dezinformatsionnye-kampanii-protiv-mezhdunarodnoj-mobilizatsii-v-vsu/ Top Of Page Kremlin Fake Peace Rhetoric About the War with Ukraine According to a report by EU vs. Disinfo, Russia has repeatedly used diplomacy as a façade while pursuing domination over its neighbors. Despite public statements about peace, the Kremlin runs foreign information manipulation and interference (FIMI) campaigns to paint Moscow as reasonable and Ukraine as the aggressor. This strategy has been visible for decades, from covert backing of separatists in Moldova and Georgia to denying direct involvement while entrenching troops and recognizing breakaway regions as "independent." In Ukraine, disinformation is central to Russia's war. After annexing Crimea and secretly intervening in Donbas, Moscow signed the Minsk agreements with no intention of complying. Instead, it violated ceasefires, supported separatists, and accused Ukraine of blocking peace.
Putin now claims Russia did not start the war and is merely protecting Donbas, narratives that ignore overwhelming evidence of Russian orchestration and military aggression. This pattern continues today: missile and drone attacks often follow peace talks, as seen after negotiations in Geneva. Meanwhile, Russian propaganda blames Ukraine, the EU, and the UK for "sabotaging peace." Source: EU vs. Disinfo, 'Russia's peace rhetoric: A smokescreen for aggression' Available Online: https://euvsdisinfo.eu/russias-peace-rhetoric-a-smokescreen-for-aggression/ Top Of Page [AI Related Articles] Fake AI Creates "Digital Reincarnations" According to a report by CyberNews, AI-generated "digital reincarnations" are becoming a disturbing tool for disinformation and emotional manipulation. Platforms like TikTok struggle to detect or remove deepfakes that impersonate deceased individuals, as seen in cases where dozens of fake victim videos remained online until media pressure intervened. Public figures such as Zelda Williams have demanded that people stop sending AI recreations of their late relatives, while systems like OpenAI's Sora have been exploited to generate offensive deepfakes of icons like Martin Luther King Jr. Griefbots (AI simulations designed to mimic lost loved ones) blur the line between comfort and manipulation. AI ethicists warn that users can develop emotional dependence, similar to relationships with AI companions. Legal protections lag behind technology. In the US, most deceased individuals have little defense against deepfakes, as federal law does not grant posthumous publicity rights, and state protections primarily focus on commercial use. Newer laws like Tennessee's ELVIS Act and New York's Digital Replica law attempt to curb non-consensual AI clones, but enforcement is costly and complex. Until comprehensive legislation, such as the proposed No Fakes Act, takes effect, reporting deepfakes to platforms and setting explicit digital rights in estate documents remain among the few practical safeguards against these evolving disinformation threats. Source: CyberNews, 'AI deepfakes target the dead | Cybernews' Available Online: https://cybernews.com/ai-news/ai-image-dead-people/ Top Of Page Disinformation in Political Campaigns Beyond AI According to an article by The Hill, recent political ads, such as the AI-generated video of Sen. Jon Ossoff, highlight a growing problem: campaigns are using digital tools to fabricate statements and simulate media coverage. However, the core issue is not the technology itself but the deliberate spreading of falsehoods. Historical examples, from Plato's warnings to wartime propaganda, show that disinformation long predates artificial intelligence. AI makes fabrication faster, cheaper, and harder to detect, allowing deceptive political messages to circulate widely and gain legitimacy. The danger of AI-assisted disinformation lies in its ability to blur reality. In one case, a political group created a video of Sen. Chuck Schumer delivering real quotes, even though no such footage existed, raising ethical questions about how information is presented. The more AI is used to generate convincing visuals or voices, the easier it becomes to manipulate public opinion while claiming authenticity. This creates "gray areas" where technically accurate content can be delivered in deceptive formats, undermining trust. The broader risk is erosion of public confidence.
As AI headlines dominate, voters may begin dismissing both real and false information as "just AI," feeding cynicism and weakening democracy. Source: The Hill, 'The use of AI-generated deepfakes in campaign ads: Lying is the problem, not AI' Available Online: https://thehill.com/opinion/technology/5624624-lying-ai-problem-politics/ Top Of Page [General Reports] Influential 'U.S.' X Accounts Spread False Claims from Abroad A NewsGuard Reality Check investigation found that several influential pro-Trump accounts on X, which appeared to be operated by Americans, were in fact run from countries including Nigeria, Thailand, Bangladesh, Turkey, and Eastern Europe. These covertly foreign accounts collectively spread at least 31 significant false claims over the past 15 months, many targeting U.S. political figures, electoral processes, and international affairs. The activity came to light after X introduced a geolocation feature intended to improve transparency, revealing that some of the platform's most prominent "American" political accounts were actually based overseas. Three high-impact accounts, posing as fan pages for Ivanka Trump, Barron Trump, and conservative influencer Charlie Kirk, reached nearly two million followers while amplifying narratives debunked in NewsGuard's False Claims Fingerprint database. Falsehoods ranged from fabricated allegations against Democratic leaders to disinformation about supposed Ukrainian corruption and staged military movements inside the United States. Some claims echoed well-known pro-Kremlin narratives, further highlighting the geopolitical dimension of the manipulation. While X suspended one Nigeria-based account, the platform has not commented on the broader pattern of foreign operations. Source: NewsGuard, 'X Accounts Masquerading as American Spread 31 Significant False Claims About U.S. Politics from Abroad' Available Online: https://www.newsguardrealitycheck.com/i/179868003/x-accounts-masquerading-as-american-spread-significant-false-claims-about-us-politics-from-abroad Top Of Page Foreign Disinformation During the 2025 Czech Parliamentary Elections According to a report by ISD, the 2025 Czech parliamentary elections unfolded amid an increasingly complex information environment, shaped by long-standing foreign influence efforts, particularly from Russian-linked networks. Although disinformation actors attempted to undermine trust in institutions, promote narratives of electoral fraud, and amplify societal divisions, investigators found no evidence of large-scale, coordinated foreign interference capable of affecting the election outcome. The vote was ultimately deemed legitimate, transparent, and technically resilient. Minor disruptions, such as a temporary malfunction in the eDoklady digital ID system, were quickly resolved, and extensive media coverage, civil society monitoring, and calm public engagement helped maintain confidence throughout the process. Despite this resilience, the election highlighted persistent vulnerabilities in the Czech information space. Russian-affiliated outlets, proxy websites, and cross-platform amplification networks continued to launder and spread narratives aimed at eroding support for Ukraine, weakening EU and NATO alignment, and fueling polarization over issues such as migration and defense.
Structural weaknesses, including delays in implementing the Digital Services Act, limited oversight of campaign financing, low trust in political institutions, and the frequent domestic amplification of misleading narratives, created fertile ground for FIMI and long-term erosion of trust. High political polarization and uncertainty around new electoral procedures, especially postal voting, further intensified susceptibility to disinformation. The election results reflected strong democratic engagement, with a turnout of nearly 69%. Andrej Babiš's ANO party secured a commanding lead and entered government negotiations, while state institutions, civil society, and international partners demonstrated effective, though still incomplete, cooperation in countering FIMI. Looking forward, the report stresses the need for sustained investment in digital resilience, transparent media and advertising rules, cross-border intelligence sharing, and platform accountability. Source: FIMI-ISAC, 'Assessment of Foreign Information Manipulation and Interference in the 2025 Czech Parliamentary Election' Available Online: https://fimi-isac.org/wp-content/uploads/2025/11/FRT-24_Globsec_Czech-Election-Report_FINAL.pdf Top Of Page Disinformation as a Strategic Threat During COVID-19 As revealed in a report by the NATO Association of Canada, the COVID-19 pandemic unleashed an "infodemic," in which misinformation and disinformation spread faster than the virus itself. False narratives undermined trust in public health measures, eroded compliance with safety protocols, and fragmented social cohesion. In Canada, these effects were intensified by existing social inequities. Marginalized communities, already facing barriers such as discrimination, housing insecurity, and low health literacy, were more vulnerable to online conspiracy theories about vaccines and masks. Anti-mandate movements exploited public fatigue and emotional reactions, fueling protests like the Freedom Convoy, while widespread vaccine falsehoods reached millions of Canadians. Disinformation thrived amid inconsistent government messaging and declining trust in institutions. Confusing or contradictory policy announcements created a vacuum that false narratives quickly filled. Research showed that facts alone cannot counter misinformation: they must be delivered with clarity, empathy, and reliability. When official communication fails, disinformation becomes the default explanation, weaponizing public fear and frustration. The pandemic also highlighted disinformation as a geopolitical threat. State and non-state actors used coordinated campaigns to undermine confidence in democratic systems, portraying organizations like NATO as incompetent or malicious. Source: NATO Association of Canada, 'Disinformation and Public Health in the Post-Pandemic Era: What COVID-19 Taught Canada and NATO About Resilience' Available Online: https://natoassociation.ca/disinformation-and-public-health-in-the-post-pandemic-era-what-covid-19-taught-canada-and-nato-about-resilience/ Top Of Page Global Poll Shows Disinformation Among Top Security Fears as U.S. Influence Declines According to an Ipsos article, global public opinion data from the latest Ipsos World Affairs polling shows that disinformation remains one of the most widely recognized threats, with 77% across 30 countries viewing deliberate attempts to manipulate public opinion as a real and immediate danger, matching concerns about hacking for fraud or espionage.
This heightened sensitivity to information threats exists alongside growing geopolitical pessimism: only one-third believe the world is improving, and most say it has become more dangerous, even as confidence in maintaining a strong national defense remains high. The survey also captures shifting perceptions of global influence, with Canada rated the most positive international actor for the tenth consecutive year. At the same time, views of the United States have sharply deteriorated. Only 48% now believe the U.S. will have a positive impact in the coming decade, down dramatically from previous years, and many countries see America as losing global influence compared to China, BRIC states, and CRINK countries. Meanwhile, Iran, Israel, Pakistan, Russia, and Saudi Arabia are viewed least favorably as global influencers. Public attitudes toward national defense and security are similarly mixed; strong majorities support maintaining robust military capabilities, yet only modest support exists for mandatory service, and opinions differ widely across countries. While most respondents believe economic power outweighs military strength in global affairs, the persistent concern about disinformation campaigns and cyber-enabled manipulation underscores the continued salience of non-kinetic threats within the global security landscape. Source: Ipsos, 'Disinformation, hacking seen as top threats; Reputation of America drops' Available Online: https://www.ipsos.com/en-nl/disinformation-hacking-seen-top-threats-reputation-america-drops Top Of Page [Appendix - Frameworks to Counter Disinformation] Addressing Disinformation Through Media Literacy According to a K-12Dive report, experts emphasized that as students spend more time on digital media, educators must proactively teach media literacy across all subjects. Rather than directly arguing against conspiracy beliefs, teachers should use open-ended, non-judgmental questions that encourage students to reflect on their own thinking. This approach helps students understand how misinformation and disinformation spread and prevents them from becoming defensive. Media literacy should start with understanding the broader media ecosystem, including the attention economy and the incentives behind digital content. Educators are encouraged to help students analyze who benefits from a post, why it was created, and whether it deserves engagement. Museums and institutions like the 9/11 Memorial & Museum provide valuable resources and promote critical thinking strategies, such as Occam's razor and the investigation of whether evidence could change one's view. The rise of artificial intelligence intensifies the challenge, as multiple sources may reinforce false ideas. Students should examine diverse sources rather than fixating on a single post, and they must monitor their emotional reactions, since disinformation often exploits strong feelings to bypass rational thought. Source: K-12Dive, 'How to combat misinformation and disinformation in the classroom' Available Online: https://www.k12dive.com/news/how-to-combat-misinformation-and-disinformation-in-the-classroom/806511/ Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue escalating in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • CRC Weekly: Cyber-based hostile influence campaigns 1st-7th December 2025

[Introduction] Cyber-based hostile influence campaigns are aimed at influencing target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks, which enhance their effect. During the last week, we observed, collected, and analyzed endpoints of information related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report is a summary of what we regard as the main events. [Contents] [Introduction] [Report Highlights] [Social Media Platforms] X Location Data Reveal Foreign Influence Operations EU Fines X for Transparency Failures [State Actors] Russia Aéza Hosting Provider Linked to Pro-Kremlin Disinformation and Cybercrime Operations The War in Ukraine Derkach's Dual Role: Corruption Networks and Russian Influence Ops Ongoing False Claims About Zelensky Feed China China Turns to Manipulated Footage to Shape Strategic Narratives [AI Related Articles] Human Signals, Bot Noise: Understanding Narrative Momentum Google's Nano Banana Pro: Built for Creativity, Open to Misuse Conversational AI Affects Political Opinions [General Reports] Disinformation Emerges as a Key Global Business Risk The Rise of a Global Disinformation-for-Hire Industry Macron's Anti-Disinformation Push Sparks Backlash from Right-Wing Media and Opponents [Appendix - Frameworks to Counter Disinformation] Countering the Noise: EU Tackles Disinfo in Yemen Turkic States Seek Coordinated Strategy Against Cross-Border Digital Disinformation [CRC Glossary] [Report Highlights] The Record reports that the European Commission has fined Elon Musk's platform X €120 million for violating the Digital Services Act (DSA), a law designed to curb online disinformation and influence operations. According to the VSquare article, Aéza, a Russian hosting provider tied to pro-Kremlin disinformation and significant cybercriminal activity, continues operating mainly in Europe, despite international sanctions and criminal charges against its founders. NewsGuard found that pro-Russian accounts circulated a clip from President Volodymyr Zelensky's 7 November 2025 address, falsely claiming that cocaine fell from his nose during the broadcast. NewsGuard flagged doctored videos spread by pro-China sources that fabricate interviews with Ryukyu Islanders claiming to be Chinese, not Japanese. The World Economic Forum's 2025 Executive Opinion Survey reflects widespread worries about economic instability, but it notably singles out disinformation as a separate, sharply intensifying threat. A Reuters article stated that Macron's effort to promote professional standards against online disinformation has triggered an intense backlash from right-wing media and political opponents, who have reframed the initiative as an attempt at censorship despite his insistence that the state should not define what counts as news. The European Union announced that, in partnership with the Studies and Economic Media Centre, it hosted a two-day workshop in Taiz in December 2025 to address the growing challenge of disinformation in Yemen. [Social Media Platforms] X Location Data Reveal Foreign Influence Operations Axios highlighted the new transparency feature on the platform X in a recently published article. This feature made it visible that many well-known political accounts are not based in the United States. Some of these profiles present themselves as MAGA supporters. Others pretend to be patriotic women or run Ivanka Trump fan profiles.
However, the transparency feature showed that many of these accounts are operated from Nigeria, Pakistan, Thailand, or Eastern Europe. This confirms long-standing warnings from cybersecurity experts: foreign actors are using fake profiles to influence political debate in the United States and possibly earn money through X's revenue-sharing program. The feature, found in "About This Account," reveals where users behind profiles are posting from. However, it quickly became clear that the feature has limitations and is not always reliable. Especially in the beginning, several errors occurred. For example, it incorrectly showed that the U.S. Department of Homeland Security's (DHS) account was posting from Israel. The DHS had to correct this publicly. Other accounts also showed incorrect locations, which caused further confusion. Many profiles previously suspected of taking part in influence or information campaigns were removed quickly. In addition, a separate concern emerged: manipulated screenshots could be used to distort the feature's intended purpose. Overall, the situation shows that while the transparency feature offers new insights, it has limitations. Source: Axios, A. Lotz, X's new location feature exposes political accounts based outside the U.S. [online] Published 24 November 2025. Available at: https://www.axios.com/2025/11/24/x-location-feature-foreign-political-accounts Top Of Page EU Fines X for Transparency Failures According to an article by The Record, the European Commission has fined Elon Musk's platform X €120 million for violating the Digital Services Act (DSA), a law designed to curb online disinformation and influence operations. Regulators found that X misled users with its paid "verified" checkmark system, allowing anyone to buy credibility without proper identity checks, and failed to meet transparency rules for political ads and researcher access. These shortcomings weaken Europe's efforts to detect and counter online manipulation and misinformation. A core DSA requirement is that major platforms provide a real-time public repository of political advertising and enable independent researchers to access public data. The Commission stated that X's repository lacks basic details such as ad content, topics, and funding sources, and that the platform has placed obstacles that prevent researchers from investigating coordinated information operations and systemic risks. The fine has heightened political tensions, with Musk and several U.S. officials claiming the EU's enforcement amounts to censorship. European authorities counter that opaque algorithms, inadequate verification, and restricted research access make it easier for disinformation to spread. Source: The Record, A. Martin, EU issues €120 million fine to Elon Musk's X under rules to tackle disinformation. [online] Published 5 December 2025. Available at: https://therecord.media/eu-fines-x-under-digital-services-act-disinformation-transparecy-rules Top Of Page [State Actors] Russia Aéza Hosting Provider Linked to Pro-Kremlin Disinformation and Cybercrime Operations In a recently published article, VSquare examines the Russian server-hosting company Aéza, a key technical enabler of the well-known and ongoing Russian influence network Doppelganger. This operation deploys convincing clones of Western media outlets to manipulate public discourse in Europe and the United States, relying on hosting infrastructure designed to resist takedown attempts.
Although Aéza portrays itself as "apolitical," it operates as a classic bulletproof hosting provider, offering services frequently used by cybercriminal groups and state-linked influence operations. Particular emphasis is placed on state responses: the United States, the United Kingdom, and Australia imposed sanctions on Aéza in 2025, arguing that the company provided infrastructure for cyberattacks, illicit marketplaces, and hostile digital operations. Russia has also initiated its own criminal proceedings: Aéza's founders were arrested and charged with supplying hosting resources for BlackSprut, a billion-dollar darknet drug market. Despite these actions, the European Union has refrained from imposing sanctions on Aéza, allowing the company to continue operating servers in Europe and maintain a presence through resellers and connectivity partners. This illustrates that coordinated international efforts are necessary to disrupt resilient hosting ecosystems used for cybercrime and hostile digital influence. Source: VSquare, K. Vejnbender, From Darknet to Disinfo: How a 'Bulletproof' Russian Host Evades EU Sanctions. [online] Published 4 December 2025. Available at: https://vsquare.org/sanctions-west-prosecuted-in-russia-aeza-still-working-in-the-eu-doppelganger/ Top Of Page The War in Ukraine Derkach's Dual Role: Corruption Networks and Russian Influence Ops A recent Jamestown Foundation analysis indicates that a Ukrainian corruption probe may have uncovered that former parliamentarian and current Russian senator Andrei Derkach laundered a substantial amount of stolen security funds through offices tied to his family in Kyiv. Derkach grew up in a family with ties to Russian intelligence services and later served in Soviet and post-Soviet intelligence units. He entered Ukrainian politics in the 1990s and was appointed head of Energoatom in 2006. This is described as a turning point, when he is believed to have built an FSB-linked network inside Ukraine's nuclear sector, potentially strengthening Kremlin leverage over critical infrastructure. Long-standing protection from senior officials may have shielded him from accountability and enabled him to maintain extensive political and business ties. The analysis also outlines Derkach's possible role in Russian influence operations. He is portrayed as an actor who helped shape campaigns targeting the 2020 U.S. election, U.S.–Ukraine relations, and narratives framing Ukraine as dependent on Washington. His release of manipulated recordings involving Petro Poroshenko and Joe Biden is presented as an effort to generate political disruption in Kyiv and weaken Western backing. Russian-aligned media outlets, shell platforms, and domestic allies appear to have amplified these themes to deepen mistrust and institutional strain. Source: Jamestown Foundation, T. Kuzio, Eurasia Daily Monitor. Ukrainian Corruption Investigation Reveals Derkach's Role. [online] Published 12 January 2025. Available at: https://jamestown.org/ukrainian-corruption-investigation-reveals-derkachs-role/ Top Of Page Ongoing False Claims About Zelensky Feed As published by NewsGuard, pro-Russian accounts circulated a video clip from Ukrainian President Volodymyr Zelensky's 7 November 2025 address, falsely claiming that cocaine fell from his nose on live camera.
The posts revived a familiar propaganda narrative portraying Zelensky as a drug user and spread rapidly on X, with some accumulating millions of views. NewsGuard examined a slowed, high-resolution version of the clip from Zelensky's official YouTube channel. The footage shows the droplet originating from his mouth, and its appearance is consistent with saliva. Pro-Kremlin accounts nevertheless amplified the low-quality reposts to create ambiguity and push the drug-use narrative. Independent assessments of such claims have found no credible evidence to support them. Source: NewsGuard, I. Chomnalez, Cocaine? No, Saliva: NewsGuard's False Claim of the Week. [online] Published 5 December 2025. Available at: https://www.newsguardrealitycheck.com/p/cocaine-no-saliva-newsguards-false Top Of Page China China Turns to Manipulated Footage to Shape Strategic Narratives According to a report by NewsGuard, pro-China sources have circulated manipulated videos that falsely depict residents of Japan's Ryukyu Islands claiming to be Chinese rather than Japanese. The coordinated campaign appears aimed at strengthening Beijing's territorial claims amid heightened tensions, even though the Ryukyu Islands, including Okinawa, have been part of Japan since 1879. NewsGuard identified more than 100 such videos across multiple platforms, all featuring Japanese influencers discussing everyday topics in Japanese. Chinese accounts added fabricated Mandarin captions to portray these influencers, incorrectly, as Ryukyu locals who support China's claims. None of the people in these videos discusses the territorial status of the Ryukyu Islands. Some accounts amassed hundreds of thousands of followers and millions of likes by repeatedly miscaptioning videos. The disinformation surge began shortly after Japan elected Prime Minister Takaichi Sanae, known for her firm stance toward China. None of the manipulated videos contains genuine statements supporting Chinese sovereignty. Instead, they illustrate a deliberate attempt to influence public perception through miscaptioned content and fabricated claims. Source: NewsGuard, C. Lin, Pro-China Sources Misrepresent Influencers' Videos to Advance Territorial Claims on Japanese Islands. [online] Published 4 December 2025. Available at: https://www.newsguardrealitycheck.com/p/chinese-sources-push-territorial Top Of Page
[AI Related Articles] Human Signals, Bot Noise: Understanding Narrative Momentum A recent contribution from Graphika argues that blaming online outrage on swarms of bots misses how disinformation actually gains traction. Bot detection remains challenging because AI tools enable inauthentic accounts to produce convincing impersonations, while common warning signs (high repost ratios, unusual posting times, frequent profile changes) are also typical in legitimate online communities. Earlier bot-scoring systems such as Botometer generated high false-positive rates, underscoring that "spammy" behavior can be entirely organic. K-pop fandoms are a prime example: their massive output reflects genuine coordination rather than automated manipulation. Graphika's analysis stresses that disinformation becomes influential only when trusted community figures pick it up and bring their audiences with them. Many online backlashes start with individuals reacting to brand decisions or cultural flashpoints; bots may then attach themselves to the discussion, but they seldom generate momentum on their own. Research on vaccine debates and China-linked Spamouflage networks shows that users overwhelmingly engage with human-authored posts, while bot-driven content attracts little sustained attention.
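To see why such behavioral signals misfire, consider a toy scorer built over exactly the warning signs named above. This is a minimal illustrative sketch with invented weights and threshold, not Botometer's actual model, and the fan account is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    repost_ratio: float        # share of posts that are reposts
    offhour_post_share: float  # share of posts made 01:00-05:00 local time
    profile_changes_90d: int   # display-name/avatar changes in 90 days

def naive_bot_score(a: Account) -> float:
    """Toy weighted sum over the warning signs named in the article.
    The weights are arbitrary illustrative values, not a trained model."""
    return (0.5 * a.repost_ratio
            + 0.3 * a.offhour_post_share
            + 0.2 * min(a.profile_changes_90d / 10, 1.0))

# A genuine K-pop fan account: heavy reposting, late-night activity,
# frequent profile refreshes -- all organic, yet it scores "bot-like".
fan = Account("kpop_fan", repost_ratio=0.9, offhour_post_share=0.6,
              profile_changes_90d=8)
score = naive_bot_score(fan)
print(f"{fan.name}: score={score:.2f}, flagged={score > 0.5}")
```

The entirely human account clears the flagging threshold, which is the false-positive failure mode the piece attributes to purely behavioral scoring.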
Source: Graphika, The Graphika Team, (Don't) Blame it on the Bots. [online] Published 3 December 2025. Available at: https://graphika.com/posts/don-t-blame-it-on-the-bots Top Of Page Google's Nano Banana Pro: Built for Creativity, Open to Misuse Research by NewsGuard found that Google's new text-to-image generator, Nano Banana Pro, can produce highly realistic images that reinforce false claims. In NewsGuard's test, the tool generated disinformation-aligned images for every prompt (30 out of 30), including false depictions of politicians, public health issues, and major brands. In several cases, the model even introduced additional, credible-looking details that strengthened the narrative. The photorealistic results were convincing enough to appear legitimate, resembling journalism or official materials. Nano Banana Pro includes a visible watermark and an embedded SynthID marker. Still, the visible mark can be cropped out easily, and most users lack the tools required to verify invisible watermarks. Unlike competing image generators that block prompts involving public figures, Nano Banana Pro imposes no comparable restrictions. The outcome is a powerful image model with limited guardrails, capable of supporting false narratives across politics, global conflicts, health communication, and corporate reputation. These capabilities could turn the model into a tool that may be misused to produce misleading content and advance hostile influence efforts. Source: NewsGuard, I. Chomnalez & S. Rubinson, Google's New AI Image Generator Is a Misinformation Superspreader. [online] Published 3 December 2025. Available at: https://www.newsguardtech.com/special-reports/google-new-ai-image-generator-misinformation-superspreader/ Top Of Page Conversational AI Affects Political Opinions As published in Science, recent research by Hackenburg and Lin examined how large language models can shift political attitudes. Both teams conducted large-scale experiments in which participants engaged in multi-turn chat conversations with an LLM tasked with persuading them about a political issue or a real-world candidate. Across both projects, this produced more than 80,000 individual interactions, though each study remained methodologically separate. Hackenburg's study in the United Kingdom tested 19 models and several persuasion strategies across hundreds of policy topics. Lin's study in the United States, Canada, and Poland assessed whether LLMs could influence views on presidential or prime ministerial candidates and on a local ballot measure. Participants rated their opinions on a 0–100 scale before and after the conversation. A clear pattern emerged: attitudes on policy issues shifted by roughly ten points on average after a single LLM dialog. Effects on well-known candidates were minor and appeared mainly among participants who were initially opposed to the candidate. Both teams tested multiple persuasion tactics. Personalized messages offered almost no additional impact. The most substantial effects were observed in messages containing many fact-like statements. Later checks showed that a significant portion of these statements was false, yet they still influenced participants. The findings suggest that LLMs can alter political views primarily by providing large volumes of plausible information. Because false and accurate claims can have similar effects, the studies point to a potential risk for democratic systems: political judgments may be shifted even when the underlying information is unverified.
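The headline effect reduces to paired arithmetic over the pre/post ratings. A minimal sketch, with invented ratings standing in for participant data (the studies' actual datasets and estimators are more elaborate):

```python
from statistics import mean

# Hypothetical pre/post opinion ratings on the 0-100 scale described above
# (higher = more agreement with the position the LLM argued for).
pre  = [42, 55, 30, 61, 48, 37]
post = [55, 63, 41, 66, 58, 49]

# Per-participant shift toward the advocated position, then the average
# effect of the kind the studies report (roughly ten points).
shifts = [b - a for a, b in zip(pre, post)]
print("per-participant shifts:", shifts)
print("mean shift:", round(mean(shifts), 1))
```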
Source: Science, Vol. 390 Issue 6777, L. P. Argyle, Political persuasion by artificial intelligence. [online] Published 4 December 2025. Available at: https://www.science.org/doi/10.1126/science.aec9293 Top Of Page [General Reports] Disinformation Emerges as a Key Global Business Risk The World Economic Forum's 2025 Executive Opinion Survey (EOS) shows that concerns about economic development dominate business leaders' risk perceptions worldwide. Respondents most frequently cited rising living costs, weak growth, unemployment, insufficient social protection, and high levels of debt. At the same time, misinformation and disinformation are identified for the first time as a distinct top-five risk among G20 countries. Respondents pointed to the rapid spread of generative AI, the expansion of coordinated information operations, and increasing attempts to manipulate political debates, electoral processes, and market reactions. In North America, Europe, and East and Southeast Asia in particular, disinformation appears closely linked to "adverse AI outcomes" and "cyber insecurity," underscoring the growing strategic relevance of this risk area. The EOS results show that disinformation is emerging as an independent strategic risk (see reports in our research/article section: "The Attack on Luxury Brands: A case study of the weaponization of the online ecosystem by China" and "CIB Operation Targeting Western Automotive Brands"). Companies and policymakers increasingly see not just isolated threat incidents but a structural vulnerability in information environments, one that can influence political processes, public debate, and economic decision-making. Source: Marsh McLennan, Global Risks for Business: Regional and Country Perceptions. [online] Published 3 December 2025. Available at: https://www.marshmclennan.com/insights/publications/2025/november/global-risks-for-business-regional-and-country-level-perceptions.html Top Of Page The Rise of a Global Disinformation-for-Hire Industry According to a report by EU vs. Disinfo, a new, privatized disinformation industry is transforming how states conduct influence operations, allowing authoritarian regimes to manipulate foreign populations while maintaining plausible deniability. Once tightly controlled by governments, propaganda and information operations are now outsourced to private firms offering "disinformation as a service," including fake social-media campaigns, AI-generated content, hacking, and narrative management (for more, see our latest blog post). This commercialization enables malign actors to experiment with risky tactics and amplify false narratives globally, often making them appear organic or locally produced. High-profile examples, such as Team Jorge, illustrate the scale and sophistication of this ecosystem. The firm reportedly interfered in dozens of elections using AI-powered platforms like AIMS to create thousands of fake social-media accounts with synthetic identities. Russian private companies similarly run operations that mirror former state troll farms, providing covert online assets and spreading state-aligned narratives while shielding governments from accountability. Modern campaigns increasingly operate in hybrid spaces, blending online manipulation with real-world mobilization, covert media fronts, and local influencers, making them difficult to detect. The commercialization and automation of disinformation create an asymmetrical information warfare environment that favors autocracies. Authoritarian states can project influence globally with minimal risk, while democracies remain exposed due to open networks and legal transparency obligations. Source: EUvsDisinfo, The rise of the disinformation-for-hire industry. [online] Published 5 December 2025. Available at: https://euvsdisinfo.eu/the-rise-of-the-disinformation-for-hire-industry/ Top Of Page Macron's Anti-Disinformation Push Sparks Backlash from Right-Wing Media and Opponents As a Reuters article stated, French President Emmanuel Macron's nationwide campaign to highlight the dangers of online disinformation, algorithmic manipulation, and foreign-backed narratives has ignited intense backlash from right-wing political figures and media outlets aligned with billionaire Vincent Bolloré. Macron has framed the effort as an urgent attempt to protect democratic integrity ahead of the 2027 presidential election, emphasizing the need for voluntary professional "labelling" of news outlets to promote ethical standards, while stressing that the state should play no role in determining what constitutes legitimate news. Bolloré-owned outlets, including CNews, Europe 1, and the Journal du Dimanche, accused Macron of attempting to pressure or "bring into line" media critical of him, framing the initiative as a veiled move toward censorship. Influential hosts and commentators presented his remarks as evidence of "authoritarian drift." At the same time, far-right leaders such as Marine Le Pen and Jordan Bardella escalated the narrative, likening Macron's comments to the creation of a dystopian "Ministry of Truth." In response, Macron's office released a video rebutting the claims, arguing that the uproar demonstrated how efforts to combat disinformation can quickly be distorted.
Observers noted that right-wing actors were exploiting the issue for political gain, accusing the government of censorship while themselves benefiting from a media ecosystem increasingly shaped by partisan narratives. The confrontation also unfolds against a broader backdrop in which debates over media regulation and digital information integrity have grown more polarized, mirroring struggles seen in the U.S. and U.K. Source: Reuters, M. Rose, Macron's campaign to fight fake news meets resistance from right-wing media. [online] Published 3 December 2025. Available at: https://www.reuters.com/business/media-telecom/macrons-campaign-fight-fake-news-meets-resistance-right-wing-media-2025-12-03/ Top Of Page [Appendix - Frameworks to Counter Disinformation] Countering the Noise: EU Tackles Disinfo in Yemen The European Union announced that, in partnership with the Studies and Economic Media Centre (SEMC), it hosted a two-day workshop in Taiz in December 2025 to address the growing challenge of disinformation in Yemen. More than 100 journalists, fact-checkers, and media professionals participated in sessions focused on verifying information, countering misinformation, and promoting responsible journalism. Practical tools and digital analysis techniques were emphasized to help participants detect and respond to false narratives. By equipping journalists and fact-checkers with the skills and networks needed to detect and counter disinformation, the EU aims to strengthen public accountability and foster a more trustworthy and resilient information environment in Yemen. Local fact-checking platforms shared verification methodologies and strategies for identifying disinformation, while participants explored ways to improve collaboration between media outlets and fact-checking organizations. Discussions also included developing a national roadmap to combat disinformation, ensuring a coordinated and strategic approach to maintaining reliable information in the country's media landscape. Source: EEAS, EU, SEMC organise fact-checking workshop for Yemeni journalists. [online] Published 2025. Available at: https://www.eeas.europa.eu/delegations/yemen/eu-semc-organise-fact-checking-workshop-yemeni-journalists_en Top Of Page Turkic States Seek Coordinated Strategy Against Cross-Border Digital Disinformation According to a Euronews article, Turkic-speaking countries are increasingly seeking coordinated strategies to counter online disinformation that circulates across global digital platforms, often beyond the reach of national regulators. Kyrgyzstan's experience illustrates both the ambition and the constraints of national laws. Although the 2022 legislation allows authorities to request the removal of false content within 24 hours and to block non-compliant websites for up to two months, officials acknowledge that enforcement collapses when misinformation spreads on social media platforms like YouTube or Instagram, as they operate outside domestic jurisdiction. Kyrgyz representatives argued that stronger accountability would require major tech companies to establish formal operations within the region. At a regional media conference in Baku, officials from the Organization of Turkic States (OTS) explored collective responses to disinformation, combining regulatory measures with efforts to strengthen credible domestic media.
Kazakhstan emphasized that combating misinformation cannot rely solely on restrictions; instead, unified journalistic standards and the development of trustworthy local content are necessary to build public confidence and reduce reliance on unreliable sources. The OTS's ongoing discussions highlight the broader challenge of regulating information flows in a digitally interconnected environment where foreign platforms might evade national controls. Member states are grappling with how to protect the public from harmful falsehoods while maintaining space for legitimate debate, and they increasingly view regional cooperation as essential for addressing disinformation and hostile influence threats that individual countries cannot effectively confront alone. Source: Euronews Next, E. Aliyeva, Disinformation is a worldwide issue. Here is how Central Asia is combating it. [online] Published 7 December 2025. Available at: https://www.euronews.com/next/2025/12/07/disinformation-is-a-worldwide-issue-here-is-how-central-asia-is-combating-it Top Of Page [CRC Glossary] The nature and sophistication of the modern Information Environment is projected to continue escalating in complexity. However, across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult. To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence. As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website. Top Of Page

  • Anthropic’s Report and Its Implications for Cyfluence Operations

Background In November 2025, Anthropic published a report marking a potential milestone in the discussion on agent-driven automated cyber threats and information operations. The analysis, titled "Disrupting the first reported AI-orchestrated cyber espionage campaign", makes a bold claim about the nature of a recently detected malicious activity cluster, which Anthropic defines as a widescale, complex campaign attributed to a Chinese state-linked attacker (tracked as GTG-1002). According to Anthropic, for the first time, its Claude Code was allegedly responsible for autonomously executing 80%-90% of the observed attack chain. The agentic coding tool apparently conducted major phases of the attack chain, including target reconnaissance and acquisition, vulnerability scanning, exploit generation, backdoor deployment, and data exfiltration. While most tactical actions were carried out with minimal human intervention, attackers were required to provide initial targets, define strategic objectives, and make decisions at key operational moments. [i] Figure 1 – Lifecycle of the AI-orchestrated attack chain, courtesy of Anthropic Soon after its release, the report drew considerable criticism from cybersecurity experts [ii], who pointed to the absence of tangible technical evidence (e.g., indicative code samples, concrete indicators of compromise, or meaningful observed TTPs). At the same time, Anthropic qualified its own assessment and noted limits in Claude Code's operational efficiency. The model automated several tasks but also produced incorrect outputs, including hallucinated, non-existent 'stolen' credentials. These errors reduced the model's reliability and constrained its operational value. [iii] From Automated Intrusion to Automated Influence Although the Anthropic report describes an intrusion operation, a largely autonomous AI system could also play a significant role in digital hostile influence campaigns (HICs) and cyfluence operations. Such an application could enable highly efficient and scalable operational workflows. Applied to HICs, the same principle yields a similar structure: high-volume task decomposition, model-driven execution, and rapid iteration. Influence actors already organise their workflows across persona creation, message development, narrative testing, audience analysis, content deployment, and performance assessment. AI-enabled orchestration would integrate into these processes without structural disruption. Where an intrusion campaign maps technical networks, an influence campaign maps social communities. Where espionage collects credentials and access tokens, influence operations collect trust relationships and leverage points within social graphs. Potential AI-enabled influence systems could generate tailored content, adjust narratives in real time, infiltrate closed communities, build rapport quickly, modulate tone based on sentiment analysis, and reuse "campaign kits" containing personas, scripts, assets, and dissemination strategies. The parallels are functional. Current State of AI-Driven Attacks In its 'Detecting and countering misuse of AI: August 2025' threat intelligence report, Anthropic stated that Agentic AI has been weaponized to perform sophisticated cyberattacks. It explained how abuse of LLM models has significantly lowered the entry barrier for cybercriminals, enabling threat actors with limited technical expertise to conduct complex operations.
Current State of AI-Driven Attacks

In its 'Detecting and countering misuse of AI: August 2025' threat intelligence report, Anthropic stated that agentic AI has been weaponized to perform sophisticated cyberattacks. It explained how abuse of LLMs has significantly lowered the entry barrier for cybercriminals, enabling threat actors with limited technical expertise to conduct complex operations. What previously required years of practical experience can now be fully deployed and automated within minutes. [iv]

In addition, the report claimed that financially motivated actors were embedding AI throughout every phase of their offensive operations, using it to profile victims, analyze stolen data, harvest financial information, fabricate identities, and scale campaign volume and precision to levels that would be impossible manually.

Together, these reports by Anthropic join a growing body of empirical evidence and academic studies on existing and potential applications of AI technologies to information operations. In recent years, substantial resources have been invested in assessing evolving AI capabilities and their potential to enhance digital influence operations and automate cognitive threats. A prominent example is a two-part report by the Center for Security and Emerging Technology (CSET) at Georgetown University, titled 'AI and the Future of Disinformation Campaigns' [v], which also offered a framework (RICHDATA) to "describe the stages of disinformation campaigns", with an emphasis on potential GenAI applications. [vi] Later analytical reviews, including the CRC's 'The Rise of AI and What It Means in the Realm of Digital Influence' [vii] and 'Russia's Use of GenAI in Disinformation and Cyber Influence: Strategy, Use Cases and Future Expectations' [viii], have contributed case studies, observations, and actionable insights.

Implications for HICs and TrustOps

If accurate, the latest findings from Anthropic could have far-reaching consequences for cybersecurity and Influence Defense stakeholders alike. The main implications for the hybrid-operational space are:

Scaling is no longer limited by human capacity. Commercially available agentic AI tools can handle reconnaissance, infiltration, and data exfiltration.

Orchestration is semi- (if not fully) autonomous.

Successful cyber operations can immediately and seamlessly feed influence operations.

Taken together, these developments are expected to dramatically increase both the velocity and impact of cyfluence threats.

Adapting Counter Influence

Leading researchers have also been exploring the application of AI models and linguistic "signatures" for defensive counter-FIMI purposes (see 'Cracking the Code: How AI Detects Social Media Impersonation' [ix]). However, addressing the detection and attribution of AI-orchestrated attacks must rely on an omni-disciplinary approach:

Detection needs to apply updated analytical frameworks (e.g., RICHDATA, DISARM, ABCDE), algorithmically enhanced dynamic logics, static model-specific signatures, real-time threat intelligence, UEBA baselining, and data enrichment (see the sketch after this list).

Attribution must adequately determine the campaign, threat actor, and tooling (in this case, agentic AI tools and protocols, LLM models, and custom adaptations). These observations should, in turn, inform TTP cataloguing and ongoing analysis.

Remediation should evolve from legacy solutions to an AI-based, autonomous Cognitive Assets Security Broker (CASB) suite, reflecting a multilayered defensive methodology.

Crucially, in order to keep up with the expected increase in threat velocity, defenders could employ large-scale AI-orchestrated Digital Impact on Discourse (DID) activities, including asset disruption and narrative recovery campaigns.
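To illustrate one of the detection components named above, here is a minimal sketch of UEBA-style baselining applied to account activity. It flags round-the-clock posting and machine-speed cadence, two behavioral traits human accounts rarely show. The thresholds and features are illustrative placeholders, not calibrated detection logic.

```python
# Minimal sketch of UEBA-style behavioral baselining for accounts.
# Thresholds are invented for illustration, not calibrated values.
from datetime import datetime

def activity_features(timestamps: list[datetime]) -> dict:
    """Derive simple behavioral features from an account's post times."""
    hours = [t.hour for t in timestamps]
    distinct_hours = len(set(hours))
    gaps = [
        (later - earlier).total_seconds()
        for earlier, later in zip(timestamps, timestamps[1:])
    ]
    min_gap = min(gaps) if gaps else float("inf")
    return {
        # Human accounts rarely post across nearly all 24 hours of the day.
        "round_the_clock": distinct_hours >= 20,
        # Sub-second inter-post gaps suggest automated publishing.
        "machine_cadence": min_gap < 1.0,
        "volume": len(timestamps),
    }

def flag_for_review(timestamps: list[datetime], daily_baseline: int = 50) -> bool:
    """Flag accounts whose weekly behavior deviates from a human baseline.
    A flag justifies analyst review, not an authenticity verdict."""
    f = activity_features(sorted(timestamps))
    return (
        f["round_the_clock"]
        or f["machine_cadence"]
        or f["volume"] > daily_baseline * 7  # assumed one-week window
    )
```

In practice such behavioral signals would be fused with the framework-level and content-level indicators listed above; no single feature is dispositive.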
Conclusion

So far, AI-orchestrated (or largely automated) cyber-attacks have been a speculative prediction. Past assessments concluded that concerns over a disinformation "AI apocalypse" [x] were overhyped, with no clear evidence of a game-changing deployment of AI capabilities for cognitive threats (sometimes referred to as the "atomic bomb" of AI disinformation) by any major threat actor. However, in light of recent reports, Influence Defense practitioners and counter-FIMI stakeholders must address several key questions:

If Anthropic's findings are accurate, should sophisticated AI-orchestrated influence attack chains be considered an immediate threat?

Can agentic AI tools enable mostly autonomous or largely automated end-to-end cyfluence campaigns?

Even if Anthropic's claims are exaggerated, what components of hostile influence campaigns' kill chains (and operational infrastructure) can now be driven by agentic AI tools?

Answering these questions is necessary to adopt effective defensive strategies and implement suitable solutions as the effects of increased automation and potential operational autonomy in HICs become clearer. Anthropic's findings, insofar as they pertain directly to cyber-enabled influence operations, illustrate how the technical segment of cyfluence kill chains might be accelerated and scaled once agent-based AI capabilities become prevalent. Influence-based vectors (such as narrative-driven external or internal threats) are likely to emerge as new variants of cyfluence kill chains. Lastly, as advanced AI capabilities, especially autonomous operation orchestration and threat detection, continue to coalesce, the technological challenges for attackers and defenders will shift. The change is expected to pivot focus from pre-determined AI-based orchestration to AI-assisted decision-making and agentic AI's cognitive agility, affecting tactical performance.

[Footnotes:]

[i] Anthropic, Disrupting the first reported AI-orchestrated cyber espionage campaign. [online] pp. 3-4. Published November 2025. Available at: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf

[ii] Cyberscoop, D. B. Johnson, China's 'autonomous' AI-powered hacking campaign still required a ton of human work. [online] Published 14 November 2025. Available at: https://cyberscoop.com/anthropic-ai-orchestrated-attack-required-many-human-hands/ ; Ars Technica, D. Goodin, Researchers question Anthropic claim that AI-assisted attack was 90% autonomous. [online] Published 14 November 2025. Available at: https://arstechnica.com/security/2025/11/researchers-question-anthropic-claim-that-ai-assisted-attack-was-90-autonomous/ ; The Stack, E. Targett, Backlash over Anthropic "AI cyberattack" paper mounts. [online] Published 17 November 2025. Available at: https://www.thestack.technology/backlash-over-anthropic-ai-cyberattack-paper-mounts/ ; The Guardian, A. Down, AI firm claims it stopped Chinese state-sponsored cyber-attack campaign. [online] Published 14 November 2025. Available at: https://www.theguardian.com/technology/2025/nov/14/ai-anthropic-chinese-state-sponsored-cyber-attack

[iii] Anthropic, Disrupting the first reported AI-orchestrated cyber espionage campaign. [online] p. 4. Published November 2025. Available at: https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
[iv] Anthropic, Detecting and countering misuse of AI. [online] Published 27 August 2025. Available at: https://www.anthropic.com/news/detecting-countering-misuse-aug-2025

[v] Georgetown University Center for Security and Emerging Technology (CSET), K. Sedova, C. McNeill, A. Johnson, A. Joshi & I. Wulkan, AI and the Future of Disinformation Campaigns: Part 1: The RICHDATA Framework. [online] Published December 2021. Available at: https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns/ ; CSET, K. Sedova, C. McNeill, A. Johnson, A. Joshi & I. Wulkan, AI and the Future of Disinformation Campaigns: Part 2: A Threat Model. [online] Published December 2021. Available at: https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns-2/

[vi] Georgetown University Center for Security and Emerging Technology (CSET), J. A. Goldstein, G. Sastry, M. Musser, R. DiResta, M. Gentzel & K. Sedova, Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. [online] Published 10 January 2023. Available at: https://arxiv.org/pdf/2301.04246

[vii] Cyfluence Research Center (CRC), F. Frank, The Rise of AI and What It Means in the Realm of Digital Influence. [online] Published 6 August 2024. Available at: https://www.cyfluence-research.org/post/the_rise_of_ai

[viii] Cyfluence Research Center (CRC), Dr. C. Nehring, Russia's Use of GenAI in Disinformation and Cyber Influence: Strategy, Use Cases and Future Expectations. [online] Published 12 April 2025. Available at: https://www.cyfluence-research.org/post/russia-genai-disinformation-cyber-influence-hybrid-warfare-strategy-deepfakes

[ix] Cyfluence Research Center (CRC), O. Mokryn, Cracking the Code: How AI Detects Social Media Impersonation. [online] Published 23 March 2025. Available at: https://www.cyfluence-research.org/post/craking-the-code-how-ai-detects-social-media-impersonation

[x] Security Conference, R. Carr & P. Köhler, AI-pocalypse Now? Disinformation, AI, and the Super Election Year. [online] Published 4 October 2024. Available at: https://securityconference.org/en/publications/analyses/ai-pocalypse-disinformation-super-election-year/

  • This Time it’s Personal: China Targets the Human Factor in Cyber-Influence Defense

Background: The Xiamen Bounty

In October 2025, the Xiamen Public Security Bureau issued bounties for 18 officers from Taiwan's Information, Communications and Electronic Force Command (ICEFCOM), accusing them of "inciting secession" and spreading disinformation. On November 13th, another bounty was issued, this time for information on two Taiwanese influencers accused of disseminating 'anti-China propaganda'.

Figure 1 – Bounty notice issued by the Xiamen Public Security Bureau

Taipei dismissed these acts as theatrical "crude cognitive warfare". However, they exemplify the increased prominence of a defensive technique being employed by nation-state actors: targeting the human operators behind digital hostile influence campaigns (HICs) and offensive cyberattacks, as a countermeasure and deterrent. [i]

Figure 2 - Bounty notice issued by Quanzhou police asking the public for tips regarding two Taiwan social media influencers accused of aiding "secessionist" activities. Source: China News, Facebook. [ii]

This is not the first case in which China has targeted individuals involved in what it considers hostile activities online. The counter-operator approach was already practiced in September 2024, when China's Ministry of State Security (MSS) accused the hacker group "Anonymous 64" of conducting "cyber cognitive warfare" on behalf of ICEFCOM. By doxxing three Taiwanese officials (releasing their names, photographs, and identification numbers), the MSS signaled a clear intent to target personnel whenever they could be identified. [iii]

Figure 3 - CCTV News report on "Anonymous 64," including the publication of identifying information on Taiwanese military personnel alleged to be associated with the group. (Source: CCTV News). [iv]

Beijing's use of personalized bounties and exposure to individually target its perceived adversaries and critics online extends beyond Taiwan. In March 2025, Global Affairs Canada (GAC) released a statement blaming the PRC for a multi-year online harassment campaign. The operation, dubbed 'Spamouflage', involved the high-volume (approximately 100 to 200 posts per day) use of deepfakes and synthetic media, including sexually explicit images, to target Canadian residents deemed critical of the PRC.

Beijing's strategy is designed to achieve three main objectives:

Deter future information operations (IOs): enforcing personal accountability on those involved in hostile hybrid and soft-power activities against Chinese digital assets.

Degrade adversary capabilities: disrupting operational conduct by subjecting individuals to sanctions, potential demoralization, and exposure of covert personas.

Narrative control: positioning China as the victim of foreign aggression and influencing public opinion by framing online criticism of the PRC as conspiratorial and criminal.

The Counter-Cyfluence Toolkit

Historically, defense against information operations and hostile Cyfluence attacks focused on the technical, content, and narrative aspects of these hybrid threats. Defenders' strategies typically involved reactive measures, such as:

Taking down offensive operational infrastructure (e.g., botnets, lookalike webpages, malicious domains, proliferation assets)

Countering hostile narratives (mainly through fact-checking and Strategic Communication)
Flagging manipulative content (e.g., engineered information disorder, coordinated synthetic propaganda) [v]

Deploying cybersecurity safeguards to block external attacks, identify malicious insiders, mitigate technical vulnerabilities, and minimize the risk of sensitive data leakage that could, in turn, feed HICs (i.e., hack-and-leak operations).

However, as state and non-state actors increasingly use HICs and hybrid operations to achieve geopolitical objectives, and as alternative remediation and deterrence measures have proven to be of limited effectiveness, nation-state actors are increasingly incorporating a new response strategy into their counter-cyfluence playbook: counter-operator actions. By identifying and pressuring the individuals behind offensive kill chains, defenders aim to achieve an accumulated advantage via personalized effects: "burning" operator cover, freezing assets, restricting movement, and applying mounting psychological stress. These consequences are designed to alter the risk calculus for key individuals and create operational challenges for adversaries. The U.S. Department of Justice (DOJ) has explicitly framed this as a strategy to "end the impunity that then reigned in cyberspace" by "publicly deploying its unique tools... to disrupt and deter state-sponsored cyber threats". [vi]

A Multi-Layered Approach

Common methods of counter-operator action applied by state actors include:

Legal strategies - utilizing indictments, travel bans, and "lawfare" to threaten incarceration and restrict global mobility.

Economic measures - targeting individuals with sanctions, asset freezes, and transaction restrictions.

Diplomatic efforts - leveraging "naming and shaming" to strip away anonymity, shape global perception, and impose reputational costs.

Extra-judicial tactics - operating in the "gray zone", using doxxing and intimidation, including the implied or actual threat of physical harm. [vii] [viii]

Extending Actor Attribution

Crucially, operator targeting relies on accurate attribution. The NATO StratCom Center of Excellence's "IIO Attribution Framework" offers a relevant model, suggesting that attribution requires fusing three types of evidence: Technical (digital traces such as IPs and malware); Behavioral (actor TTPs); and Contextual (narrative, linguistic, and socio-political analysis); a sketch of this fusion logic follows below. Bridging the attribution gap between state- or organization-level identification and individual responsibility demands significant intelligence and operational resources. Individual attribution often relies on private-sector proprietary data, such as customer records or platform-specific telemetry, that is not publicly available. Even then, granular attribution often requires leveraging classified intelligence and capabilities. This creates a cost-effectiveness dilemma for state actors: attribution risks exposing defenders' TTPs and the extent of penetration of adversarial organizations, and it expends operational capacity.

Figure 4 - NATO IIO Attribution framework applied to the 'Doppelganger' influence campaign [ix]
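As a toy illustration of the evidence-fusion idea behind the framework, the sketch below combines technical, behavioral, and contextual scores into a single confidence value. The weights, threshold, and example scores are invented for illustration and carry no analytic authority; a real attribution workflow rests on analyst judgment and far richer evidence.

```python
# Toy illustration of fusing the three evidence channels the IIO framework
# describes. Weights and threshold are hypothetical, not from the framework.
EVIDENCE_WEIGHTS = {"technical": 0.5, "behavioral": 0.3, "contextual": 0.2}

def attribution_confidence(evidence: dict[str, float]) -> float:
    """Each channel scores 0.0-1.0; returns a weighted fusion in 0.0-1.0."""
    return sum(
        EVIDENCE_WEIGHTS[channel] * min(max(score, 0.0), 1.0)
        for channel, score in evidence.items()
        if channel in EVIDENCE_WEIGHTS
    )

# Example: strong technical overlap (shared infrastructure), moderate TTP
# similarity, weak narrative alignment.
score = attribution_confidence(
    {"technical": 0.9, "behavioral": 0.6, "contextual": 0.3}
)
print(f"campaign-level confidence: {score:.2f}")  # -> 0.69

# Individual-level attribution would additionally require non-public data
# (platform telemetry, customer records), as the text above notes.
```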
A Global Trend

Despite the operational costs, China is not alone in deploying people-centric countermeasures. Commentators have noted that Beijing's adoption of public attribution and doxxing tactics mirrors the "naming and shaming" long employed by Western nations and organizations against offensive cyber activities (including Chinese operations). [x]

United States

Washington has systematically employed the Department of Justice (DOJ) and the Office of Foreign Assets Control (OFAC) against hostile actors. Executive Order 13848 explicitly permits sanctions against "any person" found to have "directly or indirectly engaged in" election interference. Notable applications include the 2018 indictment of Yevgeniy Prigozhin, then head of the Russian "Internet Research Agency" (IRA), who was heavily sanctioned along with individual IRA employees for foreign interference in the United States. Likewise, the September 2024 indictment of Iran's Islamic Revolutionary Guard Corps (IRGC) targeted individual operators such as Masoud Jalili and extended to individual subcontracted employees for their role in Cyfluence operations, including hack-and-leak attacks targeting the US elections. [xi]

Figure 5 - FBI Wanted posters showing indicted Iranian and Russian individuals. [xii]

European Union

Since 2014, the EU's defensive posture has become notably more proactive, largely in response to the intensification of foreign information manipulation and interference (FIMI) against member states. The bloc has launched numerous initiatives, such as the FIMI Toolbox, and enacted legislation, such as EU Regulation 2023/888, which explicitly penalizes individuals for 'the dissemination of disinformation'. [xiii] The EU's use of sanctions in response to FIMI has also become more personalized: where its sanctioning regimes were previously aimed at state-level actors, there has been a distinct shift towards 'smart sanctions' aimed at specific individuals or entities. [xiv] As an additional financial lever, in early 2022 the EU established the "Freeze and Seize Task Force" to target individuals, including those responsible for cyber-attacks and FIMI actions. As of October 23, 2025, over 2,500 individuals and entities have been sanctioned, and more than €28 billion in private assets frozen. [xv]

Ukraine and Russia

The early stages of the Russia-Ukraine war saw rapid developments in information warfare tactics, including the use of counter-operator targeting. Notable is the use of extra-judicial methods such as the Ukrainian "Myrotvorets" (Peacemaker) list. Proclaiming to be "an independent non-governmental organization", it maintains a database of individuals accused of being enemies of Ukraine, including "Kremlin propagandists" and information operatives. [xvi]

Figure 6 - Myrotvorets center

In essence, the platform serves as an open-source registry designed to expose identities, impose personal costs, and intimidate listed individuals. Critics argue it represents a "hit list" that endangers and harasses people without due process and has been used to suppress journalism and authentic criticism of the Ukrainian government. Although speculative in nature, the database's association with physical violence and homicide has created a reputation that highlights the deterrent potential of targeted doxxing and psychological threats. [xvii] Russia has established a comparable database named 'Project Nemesis', which serves a similar function. [xviii]

Figure 7 - Myrotvorets entry for Daria Dugina, modified to say 'Liquidated' after she was killed in a car bomb. She was described as 'editor of disinformation website United World International' and had been sanctioned by the US Treasury at the time of her death. [xix] [xx]
Conclusion

Fundamentally, targeting the human element behind information operations is not a new concept in state-level confrontation; espionage and kinetic defense have always involved neutralizing key personnel. The increased prominence of counter-operator targeting as a defensive tactic underscores the impactful role that hybrid influence operations now play in state-level geopolitical conflict.

As a proactive counter-influence methodology, counter-operator targeting can complement, but not replace, existing cognitive, technical, and content-based countermeasures. Its effectiveness is contingent on the ability to impose tangible costs on an individual. If the target is sufficiently insulated from the consequences, or the defender lacks the legal, economic, or diplomatic clout to inflict damage, then operator targeting could prove counter-productive by handing an adversary a propaganda victory and exposing the defender's TTPs and intelligence assets. And while evidence suggests that targeting individuals can disrupt operations, it does not constitute complete remediation. For example, GRU Unit 26165 (Fancy Bear) remains operationally active despite repeated targeting of its members with sanctions, indictments, and exposure.

Ultimately, the value of this tactic extends beyond threat neutralization: by tactically altering an individual's risk calculus through the application of personalized costs, a defender can transfer those costs to the adversary at the strategic level.

[Footnotes:]

[i] The Independent, S. Sharma, China issues bounty for 18 officers in Taiwan's 'psychological warfare unit'. [online] Published 11 October 2025. Available at: https://www.independent.co.uk/asia/china/china-bounty-taiwan-military-psychological-warfare-b2843641.html

[ii] China News (中国新闻网), Post on Facebook (Reel). [online] Available at: https://www.facebook.com/reel/1189876776392002

[iii] Jamestown Foundation, S.-F. Lee, China Brief Volume 25 Issue 14, Taiwan Bounty: PRC Cross-Agency Operations Target Taiwanese Military Personnel. [online] Published 25 July 2025. Available at: https://jamestown.org/taiwan-bounty-prc-cross-agency-operations-target-taiwanese-military-personnel/

[iv] CCTV, citing Ministry of State Security, Doxxing "Taiwan independence" network army 'Anonymous 64'. [online] Published 23 September 2024. Available at: https://news.cctv.com/2024/09/23/ARTIvJFrhmdCf4SdWR1yVfVH240923.shtml

[v] EEAS, 2nd EEAS Report on Foreign Information Manipulation and Interference Threats: A Framework for Networked Defence. [online] Published January 2024. Available at: https://www.eeas.europa.eu/sites/default/files/documents/2024/EEAS-2nd-Report%20on%20FIMI%20Threats-January-2024_0.pdf

[vi] Royal United Services Institute (RUSI), RUSI Panel Explores Role of 'Naming and Shaming' as a Tool of Cyber Statecraft. [online] Published 5 December 2024. Available at: https://www.rusi.org/news-and-comment/rusi-news/rusi-panel-explores-role-naming-and-shaming-tool-cyber-statecraft

[vii] Lawfare, G. Band, Sanctions as a Surgical Tool Against Online Foreign Influence. [online] Published 15 September 2022. Available at: https://www.lawfaremedia.org/article/sanctions-surgical-tool-against-online-foreign-influence
[viii] U.S. Department of the Treasury, Treasury Sanctions Russians Bankrolling Putin and Russia-Backed Influence Actors. [online] Published 3 March 2022. Available at: https://home.treasury.gov/news/press-releases/jy0628

[ix] Psychological Defense Research Institute (Lund University), ADAC.io project, A Framework for Attribution of Information Influence Operations. [online] Published 2025. Available at: https://www.psychologicaldefence.lu.se/sites/psychologicaldefence.lu.se/files/2025-02/250131_ADACio%20D1.1_Attribution%20Framework%20Report_Final.pdf

[x] Binding Hook, B. Read, China is using cyber attribution to pressure Taiwan. [online] Published 22 July 2025. Available at: https://bindinghook.com/china-is-using-cyber-attribution-to-pressure-taiwan/

[xi] U.S. Department of Justice, Three IRGC Cyber Actors Indicted for 'Hack-and-Leak' Operation Designed to Influence the 2024 U.S. Presidential Election. [online] Published 27 September 2024. Available at: https://www.justice.gov/archives/opa/pr/three-irgc-cyber-actors-indicted-hack-and-leak-operation-designed-influence-2024-us

[xii] Federal Bureau of Investigation (FBI) Most Wanted, Three Iranian Cyber Actors. [online] Published 27 September 2024. Available at: https://www.fbi.gov/wanted/cyber/three-iranian-cyber-actors/ ; FBI Most Wanted, Russian Interference in 2016 U.S. Elections. [online] Published July 2018. Available at: https://www.fbi.gov/wanted/cyber/russian-interference-in-2016-u-s-elections

[xiii] Council of the European Union, Council Implementing Regulation (EU) 2023/888. [online] Published 30 May 2023. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32023R1045&qid=1765546232156

[xiv] European Parliamentary Research Service (EPRS), Strategic communications as a key factor in countering hybrid threats. [online] Published 2021. Available at: https://www.europarl.europa.eu/RegData/etudes/STUD/2021/656323/EPRS_STU(2021)656323_EN.pdf

[xv] European Commission, Sanctions against individuals, companies and organisations. [online] Published 3 October 2025. Available at: https://commission.europa.eu/topics/eu-solidarity-ukraine/eu-sanctions-against-russia-following-invasion-ukraine/sanctions-against-individuals-companies-and-organisations_en

[xvi] DFRLab (Medium), UN Calls for Investigation of Ukrainian Digital Blacklist. [online] Published 21 September 2017. Available at: https://medium.com/dfrlab/un-calls-for-investigation-of-ukrainian-digital-blacklist-14fec836753f

[xvii] Factcheck.bg, M. Kirkova, Is the Myrotvorets website a hit list? [online] Published 1 November 2023. Available at: https://factcheck.bg/en/is-the-myrotvorets-website-a-hit-list/

[xviii] Institute for Strategic Dialogue, Project Nemesis: Doxxing and the New Frontier of Informational Warfare. [online] Available at: https://www.isdglobal.org/digital_dispatches/project-nemesis-and-the-new-frontiers-of-informational-warfare/

[xix] The "Peacemaker" (Myrotvorets) Center, last accessed 08/12/25. Available at: https://myrotvorets.center/criminal/

[xx] U.S. Department of the Treasury, Treasury Sanctions Russians Bankrolling Putin and Russia-Backed Influence Actors. [online] Published 3 March 2022. Available at: https://home.treasury.gov/news/press-releases/jy0628

  • The Rise of AI and What It Means in the Realm of Digital Influence

Author: Florian Frank

In recent years, AI has achieved a level of sophistication previously unimaginable in such a short time frame. It is now utilized in virtually every field, from building webpages and creating text to intelligence gathering, design, and medicine, and its impact on the economy and society is profound and far-reaching. Although it is crucial to highlight the numerous benefits AI brings to the table, this article focuses on its darker aspects. Much of the AI discourse is plagued by hysteria: fears of sudden mass unemployment, or scenarios in which an AI decides the world would be better off without humans. While some of these concerns are legitimate, they often overshadow the genuine dangers that AI confronts us with today. This article explores some of the most recent developments in AI and their implementation in hostile influence campaigns. Throughout the article, I have included quotes from ChatGPT, as it seems fitting to let an AI speak for itself on the issues concerning its societal impact. But before we delve into the dangers and potential solutions, let's examine how we arrived at this point.

  • CRC Spotlight: From Rollout to Fallout: The Impact of X’s Location Transparency Feature

This Spotlight article examines X's new transparency feature, which introduces profile-level location indicators, installation origin, and possible VPN or proxy usage, and situates this added visibility within the operational context of digital Hostile Influence Campaigns (HICs). For Influence Defence practitioners, these behavioral indicators expand the existing toolset and provide a valuable confidence booster for inauthentic-activity detection efforts. This is exemplified by the exposure of an ongoing Iranian FIMI network that was used for cognitive manipulation and Information Disorder proliferation on the platform. However, the article notes that FIMI researchers should interpret location metadata with caution; a short illustration of why follows below. Finally, it predicts that the increased transparency may prompt adjustments in threat-actor TTPs and might even create tactical opportunities for adversaries, generating further challenges that will shape future investigations. [Full Report Below]
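As a minimal illustration of the caution urged above, the sketch below treats the new profile-level signals (stated location, account region, install origin, VPN/proxy indicator) as weak, countable inconsistency signals rather than proof of inauthenticity. All field names are hypothetical; nothing here assumes X's actual data model.

```python
# Triage sketch over hypothetical profile-level transparency signals.
# A high count justifies closer review, never attribution on its own:
# travelers, expatriates, and privacy-conscious users trip the same
# wires as influence operators.
def location_inconsistency(profile: dict) -> int:
    """Count weak inconsistency signals among location-related fields."""
    signals = 0
    if profile.get("vpn_or_proxy"):
        signals += 1  # may simply indicate a privacy-conscious user
    stated = profile.get("stated_country")
    region = profile.get("account_region")
    install = profile.get("install_region")
    if stated and region and stated != region:
        signals += 1  # claimed location differs from observed region
    if install and region and install != region:
        signals += 1  # app install origin differs from observed region
    return signals

# Example: a profile claiming one country, registered in another, and
# posting through a proxy scores 2 and merits manual review.
suspect = {"stated_country": "US", "account_region": "IR", "vpn_or_proxy": True}
print(location_inconsistency(suspect))  # -> 2
```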
