CRC Weekly: Cyber-based hostile influence campaigns, 8th-14th December 2025
- CRC

- Dec 18, 2025
- 14 min read

[Introduction]
Cyber-based hostile influence campaigns aim to shape the perceptions of target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.
During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Contents]
[State Actors]
Russia
US Extradites Member of Russian Hacktivist Group Linked to Critical Infrastructure Attacks
How Russia Uses FIMI to Manipulate Historical Memory
Pro-Russia Hacktivists' Attacks Threaten Global Critical Infrastructure
Russia Expands Disinformation and Influence Operations in Mexico After Ukraine Invasion
Germany Accuses Russia of Cyberattacks and Disinformation Campaigns
Foreign Disinformation in Romania’s 2024–2025 Presidential Crisis
The War in Ukraine
The War in Gaza
[AI Related Articles]
[General Reports]
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
A report by EU VS Disinfo examines how the Kremlin uses historical revisionism, manipulated maps, and digital tools to advance FIMI, justify Russia’s expansionism, and undermine the sovereignty and historical narratives of neighboring states.
According to a report by The Record, the British government has imposed new sanctions on Russian and Chinese entities accused of conducting cyberattacks and influence operations aimed at undermining the West.
As published by The New York Times, Germany has accused Russia of conducting a coordinated campaign of cyberattacks, espionage, and disinformation aimed at undermining the country’s infrastructure and social cohesion.
Graphika’s investigation revealed a pro-Russian, anti-Ukraine network on Telegram called "OrdinAIry People", which used AI to create hundreds of seemingly ordinary accounts.
The GenAI Report warns that generative AI significantly lowers the barriers to large-scale disinformation and hostile influence campaigns. At the same time, existing regulatory efforts struggle to keep pace with the speed, scale, and global reach of AI-enabled manipulation.
[State Actors]
Russia
US Extradites Member of Russian Hacktivist Group Linked to Critical Infrastructure Attacks
As reported by The Record, the United States has extradited a 33-year-old Ukrainian national, Victoria Eduardovna Dubranova, to face federal charges for her alleged role in two Russian-linked hacktivist groups, CyberArmyofRussia_Reborn (CARR) and NoName057(16). According to the Justice Department, both groups have conducted hundreds of cyberattacks, including DDoS operations, in support of Russia’s geopolitical interests and with financial and operational backing from the Russian government.
Prosecutors allege Dubranova participated in attacks targeting protected computers and critical infrastructure, including attempts to tamper with public water systems, actions that U.S. officials warned could endanger communities and national security. Dubranova has pleaded not guilty to all charges and is scheduled to stand trial in early 2026. If convicted, she could face up to 27 years in prison for her alleged involvement with CARR and up to five years for charges linked to NoName057(16).
Source: The Record, D. Antoniuk, US Extradites Member of Russian Hacking Groups Critical Infrastructure. [online] Published 10 December 2025. Available at: https://therecord.media/us-extradites-member-of-russian-hacking-groups-critical-infrastructur
How Russia Uses FIMI to Manipulate Historical Memory
A report by EU VS Disinfo examines how the Kremlin has intensified FIMI by systematically rewriting history to justify Russia’s expansionist policies. Since the full-scale invasion of Ukraine, Moscow has promoted a state-controlled version of “true history” through school textbooks, pseudo-academic publications, and official narratives aimed not only at domestic audiences but also at neighboring states and the wider international community.
A central example is the publication of "History of Lithuania" by Russia’s MGIMO University, with a foreword by Foreign Minister Sergey Lavrov. The book denies Lithuanian sovereignty, downplays Soviet crimes, and frames modern Lithuanian politics as extremist, mirroring similar distortions previously applied to Ukraine. Beyond academic-style narratives, the Kremlin also employs "mapaganda" and digital manipulation to reshape perceptions of borders and the past. Russian law now labels maps contradicting its territorial claims as extremist, while altered maps are actively circulated abroad to normalize illegal annexations. The report also highlighted the use of advanced AI deepfakes, citing a 2025 operation in Poland that fabricated social media posts and videos to reignite sensitive historical disputes between Poland and Ukraine.
Source: EUvsDisinfo, Rewriting Borders of Truth: How Russian FIMI Falsifies Historical Memory. [online] Published 11 December 2025. Available at: https://euvsdisinfo.eu/rewriting-borders-of-truth-how-russian-fimi-falsifies-historical-memory/
Pro-Russia Hacktivists' Attacks Threaten Global Critical Infrastructure
A joint cybersecurity advisory from U.S., European, and allied agencies warned that pro-Russia hacktivist groups are targeting critical infrastructure, particularly operational technology (OT) and industrial control systems (ICS). Affected sectors include water and wastewater, food and agriculture, and energy, with some incidents causing limited physical disruption.
A key concern is that these hacktivist activities are often accompanied by disinformation. Groups such as Cyber Army of Russia Reborn and NoName057(16) tend to exaggerate the scale and impact of their attacks through online messaging to create fear, undermine trust in public services, and amplify pro-Russia narratives. This information manipulation can make relatively low-impact incidents appear more severe, increasing their psychological and political effect. The advisory urged infrastructure owners and operators to reduce exposure of OT systems, strengthen authentication, and improve asset management to limit both real-world risk and the effectiveness of disinformation.
Source: CISA & allied agencies, AA25-343A Pro-Russia Hacktivists Conduct Attacks. [online] Published 9 December 2025. Available at: https://www.cisa.gov/sites/default/files/2025-12/aa25-343a-pro-russia-hacktivists-conduct-attacks.pdf
Russia Expands Disinformation and Influence Operations in Mexico After Ukraine Invasion
According to Ukraine’s Center for Countering Disinformation, as reported by The Odessa Journal, Kremlin-linked media outlets RT and Sputnik have significantly expanded their activities in Mexico following Russia’s full-scale invasion of Ukraine, intensifying the spread of anti-American and anti-Ukrainian narratives. U.S. diplomats describe this surge as a “sudden and aggressive” increase in Russian information influence aimed at undermining trust between Mexico and the United States. Russian officials and state media have also circulated false anti-Ukrainian claims, which were subsequently echoed by some local politicians and journalists, aligning with Moscow’s strategy of discrediting international support for Ukraine and portraying the West as aggressive.
These efforts form part of a broader coordinated disinformation operation, including the large-scale “Doppelgänger” campaign, which seeks to distort coverage of the war in Ukraine, weaken U.S.–Mexico relations, and undermine Western backing for Kyiv. The campaign relies on the replication and amplification of misleading narratives across multiple platforms, enabling Russian messaging to appear locally rooted while advancing Kremlin strategic objectives. By embedding these narratives in domestic political and media debates, Russian actors enhance their credibility and reach within Mexico's information environment.
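A brief technical aside: one common defensive heuristic against Doppelgänger-style site cloning is to compare newly observed domains against the names of legitimate outlets and flag close look-alikes. The Python sketch below is purely illustrative; the watchlist, distance threshold, and example domains are our own assumptions, not details from the Odessa Journal report or any specific takedown.

```python
# Illustrative sketch only: neither the campaign's actual infrastructure nor any
# vendor's tooling. One heuristic defenders use against clone sites is flagging
# domains within a small edit distance of legitimate outlets. The watchlist and
# example domains below are invented for demonstration.
LEGITIMATE = ["elpais.com", "milenio.com", "spiegel.de", "lemonde.fr"]

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def flag_lookalikes(observed, watchlist=LEGITIMATE, max_distance=2):
    """Pair each observed domain with any legitimate domain it closely imitates."""
    hits = []
    for domain in observed:
        for legit in watchlist:
            if 0 < edit_distance(domain, legit) <= max_distance:
                hits.append((domain, legit))
    return hits

print(flag_lookalikes(["rnilenio.com", "e1pais.com", "example.org"]))
# [('rnilenio.com', 'milenio.com'), ('e1pais.com', 'elpais.com')]
```

In practice, defenders combine such string heuristics with registration data and content similarity, since edit distance alone produces false positives on short domain names.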
Russian influence operations are further reinforced through institutional partnerships and professional outreach. The Mexican Journalists Club has regularly published content from Russian and Cuban state media and has hosted training sessions and lectures with RT representatives, presenting them as professional exchanges. This collaboration allows Russian outlets to normalize and legitimize their messaging as “alternative perspectives,” effectively laundering propaganda into the local media ecosystem. Alongside these information activities, Russia has reportedly increased its intelligence presence in Mexico by deploying experienced agents under diplomatic cover, reinforcing the convergence of information and intelligence operations.
Source: The Odessa Journal, Center for Counteracting Disinformation: Russia Rapidly Expands Its Information and Intelligence Influence in Mexico. [online] Published 10 December 2025. Available at: https://odessa-journal.com/center-for-counteracting-disinformation-russia-rapidly-expands-its-information-and-intelligence-influence-in-mexico
Germany Accuses Russia of Cyberattacks and Disinformation Campaigns
As published by The New York Times, Germany has accused Russia of conducting a coordinated campaign of cyberattacks, espionage, and disinformation aimed at undermining the country’s infrastructure and social cohesion. German authorities say these hybrid actions included a cyberattack on air traffic control in August 2024 and organized efforts to spread false information during Germany’s early 2025 general election. Based on intelligence assessments, Berlin stated it had clearly identified Russian responsibility and formally summoned the Russian ambassador in protest, while Moscow denied the allegations.
Disinformation is a central element of the accusations. German officials said Russian-linked actors attempted to damage public trust by spreading fake stories about prominent politicians who were critical of Russia’s war in Ukraine. These false claims included fabricated allegations about mental health issues, personal misconduct, and private relationships.
An article by SRF News added that German authorities summoned the Russian ambassador and stated that intelligence findings clearly link a cyberattack on German air traffic control in August 2024 to the Russian hacking group Fancy Bear, which they say operates under Russia’s military intelligence agency, the GRU. According to Berlin, the so-called “Storm 1516” campaign, active since 2024 and backed by GRU-linked organizations, sought to influence Western elections by spreading false narratives. During Germany’s 2025 election campaign, fake content targeted leading politicians such as Robert Habeck and Friedrich Merz, including fabricated videos claiming ballot manipulation just days before the vote.
Sources:
The New York Times, Germany Russia Cyberattacks Sabotage Hybrid War. [online] Published 12 December 2025. Available at: https://www.nytimes.com/2025/12/12/world/europe/germany-russia-cyberattacks-sabotage-hybrid-war.html
SRF, Cyberangriff: Berlin bestellt Russlands Botschafter ein [Cyberattack: Berlin Summons Russia's Ambassador]. [online] Published 12 December 2025. Available at: https://www.srf.ch/news/international/hybride-bedrohung-cyberangriff-berlin-bestellt-russlands-botschafter-ein
Foreign Disinformation in Romania’s 2024–2025 Presidential Crisis
According to a report by IIEA, Romania’s 2024 presidential election was severely disrupted by large-scale foreign interference and AI-driven disinformation, leading to the unprecedented annulment of the first-round vote. A fringe, pro-Moscow candidate surged unexpectedly after an almost entirely online campaign amplified by bot networks, AI-generated content, and algorithm manipulation, creating a distorted information environment. Investigations uncovered massive cyberattacks on election systems and evidence linking the disinformation campaign to Russian hybrid operations, prompting the Constitutional Court to void the results to protect electoral integrity.
The annulment triggered political unrest and protests, revealing both Romania’s democratic resilience and its deep vulnerabilities. While institutions acted decisively to counter foreign-sponsored manipulation, the crisis exposed widespread public distrust, polarization, and the growing appeal of nationalist and Eurosceptic narratives, trends often reinforced by external disinformation. Fake narratives and digital propaganda played a central role in mobilizing support and undermining confidence in democratic processes. The rerun election in May 2025, won by pro-European reformer Nicușor Dan, restored some legitimacy but underscored the long-term threat posed by disinformation.
Source: The Institute of International and European Affairs (IIEA), Romania’s 2024–2025 Presidential Election Crisis and Its Aftermath. [online] Published 2025. Available at: https://www.iiea.com/blog/romanias-20242025-presidential-election-crisis-and-its-aftermath
The War in Ukraine
AI-Generated Influence Through Ordinary Personas
Graphika’s investigation revealed a pro-Russian, anti-Ukraine Telegram network called “OrdinAIry People” that used AI to create hundreds of seemingly ordinary accounts. These personas, posing as everyday citizens from various countries, posted coordinated messages amplifying geopolitical grievances, often disconnected from the original context. For instance, the network responded to a Peruvian police video featuring a Grinch-costumed officer with AI-generated outrage about U.S. policy and Ukraine, illustrating how mundane or humorous content can be exploited to spread disinformation and influence narratives.
The network’s operations highlight how AI facilitates large-scale content production, though not without detectable traces. While the personas may initially seem authentic, patterns such as synchronized posts, repeated linguistic quirks, and prompt artifacts reveal the underlying automation. The AI prompts focused on themes critical of Western governments, immigration, and Ukraine, and the network often injected this content into unrelated discussions, such as skincare advertisements or local news stories, demonstrating a disconnect between narrative framing and real-world context.
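As a rough illustration of the kind of pattern Graphika describes, coordinated networks can sometimes be surfaced by looking for near-identical text posted by many accounts within a narrow time window. The sketch below is a minimal toy example under assumed data, field layout, and thresholds; it is not Graphika's methodology.

```python
from collections import defaultdict

# Hypothetical post records: (account, unix_timestamp, text).
# The data, field layout, and thresholds are illustrative assumptions,
# not Graphika's actual detection pipeline.
posts = [
    ("user_a", 1702200000, "Why is the West ignoring this?"),
    ("user_b", 1702200004, "Why is the West ignoring this?"),
    ("user_c", 1702200007, "Totally unrelated skincare tip!"),
    ("user_d", 1702200009, "Why is the West ignoring this?"),
]

WINDOW_SECONDS = 30  # posts this close together count as "synchronized"
MIN_CLUSTER = 3      # near-identical posts required before flagging

def flag_coordinated(posts, window=WINDOW_SECONDS, min_cluster=MIN_CLUSTER):
    """Group posts by normalized text, then flag groups whose timestamps
    all fall inside a narrow window: a crude proxy for synchronized posting."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((account, ts))
    flagged = []
    for text, items in by_text.items():
        times = sorted(ts for _, ts in items)
        if len(items) >= min_cluster and times[-1] - times[0] <= window:
            flagged.append((text, [account for account, _ in items]))
    return flagged

for text, accounts in flag_coordinated(posts):
    print(f"Possible coordination: {accounts} posted near-identical text {text!r}")
```

Real investigations layer many more signals (linguistic quirks, prompt artifacts, account creation dates) on top of timing, but the principle of clustering repeated content is the same.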
Graphika emphasized that AI alone does not guarantee influence. Community and platform administrators often push back, deleting posts or banning accounts. Organizations must therefore assess not only the presence of AI-generated content but its reach, context, and resonance.
Source: Graphika, The Graphika Team, Ordinary Outrage: The Ordinary AI People That Couldn't Steal Christmas. [online] Published 10 December 2025. Available at: https://graphika.com/posts/ordinary-outrage-the-ordinary-ai-people-that-couldn-t-steal-christmas
The War in Gaza
Structural Disinformation and Information Control in Gaza
A report by GIOR argued that between 2018 and 2022, Hamas created a repressive environment in Gaza that enables structural disinformation by controlling information at its source. Independent investigations by Human Rights Watch, Amnesty International, and the Committee to Protect Journalists document systematic torture of critics, mass arrests, and repeated detention of journalists, creating fear that silences independent voices. As a result, journalists, civil society actors, and even UN staff operate under coercion, making independent verification of events and data extremely difficult.
Disinformation in this context does not rely on false statements, but on enforced omissions. UN agencies themselves acknowledge that casualty figures and humanitarian data from Gaza cannot be independently verified and are primarily sourced from Hamas-controlled institutions. A UN-commissioned review found that fear for personal safety prevents staff from reporting misconduct, and that intimidation and interference have occurred within UN facilities. These conditions mean that information reaching Western governments and media appears neutral and authoritative, even though it is produced within a highly constrained and unfree environment.
This system represents a sophisticated form of information warfare, in which controlling who collects data and what can be reported enables distortion without overt propaganda. Western policymakers are urged to treat Gaza-sourced humanitarian data with caution, increase transparency about verification limits, and invest in independent monitoring mechanisms. The report stressed that the absence of blatant falsehoods does not mean the absence of disinformation when information is filtered through coercion.
Source: Global Influence Operations Report (GIOR), The Gaza Information Trap: How Hamas Coercion Shapes What the West Knows. [online] Published 8 December 2025. Available at: https://www.global-influence-ops.com/reports/the-gaza-information-trap-how-hamas-coercion-shapes-what-the-west-knows/
[AI Related Articles]
AI-Generated Political Videos
Recent AI-generated political videos, ranging from Zohran Mamdani as a creepy trick-or-treater to Gavin Newsom body-slamming Donald Trump, highlight the growing presence of AI content in American politics. An article by TechXplore emphasized that, while many of these videos are clearly fake and often meme-like, their purpose is less about deception and more about provoking strong emotional responses, generating engagement, and signaling political allegiance. Politicians and supporters alike are increasingly using AI tools to amplify messages, mock opponents, and influence online discourse.
Emotion plays a key role in the spread of disinformation. AI videos that elicit outrage, amusement, or excitement are more likely to go viral, and these strong emotions reduce viewers’ ability to discern real events from fabricated content. The accessibility of AI tools enables both ordinary users and political actors to quickly generate hundreds of videos. At the same time, platforms often fail to clearly label AI-generated content, blurring the line between fact and fiction.
Source: TechXplore, L. Fazio (The Conversation), AI Generated Political Videos Memes. [online] Published 11 December 2025. Available at: https://techxplore.com/news/2025-12-ai-generated-political-videos-memes.html
The Erosion of Trust in Online and AI-Created Information
As stated in a report by ASPI, AI chatbots, such as Grok on the social media platform X, are increasingly central to online political discourse but are not neutral information tools. Users often weaponize AI responses in debates, invoking chatbots to validate their positions and then amplifying or attacking the answers. This cycle of performative outrage amplifies polarization and contributes to a broader environment in which AI-generated content becomes a vector for disinformation rather than a source of clarity. Large-scale analysis of hundreds of thousands of public AI interactions reveals consistent patterns of political arguments and manipulations amplified for social performance.
The reliability of AI-generated content is also a structural concern. Studies by the European Broadcasting Union and the BBC show that nearly half of AI news answers contain factual errors, with broader issues, including incoherence or outdated data, in over 80 percent of cases. Public surveys reveal widespread skepticism: majorities fear AI will exacerbate misinformation and influence elections.
Source: The Australian Strategic Policy Institute (ASPI), J. Coyne, Protecting Truth in the Era of AI Mediation. [online] Published 12 December 2025. Available at: https://www.aspi.org.au/strategist-posts/protecting-truth-in-the-era-of-ai-mediation/
[General Reports]
False Claims About ChatGPT and AIPAC Advertising
According to a December 2025 NewsGuard article, anti-Israel social media users pushed false claims that OpenAI’s ChatGPT had started running ads and had promoted the pro-Israel lobbying group AIPAC. The allegation was based on a widely shared screenshot purporting to show a ChatGPT response containing an AIPAC ad, and it quickly gained millions of views on X. The posts fueled calls to cancel OpenAI subscriptions, amplifying the narrative that ChatGPT was being used for political influence.
Closer examination showed the claim was disinformation. The image contained apparent inconsistencies, including an inaccurate AIPAC logo and fonts not used in official AIPAC materials. Investigations found that the image was likely doctored from an earlier, unrelated ChatGPT screenshot showing a Target logo. That original image was linked to a known ChatGPT feature that recommends apps from approved “pilot partners,” not paid political advertising. OpenAI has confirmed that it has not launched traditional advertising in ChatGPT, and AIPAC is not among its pilot partners.
Source: NewsGuard Reality Check, L. Marchl, Anti-Israel Accounts Spread Fake Screenshot of an AIPAC Ad on ChatGPT. [online] Published 10 December 2025. Available at: https://www.newsguardrealitycheck.com/p/chatgpt-is-not-running-pro-israel
Strategic Messaging Around China–India–Russia Cooperation
According to a publication by Insikt Group, the August 2025 meeting between China, India, and Russia fueled speculation about a new trilateral bloc, but the report concludes that a durable alliance is unlikely. Public messaging around the meeting risks overstating unity and can mislead audiences into believing a cohesive anti-Western bloc is forming.
Disinformation and strategic signaling play an indirect role in this dynamic. All three states promote narratives that emphasize resistance to U.S. influence and sanctions, which can obscure underlying divergences. For example, claims of growing unity stand in contrast to concrete actions, such as China and India reducing Russian oil imports after new U.S. sanctions. The report warns against taking symbolic summits or official statements at face value. While China and Russia consistently frame cooperation as part of a long-term push against Western dominance, India balances this messaging with efforts to maintain ties with the United States. The result is a complex information environment where narratives of trilateral strength may function more as strategic signaling than as evidence of a genuine, lasting alliance.
Source: Insikt Group, Implications of Russia-India-China Trilateral Cooperation. [online] Published 10 December 2025. Available at: https://www.recordedfuture.com/research/implications-of-russia-india-china-trilateral-cooperation
[Appendix - Frameworks to Counter Disinformation]
UK Sanctions Russian and Chinese Groups Over Information Warfare
As reported by The Record, Britain has escalated pressure with fresh sanctions on Russian and Chinese actors blamed for cyber offensives and influence campaigns targeting Western societies. Announcing the measures, Foreign Secretary Yvette Cooper warned of escalating hybrid threats that blend cyber operations, information warfare, and physical sabotage.
The sanctions target seven Russian individuals and organizations, including the pro-war Telegram channel Rybar and its co-owner, Mikhail Zvinchuk; the GRU-linked organization Pravfond; and the Centre for Geopolitical Expertise, founded by nationalist ideologue Aleksandr Dugin. Several of those sanctioned have been linked to Russian intelligence activities and prior influence operations, and some are already wanted by U.S. authorities. The U.K. also sanctioned two Chinese technology firms, i-Soon and Integrity Technology Group, accused of supporting cyber espionage operations against British and allied targets.
Cooper framed the measures as part of a broader effort to counter Russia’s war against Ukraine and the broader campaign of cyberattacks, disinformation, and intimidation spilling across Europe. She argued that Europe must reduce its reliance on the United States and develop new coalitions to confront these threats. Critics argue that sanctions alone are insufficient to deter state-backed cyber and information warfare and call instead for a more proactive and coordinated European response.
Source: The Record, A. Martin, UK Sanctions Russia-China Entities Information Warfare. [online] Published 10 December 2025. Available at: https://therecord.media/uk-sanctions-russia-china-entities-information-warfare
Generative AI Amplifies Disinformation and Hostile Influence Risks
The GenAI Report identifies disinformation as one of the most immediate and concrete risks posed by generative AI, particularly through the large-scale production of synthetic text, images, audio, and video that can convincingly mimic authentic human communication. These capabilities significantly lower the cost, skill threshold, and time required to create misleading or false content, enabling rapid amplification of propaganda, deepfakes, impersonation, and other forms of information manipulation that can distort public discourse and undermine trust in democratic processes.
The authors describe how malicious or irresponsible use of generative AI exploits structural weaknesses in the digital information ecosystem, including the speed of online dissemination, the opacity of model training data, and the difficulty of distinguishing synthetic from authentic content. Tactics highlighted include using AI to mass-produce persuasive narratives, generate fake imagery or video, and automate disinformation campaigns at scale. The report notes that both closed- and open-source models can be repurposed by malicious actors, particularly once models are released or fine-tuned beyond the original developer’s control.
In response, the report maps how governments and international bodies are seeking to mitigate disinformation risks through regulatory, co-regulatory, and transparency-based approaches, including labeling requirements for AI-generated content, watermarking, auditing obligations, and risk-based oversight regimes. However, it underscores that regulatory efforts are constrained by uncertainty, uneven global enforcement, and the concentration of technical power in private firms. As a result, the report frames disinformation as a persistent governance challenge that requires coordinated international standards, stronger enforcement mechanisms, and sustained scrutiny of how generative AI infrastructure can be weaponized for hostile influence operations.
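As a toy illustration of why the labeling approaches mentioned above need to be tamper-evident, the sketch below attaches a signed "AI-generated" label to a piece of content and verifies it later. The key handling, record format, and function names are all illustrative assumptions, not drawn from the GenAI Report itself; real provenance standards such as C2PA manifests or model-level watermarks are substantially more involved.

```python
import hashlib
import hmac
import json

# Toy illustration of a tamper-evident "AI-generated" label. The key handling,
# record format, and field names are assumptions for this sketch; real
# provenance standards (e.g., C2PA manifests) are substantially more involved.
SECRET_KEY = b"platform-signing-key"  # assumed to be held by the labeling authority

def attach_label(content: str, generator: str) -> dict:
    record = {"content": content, "label": {"ai_generated": True, "generator": generator}}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(record: dict) -> bool:
    record = dict(record)                      # copy so the caller's record is untouched
    signature = record.pop("signature", None)  # signature covers everything else
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return signature is not None and hmac.compare_digest(signature, expected)

record = attach_label("Synthetic news blurb ...", generator="example-model")
print(verify_label(record))           # True: label is intact
record["label"]["ai_generated"] = False
print(verify_label(record))           # False: tampering breaks the signature
```

The sketch also shows the scheme's core limitation: a label only binds metadata to content, so anyone who strips both the label and the signature leaves nothing to verify, which is why regulators pair labeling with enforcement and auditing obligations.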
Source: GenAI Report REV Master. [online] Available at: https://fsi9-prod.s3.us-west-1.amazonaws.com/s3fs-public/2024-12/GenAI_Report_REV_Master_%0Aas%20of%20Dec%2012.pdf
[CRC Glossary]
The modern information environment is projected only to grow in complexity and sophistication. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.


