Weekly: Cyber-based hostile influence campaigns, 1st-7th December 2025
- CRC


[Introduction]
Cyber-based hostile influence campaigns aim to sway target audiences by promoting information and/or disinformation over the internet, sometimes combined with cyber-attacks that enhance their effect.
During the last week, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). This week's report summarizes what we regard as the main events.
[Contents]
[Social Media Platforms]
[State Actors]
Russia
The War in Ukraine
China
[AI Related Articles]
[General Reports]
[Appendix - Frameworks to Counter Disinformation]
[Report Highlights]
A Reuters article stated that Macron’s effort to promote professional standards against online disinformation has triggered an intense backlash from right-wing media and political opponents, who have reframed the initiative as an attempt at censorship despite his insistence that the state should not define what counts as news.
[Social Media Platforms]
X Location Data Reveal Foreign Influence Operations
Axios highlighted the new transparency feature on the platform X in a recently published article. This feature made it visible that many well-known political accounts are not based in the United States. Some of these profiles present themselves as MAGA supporters. Others pretend to be patriotic women or run Ivanka Trump fan profiles. However, the transparency feature showed that many of these accounts are operated from Nigeria, Pakistan, Thailand, or Eastern Europe. This confirms long-standing warnings from cybersecurity experts: foreign actors are using fake profiles to influence political debate in the United States and possibly earn money through X’s revenue-sharing program.
The feature, found under “About This Account,” reveals where the users behind profiles are posting from. It quickly became clear, however, that the feature is not fully reliable, and several errors occurred early on. For example, it incorrectly showed the U.S. Department of Homeland Security’s (DHS) account as posting from Israel, which DHS had to correct publicly. Other accounts also displayed incorrect locations, causing further confusion. Many profiles previously suspected of taking part in influence or information campaigns were quickly removed. A separate concern also emerged: manipulated screenshots of the location label could be used to distort the feature’s intended purpose. Overall, the feature offers new insight into foreign-run accounts, but its reliability limits remain.
Source: Axios, A. Lotz, Politics & Policy. X's new location feature exposes political accounts based outside the U.S. [online] Published 24 November 2025. Available at: https://www.axios.com/2025/11/24/x-location-feature-foreign-political-accounts
EU Fines X for Transparency Failures
According to an article by The Record, the European Commission has fined Elon Musk’s platform X €120 million for violating the Digital Services Act (DSA), a law designed to curb online disinformation and influence operations. Regulators found that X misled users with its paid “verified” checkmark system, allowing anyone to buy credibility without proper identity checks, and failed to meet transparency rules for political ads and researcher access. These shortcomings weaken Europe’s efforts to detect and counter online manipulation and misinformation.
A core DSA requirement is that major platforms provide a real-time public repository of political advertising and enable independent researchers to access public data. The Commission stated that X’s repository lacks basic details such as ad content, topics, and funding sources, and that the platform has placed obstacles that prevent researchers from investigating coordinated information operations and systemic risks.
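To make the transparency gap concrete, the sketch below models what a minimally complete repository entry could look like. The field names and the `is_transparent` check are our own illustrative assumptions, not the DSA's legal schema or X's actual data model.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical political-ad repository entry. All field names are
# illustrative assumptions, not the DSA's wording or X's actual schema.
@dataclass
class PoliticalAdRecord:
    ad_id: str            # platform-assigned identifier
    content: str          # ad text, or a link to the creative
    topic: str            # declared subject, e.g. "election integrity"
    sponsor: str          # entity that paid for the ad
    funding_source: str   # ultimate source of funds, if different
    first_shown: date     # start of the delivery period
    last_shown: date      # end of the delivery period
    impressions: int      # aggregate reach figure

def is_transparent(record: PoliticalAdRecord) -> bool:
    # The Commission's complaint, in miniature: entries missing content,
    # topic, or funding details leave researchers unable to trace who is
    # paying to say what.
    return all([record.content, record.topic, record.sponsor,
                record.funding_source])
```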
The fine has heightened political tensions, with Musk and several U.S. officials claiming the EU’s enforcement amounts to censorship. European authorities counter that opaque algorithms, inadequate verification, and restricted research access make it easier for disinformation to spread.
Source: The Record, A. Martin, EU issues €120 million fine to Elon Musk’s X under rules to tackle disinformation. [online] Published 5 December 2025. Available at: https://therecord.media/eu-fines-x-under-digital-services-act-disinformation-transparecy-rules
[State Actors]
Russia
Aéza Hosting Provider Linked to Pro-Kremlin Disinformation and Cybercrime Operations
In a recently published article, VSquare examines the Russian server-hosting company Aéza, a key technical enabler of the well-known and ongoing Russian influence network Doppelganger. This operation deploys convincing clones of Western media outlets to manipulate public discourse in Europe and the United States, relying on hosting infrastructure designed to resist takedown attempts. Although Aéza portrays itself as “apolitical,” it operates as a classic bulletproof hosting provider, offering services frequently used by cybercriminal groups and state-linked influence operations.
Particular emphasis is placed on state responses: the United States, the United Kingdom, and Australia imposed sanctions on Aéza in 2025, arguing that the company provided infrastructure for cyberattacks, illicit marketplaces, and hostile digital operations. Russia, too, initiated its own criminal proceedings. Aéza’s founders were arrested and charged with supplying hosting resources for BlackSprut, a billion-dollar darknet drug market.
Despite these actions, the European Union has refrained from imposing sanctions on Aéza, allowing the company to continue operating servers in Europe and maintain a presence through resellers and connectivity partners. This illustrates that coordinated international efforts are necessary to disrupt resilient hosting ecosystems used for cybercrime and digital hostile influence.
Source: VSquare, K. Vejnbender, From Darknet to Disinfo: How a ‘Bulletproof’ Russian Host Evades EU Sanctions. [online] Published 4 December 2025. Available at: https://vsquare.org/sanctions-west-prosecuted-in-russia-aeza-still-working-in-the-eu-doppelganger/
The War in Ukraine
Derkach’s Dual Role: Corruption Networks and Russian Influence Ops
A recent Jamestown Foundation analysis indicates that a Ukrainian corruption probe may have uncovered that former parliamentarian and current Russian senator Andrei Derkach laundered a substantial amount of stolen security funds through offices tied to his family in Kyiv.
Derkach grew up in a family with ties to Russian intelligence services. Later, he served in Soviet and post-Soviet intelligence units. He entered Ukrainian politics in the 1990s and was appointed head of Energoatom in 2006. This is described as a turning point, when he is believed to have built an FSB-linked network inside Ukraine’s nuclear sector, potentially strengthening Kremlin leverage over critical infrastructure. Long-standing protection from senior officials may have shielded him from accountability and enabled him to maintain extensive political and business ties.
The analysis also outlines Derkach’s possible role in Russian influence operations. He is portrayed as an actor who helped shape campaigns targeting the 2020 U.S. election, U.S.–Ukraine relations, and narratives framing Ukraine as dependent on Washington. His release of manipulated recordings involving Petro Poroshenko and Joe Biden is presented as an effort to generate political disruption in Kyiv and weaken Western backing. Russian-aligned media outlets, shell platforms, and domestic allies appear to have amplified these themes to deepen mistrust and institutional strain.
Source: Jamestown Foundation, T. Kuzio, Eurasia Daily Monitor / Russia Eurasia Digest, Vol. 1, Issue 37: Ukrainian Corruption Investigation Reveals Derkach’s Role. [online] Published 12 January 2025.
Ongoing False Claims Feed Drug-Use Narrative About Zelensky
As published by NewsGuard, pro-Russian accounts circulated a video clip from Ukrainian President Volodymyr Zelensky’s 7 November 2025 address, falsely claiming that cocaine fell from his nose on live camera. The posts revived a familiar propaganda narrative portraying Zelensky as a drug user and spread rapidly on X, with some accumulating millions of views.
NewsGuard examined a slowed, high-resolution version of the clip from Zelensky’s official YouTube channel. The footage shows the droplet originating from his mouth, and its appearance is consistent with saliva. Pro-Kremlin accounts nevertheless amplified the low-quality reposts to create ambiguity and push the drug-use narrative. Independent assessments of such claims have found no credible evidence to support them.
Source: NewsGuard, I. Chomnalez, Cocaine? No, Saliva: NewsGuard’s False Claim of the Week. [online] Published 5 December 2025. Available at: https://www.newsguardrealitycheck.com/p/cocaine-no-saliva-newsguards-false
The Kremlin’s Fake Peace Rhetoric About the War in Ukraine
According to a report by EU vs. Disinfo, Russia has repeatedly used diplomacy as a façade while pursuing domination over its neighbors. Despite public statements about peace, the Kremlin runs foreign information manipulation and interference (FIMI) campaigns to paint Moscow as reasonable and Ukraine as the aggressor. This strategy has been visible for decades, from covert backing of separatists in Moldova and Georgia to denying direct involvement while entrenching troops and recognizing breakaway regions as “independent.”
In Ukraine, disinformation is central to Russia’s war. After annexing Crimea and secretly intervening in Donbas, Moscow signed the Minsk agreements with no intention of complying. Instead, it violated ceasefires, supported separatists, and accused Ukraine of blocking peace. Putin now claims Russia did not start the war and is merely protecting Donbas, narratives that ignore overwhelming evidence of Russian orchestration and military aggression. This pattern continues today: missile and drone attacks often follow peace talks, as seen after the negotiations in Geneva. Meanwhile, Russian propaganda blames Ukraine, the EU, and the UK for “sabotaging peace.”
Source: EU vs. Disinfo, Russia’s peace rhetoric: A smokescreen for aggression. [online] Available at: https://euvsdisinfo.eu/russias-peace-rhetoric-a-smokescreen-for-aggression/
China
China Turns to Manipulated Footage to Shape Strategic Narratives
According to a report by NewsGuard, pro-China sources have circulated manipulated videos that falsely depict residents of Japan’s Ryukyu Islands claiming to be Chinese rather than Japanese. The coordinated campaign appears aimed at strengthening Beijing’s territorial claims amid heightened tensions, even though the Ryukyu Islands, including Okinawa, have been part of Japan since 1879.
NewsGuard identified more than 100 such videos across multiple platforms, all featuring Japanese influencers discussing everyday topics in Japanese. Chinese accounts added fabricated Mandarin captions to portray these influencers, incorrectly, as Ryukyu locals who support China’s claims. None of the people in these videos discusses the territorial status of the Ryukyu Islands. Some accounts amassed hundreds of thousands of followers and millions of likes by repeatedly miscaptioning videos.
The disinformation surge began shortly after Japan elected Prime Minister Takaichi Sanae, known for her firm stance toward China. None of the manipulated videos contains genuine statements supporting Chinese sovereignty. Instead, they illustrate a deliberate attempt to influence public perception through miscaptioned content and fabricated claims.
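As a purely illustrative sketch of how such miscaptioning might be screened at scale (our own assumption, not NewsGuard’s actual method), one could compare the detected language of a video’s speech transcript against its caption; the `langdetect` library and the sample strings below are stand-ins:

```python
# Illustrative screening heuristic, not NewsGuard's actual workflow:
# flag videos whose spoken language differs from their caption language.
from langdetect import detect  # pip install langdetect

def caption_mismatch(transcript: str, caption: str) -> bool:
    # transcript would come from speech-to-text; caption from the post
    return detect(transcript) != detect(caption)

# A Japanese influencer chatting about food, recaptioned in Mandarin:
transcript = "今日はラーメンを食べます。とても美味しいです。"
caption = "琉球人说:我们是中国人。"
print(caption_mismatch(transcript, caption))  # True: queue for human review
```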
Source: NewsGuard, C. Lin, Pro-China Sources Misrepresent Influencers’ Videos to Advance Territorial Claims on Japanese Islands. [online] Published 4 December 2025. Available at: https://www.newsguardrealitycheck.com/p/chinese-sources-push-territorial
[AI Related Articles]
Human Signals, Bot Noise: Understanding Narrative Momentum
A recent contribution from Graphika argues that blaming online outrage on swarms of bots misses how disinformation actually gains traction. Bot detection remains challenging because AI tools enable inauthentic accounts to produce convincing impersonations, while common warning signs, such as high repost ratios, unusual posting times, and frequent profile changes, are also typical of legitimate online communities. Earlier bot-scoring systems such as Botometer generated high false-positive rates, underscoring that “spammy” behavior can be entirely organic. K-pop fandoms are a prime example: their massive output reflects genuine coordination rather than automated manipulation.
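To see why such signals misfire, consider a toy version of a heuristic bot score. This is a deliberately naive sketch of our own, not Botometer’s actual model:

```python
# Deliberately naive bot-score heuristic, NOT Botometer's model: it weights
# the "warning signs" named above and shows how they misclassify fans.
def naive_bot_score(reposts: int, original_posts: int,
                    night_post_share: float,
                    profile_changes_per_month: float) -> float:
    repost_ratio = reposts / max(1, reposts + original_posts)
    score = 0.5 * repost_ratio                       # heavy reposting
    score += 0.3 * night_post_share                  # "unusual" posting hours
    score += 0.2 * min(1.0, profile_changes_per_month / 5)  # frequent rebrands
    return score  # 0.0 = human-like, 1.0 = bot-like under this crude rule

# A devoted K-pop fan: mostly reposts, active at odd hours (time zones),
# frequent profile updates around comebacks. All organic behavior.
print(naive_bot_score(reposts=900, original_posts=100,
                      night_post_share=0.6,
                      profile_changes_per_month=4))  # ~0.79: false positive
```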
Graphika’s analysis stresses that disinformation becomes influential only when trusted community figures pick it up and bring their audiences with them. Many online backlashes start with individuals reacting to brand decisions or cultural flashpoints; bots may then attach themselves to the discussion, but they seldom generate momentum on their own. Research on vaccine debates and China-linked Spamouflage networks shows that users overwhelmingly engage with human-authored posts, while bot-driven content attracts little sustained attention.
Source: Graphika, The Graphika Team, (Don’t) Blame it on the Bots. [online] Published 3 December 2025.
Available at: https://graphika.com/posts/don-t-blame-it-on-the-bots
Google’s Nano Banana Pro: Built for Creativity, Open to Misuse
Research by NewsGuard found that Google’s new text-to-image generator, Nano Banana Pro, can produce highly realistic images that reinforce false claims. In NewsGuard’s test, the tool generated disinformation-aligned images for every prompt (30 of 30), including false depictions of politicians, public health issues, and major brands. In several cases, the model even added credible-looking details that strengthened the false narrative. The photorealistic results were convincing enough to pass as journalism or official materials.
Nano Banana Pro includes a visible watermark and an embedded SynthID marker. Still, the visible mark can be cropped out easily, and most users lack the tools required to verify invisible watermarks. Unlike competing image generators that block prompts involving public figures, Nano Banana Pro imposes no comparable restrictions.
The outcome is a powerful image model with limited guardrails, capable of supporting false narratives across politics, global conflicts, health communication, and corporate reputation. These capabilities make it a ready tool for producing misleading content and advancing hostile influence efforts.
Source: NewsGuard, I. Chomnalez & S. Rubinson, Google’s New AI Image Generator Is a Misinformation Superspreader. [online] Published 3 December 2025.
Available at: https://www.newsguardtech.com/special-reports/google-new-ai-image-generator-misinformation-superspreader/
Conversational AI Affects Political Opinions
As published in Science, recent research by Hackenburg and by Lin examined how large language models (LLMs) can shift political attitudes. Both teams conducted large-scale experiments in which participants engaged in multi-turn chat conversations with an LLM tasked with persuading them about a political issue or a real-world candidate. Across both projects, this produced more than 80,000 individual interactions, though each study remained methodologically separate.
Hackenburg’s study in the United Kingdom tested 19 models and several persuasion strategies across hundreds of policy topics. Lin’s study in the United States, Canada, and Poland assessed whether LLMs could influence views on presidential or prime ministerial candidates and on a local ballot measure. Participants rated their opinions on a 0–100 scale before and after the conversation. A clear pattern emerged: attitudes on policy issues shifted by roughly ten points on average after a single LLM dialog. Effects on well-known candidates were minor and appeared mainly among participants who were initially opposed to the candidate.
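For intuition, the snippet below reproduces the pre/post arithmetic behind such an estimate; the ratings are invented for illustration and are not data from either study:

```python
# Invented example ratings on the studies' 0-100 agreement scale; these are
# NOT data from Hackenburg's or Lin's experiments.
pre = [42, 55, 30, 61, 48]    # before the multi-turn LLM conversation
post = [55, 63, 41, 69, 58]   # after the conversation

shifts = [after - before for before, after in zip(pre, post)]
mean_shift = sum(shifts) / len(shifts)
print(f"mean attitude shift: {mean_shift:.1f} points")  # 10.0, the order of
# magnitude reported for policy issues
```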
Both teams tested multiple persuasion tactics. Personalized messages offered almost no additional impact. The most substantial effects were observed in messages containing many fact-like statements. Later checks showed that a significant portion of these statements was false, yet they still influenced participants.
The findings suggest that LLMs can alter political views primarily by providing large volumes of plausible information. Because false and accurate claims can have similar effects, the studies point to a potential risk for democratic systems: political judgments may be shifted even when the underlying information is unverified.
Source: Science, Vol. 390 Issue 6777, L. P. Argyle, Political persuasion by artificial intelligence. [online] Published 4 December 2025. Available at: https://www.science.org/doi/10.1126/science.aec9293
[General Reports]
Disinformation Emerges as a Key Global Business Risk
The World Economic Forum’s 2025 Executive Opinion Survey (EOS) shows that concerns about economic development dominate business leaders' risk perceptions worldwide. Respondents most frequently cited rising living costs, weak growth, unemployment, insufficient social protection, and high levels of debt.
At the same time, misinformation and disinformation are identified for the first time as a distinct top-five risk among G20 countries. Respondents pointed to the rapid spread of generative AI, the expansion of coordinated information operations, and increasing attempts to manipulate political debates, electoral processes, and market reactions. In North America, Europe, and East and Southeast Asia in particular, disinformation appears closely linked to “adverse AI outcomes” and “cyber insecurity,” underscoring the growing strategic relevance of this risk area.
The EOS results show that disinformation is emerging as an independent strategic risk (see reports in our research/article section: “The Attack on Luxury Brands: A case study of the weaponization of the online ecosystem by China” and “CIB Operation Targeting Western Automotive Brands”). Companies and policymakers increasingly see not just isolated incidents but a structural vulnerability in information environments, one that can influence political processes, public debate, and economic decision-making.
Source: Marsh McLennan, Global Risks for Business: Regional and Country Perceptions. [online] Published 3 December 2025.
The Rise of a Global Disinformation-for-Hire Industry
According to a report by EU vs. Disinfo, a new, privatized disinformation industry is transforming how states conduct influence operations, allowing authoritarian regimes to manipulate foreign populations while maintaining plausible deniability. Once tightly controlled by governments, propaganda and information operations are now outsourced to private firms offering “disinformation as a service,” including fake social-media campaigns, AI-generated content, hacking, and narrative management (for more, see our latest blog post). This commercialization enables malign actors to experiment with risky tactics and amplify false narratives globally, often making them appear organic or locally produced.
High-profile examples, such as Team Jorge, illustrate the scale and sophistication of this ecosystem. The firm reportedly interfered in dozens of elections using AI-powered platforms like AIMS to create thousands of fake social-media accounts with synthetic identities. Russian private companies similarly run operations that mirror former state troll farms, providing covert online assets and spreading state-aligned narratives while shielding governments from accountability. Modern campaigns increasingly operate in hybrid spaces, blending online manipulation with real-world mobilization, covert media fronts, and local influencers, making them difficult to detect.
The commercialization and automation of disinformation create an asymmetrical information warfare environment that favors autocracies. Authoritarian states can project influence globally with minimal risk, while democracies remain exposed due to open networks and legal transparency obligations.
Source: EU vs. Disinfo, The rise of the disinformation-for-hire industry. [online] Published 5 December 2025.
Macron’s Anti-Disinformation Push Sparks Backlash from Right-Wing Media and Opponents
As a Reuters article stated, French President Emmanuel Macron’s nationwide campaign to highlight the dangers of online disinformation, algorithmic manipulation, and foreign-backed narratives has ignited intense backlash from right-wing political figures and media outlets aligned with billionaire Vincent Bolloré. Macron has framed the effort as an urgent attempt to protect democratic integrity ahead of the 2027 presidential election, emphasizing the need for voluntary professional “labelling” of news outlets to promote ethical standards, while stressing that the state should play no role in determining what constitutes legitimate news.
Bolloré-owned outlets, including CNews, Europe 1, and the Journal du Dimanche, accused Macron of attempting to pressure or “bring into line” media critical of him, framing the initiative as a veiled move toward censorship. Influential hosts and commentators presented his remarks as evidence of “authoritarian drift.” At the same time, far-right leaders such as Marine Le Pen and Jordan Bardella escalated the narrative, likening Macron’s comments to the creation of a dystopian “Ministry of Truth.” In response, Macron’s office released a video rebutting the claims, arguing that the uproar demonstrated how efforts to combat disinformation can quickly be distorted. Observers noted that right-wing actors were exploiting the issue for political gain, accusing the government of censorship while themselves benefiting from a media ecosystem increasingly shaped by partisan narratives. The confrontation also unfolds against a broader backdrop in which debates over media regulation and digital information integrity have grown more polarized, mirroring struggles seen in the U.S. and U.K.
Source: Reuters, M. Rose, Macron’s campaign to fight fake news meets resistance from right-wing media. [online] Published 3 December 2025. Available at: https://www.reuters.com/business/media-telecom/macrons-campaign-fight-fake-news-meets-resistance-right-wing-media-2025-12-03/
[Appendix - Frameworks to Counter Disinformation]
Countering the Noise: EU Tackles Disinfo in Yemen
The European Union announced that, in partnership with the Studies and Economic Media Centre (SEMC), it hosted a two-day workshop in Taiz in December 2025 to address the growing challenge of disinformation in Yemen. More than 100 journalists, fact-checkers, and media professionals participated in sessions focused on verifying information, countering misinformation, and promoting responsible journalism. Practical tools and digital analysis techniques were emphasized to help participants detect and respond to false narratives. By equipping journalists and fact-checkers with the skills and networks needed to detect and counter disinformation, the EU aims to strengthen public accountability and foster a more trustworthy and resilient information environment in Yemen.
Local fact-checking platforms shared verification methodologies and strategies for identifying disinformation, while participants explored ways to improve collaboration between media outlets and fact-checking organizations. Discussions also included developing a national roadmap to combat disinformation, ensuring a coordinated and strategic approach to maintaining reliable information in the country's media landscape.
Source: EEAS, EU, SEMC organise fact-checking workshop for Yemeni journalists. [online] Published 2025.
Available at: https://www.eeas.europa.eu/delegations/yemen/eu-semc-organise-fact-checking-workshop-yemeni-journalists_en
Turkic States Seek Coordinated Strategy Against Cross-Border Digital Disinformation
According to a Euronews article, Turkic-speaking countries are increasingly seeking coordinated strategies to counter online disinformation that circulates across global digital platforms, often beyond the reach of national regulators. Kyrgyzstan’s experience illustrates both the ambition and the constraints of national laws. Although its 2022 legislation allows authorities to request the removal of false content within 24 hours and to block non-compliant websites for up to two months, officials acknowledge that enforcement collapses when misinformation spreads on platforms such as YouTube or Instagram, which operate outside domestic jurisdiction. Kyrgyz representatives argued that stronger accountability would require major tech companies to establish formal operations within the region.
At a regional media conference in Baku, officials from the Organization of Turkic States (OTS) explored collective responses to disinformation, combining regulatory measures with efforts to strengthen credible domestic media. Kazakhstan emphasized that combating misinformation cannot rely solely on restrictions; instead, unified journalistic standards and the development of trustworthy local content are necessary to build public confidence and reduce reliance on unreliable sources.
The OTS’s ongoing discussions highlight the broader challenge of regulating information flows in a digitally interconnected environment where foreign platforms might evade national controls. Member states are grappling with how to protect the public from harmful falsehoods while maintaining space for legitimate debate, and they increasingly view regional cooperation as essential for addressing disinformation and hostile influence threats that individual countries cannot effectively confront alone.
Source: Euronews Next, E. Aliyeva, Disinformation is a worldwide issue. Here is how Central Asia is combating it. [online] Published 7 December 2025. Available at: https://www.euronews.com/next/2025/12/07/disinformation-is-a-worldwide-issue-here-is-how-central-asia-is-combating-it
[CRC Glossary]
The modern information environment is projected only to grow in complexity. Yet across academic publications, legal frameworks, policy debates, and public communications, the same concepts are often described in different ways, making collaboration, cooperation, and effective action more difficult.
To ensure clarity and establish a consistent frame of reference, the CRC is maintaining a standard glossary to reduce ambiguity and promote terminological interoperability. Its scope encompasses foundational concepts, as well as emerging terms relating to Hostile Influence and Cyfluence.
As a collaborative project maintained with input from the community of experts, the CRC Glossary is intended to reflect professional consensus. We encourage you to engage with this initiative and welcome contributions via the CRC website.


