Weekly Report: Cyber based influence campaigns 12th - 18th of May 2025
- CRC
- May 19
- 15 min read
Updated: May 27

[Listen to the Podcast]
[Report Highlights]
A small-scale experiment published by Global Witness concludes that TikTok's algorithm disproportionately promotes far-right political content in Romania before the presidential election.
The DFRLab reports that pro-Kremlin networks appear to have supported Romanian far-right candidate George Simion in the 2025 presidential election by amplifying nationalist and anti-Western narratives across social media platforms.
An article published by ABC highlights accusations of foreign interference and disinformation tactics in the Solomon Islands. China is implicated following the resignation of newly appointed Rural Development Minister Daniel Waneoroa from the Inter-Parliamentary Alliance on China (IPAC).
As a report by the Center for Strategic and International Studies (CSIS) details, widespread disinformation campaigns significantly affected the 2025 Philippine midterm elections.
> TABLE OF CONTENTS <
HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[Google]
[TikTok]
STATE ACTORS
[Russia]
[The War in Ukraine]
[China]
CYFLUENCE ATTACKS
GENERAL REPORTS
HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[Google]
Google TAG Uncovers Global Disinformation Campaigns in Q1 2025
Google’s Threat Analysis Group (TAG) identified and disrupted several coordinated influence operations (CIOs) across its platforms in the first quarter of 2025. These campaigns were primarily linked to state-sponsored actors from Russia, Iran, and Nigeria, who used YouTube channels, Google Ads accounts, and websites to spread content supportive of their respective governments and critical of opposing parties.
Russia’s operations stood out, with over 1,300 YouTube channels taken down and numerous domains blocked. These were connected to the actor “Portal Kombat,” which pushed pro-Russian narratives in multiple languages.
The TAG report reveals a shift in disinformation tactics, including the growing use of multilingual content and a focus on regional issues to sway public opinion. The involvement of platforms such as YouTube and Google News highlights the broad scope of these efforts. The findings reflect the ongoing threat posed by state-backed disinformation and the need for constant vigilance and action to protect the integrity of information ecosystems.
Sources:
GOOGLE Threat Analysis Group, Leonard, B., 2025. TAG Bulletin: Q1 2025. [online] Available at: https://blog.google/threat-analysis-group/tag-bulletin-q1-2025/
[TikTok]
TikTok’s algorithm may lean right in Romania
Global Witness conducted a small-scale experiment in early May 2025 in Bucharest to assess TikTok’s political content recommendations ahead of Romania’s presidential election. Over two days, researchers created three new TikTok accounts on factory-reset phones to simulate users without prior history. Each account followed the official pages of both presidential candidates and watched around ten posts per candidate. Then, the For You feed was browsed for ten minutes—political posts were watched, while non-political posts were skipped. All political content shown was manually reviewed.
The results suggest that approximately three-quarters of the political content promoted by TikTok favored far-right narratives or personalities. While the methodology was limited in scope and more exploratory than empirical, it raises concerns about the role of recommendation algorithms in amplifying extremist views. TikTok’s algorithm is designed to surface content based on user interaction and interests, a model that can inadvertently prioritize polarizing or provocative material.
TikTok has rejected the findings, calling the study unscientific and misleading. However, the results echo similar concerns in other countries, including Germany. Under the EU’s Digital Services Act, platforms like TikTok are legally required to assess and mitigate risks to electoral integrity—including those posed by their algorithms. The European Commission is already investigating TikTok’s influence in the Romanian electoral context.
Source:
GLOBAL WITNESS, 2025. TikTok algorithm continues to push multiple times more far-right content to users ahead of Romanian election. [online] Available at: https://globalwitness.org/en/campaigns/digital-threats/tiktok-algorithm-continues-to-push-multiple-times-more-far-right-content-to-users-ahead-of-romanian-election/
STATE ACTORS
[Russia]
White Lies, No Lines on the Kyiv Express
NewsGuard’s Reality Check report reveals a targeted disinformation campaign launched in early May 2025 against several European leaders. The campaign centered on a blurry, low-resolution video that falsely claimed French President Emmanuel Macron, German Chancellor Friedrich Merz, and UK Prime Minister Keir Starmer were seen with cocaine during a train trip to Kyiv on May 9.
The video in question shows a white napkin and a cocktail pick. Despite its harmless content, the footage spread rapidly across Russian state media, far-right platforms, and conspiracy websites. Kremlin-linked figures amplified the false claim, including Foreign Ministry spokesperson Maria Zakharova and RT editor-in-chief Margarita Simonyan. More than 100 articles pushing the hoax appeared across Russian media networks.
The effort aimed to discredit key Western supporters of Ukraine, with a particular focus on Macron, who has emerged as one of the most vocal backers of Kyiv. This incident follows a broader Kremlin strategy of spreading fabricated drug-related claims, a tactic previously used against Ukrainian President Volodymyr Zelensky. High-resolution footage from reliable outlets such as AFP and AP disproved the allegation, showing that the supposed “evidence” was entirely misleading.
In addition, according to a Le Monde article, the Elysée broke from France’s traditional diplomatic communication by responding to the cocaine rumor with sarcasm and meme-style messaging on its official X account. This marked the first time the French presidency used such a tone, aiming to counter disinformation in real time and mirror Ukraine’s online tactics.
Source:
NEWSGUARD Reality Check, SADEGHI, M. & MAITLAND, E., 2025. It’s All a Blur: A Fuzzy Video Is Cited to Falsely Claim that European Leaders Snorted Cocaine on Their Way to Kyiv. [online] Available at: https://www.newsguardrealitycheck.com/p/its-all-a-blur-a-fuzzy-video-is-cited
LE MONDE, AUDUREAU, W., 2025. How the Elysée adapted its communication style to tackle Macron cocaine rumor. [online] Available at: https://www.lemonde.fr/en/les-decodeurs/article/2025/05/16/how-the-elysee-adapted-its-communication-style-to-tackle-macron-cocaine-rumor_6741335_8.html
Operation Overload: Experts You Know, Voices You Don’t
Findings from the Institute for Strategic Dialogue (ISD) reveal that the Russian disinformation campaign Operation Overload, first identified in 2023, has continued into 2025 with new tactics and targets. Between January and March, the campaign published at least 135 deceptive posts across platforms like X, Telegram, and Bluesky, focusing on Germany, France, and Ukraine.
The operation aims to undermine democratic trust and weaken support for Ukraine by impersonating trusted sources. It uses AI-generated voices, fake headlines, and forged logos to mimic media outlets, academics, and law enforcement. In the first quarter of 2025, over 80 organisations were impersonated, with more than three-quarters linked to public institutions. While most posts saw little engagement, one video falsely claiming USAID paid celebrities to visit Ukraine reached over 4 million views, boosted by conspiracy accounts. The rest relied on bot networks to simulate visibility.
Though its direct reach is limited, the campaign creates real-world harm, confusing users, damaging reputations, and draining fact-checkers' resources. Around 80% of identified posts remained online during analysis, increasing long-term risk.
Earlier reports by groups like CheckFirst, Recorded Future, and the Digital Forensics Research Lab have also documented how Operation Overload floods social media with hoaxes to overwhelm journalists and institutions. The 2025 phase shows a continuation of this strategy—more refined, but equally focused on destabilisation through deception.
Source:
INSTITUTE FOR STRATEGIC DIALOGUE, 2025. Stolen voices: Russia-aligned operation manipulates audio and images to impersonate experts. [online] Available at: https://www.isdglobal.org/digital_dispatches/stolen-voices-russia-aligned-operation-manipulates-audio-and-images-to-impersonate-experts/
VIGINUM: Russian Influence Campaign Storm-1516
The recent technical report by VIGINUM [For more background information, please find our recent blog post here] provides an in-depth analysis of 77 documented influence operations attributed to the Russian campaign Storm-1516. The report outlines the campaign’s overarching goals—chief among them, discrediting the Ukrainian government to weaken Western aid—while highlighting its systematic targeting of political figures in France, Germany, and the United States, especially during election periods.
Storm-1516 leverages deepfakes, staged videos, and a sophisticated distribution network involving burner and paid accounts. These narratives are amplified through pro-Russian networks and linked influence operations, including Project Lakhta and CopyCop.
The investigation draws direct lines to individuals and groups aligned with the Russian state, including exiled former U.S. law enforcement officer John Mark Dougan and figures from the Prigozhin and Dugin networks. It further implicates Yury Khoroshenky, a suspected GRU Unit 29155 operative, as a coordinator and financier.
VIGINUM concludes that Storm-1516 constitutes a clear case of foreign digital interference, posing an escalating threat to the integrity of European public discourse.
Source:
SGDSN, VIGINUM, 2025. Analyse du mode opératoire informationnel russe Storm-1516. [online] Available at: https://www.sgdsn.gouv.fr/files/files/Publications/20250507_TLP-CLEAR_NP_SGDSN_VIGINUM_Technical%20report_Storm-1516.pdf
Kremlin-Aligned Networks Target Romania’s 2025 Presidential Election
A report by the Digital Forensic Research Lab (DFRLab) reveals that pro-Kremlin networks in Romania and Moldova actively supported far-right presidential candidate George Simion during Romania’s 2025 election. These networks, which had previously criticized Simion, shifted to amplifying his nationalist and Eurosceptic messaging. Their efforts relied heavily on platforms such as Telegram and TikTok.
The DFRLab notes that Simion’s rise was enabled by the same digital infrastructure that had earlier promoted Călin Georgescu, a candidate later disqualified by Romania’s electoral bureau and Constitutional Court. Georgescu was removed from the race due to violations of campaign regulations, including opaque financing and suspected Russian-backed online operations. Moscow has denied the allegations.
The findings highlight the ongoing threat of foreign interference in democratic elections. State-aligned influence networks can manipulate public opinion and disrupt electoral processes. This interference is transnational, as shown by the involvement of Moldovan networks linked to oligarch Ilan Shor, which promoted anti-Western narratives.
Source:
DFRLab, OLARI, V., 2025. From Bucharest to Chisinau: How pro-Kremlin networks shaped Romania’s 2025 election. [online] Available at: https://dfrlab.org/2025/05/16/pro-kremlin-networks-shaping-romania-2025-election/
[The War in Ukraine]
Kremlin Glorifies War Through Religion and Disinformation Campaigns
An article published by EUvsDisinfo on May 17, 2025, describes how, in the fourth year of its full-scale invasion of Ukraine, Russia continues to pursue long-term political and territorial goals. According to the article, this includes seeking control over large parts of Ukrainian territory and weakening the country’s statehood. Russian forces reportedly target Ukraine’s infrastructure, economy, agriculture, and industry, often without distinguishing between civilian and military objectives.
EUvsDisinfo argues that the war is increasingly presented as a central element of Russia’s national identity. President Putin’s public speeches—such as the one delivered on May 9—are described as framing the war as a moral or even quasi-religious duty. The Russian Orthodox Church is said to support this framing. State-affiliated media reportedly portray war widows honoring fallen soldiers as heroic sacrifices. According to the article, this narrative also marginalizes critical civil society voices, such as the Committee of Soldiers’ Mothers of Russia, which has faced increasing legal pressure.
Regarding diplomatic efforts, the article notes that Russia has responded cautiously to initiatives such as a proposed 30-day ceasefire or direct talks between Presidents Zelenskyy and Putin. Russian state media instead emphasize the so-called “root causes” of the conflict. EUvsDisinfo interprets this as a strategy to delay or deflect negotiations. The listed demands, such as a NATO rollback, Ukrainian neutrality, and recognition of Russian claims over occupied territories, are seen not as realistic negotiation points but as political pretexts to prolong the conflict.
Source:
EUvsDisinfo, 2025. Celebrating a new war. [online] Available at: https://euvsdisinfo.eu/celebrating-a-new-war/
[China]
China Accused of Disinformation and Political Pressure in Solomon Islands
According to a report by ABC News Australia, the resignation of Solomon Islands' newly appointed Minister for Rural Development, Daniel Waneoroa, from the Inter-Parliamentary Alliance on China (IPAC) has sparked political controversy and renewed concerns about foreign interference. Waneoroa stated that he stepped down to promote political stability and align with Prime Minister Jeremiah Manele's national direction.
The Inter-Parliamentary Alliance on China (IPAC) is an international, cross-party coalition of legislators from over thirty countries. It advocates for a coordinated and values-based approach to China policy, focusing on human rights, democracy, and global security. IPAC has been outspoken on China’s policies in Hong Kong, Xinjiang, and Taiwan, and maintains ties with Taiwanese institutions.
Civil society groups and IPAC suspect that Waneoroa’s resignation followed pressure from the Chinese embassy in Honiara. According to reports, embassy officials demanded a meeting and suggested his continued affiliation with IPAC could affect development funding. The Transparency Solomon Islands group condemned the alleged interference, warning that it could undermine national sovereignty and political stability.
Waneoroa’s role in IPAC was particularly sensitive, given that the Solomon Islands established formal ties with China in 2019, ending diplomatic relations with Taiwan. The Chinese embassy denied any wrongdoing, calling the allegations baseless and reaffirming its support for Solomon Islands’ sovereignty.
Source:
ABC NEWS, DZIEDZIC, S. & AUMANU-LEONG, C., 2025. China is accused of foreign interference in Solomon Islands after minister quits international group. [online] Available at: https://www.abc.net.au/news/2025-05-12/china-embassy-solomon-islands-embroiled-foreign-interference/105280538
CYFLUENCE ATTACKS
Cyber Attacks Persist After India-Pakistan Ceasefire
A recent post by CyberKnow states that despite the ceasefire announced several days ago between India and Pakistan, hacktivist groups remain active and continue to manipulate the information space. As noted in our last Weekly Review (more information available here), the primary threat now lies less in technical attacks and more in spreading misleading or exaggerated claims.
One example is the widely circulated report that cyberattacks took 70% of India’s power grid offline. According to CyberKnow, this is part of a disinformation campaign to generate public uncertainty and confusion.
DDoS attacks and website defacements remain the most commonly used methods. In addition, new groups continue to emerge, either announcing their intent to get involved or already carrying out attacks.
Several alleged data breaches have also been reported recently. However, CyberKnow clarifies that many cases are fake or based on publicly available information misrepresented as sensitive or compromised data. These developments highlight how information manipulation has become a key element of modern cyber conflict—shaping public perception.
Source:
CyberKnow (@Cyberknow20), 2025. India-Pakistan Cybertracker #2. [online] Available at: https://x.com/Cyberknow20/status/1922269417137942839
GENERAL REPORTS
The Impact of Disinformation on the Philippine Midterm Elections
In an article, the Center for Strategic and International Studies (CSIS) outlines key developments from the Philippine midterm elections held on May 12, 2025. Voters elected officials at all levels, with particular attention on the 12 contested Senate seats—widely seen as a preview of the 2028 presidential race.
Three major political blocs shaped the vote: the Marcos administration, the Duterte camp, and a resurgent liberal opposition. President Marcos’s slate secured six Senate seats, fewer than expected. Duterte allies won four seats, and former president Rodrigo Duterte was elected mayor of Davao City despite being detained by the International Criminal Court. The liberal camp exceeded expectations, with figures like Bam Aquino and Kiko Pangilinan returning to national office.
Millennials and Gen Z comprised over 60% of registered voters and generated more than 70% of political engagement online. Astroturfing played a significant role, as the Philippine Center for Investigative Journalism found over 100 Facebook pages posing as news outlets, which spent over $860,000 on political advertising.
The Duterte camp revived its disinformation networks to portray Duterte as a political victim. On X, over 30% of pro-Duterte accounts were reportedly fake. Meanwhile, the Marcos camp has promoted anti-disinformation measures, though critics argue these also serve partisan interests.
Source:
CSIS, Quitzon, J., 2025. Philippines Votes 2025: A Power Shift in the Senate. [online] Available at: https://www.csis.org/analysis/philippines-votes-2025-power-shift-senate
CSIS Futures: The Collapse of Trust in a Connected World
“Trust Fails” is part of the Scenarios That Could Define 2035 series by the Center for Strategic and International Studies (CSIS). The format combines future-oriented narrative scenario development with expert commentary to explore potential global trends. This one, written by Jon B. Alterman with contributions from cybersecurity and AI experts, imagines a world where trust between individuals, institutions, and nations collapses by 2035 due to technological misuse.
The authors highlight how technology has historically supported global trust through secure transactions, verified identities, and reliable communication. However, they warn that the same digital infrastructure is now being used to undermine confidence. Deepfakes, synthetic media, and AI-generated disinformation allow bad actors to falsify events, damage reputations, and disrupt public understanding.
A key focus is the role of social media algorithms, which amplify emotionally charged misinformation while downplaying corrections. Malicious actors can generate large volumes of disinformation and optimize it through AI-driven testing. Once public confidence in digital information systems erodes, everything from journalism to government records becomes suspect.
In terms of cyberspace, the scenario anticipates a future where identity verification becomes more difficult, cyberattacks become more disruptive, and digital platforms lose legitimacy. Economic systems slow as verification costs rise, and political polarization deepens. States with weaker digital infrastructure suffer most, facing exclusion from investment, cooperation, and secure information flows. The scenario is a stark warning: trust may become a casualty of technological advancement. [Click here to explore other scenarios].
Source:
Center for Strategic and International Studies (CSIS), Alterman, J., Allen, G., Carter, W., Byrd, C., & Spaulding, S., 2025. Trust Fails. [online] Available at: https://features.csis.org/scenarios2035/trust-fails/
[Download Report]
GLOSSARY
Information Operations
The employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision-making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.
Hybrid Warfare
A strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influence methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.
Cyber Warfare
Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.
Cyfluence Attack
Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.
Soft Warfare
All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.
CIB
Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both the coordination and the inauthentic behavior.
FIMI
The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity.
Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.
Digital Impact on Discourse (DID)
A non-hostile effort to influence discourse, usually discussed in marketing contexts. Here, it is used to illustrate the opposite of a HIC.
Misinformation
False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.
Disinformation
Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".
Inauthentic Behavior
Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards”. We have broadened this term to encompass all social media platforms, mutatis mutandis.
Fake users
AKA Avatars - a generic term for all types of users who are not legitimate social media users: bots, accounts operated by humans but not under their real identity, or accounts operated by humans under their real identity but solely to promote an agenda that is not their own.
Unidentified users
A generic term for users on social networks who are allowed to keep their real identity undisclosed (as on Twitter, for example).
Sockpuppet accounts
A sock puppet or sockpuppet is an online identity used for deception.
Bots
Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.
Repurposed accounts
Means social media accounts that were hacked or purchased, then used for different purposes than the original ones.
Fake website
Is a website designed for fraudulent or scam activity, hiding its real purpose.
Deep Assets
These are non-human deep cover assets, divided into two sub-categories:
Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.).
Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives.
Real platforms
Actual entities (companies, NGOs, websites, etc.) based on real people (attributed) doing real work. For example, a private-sector influence research center that publishes research on influence operations, either globally or locally.
Astroturfing
Takes place when a coordinating actor creates a false impression of grassroots support.
Cyberbullying
Is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.
DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.