Weekly Report: Cyber-based Influence Campaigns, 26th May - 1st June 2025
- CRC

- Jun 4
Updated: Jun 9

[Report Highlights]
Global Witness's investigation ahead of Poland’s June 2025 presidential runoff revealed that TikTok’s algorithm disproportionately amplifies far-right content, posing a disinformation risk to electoral integrity.
Meta’s Q1 2025 Adversarial Threat Report reveals that coordinated disinformation networks from China, Iran, and Romania used fake accounts and cross-platform tactics to manipulate public discourse and promote state-aligned narratives.
As published by EU vs. Disinfo, Georgia prepares to implement its new Foreign Agents Registration Act (FARA) on June 1, 2025. The government, led by the Georgian Dream party, has launched a wide-reaching campaign to justify the law.
Reporters Without Borders (RSF) reports that Oleg Nesterenko, a Russian national based in France, has been spreading Kremlin propaganda through media outlets, social platforms, and conspiracy blogs, particularly targeting audiences in French-speaking Africa.
> TABLE OF CONTENTS <
HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[TikTok]
[Meta]
STATE ACTORS
[Russia]
[The War in Ukraine]
AI-RELATED ARTICLES
GENERAL REPORTS
FRAMEWORKS TO COUNTER DISINFORMATION
HOSTILE INFLUENCE CAMPAIGNS - SOCIAL MEDIA PLATFORMS
[TikTok]
Far-Right Content Amplification Threatens Polish Election Integrity
Global Witness has investigated TikTok’s recommendation algorithm and found that far-right content is disproportionately amplified in the lead-up to Poland’s June 2025 presidential runoff.
Researchers created test profiles that engaged equally with both candidates' official accounts. Despite this, the algorithm recommended over five times more pro-Nawrocki (far-right) content than pro-Trzaskowski (centrist) content. Overall, two-thirds of the political content pushed by TikTok favored far-right views, including antisemitic material.
The apparent skew is especially notable given that Trzaskowski led the first-round vote and had a significantly stronger TikTok presence than Nawrocki.
TikTok’s recommendation algorithm, which prioritizes engaging content, appears to favor the sensationalist narratives more commonly associated with the far right. This dynamic, which emphasizes and reinforces particular political perspectives over others, raises concerns about systematic bias and its implications for electoral integrity.
TikTok disputes the findings, calling the methodology flawed and denying deliberate bias. However, similar amplification patterns were observed during past elections in Germany and Romania, suggesting a broader issue in how political content is surfaced on the platform.
The European Commission is investigating TikTok’s election-related conduct under the Digital Services Act. Critics are calling for greater transparency, urging the platform to grant external researchers meaningful access to data and to disclose how engagement signals are weighted, so that their impact can be properly assessed.
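TikTok’s actual ranking system is proprietary and is not described in the investigation, but the dynamic being criticized, engagement-weighted ranking that rewards whatever provokes the strongest reactions, can be illustrated with a deliberately simplified sketch. Everything below (field names, weights, and sample posts) is a hypothetical assumption for illustration, not a description of TikTok’s implementation.

```python
# Deliberately simplified sketch of engagement-weighted ranking.
# All weights, fields, and sample posts are hypothetical assumptions;
# this is NOT TikTok's actual algorithm.
from dataclasses import dataclass


@dataclass
class Candidate:
    post_id: str
    watch_time_s: float  # average seconds watched
    likes: int
    shares: int
    comments: int


def engagement_score(c: Candidate) -> float:
    """Rank purely by predicted engagement; accuracy or balance plays no role."""
    return 0.5 * c.watch_time_s + 1.0 * c.likes + 3.0 * c.shares + 2.0 * c.comments


posts = [
    Candidate("measured_policy_explainer", watch_time_s=8, likes=40, shares=5, comments=10),
    Candidate("outrage_clip", watch_time_s=20, likes=120, shares=60, comments=90),
]

# The sensationalist clip wins the ranking simply because it provokes more
# reactions, which is the dynamic the Global Witness findings point to.
for c in sorted(posts, key=engagement_score, reverse=True):
    print(f"{c.post_id}: {engagement_score(c):.0f}")
```

The point of the sketch is only that a ranking objective built solely on engagement signals will systematically surface the more provocative of two posts, regardless of political balance; it says nothing about how TikTok weights its own signals, which is precisely the transparency critics are asking for.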
Sources:
Global Witness, 2025. TikTok algorithm recommends twice as much hard right content to users ahead of Polish election. [online] Available at: https://globalwitness.org/en/campaigns/digital-threats/tiktok-algorithm-recommends-twice-as-much-hard-right-content-to-users-ahead-of-polish-election/
[Meta]
Meta Uncovers Global Disinformation Networks from China, Iran, and Romania
Meta’s Q1 2025 Adversarial Threat Report reveals the takedown of coordinated inauthentic behavior (CIB) networks originating in China, Iran, and Romania that aimed to manipulate public opinion through fake accounts and deceptive online content. The Chinese network targeted Myanmar, Taiwan, and Japan using AI-generated profile photos and multilingual content to pose as locals, spreading pro-regime and anti-Western narratives. The Iranian operation focused on Azeri-speaking audiences in Azerbaijan and Turkey, promoting anti-U.S. and pro-Palestinian sentiments across multiple platforms, often using fake female journalist personas and popular hashtags to blend into trending conversations.
The Romanian campaign stood out for its scale: 658 Facebook accounts and $177,000 in ad spend. It sought to influence Romanian public discourse through false personas and cross-platform coordination, including on YouTube, X, and TikTok. Meta disrupted these networks before they gained substantial authentic followings. These findings highlight the persistent and evolving threat of state-linked and opportunistic actors weaponizing social platforms to sow misinformation, undermine trust, and manipulate global political narratives.
Sources:
Meta, 2025. Quarterly Adversarial Threat Report Q1 2025. [online] Available at: https://transparency.meta.com/metasecurity/threat-reporting
STATE ACTORS
[Russia]
Russian Disinformation Targets the Baltic States
The Jamestown Foundation identifies hostile influence and disinformation as central elements of Russia’s hybrid strategy targeting the Baltic states. These efforts aim to erode democratic cohesion, weaken public trust, and disrupt alignment with Western institutions, without crossing the threshold of conventional conflict.
At the core is a sustained campaign of information manipulation. Disinformation circulates through platforms like Telegram and TikTok, often supported by AI-generated content and deepfakes. Messaging is tailored to divide societies, discredit NATO and EU engagement, and diminish support for Ukraine. Russian-speaking minorities and economically vulnerable groups are frequent targets of these narratives, which are designed to exploit identity, fear, and historical grievances.
These influence operations work in tandem with other hybrid tactics, such as cyberattacks, sabotage, and covert intimidation, which are not the focus here but reinforce the overall pressure. Incidents like GPS interference, infrastructure attacks, or state-backed migration flows contribute to instability and uncertainty, creating fertile ground for manipulative messaging.
Electoral influence is pursued primarily through perception management rather than direct interference. Polarizing content, the amplification of distrust, and reputational attacks on political actors are used to undermine institutional legitimacy and democratic participation.
Together, these methods form a layered strategy of hostile influence that targets societal cohesion rather than territorial control. The approach is calibrated to remain deniable, cumulative, and difficult to attribute, placing democracies under sustained psychological and informational stress.
Source:
The Record, Antoniuk, D., 2025. Russia-linked disinformation floods Poland, Romania as voters cast ballots. [online] Available at: https://therecord.media/russia-disinformation-poland-presidential-election
France-Based Russian Propagandist Amplifies Kremlin Narratives in Africa
Reporters Without Borders (RSF) provides a detailed report on Oleg Nesterenko, a self-styled Russian propagandist who has operated from France for nearly 30 years, spreading disinformation in around 30 countries, especially French-speaking African nations. Nesterenko disseminates pro-Russian narratives using classic propaganda techniques such as victim-perpetrator reversal, conspiracy theories, and fabricated images and videos. He denies documented war crimes like the Bucha massacre and portrays Russia as a peacemaker. Highly active on social media, he posts hundreds of messages daily, often without clear financial backing, making him a “small-scale entrepreneur” of disinformation.
The report also examines his questionable background, including involvement in Nazi memorabilia trading and his radical teaching activities at a Paris business school, highlighting his ideological shift. His family’s participation in propaganda efforts is also documented. This case exemplifies how complex and multifaceted hostile influence operations by individuals can be, illustrating the challenges of identifying and combating modern hybrid warfare and propaganda.
Source:
Reporters Without Borders (RSF), Froger, A., Mzalouat, H., 2025. Conquering minds: A deep dive into the media offensive of a France-based Russian propagandist. [online] Available at: https://rsf.org/en/conquering-minds-deep-dive-media-offensive-france-based-russian-propagandist
[The War in Ukraine]
Lessons from Ukraine About Disinformation in the Cyber War
A report by the State Service of Special Communications and Information Protection of Ukraine (SSSCIP) underscores how influence and disinformation campaigns have become integral components of modern warfare, as evidenced during the first two years of the conflict in Ukraine. Russian cyber actors have systematically employed cyber operations to undermine media outlets and societal structures. In 2022 alone, over 200 attacks targeted Ukrainian news agencies, state media, and online platforms with the intent to disseminate propaganda, spread disinformation, and manipulate public opinion.
Beyond conventional tactics aimed at disrupting IT systems, such as DDoS attacks and data destruction via wiper malware, fabricated media content was increasingly used to influence and intimidate the population. A notable example is the circulation of a deepfake video purporting to show the Ukrainian president calling for surrender, intended to sow panic and erode trust in public institutions.
In 2023, these tactics evolved, with adversaries intensifying targeted “hack and leak” operations. Stolen documents and sensitive technical materials were primarily disseminated through Telegram channels to amplify psychological pressure and demoralize society. These actions exemplify hybrid warfare, combining conventional military efforts with cyber and informational campaigns to destabilize physical and societal-political domains.
The report emphasizes that cyberattacks and hostile influence operations are deeply interconnected within the framework of hybrid warfare, collectively seeking to destabilize Ukraine’s physical infrastructure alongside its social and political fabric.
Source:
RAND Corporation, Treyger, E., Williams, H. J., & D'Arrigo, A., 2025. Measuring the Reach of Russia’s Propaganda in the Russia-Ukraine War. [online] Available at: https://www.rand.org/pubs/research_briefs/RBA3450-2.html
AI-RELATED ARTICLES
AI Disinformation Is Polluting the Internet and Undermining Public Trust
TechXplore publishes an article from The Conversation explaining the “AI slop” phenomenon: low-quality, AI-generated content increasingly flooding social media platforms. AI slop includes automatically created texts, images, or videos that are often repetitive and emotionally charged to maximize attention, regardless of accuracy or quality.
This content is commonly found on Reddit, LinkedIn, Facebook, and Twitter. Estimates suggest that over half of longer English-language posts on LinkedIn are AI-generated. Approximately 25% of all internet traffic is attributed to “bad bots”—automated programs that mimic human behavior to distribute this content widely.
Bad bots are used to share, like, and comment on AI slop, boosting the visibility and accelerating the spread of these posts. These automated spreaders pretend to be real users, helping to disseminate disinformation — deliberately false or misleading information — on a large scale.
Disinformation driven by AI slop and bad bots is highly relevant to hostile influence because it manipulates political opinions, interferes with elections, and deepens societal divisions. The sheer volume and sophisticated distribution of AI slop make it harder for people to discern truth and hinder open, democratic discourse.
Combating this problem is challenging as AI technology and bot networks grow increasingly complex. Without coordinated technological, legal, and societal measures, this manipulative content risks undermining democratic processes worldwide.
Source:
TechXplore, Provided by The Conversation, Roozenbeek, J., van der Linden, S., Kyrychenko, Y., 2025. What is AI slop? Why you are seeing more fake photos and videos in your social media feeds. [online] Available at: https://techxplore.com/news/2025-05-ai-slop-fake-photos-videos.html
The Role of Narrative in Disinformation and Its Defense
TechXplore, with content provided by The Conversation, explores how storytelling shapes beliefs and how AI can be used to detect disinformation campaigns exploiting narrative techniques. Stories engage people emotionally and can be manipulated to influence public opinion, especially on social media.
Cultural understanding is essential because symbols and meanings vary between cultures; AI risks misinterpreting narratives without it. For instance, a phrase positive in one culture may be offensive in another. Training AI with diverse cultural knowledge improves its ability to spot manipulative content for specific audiences.
A significant challenge for AI is processing non-linear narratives—social media posts that jump back and forth in time or omit details. While humans handle fragmented storytelling naturally, AI struggles to extract accurate event sequences and relationships.
To tackle this, AI tools can analyze narrative structures, user personas (including social media handles), and timelines to identify coordinated disinformation efforts. These systems detect repeated storylines and suspicious posting patterns, assisting intelligence agencies, platforms, researchers, and users in flagging manipulative content.
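The article does not describe a specific implementation, but the underlying idea can be shown with a minimal sketch. The example below is hypothetical: it groups posts whose wording is nearly identical (a crude proxy for a "repeated storyline") and flags groups in which several distinct accounts posted within a short time window (a crude proxy for a "suspicious posting pattern"). All function names, thresholds, and sample data are assumptions for illustration, not part of the cited research.

```python
# Illustrative sketch only: a simple heuristic for spotting repeated storylines
# and synchronized posting. Thresholds and sample data are hypothetical.
import re
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    account: str
    text: str
    timestamp: datetime


def similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercased word sets (a crude storyline proxy)."""
    wa = set(re.findall(r"[a-z0-9']+", a.lower()))
    wb = set(re.findall(r"[a-z0-9']+", b.lower()))
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0


def cluster_storylines(posts: list[Post], threshold: float = 0.6) -> list[list[Post]]:
    """Greedy clustering: each post joins the first cluster it closely resembles."""
    clusters: list[list[Post]] = []
    for post in posts:
        for cluster in clusters:
            if similarity(post.text, cluster[0].text) >= threshold:
                cluster.append(post)
                break
        else:
            clusters.append([post])
    return clusters


def flag_coordination(clusters: list[list[Post]], min_accounts: int = 3,
                      window_minutes: float = 30.0) -> list[list[Post]]:
    """Flag clusters where several distinct accounts pushed the same storyline
    within a narrow time window."""
    flagged = []
    for cluster in clusters:
        accounts = {p.account for p in cluster}
        times = sorted(p.timestamp for p in cluster)
        span = (times[-1] - times[0]).total_seconds() / 60.0
        if len(accounts) >= min_accounts and span <= window_minutes:
            flagged.append(cluster)
    return flagged


if __name__ == "__main__":
    sample = [
        Post("acct_a", "Breaking: officials hiding the truth about the incident", datetime(2025, 5, 28, 10, 0)),
        Post("acct_b", "officials are hiding the truth about the incident, breaking news", datetime(2025, 5, 28, 10, 5)),
        Post("acct_c", "The officials hiding truth about the incident", datetime(2025, 5, 28, 10, 12)),
        Post("acct_d", "Weather looks great for the weekend hike", datetime(2025, 5, 28, 11, 0)),
    ]
    for cluster in flag_coordination(cluster_storylines(sample)):
        print([p.account for p in cluster])  # -> ['acct_a', 'acct_b', 'acct_c']
```

Real systems of the kind the article describes would add persona and timeline analysis, multilingual models, and the cultural context discussed above; the sketch only shows the basic shape of such a heuristic.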
Source:
TechXplore, Provided by The Conversation, Finlayson, M., Islam, A.A., 2025. Weaponized storytelling: How AI is helping researchers sniff out disinformation campaigns. [online] Available at: https://techxplore.com/news/2025-05-weaponized-storytelling-ai-disinformation-campaigns.html
GENERAL REPORTS
Belarusian Disinformation Targets Poland Through Social Media
The Digital Forensic Research Lab (DFRLab) reports that Belarusian state media, specifically the Polish-language edition of Radio Belarus linked to the sanctioned entity Beltelradio, is spreading disinformation targeting Polish audiences on platforms such as TikTok, YouTube, X, and Facebook, actively circumventing European Union (EU) sanctions. As of May 7, 2025, these accounts had published over 7,790 posts and videos, garnering 16 million views and at least 542,000 engagements.
This digital campaign aims to influence Poland’s 2025 presidential elections by undermining trust in democratic institutions, amplifying polarizing narratives, and either discrediting or supporting specific candidates.
Although platforms are legally obligated under the EU Digital Services Act (DSA) to block sanctioned content, these channels were reported in December 2024 but remained broadly accessible throughout the election campaign. Only TikTok acted shortly before the election by applying geofencing. Platforms have ongoing obligations under the DSA to identify and mitigate systemic election risks and illegal content, but enforcement has been insufficient in this case.
Source:
DFRLAB, Gigitashvili, G., Hoffman, M., Nazari, S., 2025. Banned, yet broadcasting: Sanctioned Belarusian state media influencing the Polish elections. [online] Available at: https://dfrlab.org/2025/05/29/banned-yet-broadcasting-how-sanctioned-belarusian-state-media-is-influencing-the-polish-elections-on-social-media/
Georgia's FARA Law Promoted Through Kremlin-Backed Disinformation
As published by EU vs Disinfo, Georgia plans to implement its new Foreign Agents Registration Act (FARA) on June 1, 2025. The government, led by the Georgian Dream party, has launched a campaign to justify the law. This campaign includes elements of disinformation aligned with pro-Kremlin narratives. Officials claim the law aims to protect national sovereignty and prevent foreign-funded unrest. Critics argue the legislation is intended to restrict civil society and independent media, drawing comparisons to Russia’s foreign agent law, despite government statements that it is modeled after the U.S. FARA.
The disinformation campaign portrays NGOs and critical media as agents of foreign interference. Leaders of Georgian Dream and affiliated media promote theories of a “Global War Party” and allege that Western governments are paying 120 euros per protester to fuel unrest. These claims, including allegations that NGOs are plotting a revolution, are echoed by pro-Kremlin and Azerbaijani outlets, framing the law as a defense against Western destabilization.
Although the government emphasizes similarities with the U.S. FARA, observers note significant differences in intent and legal protections. More than 100 civic and media organizations warn that the law could be used to suppress dissent. Unlike the U.S. version, which targets espionage-like activities and includes judicial oversight, Georgia’s law applies more broadly and lacks these safeguards, raising concerns that it may serve as an authoritarian tool under the pretext of transparency and national security.
Source:
EUvsDisinfo, 2025. Georgia’s Foreign Agents law: protecting sovereignty or attacking civil society? [online] Available at: https://euvsdisinfo.eu/georgias-foreign-agents-law-protecting-sovereignty-or-attacking-civil-society/
FRAMEWORKS TO COUNTER DISINFORMATION
Meta’s 2025 First Quarter Report
Meta's Integrity Report for Q1 2025 reveals that the company identified and disrupted three covert influence operations from Iran, China, and Romania, halting these campaigns before they reached significant authentic audiences. At the same time, Meta acknowledges scaling back proactive content moderation efforts, aiming to reduce enforcement errors by raising thresholds for automated content removal and relying more heavily on user reports for less severe policy violations. Despite this general reduction in proactive moderation, Meta emphasizes that targeted enforcement against serious threats, such as coordinated manipulation ("hostile influence"), remains a high priority. Additionally, the company continues enhancing its content moderation capabilities through advanced AI tools, notably large language models (LLMs), which surpass traditional AI systems and human reviewers in accuracy and efficiency.
Source:
Meta, 2025. Integrity Reports, First Quarter 2025. [online] Available at: https://transparency.meta.com/de-de/integrity-reports-q1-2025/
GLOSSARY
Information Operations
Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.
Hybrid Warfare
It is a known strategy that blends conventional warfare (kinetic), irregular warfare, and cyber-warfare with other Soft Warfare elements, such as influencing methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.
Cyber Warfare
Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.
Cyfluence Attack
Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.
Soft Warfare
All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.
CIB
Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.
FIMI
The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign activity.
Hostile Influence Campaign (HIC)
An information operation that seeks to influence a targeted audience for a hostile cause.
Digital Impact on Discourse (DID)
Means a non-hostile effort to influence discourse. Usually used in marketing articles. Here, it is used to illustrate the opposite of the HIC.
Misinformation
False, inaccurate, or misleading information communicated regardless of an intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.
Disinformation
Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".
Inauthentic Behavior
Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards”. We have broadened this term to encompass all social media platforms, mutatis mutandis.
Fake users
AKA Avatars: a generic term describing all types of users who are not legitimate social media users, i.e., bots, accounts operated by humans but not under their real identity, or accounts operated by humans under their real identity but for the sole purpose of promoting an agenda that is not theirs.
Unidentified users
A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example).
Sockpuppet accounts
A sock puppet or sockpuppet is an online identity used for deception.
Bots
Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.
Repurposed accounts
Means social media accounts that were hacked or purchased, then used for different purposes than the original ones.
Fake website
Is a website designed for fraudulent or scam activity, hiding its real purpose.
Deep Assets
These are non-human deep cover assets, divided into two sub-categories:
Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.).
Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives.
Real platforms
Are actual entities (companies, NGOs, websites, etc.) based on real people (attributed) doing real work. For example, a private-sector influence research center that publishes research on influence operations, either globally or locally.
Astroturfing
Takes place when a coordinating actor creates a false impression of grassroots support.
Cyberbullying
Is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.
DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.


