
Weekly Report: Cyber-based influence campaigns, 5th - 11th of May 2025

  • Writer: CRC
  • May 16

Updated: Jun 19








> TABLE OF CONTENTS <


HOSTILE INFLUENCE CAMPAIGNS - STATE ACTORS
  • Russia

AI-RELATED ARTICLES
CYFLUENCE ATTACKS
GENERAL REPORTS
FRAMEWORKS TO COUNTER DISINFORMATION
STATE ACTORS

Russia

Latvia's Fight Against Russian Disinformation

A report by the Atlantic Council outlines how Latvia is confronting intensified Russian hostile influence, particularly since Russia’s invasion of Ukraine in 2022. Russia’s objectives in Latvia are to sow discord within Latvian society and the transatlantic alliance, discredit Ukraine, and erode trust in democratic institutions. Russian-speaking minorities, around 25% of Latvia’s population, are primarily targeted. The Kremlin spreads narratives portraying the Latvian government as illegitimate and the West as hostile, while glorifying the Soviet past. These messages were distributed through Russian state-controlled media (now banned), social media platforms, bots, trolls, VPNs, and increasingly AI-generated content.


In response, Latvia officially designated the information space as a domain of national defense, on par with military and civil preparedness. Media literacy is integrated into school and community programs. Public awareness is strengthened through campaigns and educational materials. A dedicated strategic communications unit coordinates messaging across ministries and works with tech platforms to curb disinformation. Independent media receive financial and political support. In 2021, Latvia became the first Baltic state to apply criminal law against the deliberate dissemination of harmful falsehoods, although vague legal definitions have limited convictions. Civil society plays a vital role: NGOs, investigative journalists, and volunteer groups like the Baltic Elves monitor and debunk falsehoods. Cooperation with NATO enhances the detection of emerging threats such as deepfakes.


Latvia deliberately avoids offensive information operations; its focus remains on protecting democratic discourse. Remaining challenges include the shortage of credible Russian-language content, the need for clearer legal tools, and the need for sustained international backing.



Russia's Disinformation Surge Around Victory Day

The EUvsDisinfo project highlights how the Kremlin used May 9, “Victory Day,” to advance its hostile influence operations. While the domestic focus lay on distorting the historical narrative of World War II, particular attention abroad was directed at the Romanian presidential election held on May 4, 2025.


Following the annulment of Romania’s November 2024 election due to verified foreign interference (for more details, see our report “The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts”), Russian Foreign Information Manipulation and Interference (FIMI) patterns were again identifiable. In the lead-up to the May vote, a coordinated disinformation campaign unfolded across social media platforms. Its aims included discrediting pro-European candidates, amplifying extremist voices, and eroding public confidence in the electoral process. A network of 25 interconnected pages placed political advertisements worth over €260,000 without transparency or attribution.


The Kremlin’s tactics followed a familiar pattern: saturating the information environment with emotionally charged and often contradictory content; promoting political extremes while targeting democratic centrists; and systematically undermining trust in institutions.


The Romanian case reflects a broader trend in Russia’s foreign influence strategy. For the Kremlin, elections are not democratic exercises but strategic opportunities to destabilize and weaken democratic governance abroad.



Russian Disinformation Campaigns Threaten Poland's Stability 

The Record reports that Poland has accused Russia of launching an unprecedented disinformation and cyberattack campaign aimed at disrupting its upcoming presidential election in May 2025. According to Poland’s digital affairs minister, Janusz Cieszynski, Russian-linked actors have intensified efforts to destabilize critical infrastructure, including water and sewage systems, power plants, and government agencies. Additionally, Russia is reportedly attempting to recruit Polish citizens to spread disinformation, a strategy similar to its use of local influencers during Romania’s recent elections (for more details, see our report “The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts”). Russia denies any involvement in cyberattacks or election interference in either country.


A Jamestown Foundation article highlights that Russia’s shadow war against Poland combines low-level sabotage, insider espionage, informational warfare, and cyberattacks. Between 2010 and 2025, Polish authorities closed 30 subversion cases, leading to the arrests of 61 individuals—19 cases and 49 arrests since 2021—accounting for roughly 35% of Europe’s Russian-linked espionage and sabotage arrests. Recruits for these operations have shifted from ethnic Poles to predominantly Russian, Belarusian, and Ukrainian nationals. Their missions aim to reduce support for Ukraine, disrupt decision-making, erode social trust, and stoke extreme politics. Countering this threat will require comprehensive measures, including media literacy, institutional strengthening, and increased NATO intelligence cooperation.



Azerbaijan Blames Russian State Hackers for Cyberattacks on Local Media

As reported in an article by The Record, Azerbaijan has attributed a cyberattack on multiple local media outlets to the Russian state-sponsored hacking group APT29, labeling it a politically motivated act of retaliation. The attack occurred after Azerbaijan shuttered the Russian House cultural center in Baku, citing espionage and legal violations, and drastically reduced the staff at Sputnik Azerbaijan, a Kremlin-backed media outlet. Azerbaijani officials claim that the hackers had infiltrated the media networks years earlier, activating their attack on the morning of February 20, 2025, starting with Baku TV and spreading to other news platforms.


Officials stated that the objective was to spread disinformation, disrupt media infrastructure, and delete or manipulate content. In March, Ukraine’s military intelligence agency (HUR) likewise accused Russia of spreading disinformation intended to instigate an armed conflict between Armenia and Azerbaijan. Similar disinformation-driven cyberattacks have previously targeted media in Poland and Ukraine. Russia has rejected the allegations, calling them part of a baseless disinformation campaign.



Kremlin Sources Concoct WWII Falsehood Against Ukraine

According to a report by NewsGuard's Reality Check, ahead of Victory Day, which Ukraine also commemorates, pro-Kremlin sources circulated a fabricated leaflet claiming that Ukraine's government had instructed World War II veterans to hide their Soviet-era medals. The image, falsely attributed to Ukraine’s Institute of National Memory, included a diagram allegedly showing how to conceal the awards inside a jacket. The goal was to depict Ukraine as disrespecting its veterans and erasing Soviet contributions to the war.


The image first appeared on a pro-Russian Telegram account and quickly spread across social media and Kremlin-linked websites, including those in the Pravda disinformation network. The Ukrainian Institute of National Memory denied any link to the leaflet, calling it likely Russian propaganda. Ukrainian law does ban Nazi and communist symbols but explicitly exempts pre-1991 war medals and awards.


Paid South African Influencers Targeting Zelenskyy

A recent DFRLab investigation reveals that a coordinated disinformation campaign in South Africa targeted Ukrainian President Volodymyr Zelenskyy for rejecting Russia's proposed Victory Day ceasefire. Using a network of paid influencers recruited through a South African influencer marketplace, the campaign amplified anti-Zelenskyy and pro-Russian narratives on X, pushing hashtags such as #ZelenskyyIsWar and #May09Truce into the national trending lists. The effort involved 42 accounts generating 840 posts, which amassed approximately 290,000 views within two hours. Many of these influencers had previously participated in similar campaigns advancing pro-Russian narratives.


This operation underscores the strategic use of local influencers to disseminate foreign propaganda, exploiting regional platforms to sway public opinion on international conflicts. By leveraging South Africa's historical anti-colonial sentiments, such campaigns aim to erode support for Ukraine and legitimize Russian actions. The incident highlights the broader role of influence-for-hire networks in shaping geopolitical narratives (for more on this subject, see our blog post “Commercial Hostile Influence Networks”) and the need for heightened media literacy and regulatory measures to counter such disinformation efforts.


AI-RELATED ARTICLES


Deepfake Trump Threatens Pakistan if It Attacks India

As published by NewsGuard's Reality Check, amid escalating tensions between India and Pakistan in May 2025, pro-India social media users circulated two deepfake videos falsely portraying Donald Trump as threatening to destroy or erase Pakistan if it attacked India. These AI-manipulated clips featured fabricated voice-overs synced to altered footage of Trump from a 2016 speech at the Economic Club of New York.


Contrary to the claims, Trump never made such remarks, and independent AI-detection tools confirmed the videos were digitally manipulated. The videos emerged after a deadly militant attack in Indian-controlled Kashmir raised fears of a broader conflict. Pakistan denied involvement, but diplomatic relations deteriorated sharply.


Additionally, according to an article by Bellingcat, another deepfake further muddied the information landscape during this volatile period. A manipulated video falsely showing Pakistani army spokesperson Ahmed Sharif Chaudhry admitting the loss of two aircraft was shared nearly 700,000 times on X and was picked up by several mainstream Indian media outlets before being debunked.



CYFLUENCE ATTACKS


India Experiences Surge in Hacktivist Group Activity Amid Military Tensions

Cyble investigated a coordinated cyber campaign against India that followed the April 22, 2025, terror attack in Jammu and Kashmir and India's retaliatory strikes under Operation Sindoor. The campaign, conducted under the hashtag #OpIndia, involved website defacements, DDoS attacks, and online propaganda. The attacks were deliberately timed to coincide with military operations.


Although the attacks caused only temporary disruptions to government, law enforcement, and healthcare websites, the primary objective appeared to be psychological rather than technical. The campaign prioritized volume over technical sophistication: over 50% of incidents were DDoS attacks and 36% were website defacements, both tactics aimed at maximizing visibility and psychological impact.


Political and religious messages were disseminated, often aligned with pro-Pakistan narratives. The groups involved, such as Keymous+, AnonSec, Nation of Savior, and Electronic Army Special Forces, used social media to publicize their actions, frequently exaggerating their actual impact to amplify anti-India messaging.



GENERAL REPORTS


Caribbean Media Faces New Challenges in the Age of AI 

An article published by MisinfoCon and originally contributed by Global Voices reports that the 2025 World Press Freedom Day spotlighted the impact of artificial intelligence (AI) on journalism. While AI offers benefits such as efficiency, multilingual capabilities, and data-driven analysis, both Global Voices and the Media Institute of the Caribbean (MIC) warn of serious risks, including disinformation, deepfakes, surveillance, and algorithmic bias. MIC emphasized that free, AI-generated content increasingly competes with high-quality journalism, which is expensive to produce, a challenge particularly acute in the Caribbean, where shrinking ad revenues and fragile markets threaten media viability. Between 15% and 25% of advertising income is already diverted to tech giants like Meta and Google, whose platforms dominate access to information and undermine the financial foundations of independent journalism. MIC President Kiran Maharaj has called for fair AI governance to protect democratic discourse and the sustainability of public interest media.


At the same time, regional media face the dual burden of environmental crises and digital threats. Misinformation during natural disasters can have devastating consequences. While AI can enhance emergency response through real-time alerts and forecasting, its misuse remains a serious concern. To address this, MIC has proposed policies including taxing technology companies and reinvesting the revenue into journalism, exploring AI-driven revenue models, and—in line with UNESCO’s AI Road Map—establishing a regional AI Ethics Task Force to audit algorithmic bias and promote content verification standards.



The Impact of the Digital Services Act on Disinformation on Facebook 

A recent study by the NATO Strategic Communications Centre of Excellence assessed the early impact of the EU Digital Services Act (DSA) on harmful content on Facebook, focusing on Polish and Lithuanian accounts. Using a multi-stage AI analysis of over 2,300 posts from 2023 and 2024, the research found that hate speech, particularly targeting protected groups, remains the dominant form of harmful content, accounting for 90% of flagged posts in both years.


While Lithuania saw an 11% decline in such content in 2024, Poland experienced a 55% increase, with a dramatic 128% surge within Facebook groups. This highlights significant moderation gaps in group environments. Antisemitic disinformation related to the Israel–Hamas conflict was especially prevalent in Polish content.


The platform showed some progress: fact-checking activity rose in 2024, likely due to increased investment in moderation. However, the overall rate of harmful content removal declined, particularly for Lithuanian-language posts. The study concludes that despite the DSA’s promise, Facebook’s efforts yielded only partial improvements.


FRAMEWORKS TO COUNTER DISINFORMATION

California Governor Fights Disinformation with New Fact-Checking Website

Politico reports that California Governor Gavin Newsom has launched CaliforniaFacts.com, a website aimed at combating statements it deems disinformation. The site explicitly targets narratives propagated by conservative media and influencers. Funded by his political action committee, Campaign for Democracy, it addresses misinformation spread by figures such as Donald Trump, Elon Musk, and anonymous X accounts. Newsom, who has criticized Democrats for failing to break through right-wing media ecosystems, presents the website as part of a broader strategy that includes social media responses, media appearances, and a podcast.



EEAS Efforts Against FIMI and Disinformation

The European External Action Service (EEAS) presents strategic communication as a key tool to counter foreign information manipulation and interference (FIMI). To implement this approach globally, it has established regional Task Forces that promote EU values, support local partners, and enhance societal resilience against disinformation.


The East Stratcom Task Force (ESTF) focuses on the Eastern Partnership and Central Asia, working with civil society and media to deliver targeted campaigns such as “Share your Light,” particularly emphasizing Ukraine-related communication. In the Western Balkans, the WBTF engages in public diplomacy through initiatives like “Europeans in Action” and supports media literacy and independent journalism. The Task Force South (TFS) covers the Middle East and North Africa (MENA) region, monitoring disinformation, coordinating Arabic-language outreach, and assisting local journalists. The newest unit, the Sub-Saharan Africa Task Force (SSA TF), launched in 2023, empowers youth and media professionals through region-specific strategies like “Above the Noise.”


All Task Forces are linked through the “Connecting Media Communities” initiative, launched in 2023. It brings journalists from various regions together to exchange best practices, build professional networks, and strengthen collective resilience to FIMI. Through these coordinated efforts, the EEAS works to uphold democratic values and foster informed, engaged societies worldwide.




GLOSSARY


Information Operations

Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered a part of Soft Warfare.

Hybrid Warfare

A strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influence methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.


Cyber Warfare

Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.

Cyfluence Attack

Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.

Soft Warfare

All warfare disciplines that are not kinetic (i.e., involve no physical attack of any sort, such as shooting, using explosives, or poisoning), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.

CIB

Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.

FIMI

The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign nature of the activity.

Hostile Influence Campaign (HIC)

An information operation that seeks to influence a targeted audience for a hostile cause.

Digital Impact on Discourse (DID)

Means a non-hostile effort to influence discourse, as usually discussed in marketing articles. Here, it is used to illustrate the opposite of a HIC.

Misinformation

False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.

Disinformation

Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".

Inauthentic Behavior

Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards”. We have broadened this term to encompass all social media platforms, mutatis mutandis.

Fake users

AKA avatars: a generic term describing all types of users who are not legitimate social media users, i.e., bots, accounts operated by humans but not under their real identity, or accounts operated by humans under their real identity but solely to promote an agenda that is not their own.

Unidentified users

A generic term for users on social networks who are allowed to keep their real identity undisclosed (as on Twitter, for example).

Sockpuppet accounts

A sock puppet or sockpuppet is an online identity used for deception.

Bots

Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns. Sometimes, automated spam coexists alongside organic activity on the same group of accounts.

Repurposed accounts

Means social media accounts that were hacked or purchased, then used for different purposes than the original ones.

Fake website

Is a website designed for fraudulent or scam activity, hiding its real purpose.

Deep Assets

These are non-human deep cover assets, divided into two sub-categories:


Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.).


Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of who is behind the platform (unattributed). For example, a news website with daily content of articles and videos and representation on social media platforms by users who identify as the website representatives.

Real platforms

Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally.

Astroturfing

Takes place when a coordinating actor creates a false impression of grassroots support.

Cyberbullying

Is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.



DISCLAIMER


Copyright and License of Product 

This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.


Disclaimer of Warranties

The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.


Accuracy of Information 

The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.


Limitation of Liability

To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.


Indemnification

The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.


Third-Party Rights

The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.


Governing Law and Jurisdiction 

This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

