
Weekly Report: Cyber-Based Influence Campaigns, 5th–11th of May 2025

  • Writer: CRC








TABLE OF CONTENTS


HOSTILE INFLUENCE CAMPAIGNS - STATE ACTORS
  • [Russia]

AI-RELATED ARTICLES
CYFLUENCE ATTACKS
GENERAL REPORTS
FRAMEWORKS TO COUNTER DISINFORMATION
STATE ACTORS

Russia

Latvia's Fight Against Russian Disinformation

A report by the Atlantic Council outlines how Latvia is confronting intensified Russian hostile influence, particularly since Russia’s invasion of Ukraine in 2022. Russia’s objectives in Latvia are to sow discord within Latvian society and the transatlantic alliance, discredit Ukraine, and erode trust in democratic institutions. The primary targets are Russian-speaking minorities, who make up around 25% of Latvia’s population. The Kremlin spreads narratives portraying the Latvian government as illegitimate and the West as hostile, while glorifying the Soviet past. These messages have been distributed through Russian state-controlled media (now banned), social media platforms, bots, trolls, VPNs, and, increasingly, AI-generated content.


In response, Latvia officially designated the information space as a domain of national defense, on par with military and civil preparedness. Media literacy is integrated into school and community programs. Public awareness is strengthened through campaigns and educational materials. A dedicated strategic communications unit coordinates messaging across ministries and works with tech platforms to curb disinformation. Independent media receive financial and political support. In 2021, Latvia became the first Baltic state to apply criminal law against the deliberate dissemination of harmful falsehoods, although vague legal definitions have limited convictions. Civil society plays a vital role: NGOs, investigative journalists, and volunteer groups like the Baltic Elves monitor and debunk falsehoods. Cooperation with NATO enhances the detection of emerging threats such as deepfakes.


Latvia deliberately avoids offensive information operations; its focus remains on protecting democratic discourse. The remaining challenges include the lack of credible Russian-language content, the need for clearer legal tools, and the dependence on sustained international backing.


Sources: 

Russia's Disinformation Surge Around Victory Day

The EU vs. Disinfo project highlights how the Kremlin used May 9, “Victory Day,” to advance its hostile influence operations. While the domestic focus lay on distorting the historical narrative of World War II, particular attention abroad was directed at the Romanian presidential election held on May 4, 2025.


Following the annulment of Romania’s November 2024 election due to verified foreign interference (for more details, see our report “The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts”), Russian Foreign Information Manipulation and Interference (FIMI) patterns were again identifiable. In the lead-up to the May vote, a coordinated disinformation campaign unfolded across social media platforms. Its aims included discrediting pro-European candidates, amplifying extremist voices, and eroding public confidence in the electoral process. A network of 25 interconnected pages placed political advertisements worth over €260,000 without transparency or attribution.


The Kremlin’s tactics followed a familiar pattern: saturating the information environment with emotionally charged and often contradictory content; promoting political extremes while targeting democratic centrists; and systematically undermining trust in institutions.


The Romanian case reflects a broader trend in Russia’s foreign influence strategy. For the Kremlin, elections are not democratic exercises but strategic opportunities to destabilize and weaken democratic governance abroad.


Source:  

Russian Disinformation Campaigns Threaten Poland's Stability 

The Record reports that Poland has accused Russia of launching an unprecedented disinformation and cyberattack campaign aimed at disrupting its upcoming presidential election in May 2025. According to Poland’s digital affairs minister, Janusz Cieszynski, Russian-linked actors have intensified efforts to destabilize critical infrastructure, including water and sewage systems, power plants, and government agencies. Additionally, Russia is reportedly attempting to recruit Polish citizens to spread disinformation, a strategy similar to its use of local influencers during Romania’s recent elections (for more details, see our report “The Romanian Presidential Elections 2024: Analysis of Information Operations and Long-term Influence Efforts”). Russia denies any involvement in cyberattacks or election interference in either country.


A Jamestown Foundation article highlights that Russia’s shadow war against Poland combines low-level sabotage, insider espionage, informational warfare, and cyberattacks. Between 2010 and 2025, Polish authorities closed 30 subversion cases and arrested 61 individuals, with 19 of those cases and 49 of those arrests coming since 2021; this accounts for roughly 35% of Europe’s arrests linked to Russian espionage and sabotage. Recruits for these operations have shifted from ethnic Poles to predominantly Russian, Belarusian, and Ukrainian nationals. Their missions aim to reduce support for Ukraine, disrupt decision-making, erode social trust, and stoke political extremism. Countering this threat will require comprehensive measures, including media literacy, institutional strengthening, and increased NATO intelligence cooperation.


Source:  

Azerbaijan Blames Russian State Hackers for Cyberattacks on Local Media

As reported in an article by The Record, Azerbaijan has attributed a cyberattack on multiple local media outlets to the Russian state-sponsored hacking group APT29, labeling it a politically motivated act of retaliation. The attack occurred after Azerbaijan shuttered the Russian House cultural center in Baku, citing espionage and legal violations, and drastically reduced the staff at Sputnik Azerbaijan, a Kremlin-backed media outlet. Azerbaijani officials claim that the hackers had infiltrated the media networks years earlier, activating their attack on the morning of February 20, 2025, starting with Baku TV and spreading to other news platforms.


Officials stated that the objective was to spread disinformation, disrupt media infrastructure, and delete or manipulate content. In March, Ukraine’s military intelligence agency (HUR) likewise reported that Russia was spreading disinformation intended to instigate an armed conflict between Armenia and Azerbaijan. Similar disinformation-driven cyberattacks have previously targeted media in Poland and Ukraine. Russia has rejected the allegations, calling them part of a baseless disinformation campaign.


Source:  

Kremlin Sources Concoct WWII Falsehood Against Ukraine

According to a report by NewsGuard's Reality Check, ahead of Victory Day, which Ukraine also celebrates, pro-Kremlin sources circulated a fabricated leaflet claiming that Ukraine's government instructed World War II veterans to hide their Soviet-era medals. The image, falsely attributed to Ukraine’s Ministry of National Memory, included a diagram allegedly showing how to conceal the awards inside a jacket. The goal was to depict Ukraine as disrespecting its veterans and erasing Soviet contributions to the war.


The image first appeared on a pro-Russian Telegram account and quickly spread across social media and Kremlin-linked websites, including those in the Pravda disinformation network. The Ukrainian Institute of National Memory denied any link to the leaflet, calling it likely Russian propaganda. Ukrainian law does ban Nazi and communist symbols but explicitly exempts pre-1991 war medals and awards.

Source:  

Paid South African Influencers Targeting Zelenskyy

A recent DFRLab investigation reveals that a coordinated disinformation campaign in South Africa targeted Ukrainian President Volodymyr Zelenskyy for rejecting Russia’s proposed Victory Day ceasefire. Working through a South African influencer marketplace, the campaign paid a network of influencers to amplify anti-Zelenskyy and pro-Russian narratives on X, pushing hashtags like #ZelenskyyIsWar and #May09Truce into the national trends. The effort involved 42 accounts generating 840 posts (an average of 20 per account), which amassed approximately 290,000 views within two hours. Many of these influencers had previously participated in similar campaigns advancing pro-Russian narratives.


This operation underscores the strategic use of local influencers to disseminate foreign propaganda, exploiting regional platforms to sway public opinion on international conflicts. By leveraging South Africa’s historical anti-colonial sentiments, such campaigns aim to erode support for Ukraine and legitimize Russian actions. The incident highlights the broader role of influence-for-hire networks in shaping geopolitical narratives (for more on this subject, see our blog post “Commercial Hostile Influence Networks”) and the need for stronger media literacy and regulatory measures to counter such disinformation efforts.

Source:  

AI-RELATED ARTICLES


Deepfake Trump Threatens Pakistan if It Attacks India

As reported by NewsGuard's Reality Check, amid escalating tensions between India and Pakistan in May 2025, pro-India social media users circulated two deepfake videos falsely portraying Donald Trump as threatening to destroy or erase Pakistan if it attacked India. These AI-manipulated clips featured fabricated voice-overs synced to altered footage of Trump from a 2016 speech at the Economic Club of New York.


Contrary to the claims, Trump never made such remarks, and independent AI-detection tools confirmed the videos were digitally manipulated. The videos emerged after a deadly militant attack in Indian-controlled Kashmir raised fears of a broader conflict. Pakistan denied involvement, but diplomatic relations deteriorated sharply.


Additionally, according to an article by Bellingcat, another deepfake further muddied the information landscape during this volatile period: a manipulated video falsely showing Pakistani army spokesperson Ahmed Sharif Chaudhry admitting the loss of two aircraft was shared nearly 700,000 times on X and picked up by several mainstream Indian media outlets before being debunked.


Source:  

CYFLUENCE ATTACKS


India Experiences Surge in Hacktivist Group Activity Amid Military Tensions

Cyble investigated a coordinated cyber campaign against India that followed the April 22, 2025, terror attack in Jammu and Kashmir and India's retaliatory strikes under Operation Sindoor. The campaign, conducted under the hashtag #OpIndia, involved website defacements, DDoS attacks, and online propaganda. The attacks were deliberately timed to coincide with military operations.


Although the attacks caused only temporary disruptions to government, law enforcement, and healthcare websites, the primary objective appeared to be psychological rather than technical. The campaign prioritized volume over technical sophistication: over 50% of incidents were DDoS attacks and 36% were website defacements, tactics chosen to maximize visibility and psychological impact.
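
Cyble's underlying incident data is not public, but a tactic breakdown like the one above is straightforward to reproduce once incidents are labeled. The snippet below is a minimal illustrative sketch of tallying such a tactic mix from an incident log; the record structure and field names are our own assumptions, not Cyble's format.

```python
from collections import Counter

# Hypothetical incident log; in practice this would come from a
# threat-intel feed. Field names are illustrative assumptions.
incidents = [
    {"target": "government portal", "tactic": "ddos"},
    {"target": "police website", "tactic": "defacement"},
    {"target": "hospital website", "tactic": "ddos"},
    {"target": "ministry website", "tactic": "data-leak claim"},
]

# Tally the tactic mix and print each tactic's share of the total.
counts = Counter(record["tactic"] for record in incidents)
total = sum(counts.values())
for tactic, n in counts.most_common():
    print(f"{tactic:16s} {n:3d} ({n / total:.0%})")
```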


Political and religious messages were disseminated, often aligning with pro-Pakistan narratives. Involved groups such as Keymous+, AnonSec, Nation of Savior, and Electronic Army Special Forces used social media to publicize their actions, frequently exaggerating their actual impact to amplify anti-India messaging.


Source: 

GENERAL REPORTS


Caribbean Media Faces New Challenges in the Age of AI 

In an article published by MisinfoCon and originally contributed by Global Voices, the 2025 World Press Freedom Day is described as spotlighting the impact of artificial intelligence (AI) on journalism. While AI offers benefits such as efficiency, multilingual capabilities, and data-driven analysis, both Global Voices and the Media Institute of the Caribbean (MIC) warn of serious risks, including disinformation, deepfakes, surveillance, and algorithmic bias. MIC emphasized that free, AI-generated content increasingly competes with high-quality journalism, which is expensive to produce, a challenge particularly acute in the Caribbean, where shrinking ad revenues and fragile markets threaten media viability. Between 15% and 25% of advertising income is already diverted to tech giants like Meta and Google, whose platforms dominate access to information and undermine the financial foundations of independent journalism. MIC President Kiran Maharaj has called for fair AI governance to protect democratic discourse and the sustainability of public interest media.


At the same time, regional media face the dual burden of environmental crises and digital threats. Misinformation during natural disasters can have devastating consequences. While AI can enhance emergency response through real-time alerts and forecasting, its misuse remains a serious concern. To address this, MIC has proposed policies including taxing technology companies and reinvesting the revenue into journalism, exploring AI-driven revenue models, and—in line with UNESCO’s AI Road Map—establishing a regional AI Ethics Task Force to audit algorithmic bias and promote content verification standards.


Source:  

The Impact of the Digital Services Act on Disinformation on Facebook 

A recent study by the NATO Strategic Communications Centre of Excellence assessed the early impact of the EU Digital Services Act (DSA) on harmful content on Facebook, focusing on Polish and Lithuanian accounts. Using a multi-stage AI analysis of over 2,300 posts from 2023 and 2024, the research found that hate speech, particularly targeting protected groups, remains the dominant form of harmful content, accounting for 90% of flagged posts in both years.
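
The study does not publish its analysis pipeline. Purely as an illustrative sketch, a multi-stage screening pass over posts might be structured as below, using an off-the-shelf zero-shot classifier; the model, labels, and threshold are our own assumptions, not the Centre's setup (and an English-only model is shown, whereas Polish- and Lithuanian-language posts would require a multilingual one).

```python
# pip install transformers torch
from transformers import pipeline

# Illustrative two-stage screen: stage 1 flags potentially harmful posts,
# stage 2 assigns a category to flagged posts. Model, labels, and the
# threshold are assumptions for this sketch, not the study's actual setup.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

STAGE1_LABELS = ["harmful content", "benign content"]
STAGE2_LABELS = ["hate speech", "disinformation", "harassment"]
THRESHOLD = 0.7  # minimum confidence to treat a post as harmful

def screen_post(text: str) -> str:
    # Stage 1: coarse harmful/benign split.
    first = classifier(text, candidate_labels=STAGE1_LABELS)
    if first["labels"][0] != "harmful content" or first["scores"][0] < THRESHOLD:
        return "benign"
    # Stage 2: finer-grained category for posts flagged as harmful.
    second = classifier(text, candidate_labels=STAGE2_LABELS)
    return second["labels"][0]

print(screen_post("Example post text to be screened."))
```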


While Lithuania saw an 11% decline in such content in 2024, Poland experienced a 55% increase, with a dramatic 128% surge within Facebook groups. This highlights significant moderation gaps in group environments. Antisemitic disinformation related to the Israel–Hamas conflict was especially prevalent in Polish content.


The platform showed some progress: fact-checking activity rose in 2024, likely due to increased investment in moderation. However, the overall rate of harmful content removal declined, particularly for Lithuanian-language posts. The study concludes that despite the DSA’s promise, Facebook’s efforts yielded only partial improvements.

Source:  

FRAMEWORKS TO COUNTER DISINFORMATION

California Governor Fights Disinformation with New Fact-Checking Website

Politico reports that California Governor Gavin Newsom has launched CaliforniaFacts.com, a website aimed at combating statements deemed disinformation. The site explicitly targets narratives propagated by conservative media and influencers. Funded by his political action committee, Campaign for Democracy, it addresses misinformation spread by figures such as Donald Trump, Elon Musk, and anonymous X accounts. Newsom, who has criticized Democrats for failing to break through right-wing media ecosystems, presents the website as part of a broader strategy that includes social media responses, media appearances, and a podcast.


Source:  

EEAS Efforts Against FIMI and Disinformation

The European External Action Service (EEAS) presents strategic communication as a key tool to counter foreign information manipulation and interference (FIMI). To implement this approach globally, it has established regional Task Forces that promote EU values, support local partners, and enhance societal resilience against disinformation.


The East Stratcom Task Force (ESTF) focuses on the Eastern Partnership and Central Asia, working with civil society and media to deliver targeted campaigns such as “Share your Light,” particularly emphasizing Ukraine-related communication. In the Western Balkans, the WBTF engages in public diplomacy through initiatives like “Europeans in Action” and supports media literacy and independent journalism. The Task Force South (TFS) covers the Middle East and North Africa (MENA) region, monitoring disinformation, coordinating Arabic-language outreach, and assisting local journalists. The newest unit, the Sub-Saharan Africa Task Force (SSA TF), launched in 2023, empowers youth and media professionals through region-specific strategies like “Above the Noise.”


All Task Forces are linked through the “Connecting Media Communities” initiative, launched in 2023. It brings journalists from various regions together to exchange best practices, build professional networks, and strengthen collective resilience to FIMI. Through these coordinated efforts, the EEAS works to uphold democratic values and foster informed, engaged societies worldwide.


Source:  


GLOSSARY


Information Operations

Hybrid Warfare

Cyber Warfare

Cyfluence Attack

Soft Warfare

CIB

FIMI

Hostile Influence Campaign (HIC)

Digital Impact on Discourse (DID)

Misinformation

Disinformation

Inauthentic Behavior

Fake users

Unidentified users

Sockpuppet accounts

Bots

Repurposed accounts

Fake website

Deep Assets

Real platforms

Astroturfing

Cyberbullying


DISCLAIMER


Copyright and License of Product 

This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.


Disclaimer of Warranties

The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.


Accuracy of Information 

The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.


Limitation of Liability

To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.


Indemnification

The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.


Third-Party Rights

The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.


Governing Law and Jurisdiction 

This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.

