
Weekly Report: Cyber-based influence campaigns 23rd March – 30th March 2025

  • Writer: CRC
  • Apr 6
  • 14 min read

Updated: May 4


INTRODUCTION


Cyber-based hostile influence campaigns aim to influence target audiences by disseminating information and/or disinformation over the internet, sometimes in conjunction with cyberattacks that amplify their impact (hence the term 'Cyfluence', as opposed to cyberattacks that seek to steal information, extort money, etc.). Such hostile influence campaigns and operations can be considered a branch of Information Operations (IO) or Information Warfare (IW).


Typically, and as has been customary over the last decade, information is disseminated across various internet platforms, which constitute the different elements of a hostile influence campaign. As such, the connectivity and repetitiveness of content across these elements are the primary characteristics that define influence campaigns. Much like cyberattacks, hostile influence campaigns have become a tool for rival nations and corporations seeking to damage reputations or achieve business, political, or ideological goals. And much as in the cybersecurity arena, PR professionals and government agencies respond to negative publicity and disinformation spread through news and social media.

We use the term 'cyber-based hostile influence campaigns' because this definition also covers cyberattacks aimed at influencing (such as hacking and leaking during election periods), while excluding more traditional forms of influence, such as diplomatic, economic, and military pressure.


Between March 24 and 30, 2025, we observed, collected, and analyzed endpoints related to cyber-based hostile influence campaigns, including Cyfluence attacks. The following report summarizes the key events we consider most significant. Some campaigns involve social media and news outlets, while others utilize cyber-attack capabilities.






HOSTILE INFLUENCE CAMPAIGNS



STATE ACTORS



Russia


Russian Intelligence Engages in Overt Disinformation


EUvsDisinfo warns that Russian disinformation campaigns have long sought to portray protests in Russia and neighboring countries as Western-orchestrated. The goal is to delegitimize democratic movements, often by alleging that protesters are paid. A typical example is the false claim that the West funded the 2020 Belarus protests with $6 billion in cryptocurrencies.


After the elections in Georgia, Russian state media, including Sputnik, claimed the EU was financing protests, a narrative reinforced by an official SVR press release. While Russian intelligence usually operates covertly through proxies, leaks, and fake news platforms, this recent public messaging signals a shift. The SVR’s open statement marks growing boldness, showing that Russian intelligence services now act both covertly and openly. According to EUvsDisinfo, Russia’s hostile influence architecture comprises four interconnected layers, ranging from overt to covert. While some actors operate openly, others conceal their ties to the Kremlin.


Source: 

EUvsDisinfo, 2025. Russian spies as disinformation actors. [online] Available at:


War in Ukraine


Austria: Russian Influence Campaign Unveiled


Austria's Directorate for State Protection and Intelligence (DSN) has uncovered a Russian-led disinformation campaign orchestrated by a covert cell operating within the country. At the center of the investigation is a Bulgarian national suspected of acting as a liaison for Russian intelligence services.


She allegedly coordinated the distribution of propaganda materials intended to sway public opinion against Ukraine and President Volodymyr Zelenskyy. The campaign used online content and real-world actions, such as graffiti and stickers, which falsely appeared to originate from pro-Ukrainian activists but contained far-right and nationalist messaging. The disinformation cell became active shortly after the Russian invasion of Ukraine began in 2022. The suspect has confessed to her involvement and provided evidence pointing to additional accomplices located in Russia and the UK. According to the DSN, this campaign is part of a broader strategy of hybrid warfare designed to undermine democracy, destabilize society, and manipulate the political discourse.


Source:

DSN, 2025. DSN uncovers Russian disinformation campaign against Ukraine.

Article No: 27890. [online].


China


Global Times Rejects Allegations of Chinese Hostile Interference


DisinfoWatch analyzes an article by the Chinese state-run newspaper Global Times that denies any hostile influence by the Chinese government in Canada’s political process. Instead, the article claims that the Conservative Party of Canada is using these allegations to discredit the Liberal Party’s electoral successes. It further suggests that Canadian authorities exaggerate the issue to distract from trade and security tensions with the United States. The Global Times also criticizes the language used in official reports, arguing that terms like “highly likely” indicate a lack of credible evidence. DisinfoWatch classifies these claims as misleading: Chinese interference efforts have been well documented by intelligence agencies, parliamentary committees, and independent experts, particularly in the 2019 and 2021 elections. The criticized terminology is standard in intelligence assessments and reflects analytical caution, not a lack of evidence. These narratives reflect China’s geopolitical strategy of deflecting and undermining international criticism of its foreign influence operations.


Source:  

DisinfoWatch, 2024. PRC State Media Falsely Accuse CSIS, Conservatives of Hyping Foreign Interference Concerns. [online] Available at: https://disinfowatch.org/disinfo/prc-state-media-falsely-accuse-csis-conservatives-of-hyping-foreign-interference-concerns/ 



China's Disinformation Campaigns and Japan's Vulnerabilities


A 2025 report by DFRLab highlights that China’s expanding digital influence strategy poses a significant challenge to Japan. Due to economic rivalry, territorial disputes in the East China Sea, and its close alliance with the U.S., Japan has become a key target of Chinese disinformation campaigns. These efforts serve Beijing’s broader geopolitical aims by spreading false narratives, including misinformation about the Fukushima wastewater release and separatist claims concerning Okinawa (also see our Weekly Report: 17th March – 23rd March 2025). Coordinated campaigns amplify such messages across social media, often using fake accounts. Japan’s outdated digital infrastructure and limited institutional coordination hinder an adequate response. Although the Digital Agency was created to modernize government systems, it struggles to keep pace with evolving threats. While Japan raises concerns about disinformation internationally, its domestic response remains underdeveloped. Public demand for stronger regulation and digital defense is growing. To boost democratic resilience, Japan must accelerate its digital transformation, improve cybersecurity, and implement coordinated countermeasures.


Source:  

DFRLab, D.J. Plung and K. Thibaut, 2025. Japan’s technology paradox: the challenge of Chinese disinformation. [online] Available at: https://dfrlab.org/2025/03/25/japan-tech-chinese-disinformation/



AI RELATED ARTICLES


The Rise of Disinformation in the Age of AI


In an article on TechXplore, computer scientist Andreea Pocol discusses the growing challenges posed by generative AI, also known as GenAI. These inexpensive and widely accessible tools create highly realistic images and videos, known as "deepfakes," which are increasingly used for disinformation campaigns. The dangers include intensifying political conflicts, election interference, increased societal polarization, and erosion of trust in media and democratic institutions. According to a 2022 study by Pocol, participants could distinguish between authentic and AI-generated images with only 61% accuracy. This rate is likely to have declined further today. Previous detection indicators, such as unrealistic eyes, teeth, or hair, have become less reliable due to technological advancements. Algorithms that detect deepfakes often fail, particularly with low-resolution or poorly lit images. Despite calls for stronger regulation, the technology is advancing faster than reliable detection methods.


Source:

TechXplore, A. Pocol, 2025. As generative AI becomes more sophisticated, it becomes increasingly difficult to distinguish the real from the deepfake. [online] Available at: https://techxplore.com/news/2025-03-generative-ai-sophisticated-harder-distinguish.html



AI Spreads False Narratives and Climate Disinformation


A report by NewsGuard analyzed an article published by ScienceofClimateChange.org, a site known for climate change skepticism. It claimed that solar activity, rather than human actions, is the primary driver of global warming. Grok 3, an AI tool developed by Elon Musk’s platform "X," was credited as the lead author. The piece garnered widespread attention among climate change skeptics.

However, AI is not always a reliable source of factual information. Tools like Grok generate responses based on their training data and the structure of user prompts; if the prompts are biased or misleading, the output can reflect those distortions. Additionally, Grok 3 co-authored the article alongside known climate skeptics such as David Legates, whose influence likely shaped the direction and tone of the AI-generated content.

NewsGuard also notes that Grok contradicted the article’s claims when prompted neutrally and confirmed that human activities are a major driver of climate change.


Source:

NewsGuard, N. Dirks, 2025. No, Grok AI-Written Study Does Not Prove that Global Warming is a Natural Phenomenon. [online] Available at: https://www.newsguardrealitycheck.com/p/no-grok-ai-written-study-does-not



GENERAL REPORTS


Disinformation Targets UK MPs and Bangladesh Central Bank Chief


The Guardian reports that British MPs have raised concerns about a disinformation campaign targeting Ahsan Mansur, the Governor of the central bank of Bangladesh. The campaign began shortly after he arrived in the UK, where he is investigating assets allegedly laundered by the regime of Prime Minister Sheikh Hasina. Fabricated articles and emails sent by individuals posing as journalists contain unsubstantiated claims against Mansur and his family. His daughter is a particular focus, facing allegations of unclear sources of wealth. The effort appears aimed at damaging Mansur’s credibility as an anti-corruption advocate.

British politicians, including former Minister Tulip Siddiq, are also mentioned, raising concerns that the campaign aims to stoke political tension and hinder cooperation. Mansur, a former IMF official, believes those involved in money laundering are behind the attacks. MPs have forwarded the material to cybersecurity teams and foreign affairs committees, calling for a full investigation.


Source:

The Guardian, R. Davies, 2025. MPs think they may have been targets of ‘disinformation’ over Bangladesh inquiry. [online] Available at:



Hostile Influence Campaigns Target Canadian Electoral Process


A report by DisinfoWatch warns of coordinated foreign influence targeting Canada’s electoral process. The main actors identified are China, Russia, and U.S.-based extremist networks. While the Chinese Embassy in Ottawa and the state-run Global Times deny any interference, as noted earlier in our weekly report, Canadian intelligence agencies have documented targeted influence operations. These include disinformation campaigns, covert financing, and intimidation. In December 2024, a $1 million bounty was reportedly placed on a Conservative candidate.


Russia, meanwhile, uses state media outlets like Sputnik to discredit Liberal Party leader Mark Carney, particularly for his pro-Ukraine stance. AI-generated fake images—some showing Carney with Jeffrey Epstein—have circulated online and were reportedly created in part using Grok 3, the AI platform developed by Elon Musk’s X.


U.S. conspiracy theorist Alex Jones has also intervened, publicly endorsing PPC leader Maxime Bernier while promoting far-right narratives. This transnational entanglement of extremist actors fuels radicalization, polarization, and public distrust in democratic institutions.


Source:  

DisinfoWatch, 2025. Canadian 2025 Election Report #1. [online] Available at:



France Identified as Top Target of Foreign Hostile Influence in Europe


According to Le Monde, France has become one of the main targets of foreign hostile influence in Europe—second only to Ukraine. Prime Minister François Bayrou cited a European External Action Service report indicating that nearly one-third of the 505 documented incidents across Europe between 2023 and 2024 targeted France. While Russia is the primary actor, China has also been linked to coordinated influence efforts.


A February report by France’s disinformation watchdog, Viginum, confirmed that Russia has conducted a sustained information campaign over the past three years to justify its invasion of Ukraine and manipulate public opinion in France (a review of the Viginum report is available on our blog).

France’s support for Ukraine likely makes it a strategic target. On March 28, President Macron hosted a European security summit. However, ongoing digital influence operations may continue to undermine these diplomatic efforts.


Source:

Le Monde with AFP, 2025. PM says France is the top EU target for foreign disinformation campaigns. [online] Available at:



Combating Disinformation in Kenya and Senegal


An article in The Conversation highlights how media professionals in Kenya and Senegal are countering disinformation. Political and health-related topics are particularly affected. To verify content, journalists employ source cross-checking, reverse image searches, geolocation tools such as Google Maps, and specialized tools like InVID for video analysis.
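
Where such checks must be repeated at scale, parts of the workflow can be scripted. The minimal Python sketch below illustrates one way to automate a reverse-image lookup; it assumes access to the Google Cloud Vision API and its web-detection feature, which is offered purely as an illustration and is not one of the tools named in the article.

    # Minimal sketch: find where an image has already appeared online,
    # a common first step when checking whether a photo is recycled or miscaptioned.
    # Assumes the google-cloud-vision client library and valid API credentials.
    from google.cloud import vision

    def find_prior_uses(image_path: str) -> None:
        client = vision.ImageAnnotatorClient()
        with open(image_path, "rb") as f:
            image = vision.Image(content=f.read())
        # Web detection returns pages and images that match the submitted picture.
        web = client.web_detection(image=image).web_detection
        if web.best_guess_labels:
            print("Best guess:", web.best_guess_labels[0].label)
        for page in web.pages_with_matching_images:
            print("Seen on:", page.url)
        for match in web.full_matching_images:
            print("Identical copy:", match.url)

    find_prior_uses("suspect_photo.jpg")  # hypothetical file name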


Fact-checking platforms, such as PesaCheck, Piga Firimbi, and AfricaCheck, play a crucial role. At the same time, there is an ongoing debate over who should have the authority to define what is “true”—governments, tech companies, or independent actors.


In the long term, media literacy is the most effective defense against disinformation. In Kenya, outlets produce public tutorials; in Senegal, AfricaCheck collaborates with radio stations broadcasting in Wolof. However, structural barriers remain: a lack of curricula, language gaps, and limited political engagement. Experts argue that media literacy must be integrated into schools, workplaces, and religious institutions to build a resilient public capable of resisting manipulative narratives.


Source:

The Conversation, L. Diop, 2024. Fighting fake news: how media in Kenya and Senegal check facts. [online] Available at: https://theconversation.com/fighting-fake-news-how-media-in-kenya-and-senegal-check-facts-251123



Fake Accounts Drive Disinformation on Pope Francis’ Health


An analysis by Cyabra revealed that AI-driven fake profiles heavily influenced the online discussion about Pope Francis’ health: 31% of the accounts involved on X were fake and were responsible for spreading false claims about the Pope’s death and manipulating official Vatican communications. The disinformation campaign centered on three narratives: fabricated death reports, fake news shared by seemingly credible accounts, and distortion of official statements to sow distrust.


Approximately 42% of the overall conversation was negative. Fake profiles used emotional language, repeated messaging, and hijacked high-engagement threads to amplify their false narratives. The Vatican was forced to respond publicly, dedicating time and resources to counter these rumors. The case highlights the growing role of AI-powered disinformation in shaping public perception and distorting legitimate communication.

 

Source:  

Cyabra, R. Baruchin, 2025. Pope Francis’ Health Misinformation Fueled by Fake Profiles. [online] Available at: https://cyabra.com/blog/pope-francis-health-misinformation-fueled-by-fake-profiles/



APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION


Disinformation Cited as Justification for Facebook Suspension in PNG


ABC News reports that Papua New Guinea has temporarily blocked access to Facebook. The government described the move as a test aimed at countering disinformation, hate speech, and harmful content, carried out under anti-terrorism laws. According to media reports, the ban was imposed without a specified duration; access has since been restored. Officials argued that the measure was necessary to protect public safety and reduce the spread of false information, violent material, and pornography.


The decision sparked criticism from opposition politicians, journalists, and civil society groups, who raised concerns about its implications for freedom of expression. They viewed the action as disproportionate and warned it could set a precedent for future restrictions. The government has previously accused Facebook of enabling the spread of misinformation. Observers noted that such actions could affect long-term access to digital platforms and reduce transparency in online public discourse.


Source:

ABC News, N. Sas & B. Kora, 2025. Papua New Guinea shuts down Facebook in test to stop 'pornography, misinformation, hate speech'. [online] Available at: https://www.abc.net.au/news/2025-03-25/facebook-shut-down-in-papua-new-guinea-in-test/105092564



GLOSSARY


Information Operations

Is the employment of electronic warfare (EW), computer network operations (CNO), psychological operations (PSYOP), military deception (MILDEC), and operations security (OPSEC), in concert with specified supporting and related capabilities, to influence, disrupt, corrupt, or usurp adversarial human and automated decision-making. Information Operations (IO) are actions taken to affect adversary information and information systems. IO can sometimes be considered part of Soft Warfare.

Hybrid Warfare

Is a strategy that blends conventional (kinetic) warfare, irregular warfare, and cyber warfare with other Soft Warfare elements, such as influence methods, fake news dissemination, diplomacy, lawfare, and foreign electoral intervention.


Cyber Warfare

Is commonly known as the use of digital attacks to cause harm and/or disrupt vital computer and information systems. Experts debate the definition of cyber warfare and whether such a thing exists.

Cyfluence Attack

Is a cyberattack that aims to amplify or enhance an influence effort, as opposed to a cyberattack that seeks to steal information, extort money, damage military capability, etc.

Soft Warfare

All warfare disciplines that are not kinetic (i.e., no physical attack of sort, such as shooting, using explosives, poisoning, etc.), such as cyber warfare, economic warfare, diplomatic warfare, legal warfare (lawfare), psychological warfare, and more.

CIB

Meta’s terminology to describe Coordinated Inauthentic Behavior on its platforms, emphasizing both coordination and inauthentic behavior.

FIMI

The EU’s terminology for describing Foreign Information Manipulation and Interference, emphasizing the foreign origin of the activity.

Hostile Influence Campaign (HIC)

An information operation that seeks to influence a targeted audience for a hostile cause.

Digital Impact on Discourse (DID)

Means a non-hostile effort to influence discourse, a term usually found in marketing articles. Here, it is used to illustrate the opposite of a HIC.

Misinformation

False, inaccurate, or misleading information communicated regardless of any intention to deceive. Misinformation includes false rumors, outright lies, or the deliberate dissemination of known conspiracy theories.

Disinformation

Describes misleading information that is spread and distributed deliberately to deceive. This is a subset of misinformation. The words "misinformation" and "disinformation" have often been associated with the concept of "fake news", which some scholars define as "fabricated information that mimics news media content in form but not in organizational process or intent".

Inauthentic Behavior

Is defined by Facebook as “the use of Facebook or Instagram assets (accounts, pages, groups or events), to mislead people or Facebook: about the identity, purpose or origin of the entity that they represent; about the popularity of Facebook or Instagram content or assets; about the purpose of an audience or community; about the source or origin of content; to evade enforcement under our Community Standards”. We have broadened this term to encompass all social media platforms, mutatis mutandis.

Fake users

Also known as avatars: a generic term describing all types of users who are not legitimate social media users, i.e., bots, accounts operated by humans under a false identity, or accounts operated by humans under their real identity but solely to promote an agenda that is not their own.

Unidentified users

A generic term used to describe users on social networks that are allowed to keep their real identity undisclosed (like on Twitter, for example).

Sockpuppet accounts

A sock puppet or sockpuppet is an online identity used for deception.

Bots

Are autonomous programs on the internet that can interact with systems or users. For example, a Twitter bot is an automated Twitter account operated by computer software rather than a human. Spammy retweet botnets are sometimes used to echo messages in campaigns, and automated spam sometimes coexists with organic activity on the same group of accounts.

Repurposed accounts

Means social media accounts that were hacked or purchased and then used for purposes other than their original ones.

Fake website

Is a website designed for fraudulent or scam activity, hiding its real purpose.

Deep Assets

These are non-human deep cover assets, divided into two sub-categories:


Deep Avatars are avatars that require a lot of effort to look like real people (background story, pictures, quality friends, quality content, technical capability to have phone calls, etc.).


Deep platforms are platforms that enable a wide range of activities, such as websites, Facebook pages, etc., and that mask the real identity of whoever is behind them (unattributed). For example, a news website publishing daily articles and videos, represented on social media by users who identify themselves as the website’s representatives.

Real platforms

Is an actual entity (company, NGO, website, etc.) based on real people (attributed) doing real work. For example, a private sector influence research center that publishes research on influence operations, either globally or locally.

Astroturfing

Takes place when a coordinating actor creates a false impression of grassroots support.

Cyberbullying

is when someone bullies or harasses others on the internet, particularly on social media sites. Cyberbullying behavior can include posting rumors, threats, sexual remarks, personal information, or hate speech. Bullying or harassment can be identified by repeated behavior and an intent to harm.



DISCLAIMER


Copyright and License of Product 

This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.


Disclaimer of Warranties

The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.


Accuracy of Information 

The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.


Limitation of Liability

To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.


Indemnification

The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.


Third-Party Rights

The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.


Governing Law and Jurisdiction 

This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid.



 
 