
Weekly Report: Cyber-based influence campaigns 21st - 27th of April 2025

  • Writer: CRC
  • Apr 30
  • 18 min read

Updated: May 9


[Listen to the Podcast]

[Download Report]


> TABLE OF CONTENTS <


SOCIAL MEDIA PLATFORMS
  • [Meta]

STATE ACTORS
  • [Russia]

  • [The War in Ukraine]

  • [China]

AI RELATED ARTICLES
GENERAL REPORTS


SOCIAL MEDIA PLATFORMS

[Meta] 

Ongoing Criticism of Meta’s Fact-Checking Policy 

TechXplore highlights recent developments regarding Meta’s decision to end its U.S. fact-checking program on Facebook in January 2025 (see Cyfluence Weekly Review, W02/2025). Since then, Meta has shifted its focus to its new “Community Notes” system.


The change has sparked concern among many independent observers. The Meta Oversight Board, which reviews the company’s content moderation practices, called the move “hasty” and warned it could increase the spread of harmful and misleading content. In response, the Board issued 17 recommendations, including a call for Meta to assess how Community Notes compares to traditional third-party fact-checking.


Critics like Michael Wagner from the School of Journalism and Mass Communication at the University of Wisconsin-Madison argue that relying on unpaid users to police misinformation across a platform with billions of users reflects a serious lapse in corporate responsibility and poses risks to public safety and human rights.


Meta has pledged to follow the Board’s decisions in individual cases but is not required to implement its broader policy recommendations.


Source:  

STATE ACTORS

[Russia]

Dutch Annual Report Warns of Expanding Russian Cyber Threats

The 2024 annual report of the Netherlands’ Military Intelligence and Security Service (MIVD) outlines security threats posed by several states, including Russia, China, and Iran. While China and Iran are primarily associated with traditional espionage, cyber espionage, and technology theft, Russia is presented as a significantly broader and more aggressive threat. A section on digital influence appears exclusively in the Russia chapter.


According to the report, Russia pursues a whole-of-society strategy in which state institutions, intelligence services, private companies, and hacktivists work cooperatively. The goal is to destabilize Western societies through the spread of pro-Russian narratives, the erosion of NATO cohesion, and the disruption of democratic processes, such as the run-up to the European elections in the Netherlands. A key and increasingly prominent component of this strategy is the use of artificial intelligence (AI) to automate and amplify digital campaigns. These activities form part of a hybrid warfare approach that combines digital tools, psychological manipulation, and sabotage. The MIVD cites specific incidents, including DDoS attacks on websites of political parties and public transport operators, which were intended to interfere with the European elections in the Netherlands.


Additionally, Infosecurity Magazine reports that the Dutch defense minister has warned of Russia’s accelerating militarization, driven by its war economy, which is advancing faster than NATO’s current efforts. According to the minister, this development calls for an urgent reinforcement of Europe’s defense capabilities. By deliberately blurring the boundaries between peace and conflict, Russia employs disinformation as a tool to fragment Western cohesion and undermine collective responses to its geopolitical ambitions.


Sources: 


Russia’s Disinformation Offensive in Moldova’s 2024 Elections

As stated in an EU vs. Disinfo article, during Moldova’s 2024 presidential election and EU referendum, Russia launched an aggressive and multifaceted disinformation campaign aimed at destabilizing the country’s democratic processes. Russia used covert and overt channels, including official state media like TASS and Sputnik Moldova. It intensified its Foreign Information Manipulation and Interference (FIMI) operations to undermine President Maia Sandu and discredit Moldova’s EU integration. The Kremlin recycled disinformation narratives from Ukraine, such as organ trafficking allegations, and tailored them to Moldovan audiences through a vast network of Telegram channels and fake local news sites. 


To flood the information space, Russian-linked platforms created Moldova-focused domains and media outlets, many tied to RT and other state-backed entities. These outlets propagated anti-EU messages, portraying European integration as a threat to Moldova’s sovereignty, a danger to Russian-speaking communities, and a provocation toward Russia. The campaign used manipulative techniques like AI-generated content, fake documents, and fabricated fact-checks to confuse the public, while even deploying a Telegram bot to pay Moldovans to spread anti-EU propaganda—weaponizing local poverty for strategic gain. 


Source:  

The Kremlin Exploits Auschwitz Liberation Anniversary to Spread Disinformation 

EUvsDisinfo reports that Russia was not invited to the international ceremony marking the 80th anniversary of the liberation of Auschwitz on 27 January 2025. The Auschwitz-Birkenau Museum declined to invite Russian representatives, citing the ongoing war of aggression against Ukraine, which it described as a “barbaric act.”


The Kremlin responded with a coordinated disinformation campaign. Russian officials and state media framed the exclusion as a result of Western “Russophobia,” historical falsification, and a supposed revival of Nazi ideology in the EU and Ukraine. Russia portrayed itself as the victim of an intentional effort to erase its role in World War II.


Key Kremlin narratives include the portrayal of Russia as the sole legitimate heir to the Soviet Union, the claim that only Russian soldiers liberated Auschwitz, and the regular labeling of Ukraine and its allies as “neo-Nazis.” Historical facts—such as the participation of around seven million Ukrainians in the Red Army—are deliberately omitted.


This rhetoric is revisionist. It distorts historical reality, relativizes the Holocaust, and equates present-day political criticism of Russia with the crimes of Nazism. Russia attempts to position itself as a contemporary victim of a “fascist West”. One striking example of this disinformation strategy is the fabricated claim that Poland’s Chief Rabbi, Michael Schudrich, called President Zelenskyy a “dictator who has tamed Nazis on a leash.”


This false statement first circulated on Russian Telegram channels. It was later repeated publicly by Russian Foreign Ministry spokesperson Maria Zakharova—illustrating a typical case of Kremlin-generated and state-amplified disinformation.


Source:  

EU Pledges Support Against Russian Disinformation in Moldova 

In a joint press conference, EU High Representative Kaja Kallas and Moldovan President Maia Sandu highlighted the escalating threat of Russian disinformation campaigns that seek to destabilize Moldova’s democratic institutions and derail its path toward EU integration. They emphasized the need for coordinated efforts to counteract these hybrid threats, which include spreading false narratives and manipulating public opinion to undermine trust in democratic processes.


The two leaders stressed that the disinformation campaigns targeting Moldova are primarily conducted by Russian state-sponsored actors who use various platforms to spread false or misleading narratives. These efforts aim to destabilize the country, weaken its democratic institutions, and erode its sovereignty. In response, the European Union is helping to protect Moldova’s democracy and electoral infrastructure. At the request of Moldovan authorities, the EU is deploying a Hybrid Rapid Response Team to strengthen institutional capacity and safeguard the integrity of upcoming elections. Additionally, the EU continues to assist Moldova in building resilience to hybrid threats and enhancing crisis preparedness.


Source:  

[The War in Ukraine]

Disinformation Campaigns Surround Chornobyl Drone Strike 

As reported by DFR Lab, the drone strike on the Chornobyl nuclear power plant on February 14, 2025, reignited global fears of a nuclear disaster and swiftly became a battleground for disinformation. While Ukrainian officials confirmed that the attack, allegedly by Russia, damaged the reactor’s protective shelter without causing a radiation leak, Russia denied involvement and blamed Ukraine. This counter-narrative was quickly amplified by Kremlin-affiliated media and over 100 Russian and Belarusian Telegram channels, many of which had already primed audiences with conspiracy theories weeks prior, claiming Ukraine was planning a provocation to frame Russia. These narratives not only accused Ukraine of staging the attack to gain sympathy before the Munich Security Conference but also questioned the structural integrity of the reactor’s protective dome, suggesting corruption and Western orchestration. The messaging weaponized the historical trauma of the 1986 Chornobyl disaster, evoking panic by implying that any attack—even a contained one—could unleash catastrophic fallout. By invoking past tragedies, Russian disinformation outlets effectively reinforced fear while positioning themselves as victims of Western and Ukrainian schemes. 


Beyond the Chornobyl incident, the same networks have recycled similar disinformation tactics, including unfounded claims of Ukrainian strikes on the Kursk NPP and alleged NATO complicity. These coordinated campaigns use anonymous sources, fabricated evidence, and high-volume repetition across social media to create confusion, deflect blame, and undermine trust in Ukrainian leadership and Western allies. 


Source:  

[China]

Australia: The dangers of using Chinese apps in election campaigns

ABC News highlights increased political activity on Chinese-language platforms during Australia’s 2025 federal election campaign. The Liberal Party and, to a lesser extent, Labor are using WeChat and RedNote to reach voters in electorates with large Chinese-Australian communities. The Liberals have published over 220 authorised ads on WeChat since January, while Labor has published about 30. Liberal candidate Scott Yung is mainly active in Bennelong, a key marginal seat. Both major parties have also run attack ads on WeChat targeting Prime Minister Anthony Albanese and Opposition Leader Peter Dutton.


RedNote poses additional problems. The platform discourages political content and uses shadow banning. Posts and candidate names do not appear in searches. Some content is technically online but remains hard to find, limiting the visibility of political messages. Not all candidates can reach voters equally. Ads may cost more or be suppressed without explanation. Such conditions open the door to foreign interference. If specific messages are blocked or disadvantaged by opaque algorithms, the fairness of the electoral process is at risk.

Source:  

AI RELATED ARTICLES


AI-Fueled Misinformation Threatens Canada's Election 

CyberNews reports that, in the lead-up to Canada’s federal election on April 28, 2025, a wave of politically themed books targeting Prime Minister Mark Carney appeared on Amazon. Many of these titles were generated using artificial intelligence. The content often included factual inaccuracies, exaggerated claims, and surreal passages. Most readers were unaware that the texts were AI-generated.


The books were published via Amazon’s Kindle Direct Publishing platform, which does not require editorial review or disclosure of AI authorship. While many of these books were later removed, they had already reached a broad audience.


The case shows how generative AI, combined with open publishing platforms, can distort political narratives at scale and with minimal oversight. This creates a potential tool for digital hostile influence, especially during election periods when the integrity of public discourse is particularly vulnerable. 


Source:  

Anthropic Discovered Sophisticated AI-Powered Influence Campaign 

A report published by Anthropic detailed the disruption of a sophisticated AI-powered disinformation operation that marked a significant evolution in online influence tactics (for more information, see our blog). This financially motivated “influence-as-a-service” provider used Anthropic’s Claude AI to orchestrate over 100 fake social media personas on X and Facebook, managing interactions, generating responses, and coordinating image creation. Unlike traditional viral disinformation campaigns, this operation emphasized persistence, relationship building, and covert integration into real user communities to gradually shape political opinions, promoting narratives that supported or undermined European, Iranian, UAE, and Kenyan interests. 


The operation demonstrated notable technical sophistication by centralizing persona management through a structured, JSON-based system. Claude AI made strategic decisions for each persona, aligning interactions with specific political goals and generating content in the persona’s native languages and styles. The coordinated use of AI tools also helped operators maintain consistent political narratives across accounts, evading traditional detection methods by mimicking authentic human behavior and engagement patterns. 
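
The report describes this persona-management structure only at a high level. Purely as an illustration of how such a JSON-based system could represent a single persona, consider the sketch below; every field name and value is a hypothetical assumption, not Anthropic’s schema.

```python
# Hypothetical sketch of one persona record in a JSON-based management system.
# Field names and values are illustrative assumptions, not Anthropic's schema.
import json

persona = {
    "persona_id": "acct-0042",
    "platform": "X",
    "language": "fr",                # content generated in the persona's native language
    "alignment": "pro-narrative-A",  # the political narrative this persona promotes
    "tone": "casual, occasionally ironic",
    "engagement_policy": {
        "reply_to_mentions": True,
        "max_posts_per_day": 5,
    },
    "history": [
        {"post_id": "123", "action": "reply",
         "summary": "agreed with a local user about energy prices"},
    ],
}

# Operators could pass structured state like this to the model so that each
# account keeps a consistent voice and political goal across interactions.
print(json.dumps(persona, indent=2, ensure_ascii=False))
```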


Anthropic’s findings highlighted a significant shift in disinformation tactics, moving from content-centric to relationship-centric strategies. The report stressed that although the campaign had limited viral impact, it effectively embedded politically aligned personas into real communities, subtly influencing conversations over time.  


Source:  

GENERAL REPORTS


Bot-Like Networks Target Canadian Election with Disinformation 

DFR Lab’s investigation revealed that bot-like accounts on X aggressively targeted Canadian political parties ahead of the 2025 federal election, amplifying disinformation through repetitive spam tactics. Primarily attacking the Liberal Party and its leader, Prime Minister Mark Carney, these accounts spread narratives linking Carney to China, Jeffrey Epstein, and the World Economic Forum. The disinformation strategy relied heavily on recycled graphics from sources like Canada Proud and Conservative leader Pierre Poilievre’s official account, rather than creating new content. 


Actors involved included suspected automated X accounts and politically motivated groups, with some disinformation campaigns previously linked to Chinese influence efforts. The findings underscore broader concerns about the erosion of election integrity and democratic discourse, as highlighted by Canada's ongoing inquiry into foreign interference. DFR Lab’s research adds to mounting evidence that inauthentic online activity, especially during critical political periods, poses a profound existential threat to democratic systems worldwide. 


Source:  

Canada's 2025 Election Targeted by Foreign Disinformation 

DFRLab investigated bot-like behavior on X (formerly Twitter) ahead of Canada’s April 2025 snap election. The analysis focused on eight English-language accounts belonging to major political parties and their leaders (Liberal, Conservative, NDP, Bloc Québécois). Between January 1 and April 17, 2025, researchers collected 3,507 posts and 2.4 million related interactions, including replies and reposts.


Using behavioral indicators such as post volume, reply speed, and linguistic patterns, DFRLab identified 150 bot-like accounts, 65 of which were created in 2025, suggesting intent to influence the election. Many accounts followed generic naming conventions and demonstrated automated or coordinated behavior. Some were later restricted by X due to “unusual activity.”


Roughly four in five accounts targeted the Liberal Party, especially Prime Minister Mark Carney, often through repeated image spam. Familiar narratives accused Carney of loyalty to China, ties to Jeffrey Epstein and Ghislaine Maxwell, and involvement in the World Economic Forum. Content was rarely original; bots reused graphics from groups like Canada Proud or the official account of Conservative leader Pierre Poilievre. Some replies contained text artifacts indicative of AI-generated content, such as spacing errors before punctuation. Conservative accounts were also targeted, but to a lesser extent.


DFRLab concludes that such inauthentic activity distorts political discourse by artificially inflating certain narratives. In a high-stakes election marked by international tensions and record early voter turnout, bots present a clear risk to democratic integrity and public trust.
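
DFRLab does not publish its detection code. Purely as an illustration of how behavioral indicators like post volume, reply speed, account age, and generic naming can be combined, a minimal heuristic scorer might look like the sketch below; all thresholds, weights, and field names are assumptions for demonstration, not DFRLab’s methodology.

```python
# Illustrative sketch only: a crude heuristic score built from the kinds of
# behavioral indicators described above. Thresholds and weights are invented.
from dataclasses import dataclass
from datetime import datetime
import re

@dataclass
class Account:
    handle: str
    created: datetime
    posts_per_day: float
    median_reply_seconds: float  # how quickly the account replies to its targets

def bot_likelihood_score(acct: Account) -> int:
    """Return a rough 0-4 score; higher means more bot-like behavior."""
    score = 0
    if acct.posts_per_day > 50:                        # unusually high post volume
        score += 1
    if acct.median_reply_seconds < 60:                 # near-instant replies
        score += 1
    if re.fullmatch(r"[A-Za-z]+\d{5,}", acct.handle):  # generic name plus long digit suffix
        score += 1
    if acct.created.year >= 2025:                      # account created shortly before the election
        score += 1
    return score

example = Account("User84736251", datetime(2025, 2, 3), 120.0, 20.0)
print(bot_likelihood_score(example))  # -> 4, i.e. worth flagging for manual review
```

In practice an analysis like DFRLab’s would rest on richer signals, such as linguistic patterns and coordination across accounts, but the sketch shows how simple behavioral thresholds can already separate obviously automated accounts from ordinary users.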

Source:  

Surge of Visual Disinformation Escalates Political Tensions 


A study described in The Conversation examined the connection between visual disinformation and political violence. The aim was to investigate how manipulated images, memes, and other visual content contribute to political instability and conflict. The research team analyzed a dataset of approximately 6 million posts and 3 million images from 989 Russian military bloggers on Telegram. Using AI tools, they detected visual manipulations, focusing on the period leading up to Russia’s 2022 invasion of Ukraine. The study recorded a nearly 9,000% spike in posts and a 5,000% increase in altered images shortly before the outbreak of war.


The findings suggest that visual disinformation is deliberately used to distort public perception, deepen social divisions, and justify violence. Images and memes often have a substantial emotional impact and spread more quickly than text-based content.


One of the study’s key conclusions is that artificial intelligence can assist in identifying technical manipulations, but cannot reliably interpret visual content’s political or ideological meaning. Human expertise remains essential for understanding the context and intent of these messages.


Source:  

Gen Z's High Vulnerability to Online Disinformation 


Politico examines how Generation Z, though raised with digital technology, is highly susceptible to online misinformation. A 2021 Stanford University study found that only three out of 3,446 high school students could correctly identify the origin of a video that falsely claimed to show U.S. voter fraud but depicted footage from Russia. This result challenges the assumption that digital natives are naturally skilled at evaluating online content.

 

TikTok has become a key news source for many young users. At the same time, it is a space where conspiracy theories and unverified claims spread quickly despite expert warnings. These include doubts about Helen Keller’s life, allegations of government-controlled weather, and viral health trends such as applying beef tallow to the skin.


A lack of trust in institutions, such as the media, education, and science, shapes how Gen Z processes information. Many rely on comment sections to assess accuracy rather than consulting reliable sources. These sections often reflect algorithm-driven echo chambers that reinforce existing views and filter out opposing perspectives. One example involved an AI-generated audio clip of Donald Trump that went viral. The correction appeared only later in the comments, after the video had been widely shared.


Researchers like Rakoen Maertens from the University of Oxford point to limited life experience as another factor. Schools often teach close reading but rarely train students to verify information across sources. Experts note that this pattern may spread to older generations. As algorithms personalize information across platforms, all age groups may face growing exposure to misleading content.


Source: 

Meta's Decision to Block Canadian News: A Gateway for Misinformation


NewsGuard notes that in August 2023, Meta stated, “people using our platforms don’t come to us for news.” However, a June 2024 Reuters Institute survey found that 25 percent of Canadians use Facebook for news. The blocking of Canadian news due to the Online News Act created an information vacuum that was quickly filled by misinformation sources.


With reliable news sources no longer available on Facebook and Instagram, false narratives began to spread. NewsGuard identified several false claims targeting Mark Carney, including one that he called Trump supporters “a generational damage to the United States” and another that he sold Canada’s gold reserves. These falsehoods reached millions of users. Although Canadian media quickly debunked the claims, the fact-checks were often not shared on Meta platforms. One example is the false claim that Carney was barred from serving in Parliament under Canadian law, which continued to spread because the corrections were not posted on social media.


The blocking of Canadian news sources has undermined trust in democratic processes. Unchecked misinformation spreads, manipulating public opinion and contributing to societal polarization. This development illustrates the problematic nature of blocking legitimate news sources and how it can erode the integrity of public discourse.


Source:  

Disinformation Affecting Climate Debate in Australia’s Election 


According to The Conversation, in the run-up to the 2025 Australian federal election, misinformation about climate change has emerged as a significant issue. Misleading claims about renewable energy and gas distort public perception and hinder climate action. Three major misinformation campaigns are active: Clive Palmer’s "Trumpet of Patriots" campaign spread an outdated scientific claim suggesting that temperatures in Greenland were not rising, which Google removed after 1.9 million views. The group "Responsible Future Illawarra" spreads false information about offshore wind farms, claiming they harm marine life, despite scientific studies showing the opposite. Finally, the pro-gas campaign "Australians for Natural Gas" promotes the idea that gas is essential for Australia’s energy mix and that climate policies harm the economy, even though a shift to renewable energy is feasible. Experts recommend guarding against such misinformation through "pre-bunking" strategies and the SIFT method, which involves verifying sources and tracing information, to ensure the climate debate is based on facts rather than fiction. 


Source:  

GLOSSARY


Information Operations

Hybrid Warfare

Cyber Warfare

Cyfluence Attack

Soft Warfare

CIB

FIMI

Hostile Influence Campaign (HIC)

Digital Impact on Discourse (DID)

Misinformation

Disinformation

Inauthentic Behavior

Fake users

Unidentified users

Sockpuppet accounts

Bots

Repurposed accounts

Fake website

Deep Assets

Real platforms

Astroturfing

Cyberbullying


DISCLAIMER


Copyright and License of Product 

This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.


Disclaimer of Warranties

The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.


Accuracy of Information 

The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.


Limitation of Liability

To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.


Indemnification

The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.


Third-Party Rights

The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.


Governing Law and Jurisdiction 

This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.


