Weekly Report: Cyber-Based Influence Campaigns, 14–20 April 2025
- CRC
- Apr 29
Updated: May 4

TABLE OF CONTENTS
HOSTILE INFLUENCE CAMPAIGNS
STATE ACTORS
[Russia]
[War in Ukraine]
[China]
[Iran]
AI-RELATED ARTICLES
GENERAL REPORTS
APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION
HOSTILE INFLUENCE CAMPAIGNS
STATE ACTORS
[Russia]
Russia's Global Disinformation Network Infiltrates AI and Wikipedia
As published by the Atlantic Council, Russia has significantly expanded its global disinformation campaign during the war in Ukraine, using sophisticated strategies to spread pro-Kremlin narratives across digital platforms. A key part of this effort is the Pravda network, a web of fake news portals active in over 80 countries. These sites recycle and repackage content from sanctioned Russian media and Kremlin-aligned Telegram channels, strategically surfacing during political events such as elections in Romania and Moldova or high-profile meetings between world leaders. The goal is to saturate global information spaces with Russian propaganda disguised as legitimate news.
The campaign has now reached artificial intelligence tools and Wikipedia, enabling disinformation to seep into mainstream knowledge sources. Kremlin-aligned narratives influence AI-generated content by posing as credible references on Wikipedia and news sites cited in large language model (LLM) training data. This means that users relying on AI tools may unknowingly consume skewed, anti-Ukrainian, and anti-Western messaging—an alarming development in how information is shaped and distributed. As AI adoption grows, so does the risk of disinformation embedded in its outputs.
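To make the recycling mechanism concrete: at bottom, the laundering chain described above is large-scale text reuse. The following is a minimal illustrative sketch, not the Atlantic Council's methodology, of one common way analysts flag repackaged copy: measuring word-shingle overlap between a candidate article and known source text. The sample texts and the threshold are invented placeholders.

# Hedged sketch: flag likely recycled copy via k-word shingle overlap.
def shingles(text, k=3):
    """Return the set of k-word shingles of a lowercased text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 if both empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented placeholder texts for illustration only.
source_text = "officials announced the operation succeeded beyond expectations today"
candidate = "today officials announced the operation succeeded beyond expectations"

score = jaccard(shingles(source_text), shingles(candidate))
print(f"shingle overlap: {score:.2f}")  # high overlap suggests recycled copy
if score > 0.5:  # threshold is an illustrative assumption
    print("flag: candidate likely repackages the source text")

In practice, network-scale analyses use scalable variants of this idea (for example, MinHash over millions of articles), but the underlying signal is the same: near-verbatim reuse of known propaganda copy.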
Source:
Atlantic Council, V. Châtelet, 2024. Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia. [online]
Storm-1516 Disinformation Network Targets France with AI-Fueled Propaganda
A report by NewsGuard revealed that Russia’s disinformation campaign, orchestrated by the Storm-1516 network, has aggressively shifted its focus to France, leveraging AI-generated content to amplify false narratives across social media and AI platforms. From December 2024 to March 2025, five Kremlin-linked hoaxes targeting France went viral in nearly 39,000 posts, racking up over 55 million views. These included fabricated accusations involving French President Emmanuel Macron and First Lady Brigitte Macron, as well as baseless claims about Ukrainian President Zelensky and migrants, narratives timed to coincide with France’s increasing military support for Ukraine.
What makes this wave of disinformation particularly dangerous is its infiltration into generative AI tools. NewsGuard found that several major AI chatbots—including Mistral’s French-made "Le Chat"—repeated Russian-fabricated stories, like the Macron affair hoax, as factual responses to user prompts. These claims, often laundered through fake whistleblower videos, phony regional news outlets, and YouTube uploads, are cited by AI systems trained on seemingly authoritative sources. Storm-1516’s playbook is now well-established: hijack real identities, use AI for face-swapping and video manipulation, spread content through multilingual platforms, and adapt narratives to current political tensions. While France has so far avoided a full-scale fake news site network like those deployed in the U.S. and Germany, officials are preparing for further escalation. As support for Ukraine increases, so does Russia’s digital assault.
Source:
NewsGuard Reality Check, N. Huet, M. Sadeghi, and C. Labbe, 2025. Russian Propaganda Campaign Targets France with AI-Fabricated Scandals, Drawing 55 Million Views on Social Media. [online] Available at: https://www.newsguardrealitycheck.com/p/russian-propaganda-campaign-targets-france-with-ai-fabricated-scandals
National Survey Shows Kremlin Disinformation Widely Believed by Americans
According to a NewsGuard report, a recent national survey conducted by YouGov for NewsGuard revealed the deep reach of Russian disinformation among the American public. One in three Americans believes at least one false claim promoted by Russian state media, highlighting how effectively the Kremlin’s narratives have taken hold. The study tested 1,000 adults with 10 widely circulated falsehoods, three of which were directly tied to Russian propaganda. Only 23.8% of respondents correctly identified all three Kremlin-linked claims as false.
The misinformation included debunked narratives such as Ukraine stealing U.S. aid, Zelensky's alleged 4% approval rating, and claims that Ukraine sold U.S.-supplied weapons to Hamas. These fabrications, initially spread by outlets like RT and Sputnik, have not only gone viral on social media but have also been repeated by U.S.-based websites and even made their way into generative AI tools, further normalizing disinformation. NewsGuard analysts have cataloged nearly 300 such Russian falsehoods related to the war in Ukraine alone. The survey showed widespread vulnerability to misinformation on health and elections. A staggering 78% of respondents believed at least one false claim, and fewer than 1% accurately debunked all ten. Importantly, belief in disinformation crossed party lines, though Republicans were significantly more likely to accept Kremlin-backed narratives.
Source:
NewsGuard Reality Check, M. Skibinski, 2025. One Third of Americans Believe Russian Disinformation, NewsGuard-YouGov Survey Finds. [online] Available at: https://www.newsguardrealitycheck.com/p/one-third-of-americans-believe-russian-disinformation-yougov-survey-finds
Pravda Dashboard Exposes Russia's Massive Global Disinformation Network
The Pravda Dashboard, published by the DFRLab, exposed the scale and strategy of Russia’s global disinformation network. With Finnish company CheckFirst, the DFRLab analyzed a dataset of over 3.7 million articles repurposed from Russian state media and questionable Telegram channels. Through an elaborate network of websites and automated republishing tools, the Pravda network bypassed sanctions to amplify pro-Kremlin narratives worldwide, especially in France, Germany, Ukraine, Moldova, and Serbia. Many articles sourced banned Russian outlets like TASS and RT, alongside disinformation-heavy Telegram channels such as InfoDefense and SurfNoise.
The investigation revealed that the Pravda operation intensified during key political events, such as the 2024 EU Parliament elections and high-profile arrests, using surges in publication to flood news cycles with manipulated narratives. The most-cited source in the network was @Vbachir, a French-speaking Telegram account claiming ties to Russian forces, which alone appeared in more than 53,000 articles. The operation also strategically targeted linguistic regions by repackaging Russian-origin content through language-based clusters, using HTML tags to signal alternate versions for different audiences. This method enabled the network to deliver coordinated disinformation to specific countries or regions while maintaining the illusion of local news.
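The HTML-tag mechanism mentioned above can be illustrated with a short sketch. Assuming the network uses standard <link rel="alternate" hreflang="..."> markup (the report does not specify the exact tags), a crawler could extract each page's declared language variants and map the language-based clusters. The sample markup and URLs below are placeholders, not addresses from the Pravda network.

# Minimal sketch: extract declared language variants from a page's head.
from html.parser import HTMLParser

class HreflangExtractor(HTMLParser):
    """Collects (language, url) pairs from <link rel="alternate"> tags."""
    def __init__(self):
        super().__init__()
        self.alternates = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if a.get("rel") == "alternate" and "hreflang" in a and "href" in a:
            self.alternates.append((a["hreflang"], a["href"]))

# Placeholder document with the kind of cross-language markup described above.
sample_html = """
<head>
  <link rel="alternate" hreflang="fr" href="https://example-fr.invalid/article1">
  <link rel="alternate" hreflang="de" href="https://example-de.invalid/artikel1">
  <link rel="alternate" hreflang="sr" href="https://example-rs.invalid/clanak1">
</head>
"""

parser = HreflangExtractor()
parser.feed(sample_html)
for lang, url in parser.alternates:
    print(f"{lang}: {url}")  # one row per language-cluster mirror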
Source:
DFRLab, V. Châtelet and A. Lesplingart, 2025. Russia’s Pravda network in numbers: Introducing the Pravda Dashboard. [online] Available at: https://dfrlab.org/2025/04/18/introducing-the-pravda-dashboard/
[War in Ukraine]
U.S. Cuts Funding to Ukraine’s Independent Media
According to a report by the Atlantic Council, Ukraine’s independent media, a vital barrier against Kremlin disinformation, is facing collapse following a drastic cut in U.S. development aid. Since January 2025, the United States has quietly halted around 90% of its funding for Ukrainian media through USAID and affiliated partners. This comes at a time when Russia is intensifying its disinformation campaigns, leaving Ukrainian newsrooms like NikVesti, Vgoru, and Cykr without the resources needed to counter false narratives. These outlets, operating under war conditions, have served as a crucial line of defense by exposing propaganda and reporting on both Russian strikes and internal corruption.
As these independent newsrooms downsize or shut down, Russia is rapidly exploiting the resulting information vacuum. A recent example in Sumy involved fake health warnings circulated by pro-Russian Telegram channels, falsely claiming a disease outbreak among Ukrainian soldiers. Disinformation crafted to mimic official sources is designed to mislead and erode public trust in all media. Without credible local journalism, these manipulative tactics become significantly more effective, especially in regions already isolated by war. Efforts to stay afloat through donor outreach, membership drives, and EU funding applications are underway, but are slow and uncertain. Journalists warn that without immediate support, independent media will vanish, and unchecked Russian propaganda will dominate Ukraine’s information space.
Source:
Atlantic Council, M. Tahir, 2025. US funding cuts create openings for Russian disinformation in Ukraine. [online] Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/us-funding-cuts-create-openings-for-russian-disinformation-in-ukraine/
Russian Disinformation Targets Ukraine and EU Politics
According to an article by EU vs. Disinfo, as Russia continues its assault on Ukraine, its disinformation machinery works in overdrive to deny responsibility for deliberate attacks on civilians. The missile strike on Sumy on April 13, which killed at least 35 civilians during a religious holiday, was falsely framed by Kremlin sources as a military operation. This follows a disturbing pattern seen in previous attacks, such as the strike on a restaurant in Kryvyi Rih, where fabricated justifications were spread rapidly to obscure the truth.
By immediately releasing false narratives—claiming military meetings or weapon caches at bombed civilian sites—Russian propaganda seeks to confuse public perception and delay international condemnation. Despite mounting evidence, including UN reports and security footage, the Kremlin persists in pushing these narratives, even as the International Criminal Court issues arrest warrants for those responsible for attacks on non-military infrastructure. Beyond Ukraine, this disinformation model is now being deployed to influence foreign elections. In early April, a coordinated campaign targeted Poland’s upcoming presidential election with fake news videos impersonating credible outlets, falsely claiming security threats. These efforts followed Russia’s known disinformation playbook—spreading content through fake accounts and amplifying it with strategic reposts.
Source:
EUvsDisinfo, 2025. Missiles and lies. Again. [online] Available at: https://euvsdisinfo.eu/missiles-and-lies-again/
[China]
Chinese Influence Group Advances CCP Agenda Through Cultural Diplomacy
An article by National Review stated that the Chinese People’s Association for Friendship with Foreign Countries (CPAFFC), a group identified by U.S. intelligence as tied to Beijing’s influence operations, has intensified its efforts to promote the Chinese Communist Party’s (CCP) global vision during high-profile visits to the United States. During a U.N. event in New York, CPAFFC officials promoted Xi Jinping’s “Global Civilization Initiative,” framing it as a path toward global cooperation and understanding. However, experts warn that these initiatives are part of China’s strategic effort to reshape international norms and governance in favor of its authoritarian model.
CPAFFC's activities, including speeches at the U.N. and meetings in California and Florida, mask strategic disinformation under the guise of cultural diplomacy. Despite branding itself as a non-governmental organization, CPAFFC operates within the CCP's United Front system, which subtly seeks to influence foreign entities. For example, the group has cultivated connections with American communities through programs honoring WWII's Flying Tigers, reinterpreting history to support modern CCP narratives and extend Beijing's ideological reach.
Source:
National Review, J. Quinn, 2025. Chinese ‘Friendship’ Group Visits America amid Trade War. [online] Available at: https://www.nationalreview.com/corner/chinese-friendship-group-visits-america-amid-trade-war/
[Iran]
Iran Increasingly Integrates AI to Amplify Disinformation and Cyber Operations
According to a report by Recorded Future, Iran is increasingly integrating artificial intelligence into its national security and foreign policy strategy to amplify cyberattacks, influence operations, and domestic surveillance. While Tehran faces significant obstacles—such as economic isolation and top-down government control—it is leveraging AI as a force multiplier, especially in the context of regional conflicts and escalating tensions with Israel.
Disinformation is a key component of Iran’s AI-powered influence campaigns. The regime will likely deploy generative AI and large language models to enhance social engineering efforts, spread state-sponsored propaganda, and manipulate public opinion domestically and abroad. These operations exploit cultural and linguistic familiarity to evade detection and increase impact, particularly against Western governments and technology sectors that Tehran views as adversaries.
Source:
Recorded Future – Insikt Group®, 2025. Iran’s AI Ambitions: Balancing Economic Isolation with National Security Imperatives. [online]
AI-RELATED ARTICLES
Reinforcement Learning’s Potential to Amplify Disinformation on Social Media
A recent study by Concordia researchers presented a reinforcement learning (RL) framework designed to actively increase polarization and disagreement on social media platforms. Using the Double Deep Q-Learning algorithm, researchers developed a method to identify and take control of key user accounts in a network. These accounts are strategically chosen to maximize the spread of divisive content, effectively turning them into tools for orchestrated disinformation and societal disruption.
The study models both private and public opinion dynamics, simulating how individuals form and express beliefs online. While private opinions update synchronously, public expressions are updated asynchronously, mirroring real-world social media behavior. RL agents observe only users' public posts and follower counts, yet this limited data proves sufficient for optimizing disinformation strategies. Through simulations across different network structures, the model demonstrates how adversarial agents can exploit social platforms to deepen divides and manipulate public discourse. This research underscores the growing threat posed by intelligent, AI-driven disinformation tactics and highlights the urgent need for robust defenses against such manipulative strategies.
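For readers unfamiliar with the algorithm, the sketch below shows the core Double Deep Q-Learning update the study builds on: the online network selects the next action while the target network evaluates it, a decoupling that curbs the Q-value overestimation of plain Q-learning. The toy state (public opinions plus follower counts) and the linear "networks" are illustrative assumptions, not the authors' implementation.

# Minimal, self-contained Double DQN update on a toy opinion-network state.
import numpy as np

rng = np.random.default_rng(0)
N_USERS, N_ACTIONS, GAMMA = 8, 8, 0.95   # action = which account to target

def q_values(weights, state):
    """Linear stand-in for a Q-network: maps a state to one Q-value per action."""
    return state @ weights

state_dim = 2 * N_USERS                   # public opinions + follower counts
w_online = rng.normal(size=(state_dim, N_ACTIONS))
w_target = w_online.copy()                # Double DQN keeps two copies

# One observed transition (state, action, reward, next_state); values invented.
state = rng.normal(size=state_dim)
action = 3
reward = 1.2                              # e.g. measured rise in disagreement
next_state = rng.normal(size=state_dim)

# Double DQN target: online net SELECTS the next action, target net EVALUATES it.
best_next = int(np.argmax(q_values(w_online, next_state)))
td_target = reward + GAMMA * q_values(w_target, next_state)[best_next]

# Gradient step on the squared TD error for the taken action.
td_error = td_target - q_values(w_online, state)[action]
w_online[:, action] += 0.01 * td_error * state
print(f"TD target: {td_target:.3f}, TD error: {td_error:.3f}")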
Source:
IEEE, M. N. Zareer and R. R. Selmic, 2025. Maximizing Disagreement and Polarization in Social Media Networks using Double Deep Q-Learning. [online] Available at: https://ieeexplore.ieee.org/document/10831299
GENERAL REPORTS
State-Sponsored Disinformation Harassing Thailand’s Democratic Movement
A report published by the Citizen Lab revealed the existence and sustained activity of JUICYJAM, a state-sponsored online influence operation designed to harass and suppress Thailand’s pro-democracy movement. Since August 2020, the campaign has operated across platforms such as X and Facebook using an inauthentic persona, “Ms. Juk Khlong Sam,” who systematically doxxed activists by revealing private information such as family details and school affiliations.
The posts published during the campaign employed manipulation tactics such as distraction and smearing to shift narratives and discredit individuals, particularly during politically sensitive moments like protests or elections. These tactics were not random: leaked military and police documents in March 2025 linked the campaign directly to the Royal Thai Armed Forces or the Royal Thai Police. Despite JUICYJAM’s clear inauthenticity, social media platforms failed to curb its reach. The operation amassed significant engagement: over 100,000 followers and tens of thousands of interactions. Citizen Lab emphasized that current platform policies inadequately address state-sponsored disinformation and coordinated doxxing, particularly in authoritarian contexts.
Source:
Citizen Lab, A. Fittarelli, M. Scott, and K. Wongsapakdee, 2025. JUICYJAM: How Thai Authorities Use Online Doxxing to Suppress Dissent. [online] Available at: https://citizenlab.ca/2025/04/how-thai-authorities-use-online-doxxing-to-suppress-dissent/
Domestic Disinformation Threatens Chinese-Australian Communities
New research published by The Conversation has revealed that during Australia's current election cycle, disinformation targeting Chinese-speaking communities is being driven not by foreign interference but by commercial and domestic political interests. Platforms like WeChat and RedNote (Xiaohongshu), which serve as key news sources for Chinese-Australians, are being used to spread misleading political narratives. These include manipulated statements, exaggerated threats, and racially charged content that distort policy positions and stoke fear among migrant communities.
The research uncovered tactics such as misleading visuals, distorted timelines, and sensational headlines, often used to misrepresent policies on immigration, visas, and foreign relations. One WeChat article falsely claimed a surge in Coalition support, while misreporting poll results and promoting racial narratives against the Labor Party. Another RedNote post selectively quoted Prime Minister Anthony Albanese to mislead readers about immigration policy.
Disinformation on these platforms is particularly harmful due to weak regulation, limited platform transparency, and the vulnerable position of marginalized communities. With automated detection tools blocked or ineffective, the burden falls on individuals and community efforts to counter these narratives. The report calls for improved civic education and tailored media literacy to help users critically assess digital content, especially in non-English media ecosystems where disinformation can flourish unchecked.
Source:
The Conversation, F. Yang, L. Heemsbergen, and R. Fordyce, 2025. This election, disinformation is swirling on Chinese social media. Here’s how it spreads. [online] Available at: https://theconversation.com/this-election-disinformation-is-swirling-on-chinese-social-media-heres-how-it-spreads-253849
APPENDIX - FRAMEWORKS TO COUNTER DISINFORMATION
EU Invests in Digital Innovation and Combating Disinformation
The European Commission announced it has launched four new funding calls totaling €140 million under the Digital Europe Programme (DIGITAL). These calls aim to accelerate the deployment of AI, boost digital skills, expand digital innovation hubs, and, notably, tackle disinformation. Of this total, €5 million is specifically allocated to creating a European Network of Fact-Checkers. This initiative will include protective measures for fact-checkers, a shared fact-checking repository, and rapid-response capabilities for misinformation, especially during crisis situations.
The funding also supports broader goals, including integrating generative AI into public administration and the agri-food sector, while enhancing regulatory compliance and supporting semiconductor and data space initiatives. Through this investment, the Commission aims to foster innovation and counter disinformation more effectively, reinforcing resilience against manipulation and false narratives across the digital landscape.
Source:
European Commission, 2025. Commission invests €140 million to deploy key digital technologies. [online] Available at: https://digital-strategy.ec.europa.eu/en/news/commission-invests-eu140-million-deploy-key-digital-technologies
U.S. Government Closes R/FIMI Office
In a move framed as a defense of free speech, the U.S. government has announced the closure of the State Department’s Counter Foreign Information Manipulation and Interference office (R/FIMI), formerly known as the Global Engagement Center. As stated in Marco Rubio's announcement, the office, initially tasked with countering foreign disinformation, has come under fire for allegedly targeting and censoring American citizens, prompting claims that it violated foundational democratic principles.
According to the announcement, the center used its $50 million annual budget to suppress dissenting opinions under the guise of combating foreign influence, blurring the line between safeguarding information integrity and stifling free expression. The closure has been positioned as a corrective measure to realign national policy with constitutional freedoms, particularly the right to free speech.
A report by Politico added that critics of the closure argue that dismantling R/FIMI undermines U.S. national security. The office had played a key role in exposing Russian disinformation operations in Africa and in forming international coalitions to push back against information warfare from adversaries like China and Russia. Supporters across party lines, including Senators Chris Murphy and John Cornyn, stressed the strategic importance of the office in countering foreign influence campaigns designed to deceive and destabilize democratic societies.
Sources:
U.S. Department of State, M. Rubio, 2025. Protecting and Championing Free Speech at the State Department. [online] Available at: https://www.state.gov/protecting-and-championing-free-speech-at-the-state-department/
POLITICO, M. Miller, 2025. State Department eliminates key office tasked with fighting foreign disinformation. [online] Available at: https://www.politico.com/news/2025/04/16/state-department-shutters-gec-foreign-disinformation-00292982
Federal Efforts to Combat Disinformation Face Rollback Under Trump Administration
As published by The Free Press, a wide-ranging federal effort to combat misinformation and disinformation, especially during the Biden administration, has come under intense scrutiny, with the Trump administration now actively shutting down dozens of programs. The debate centers on more than $1.4 billion in taxpayer-funded grants and contracts awarded since 2017 to research and counter disinformation, over 600 of them during Biden’s term. Critics, especially from the GOP, argue that many of these initiatives effectively served as censorship tools under the guise of protecting public discourse. One such example is a now-cancelled $683,000 grant to UC Irvine to study vaccine hesitancy among Black and Latinx communities, which was withdrawn after inquiries from The Free Press.
The closures follow an executive order from President Trump (for further information, see W07 March 2025 Cyfluence Report), issued on his first day back in office, calling for an investigation into federal activities that allegedly violated Americans’ free speech rights. Agencies, including the NIH, State Department, and Pentagon, are now reevaluating and terminating contracts, with language around “misinformation” being replaced by terms like “adversary propaganda.”
Some programs remain active, such as a $979 million Pentagon contract with defense contractor Peraton to track foreign information threats. Others, like those involving the University of Washington and disinformation analysis firm Graphika, have faced criticism for allegedly targeting conservative voices. Although some organizations insist their work focuses solely on foreign influence or election-related falsehoods, Republicans continue to pursue deeper cuts, investigations, and policy reforms to prevent what they view as government-funded ideological enforcement masquerading as disinformation mitigation.
Source:
The Free Press, G. Kaminsky and M. Rowley, 2025. Joe Biden Made More Than 600 Grants to Stop ‘Disinformation.’ Donald Trump Now Has a Plan for Them. [online] Available at: https://www.thefp.com/p/joe-biden-made-600-grants-to-stop-disinformation-misinformation-donald-trump-cancels-awards
GLOSSARY
Information Operations
Hybrid Warfare
Cyber Warfare
Cyfluence Attack
Soft Warfare
CIB
FIMI
Hostile Influence Campaign (HIC)
Digital Impact on Discourse (DID)
Misinformation
Disinformation
Inauthentic Behavior
Fake users
Unidentified users
Sockpuppet accounts
Bots
Repurposed accounts
Fake website
Deep Assets
Real platforms
Astroturfing
Cyberbullying
DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. If any provision is found invalid, the remaining terms remain in full effect.