Weekly Report: Cyber-based influence campaigns, 2nd – 8th of June 2025
- CRC

[Report Highlights]
DFRLab found that Moldova24 (MD24), launched in July 2024, is a Kremlin-aligned Moldovan web broadcaster backed by fugitive oligarch Ilan Shor and covertly supported by Russian state media infrastructure like RT and TV Novosti.
As stated in a TechXplore article, during a recent four-day conflict between India and Pakistan, social media users increasingly turned to AI chatbots like Grok, ChatGPT, and Gemini to verify claims in real time. However, instead of clarifying facts, these tools often amplified disinformation.
The Virtual Manipulation Brief 2025 from NATO StratCom COE highlighted a sharp escalation in Russian and Chinese information manipulation, driven by advances in AI.
AFP Fact Check reported that the recent South Korean election was significantly impacted by widespread online disinformation campaigns, with both progressive and conservative parties being targeted.
OpenAI’s latest report features case studies with poetic titles: "Sneer Review," "Uncle Spam," and "Helgoland Bite." These studies expose how state-linked actors from China, Russia, and Iran weaponize generative AI.
> TABLE OF CONTENTS <
HOSTILE INFLUENCE CAMPAIGNS
STATE ACTORS
[Russia]
[The War in Ukraine]
AI-RELATED ARTICLES
GENERAL REPORTS
FRAMEWORKS TO COUNTER DISINFORMATION
STATE ACTORS
[Russia]
Russia’s Hybrid War on Europe and Ukraine
A report by the Atlantic Council states that Russia is intensifying its hybrid warfare against Europe through cyberattacks, sabotage, disinformation, and targeted influence operations (“hostile influence”). A central tactic involves manipulating elections via coordinated disinformation campaigns, manipulated online content, and covert support for anti-establishment or pro-Russian actors. Countries particularly affected include Poland, Slovakia, the Baltic states, and Romania (for more information on the manipulation of the 2024 Romanian Presidential Elections, see our detailed research report here).
Ukraine has been exposed to these tactics since 2014, but has developed an effective response model. This model combines media literacy, digital volunteer networks tasked with identifying and debunking false information, and a state-led communication strategy that provides transparent messaging and actively counters disinformation narratives.
By contrast, the report finds that NATO and EU structures remain institutionally fragmented and insufficiently coordinated. A joint EU-NATO-Ukraine framework, along with targeted support for fact-checking, media education, and digital resilience, would help to form a sustainable response model.
Additionally, a report by EU vs. Disinfo noted that in recent months, pro-Kremlin disinformation has increasingly manipulated international perceptions by framing the United States, particularly under President Donald Trump, as a rational actor seeking peace, in contrast to Europe and Ukraine, which are depicted as aggressive and intent on prolonging the war. Russian state media and affiliated outlets have praised Trump, claiming he shares views with Moscow and favors negotiations aligned with Russia’s demands. This narrative aims to sow discord within the West by portraying the U.S. as a more reasonable counterweight to a supposedly belligerent Europe.
Sources:
Atlantic Council, Beznosiuk, M., Russian hybrid warfare: Ukraine’s success offers lessons for Europe, 2025, [online];
Available at: https://www.atlanticcouncil.org/blogs/ukrainealert/russian-hybrid-warfare-europe-should-study-ukraines-unique-experience/#
EU vs. Disinfo, The world in black and white, 2025. [online]; Available at: https://euvsdisinfo.eu/the-world-in-black-and-white-2/
Digital Hostile Influence: Moldova24’s Media Operations
According to a DFRLab investigation, Moldova24 (MD24) is an ostensibly independent Moldovan web broadcaster launched in July 2024 that operates as an extension of the Kremlin’s information machine. Streaming news shows, call-in programs, and a torrent of short social-media clips around the clock, the channel is steered by the media network of fugitive oligarch Ilan Shor and sustained by the infrastructure of Russian state outlets such as RT and TV Novosti, from whose servers it quietly transmits.
MD24’s editors recycle Moscow’s talking points almost verbatim: Brussels is blamed for soaring prices, President Maia Sandu is painted as an authoritarian US puppet, and the idea of NATO cooperation is framed as a prelude to war. The tone is emotional and often sensationalistic, with dramatic music, split-second cuts, and scrolling captions designed to keep viewers angry and engaged.
During the winter energy crisis of 2025, the channel uploaded nearly a thousand TikTok videos in eight weeks, saturating feeds just as household bills spiked; each clip ended with the same implied lesson: salvation lies in turning away from the West.
When platforms suspend an account, the operation re-emerges under a slightly different name, supported by a lattice of mirror websites and fallback channels on Telegram and Instagram. The objective is not merely to win clicks, but to erode Moldovans’ trust in their democratic institutions, sap enthusiasm for EU membership, and clear the ground for pro-Russian parties ahead of crucial elections. In short, MD24 is Moscow’s pocket-sized but agile psychological operations tool, explicitly calibrated for Moldova’s volatile information space.
Source:
DFRLAB, Olari V., Unveiling the Russian infrastructure supporting the Moldova24 TV channel, 2025. [online]; Available at: https://dfrlab.org/2025/06/03/unveiling-the-russian-infrastructure-supporting-the-moldova24-tv-channel/
[The War in Ukraine]
Russian Disinformation Reacts Hesitantly After Ukrainian Airstrikes
EUvsDisinfo highlights the impact of Operation Spiderweb. In this coordinated drone offensive conducted by Ukraine, five Russian military airfields were partially hit. The attacks were unexpected and seemingly caught the Russian military leadership and Kremlin-aligned propaganda unprepared. On the Russian side, the actual effects were downplayed. Reports reluctantly mentioned "some fires," while Russian President Putin remained silent. Prominent propagandists like Vladimir Solovyov also showed unusual restraint.
EUvsDisinfo concludes that Russian information control is fragile and dependent on Kremlin directives. Lacking clear instructions, pro-Kremlin propaganda resorted to familiar tactics: downplaying the attacks, portraying Ukrainians as terrorists, and spreading conspiracy theories that British strategists were behind the strikes to weaken Russia. This was intended to diminish the perceived impact of the Ukrainian operation and depict the West as the mastermind. Additionally, Russia emphasized nuclear deterrence to highlight its strength.
The article concludes that the Kremlin’s response to Operation Spiderweb reveals weaknesses in its authoritarian system, especially the propaganda’s dependence on Putin’s direct control. When he remains silent, information gaps cannot be convincingly filled. The repeated accusations against the United Kingdom serve to distract attention and to prop up Russia’s narrative of invulnerability. Overall, the successful Operation Spiderweb undermines the image of Russia’s military superiority and exposes cracks in the Kremlin’s controlled disinformation structure.
Source:
EU vs. Disinfo, An Unpleasant Incident, 2025. [online]; Available at: https://euvsdisinfo.eu/an-unpleasant-incident/
Kremlin Disinformation Targeting Polish Elections
EU vs. Disinfo states that during Poland’s 2025 presidential elections, pro-Kremlin disinformation sources actively sought to undermine public support for Ukraine and delegitimize the democratic process. Poland was portrayed as corrupt, chaotic, and imperialistic, while the election itself was framed as manipulated by foreign powers. Simultaneously, anti-Ukrainian sentiments were stoked through false claims that both presidential candidates were increasingly hostile toward Ukraine.
Three core narratives dominated the influence campaign:
First, the election was depicted as a battle between “Trump, Brussels, and globalist elites,” casting doubt on its democratic legitimacy.
Second, it was asserted that Ukraine would lose out whatever the result, as both candidates were supposedly anti-Ukrainian and Polish society was portrayed as weary of supporting Ukrainian refugees.
Third, both candidates were labeled as “hardline Russophobes,” suggesting that relations with Russia would remain unchanged regardless of the winner.
Following Karol Nawrocki’s victory, pro-Kremlin media framed his success as the onset of the “Trumpization of Europe” and a blow against the EU. Nawrocki was further depicted as a wanted figure on Russian watchlists due to his removal of Soviet-era monuments.
Overall, the disinformation campaign aimed to portray Poland as a puppet of anti-Russian forces, discredit its democratic institutions, and strain the relations between Poland, Ukraine, and the West.
Source:
EU vs. Disinfo, Polish elections in the eyes of Moscow, 2025. [online]; Available at: https://euvsdisinfo.eu/polish-elections-in-the-eyes-of-moscow/
Russia Spreads Misinformation About Kursk Attack and “Neo-Nazis”
According to DisinfoWatch, a recent tweet by Russia’s Ministry of Foreign Affairs falsely claimed that Ukraine committed atrocities in Russia’s Kursk region, comparing the alleged violence to Nazi crimes. The statement, made by spokesperson Maria Zakharova, accused Ukraine’s government and foreign fighters of killing over 300 civilians in Kursk. However, there is no credible evidence to support this narrative. Independent investigators, satellite imagery, and international media have found no indication that Ukraine targeted civilians in the region.
The accusation that Ukraine is run by "neo-Nazis" is a recycled propaganda trope. In reality, Ukraine’s president is Jewish, and far-right parties hold no parliamentary seats in the country. The Kremlin’s references to Nazism aim to evoke historical trauma, misrepresent Ukraine as the aggressor, and deflect attention away from ongoing Russian violence.
Source:
DisinfoWatch, Russian MFA Makes False Claims About Kursk and “Nazis”, 2025, [online]; Available at: https://disinfowatch.org/disinfo/russian-mfa-makes-false-claims-about-kursk-and-nazis/
AI-RELATED ARTICLES
Confidently Incorrect: AI Factcheckers Spread Misinformation
TechXplore reported that as AI chatbots like Grok, ChatGPT, and Gemini become more prominent on social platforms, users are increasingly relying on them for real-time fact-checking. Prompts like 'Hey @Grok, is this true?' are now routine on X.
The article found that instead of debunking false claims, chatbots often reinforce them. Grok, for instance, misidentified archival footage from Sudan’s Khartoum airport as a missile strike in Pakistan, and described a video of a burning building in Nepal as showing Pakistani retaliation against India.
A study by Columbia University’s Tow Center concludes that chatbots often provide misleading information instead of admitting uncertainty. When AFP presented Gemini with an AI-generated image of a woman, it confirmed the image’s authenticity and invented background details. Grok likewise validated a fake 'giant anaconda' video, citing non-existent expeditions.
Experts warn that with reduced human moderation, chatbot fact-checkers may be exploited by hostile actors. Their tendency to craft plausible but false narratives leaves them vulnerable both to direct manipulation by misinformation campaigns and to having their output shaped by politically biased training data.
The vulnerability of AI Chatbots to being directly manipulated was dramatically acknowledged in a statement by xAI, which blamed an "unauthorized modification" for causing Grok to generate unsolicited posts referencing the unsupported narrative of “white genocide" in South Africa. When AI expert David Caswell asked Grok who might have modified its system prompt, the chatbot named Elon Musk as the "most likely" culprit.
Sources:
TechXplore, Hey chatbot, is this true? AI 'factchecks' sow misinformation, 2025. [online]; Available at: https://techxplore.com/news/2025-06-hey-chatbot-true-ai-factchecks.html
Columbia Journalism Review, AI Search Has A Citation Problem, 2025. [online] Available at: https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php
Questions About C2PA’s Viability in Combating AI Misinformation
A recent RAND report casts doubt on the effectiveness of the Coalition for Content Provenance and Authenticity (C2PA), a private-sector initiative to counter the growing threat of AI-generated misinformation. Backed by industry giants such as Adobe, Google, and Microsoft, the C2PA seeks to embed cryptographic provenance into digital content, offering a technical solution to restoring trust online.
Initially hailed as a potential bulwark against disinformation, the initiative now faces a fundamental design flaw. Its voluntary, closed-system model is ill-suited to the open nature of the internet. While it tags participating content with a digital “badge” of authenticity, it cannot account for the vast majority of legitimate content created outside its system. This creates what RAND calls an “authenticity gap,” where untagged content may be genuine or expertly faked, giving bad actors room to sow doubt.
An outdated threat model that fails to address the accelerating sophistication of generative AI compounds the issue. Critics argue the standard must evolve to differentiate between credentials issued by verified, trusted entities and those from unvetted sources. Without such reform, the C2PA may inadvertently erode trust rather than enhance it.
Source:
RAND, Overpromising on Digital Provenance and Security, 2025. [online];
Available at: https://www.rand.org/pubs/commentary/2025/06/overpromising-on-digital-provenance-and-security.html
NATO Warns of Generative ‘AI Agent Swarms’
The recent Virtual Manipulation Brief 2025 from NATO StratCom COE highlights a sharp escalation in Russian and Chinese information manipulation capabilities, driven by advances in AI.
The report states that in the last year, campaigns using AI-generated content, deepfakes, and bot networks to spread anti-NATO, anti-Ukraine, and anti-U.S. narratives were more frequent, synchronized, and emotionally targeted than previously.
A key finding is that Kremlin-backed disinformation has grown markedly more sophisticated, using AI to automate content creation and tailor messaging to specific platforms and languages. This has enabled hostile influence campaigns to exploit geopolitical developments rapidly.
Telegram, VK, and YouTube have emerged as high-engagement platforms for pro-Russian messaging, while X serves as a broad reach vector for both Russian and Chinese narratives.
The report highlights a growing alignment between pro-Russian and pro-Chinese narratives: both consistently frame NATO as a destabilizing force and portray the U.S. as weak and corrupt, expanding the threat beyond a single actor. Inauthentic behaviour, including cross-platform amplification and echo-chamber tactics, was also frequently observed.
Looking ahead, the brief warns of the potential for adversaries to coordinate "generative AI agent swarms" using emerging interoperability standards (MCP, A2A, ACP), raising concerns about the speed and scale of future influence campaigns.
The NATO report concludes that countering these evolving threats is a critical challenge for democratic societies.
Source:
NATO Strategic Communications Centre of Excellence, Virtual Manipulation Brief, May 2025. [online];
Available at: https://stratcomcoe.org/pdfjs/?file=/publications/download/VMB-Final-8325b.pdf?zoom=page-fit
GENERAL REPORTS
Lee Jae-myung Wins Disinformation-Saturated South Korean Election
The recent South Korean election on June 3, 2025, starkly illustrated how hostile digital influence campaigns are increasingly saturating democratic processes. According to an article by AFP Fact Check, both conservative and progressive candidates were targeted by widespread online disinformation campaigns, a trend in line with the escalating threats to electoral integrity around the world.
In the run-up to the vote, progressive candidates, notably then-opposition leader Lee Jae-myung, faced a concerted effort to portray them as sympathetic to or even controlled by China. Although Lee went on to win the election, his campaign was dogged by the false claim that Chinese nationals were exploiting weaknesses in the overseas voter registration system to cast fraudulent ballots. "It's now the claim that progressives are subservient to China that resonates -- especially amid worsening US-China tensions," noted Choi Jin-bong, a media communications professor. Fabricated AI images, including one showing Lee kowtowing to a Mao statue, further fuelled these misleading narratives.
Conservative candidates were targeted over alleged ties to the United States. Doctored images depicted former Prime Minister Han Duck-soo’s presidential campaign announcement as carrying Trump's endorsement. Another false claim, spread by a popular right-wing YouTuber to 1.5 million subscribers, asserted that the Pentagon had endorsed a conservative candidate, which the Pentagon swiftly denied.
The sheer intensity and ubiquity of digital hostile influence attacks during the South Korean election can be viewed as a warning of the escalating challenges democratic nations face worldwide. In campaign environments saturated by digital misinformation, the integrity of democratic processes risks being further eroded.
Source:
AFP Fact Check, Shim Kyu-Seok, South Korea election hit by misinformation, 2025, [online]; Available at: https://factcheck.afp.com/doc.afp.com.48ZT8CP
FRAMEWORKS TO COUNTER DISINFORMATION
OpenAI’s Disinformation Tracking Report
A recent report by OpenAI details several covert influence operations (IOs). To counter these threats, OpenAI leverages its AI tools to support expert investigative teams in identifying and disrupting malicious activities. Over the past three months, the company has uncovered and exposed operations involving social engineering, cyber espionage, and deceptive online campaigns.
Operation “Sneer Review” notably involved ChatGPT accounts generating large volumes of social media comments on platforms like TikTok, X, and Reddit. These posts aligned with China’s geopolitical interests, including critical commentary on Taiwanese content and the defamation of activists, accompanied by staged, seemingly organic user engagement. Internal performance reviews were also AI-generated to manage the operation systematically.
Another China-linked campaign, “Uncle Spam”, aimed to polarize U.S. public opinion by creating divisive posts on platforms such as X and Bluesky. AI-generated profiles, including those themed around U.S. veterans, were used to deepen political divides.
The report also highlights influence activities by Russian and Iranian actors, such as “Helgoland Bite” in Germany, which promotes pro-AfD content, and the Iranian operation “STORM-2035,” which produced posts on U.S. immigration policy and European independence movements.
The report illustrates how AI is increasingly exploited to mass-produce politically manipulative content to sway public opinion. Actors combine automated content creation, fake personas, and coordinated engagement to simulate authenticity. Although the actual impact of these operations varied and was sometimes limited, the analysis underscores the growing role of AI-driven influence campaigns in global information conflicts.
Source:
OpenAI, Disrupting malicious uses of AI: June 2025, 2025. [online]; Available at: https://cdn.openai.com/threat-intelligence-reports/5f73af09-a3a3-4a55-992e-069237681620/disrupting-malicious-uses-of-ai-june-2025.pdf/
GLOSSARY
Information Operations
Hybrid Warfare
Cyber Warfare
Cyfluence Attack
Soft Warfare
CIB
FIMI
Hostile Influence Campaign (HIC)
Digital Impact on Discourse (DID)
Misinformation
Disinformation
Inauthentic Behavior
Fake users
Unidentified users
Sockpuppet accounts
Bots
Repurposed accounts
Fake website
Deep Assets
Real platforms
Astroturfing
Cyberbullying
DISCLAIMER
Copyright and License of Product
This report (the "Product") is the property of Cyfluence Research Center gGmbH ("Cyfluence") and is protected by German and international copyright laws. The User is granted a limited, non-transferable license to use the Product solely for internal purposes. Reproduction, redistribution, or disclosure of the Product, in whole or in part, without prior written consent from Cyfluence is strictly prohibited. All copyright, trademark, and proprietary notices must be maintained.
Disclaimer of Warranties
The Product is provided "as is" without warranties of any kind, express or implied, including but not limited to warranties of merchantability or fitness for a particular purpose. Although Cyfluence takes reasonable measures to screen for viruses and harmful code, it cannot guarantee the Product is free from such risks.
Accuracy of Information
The information in the Product has been obtained from sources believed to be reliable. However, Cyfluence does not guarantee the information's accuracy, completeness, or adequacy. The User assumes full responsibility for how they use and interpret the Product. Cyfluence is not liable for errors or omissions; opinions may change without notice.
Limitation of Liability
To the fullest extent permitted by law, Cyfluence shall not be liable for any direct, indirect, incidental, or consequential damages, including lost profits or data, arising from the use of or inability to use the Product, even if advised of such possibilities. Liability for intent or gross negligence remains unaffected under German law.
Indemnification
The User agrees to indemnify and hold harmless Cyfluence, its affiliates, licensors, and employees from any claims or damages arising from the User’s use of the Product or violation of these terms.
Third-Party Rights
The provisions regarding Disclaimer of Warranties, Limitation of Liability, and Indemnification extend to Cyfluence, its affiliates, licensors, and their agents, who have the right to enforce these terms.
Governing Law and Jurisdiction
This Agreement is governed by German law, and any disputes shall be resolved exclusively in the courts of Berlin. The remaining terms remain in full effect if any provision is found invalid.