Weekly Report: Cyber-based influence campaigns, 14th - 20th July 2025
- CRC
- Jul 23
Updated: Aug 7

INTRODUCTION
Between the 14th and the 20th of July 2025, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report provides a summary of the key events we consider most significant.
This week's review highlights the convergence of cyber, information, and cognitive warfare domains. The EU's sanctioning of a Russian military unit for GPS jamming underscores the real-world impact of hybrid threats, while multinational operations target pro-Kremlin hacktivist groups. Simultaneously, actors leverage AI and inauthentic networks to inflame domestic political tensions from Europe to Asia. These events signal a complex security environment in which state and non-state actors exploit technological and societal vulnerabilities, necessitating a coordinated, multifaceted response from Western alliances.
> TABLE OF CONTENTS
HOSTILE INFLUENCE CAMPAIGNS
AI-RELATED ARTICLES
GENERAL REPORTS
TAKEAWAYS
HOSTILE INFLUENCE CAMPAIGNS
Kremlin Weaponizes History in Disinformation Campaign Against Germany
A report from EU vs. Disinfo details a coordinated Kremlin information manipulation campaign aimed at portraying Germany as a resurgent, aggressive military power. The campaign's central narrative, promoted through state-controlled media such as Rossiya 1 and Vesti FM, distorts Germany's defensive policies into an offensive "rearmament" that betrays post-war commitments. The author states this strategy involves direct demonization of political figures, including baselessly labeling Chancellor Friedrich Merz a "descendant of Nazis." The campaign weaponizes historical trauma to justify Russia's invasion of Ukraine to domestic audiences while simultaneously seeking to undermine German public support for Ukraine and weaken transatlantic security cooperation. The report concludes that this is a top-down strategy, endorsed by Kremlin officials such as Dmitry Peskov, with clear objectives in both domestic and foreign policy.
Source:
EUvsDisinfo, 2025. Summoning the ghost of the Reich. [online].
Available at: https://euvsdisinfo.eu/summoning-the-ghost-of-the-reich/
Russia-Linked Group Impersonates Journalists in European Disinfo Push
Researchers have identified a Kremlin-linked threat actor, Storm-1516, conducting a sophisticated disinformation campaign by impersonating journalists and spoofing news websites across Europe. A report from The Record details how the campaign targets countries including France, Armenia, Germany, Moldova, and Norway with false narratives designed to discredit political leaders and sow discord. The group's method involves using the names and photos of real reporters on fabricated articles to lend them unearned credibility. The campaign's impact is considered significant, with French authorities labelling the group a threat to European public debate. The narratives have ranged from fabricated corruption scandals involving Moldovan and Armenian leaders to false environmental crises aimed at disrupting international forums, such as the Internet Governance Forum (IGF).
Source:
The Record, Antoniuk, D., 2025. Russia-linked group spoofing European journalists to spread disinformation. [online]. Available at: https://therecord.media/russia-group-spoofing-journalists-disinfo
How Russia Tailors Propaganda for an 'Informational Occupation'
Between January 2024 and April 2025, a network of over 3,600 automated accounts flooded Telegram channels in Russian-occupied Ukraine with pro-Kremlin comments. According to a joint analysis by OpenMinds and DFRLab, this botnet represents a targeted "informational and cultural occupation" running parallel to the military one. The campaign tailors its messaging by disproportionately pushing narratives that praise Russian culture and governance, a different emphasis than used for domestic Russian audiences. This strategy suggests a specific goal beyond simple propaganda: the report concludes the effort is aimed at manufacturing the illusion of local support for Russia's presence, effectively creating a fabricated consensus to legitimize its control.
Source:
Atlantic Council, Dukach, Y., Adam, I. & Furbish, M., 2025. Digital occupation: Pro-Russian bot networks target Ukraine’s occupied territories on Telegram. [online]. Available at: https://www.atlanticcouncil.org/in-depth-research-reports/report/report-russian-bot-networks-occupied-ukraine/
New EU Sanctions Target Russian FIMI, From State Media to Military Units
The Russian military unit linked to widespread GPS jamming over the Baltic Sea, which has disrupted civil aviation, is now under EU sanctions. This action, reported by the EEAS Delegation to Ukraine, is part of a broader package announced on July 15, 2025, targeting Russia's hybrid warfare and information manipulation campaigns. The new listings also include the state media network RTRS, which is intended to supplant Ukrainian broadcasters in occupied regions, as well as several entities created by the late Yevgeny Prigozhin. One such group, the Foundation to Battle Injustice, is cited for spreading disinformation that accused French soldiers of kidnapping children in Niger. The sanctions demonstrate the EU's strategy of targeting the full spectrum of Russia's FIMI apparatus, from military electronic warfare units to individual social media influencers.
Source:
Press and Information Team, Delegation to Ukraine (EEAS), 2025. Russian hybrid threats: EU lists nine individuals and six entities responsible for destabilising actions in the EU and Ukraine. [online].
Control vs. Growth: The New Dilemma Shaping China's AI Ambitions
A report from the Carnegie Endowment for International Peace argues that China's AI policy follows a cyclical pattern, oscillating between prioritizing economic growth when it feels technologically vulnerable and asserting ideological control when it feels strong. The author states that the early 2025 breakthrough of the DeepSeek-R1 model has initiated a new, unprecedented "Crossroads Era." A core tension defines this period: China now possesses high technological confidence in its AI capabilities, but its lackluster economy creates a conflicting imperative. While evidence suggests a return to control, seen in intensified oversight of DeepSeek and new content-labeling regulations, economic fragility and US export controls may compel Beijing to adopt a more pragmatic, growth-oriented approach.
Source:
Carnegie Endowment, Singer, S. & Sheehan, M., 2025. China’s AI Policy at the Crossroads: Balancing Development and Control in the DeepSeek Era. [online].
Available at: https://carnegieendowment.org/research/2025/07/chinas-ai-policy-in-the-deepseek-era?lang=en
Pakistan's Alliance with China: A Partnership with Hidden Costs
A Doublethink Lab report by Dr. Haroon ur Rasheed Baloch examines the extensive influence of the People's Republic of China (PRC) in Pakistan, primarily driven by the China-Pakistan Economic Corridor (CPEC). The author asserts that this deep integration across Pakistan's economic, military, technological, and academic sectors, while offering benefits, poses significant risks to national sovereignty and social stability. The report highlights a lack of transparency in CPEC agreements, resulting in economic burdens such as soaring electricity tariffs. It also examines the PRC's soft power campaign, which shapes media and academic discourse to favor Beijing's narratives, as well as a growing military collaboration centered on Gwadar Port that raises regional security concerns for the US and India.
Source:
Doublethink Lab, Baloch, H. ur R., China Index Spotlight: PRC’s Soft and Hard Power Influence in Pakistan. [online]. Available at: https://medium.com/doublethinklab/prcs-soft-and-hard-power-influence-in-pakistan-5f7c454912ab
From Deepfakes to 'Influence for Hire': China's Evolving Tactics
Recent analyses from Foreign Policy and Doublethink Lab reveal coordinated, PRC-linked disinformation campaigns targeting domestic politics in the Philippines and Taiwan. The reports detail how these operations utilize vast networks of inauthentic accounts across platforms such as Facebook, X, and Threads to exploit internal political rivalries. In the Philippines, the campaign reportedly uses generative AI and deepfakes to inflame feuds between the Marcos and Duterte factions. In Taiwan, a similar operation impersonates locals using stolen photos to criticize the ruling Democratic Progressive Party (DPP). The author of the Doublethink Lab report suggests that these networks may be part of a commercial "influence for hire" ecosystem, blending political messaging with unrelated content to build their personas.
Source:
Foreign Policy, Aspinwall, N., 2025. The Philippines Is a Petri Dish for Chinese Disinformation. [online]. Available at: https://foreignpolicy.com/2025/07/14/china-philippines-disinformation-elections/
Fake Accounts Impersonate Taiwanese on Threads
Between March and April 2025, Doublethink Lab identified 51 inauthentic Threads accounts posing as Taiwanese citizens and targeting domestic political discourse. The accounts, likely linked to the People’s Republic of China (PRC), used stolen profile photos, traditional Chinese text, and localized content to amplify anti-Democratic Progressive Party (DPP) narratives. Evidence of simplified Chinese usage, Hong Kong-linked phone numbers, and copy-pasted political slogans bolstered attribution. The campaign mixed political messaging with sexually suggestive content and commercial spam, consistent with a pattern of “influence-for-hire” operations. Although engagement was limited to the Threads platform, political posts saw significantly higher interaction rates, suggesting partial success in breaching the inauthentic content bubble.
Source:
Doublethink Lab, Digital Intelligence Team, 2025. Inauthentic Accounts Impersonate Taiwanese to Attack Political Party. [online].
Available at: https://medium.com/doublethinklab/inauthentic-accounts-impersonate-taiwanese-to-attack-political-party-c7d04d5e1e13
AI-RELATED ARTICLES
Algorithmic Lies: AI News Channels Undermine Canada’s Election
In the days after Canada’s 2025 election, dozens of YouTube videos surfaced claiming ballot box theft and recount conspiracies in ridings that no longer exist—fabrications generated by AI and viewed millions of times. DFRLab traced this coordinated campaign to 42 faceless, AI-powered channels posing as Canadian news outlets. These channels pushed partisan narratives favoring Conservative politicians, amplified Alberta separatism, and spread election disinformation under the guise of breaking news. While YouTube suspended many of the accounts, the incident highlights how “AI slop” is rapidly shaping digital discourse, exploiting platform algorithms with nearly zero human oversight or factual grounding.
Source:
DFRLab, 2025. AI-generated news channels spread election fraud and separatist narratives in Canada. [online]. Available at: https://dfrlab.org/2025/07/17/ai-generated-news-channels-spread-election-fraud-and-separatist-narratives-in-canada/
AI Bot Network Fractures Amid MAGA Epstein Fallout
NBC News reports on a network of over 400 suspected AI-driven bot accounts on X (formerly Twitter) that automatically respond to conservative users with pro-Trump content. Tracked by researchers at Alethea and Clemson University, the network exhibits signs of coordinated inauthentic behavior, including the repetition of messages, the use of irrelevant hashtags, and the exclusive posting of replies. The bots initially maintained unified support for Trump and MAGA figures but fractured following controversy over the Epstein files, with contradictory messages appearing simultaneously. Experts suggest the AI was trained on real MAGA content and mirrors organic shifts in sentiment. The incident reflects broader concerns about AI-amplified influence operations on poorly moderated platforms.
Source:
NBC News, Collier, K., 2025. A MAGA bot network on X is divided over the Trump‑Epstein backlash. [online].
Available at: https://www.nbcnews.com/tech/internet/maga-ai-bot-network-divided-trump-epstein-backlash-rcna219167
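The coordination signals described in the item above, such as reply-only posting and repetition of near-identical messages, can be illustrated with a simple heuristic screen. The sketch below is illustrative only and does not reflect the methodology used by Alethea or Clemson University; the data layout, field names, and thresholds are assumptions.

```python
from collections import Counter

# Illustrative heuristic screen for two coordination signals mentioned above:
# (1) an account that does almost nothing but reply to others, and
# (2) heavy reuse of the same message text.
# Field names, thresholds, and data layout are hypothetical assumptions.

def flag_suspect_accounts(accounts, reply_ratio_min=0.95, repeat_ratio_min=0.5):
    """Return IDs of accounts whose recent activity matches both heuristics."""
    flagged = []
    for acct in accounts:
        posts = acct.get("posts", [])
        if not posts:
            continue
        # Share of posts that are replies to other users.
        reply_ratio = sum(p["is_reply"] for p in posts) / len(posts)
        # Share of posts that reuse the single most common normalized text.
        texts = [p["text"].strip().lower() for p in posts]
        most_common_count = Counter(texts).most_common(1)[0][1]
        repeat_ratio = most_common_count / len(posts)
        if reply_ratio >= reply_ratio_min and repeat_ratio >= repeat_ratio_min:
            flagged.append(acct["id"])
    return flagged

# Toy example: one repetitive, reply-only account and one ordinary account.
sample = [
    {"id": "acct_a", "posts": [{"is_reply": True, "text": "Great point!"}] * 20},
    {"id": "acct_b", "posts": [{"is_reply": False, "text": f"post {i}"} for i in range(20)]},
]
print(flag_suspect_accounts(sample))  # ['acct_a']
```

In practice, researchers combine many such signals, including posting cadence, account creation dates, and shared infrastructure, before attributing coordinated inauthentic behavior.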
Digital Democracy in Decline: Global Trends and Consequences
The Carnegie Endowment article outlines three converging threats that undermine digital democracy: the shrinking of civic space, declining funding for digital rights, and the erosion of Western legitimacy. Civic space is increasingly suppressed through surveillance technologies, repressive legislation, and collusion between platforms and governments. Meanwhile, global funding for digital rights organizations has declined sharply due to shifts in the political landscape and nationalist policies. The article also critiques Western hypocrisy in promoting digital freedoms abroad while enabling surveillance and repression domestically, leading many Global Majority countries to turn to China and Russia for digital infrastructure. The piece concludes by advocating for sustainable, decentralized funding and renewed legitimacy through consistent, rights-based engagement.
Source:
Carnegie Endowment for International Peace, Sesan, ‘Gbenga, 2025. Shrinking Civic Space, Digital Funding, and Legitimacy in a Post‑Truth Era. [online]. Available at: https://carnegieendowment.org/research/2025/07/shrinking-civic-space-digital-funding-and-legitimacy-in-a-post-truth-era?lang=en
GENERAL REPORTS
Cognitive Warfare: The Silent Frontline of Modern Conflict
The Conversation article examines the concept of cognitive warfare (or “cog war”) as an emerging domain of conflict in which adversaries manipulate human perception and behavior through disinformation and psychological tactics, often below the threshold of traditional armed conflict. Using examples from COVID-19 and the Ukraine war, the article illustrates how false narratives, sometimes supported by AI-driven microtargeting, can lead to real-world harm. As such operations increasingly erode the boundaries between digital, cognitive, and physical domains, legal frameworks lag. Current laws of war do not address psychological harm, raising calls for expanded protections under human rights law.
Source:
The Conversation, Gisselsson Nord, D. & Rinaldi, A., 2025. Cognitive warfare: why wars without bombs or bullets are a legal blind spot. [online].
Available at: https://theconversation.com/cognitive-warfare-why-wars-without-bombs-or-bullets-are-a-legal-blind-spot-260607
Austria’s Disinformation Landscape: Narratives, Actors, and Impacts
A recent report from EU DisinfoLab outlines how Austria has become a hotspot for diverse and persistent disinformation narratives, often tied to political opportunism and ideological movements. Key themes include anti-migrant sentiment, COVID-19 conspiracy theories, pro-Russian framing of the Ukraine war, and hostility toward renewable energy and EU regulations. Disinformation has circulated widely through both alternative and mainstream media, often amplified by far-right actors such as the FPÖ. Despite repeated fact-checking efforts and legal responses, narratives such as “migrants abuse welfare” or “vaccines cause turbo cancer” continue to shape public opinion and political discourse, especially ahead of upcoming elections.
Source:
EU DisinfoLab, Schäfer, C., 2025. Disinfo Landscape in Austria. [online].
Available at: https://www.disinfo.eu/wp-content/uploads/2025/07/20250717_Disinfo-landscape-in-Austria-v2.pdf
International Sweep Disrupts Pro-Russian Cybercrime Network
Operation Eastwood, a multinational cybercrime crackdown coordinated by Europol and Eurojust, targeted the pro-Russian group NoName057(16), known for ideological DDoS attacks across Europe. The operation, involving 25 countries and private sector partners, led to two arrests, seven international warrants, and the disruption of over 100 servers. Germany identified six suspects as Russian nationals and issued multiple arrest warrants. The group’s activities escalated from attacks on Ukrainian targets to those against NATO-aligned states, including recent incidents in the Netherlands and Switzerland. Authorities highlighted the group’s use of gamified recruitment methods, cryptocurrency incentives, and decentralized operations relying on ideological volunteers.
Source:
Europol, 2025. Global operation targets NoName057(16) pro‑Russian cybercrime network. [online].
Available at: https://www.europol.europa.eu/media-press/newsroom/news/global-operation-targets-noname05716-pro-russian-cybercrime-network
EU Observatory Expands Fight Against Online Disinformation
In 2020, eight regional hubs across Europe were funded to bolster a coordinated response against digital disinformation, marking the second phase of the European Digital Media Observatory (EDMO). The project, led by the European University Institute and involving partners from Greece, Denmark, and Italy, builds on an infrastructure launched in 2019. Its goal is to provide secure, privacy-conscious data access for researchers, boost media literacy, support fact-checking networks, and inform policymakers. Independent from EU authorities, EDMO exemplifies a pan-European attempt to consolidate fragmented anti-disinformation efforts into a cohesive, evidence-based ecosystem.
TAKEAWAYS
This week's events illustrate a broad strategic pivot by state actors toward manufacturing inauthentic consensus. From botnets fabricating pro-Kremlin support in occupied Ukraine to AI-generated content simulating grassroots political movements in North America, the objective is not just to sow discord but to create the illusion of popular will. This poses a fundamental challenge to discourse in democratic societies: as distinguishing between genuine public opinion and artificially generated narratives becomes increasingly difficult, the very foundation of legitimate governance erodes.