Weekly Report: Cyber-Based Influence Campaigns, 21st – 27th July 2025
- CRC
- Jul 30
Updated: Aug 7

Introduction
During the 21st to the 27th of July 2025, we observed, collected, and analyzed data points related to cyber-based hostile influence campaigns (including Cyfluence attacks). The following report summarizes what we regard as the main events.
The week’s reporting illuminates a strategic tension between rapidly advancing, often state-aligned disinformation capabilities and the slower-moving, fractured responses of the democracies and institutions tasked with defending truth. China's AI chatbots echo state lines with near-automated efficiency, while Russia’s hybrid warfare blends digital deception with kinetic force. Western platforms and governments appear either unwilling or ill-equipped to mount an equally agile defense, as evidenced by X’s non-compliance with EU sanctions and the U.S. military’s lagging AI adoption. This asymmetry of adaptation risks ceding the informational high ground to authoritarian actors adept at shaping perceptions before the facts catch up.
Highlights
A recent investigation by Alliance4Europe revealed that X is failing to enforce EU sanctions against Russian state-affiliated media and influence operations, allowing widespread dissemination of disinformation.
As published by EU vs. Disinfo, Imedi TV, one of Georgia’s most-watched channels, has become a central platform for pro-government propaganda and disinformation, mirroring Kremlin-style tactics to discredit the European Union and democratic institutions.
DFRLab reports that pro-Russian TikTok influencers are spreading disinformation by portraying occupied Donbas as thriving under Russian control, using curated videos to legitimize the occupation and distort public perception.
A NewsGuard audit revealed that five leading Chinese AI models frequently echoed false pro-Beijing narratives, failing to provide accurate information in 60 percent of cases tested in both English and Mandarin.
RAND reports that the U.S. Department of Defense is exploring the use of generative AI for influence operations, raising concerns about coordination, oversight, and ethical risks in military messaging.
GIOR reports that a transnational network of national conservative actors, backed in part by Russian influence operations, is promoting coordinated illiberal narratives that threaten democratic institutions across Europe and the U.S.
Google’s Threat Analysis Group reports that in Q2 2025, it dismantled nearly 11,000 YouTube channels and other platform accounts tied to coordinated disinformation campaigns by state-linked actors from China, Russia, Iran, and others aiming to manipulate global narratives.
TABLE OF CONTENTS
SOCIAL MEDIA PLATFORMS
STATE ACTORS
AI RELATED ARTICLES
GENERAL REPORTS
SOCIAL MEDIA PLATFORMS
X Fails to Enforce EU Sanctions Against Russian State Media
A recent report by Science Feedback and Alliance4Europe reveals that the social media platform X is systematically failing to comply with EU sanctions against Russian state media. Researchers identified hundreds of sanction-violating posts and formally reported a sample of 125 to X using its designated tool for illegal content. The platform's response was profoundly inadequate: only 57% of reports even received an acknowledgement, a failure that itself breaches the platform's obligations under the Digital Services Act (DSA). Furthermore, only a single post was removed; in all other cases, X claimed no violation of EU law was found, despite clear evidence to the contrary. The investigation concludes that X’s moderation mechanisms are either insufficiently equipped to enforce sanction-related policies or the platform is unwilling to do so, posing a significant risk to the EU’s information environment.
Source:
Science Feedback and Alliance4Europe, Terroille, C., Nazari, S. and Casandjian, E., 2025. Flagged and Ignored: Testing X’s Response to EU Sanction Violations. [online] Available at:
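As a rough illustration of the audit's bookkeeping, the Python sketch below (hypothetical data structures, not the researchers' actual tooling) shows how acknowledgement and removal rates like the reported 57% and one-in-125 figures can be tallied from a set of flagged-post records.

```python
# Minimal sketch with synthetic data: tallying the compliance metrics the
# Science Feedback / Alliance4Europe audit describes. Each flagged post is
# recorded with whether X acknowledged the report and what action was taken.
from dataclasses import dataclass

@dataclass
class FlagReport:
    post_id: str
    acknowledged: bool   # did X confirm receipt of the report?
    action: str          # "removed", "no_violation", or "pending"

def compliance_summary(reports: list[FlagReport]) -> dict[str, float]:
    n = len(reports)
    acked = sum(r.acknowledged for r in reports)
    removed = sum(r.action == "removed" for r in reports)
    return {
        "reports_filed": n,
        "ack_rate": acked / n,        # audit found ~0.57
        "removal_rate": removed / n,  # audit found 1/125
    }

# Synthetic sample reproducing the audit's headline numbers:
sample = (
    [FlagReport(f"p{i}", True, "no_violation") for i in range(70)]
    + [FlagReport("p125", True, "removed")]
    + [FlagReport(f"q{i}", False, "pending") for i in range(54)]
)
print(compliance_summary(sample))
```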
STATE ACTORS
A Megaphone for Power: How Imedi TV Undermines Georgia’s EU Path
An article from EUvsDisinfo details how Imedi TV, one of Georgia's most-watched television channels, operates as a powerful propaganda machine for the ruling Georgian Dream party. The outlet systematically promotes anti-Western narratives and Kremlin-style disinformation to undermine the country's European aspirations. According to the analysis, Imedi TV, owned by a businessman with alleged links to Russian elites, demonizes the political opposition and civil society. It also orchestrates smear campaigns against EU diplomats, including Ambassador Paweł Herczyński, accusing them of political interference and serving a "Deep State." By framing European institutions as corrupt and hostile, the channel functions as a key instrument in Georgia's democratic backsliding and its obstruction of a European future.
Source:
EUvsDisinfo, 2025. Imedi TV: The propaganda megaphone undermining Georgia’s EU aspirations. [online] Available at: https://euvsdisinfo.eu/imedi-tv-the-propaganda-megaphone-undermining-georgias-eu-aspirations/
NoName057(16) Conducts Sustained DDoS Campaign Against Ukraine's Allies
A new report from Recorded Future’s Insikt Group analyzes the pro-Russian hacktivist group NoName057(16), which has waged a large-scale distributed denial-of-service (DDoS) campaign against more than 3,700 entities over the last 13 months. The group, which emerged just after Russia’s 2022 invasion of Ukraine, primarily targets government and public-sector organizations in European nations that oppose the invasion. The analysis reveals a high operational tempo, averaging 50 unique targets daily, and a resilient multi-tiered command-and-control (C2) infrastructure. Pattern-of-life analysis strongly suggests the operators work from within a Russian time zone. The report concludes that such hacktivist operations are a persistent feature of modern geopolitical conflict, requiring robust defensive strategies and situational awareness from at-risk organizations.
Source:
Recorded Future, Insikt Group, 2025. [online] Available at: https://go.recordedfuture.com/hubfs/reports/cta-2025-0722.pdf
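The pattern-of-life finding above rests on a simple idea: bin an actor's activity timestamps by hour, and the UTC offset under which that activity best resembles a normal working day hints at the operators' time zone. The sketch below is a minimal, hypothetical version of that technique, not Insikt Group's actual methodology.

```python
# Illustrative pattern-of-life analysis: given UTC timestamps of attack-wave
# starts, find the UTC offset whose implied local schedule best matches a
# typical 09:00-18:00 working day. Toy data and scoring only.
from datetime import datetime, timedelta, timezone

def best_utc_offset(timestamps: list[datetime], workday=range(9, 18)) -> int:
    def score(offset_hours: int) -> int:
        # Count events falling inside working hours at this offset.
        tz = timezone(timedelta(hours=offset_hours))
        return sum(ts.astimezone(tz).hour in workday for ts in timestamps)
    return max(range(-12, 15), key=score)

# Toy example: activity clustered around 06:00-14:00 UTC is most consistent
# with a UTC+3 (Moscow-time) working day.
events = [datetime(2025, 7, 1 + d, h, tzinfo=timezone.utc)
          for d in range(10) for h in (6, 9, 12, 14)]
print(best_utc_offset(events))  # -> 3
```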
Manufacturing a Reality: The Kremlin's Influencer War in Occupied Ukraine
In occupied Ukrainian territories, the Kremlin is training a new generation of TikTok bloggers in specialized schools to ensure their content reflects official messaging. This tactic, detailed in a recent analysis by the DFRLab, is part of a sophisticated strategy to project an image of revival and prosperity in war-torn regions like Mariupol. Pro-Russian influencers create curated video tours of allegedly rebuilt cities, deliberately countering reports of widespread devastation. These campaigns exploit the information vacuum and the persuasive power of visual content to manufacture consent for the occupation. The ultimate goal is to create a powerful, alternative reality that erodes trust in Ukrainian and Western sources while cementing Russia's control.
Source:
DFRLab, 2025. Manufacturing reality: how pro-Russian TikTok accounts promote Donbass ‘revival’. [online] Available at: https://dfrlab.org/2025/07/21/manufacturing-reality-how-pro-russian-tiktok-accounts-promote-donbass-revival/
The Propaganda in the Machine: China's AI Echoes State Narratives
A NewsGuard audit of five leading Chinese-backed AI models found they failed to provide accurate information 60 percent of the time when prompted with false narratives promoted by Beijing. The analysis tested chatbots from firms like Baidu, Alibaba, and Tencent, revealing a systemic tendency to repeat pro-China propaganda or provide evasive non-answers. This behavior was consistent across both English and Mandarin, indicating the bias is built into the models' fundamental design. The audit raises significant concerns about the normalization of censorship and state propaganda as these AI systems see growing adoption by international universities and banks, posing a geopolitical risk by echoing the Chinese Communist Party's narratives to a global audience.
Source:
NewsGuard, Lin, C., and Sadeghi, M., 2025. Chinese AI Models: 60 Percent Fail Rate, Repeating Pro-China Claims. [online] Available at: https://www.newsguardtech.com/special-reports/chinese-ai-models-60-percent-fail-rate-pro-china-claims/
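Audits of this kind follow a repeatable loop: feed each chatbot a vetted false narrative, classify the response as a debunk, a repetition of the claim, or a non-answer, and count anything that is not a debunk as a failure. The sketch below is a hedged illustration of that loop; the classify() heuristic, the placeholder narrative, and the stubbed model call are assumptions, not NewsGuard's tooling.

```python
# Hedged sketch of a false-narrative audit loop in the spirit of NewsGuard's
# methodology. The narrative text and classifier are illustrative placeholders.
FALSE_NARRATIVES = [
    "Hypothetical false claim promoted by state media ...",
]

def classify(reply: str, claim_keywords: set[str]) -> str:
    text = reply.lower()
    if any(marker in text for marker in ("false", "no evidence", "debunked")):
        return "debunk"
    if claim_keywords & set(text.split()):
        return "repeats_claim"
    return "non_answer"

def audit(model_call, narratives, keywords) -> float:
    """Return the fail rate: share of replies that repeat the claim or dodge."""
    outcomes = [classify(model_call(n), keywords) for n in narratives]
    return sum(o != "debunk" for o in outcomes) / len(outcomes)

# Toy stub standing in for a real chatbot API:
stub = lambda prompt: "There is no evidence for this; the claim is false."
print(audit(stub, FALSE_NARRATIVES, {"claim"}))  # -> 0.0
```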
Fabricating a Call to Arms: The False Jordan "Attack Israel" Quote
In July 2025, a provocative quote claiming Jordan had declared it "a good time to attack Israel" began spreading across social media. The claim, amplified by pro-Iranian accounts on X, was entirely false, according to a fact-check by NewsGuard Reality Check. The article clarifies that a review of official statements from the Jordanian government and its state news agency shows no record of such a declaration. Instead of calling for war, Jordan's leaders have consistently pushed for a diplomatic solution, urging an immediate ceasefire and the delivery of humanitarian aid to Gaza. The incident highlights how disinformation is used to inflame regional tensions by fabricating bellicose statements from key diplomatic actors.
Source:
NewsGuard Reality Check, Wolf, M., 2025. Concocted Jordan Statement Threatens to Inflame Regional Tensions. [online] Available at: https://www.newsguardrealitycheck.com/p/concocted-jordan-statement-threatens
AI RELATED ARTICLES
Chasing Digital Ghosts While the Democratic Foundations Crumble
A blog post by the Oxford Internet Institute (OII) and the University of Zurich warns that alarmist narratives about generative AI’s impact on elections risk diverting attention from deeper systemic threats to democratic integrity. The authors argue that mass political persuasion through generative AI (GenAI) tools such as ChatGPT and Gemini is far less effective than often portrayed, while long-standing structural weaknesses, such as poor electoral governance, campaign finance imbalances, voter suppression tactics, and declining press freedom, pose greater dangers. The piece highlights how focusing narrowly on GenAI can obscure other harms, including AI-enabled harassment targeting women and minorities, and may encourage excessive regulations that inadvertently chill political discourse. Moreover, overemphasizing AI threats could erode public trust in electoral processes, especially when perceptions of manipulation are amplified by media coverage. The authors caution that such narratives could damage voter confidence and democratic stability more than GenAI itself.
Source:
Oxford Internet Institute, Simon, F. and Altay, S., 2024. Consequences of a Skewed Discourse Around Generative AI and Elections. [online] Available at: https://www.oii.ox.ac.uk/news-events/consequences-of-a-skewed-discourse-around-generative-ai-and-elections/
Unlocking Generative AI’s Power in U.S. Military Influence Campaigns
In the face of accelerating competition from China and Russia, the U.S. military’s influence community finds itself unprepared to fully leverage one of the most transformative technologies of the moment: generative AI. A RAND Corporation research report, commissioned by the Irregular Warfare Technical Support Directorate, reveals that while generative AI could dramatically improve content production, audience targeting, and operational planning, its adoption across DoD influence units is hampered by fragmentation, inadequate funding, and a lack of strategic guidance. Despite promising capabilities, like sentiment analysis, audience segmentation, and planning support, AI remains a tool waiting for a coherent doctrine, structured acquisition pathways, and senior-level buy-in. RAND concludes that for the U.S. to maintain an edge in information operations, the DoD must shift from isolated pilot efforts to a unified, well-resourced, and adaptive AI implementation strategy across its influence enterprise.
Source:
RAND Corporation, Robinson, E., Bicksler, B., Mele, J., et al., 2024. Accelerating the Integration of Generative Artificial Intelligence into Influence Activities: Findings and Recommendations for the Department of Defense. [online] Available at: https://www.rand.org/content/dam/rand/pubs/research_reports/RRA3100/RRA3157-1/RAND_RRA3157-1.pdf
When AI Repeats the Lie Differently, People Believe It More
A peer-reviewed study published in PNAS Nexus examines how generative AI alters the dynamics of repetitive propaganda on social media by comparing traditional CopyPasta tactics to AI-paraphrased variants dubbed "AIPasta." While prior campaigns relied on exact-message repetition to exploit the illusory truth effect, this research shows that AIPasta, which increases lexical variety while maintaining semantic similarity, may be more effective at enhancing perceptions of social consensus, especially among politically aligned and unfamiliar audiences. Although AIPasta did not significantly increase belief in false claims overall, it proved more persuasive among less familiar users in specific contexts. Crucially, AI detectors failed to identify AIPasta as synthetic, raising concerns about scale and detectability. The study highlights a critical evolution in information operations: generative AI can subtly shape opinion not by changing individual beliefs, but by simulating popular consensus around disinformation narratives.
Source:
PNAS Nexus, Dash, K. et al., 2025. From CopyPasta to AIPasta: How Generative AI Alters the Persuasive Dynamics of Repetition in Social Media Campaigns. [online] Available at: https://academic.oup.com/pnasnexus/article/4/7/pgaf207/8209914
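The CopyPasta/AIPasta distinction can be made concrete with two measurements: lexical overlap between message pairs (high for CopyPasta, low for AIPasta) and semantic similarity (high for both). The sketch below computes a simple Jaccard overlap on toy strings of my own invention; a sentence-embedding model, not reproduced here, would supply the semantic score. This is an illustration of the concept, not the study's pipeline.

```python
# Lexical overlap as a CopyPasta vs. AIPasta signal: identical messages score
# 1.0, paraphrased variants score low even when they carry the same claim.
def tokens(s: str) -> set[str]:
    return {w.strip(".,!?").lower() for w in s.split()}

def jaccard(a: str, b: str) -> float:
    wa, wb = tokens(a), tokens(b)
    return len(wa & wb) / len(wa | wb)

copypasta = ("The election was stolen by fraud.",
             "The election was stolen by fraud.")
aipasta   = ("The election was stolen by fraud.",
             "Fraud is how the vote was taken from us.")

print(jaccard(*copypasta))  # 1.0  -> verbatim repetition (CopyPasta)
print(jaccard(*aipasta))    # 0.25 -> varied wording (AIPasta); an embedding
                            #         model would still rate the pair near-equivalent
```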
GENERAL REPORTS
Google Disrupts Global State-Linked Disinformation Networks at Scale
A quarterly bulletin from Google’s Threat Analysis Group reveals a sweeping crackdown on coordinated influence operations across YouTube and other platforms from April to June 2025. The largest operations were linked to the People’s Republic of China (PRC), accounting for over 7,700 terminated YouTube channels, followed by Russia with more than 2,500. These campaigns consistently pushed pro-state narratives in multiple languages while attacking rival states, democratic institutions, and regional critics. The Russian operations included consulting firms, state-sponsored entities, and media outlets like RT, and targeted audiences across Europe, Asia, and the Americas. Azerbaijan emerged as a significant actor, with over 1,000 channel terminations tied to anti-Armenian and anti-dissident content. Smaller campaigns linked to Iran, Turkey, Romania, Ghana, and Israel also exploited YouTube, Blogger, and ad networks to spread politically motivated messaging. The bulletin underscores the growing industrialization and geographic diversification of coordinated digital propaganda.
Source:
Google, Threat Analysis Group, 2025. TAG Bulletin: Q2 2025. [online] Available at: https://blog.google/threat-analysis-group/tag-bulletin-q2-2025/
Disinformation Storm: How Platforms Amplify Climate Crisis Falsehoods
A report by the Center for Countering Digital Hate (CCDH) exposes a massive failure across Meta, YouTube, and X in moderating false or misleading content about extreme weather events. Analyzing 300 high-engagement posts from the Online Deniers Dataset between April 2023 and April 2025, the study found these posts amassed over 221 million views while lacking substantive fact-checking labels or context. Wildfires and hurricanes dominated the narratives, often intertwined with conspiracy theories and political disinformation. Verification badges and monetization policies on all three platforms enhanced the reach and credibility of false claims. Notably, 88% of X posts, 73% on YouTube, and 64% on Meta came from verified accounts. YouTube was also found to recommend further disinformation in one-third of the analyzed cases. Across the platforms, misleading posts not only undermined public trust but also posed tangible risks during crises, misdirecting aid, threatening public safety, and fueling real-world violence.
Source:
Center for Countering Digital Hate (CCDH), 2025. Extreme Weather Disinformation Report. [online] Available at: https://counterhate.com/research/extreme-weather-false-claims
National Conservatism’s Global Rise and the Weimar Warning
A report from the Global Influence Operations Report (GIOR) examines the Global National Conservative Alliance (GNCA), a transnational coalition of far-right parties and influencers, arguing it echoes the Weimar-era conservative miscalculation that empowered Hitler. The report details how parties like Hungary’s Fidesz, France’s National Rally, and Germany’s AfD form a modern “respectable radicalism,” bolstered by foreign autocracies such as Russia and China via engineered feedback loops. Hungary, serving as the ideological hub, leverages institutions like the Mathias Corvinus Collegium to spread illiberal ideology across the EU and the U.S. The study outlines how regulatory capture, legal harassment, and disinformation campaigns erode democratic institutions in Hungary, the U.S., Germany, and Israel. Drawing parallels to 1930s Germany, the report warns that conservative elites repeatedly overestimate their ability to control authoritarian movements, underestimating the acceleration digital platforms provide. The GNCA thrives where democratic institutions fail to defend themselves.
Source:
Global Influence Operations Report, GIOR Staff, 2025. National Conservatism’s Weimar Echo (Series Part 3 of 3). [online] Available at: https://www.global-influence-ops.com/gior-analysis-weimar-echo-national-conservative/
Transformer-Based Model Elevates Deepfake Detection Standards
In a recent article by Tech Xplore, researchers from UC Riverside and Google unveiled a cutting-edge system to combat video-based disinformation. Their tool, UNITE (Universal Network for Identifying Tampered and synthEtic videos), detects manipulated content by analyzing entire video frames, extending beyond traditional face-based deepfake detection. UNITE leverages transformer-based architecture and innovations like “attention-diversity loss” to identify inconsistencies in spatial and temporal features, even when no human faces are present. The system is trained on diverse synthetic content using the SigLIP framework and is designed to detect forgeries ranging from simple face swaps to fully AI-generated videos. Presented at the 2025 CVPR conference, this model positions itself as a universal detector at a time when generative tools for fabricating realistic video content are increasingly accessible, presenting risks to public trust, democratic discourse, and institutional integrity.
Source:
Tech Xplore, David Danelski, 2025. AI system identifies fake videos beyond face swaps and altered speech. [online] Available at: https://techxplore.com/news/2025-07-scientists-tool-fake-videos.html
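The "attention-diversity loss" mentioned above can be read as a regularizer that discourages a transformer's attention heads from all converging on the same region, such as a face, so that frame-wide artifacts still receive scrutiny. The PyTorch sketch below is one plausible rendering of that idea under stated assumptions; it is not the UNITE authors' published implementation.

```python
# Sketch of an attention-diversity style regularizer: penalize pairwise
# similarity between the attention maps of different heads so the detector
# spreads its focus across the whole frame.
import torch
import torch.nn.functional as F

def attention_diversity_loss(attn: torch.Tensor) -> torch.Tensor:
    """attn: (batch, heads, tokens) attention weights over spatial tokens."""
    a = F.normalize(attn, dim=-1)                 # unit-normalize each head's map
    sim = torch.einsum("bht,bgt->bhg", a, a)      # head-to-head cosine similarity
    eye = torch.eye(attn.shape[1], device=attn.device)
    off_diag = sim * (1.0 - eye)                  # zero out self-similarity
    return off_diag.mean()                        # penalize heads that agree

# Toy usage: 2 samples, 4 heads, 16 spatial tokens; this term would be added
# to the detector's main classification loss during training.
attn = torch.softmax(torch.randn(2, 4, 16), dim=-1)
print(attention_diversity_loss(attn).item())
```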
Fighting on Two Fronts: Ukraine’s Volunteers and Information War
As drone strikes lit up Kyiv in early 2025, Steffen Berr documented the aftermath not as a journalist, but as a soon-to-be soldier, one of thousands of foreign volunteers driven to Ukraine by conviction and a refusal to accept disinformation. The feature traces Berr’s transition from fundraiser to fighter, showing how international volunteers counter Russian narratives through personal testimony. These frontline witnesses wield a kind of “lived credibility” that resists Moscow’s attempts to label them as extremists or NATO puppets. The piece then widens to the concept of cognitive warfare, arguing that conflicts fought over perception rather than territory remain a blind spot in international law. From human witnesses to legal frameworks, the article highlights how truth is defended in both physical and digital arenas, each front fighting the same war over perception.
Source:
The Conversation, Gisselsson Nord, D. and Rinaldi, A., 2025. Cognitive warfare: why wars without bombs or bullets are a legal blind spot. [online] Available at: https://theconversation.com/cognitive-warfare-why-wars-without-bombs-or-bullets-are-a-legal-blind-spot-260607
Australia and Japan Launch Joint Initiative Against Information Threats
As AI-generated disinformation proliferates in the Indo-Pacific, where over five billion people are exposed to rising digital threats, a new partnership between ASPI and JNI, announced on July 23, 2025, signals a joint resolve to push back. This agreement between two premier institutions, Australia’s research-driven ASPI and Japan’s field-focused JNI, builds a framework to detect, analyze, and respond to state-sponsored influence campaigns that increasingly blur lines between war, diplomacy, and deception. By merging strengths in open-source intelligence and strategic communication, the partners aim to build regional capacity against hybrid threats, elevate cross-border information security, and shore up trust in democratic systems. The move reinforces the Australia–Japan security axis while acknowledging that today’s information conflicts demand not just national but multilateral, technology-enabled defenses.
Source:
Australian Strategic Policy Institute (ASPI), 2025. ASPI and JNI establish strategic partnership to counter state-sponsored information operations and hybrid threats in the Indo-Pacific. [online] Available at: https://www.aspi.org.au/news/aspi-and-jni-establish-strategic-partnership-to-counter-state-sponsored-information-operations-and-hybrid-threats-in-the-indo-pacific/
TAKEAWAYS
Recent developments indicate that generative AI will likely become central to both offensive and defensive operations in the information domain, as seen in deepfake detection tools and AI-driven propaganda systems. However, structural lags in institutional response, especially among democratic states, will likely continue. Unless governance, acquisition, and platform accountability mechanisms accelerate, the information space may increasingly tilt toward authoritarian exploitation. Expect escalating reliance on AI tools by both malign actors and defenders, with the balance shaped by agility more than intent.