Weekly Report: Cyber-based digital hostile influence campaigns, 28th July - 3rd August 2025
- CRC
- Aug 5
Updated: Aug 8

[Introduction]
Between 28th July and 3rd August 2025, we observed, collected, and analyzed information related to cyber-based digital hostile influence campaigns (including Cyfluence attacks). The following report is a summary of what we regard as the main events.
This week’s developments span a broad spectrum of influence operations and digital information warfare, with three dominant themes emerging: the intensification of AI-generated disinformation across global conflicts, the increasing use of state-aligned platforms to manipulate geopolitical narratives, and the shifting responsibilities of tech companies as frontline moderators. From TikTok dismantling covert influence networks to Kremlin-backed media targeting Ukrainian officials with fabricated smears, and Iran and Israel deploying synthetic propaganda amid real-world hostilities, the week's cases expose a converging global trend. At the same time, platforms like X show declining transparency in content moderation, just as states like India and Poland respond assertively to information threats. Collectively, these events reflect the growing overlap between digital, kinetic, and psychological domains in modern statecraft and social control.
[Highlights]
As published by The Journal, the sudden and unexplained disappearance of Community Notes, X’s fact-checking tool, has raised serious concerns about the platform’s ability to counter disinformation.
As reported by The Record, the Polish authorities arrested 32 individuals allegedly working for Russian intelligence to conduct sabotage and spread disinformation as part of a broader hybrid warfare campaign targeting Poland.
According to an article by DisinfoWatch, Russian state media outlets RIA Novosti and Sputnik News have launched a disinformation campaign accusing Denys Yermak, brother of the head of Ukraine’s Presidential Administration, of involvement in international drug trafficking.
As stated in a Carnegie Endowment for International Peace article, the June 2025 Israeli bombing of Iran’s notorious Evin prison was quickly weaponized in the information war between Tehran and Tel Aviv.
As published by TechXplore, Cornell researchers developed a method to embed authentication codes in lighting, offering a device-agnostic approach to identifying fake or manipulated videos.
India’s government announced that during Operation Sindoor it launched a robust campaign to combat a surge in fake news and anti-India propaganda, much of which was traced to foreign sources, particularly Pakistan.
According to an article by CAIR, AI-generated images and old protest videos were circulated online to falsely portray Israeli cities under attack following military action against Iran.
TABLE OF CONTENTS
SOCIAL MEDIA PLATFORMS
STATE ACTORS
[Russia]
[Iran]
CYFLUENCE ATTACKS
GENERAL REPORTS
SOCIAL MEDIA PLATFORMS
The Vanishing Watchdog: X’s Fact-Check Tool Goes Dark
In a news article from The Journal, the unexplained disappearance of Community Notes, the crowd-sourced fact-checking tool on X (formerly Twitter), has raised concerns about the platform’s compliance with EU content moderation obligations. While Community Notes remains technically active, users report it no longer appears on posts, complicating efforts to counter misinformation. X attributes the issue to a May 22 data center fire, though technical failures predate that incident. Experts such as DCU’s Eileen Culloty and ISD’s Aoife Gallagher emphasize the platform’s unreliability and susceptibility to manipulation. Community Notes had been criticized for failing to consistently debunk disinformation due to its consensus-based model. Despite its flaws, its absence increases user vulnerability to false narratives, particularly amid broader scrutiny under the EU’s Digital Services Act. Ongoing investigations by the European Commission are probing whether X’s moderation efforts meet legal standards, against a backdrop of regulatory tensions between the EU and Elon Musk’s X Corp.
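The "consensus-based model" behind Community Notes can be illustrated with a simplified sketch: a note surfaces only when raters from groups that usually disagree both find it helpful. The Python below is illustrative only; the group labels, thresholds, and `note_status` function are assumptions for demonstration, not X's actual bridging algorithm.

```python
# Illustrative sketch of a consensus ("bridging") rule for crowd fact-checks:
# a note is shown only if raters from otherwise-disagreeing camps both rate
# it helpful. Group labels, threshold, and data are hypothetical.

def note_status(ratings, threshold=0.7, min_per_group=2):
    """ratings: list of (group, helpful: bool). Returns 'shown' only when
    every group independently rates the note helpful at >= threshold."""
    by_group = {}
    for group, helpful in ratings:
        by_group.setdefault(group, []).append(helpful)
    if len(by_group) < 2:
        return "needs more ratings"  # no cross-group signal yet
    for votes in by_group.values():
        if len(votes) < min_per_group:
            return "needs more ratings"
        if sum(votes) / len(votes) < threshold:
            return "not shown"  # at least one camp finds it unhelpful
    return "shown"

# A note praised by one camp but rejected by the other never surfaces,
# which is the weakness the article notes: consensus can fail to debunk.
print(note_status([("A", True), ("A", True), ("B", False), ("B", False)]))  # not shown
print(note_status([("A", True), ("A", True), ("B", True), ("B", True)]))    # shown
```

The design choice is deliberate: requiring cross-camp agreement resists partisan brigading, but it also means clearly false claims can go unlabeled whenever one camp withholds approval.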
Source:
The Journal, Raymond, S., 2025. Community Notes vanishes from X feeds, raising ‘serious questions’ amid ongoing EU probe. [online] Available at: https://www.thejournal.ie/x-community-notes-disappeared-from-platform-factchecking-misinformation-elon-musk-6717349-May2025/
Balancing Joy and Control: TikTok’s Quiet Redesign of Responsibility
In a press release by TikTok Newsroom, the platform announced a comprehensive suite of features aimed at strengthening user safety, privacy, and creativity across its global user base, with a special focus on teens, families, and creators. Among the updates are enhanced Family Pairing tools that notify parents when teens upload content, more transparency into teens’ privacy settings, and broader oversight through topic management features. To promote digital wellness, TikTok introduced "Well-being Missions," gamified activities that encourage healthy digital habits, and teased a holistic in-app wellness hub. For creators, new tools include Creator Care Mode, offensive content muting, and pre-publication eligibility checks for the For You feed. Additionally, TikTok expanded the pilot of its "Footnotes" feature in the U.S., enabling users to collaboratively contextualize video content. These efforts reflect TikTok’s strategy to pre-empt regulatory scrutiny by embedding safety into user experience architecture.
Source:
TikTok Newsroom, Presser, A., 2025. New trust and safety tools for creators, families, and the TikTok Community. [online] Available at: https://newsroom.tiktok.com/en-us/tiktok-announces-a-suite-product-features
Ghosts in the Feed: TikTok Battles Hidden Hands of Influence
In a report from TikTok, the platform disclosed the removal of six covert influence networks active in June 2025, totaling 616 inauthentic accounts and over 180,000 cumulative followers, with an additional 1,198 accounts dismantled for recidivist behavior. The report provides detailed attribution, suggesting that multiple networks were operated from Russia, the Philippines, Venezuela, and Saudi Arabia, and targeted regional political discourses through fabricated personas, AI-assisted amplification, and location-masking tactics. Networks sought to manipulate perceptions in Moldova, the EU, Saudi Arabia, the Philippines, and Ecuador, often promoting pro-Kremlin or partisan narratives while impersonating local citizens or journalists. TikTok’s disclosure highlights the platform's internal and external detection methods and indicates a commitment to disrupting politically motivated disinformation campaigns on a rolling basis.
Source:
TikTok Transparency Center (TikTok), 2025. Covert Influence Operations. [online].
Available at: https://www.tiktok.com/transparency/en-us/covert-influence-operations
STATE ACTORS
[Russia]
From Telegram to Arson: How Russia’s Proxy War Targets Poland
In a news article published by The Record, Polish Prime Minister Donald Tusk announced that 32 individuals have been detained in connection with pro-Russian sabotage activities targeting infrastructure since the outset of Russia’s full-scale invasion of Ukraine. The suspects, a multinational group including Colombian, Belarusian, and Ukrainian nationals, are accused of arson, reconnaissance, and participation in disinformation campaigns orchestrated by Russian intelligence. One operative, a Colombian national, was already convicted in the Czech Republic for attacks carried out in both countries. Authorities assert that these acts form part of a broader Russian hybrid warfare strategy involving sabotage, false narratives, and recruitment through encrypted platforms such as Telegram. Polish counterintelligence efforts have intensified, with the government warning of further Russian attempts to destabilize national security, particularly via attacks on critical infrastructure and aviation systems.
Source:
The Record, Antoniuk, D., 2025. Poland says more than 30 suspects face trial over pro‑Russian sabotage. [online].
Available at: https://therecord.media/poland-32-suspects-pro-russia-sabotage
Behind the Smear: Fact-Checking Russia’s Narrative on Umerov
In a media analysis by StopFake, the article outlines how Russian state actors and aligned media have launched a coordinated disinformation campaign aimed at discrediting Ukrainian Defense Minister Rustem Umerov. The report details attempts by Russian Foreign Ministry spokesperson Maria Zakharova and outlets like Argumenty i Fakty to falsely portray Umerov as an unqualified figure, labeling him a lifelong telecom manager and a possible agent of Turkey or the U.S. In reality, Umerov’s six-year telecom stint ended in 2010, followed by over a decade of leadership in investment, civil society, and government. His diplomatic experience includes key roles in prisoner exchanges and Ukraine’s de-occupation strategy. Allegations regarding foreign ties are based on distorted interpretations of his support for Bayraktar drone deals and his U.S.-based family. The text demonstrates how pro-Kremlin figures fabricate narratives that conflate diplomatic engagement with espionage.
Source:
StopFake, Tsoy, I., 2025. “Agent of the US and Turkey,” “manager at a telephone company”: how Russian propaganda distorts the biography of Ukrainian Defense Minister Umerov. [online].
Kremlin Media Fabricate Drug Smuggling Allegations to Undermine Ukraine
A report by DisinfoWatch exposes a Russian state media disinformation operation accusing Denys Yermak, brother of Ukraine’s presidential chief of staff, of leading an international drug trafficking scheme on behalf of Kyiv’s leadership. The story, pushed by RIA Novosti and Sputnik News, relies exclusively on an unnamed source and offers no independent or legal corroboration. The fabricated narrative implicates Turkish and Moldovan security services in supporting the smuggling of Afghan narcotics into Ukraine via Azerbaijan and the UAE. Analysts identify this as part of the Kremlin’s strategic playbook to discredit Zelenskyy’s inner circle, frame Ukraine as a criminalized state, and fracture NATO solidarity. With no verifiable evidence and a reliance on untraceable attribution, the story is flagged as a textbook case of narrative-driven, Kremlin-backed disinformation.
Source:
DisinfoWatch, 2025. Kremlin Invents Drug Smuggling Allegations Against Zelenskyy’s Inner Circle. [online].
Available at: https://disinfowatch.org/disinfo/kremlin-invents-drug-smuggling-allegations-against-zelenskyys-inner-circle/
[Iran]
AI-Generated Media Escalated Narrative Warfare in Iran-Israel Conflict
When a WhatsApp video of smoke billowing over Evin prison circulated on June 23, 2025, it seemed to symbolize a break in Iran’s authoritarian stranglehold. But shortly after, Israeli officials posted another clip, slicker, cleaner, and likely fake. As detailed in a Carnegie Endowment report, this moment encapsulated a broader war: not just of missiles but of manipulated media. Both Iranian and Israeli actors deployed AI-generated content to distort reality, with Iran suppressing real-world horrors and Israel framing its bombings as precision liberations. Civilians, cut off by blackouts and bombarded by synthetic media, could neither verify what they saw nor understand what was unfolding around them. Despite detection tools like SynthID and C2PA, their limitations leave global citizens vulnerable. The war’s ceasefire may have quieted bombs, but the erosion of trust in what is seen and shared signals a far more enduring threat.
Source:
Carnegie Endowment for International Peace, Alimardani, M. & Gregory, S., 2025. Iran‑Israel AI War Propaganda Is a Warning to the World. [online]. Available at: https://carnegieendowment.org/research/2025/07/iran-israel-ai-war-propaganda-is-a-warning-to-the-world
CYFLUENCE ATTACKS
From Missiles to Malware: How Hybrid Threats Target South Korea
A strategic analysis from the Australian Strategic Policy Institute (ASPI) maps 81 hybrid threat incidents targeting South Korea between 2016 and 2025, revealing an increasingly diverse and complex threat environment. While North Korea remains the dominant actor, responsible for 54% of documented activities, including missile tests, cyberattacks, and territorial provocations, China’s involvement has escalated, particularly in economic coercion, cyber intrusions, and narrative operations. Threat vectors include military pressure, IP theft, sabotage, and AI-enhanced disinformation. In response, Seoul has adopted a five-pronged approach encompassing military readiness, diplomacy, legislation, capability development, and public awareness. However, the report raises concerns about gaps in legal frameworks and underdeveloped responses to non-North Korean actors. The election of President Lee Jae-myung may signal a shift in posture, with an emphasis on domestic AI development and improved EU cooperation. ASPI underscores the value of multilateral coordination, especially as hybrid threats become regionally and technologically more diffuse.
Source:
Australian Strategic Policy Institute (ASPI), Fitriani, D., Shih, S. & Wai, A., 2025. Mapping a decade’s worth of hybrid threats targeting South Korea. [online]. Available at: https://www.aspistrategist.org.au/mapping-a-decades-worth-of-hybrid-threats-targeting-south-korea/
GENERAL REPORTS
Climate Misinformation Fuels Effort to Undermine EPA Science
A news article by E&E News examines the Trump administration’s proposal to revoke the EPA’s endangerment finding, which underpins federal climate regulations, and finds it grounded in discredited science and misinformation. The proposed rule leans on fringe studies and arguments from Department of Energy personnel known for denying climate consensus. Experts highlight that the proposal misrepresents peer-reviewed climate science, selectively cites data, and ignores the overwhelming evidence linking greenhouse gas emissions to global warming. Claims that climate models are flawed, that CO₂ boosts agriculture, or that warming has net public health benefits are all refuted by decades of robust research. Critics warn that this attempt to undermine foundational climate policy endangers public trust in scientific integrity and risks reversing critical mitigation efforts at a time when climate impacts such as rising temperatures, sea levels, and extreme weather are accelerating and measurable.
Source:
E&E News (by Politico), Harvey, C. & Waldman, S., 2025. EPA attacks climate science. Here are the facts. [online].
Available at: https://www.eenews.net/articles/epa-attacks-climate-science-here-are-the-facts/
Synthetic Media Escalates Post-FBI Memo on Epstein Case
A news report by NewsGuard reveals that AI-generated deepfake videos falsely depicting former President Donald Trump alongside Ghislaine Maxwell and underage girls continue to proliferate, despite official denials of any criminal implication. Using detection tools like IdentifAI, NewsGuard identified two new deepfakes within a week of naming the Trump-Epstein fabrications its “False Claim of the Week,” bringing the total to three videos and seven images. These synthetic media assets gained over 7.2 million cumulative views on social platforms, underscoring the challenge of curbing virality, even after takedown efforts. The surge in such content followed a July 7 FBI–Justice Department memo stating no evidence exists of a client list implicating high-profile individuals in Epstein-related crimes. While Trump was publicly linked to Epstein via social engagements and flight records, he has not been charged. The persistence of these deepfakes illustrates the escalating role of AI in disinformation targeting public figures.
Source:
NewsGuard, Olson, J., 2025. Trump‑Epstein Deepfakes Keep Coming. [online].
Available at: https://www.newsguardrealitycheck.com/p/trump-epstein-deepfakes-keep-coming
C2PA, GANs, and the Expanding Scope of Digital Forensics
In an expansive academic article published by PNAS Nexus, Hany Farid of UC Berkeley dissects the evolving landscape of manipulated media, warning that generative AI has propelled the risks of digital deception to an unprecedented scale. While early manipulation relied on rudimentary tools, today's deepfakes, powered by GANs and diffusion models, now cross the uncanny valley, producing images and voices indistinguishable from reality. Farid examines the compounded harms: NCII, AI-generated CSAM, financial fraud, disinformation, and the emergent "liar's dividend." He offers a multilayered mitigation strategy that encompasses content provenance standards (such as C2PA), regulatory guardrails, industry liability reform, and K-12 media literacy. Importantly, he critiques the academic culture that accelerates innovation without foreseeing misuse. Farid asserts that without globally coordinated intervention, society risks entering a post-truth era where the real and the synthetic are functionally indistinguishable.
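The content-provenance idea behind standards such as C2PA can be sketched simply: bind a cryptographic hash of the media bytes into a signed manifest, then verify both signature and hash on receipt. Real C2PA embeds JUMBF manifests signed with X.509 certificates; the sketch below uses an HMAC with a shared demo key as a stand-in for a real signature, and all names are hypothetical.

```python
# Minimal sketch of content-provenance checking in the spirit of C2PA:
# hash the asset, sign the hash plus claims, verify both on receipt.
# The HMAC key is a demo stand-in; real systems use asymmetric signatures.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; not part of the C2PA spec

def make_manifest(asset: bytes, claims: dict) -> dict:
    body = {"asset_sha256": hashlib.sha256(asset).hexdigest(),
            "claims": claims}
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body,
            "sig": hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()}

def verify(asset: bytes, manifest: dict) -> bool:
    payload = json.dumps(manifest["body"], sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        manifest["sig"],
        hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    hash_ok = (manifest["body"]["asset_sha256"]
               == hashlib.sha256(asset).hexdigest())
    return sig_ok and hash_ok  # any byte edit or manifest tamper fails

video = b"raw video bytes"
m = make_manifest(video, {"tool": "camera-firmware", "date": "2025-07-28"})
print(verify(video, m))         # True
print(verify(video + b"x", m))  # False: asset altered after signing
```

Note the limitation Farid's mitigation discussion implies: provenance proves an asset is unmodified since signing, not that the scene it depicts is true, and unsigned media simply falls outside the scheme.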
Source:
PNAS Nexus (Oxford Academic), Farid, H., 2025. Mitigating the harms of manipulated media: Confronting deepfakes and digital deception. [online]. Available at: https://academic.oup.com/pnasnexus/article/4/7/pgaf194/8209913
How Hidden Light Patterns Could Expose Fake Videos
In a journal article published by TechXplore, Cornell researchers unveil a breakthrough in video authentication: a watermarking system that encodes secret data directly into imperceptible light fluctuations during filming. This "noise-coded illumination" embeds time-stamped signals into programmable light sources, such as screen displays or lighting fixtures, enabling forensic analysts to detect tampering or AI-generated forgeries by analyzing the light-encoded patterns. Unlike previous watermarking efforts that relied on camera compliance or post-processing, this technique requires no special equipment from the videographer. Lead researcher Peter Michael demonstrated that even multiple light sources can carry unique codes, significantly raising the bar for potential adversaries. While not impervious to future circumvention, this method provides a practical, hardware-level defense against the growing threat of deepfakes and misinformation.
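The core detection idea can be sketched in a few lines: a light source flickers with a small pseudorandom pattern keyed to a seed, and an analyst who knows the seed correlates recorded frame brightness against that code; genuine footage carries the signal, while synthetic frames do not. The parameters, function names, and modulation model below are illustrative assumptions, not Cornell's actual scheme.

```python
# Illustrative sketch of noise-coded illumination: embed a pseudorandom
# +/- flicker in the lighting, then detect it by correlating per-frame
# brightness with the known code. All parameters are hypothetical.
import random

def light_code(seed, n_frames, amplitude=0.02):
    rng = random.Random(seed)  # seed acts as the shared secret
    return [amplitude * rng.choice([-1, 1]) for _ in range(n_frames)]

def record(scene_brightness, code):
    # camera sees the scene modulated by the coded light (noise omitted)
    return [b * (1 + c) for b, c in zip(scene_brightness, code)]

def correlate(frames, code):
    mean = sum(frames) / len(frames)
    return sum((f - mean) * c for f, c in zip(frames, code)) / len(frames)

code = light_code(seed=42, n_frames=200)
scene = [0.5] * 200                 # a static scene, for simplicity
genuine = record(scene, code)
forged = [0.5] * 200                # AI-generated frames lack the code

print(correlate(genuine, code) > 1e-4)  # True: embedded signal detected
print(correlate(forged, code) > 1e-4)   # False: no correlation with code
```

Because the code lives in the lighting rather than the camera, any videographer's footage of the lit scene carries it, which is what makes the approach device-agnostic; multiple sources can carry distinct seeds, as the article describes.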
Source:
TechXplore / Cornell University, Waldron, P., 2025. Hiding secret codes in light can protect against fake videos. [online]. Available at: https://techxplore.com/news/2025-07-secret-codes-fake-videos.html
India Blocks 1,400 URLs to Thwart Pakistan’s Misinformation
In an official press release by the Press Information Bureau (PIB), the Indian government detailed its coordinated effort to counteract a surge of Pakistani disinformation during Operation Sindoor. Through a centralized, 24x7 Control Room comprising representatives from the armed forces and media units, India actively debunked fake content targeting its military, disseminated accurate information, and blocked over 1,400 URLs under Section 69A of the IT Act. The PIB Fact Check Unit (FCU) verified and flagged misleading content, much of it generated by Pakistan-based social media accounts, and issued advisories to prevent real-time coverage of military movements. The government’s rapid and institutionalized response highlights a growing emphasis on digital counter-propaganda frameworks to protect national security narratives.
Source:
Press Information Bureau (PIB), Government of India, 2025. Government Debunks Pakistani Propaganda Against India and Armed Forces. [online]. Available at: https://www.pib.gov.in/PressReleasePage.aspx?PRID=2150213
From Fabrication to False Context: CAIR’s Disinfo Response Framework
In a recent briefing by CAIR Los Angeles, the organization emphasized the urgent need for media literacy as a defense against the proliferation of misinformation and disinformation in today’s hyperconnected world. With 86% of U.S. adults consuming news through digital platforms, CAIR warns that fabricated, manipulated, and misleading content, especially involving geopolitics and Islamophobia, can shape public perception and policy. The report distinguishes between misinformation and disinformation, outlines their tactics, and presents recent examples, including disinformation surrounding U.S. and Israeli military actions in Iran. CAIR also highlights AI-generated content as a rising threat and provides practical tools for detecting false narratives. By advocating for critical media consumption and highlighting its own work monitoring biased coverage, CAIR positions media literacy as essential to resisting manufactured consent for state violence and combating anti-Muslim narratives.
Source:
CAIR Los Angeles, CAIR Staff, 2025. Media Literacy 101: Countering Disinformation in the Modern World. [online]. Available at: https://ca.cair.com/updates/media-literacy-101-countering-disinformation-in-the-modern-world/
TAKEAWAY
The fusion of generative AI with influence operations points toward an increasingly volatile information environment. As AI tools become cheaper, more realistic, and easier to deploy, both state and non-state actors will escalate their use of synthetic media to shape strategic narratives. Regulatory frameworks, such as the EU’s scrutiny of X and the expansion of provenance standards like C2PA, will likely tighten, but enforcement will remain inconsistent. Without interoperable global mechanisms, this fragmented response will perpetuate asymmetric vulnerabilities, especially in conflict and high-risk zones.